Rankings Have No Deep Impact

It is increasingly difficult to take the Times Higher Education (THE) Impact Rankings seriously, but the tone-deafness, doublethink, obfuscation and self-delusion become ever more extraordinary.  The only blot on the comedic value of the Rankings is that they continue to highlight Russian and Afghan universities – one group in a country in thrall to a leader whose war has killed tens of thousands, the other in a country where women cannot enrol in higher education.  The SDGs were surely never intended to provide publicity and credibility for countries deliberately applying policies decried by the United Nations.

Same Old, Same Old

We are told that “The Impact Rankings are inherently dynamic…we expect and welcome regular change in the ranked order of institutions (and we discourage year-on-year comparisons)…”.  Unfortunately, the THE corporate communications department didn’t read the memo, because it announced that “Western Sydney University claims the Impact title for the second year running with a near-perfect score” – which sounds rather like a year-on-year comparison.  Further diminishing the sense of dynamism is the fact that eight of the top twelve are the same as last year.

Five of the 2023 top twelve have been in the top twelve since 2020.  The figure would probably be higher, but King’s College London, the University of Leeds and the University of Sydney, all in the top twelve in 2020, have dropped out of the table completely.  It may be that some of the very best universities with strong SDG credentials are ignoring the Impact Rankings because they recognize the inherent weaknesses.

It should not be surprising that universities which choose to be part of this manipulable process are able to enhance their performance.  Universities are full of administrators and academics who are good at passing exams, so shame on Newcastle University and Hokkaido University for falling from eighth and tenth last year to 24th and 22nd in 2023.  Perhaps a new ranking should be based on calling out institutions that cannot maintain or improve their position from one year to the next.

It is slightly bemusing that King Abdulaziz University was a non-runner in 2023 after finishing in fourth place the year before.  Is it possible that it could not find enough researchers willing to sign over a sufficient number of citations, or was the failure to come top simply too much to bear?  A related anomaly is that the university’s International Rankings page features results from ARWU, QS and U.S. News & World Report but nothing from THE.

Living In the Past

As previously noted, the data in the rankings is based on 2021, the era before ChatGPT, the Russian invasion of Ukraine and the implosion of Boris Johnson’s premiership.  Unfortunately, this means that any student relying on the rankings to make judgements about institutions is going to be sadly misled.  Not that this matters to the way THE and enablers like Studyportals use rankings to monetize student eyeballs.

The most egregious example of the Impact Rankings’ failure to keep up to date is the increase in the number of Afghan universities listed.  Going from two to three entrants is bad enough after a year in which those institutions have followed their government’s edict barring women from university.  Two of the three have scores under SDG 5, which is specifically about Gender Equality and the aim to “Achieve gender equality and empower all women and girls”; to add insult to grievous injury, their scores in that category are better than those of hundreds of other institutions.

It seems extraordinary that nobody at THE was paying sufficient attention to grasp the world’s condemnation of the exclusion of women from education in Afghanistan.  As noted in previous blogs, it might be reasonable to think that the lack of women in the board ranks of THE and its owners contributes to this indifference.  It is, however, very difficult to think of a good excuse for the Advisory Board, which one might hope has some members with a broader perspective on justice, equity and decency.

From Glasnost to Skrytnost

It was all the rage to celebrate glasnost and perestroika in the 1980s, but openness and restructuring in Russia have long given way to autocratic rule and whim.  Maybe that’s why THE treats some Russian university scores in the spirit of what the Washington Post termed “skrytnost” – derived from the Russian verb skryt, meaning “to conceal”.  It is unacceptable for a ranking that trumpets its supposed transparency to offer no explanation for blanking Russian university scores for SDG 16, Peace, Justice and Strong Institutions.

It must be bad enough for the compilers that Russian universities continue to supply the single largest number of entrants to the Impact Rankings, but totally infuriating that many choose to be scored on their support for the very virtues the country currently seems to lack.  Unfortunately, THE seems to accept whatever is submitted, add it to the total, then blank it out as if it were some secret.  There is no explanation in the methodology, which only reminds everyone that the scoring itself is a matter of, um, autocratic rule and whim.

The continued presence of Russian universities in the league tables, and the way they are publicized as study destinations by THE Student, is another reminder that the entire premise of the tables is to commercialize data and sell consultancy rather than enrich the sector.  While the Ukrainians are on a counter-offensive to remove the aggressors from their lands, the Impact Rankings celebrate universities whose Rectors publicly endorsed Putin’s war.  If that endorsement was premised on a quick Russian victory, it is time to reconsider.

Reputation Bust?

For all the noise from those going up in the Impact Rankings, an analysis shows that only three of the top twelve institutions (Manchester, Arizona State and Alberta) feature in THE’s own World Reputation Rankings.  This might suggest that academics see the Impact Rankings as a refuge for those who feel the need to please their governing bodies but not as a genuine marker of global quality.  It’s a bit like football fans getting excited when their team wins the Europa Conference League while those supporting serial contenders for the Champions League are not so easily impressed.1

Nobody expects THE to give up on its money-go-round of league tables any time soon, but it is remarkable that after five years most universities have declined to spend the time, effort or money to engage with the Impact Rankings.  One might argue this is because they recognize the dangers of being involved in a competition that is easily rigged and where the referees might just be willing to tip the scales a different way to create a headline.  The evidence suggests that staying away does no damage at all to the credibility of the absentees.

Notes

1. For those who do not follow European football, the Europa Conference League is the third tier of European competition after the Champions League and the Europa League.  With apologies to West Ham United fans, I would say it has much in common with any other conference – you go not knowing anything about the people you’ll meet, you end up in many dreary rooms discussing irrelevant things and you return to a pile of work.  Football fans will know that in that sentence you can replace “people” with “teams”, “rooms” with “stadiums” and “pile of work” with “relegation trouble”.  If you’re lucky you get a certificate of attendance (known as the Europa Conference League trophy).

Image by WikiImages from Pixabay

Keep Your Virtue*…Ignore the Reputation Rankings

If a government took an “experienced, published” scholar and said they would have to spend the next 21 years working only on a vanity project, the higher education community would be outraged at the waste of time, intellect and potential.  Yet that is effectively what the THE Reputation Rankings 2021 have done by having around 11,000 such scholars submit views on which other universities are good at research and teaching.  And it’s only the beginning of the problems on another dreary day in the self-interested world of the rankers.

We are told that academics are hard-pressed and facing mental burnout, but that doesn’t stop THE taking their valuable time to seek opinions about other institutions.  If each of the 10,963 respondents spent half a working day on their submission, that would be roughly 5,481 days.  Dividing that by 260 working days a year (a five-day working week times 52, with no holidays) suggests THE may have taken more than two decades of academic time away from scholarship that could save humanity.
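For anyone who wants to check the back-of-envelope arithmetic, here is a minimal sketch in Python; the half-day-per-submission figure is my assumption, not anything THE has published.

```python
# Back-of-envelope estimate of the academic time consumed by the survey.
# The half-day-per-submission figure is an assumption, not THE data.
respondents = 10_963            # scholars who responded to THE's survey
days_per_submission = 0.5       # assumed: half a working day each
working_days_per_year = 260     # 5-day week x 52 weeks, no holidays

total_days = respondents * days_per_submission
years = total_days / working_days_per_year
print(f"{total_days:,.1f} working days, or about {years:.1f} years")
# -> 5,481.5 working days, or about 21.1 years
```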

Twisted by Knaves to Make a Trap for Fools

Despite, or perhaps because of, all that effort, it comes as a yawn-inducing revelation that the top five in 2021 are exactly the same universities as the top five in both 2020 and 2019.  The procession of Cambridge, Harvard, MIT, Oxford and Stanford – placed alphabetically here because the whole notion of ranking them seems ludicrous – continues.  In a world where the purpose of higher education, its value and its relevance are all under question, this parade of hierarchy seems irrelevant.

I wonder how Harvard’s reputation really looks to people who agree with Scott Galloway that higher education is becoming “the enforcer of a caste system” where “elite institutions no longer see themselves as public servants”.  Or Michael Crow, President of Arizona State University, when he says that HE is increasingly a system “where status is derivative of exclusion”.  Or to those who have listened to Malcolm Gladwell’s forensic dissection of the US News rankings, where he notes that they are “a self-fulfilling prophecy”.

There should also be an enquiry into the fact that the California Institute of Technology (Caltech) is placed only 14th in the Reputation Rankings.  On THE Student the compilers tell thousands of candidates that Caltech is the best university in the US; QS ranks it sixth in the world, it is ninth in US News and second in THE’s own World Rankings.  There are several other examples of such inconsistencies, which confirm that the whole exercise isn’t really about understanding or reflecting excellence.

One might guess that the opportunity to reach out and flatter 10,963 potential readers and event attendees by asking for their opinion is a primary motivator of the approach taken.  But for THE to then claim the rankings are “based on the opinions of those who truly know” seems typically hyperbolic and ill-founded.  Donald A. Norman is quoted as saying that “academics get paid for being clever, not for being right” – an alternative view worth considering.

Fill the Unforgiving Minute  

The 21-year estimate does, of course, presume that the academics involved took seriously the weighty task of considering the reputational implications for thousands of their colleagues at thousands of universities.  Half a day hardly seems long enough to do the job properly, but some cynics might suggest that it was more likely half an hour over a cup of coffee.  Plenty of time to see scores settled, biases reinforced and best friends rewarded.

Even half an hour per submission would amount to about two-and-a-half years of time stolen from saving the world, and it raises equally good questions about the value of the exercise.  Each academic submitted around 14 names on average, which, in 30 minutes, means they would take about two minutes to weigh and consider each nomination.  That is less time and attention than most people spend selecting their top ten favourite party records for the Christmas playlist.
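The same sketch can be rerun under the more cynical assumption of half an hour per submission; again the per-submission time is my guess, while the 14-names average comes from the figures quoted above.

```python
# The cynic's version: half an hour per submission (assumed), with the
# average of ~14 nominations per respondent taken from the post above.
respondents = 10_963
hours_per_submission = 0.5      # assumed
names_per_respondent = 14

total_hours = respondents * hours_per_submission
years = total_hours / 8 / 260               # 8-hour day, 260-day year
minutes_per_name = hours_per_submission * 60 / names_per_respondent
print(f"about {years:.1f} years in total, ~{minutes_per_name:.1f} minutes per nomination")
# -> about 2.6 years in total, ~2.1 minutes per nomination
```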

Make Allowance for their Doubting Too

The reputation rankings methodology** specifically gives “more weight to research”.  This is not because research is intrinsically more important but because “our expert advisers have suggested that there is greater confidence in respondents’ ability to make accurate judgements about research quality”.  It is interesting to read that THE’s advisers think academics can’t really be trusted to review the teaching quality of their peers.

Pity the poor students who believe the Reputation Rankings have much to do with the teaching they might receive.  The methodology weights the score for research reputation 2:1 over the score for teaching reputation.  This gives some idea of the importance that THE attributes to teaching in establishing an institution’s standing, and of the extent to which academics are contributing to perceptions about their priorities.
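To make the 2:1 weighting concrete, here is a minimal sketch assuming the two scores combine as a simple linear blend; THE does not publish the exact formula, so the function below is illustrative rather than authoritative.

```python
# Illustrative 2:1 blend of research and teaching reputation scores.
# The linear formula is an assumption; THE's exact method is not published.
def overall_reputation(research: float, teaching: float) -> float:
    return (2 * research + teaching) / 3

# A university scoring 90 for research but only 60 for teaching still
# lands at 80 overall - the teaching weakness is largely diluted.
print(overall_reputation(90, 60))  # 80.0
```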

One Heap of All Your Winnings

It also seems that the eventual ranking is driven as much by volume as by quality.  Respondents are asked simply to name, rather than rank, up to 15 institutions that they believe are best at research and teaching in their specific subject discipline.  The number one institution “is the one selected most often” as being one of the best in its field.
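In other words, the scoring reduces to a frequency count across ballots, something like the hypothetical sketch below (the ballots are invented for illustration).

```python
# The counting logic as described: no ranking, just how often each
# institution is named.  The ballots here are hypothetical.
from collections import Counter

ballots = [
    ["Harvard", "MIT", "Oxford"],
    ["Harvard", "Stanford"],
    ["Harvard", "Cambridge", "MIT"],
]
mentions = Counter(name for ballot in ballots for name in ballot)
print(mentions.most_common(3))
# -> [('Harvard', 3), ('MIT', 2), ('Oxford', 1)]
```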

It doesn’t seem to matter if the twenty most eminent researchers in each field believe your university is the best.  You will not come top if enough other “experienced, published” researchers pick the place where they are angling for a job, enjoy visiting, or where the overlord of research is a personal friend.  There is no indication in the methodology of any weighting to account for the respondents’ ability to make a well-informed judgement.

Or Being Lied About Don’t Deal In Lies

However, there are adjustments to ensure that the ranking represents the global distribution of scholars.  Participants are selected randomly from Elsevier’s Scopus database, which can presumably create a sample in line with the global distribution of academics.  Because the responses still do not reflect UNESCO’s distribution of scholars, they have to be weighted.

Engineering over-performed with 15.8% of respondents and had to be levelled down to a weighted 12.7%, while Business and Economics and Arts and Humanities had to be levelled up from 8.2% and 7.7% of respondents to 13.1% and 12.5% respectively.  Maybe it’s just that engineers like questionnaires and giving their opinion, but it would be nice to think that some scholars are too dismissive of the process to join in.
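If the adjustment works the way reweighting usually does, each discipline’s responses are scaled by the ratio of its target share to its observed share; the sketch below uses the percentages quoted above, with the target shares assumed to follow the UNESCO figures.

```python
# Plausible reweighting: scale each discipline's votes by
# target share / observed share.  Percentages are from the post; the
# assumption that targets follow UNESCO's distribution is mine.
observed = {"Engineering": 15.8, "Business & Economics": 8.2, "Arts & Humanities": 7.7}
target = {"Engineering": 12.7, "Business & Economics": 13.1, "Arts & Humanities": 12.5}

for discipline, share in observed.items():
    factor = target[discipline] / share
    print(f"{discipline}: each response counts as {factor:.2f}")
# Engineering is levelled down (~0.80); the other two are levelled up (~1.6x).
```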

If the argument is that the Reputation Rankings are only a game and for entertainment, then academics might consider how wise it is to waste their time on something as meaningful as voting on The X Factor or Dancing with the Stars.  But if they are intended to inform politicians, funders, philanthropists and business about quality, they carry the danger of reinforcing bias while devaluing other institutions and their students.  Next time the email from THE pops up in the inbox, it might be a good moment to press delete and get on with doing something important.

NOTES

*Part of the title and all the sub-headings are fragments from Rudyard Kipling’s wonderful and inspirational poem ‘If’.

** The page describing the methodology is, in my view, neither clear nor well written.  I would be happy to clarify or correct any areas where my interpretation or understanding is incorrect on the advice of an authoritative source.

Image by Gerd Altmann from Pixabay