Universities are increasingly turning their backs on global higher education rankings, and it’s sparking a heated debate. But why are these institutions, tasked with advancing knowledge, suddenly questioning the very systems that measure their success? In September, Sorbonne University, a prestigious French research institution, made headlines by withdrawing from the Times Higher Education (THE) World University Rankings, citing deep concerns about their methodology and transparency. This move has reignited a long-standing controversy: Are these rankings truly a fair reflection of academic excellence, or do they oversimplify the complex world of higher education?
Global higher education rankings, compiled by organizations like QS, THE, and ShanghaiRanking Consultancy, wield immense influence. They shape perceptions among prospective students, guide employer decisions, and even drive policy changes. Yet, for years, these rankings have faced criticism from institutions worldwide, including several in India. The core issue? A growing consensus that these rankings lack transparency, fail to capture the full breadth of academic activities, and prioritize metrics that don’t reflect the true diversity of higher education.
Here is where the controversy sharpens: while rankings claim to provide a comprehensive snapshot of university performance, their methodologies are often seen as biased. Sorbonne University, for instance, pointed out that the focus on English-language journals disadvantages disciplines such as the humanities and social sciences, which rely on diverse publication formats and languages. A less-noticed problem is the reliance on reputational surveys, which raises both ethical and methodological concerns, since the data collection process and final results are rarely disclosed in full.
Let’s break down how these rankings work. There are three major players: QS, THE, and the Shanghai Ranking. QS uses 10 indicators, with academic reputation (30%) and citations per faculty (20%) carrying the most weight. THE employs 17 indicators, with research environment and quality accounting for nearly 59% of the score. The Shanghai Ranking focuses on six indicators, heavily weighted toward Nobel laureates, Fields medalists, and highly cited researchers. But is this really a fair way to measure academic excellence? Or are we reducing centuries of intellectual pursuit to a mere numbers game?
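To make the "numbers game" concrete, here is a minimal sketch of how a composite ranking score reduces to a weighted sum of indicator scores. The two QS weights shown are the ones cited above (academic reputation 30%, citations per faculty 20%); the `other_indicators` bucket is a hypothetical aggregate standing in for the remaining eight QS indicators, not an actual published weight.

```python
# Illustrative sketch: a ranking's composite score as a weighted sum of
# normalized (0-100) indicator scores. Only the two QS weights named in
# the article are real; "other_indicators" is a hypothetical lump sum.

QS_WEIGHTS = {
    "academic_reputation": 0.30,   # cited in the article
    "citations_per_faculty": 0.20, # cited in the article
    "other_indicators": 0.50,      # hypothetical aggregate of the rest
}

def composite_score(indicator_scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Return the weighted sum of 0-100 indicator scores."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    # Missing indicators default to 0, dragging the score down.
    return sum(w * indicator_scores.get(name, 0.0)
               for name, w in weights.items())

example = {
    "academic_reputation": 90.0,
    "citations_per_faculty": 70.0,
    "other_indicators": 80.0,
}
print(round(composite_score(example, QS_WEIGHTS), 1))  # prints 81.0
```

The sketch makes the critics' point visible: a single scalar hides which indicator drove the result, and small changes to the (often undisclosed) weights can reorder institutions without any change in their underlying performance.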
Sorbonne University’s concerns are particularly telling. Ranked highly across all three systems, the institution still withdrew from THE rankings, criticizing their overemphasis on reputational surveys and lack of transparency. What does this say about the credibility of these rankings? Other institutions, like the older IITs in India, have also boycotted THE rankings, citing similar transparency issues. Prof V Ramgopal Rao, former Director of IIT Delhi, highlighted the ‘black box’ nature of reputation and perception scores, questioning their geographical distribution and fairness.
This raises an uncomfortable question: are universities gaming the system? Prof Rao pointed to institutional self-citations, where faculty members artificially inflate their citation counts. The inclusion of papers with hundreds of authors and the lack of proper accounting for retractions further muddy the waters. Is this a fair representation of academic rigour, or are rankings inadvertently encouraging unethical practices?
Utrecht University took a stand in 2023, refusing to participate in THE rankings, arguing that the focus on scoring and competition undermines collaboration and open science. They also questioned the use of subscription-based databases like Scopus and Web of Science, which Sorbonne University has similarly criticized for being closed and non-participatory. Shouldn’t the pursuit of knowledge be open and accessible, rather than locked behind paywalls?
Education Minister Dharmendra Pradhan has set an ambitious goal: to have 25 Indian institutions in the top 100 of the QS rankings by 2026. But is this push for rankings-driven success the right approach? Are we prioritizing global visibility over the quality and diversity of education?
THE has defended its process, emphasizing its use of independent data, including millions of academic votes and research citations. It argues that participation is key to global visibility, especially for countries like India. But the question remains: at what cost?
What do you think? Are global university rankings a necessary tool for benchmarking academic excellence, or do they oversimplify and distort the true value of higher education? Share your thoughts in the comments—let’s spark a conversation that challenges the status quo.