How, by whom and for what purpose are university rankings created? Can they be fully trusted? And why can the same university occupy different positions in different rankings? Read more in the TV BRICS article.

Applicants and students, researchers and education specialists, university managers and those involved in developing higher education institutions closely follow university rankings every year.
Why do university rankings exist?
University rankings are a kind of “league table”. On the one hand, they help applicants and students make choices by showing a university’s position among its peers. On the other, they serve as a reference point for universities themselves, giving administrators and managers valuable information about their institution’s standing. Rankings are also of interest to researchers and education experts. Rankings such as Times Higher Education (THE), QS World University Rankings and the Academic Ranking of World Universities have become powerful decision-making tools. They influence the futures of students and even the strategies of entire education systems.
At the same time, international university rankings are a relatively recent phenomenon. The first global ranking appeared in 2003, when Shanghai Jiao Tong University introduced the Academic Ranking of World Universities (ARWU), now widely known as the Shanghai Ranking. This was no coincidence. In the early 2000s, China launched a large-scale programme to bring its universities to a global level, which required an objective comparison tool – the ranking.
Following the Shanghai Ranking, others emerged, each with its own purpose. While ARWU aimed to measure the gap between Chinese and global universities, QS World University Rankings initially focused on helping students worldwide choose a university. Times Higher Education World University Rankings were designed to provide a comprehensive assessment for the academic community.
Over two decades, rankings have evolved beyond purely academic tools. Today, they influence national education policies, shape development priorities for university management, and stimulate competition and improvements in education quality.
“Among the most widely recognised and authoritative rankings are QS World University Rankings, Times Higher Education World University Rankings and Academic Ranking of World Universities. These rankings are considered authoritative due to their global coverage, consistent methodologies and broad range of indicators. They are also widely referenced by governments, institutions and students worldwide,” said Raymond Matlala, an expert in business, education and the Global South and Founder and Chairman of the South African BRICS Youth Association, in an interview with TV BRICS.
However, the picture is not entirely straightforward. Authority does not mean a ranking can be trusted unconditionally. Experts advise paying close attention to evaluation criteria before analysing and comparing rankings.
“Evaluation criteria can be divided, for example, into internal ones – focused on academic performance, student assessment, teaching quality and programme compliance with standards – and external systems, which rely primarily on standardised and more independent assessment mechanisms and often take the form of rankings,” explained Natalia Kaurova, Adviser to the Russian Academy of Natural Sciences, in an interview with TV BRICS.
Main groups of evaluation criteria
One of the key criteria in leading rankings is research performance. Indicators include the number of publications in highly cited journals, a university’s h-index (a measure of both productivity and citation impact) and citations per publication.
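For readers curious about the arithmetic behind this indicator, here is a minimal Python sketch of how an h-index can be computed from per-publication citation counts. The citation figures are invented for illustration; ranking providers compute this over large bibliometric databases.

```python
# Minimal sketch: the h-index is the largest number h such that
# at least h publications have at least h citations each.
# The citation counts below are invented for illustration.

def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-publication citation counts."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (fewer than four papers have four or more citations)
```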
Another important factor is teaching quality, reflected in staff-to-student ratios, employer reputation, the share of graduates with advanced degrees, and student satisfaction.
A third major criterion is internationalisation, measured by the proportion of international students and staff, as well as international research collaboration. Rankings also take into account innovation and links with industry, including research income, patents, start-ups and graduate employability.
Experts note that evaluation criteria are constantly evolving. In recent years, greater attention has been paid to sustainable development and environmental responsibility. The level of digitalisation in education and even gender balance in academia are becoming increasingly important.

Types of university rankings
The focus, characteristics and even target audience of a ranking depend on the criteria and evaluation methodologies it uses. For example, rankings that place significant weight on employer reputation, such as QS World University Rankings, are particularly popular among students, Margarita Isaakova, an expert in academic diplomacy, the export of Russian education, international healthcare cooperation and global medical research and Head of the International Department at Pirogov University, told TV BRICS. Times Higher Education is more focused on the research agenda: in its latest methodology, the research quality pillar accounts for around 30 per cent of the overall score. The Academic Ranking of World Universities (ARWU) does not use surveys at all, relying only on measurable indicators such as publications in “Nature” and “Science” and the number of highly cited researchers. According to experts, this makes for the most rigorous, yet transparent, assessment of a university’s research strength.
In general, university rankings can be divided into the following:
- global rankings, assessing universities worldwide;
- national rankings, comparing institutions within a country;
- subject rankings (e.g., in chemistry or economics), focusing on specific disciplines;
- specialised rankings, evaluating particular sectors such as the arts or hospitality.
Among specialised rankings is the BRICS ESG University Ranking. ESG rankings provide an independent assessment of a university’s performance in three areas: environmental impact, social responsibility and governance. They indicate how environmentally sustainable an institution is, how safe it is for students and staff, and how effectively it is managed.
University evaluation methodologies can broadly be grouped into several key approaches: reputation-based surveys of experts and employers; bibliometric analysis assessing publication activity and citation impact; resource indicators, including infrastructure and funding; and measures of internationalisation and links with industry.
Methodology and criteria directly influence a university’s position in a given ranking. This is why the same institution can occupy different positions across different lists.
“One ranking may be composed of 40 per cent reputation surveys, another of 30 per cent citation metrics, and a third of 50 per cent publication output. As a result, a university with strong research but a weaker international reputation will rank highly in the Shanghai Ranking and lower in QS. In addition, many rankings assign different weights to different subject areas. THE calculates indicators differently for medicine and for the humanities,” Margarita Isaakova noted in an interview with TV BRICS.
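A small worked example helps illustrate the arithmetic behind this. The Python sketch below scores two hypothetical universities under two invented weighting schemes; the names, indicator scores and weights are all assumptions for illustration and do not reproduce the actual QS, THE or ARWU methodologies.

```python
# Illustrative sketch of how different weighting schemes reorder the
# same universities. All scores (0-100 per indicator) and weights are
# invented for demonstration purposes.

universities = {
    "University A": {"reputation": 40, "citations": 90, "publications": 85},
    "University B": {"reputation": 90, "citations": 60, "publications": 55},
}

schemes = {
    "reputation-heavy": {"reputation": 0.40, "citations": 0.30, "publications": 0.30},
    "research-heavy":   {"reputation": 0.10, "citations": 0.40, "publications": 0.50},
}

for scheme_name, weights in schemes.items():
    # Weighted sum of indicator scores for each university
    totals = {
        name: sum(scores[ind] * w for ind, w in weights.items())
        for name, scores in universities.items()
    }
    order = sorted(totals, key=totals.get, reverse=True)
    print(scheme_name, {name: round(totals[name], 1) for name in order})
```

Under the invented reputation-heavy weights, “University B” comes out on top; under the research-heavy weights the order flips, even though neither institution has changed. That is precisely the effect Isaakova describes.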

Pitfalls of university rankings
As noted earlier, all rankings pursue specific objectives and therefore use different evaluation criteria. Despite the rankings’ apparent scientific rigour, many experts consider them highly subjective.
“Unfortunately, there is no single ranking that can be considered the most reliable. They need to be evaluated, interpreted and filtered. […] The methods, questions, criteria, objects, goals, interests, tasks and data themselves are already biased,” said Marcelo Barbosa Duarte, an expert in history and culture and an independent researcher collaborating with leading public universities in Brazil, in an exclusive interview with TV BRICS.
Marcelo Barbosa Duarte believes that university rankings do not reflect even 60 per cent of reality. The margin of error stems from criteria, requirements and data sources that are not always objective. Margarita Isaakova agrees: “Rankings can be trusted, but with an understanding of their limitations. The degree of subjectivity in some of them is very high,” she stressed in an interview with TV BRICS. “In QS and THE, up to 40–45 per cent of the score is based on reputation surveys, which are essentially opinions that may be biased or simply insufficiently informed.”
Moreover, as competition among higher education institutions for ranking positions intensifies, the methods used to gain an advantage are becoming more sophisticated and not always entirely fair, experts say.
“Rankings can be manipulated. Universities sometimes hire highly cited researchers on short-term contracts, create informal citation networks, and selectively submit data. It is not a widespread phenomenon, but it does exist,” Margarita Isaakova acknowledged in an interview with TV BRICS.
As Natalia Kaurova noted in an interview with TV BRICS, the fact that some global rankings are produced by commercial organisations may lead to conflicts of interest.
“There is a possibility of technological ‘optimisation’, when universities deliberately improve indicators (for example, by artificially increasing the share of international students or the number of co-authored publications) without actually improving the quality of education,” she said.
Critics also point to the growing inequality between universities in different countries and the neglect of national educational traditions and specificities. For these and other reasons, institutions from developing countries often find it difficult to achieve high positions in international rankings. Historically, such rankings have tended to favour well-funded, English-speaking universities. However, the emergence of regional rankings, experts note, allows for comparisons within a more homogeneous context.

The impact of rankings on university choice
When choosing a university, experts recommend not relying solely on rankings. Raymond Matlala advises considering the following factors:
- the strengths and accreditation of specific programmes;
- alignment with career goals;
- opportunities for practical experience and industry exposure;
- location and cost of study;
- partnerships with other institutions and opportunities for international mobility.
Natalia Kaurova suggested in an interview with TV BRICS that applicants should, paradoxically, also listen to their intuition: in matters of personal development, it can complement official information. “It is not specific methods and programmes that teach, but people,” she noted. Moreover, no ranking directly measures teaching quality or shows how effectively a lecturer explains material.
“Rankings are a useful guide. But the final choice should always be based on real feedback from graduates, programme accreditation and, most importantly, your own priorities. Even at a top-10 university, you may not find what is right for you,” Margarita Isaakova concluded.
The article was prepared by Svetlana Khristoforova. African Times published this article in partnership with International Media Network TV BRICS.


