|Department of Public Information • News and Media Division • New York|
Press Conference on Global Research Benchmarking System for University Performance
Announcing the launch of the Global Research Benchmarking System (GRBS) at a Headquarters press conference today, Jean-Marc Coicaud, Director of the United Nations University, said that university performance comprised complex and informative components and that, as such, more than numbers defined a university's ranking.
Earlier in the day, at the event “Ranking is not Enough: Measuring University Performance”, the newly formed Global Alliance for Measuring University Performance, which would develop the benchmarking system, was introduced. The Alliance included as collaborating partners not only universities from around the world, but also the United States-based Center for Measuring University Performance, the United Nations University’s International Institute for Software Technology (UNU-IIST) and Elsevier, one of the world’s largest science publishers.
Peter Haddawy, Director of UNU-IIST, in an overview of the project, described the “broad vision of the Alliance”, which, by providing objective data, would help universities improve their performance in all areas, including education, community engagement and research, as well as the societal impact of their activities. The Alliance’s first project would be a benchmarking initiative on evaluating university research performance. That work was “so important”, he said, and should be done in a rigorous manner with the full participation of the academic community.
The benchmarking system overall stood in stark contrast to existing university ranking systems, he said. The “richness of the contributions of the universities can’t be represented by a simple number in a league table.” Their contributions were much more complex, and the new system would be designed to measure and represent that complexity.
Developing the new system would involve approximately 140 disciplines, as well as inter-disciplinary fields, he said. Higher education, especially at the United Nations, helped address some of the pressing problems in the world, such as climate change and poverty reduction, and the project would benchmark research activities in those inter-disciplinary areas.
Many of the ranking systems, besides trying to produce a single number, were “in a business of selling something”, most commonly a publication with the rankings, said Elizabeth Capaldi, University Provost and Executive Vice President, Arizona State University, and Co-Founder of The Center for Measuring University Performance. The Alliance, however, was not interested in selling anything. As an academic and intellectual analysis project, all of the data would be publicly available and downloadable, and all the measures chosen would be drawn from reliable and valid sources. That would offer governing bodies and institutions information to help them manage their operations more efficiently. “It’s a different purpose than when you are trying to go 1, 2, 3, 4, 5 and sell that,” she said of the traditional ranking systems.
One of the flaws in international ranking systems, said Craig Abbey, Assistant Vice President for Academic Planning and Budget at the University at Buffalo (UB) and Director of Research for the Center for Measuring University Performance, was that they failed to take into account the “unique context of most of these institutions”, often relying on whatever data was available, which was at times based on inaccurate information. “If you use bad data, you get some bizarre results in ranking,” he said. The new benchmarking project would use only much higher-quality, verifiable data.
Asked what the rankings would look like if not “1, 2, 3…”, Dr. Capaldi said that in the United States her Center used nine variables and published the actual data on those nine variables, which were then clustered. That avoided a one-dimensional profile. Regarding the Global Research Benchmarking System, at this early stage it was unclear how many measures would be utilized, but “it would be many”.
Turning to a question about how the benchmarking system would help universities address issues such as climate change, Professor Haddawy pointed out that “you have to have an idea of where your strengths are” in order to make better allocation decisions, make a case for funding and show where the successes were. At the current time, there were no systems that could measure research impact in areas like climate change and renewable energy, among others. The new system would provide universities with data demonstrating that kind of research impact through citations of the work being done. The system would also enable universities to seek partnerships.
The quality of citations was also addressed by Dr. Capaldi, when asked whether those were based on web hits or on how many times a research project was cited. “You could be cited [many times] because you did something really stupid,” she said. In this initiative, the quality of the work and the manner in which the citation was made would be taken into account, which was another aspect of the project that had not been done before.
A question was asked regarding the Elsevier database, which would serve as the backbone of the benchmarking system. Niels Weertman, Vice-President of Product Management at Elsevier, said that the partnership between his organization and the Alliance would “help the sector forward” and that the arrangement of providing publication counts and citations carried no fee.
Concerns about the manipulation of rankings were addressed, with Professor Haddawy observing that many universities self-reported their data to ranking systems, which allowed for manipulation of information. Within the new benchmarking system, quality control would be built in and tracked, and thus verifiable.
The press conference concluded with the signing of a memorandum of understanding between Dr. Capaldi, representing the Center for Measuring University Performance, and Professor Haddawy, representing the United Nations University’s International Institute for Software Technology.
* *** *
For information media • not an official record