What benchmarks!

Published on Friday, 21 September 2012, 14:24

There is no yardstick for everything!

Nowadays everyone wants to measure everything that might be thought of as a good way forward, because in science measuring and quantifying have driven massive progress. Many see the essence of measurement in quantification, but this is not only an erroneous view, it is also a very harmful one, because it lends support to such oddities as the QS Topuniversities rankings and the Webometrics Ranking of World Universities.

The University of Szeged ranks quite well on these lists, but is it really worth monitoring lists of this kind?

Multi-dimensional objects such as universities cannot be objectively measured by a single number, just as the volume of a bottle cannot tell how tall the bottle is or how large its base area is. Yet this information may become very important even when the main goal is simply to fit as much liquid into the bottle as possible, because it may turn out that a bottle that is too tall simply cannot be handled.
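A minimal worked example, assuming for simplicity a cylindrical bottle, so that volume is just base area times height (the numbers are invented for illustration):

    volume = base area × height
    1000 cm³ = 10 cm² × 100 cm   (tall and narrow)
    1000 cm³ = 100 cm² × 10 cm   (short and wide)

The single number 1000 cm³ is identical for both bottles, yet one of them may be far too tall to handle.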

The situation changes if similar multi-dimensional objects are measured. In that case the measurement provides much more objective information, but even if we know the shape of the bottles -- say, the ratios of their heights and base areas are approximately the same -- it may turn out that several small bottles are more manageable than one large bottle whose volume equals the sum of the small ones.

Since universities are in some sense similar to each other, the benchmarks mentioned in the first paragraph may well be adequate in general, provided you have no specific goal in mind for a university. If you do have one, however, the best practice is to set up your own list by digging a little into the information published about the universities -- at the very least you do not have to be objective -- you can safely be subjective.

The question still arises, however, whether there is any good method at all for measuring an object as complex as a university. An affirmative answer is needed even if objectivity is not required and the results of the measurement are used only for ranking.

A number of mathematical examples show that, with rare exceptions, for every measurement there are objects that cannot be measured, or objects that behave very differently from what is expected. For this reason it is pointless to ask how a university should be measured; instead, following Max Planck's idea about forming concepts in physics, the point is to decide which method of measurement we accept as the benchmark of universities.

Max Planck's idea seems to work in real physics (though it runs into problems with the uncertainty principle), and it offers an easy way out of the question "how to benchmark universities" by turning it into the decision "which benchmark to use for universities". In practice, however, such a decision affects the universities themselves, so the benchmark soon loses its meaning. A good example of this process is the history of car crash tests: initially they produced poor results for a number of cars, yet nowadays every car appears to be perfect, although this is obviously not the reality.

We face similar problems when measuring scientific work. Despite all rational criticism, the non-scientific world wants to measure science so badly that it has become a source of jokes. And what have these "measurements" led to? When the number of publications seemed authoritative, the number of publications multiplied (the average researcher today publishes many times more papers than the best scientists did a hundred years ago). When citation counts came to the fore, the number of articles cited in every publication multiplied. And since citations are now weighted by the so-called impact factor, the references in articles tend to point back to the publishing journal.
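For context, and only as a rough schematic (the precise definition is the one used in the Journal Citation Reports), the widely cited two-year impact factor of a journal J in year Y is computed roughly as

    IF(J, Y) = (citations received in year Y by items J published in years Y-1 and Y-2)
               / (number of citable items J published in years Y-1 and Y-2)

which makes it clear why every reference pointing back to the publishing journal directly raises that journal's own score.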

Further rankings: