
Living with metrics

Greg Shellnutt* gives a personal view of evaluation metrics as they appear from East Asia.  While we worry about the nuts and bolts, are we not missing the bigger picture?

Outside the graduate student office I occupied many years ago at The University of Hong Kong was a full-length poster listing the rankings of over 200 Earth science journals according to impact factor (IF).  I had no idea what an impact factor was when I first saw that poster.  Six years later I was applying for an entry-level professorship and my h-index was requested as part of my application.  I had no idea what an h-index was.


When I was hired by my employer, the performance expectation was clearly articulated, “… to be promoted you must publish X papers in the top X% of SCI journals in three years”.  Additionally, I was informed about ‘publication bonuses’, which are tiered according to the IF-based journal ranking.  In many cases one’s income can double by reaching the stated targets.

Quantitative performance metrics may be the bane of a researcher’s existence, but they are bureaucrat-friendly.  The ability to quantify a researcher’s ‘performance’ is one of the many criteria that factor into global university rankings; metrics also help to streamline hiring, promotion and grant applications.  There is complete transparency with quantitative metrics, and the best part is that they apply equally to everyone.  Quantitative metrics, as opposed to qualitative judgements, cannot be easily manipulated by the ‘malevolent forces’ that lurk in the corridors of academia, public institutions and granting agencies.


I do not advocate the use of IF as a tool to evaluate anything of a scientific nature.  To me, IF is simply another number that occasionally appears in my life, like my age or credit card PIN.  The quality of most research is best measured by time, as many theories in the Earth sciences took decades to become widely accepted, and such will likely be the case in the future.  Having a committee evaluate the quality of one’s work is just as problematic as relying on quantitative metrics, because there is a good chance that a committee will stick to the orthodoxy of the day and inadvertently obstruct vibrant young researchers who have different ideas.

Imagine if Alfred Wegener, freshly graduated and advocating continental drift, had applied for a professorship in the 1930s or 40s and had his work evaluated by a ‘quality-control’ committee.  Would his application have been evaluated fairly if his interpretations conflicted with the work of one or more members of the evaluation committee?  A similar scenario plays out in the first 20 minutes of the 1978 Superman movie… (spoiler alert!).  It did not work out well for the Kryptonians.

Academic institutions are bureaucracies and, as with all bureaucracies, they must evaluate performance.  The point I wish to emphasize is that, for better or worse, the performance of every employee in any field will be measured by some type of metric.  Whether that metric is IF, h-index or SCImago Journal Rank, does it matter?  As I understand it, quantitative metric-based evaluations attempt to level the playing field for everyone, regardless of their qualitative failings.

* Greg Shellnutt is Associate Professor in the Department of Earth Sciences, National Taiwan Normal University, Taipei, Taiwan.