Research Metrics

Responsible Use of Metrics

While research metrics can provide useful quantitative measures of publication and citation activity and patterns, they do not give a comprehensive picture of research activity and tend to reduce the complex, nuanced impacts of research outputs to a single number.

The use of metrics has become an intrinsic part of evaluating research, but the origins and original purpose of some of these measurement tools may surprise you. For example, one of the most widely used metrics - the Journal Impact Factor - was originally designed to help librarians select journals for their collections. Over time it has been appropriated for entirely different purposes and is now used to inform judgements about a journal's scholarly prestige, the quality of the research it publishes, and even the hiring prospects of its authors.

This alarming example, combined with the proliferation of new metrics in recent years, illustrates why there is concern that such numbers are increasingly being used as a proxy for expert judgement when assessing the quality of research.
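To see why a journal-level figure is a poor stand-in for article-level quality, it helps to recall how the JIF is calculated. Roughly speaking, a journal's impact factor for a given year is the average number of citations received that year by the items it published in the previous two years:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Because citation distributions are highly skewed, a small number of heavily cited articles can dominate this average, so the JIF says little about the citation performance - let alone the quality - of any individual article or author.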

Resources

The Metric Tide articulates five main principles for responsible metrics assessment:

Robustness: Basing metrics on the best possible data in terms of accuracy and scope.

Humility: Recognising that quantitative evaluation should support - but not supplant - qualitative, expert assessment.

Transparency: Keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results.

Diversity: Accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system.

Reflexivity: Recognising and anticipating the systemic and potential effects of indicators, and updating them in response.

Source: Wilsdon, J., et al. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management. https://doi.org/10.13140/RG.2.1.4929.1363

The Declaration on Research Assessment (DORA) identifies the need to improve the ways in which scientific research is evaluated.

The outputs from scientific research are many and varied, including research articles reporting new knowledge, data, reagents, software, intellectual property, and highly trained young scientists.

Funding agencies, institutions that employ scientists, and scientists themselves, all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.

DORA's general recommendation is that researchers do not use journal-based metrics, such as the Journal Impact Factor (JIF), as surrogate measures of the quality of individual research articles, to assess an individual scientist's contributions, or to influence hiring, promotion, or funding decisions.

The Leiden Manifesto outlines ten principles to guide quantitative research evaluation.


  1. Quantitative evaluation should support qualitative, expert assessment.
  2. Measure performance against the research missions of the institution, group, or researcher.
  3. Protect excellence in locally relevant research.
  4. Keep data collection and analytical processes open, transparent, and simple.
  5. Allow those evaluated to verify data and analysis.
  6. Account for variation by field in publication and citation practices.
  7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
  8. Avoid misplaced concreteness and false precision.
  9. Recognise the systemic effects of assessment and indicators.
  10. Scrutinise indicators regularly and update them.

Source: Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a

A growing number of research institutions are acknowledging this issue by developing position statements and policy documents around the responsible use of research metrics and, more broadly, the responsible assessment of research. 

See also:

Profiles, not metrics (Clarivate)

This report draws attention to the information that is lost when data about researchers is squeezed into a simplified metric. It looks at four familiar types of analysis that can obscure real research performance when misused and discusses some alternative visualisations that support sound and responsible research management.

HuMetricsHSS (Humane Metrics Initiative)

HuMetricsHSS is an initiative that creates and supports values-enacted frameworks for understanding and evaluating all aspects of the scholarly life well-lived and for promoting the nurturing of these values in scholarly practice.

Responsible Metrics Explained

Watch this video (03:20) by the Office of Scholarly Communication, Cambridge, for an overview of the Responsible Metrics movement.


Source: Office of Scholarly Communication, Cambridge. (2019, April 18). Research in 3 minutes: Responsible metrics [Video]. YouTube. www.youtube.com/watch?v=zbGb08jJzt0

© Western Sydney University, unless otherwise attributed.
This library guide, created by Western Sydney University Library staff, is licensed under a Creative Commons Attribution 4.0 International (CC BY) licence.