What are research metrics?

Research metrics or indicators are quantitative measures designed to help evaluate research outputs. Find out more about the different metrics available and how to use them responsibly.

Bibliometrics or citation metrics use traditional citation counts – the number of times an item has been referenced in other publications – to give an indication of research impact. Alternative metrics (altmetrics) seek to do the same using likes, shares, mentions and similar indicators of attention in the news, on social media, and on other platforms.

Responsible metrics

Metrics alone cannot give a full picture of research impact: qualitative assessment must be used alongside quantitative indicators. In addition, traditional metrics only capture academic citations, so they may miss valuable industry or other non-academic impact.

Citation patterns differ significantly across disciplines, both in the absolute number of citations and the rate at which they accumulate. As a result, raw citation counts cannot be used to compare researchers in different disciplines, research areas, or career stages. If comparisons of this type are required, use normalised or field-weighted indicators, which attempt to correct for differences in citation patterns and publication age.

The source and accuracy of citation data also have a significant impact on quantitative metrics. Most metrics tools use a single source of citation data, so consider whether that source contains enough of your papers to give an accurate picture of your research output, and whether those papers are correctly attributed. See Cleaning up your researcher profile for information on how to make sure your papers are contributing to your profile and metrics.

The University of Bristol is committed to using a broad range of qualitative and quantitative measures to evaluate research impact, and is a signatory to the San Francisco Declaration on Research Assessment. For more information on the University's position on responsible research assessment see Responsible Metrics.

Common research metrics

Research metrics may apply at article, author, journal or institution level, and should ideally be calculated in a transparent, standardised manner.  The University of Bristol is a founding member of the Snowball Metrics group, which seeks to agree metrics that are data source- and system-agnostic (not tied to any particular provider or tool).  See below for a glossary of common metrics, or use the Metrics Toolkit for a more in-depth examination of what different metrics mean and how to apply them.

Article level indicators

Citation count

What is it? A simple count of the number of citations an article has received. It will be affected by disciplinary citation norms.

Where can I see it?  Available in most bibliographic databases (e.g. Scopus, Web of Science), and in the University of Bristol’s preferred metrics reporting tool, SciVal. This data is also shown in Pure records at article and author level

Field-Weighted Citation Impact (FWCI)

What is it? The ratio of citations received by an article to the number expected for similar articles. It attempts to correct for differences in citation norms across disciplines; a worked sketch follows this entry.

Where can I see it? Available from Scopus/SciVal
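
To make the ratio concrete, here is a minimal sketch in Python with invented figures; the expected-citations baseline is assumed purely for illustration and is not the database's actual benchmark, which is derived from comparable publications.

# Illustrative only: a field-weighted value is the ratio of citations an
# article has actually received to the number expected for similar articles.
# Both figures below are invented for the example.

actual_citations = 12        # citations the article has received
expected_citations = 8.0     # assumed baseline for comparable articles

fwci = actual_citations / expected_citations
print(f"FWCI = {fwci:.2f}")  # 1.50 -> cited 50% more than expected

# A value of 1.0 means the article is cited exactly as often as comparable
# articles; values below 1.0 mean it is cited less often than expected.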

Alternative metrics (altmetrics)

What is it? A simple count of the number of likes, mentions, shares, etc. an article or other research output has received. These metrics will be affected by differences in media attention across disciplines, but may give a useful early indicator of attention before traditional citations accumulate.

Where can I see it? Available from Altmetric and PlumX via Scopus

Author level indicators

Publication count

What is it? The number of documents an author has produced. It will be affected by researcher career length and disciplinary publication norms.

Where can I see it? Available in most bibliographic databases (e.g. Scopus, Web of Science), and in the University of Bristol’s preferred metrics reporting tool, SciVal

h-index

What is it? The number (h) of articles by an author that have received at least h citations each. It will be affected by researcher career length and disciplinary publication and citation norms; a short calculation follows this entry.

Where can I see it? Available in most bibliographic databases (e.g. Scopus, Web of Science) and in the University of Bristol’s preferred metrics reporting tool, SciVal; it can also be calculated by standalone software such as Publish or Perish.
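
The definition above translates directly into a short calculation. The Python sketch below uses an invented list of citation counts; databases such as Scopus and Web of Science perform the same arithmetic over the papers they index for an author.

def h_index(citation_counts):
    # Largest h such that h papers have at least h citations each
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: an author with six papers and these (invented) citation counts
print(h_index([25, 8, 5, 3, 3, 1]))  # 3 -> three papers have at least 3 citations each

Note that once a paper has been counted, extra citations to it do not raise the h-index any further, which is one reason the indicator favours sustained output over a single highly cited paper.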

Journal level indicators

Journal Impact Factor

What is it? The ratio of citations received in a year by items a journal published in a given earlier period to the total number of citable items it published in that period (usually a two- or five-year window). It is affected by citation norms in a discipline and is an indicator of journal reach and popularity, not quality; a worked example follows this entry.

Where can I see it? Available in Web of Science. A similar metric, CiteScore, is available in Scopus/SciVal
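
As a worked illustration of the two-year version of this ratio, with invented figures:

# Invented example: a journal's 2024 two-year value would be the citations
# received in 2024 to items it published in 2022-2023, divided by the number
# of citable items (e.g. articles and reviews) it published in 2022-2023.

citations_received = 600   # citations in 2024 to 2022-2023 items
citable_items = 250        # citable items published in 2022-2023

print(citations_received / citable_items)  # 2.4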

SCImago Journal Rank (SJR)

What is it? The average number of weighted citations received in a year per document published in the previous three years, based on Scopus citation data. It attempts to give different value to different citations by weighting them by the field and rank of the citing journal; a simplified illustration follows this entry.

Where can I see it? Available from https://www.scimagojr.com/
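
The full calculation is iterative and runs over the whole Scopus citation network, so the sketch below shows only the weighting idea, using invented citing journals and weights; it is not the actual SJR algorithm, in which the weights emerge from a PageRank-style computation.

# Simplified illustration of prestige weighting only, with invented figures.

citations_by_source = [
    (40, 1.5),   # 40 citations from a high-prestige citing journal (weight 1.5)
    (100, 1.0),  # 100 citations from average citing journals (weight 1.0)
    (60, 0.5),   # 60 citations from low-prestige citing journals (weight 0.5)
]
documents_previous_3_years = 120

weighted_citations = sum(count * weight for count, weight in citations_by_source)
print(weighted_citations / documents_previous_3_years)  # ~1.58 weighted citations per document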

Source Normalized Impact per Paper (SNIP)

What is it? The ratio of a journal’s citation count per paper to the citation potential of its subject field. It attempts to correct for differences in citation norms across disciplines, and is broadly equivalent to the field-weighted citation impact for articles.

Where can I see it? Available in Scopus and from CWTS Journal Indicators

Institutional indicators

Institution level metrics seek to rank universities by various criteria, which may include bibliometric indicators. Common ranking systems include the THE World University Rankings, the Academic Ranking of World Universities, and the QS World University Rankings.