
Citation impact


“Whether a text is interesting and you get something out of it is more important than whether it is published somewhere important”.
PhD candidate, humanities

Hopefully, your PhD research will make an impact by advancing knowledge in your field or by contributing to real-world applications. While these kinds of impact are difficult to measure directly, data on how often and how broadly research is cited provide more or less useful approximations. There are, however, important aspects of research beyond those captured by citation-based metrics, and recent initiatives have spurred a growing interest in a broader and fairer basis for research assessment. On this page you will learn about

  • the Declaration on Research Assessment (DORA) and the recent movement to find fair and robust ways of evaluating research that do not rely on impact factors
  • how bibliometric indicators, such as journal rank (e.g. the impact factor) and h-index, are calculated
  • criticisms voiced against bibliometric impact measures and their application
  • the possible roles of citations in research and in research evaluations
  • possible implications of bibliometric indicators for your research and your career
  • how practising open science may improve your research impact

The Declaration on Research Assessment (DORA)

Evaluating research and researchers is not easy. While citation-based impact metrics, such as the journal impact factor, are convenient and have been popular, they have serious limitations and drawbacks as research assessment tools.

The Declaration on Research Assessment (DORA) is a global, cross-disciplinary initiative that reflects a growing awareness of the need to develop better methods for evaluating research and researchers. The number of signatories is growing; in 2018 it was signed by the Research Council of Norway and a number of Norwegian research institutions. Several major research funders are also among its signatories (e.g. the Wellcome Trust).

Signatories of DORA commit to not using journal-based metrics (such as the impact factor) in decisions regarding funding, hiring and promotion, to be explicit about the criteria used for making such decisions, to consider the value and impact of all research outputs (e.g. datasets and methods innovations), and to expand the range of impact measures to include such things as influence on policy and practice.

DORA represents an important development. Arguably, it implies that you may benefit from considering how to describe and document any influence your work has that is not captured by citation-based impact metrics. That said, citation-based metrics continue to be important and to evolve. In the following sections, we introduce some of them.

Journal rank

The most well-known measure of journal rank is the journal impact factor (often abbreviated to IF or JIF). It was developed in order to select the journals to be included in the Science Citation Index (Garfield, 2006). The impact factor is a measure of how often articles in a particular journal have been cited on average in a given year. The central idea is that the impact factor and similar measures of journal rank indicate the journal’s relative influence among journals within the same subject category.

Calculating a journal’s impact factor

A journal’s impact factor is based on citation data from the Web of Science database, owned by Clarivate Analytics. If your institution has purchased the appropriate licence, you may be able to look up a journal’s impact factor and related statistics there. While using the impact factor in research evaluation is controversial (see Critical remarks, below), as a PhD candidate you should know what it is and how it is calculated.

The impact factor (IF) for a given year is the ratio of A, the number of citations in that year to items published in the previous two years, to B, the number of citable items published in those same two years: IF = A / B.

Figure 1: General formula for calculating a journal’s impact factor.

Consider the journal Proceedings of the National Academy of Sciences (PNAS). This journal published a total of 6 467 citable articles in 2015-2016. In 2017, the total number of citations to articles from these two previous years was 61 460 (see table below).

IF for 2017                                    2015      2016      Sum
Citations in 2017 to articles published in     35 338    26 122    61 460
Articles published in                          3 282     3 185     6 467

Table 1: Citations and publications involved in the calculation of the impact factor for PNAS for 2017

Substituting into the general formula, we get an impact factor for PNAS for 2017 of 9.5 (61 460 / 6 467 = 9.5).
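Expressed as code, the calculation is a simple ratio. The sketch below is a minimal illustration in Python, using the figures from Table 1; the function name impact_factor is our own and not part of any official tool.

```python
def impact_factor(citations, citable_items):
    # IF = A / B, as in the general formula above (Figure 1)
    return citations / citable_items

# PNAS, impact factor for 2017 (figures from Table 1)
A = 35_338 + 26_122   # citations in 2017 to articles published in 2015 and 2016
B = 3_282 + 3_185     # citable articles published in 2015 and 2016
print(round(impact_factor(A, B), 1))   # -> 9.5
```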

Critical remarks: impact factor

Over the years, much criticism has been raised against the impact factor. Some of the main points are summarised below.

No verifiable relation to quality

The impact factor is associated with a journal’s prestige, and is sometimes considered a proxy for the scientific quality of the work it publishes. Unfortunately, there is no verifiable association between journal impact factor and reasonable indicators of quality (for an overview of the relevant research, see Brembs, 2018).

Invalid measure of central tendency

As can be seen from the formula and example calculation above, the journal impact factor is, roughly, a mean. Means are sensible indicators of central tendency if the distribution of values is symmetrical. However, citations to scholarly articles are not symmetrically distributed. Most published articles receive few, or even no, citations, while a small number of articles become very highly cited. This skewness means that a journal’s impact factor is a poor predictor of the citation count of any given single article published in that journal (Seglen, 1997; Zhang, Rousseau & Sivertsen, 2017).

Field dependency

Because the impact factor is field dependent, only journals within the same scientific field are comparable. Nevertheless, the impact factor has been used to compare different fields.

Figure 2: Number of references by age to articles published in 2011 (the figure is based on Adler, Ewing, & Taylor, 2009)

This figure demonstrates the field dependency of the impact factor. Only citations inside the two-year citation window, marked grey, contribute to the calculation. Citations outside the window do not count, even though most citations fall outside it and refer to older articles. In rapidly developing fields (blue line), the impact factor is considerably higher than in slowly developing fields (red line). A lower impact factor does not mean that one field is of lower quality than another; the citation window may simply be too short to be representative of subjects that develop slowly.

Anglo-American bias

The pool of selected journals has a strong Anglo-American bias. Influential journals written in other languages are rarely captured by Clarivate Analytics Journal Citation Reports.

Unintended use

The impact factor is not only used for ranking journals according to their relative influence, as originally intended, but also for measuring the performance of individual researchers. Given the skewness of citation distributions described above, this is a misapplication. The use of the impact factor to evaluate individual researchers has been criticized by a broad scholarly community, not least by the co-creator of the Science Citation Index, Eugene Garfield, himself:

“Typically, when the author’s bibliography is examined, a journal’s impact factor is substituted for the actual citation count. Thus, use of the impact factor to weight the influence of a paper amounts to a prediction, albeit coloured by probabilities.” (Garfield, 1999)

Manipulation

The impact factor can be manipulated. It is influenced by the point in time when a journal issue is published: issues published at the beginning of a year have a greater chance of accumulating citations than those published at the end of the year. Furthermore, editors may influence the value of their journal’s impact factor by writing editorials containing references to articles in their own journal (journal self-citations). References given in editorials count towards the numerator, while the editorials themselves do not count towards the denominator, since the denominator by definition consists only of citable articles, and editorials are not regarded as such.

Incomplete references

References given in articles may be incomplete or incorrect. Incorrect references are not corrected automatically and are therefore not counted as citations. This affects the value of the impact factor and of other citation indicators, such as the h-index.

Alternative journal indicators

In order to compensate for some of the weaknesses of the impact factor (field dependency, inclusion of self-citations, length of citation window, quality of citations), efforts have been undertaken to develop better journal indicators. More advanced metrics are usually based on network analysis, such as the SCImago Journal Rank (SJR) and the Source Normalized Impact per Paper (SNIP), both based on data from Scopus. While such measures arguably do a better job of ranking journals, they are still only applicable to journals and should not be used to evaluate research output at the level of individual researchers. For that purpose, the h-index, introduced below, is better suited.

The h-index

The h-index is a measure of the total, citation-based impact of a researcher, based on how often his or her publications are cited.

  • A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each (Hirsch, 2005).
  • The index combines both an author’s scientific production (publications) and impact (number of citations).

When exploring the literature of your research field, the h-index may give you a picture of the impact of individual researchers and research groups. You may retrieve the h-index from e.g. Web of Science, Scopus and Google Scholar.

When applying for a scholarship, project funding or a job, you may be required to state your h-index.

Calculating someone’s h-index

The h-index for a given author (Karen) calculated step-by-step:

Step 1: Search for author Karen in a given database


Figure 3: Search result for author Karen’s publications in a given database

The figure illustrates the search result for author Karen in an arbitrary database. It also indicates all citing publications (counting lines) within the same database. In our example, author Karen has 10 publications (a, b, c, d, e, f, g, h, i and j), so Np = 10.

Step 2: Sort publications by decreasing number of citations

Rank          1    2    3    4    5    6    7    8    9   10
Publication   c    i    a    g    f    h    d    j    b    e
Citations    90   45   20   10    4    4    3    3    1    1

Table 2: Karen’s publications sorted by decreasing number of citations

Step 3: Fulfil the condition that h of the publications have at least h citations each
h = max(rank) such that citations ≥ rank
In our example, the four highest-ranked publications (c, i, a and g) each have at least four citations, while the publication at rank 5 has only four citations, which is fewer than five. The remaining (Np − h) publications have no more than h citations each; here, the remaining six publications (f, h, d, j, b and e) have no more than four citations each.

Result: Karen’s h-index is equal to 4.
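The same steps can be written as a short function. The sketch below is a minimal illustration in Python, using Karen’s citation counts from Table 2; it sorts the counts in decreasing order (Step 2) and finds the highest rank at which the citation count is still at least as large as the rank (Step 3).

```python
def h_index(citation_counts):
    # Largest h such that h publications have at least h citations each (Hirsch, 2005)
    sorted_counts = sorted(citation_counts, reverse=True)   # Step 2: sort by decreasing citations
    h = 0
    for rank, citations in enumerate(sorted_counts, start=1):
        if citations >= rank:    # Step 3: citation count is still >= rank
            h = rank
        else:
            break
    return h

# Karen's ten publications (citation counts from Table 2)
karen = [90, 45, 20, 10, 4, 4, 3, 3, 1, 1]
print(h_index(karen))   # -> 4
```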

Example: H-index retrieved in Web of Science, Scopus and Google Scholar

In this example, we use a renowned Norwegian researcher in ecology and evolutionary biology, Nils C. Stenseth, and show that his h-index differs between the databases because their coverage of content differs.

Step 1: Search for the author, making sure you cover all possible versions of the author’s name

Step 2: Go to the statistics available in the three databases

  • Web of Science: Author Search > Create Citation Report
  • Scopus: Author Search > Analyse author output
  • Google Scholar: Advanced search > Return articles authored by

The results presented here are based on data as of April 2018. Citation counts typically increase with time and so does the h-index. To determine the present value, perform a new search.

Web of Science

h-index = 76

To make sure that all publications by the author are retrieved, the example shows search results for stenseth n*, stenseth nc*, and stenseth nils. It is possible to add rows to make sure that different spellings of the name are included. The time span is 1945-2018, and the number of publications is 630. The h-index, including self-citations, is 76.

Scopus

h-index = 75 (80)

The time interval covers publications from 1974 to 2018. In Scopus, the h-index excluding self-citations is 75; with self-citations, it would be 80. The number of publications is 595.

Google Scholar

h-index = 100

Google Scholar covers a wider range of publication types; therefore the h-index is higher here. The number of publications is 756. The h-index for all years is 100, while the h-index since 2013 is 57.


Critical remarks: H-index

The h-index alone does not give a complete picture of the performance of an individual researcher or research group.

The h-index underrepresents the impact of the most cited publications and does not consider the long tail of rarely cited publications. In particular, the h-index cannot exceed the total number of publications of a researcher. The impact of researchers with a short scientific career may be underestimated and their potential undiscovered. Read more about this below: “Problem: The Matthew effect in science”.

Be aware:

“Using a three-year citation window we find that 36% of all citations represent author self-citations. However, this percentage decreases when citations are traced for longer periods. We find the highest share of self-citations among the least cited papers.” (Aksnes, 2003)

  • The h-index is comparable only for authors working in the same field.
  • The h-index is comparable only for authors of the same scientific age.
  • The h-index differs between databases, depending on the coverage of the individual database.
  • The h-index depends on your institution’s subscription time range; it may underestimate researchers’ impact if their older publications are not included.
  • The h-index can be manipulated: exaggerated use of self-citations may inflate its value.

Citations in communication

Citing maintains intellectual traditions in scientific communication. Usually, citations and references provide peer recognition: when you use others’ work by citing it, you give credit to its creator. Citations also serve as dialogue, expressing participation in an academic debate, and as aids to persuasion, where documents assumed to be authoritative are selected to underpin further research. However, citations may be motivated by other reasons as well.

Citations may also express

  • criticism of earlier research
  • friendship, to support colleagues
  • payment of intellectual debt, e.g. toward supervisors or collaborators
  • self-marketing of one’s own research, i.e. self-citations

Citations and evaluation

Applicable across fields?

Note that scholarly communication varies from field to field. Comparisons across different fields are therefore problematic.

However, there are attempts to make citation indicators field independent. For example, The Times Higher Education World University Rankings involve citation indicators which are field independent, i.e. normalized (Times Higher Education, 2013).

Citations are basic units in the measurement of research output. They are regarded as an objective (or at least less subjective) measure of impact, i.e. influence and importance, and are used in addition to, or as a substitute for, peer judgments.

There is a strong correlation between peer judgments and citation frequencies. For this reason, citations are relied on as indicators of quality and are used for e.g.

  • benchmarking universities
  • scholarship and employment decisions
  • political decisions regarding research funding
  • exploring research fields and identifying influential works and research trends

Citations must be handled carefully when evaluating research.

Citation data vary from database to database, depending on the coverage of content of the database.

Furthermore, two problematic factors are the varying motivations for citing and the considerable skewness of the distribution of citations.

Problem: The Matthew effect in science

To those who have, shall be given…

When sorting a set of publications by the numbers of citations received, the distribution shows a typical exponential or skewed pattern. Works which have been cited are more visible and are more easily cited again (vertical tail in figure), while other works remain hidden and are hardly ever cited (horizontal tail in figure). This phenomenon is referred to as the Matthew effect in science.

Figure 4: Citation pattern

What is the problem with skewed distributions? Skewed patterns make it difficult to determine an average citation count. Different approaches may be applied, see the figure.

  • Mean citation count: Long vertical or horizontal tails distort the mean value; a few highly cited publications pull it upwards. The impact factor is an example of this type of average.
  • Citation median: Long tails also distort the median value. For example, a long tail of rarely cited publications results in a low median, while the minority of highly cited publications is ignored.
  • H-index: The h-index is an alternative summary measure, designed to compensate for the effect of long tails (see the sketch below).
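To illustrate how these summaries behave on a skewed distribution, the sketch below compares them using Karen’s citation counts from Table 2. This is a minimal illustration in Python; the compact h_index helper is our own restatement of the definition given earlier, not part of any standard library.

```python
from statistics import mean, median

def h_index(counts):
    # Same definition as in the step-by-step example above, written compactly:
    # the number of ranks at which the citation count is still >= the rank.
    return sum(c >= rank for rank, c in enumerate(sorted(counts, reverse=True), start=1))

karen = [90, 45, 20, 10, 4, 4, 3, 3, 1, 1]   # skewed: two papers dominate

print(mean(karen))     # 18.1 -- pulled upwards by the two highly cited papers
print(median(karen))   # 4.0  -- reflects the long tail of rarely cited papers
print(h_index(karen))  # 4    -- insensitive to how extreme the top papers are
```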


Improve your impact

Good research + high visibility
=
Your best chance to make an impact

Being aware of how academic performance is evaluated allows you to make informed decisions and devise strategies to build and document your impact, and thereby improve your career prospects. Our general advice centres on making your work visible, accessible and understandable.

Make your work visible to other researchers:

  • Publish with international publishers/journals that are known and read in your field. Your work is then more likely to be cited by your peers.
  • Share your work on social networks (beware of copyright issues). Impact measures based on usage and social network mentions are emerging and available on many websites, e.g. on journal and database websites. Work that has been shared and spread is more likely to get cited.
  • Engage and participate in scholarly debates in society, e.g. in the press. News coverage may also increase scholarly attention.
  • Showcase your work:
    – Create your scholarly online identity. People with common names are difficult to distinguish. To avoid ambiguity create your profile (e.g. ORCID, Google Scholar) and link your publications to your profile.
    – Cite your previous work in order to give a broader picture of your research. However, do not overdo this, and make sure the citations are relevant to the topic discussed.
  • When looking for a publishing venue, consider journals (or publishers) that are indexed in databases used in your field; databases increase the visibility of your work.
  • Make sure your work is added to the research register at your institution via Cristin. Usually these data are used for performance measures, so state your name and institutional affiliation on your publications in a correct and consistent manner.
  • Collaborate with other researchers. In general, collaboration can benefit your career by increasing your production. Co-publishing may also imply borrowing status from more renowned co-authors who are read and cited regularly.

Make your work accessible to other researchers by adopting open science practices:

  • Post your work in repositories. If you have published in a subscription journal, archiving a version of your manuscript in an open repository will make it markedly more accessible.
  • Publish open access. Openly available articles or books are easily spread and cited. Publishing open access is encouraged and supported at many institutions, and mandated by many funding bodies.
  • Share your research data along with your publication. This strengthens your research and makes your findings replicable and verifiable.
  • Making your publications and your data openly available (as far as possible) is likely to increase your chances of having your work cited (McKiernan et al., 2016).

Make your work understandable to other researchers:

  • Use informative, memorable descriptions including key words in the title and abstract.
  • Place your findings in a larger context by citing the work of other researchers in your field.
  • Publish in English for wider international distribution. If you have published in your native language, consider republishing in English, or publish a summary of your main findings in an international journal.

References

Adler, R., Ewing, J., & Taylor, P. (2009). Citation statistics. Statistical Science, 24(1), 1-14. Retrieved from https://www.jstor.org/stable/20697661

Aksnes, D. W. (2003). A macro study of self-citation. Scientometrics, 56(2), 235-246. https://doi.org/10.1023/A:102191922

Brembs, B. (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience, 12(37), 1-7. https://doi.org/10.3389/fnhum.2018.00037

Garfield, E. (1999). Journal impact factor: A brief review. Canadian Medical Association Journal, 161, 979-980. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1230709/

Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA, 295, 90-93. https://doi.org/10.1001/jama.295.1.90

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102

McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., . . . Yarkoni, T. (2016). How open science helps researchers succeed. eLife, 5, 1-19. https://doi.org/10.7554/eLife.16800

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314(7079), 498-502. https://doi.org/10.1136/bmj.314.7079.497

Zhang, L., Rousseau, R., & Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation. PLoS ONE, 12(3), e0174205. https://doi.org/10.1371/journal.pone.0174205
