Research Impact Indicators & Metrics
RIIM Team - Contact Us!
We welcome your questions about impact metrics and/or your suggestions to improve this guide. Please contact us at RIIM@groups.umass.edu.
This guide has been developed by a team of librarians:
- Christine Turner, Scholarly Communication Librarian
- Jennifer Chaput, Data Services Librarian
- Melanie Radik, Science and Engineering Librarian
- Rebecca Reznik-Zellen, Head of the Science and Engineering Library, and
- Sarah Fitzgerald, Assessment and Planning Librarian
Telling Your Story
Competition for prestige and funding has greatly elevated the stakes for demonstrating research potential and impact, whether for grants, policy development, or recognition in promotion and tenure, among other rewards. The International Network of Research Management Societies (INORMS) established a Research Evaluation Working Group in 2018, which developed the SCOPE Framework to guide your narrative approach:
- START with what you value, not with external drivers and not purely with what the available data sources can measure.
- CONTEXT considerations: who are you evaluating, why are you evaluating, and do you need to evaluate?
- OPTIONS for evaluating: consider both qualitative and quantitative approaches, don't use quantities to indicate quality, and include those being evaluated in the evaluation.
- PROBE deeply: who might be discriminated against, how might your approach be gamed, what might the unintended consequences be, and does the cost outweigh the benefit?
- EVALUATE your evaluation: did it achieve its aims? Continue to reconsider your approach.
Keep in mind that no data source is comprehensive, and the major indexing databases predominantly cover English-language, journal-based, STEM literature from the Global North. Working through the SCOPE process, referencing the norms of your research field, and following best practices for applying indicators will put you in good stead.
Best Practices
Beth Mitchneck (University of Arizona) and Joya Misra (University of Massachusetts) have studied Equitable Faculty Evaluation Practices and noted research on biases based on gender, race, ethnicity, and nationality in teaching evaluations, service and leadership, research (grant funding, citations), and letters of reference. These biases are intensified by intersectionality. The authors present ways to disrupt bias, including holistic approaches, collective assessment, structure, contextual considerations, and sufficient time.
Several researchers and organizations have proposed standards and recommendations for best practices. Specific recommendations vary, but two principles are consistent:
- Evaluate and give most weight to research quality, then consider quantitative indicators; and
- Use more than one indicator and note their sources.
San Francisco Declaration on Research Assessment (DORA) - recommendations for funding agencies, institutions, publishers, researchers and organizations that supply metrics along three themes:
- eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations (a brief sketch of how this metric is calculated follows the list);
- assess research on its own merits rather than on the basis of the journal in which the research is published; and
- capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
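For context on the metric named in the first recommendation above: the Journal Impact Factor is a journal-level average rather than a measure of any individual article. As a rough sketch of the standard definition (a simplification; the published calculation includes further rules about what counts as a citable item):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

Because the value is an average over an entire journal, a small number of highly cited articles can dominate it, which is one reason DORA recommends assessing research outputs on their own merits.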
The Leiden Manifesto for Research Metrics - 10 principles to guide research evaluation, outlined in the video "The Leiden Manifesto for Research Metrics" from Diana Hicks on Vimeo.
The Metric Tide: Final Report with Executive Summary, by the Independent Review of the Role of Metrics in Research Assessment and Management, including these dimensions of responsible metrics (p. X):
- Robustness: basing metrics on the best possible data in terms of accuracy and scope;
- Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
- Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
- Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system;
- Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.
Metric Evaluation Tools
The DORA organization has released a guide for research institutions, "SPACE, a rubric for analyzing institutional conditions and progress indicators." At each of its foundation, expansion, and scaling levels, the rubric covers:
- Standards for Scholarship
- Process Mechanics and Policies
- Accountability
- Culture within Organizations
- Evaluative and Iterative Feedback
This framing recognizes that the culture, policies, and practices of a researcher's affiliated organization can drive which metrics are chosen and how they are used.
The Metrics Toolkit provides definitions, scope, appropriate use cases, limitations, inappropriate use cases, transparency notes, and more for indicators covering authors, books, book chapters, datasets, journals, journal articles, and software/code/scripts.
It's Funny Because It's True
XKCD: The Types of Scientific Paper. Image used under CC BY-NC 2.5 license.
- Last Updated: Jul 3, 2024 10:30 AM
- URL: https://guides.library.umass.edu/Research_Impact