
Scholar Identity and Research Impact

This guide describes the digital scholarship landscape, how to build and manage your research identity, and the different methods and tools for tracking the impact of scholarship.

Research Metrics: Introduction

Scientometrics is the field of study that concerns itself with measuring and analysing scholarly literature. It explores various aspects, including assessing the impact of research and academic journals, understanding how scholars cite each other's work, and utilizing these metrics in policy and academic administration. Research in this field employs qualitative, quantitative, and computational methods.

Traditional quantitative measures such as the Journal Impact Factor (JIF) are typically associated with bibliometrics, a branch of scientometrics. However, recent advances in algorithms, search capabilities, machine learning, and data mining, combined with mathematical techniques, have led to more sophisticated and precise evaluations of relationships within the academic world.

Altmetrics, or 'alternative metrics', is a newer approach that relies on openly available data. It quantitatively compiles mentions, citations, and digital transactions (e.g. downloads, views, saves) of works, persons, and non-traditional outputs (e.g. software, datasets, videos, presentations, repositories). This information is readily available in the open scholarly ecosystem through public APIs. How to interpret these metrics is a subject of ongoing discussion; like all metrics, they should be considered in context. Altmetrics are not meant for assessing the impact of research but are valuable for measuring attention or engagement (positive or negative), especially in public discussion arenas and in the media.
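
As an illustration of how accessible these open data are, the minimal sketch below queries the OpenAlex API (one example of an open scholarly data source with a public API) for a work's citation counts and open-access status. The DOI shown is a hypothetical placeholder; substitute one of your own.

```python
# Minimal sketch: fetch openly available metrics for a single work from
# the OpenAlex API (https://api.openalex.org). The DOI is a placeholder.
import requests

DOI = "10.1371/journal.pone.0000000"  # hypothetical placeholder DOI

resp = requests.get(f"https://api.openalex.org/works/doi:{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()

print("Title:     ", work.get("display_name"))
print("Citations: ", work.get("cited_by_count"))
print("OA status: ", work.get("open_access", {}).get("oa_status"))
# Citations broken out by year, useful for seeing the 'lag' in uptake
print("By year:   ", {c["year"]: c["cited_by_count"]
                      for c in work.get("counts_by_year", [])})
```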

Contextualizing Research Metrics: Strategies

In understanding research metrics, the four-type classification of scientometric indicators provides an easy entry point: citation, author-level, journal-level, and altmetrics. Unfortunately, this classification does not help in building an accurate narrative, and it arguably encourages reliance on single-point metrics, which is strongly discouraged. It is also important to recognize that most metrics are surrogate 'lag' indicators: they often do not directly measure concepts like impact, and they take time to show how the scholarship is being used in the literature. See below for guidance on responsible use.

Both Elsevier (Research Metrics Guidebook) and Clarivate (Unpacking Research Profiles: Moving Beyond Metrics, by ISI) have produced publications that guide the appropriate use and interpretation of the metrics provided in their various data and analytics products. These, together with other white papers, group the various indicators into narrative categories such as collaboration, societal impact, scholarly output and advancement, and economic and legislative impact.

In creating your narrative, it is important to identify what story you need to tell. If you are not sure what that is, or which indicators are most appropriate for your narrative category, see Bibliometrics & Research Impact services.

Responsible Use of Metrics

Evaluating research is complex: research evaluation has many stakeholders and is conducted for many reasons. Assessments are popular with governments because such frameworks are seen as providing accountability to the public. Funders use assessments to create benchmarks for the standard of research being done. Universities also benefit financially when they write their research strategies around the requirements of assessments. However, the demands of assessments can be cumbersome and stressful for researchers and can create tension between faculty colleagues.

Changes to traditional research evaluation have been influenced by initiatives such as the 2013 San Francisco Declaration on Research Assessment (DORA), the 2015 Leiden Manifesto for research metrics, and the 2020 Hong Kong Principles for assessing researchers. DORA has evolved into an organizational leader in reforming research assessment, and part of this leadership is supporting a 'registry of endorsement'. Many Canadian organizations, including the Tri-Agencies, are signatories to DORA; however, as of 2023, no higher-education organizations or funders in Manitoba are listed as signatories.

A 2018 joint report by the University of Manitoba and the University of Manitoba Faculty Association affirmed many of these white-paper recommendations for appropriate use in their Principles of Agreement. The following guidance summarizes both the Principles and the recommendations; used together with the strategies for contextualizing metrics above, it will support responsible use:

  • Select topics/groupings with sufficient data
  • Always use a mixed methods approach (quantitative and qualitative)
  • Understand and state the limits and indexing approaches of sources like Web of Science, Scopus, and Google Scholar
  • When comparing, use normalized indicators or scores (a toy calculation is sketched after this list)
  • Describe/define known discipline behaviours or norms (e.g. citation counts in mathematics are exceedingly low and take a long time to accrue; the 'review' document type in the health sciences tends to be highly cited compared to other types)
  • State what metrics were used and why
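
To make normalization concrete, the sketch below computes a simple field-normalized citation score: a work's citation count divided by the average citations of comparable works (same field, publication year, and document type). This mirrors the logic behind indicators such as Scopus's Field-Weighted Citation Impact or Web of Science's Category Normalized Citation Impact, but the baseline figures here are invented purely for illustration.

```python
# Toy calculation of a field-normalized citation score: a paper's citations
# divided by the average citations of its peer group (field, year, doc type).
# A score of 1.0 means average for the group. Baseline values are invented.

# Hypothetical baseline: mean citations for (field, year, document type) peers
baseline = {
    ("mathematics", 2018, "article"): 4.2,
    ("health sciences", 2018, "review"): 52.7,
}

def normalized_score(citations: int, field: str, year: int, doc_type: str) -> float:
    """Citations relative to the peer-group average; 1.0 = group average."""
    expected = baseline[(field, year, doc_type)]
    return citations / expected

# The same raw count of 12 citations reads very differently by discipline:
print(normalized_score(12, "mathematics", 2018, "article"))     # ~2.86: well above average
print(normalized_score(12, "health sciences", 2018, "review"))  # ~0.23: well below average
```

This is why raw citation counts should never be compared across disciplines: the denominator, not the count itself, carries the disciplinary context.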