8.2 Towards a framework for a responsible bibliometrics-aided research assessment
By Wolfgang Glänzel (ECOOM, KU Leuven, Belgium), Cinzia Daraio (Sapienza University of Rome, Italy) and Juan Gorraiz (University of Vienna, Austria)
The growing reliance on bibliometric indicators in research evaluation has led to increasing criticism, both from the academic community and from recent European initiatives advocating more holistic, peer review–centred approaches. This paper addresses the urgent need for responsible and contextualised use of such metrics. We propose a conceptual framework to support the appropriate application of bibliometric indicators in a broader evaluative context, tailored to the goals of the assessment. This framework promotes a balanced approach that values transparency, interpretive care, and the ethical use of quantitative indicators within broader and comprehensive evaluation systems. It calls for shared protocols, cross-sector collaboration, and recognition of disciplinary diversity to ensure that indicators make a meaningful contribution to research assessment – neither being excluded nor dominating the framework.
Introduction
The use of quantitative approaches – particularly ready-made bibliometric indicators – in research assessment has come under increasing scrutiny. Much of this criticism stems from concerns about the unintended consequences that arise when these tools are used improperly. However, many reform initiatives lack conceptual clarity: they rarely specify what exactly is being assessed, the specific purpose and concrete context of the assessment, the level of aggregation involved, or the degree of granularity required. This ambiguity raises the question of whether research is conceived as a holistic academic endeavour or merely as a collection of quantifiable outputs. Further complicating the debate, much of the criticism favouring peer review over metrics is based on issues observed at the level of individual researchers – concerns already acknowledged within the bibliometric community (Wouters et al., 2013).
This scepticism towards indicators has spurred a wave of manifestos and declarations – such as DORA and the Leiden Manifesto – calling for more responsible and meaningful approaches to research assessment (Wilsdon et al., 2015; Biagioli and Lippman, 2020; Curry et al., 2020).
At the European policy level, demands for change have intensified. The European Commission’s 2021 scoping report advocated a re-evaluation of current systems and laid the foundations for the CoARA agreement in July 2022 (European Commission, 2021; CoARA, 2022). While these initiatives represent significant progress, they fall short of providing concrete operational tools or criteria for the responsible use of indicators (see also Daraio and Maletta, 2025).
In response, this paper argues that bibliometric indicators should not be dismissed altogether. Rather, reform efforts should target their “inappropriate” use – particularly their application in contexts for which they were never intended (Glänzel, 2006). Bibliometric indicators are analytical instruments developed through rigorous scientific methods within the fields of scientometrics and information science. Hence, discrediting them broadly is both unjustified and counterproductive.
What is required is a structured framework to determine whether the use of a specific indicator is fit-for-purpose within each evaluation context. The aim is not to oppose quantitative methods with qualitative ones, but rather to develop criteria that guide appropriate use, acknowledging that even peer review has limitations.
As part of our recent initiative (Daraio et al., 2025a), we proposed a multidimensional framework (Daraio et al., 2025b) that outlines how indicators should – and can – be selected and applied responsibly across different evaluation contexts. The framework concludes by identifying key questions and limitations, while affirming the value of indicators – when applied with expertise and care – in contemporary research evaluation.
The key elements of this framework will be briefly outlined and illustrated in the following sections.
Acknowledgement
This dossier is based on the conference papers by Daraio et al. (2024; 2025) presented as part of a special session at the STI 2024 Conference in Berlin and in a Special Track also organised by the authors at the ISSI 2025 Conference in Yerevan.