MORE@DIAG Seminar: New developments in Applied Evaluative Informetrics
This book presents an introduction to the field of applied evaluative informetrics, which deals with the use of bibliometric and informetric indicators in research assessment. It is written for interested scholars from all domains of science and scholarship, and especially for those subjected to research assessment: research students at advanced master's and PhD level, research managers, funders, science policy officials, and practitioners and students in the field. The book sketches the field's history, recent achievements, and its potential and limits. It dedicates special attention to the application context of quantitative research assessment, describing research assessment as an evaluation science and distinguishing various assessment models in which the domain of informetrics and the policy sphere are disentangled analytically. It illustrates how external, non-informetric factors influence indicator development, and how the policy context shapes the setup of an assessment process. It also clarifies common misunderstandings in the interpretation of several frequently used statistics.
Addressing the way forward, the book presents the author's critical views on a series of fundamental problems in the current use of research performance indicators in research assessment. Highlighting the potential of informetric techniques, it proposes a series of new features that could be implemented in future assessment processes, sketches a perspective on altmetrics, and proposes new lines of longer-term, strategic indicator research.
The lecture presents recent developments in evaluative informetrics, relating to the assessment of national academic systems in a European context and to the adequacy of the bibliographic databases Web of Science, Scopus and Google Scholar.
Professor Henk F. Moed has been active in numerous research topics, including: the creation of bibliometric databases from raw data from Thomson Scientific's Web of Science and Elsevier's Scopus; the analysis of inaccuracies in citation matching; the assessment of the potential and pitfalls of journal impact factors; the development and application of science indicators for measuring research performance in the basic natural and life sciences; the use of bibliometric indicators as a tool to assess peer review procedures; the development and application of performance indicators in the social sciences and humanities; studies of the effects of Open Access on research impact, and of patterns in the 'usage' (downloading) behaviour of users of electronic scientific publication warehouses; studies of the effects of the use of bibliometric indicators on scientific authors and journal publishers; the development of a new journal impact measure (SNIP); the relationship between full-text downloads and citations; bibliometric studies of international scientific migration and collaboration; comparisons of Web of Science, Scopus and Google Scholar; multi-dimensional assessment of research impact; the potential of altmetrics; and ontology-based bibliometric data management. He has published over 100 research articles in international, peer-reviewed journals, has served as programme chair of numerous international conferences in the field, and is an editorial board member of several journals in his field. He won the Derek de Solla Price Award in 1999. In 2004 he edited, jointly with W. Glanzel and U. Schmoch, the Handbook on Quantitative Science and Technology Research (Kluwer, 800 pp.), and in 2005 he published a monograph, Citation Analysis in Research Evaluation (Springer, 346 pp.), one of the very few textbooks in the field.
In September 2017 he published a second monograph with Springer, entitled 'Applied Evaluative Informetrics', which will be presented during the seminar. He is currently editing a second Handbook on S&T Indicators (with W. Glanzel, U. Schmoch and M. Thelwall), to be published in 2018.