
Journal indexation: The misconception of guaranteed quality
Higher education institutions and research institutes are no strangers to commercial scientific journal indexers such as Scopus and Web of Science (WoS). These platforms serve as primary benchmarks for academic success and research performance evaluation. Within the scientific community, indexers function as a credibility currency for research’s perceived prestige and reliability.
This reliance has fostered the misconception that indexation equates to research quality, even though indexation does not guarantee high-quality research.
The indexation process merely demonstrates compliance with administrative requirements set by indexers. These include a peer review system, transparent editorial policies, and properly structured metadata or supporting data.
In other words, journal indexation alone cannot serve as the sole metric for research performance. Instead, evaluation systems should prioritise impact-driven indicators.
Journal indexation vs research quality
There are many factors that contribute to research quality, including the compatibility between the research question and the chosen methodology, the integrity and transparency of the research process, and the accessibility of data or supplementary research materials (such as datasets, methods of analysis, and research logs). Peer review processes typically evaluate these aspects.
Take the journal Nature, for example. This prestigious publication retracted an article titled A Specific Amyloid-β Protein Assembly in the Brain Impairs Memory in 2024 after it was proven that the lead researcher had manipulated images. Unfortunately, before the retraction, the article had already been cited 2,375 times and accessed by more than 74,000 readers.
According to the indexer Web of Science, Nature has an impact factor of 50.5. Under the SCImago Journal Rank (SJR) indicator, which is based on Scopus data, it scores 18.51 and sits in the top journal quartile (Q1) in the multidisciplinary category.
Although widely regarded as reputable references, bibliometric indicators — statistical analyses of published books and articles — have inherent limitations.
For instance, the impact factor only measures the average number of citations per article in a journal over the past two years. However, citation distribution is often uneven — while some articles receive many citations, others may receive none. As a result, the impact factor does not necessarily reflect the overall quality of all published articles in a given journal.
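The effect of skewed citations on an average can be illustrated with a small sketch (the citation counts below are invented for illustration, not drawn from any real journal):

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles in one journal:
# two heavily cited papers, the rest cited rarely or never.
citations = [120, 45, 3, 1, 0, 0, 0, 0, 0, 0]

print(mean(citations))    # impact-factor-style average: 16.9
print(median(citations))  # the typical article: 0
```

Here the average suggests every article earns about 17 citations, yet the typical article in this hypothetical journal is never cited at all — which is why a journal-level average says little about any individual paper.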
The impacts of the misconception
This misconception about journal quality has negatively affected the academic climate, particularly in developing countries like Indonesia. Many policies prioritise the quantity of publications and citation rates over research quality.
As a result, policymakers in higher education and research direct academics to focus on topics with global appeal to increase their chances of publication in indexed journals recognised by international indexing institutions.
Regrettably, this trend often leads to the neglect of local social and humanities issues, such as environmental sustainability and community-based problems, which are considered less appealing to an international audience.
The pressure to publish in indexed journals also increases the risk of unethical academic practices. Misconduct such as plagiarism, data fabrication, and ‘salami slicing’ — the practice of splitting a single study into multiple smaller papers to inflate publication counts — has become more prevalent. In fact, paper mills — cartels of publishing companies that sell fabricated scientific articles — are a documented issue.
These practices not only damage researchers’ credibility but also undermine the integrity of the academic community as a whole. As publicly funded institutions, universities and research institutes must prioritise disseminating inclusive and impactful knowledge to society.
What are the alternatives?
Research quality appraisal requires a more inclusive and holistic paradigm to counter the negative effects of indexation-based performance evaluation. Several global science initiatives advocate for such changes, including the Declaration on Research Assessment (DORA), launched in 2012, the Leiden Manifesto, published in 2015, and the Coalition for Advancing Research Assessment (CoARA), launched in 2022.
Reforming academic policies at both institutional and national levels is crucial to fostering a thriving research ecosystem. Relevant ministries must promote evaluation systems that prioritise research impact, while bibliometric indicators should serve as complementary rather than primary assessment tools.
Governments and academic institutions can also offer incentives for research that addresses strategic national issues rather than focusing solely on indexation standards.
Additionally, academic institutions should enhance capacity-building programmes for journal editors and researchers, including training in academic writing and editorial management. This approach can help local journals meet international standards while retaining their unique identity.
Transparency is equally vital. One concrete step is facilitating researchers’ storage of raw data and related materials in the National Scientific Repository (RIN), ensuring public accessibility.
Scientific articles undergoing peer review can also be shared as pre-prints, allowing the public to read and provide feedback. For publicly funded research, adhering to transparency principles demonstrates the researcher’s accountability to the society that finances their work.
While journal indexation improves the accessibility of scientific articles, it should not be the sole performance metric — let alone a measure of research quality. Relying on bibliometric indicators as a ‘shortcut’ for performance appraisal could ultimately reduce the relevance and societal impact of research in Indonesia.
Kezia Kevina Harmoko contributed to the translation of this article.