Conclusions
This study is the first to critically analyze Stanford’s list of the
World’s Top 2% Scientists. Such lists are frequently used as a metric
for evaluating the impact and quality of a researcher’s work, and
several criticisms have already been raised against them, including:
(1) The popularity of a particular field or research topic greatly
influences the list. Regardless of the quality of their work,
researchers working in highly cited fields are more likely to be
included on these lists.
(2) The list does not consider the quality of the citations a
researcher receives. A researcher may accumulate a large number of
citations, but if most of them come from low-impact or low-quality
sources, the count does not accurately reflect the quality of the
researcher’s work.
(3) The list underrepresents early-career researchers. A young
researcher may not have had enough time to accumulate a large number of
citations, so the list may not give them the credit they deserve.
Overall, a list of highly cited or top 2% researchers can be a useful
indicator of impact, but because of these limitations and biases it
should not be used as the sole measure of a researcher’s quality and
impact.
More importantly, however, this research reveals that the so-called
standardized database of the world’s top 2% of scientists is itself
flawed. Among these flaws are:
(1) The database incorrectly lists researchers as having first
published in the nineteenth century and as having continued to publish
until 2022.
(2) Many authors have implausibly low publication counts and career
lengths; for example, one author with only 2 papers is ranked 612.
(3) Many authors have a large number of publications, some of which are
merely news items and editorials.
(4) Some of the authors listed in the database are journalists and
editors, whose news articles were treated as “peer-reviewed” by this
list.
(5) Some entries in the list are institutions, not individuals.
(6) Many authors in the list have self-citation rates above 50%; a
simple screening check for this and related anomalies is sketched below.
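To make these checks concrete, the following Python sketch screens an
author record for three of the anomalies above: an implausible career
start year (flaw 1), a very low publication count for a highly ranked
author (flaw 2), and a self-citation rate above 50% (flaw 6). It is a
minimal illustration under assumed field names; neither the record
format (rank, npubs, firstyr, self_cites, total_cites) nor the
thresholds come from the actual database.

# Minimal sketch of the screening checks suggested by flaws (1), (2),
# and (6). The record fields are hypothetical, not the database's
# actual schema.

def flag_anomalies(author):
    """Return a list of red flags for one author record."""
    flags = []
    # Flaw (1): implausible career start, e.g. "first published" in the 1800s.
    if author["firstyr"] < 1900:
        flags.append(f"career start {author['firstyr']} predates 1900")
    # Flaw (2): very few publications for a highly ranked author.
    if author["npubs"] < 5 and author["rank"] <= 1000:
        flags.append(f"only {author['npubs']} papers but ranked {author['rank']}")
    # Flaw (6): self-citation rate above 50%.
    if author["total_cites"] > 0:
        rate = author["self_cites"] / author["total_cites"]
        if rate > 0.5:
            flags.append(f"self-citation rate {rate:.0%}")
    return flags

# Example record mirroring the case noted in flaw (2).
example = {"rank": 612, "npubs": 2, "firstyr": 1855,
           "self_cites": 300, "total_cites": 500}
print(flag_anomalies(example))
# -> ['career start 1855 predates 1900',
#     'only 2 papers but ranked 612',
#     'self-citation rate 60%']

Simple sanity checks of this kind could be run over the released data
before a ranking is published.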
The study also discovered fundamental flaws in the so-called “databases
of standardized citation indicators”, which fail to recognize when an
author is a journalist and the indexed articles are news items. The use
of such “standardized” rankings should not be encouraged.