Alberto Pepe edited the fork factor.tex
almost 10 years ago
Commit id: a063235e5a7fc35e3acabb5a3107eebe925acab6
\textbf{How are academic journals evaluated?} There are many ways to determine the prestige of an academic journal. One of the oldest and best-established measures is the Impact Factor (IF), which is simply the average number of citations to recent articles published in a journal. The Impact Factor matters because a journal's reputation is also used as a proxy for the relevance of a scientist's past research when s/he applies for a new position or for funding. Several \href{http://en.wikipedia.org/wiki/Impact_factor#Criticisms}{criticisms} have been made of the use and misuse of the Impact Factor. One of them concerns the policies that editors adopt to boost their journal's Impact Factor (and attract more advertising), to the detriment of readers, writers, and science at large. These policies promote the publication of sensational claims by researchers, who are in turn rewarded by funding agencies for publishing in such high-IF journals. This effect is broadly recognized by the scientific community and represents a conflict of interest that, in the long run, increases public distrust in published data and slows the pace of scientific discovery. Academic publishing should instead foster the sharing of new findings and scientific data and a faster advancement of scientific research. The IF is clearly a distorting player in this situation. To resolve the conflict of interest, it is thus essential that funding agencies start complementing the IF with a better proxy for the relevance of publishing venues and, in turn, of scientists' work.
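For concreteness, the standard two-year Impact Factor of a journal in year $y$ (the definition used by the Journal Citation Reports) can be sketched as:

\[
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

where $C_{y}(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N_{t}$ is the number of citable items the journal published in year $t$. Because it is a single average over a short window, the IF is easily skewed by a handful of highly cited (or sensational) articles, which is precisely the dynamic described above.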
\textbf{Academic impact in the era of forking.} A number of alternative metrics for evaluating academic impact are emerging.
We, at Authorea, strongly believe that these alternative