Alberto Pepe edited untitled.tex  almost 10 years ago

Commit id: c7bdc8f8d8de8d4547bca3520e9abf50005a117f


\textbf{How are academic journals evaluated?} There are many different ways to determine the prestige of an academic journal. One of the oldest and best established measures is the Impact Factor (IF), which is simply the average number of citations to recent articles published in a journal. The Impact Factor is important because the reputation of a journal is also used as a proxy to evaluate the relevance of a scientist's past research when s/he applies for a new position or for funding. Several \href{http://en.wikipedia.org/wiki/Impact_factor#Criticisms}{criticisms} have been raised against the use and misuse of the Impact Factor. One of them concerns the policies that editors adopt to boost the Impact Factor of their journal (and attract more advertising), to the detriment of readers, writers, and science at large. These policies promote the publication of sensational claims by researchers, who are rewarded by funding agencies that consider whether a scientist has published in such high-IF journals. This effect is broadly recognized by the scientific community and represents a conflict of interest that, in the long run, increases public distrust in published data and slows down scientific discovery. Academic publishing should instead foster the sharing of new findings and scientific data, and a faster pace of scientific research. It is apparent that the IF is a deeply distorting player in this situation. To resolve the conflict of interest, it is thus fundamental that funding agencies start complementing the IF with a better proxy for the relevance of publishing venues and, in turn, of scientists' work.

\textbf{Academic impact in the era of forking.} A number of alternative metrics for evaluating academic impact are emerging.
We, at Authorea, strongly believe in these alternative metrics. This is not a prediction, it is a fact: a few years from now, the impact of scientific work will no longer be measured by the Impact Factor alone. To this end, it is useful to note that the proposed system for continuous publishing of \textbf{SME} will take advantage of a web-based hosting service similar to \href{http://en.wikipedia.org/wiki/GitHub}{GitHub}. Although GitHub is currently used mostly by computer scientists, its features fit the needs of biomedical scientists too. A versioning system such as GitHub enables \textbf{forking} of projects, which means that a colleague scientist can start a new research project from your peer-reviewed, published SME. The more your SMEs are forked, the higher the interest in your work, whether it is a cutting-edge technology or a novel model system. However, if the colleague scientists forking your SME are unable to reproduce your results, their fork will stop at their first published SME. Thus, the longer the forks, the more valuable your research is. Importantly, if you cannot reproduce someone else's data, it will be in your interest to make that public, which takes minimal effort thanks to Authorea.

\textbf{And now onto the nerdy part: The Fork Factor.} Here, we define the \textbf{Fork Factor} (FF) as:

\begin{equation}
FF = N \left( L^{\frac{1}{\sqrt{N}}} - 1 \right)
\end{equation}

where $N$ is the number of forks and $L$ their median length. In order to take the reproducibility of SMEs into account, the length of forks carries a higher weight in the FF formula. Anyone out there care to help us collect some data and test this out?
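As a starting point for such an experiment, the formula above is easy to compute. The sketch below is a minimal illustration, assuming fork length is measured as the number of published SMEs in a fork; the function name \texttt{fork\_factor} and the zero-forks convention are our own choices, not part of the definition above.

```python
import math
from statistics import median

def fork_factor(fork_lengths):
    """Fork Factor: FF = N * (L^(1/sqrt(N)) - 1),
    where N is the number of forks and L is their median length.

    fork_lengths -- one length per fork (e.g. published SMEs in that fork).
    Returns 0.0 when there are no forks (our convention).
    """
    n = len(fork_lengths)
    if n == 0:
        return 0.0
    L = median(fork_lengths)
    return n * (L ** (1 / math.sqrt(n)) - 1)
```

Note one property the formula encodes: forks that stop at their first published SME (length 1) contribute nothing, since $1^{1/\sqrt{N}} - 1 = 0$, so \texttt{fork\_factor([1] * 10)} returns 0 no matter how many such forks exist, while fewer but longer forks score higher.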