\textbf{Research impact in the era of forking.} A number of \href{http://altmetrics.org/manifesto/}{alternative metrics for evaluating academic impact} are emerging. These include metrics that give scholars credit for sharing raw science (such as datasets and code), semantic publishing, and social media contributions, based not solely on citations but also on usage, social bookmarking, and conversations. We at Authorea strongly believe that these alternative metrics should, and will, be a fundamental ingredient of how scholars are evaluated for funding in the future. In fact, \href{https://www.authorea.com}{Authorea} already welcomes data, code, and raw science materials alongside its articles, and is built on an infrastructure (Git) that naturally serves as a framework for distributing, versioning, and tracking those materials. \href{http://en.wikipedia.org/wiki/Git_(software)}{Git} is a version control system currently employed by developers to collaborate on source code, but its features fit the needs of most scientists as well. A versioning system, such as \href{https://www.authorea.com}{Authorea} or \href{http://www.github.com}{GitHub}, empowers \textbf{forking} of peer-reviewed research data, allowing a colleague of yours to develop it further in a new direction. A fork inherits the history of the work and preserves the value chain of science (i.e., who did what). In other words, forking in science means \textit{standing on the shoulders of giants} (or soon-to-be giants) and is equivalent to citing someone else's work, but in a functional manner. Whether it is a "negative" result (we like to call it a non-confirmatory result) or not, publishing your peer-reviewed research in Authorea will promote forking of your data. To learn how we plan to implement \textbf{peer revision} in the system, please stay tuned for future posts on this blog.

\textbf{More forking, more impact, less bad science.} Obviously, the more of your research data you publish, the higher the chances that it will be forked and used as the basis for groundbreaking work, and in turn, the higher the interest in your work and your academic impact. Whether your projects are data-driven peer-reviewed articles on Authorea discussing a new finding, raw datasets detailing novel findings on \href{http://zenodo.org}{Zenodo} or \href{http://figshare.com}{Figshare}, or source code repositories hosted on GitHub presenting a new statistical package, every bit of your work that can be reused will be forked and will give you credit. Do you want to do science a favor? Publish your non-confirmatory results as well, and help your scientific community quickly spot \href{http://www.nature.com/news/papers-on-stress-induced-stem-cells-are-retracted-1.15501}{bad science} by publishing a dead-end fork (Figure 1).

\textbf{And now onto the nerdy part: The Fork Factor.} We would like to imagine what academia would be like if forking actually mattered in determining a scholar's reputation and funding. How would you calculate it? Here, we give it a shot. We define the \textbf{Fork Factor} (FF) as:

\begin{equation}
FF = N \left( L^{\frac{1}{\sqrt{N}}} - 1 \right)
\end{equation}

where $N$ is the number of forks of your work and $L$ is their median length. To take the reproducibility of research data into account, fork length carries extra weight in the FF formula. Indeed, forks of length one likely represent a failure to reproduce the forked research datum; note that $L = 1$ gives $FF = 0$ for any $N$.

Anyone out there care to improve the formula above?
For instance, would it be better if the FF reached a plateau for $L > 3$? Let us know at \verb|[email protected]| or by commenting here.
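
As a starting point for that discussion, here is a minimal sketch of the formula in Python. The function names and the plateau-capped variant are our own illustration of the idea, not an existing implementation:

\begin{verbatim}
import math

def fork_factor(n_forks, median_length):
    """Fork Factor: FF = N * (L^(1/sqrt(N)) - 1)."""
    if n_forks == 0:
        return 0.0  # no forks, no Fork Factor
    return n_forks * (median_length ** (1.0 / math.sqrt(n_forks)) - 1.0)

def fork_factor_capped(n_forks, median_length, plateau=3):
    """Hypothetical variant: cap L so that FF plateaus for L > plateau."""
    return fork_factor(n_forks, min(median_length, plateau))

# Example: 16 forks with a median length of 4 commits.
print(fork_factor(16, 4))         # 16 * (4**0.25 - 1) ~= 6.63
print(fork_factor_capped(16, 4))  # L capped at 3      ~= 5.06
\end{verbatim}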