The first scholarly journal
Research writing and publishing is built upon a model that was conceived prior to the web, when wide dissemination could only really be accomplished by journals. Today this is simply not true, as I can share my thoughts with the world instantly through a variety of social networks and channels. Indeed, I have already done so with this piece. However, most researchers produce documents in a way that such networks do not effectively support, like PDF, Word, or even Google Docs. Yes, you can share Word documents and PDFs online just like you can print a tweet--both are possible, neither makes sense. For the most part, researchers are technically reliant upon publishers to share their work online and thus, they are reliant upon publishers' models. To move beyond this, research writing needs to be designed for research and the web. That is, research writing should be collaborative in real time, semantically structured, data-driven, in HTML or XML, and version controlled. It is finalfinalFINALLY.docx time to move to a new system.
By utilizing a system designed for research and the web, like Authorea,
writing effectively becomes publishing. This is powerful as it gives researchers the tools of publishers. Thousands have taken advantage of this already; compare an
Authorea article to any traditional scientific publication and you will find that it is not only more accessible but also more advanced than many publications from the world's largest publishers. Of course, this is even true of blogs and other tools outside of traditional academic publishing, like GitHub. So then why has research publishing not been revolutionized? Why do we continue to buy into antiquated models and methods? Because giving researchers this power, although necessary, is not enough. Indeed, preprinting, which is the closest thing we have to a democratized model of publishing, still encourages researchers to buy into the traditional system, because that is what is tied to career advancement.
Impact, metrics, and brands.
In effect, publications are the currency of scientists. They are used to "buy" grants and careers and, like money, come in different denominations of worth. A publication in one prestigious journal will advance your career more than ten open preprints or even open access articles--regardless of the actual content of the articles. And that is the crux of the problem: the incentives for advancing our careers do not align with the incentives for good research. In fact, they are antagonistic in many cases, with people hacking their results to fit a preconceived narrative. Cultural movements, driven by some amazing people and initiatives, are bringing change across fields, and open, free, and rapid publishing is now on the rise with preprints across disciplines. However, preprints do not fix the problems with academic publishing; they simply make them more tolerable.
If we did not paste together the dead bones and scales of ideology, if we did not sew together the rotting rags, we would be astonished how quickly the lies would be rendered helpless and subside. -- Alexander Solzhenitsyn, 1974
To change, or rather improve, the system, we need to change the incentives! We need article-level metrics that are open and that reward not only the impact of the work but also its veracity. After all, research is about uncovering truths, not telling nice stories. Multiple proposals for rewarding careful work, from badges for good practices to the
pre-registration of experiments, have been put forward; however, they have not been widely adopted by researchers and journals, suggesting that, while they might benefit research, they have not done enough to benefit the researcher.
New metrics designed specifically to evaluate the veracity of work have also been proposed
\cite{Nicholson2014,Grabitz_2017}. Such metrics could, in theory, provide not only a new incentive and reward for careful work but also a systematic way to highlight the major faults of traditional systems. Consider the following scenario: in today's world, a paper with 100 citations or more is generally considered a success. But what if, of those hundred citations, four studies found it to be wrong based on their own independent evidence and the rest simply mentioned it? The ability to analyze those citing papers automatically, using sentiment analysis and machine learning, would allow us to distinguish robust work from unsupported or even refuted work. By measuring this explicitly, in a way that can be ascertained in seconds, researchers would be incentivized to do all they could so that others could arrive at the same findings. That is, they would be incentivized to share data openly, to publish fully detailed protocols so that others could reproduce their work, to publish carefully, and to publish openly. This is already becoming a reality with
the R-factor, and the implications of such a tool will hopefully soon be realized.
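To make the idea concrete, here is a minimal sketch of such a metric in Python. It assumes each citing statement has already been classified (by the sentiment-analysis step described above) as supporting, refuting, or merely mentioning the claim; the score below is an illustrative ratio in the spirit of the R-factor, not its published definition, and the function name is mine.

```python
from collections import Counter

def veracity_score(citation_labels):
    """Ratio of supporting citations to all citations that actually tested the claim.

    citation_labels: iterable of "supporting", "refuting", or "mentioning",
    e.g. produced by a citation-sentiment classifier run over citing sentences.
    Mentions carry no evidence either way, so they are left out of the denominator.
    """
    counts = Counter(citation_labels)
    tested = counts["supporting"] + counts["refuting"]
    if tested == 0:
        return None  # nobody has independently tested the claim yet
    return counts["supporting"] / tested

# The scenario above: 100 citations, 4 refute the result, 96 merely mention it.
labels = ["refuting"] * 4 + ["mentioning"] * 96
print(len(labels))             # 100 citations -> looks like a "successful" paper
print(veracity_score(labels))  # 0.0 -> none of the independent tests support it
```

Run on the hundred-citation scenario above, the raw citation count says "success" while the score makes it immediately visible that the claim has never been independently supported.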
A new system
With the right metrics, incentives, and tools, researchers could finally utilize a system of their own. A paper would be drafted, as described above, on a tool like Authorea. As the document is finished, it would be automatically cross-referenced against databases of papers and researchers to find related papers and researchers. This could help not only with writing and collaboration but also with review and publication. Indeed, it should be possible that, as you write the paper, you are automatically told if others are working on a similar idea (possible collaborators) and, when it is complete, who would be well suited to review it. Tools for matching documents with reviewers, like
JANE, already exist; they simply need to be improved. Coordinating review, an arduous process of back-and-forth invitations, scheduling, and reminders, could be handled by an AI system. Indeed, Amy and Andrew, the AI-based assistants developed by
x.ai, are already used by many people to schedule and reschedule meetings. Why shouldn't the same approach be used to arrange and coordinate peer review? A system that matches, invites, and rewards reviewers could then be developed, dramatically cutting costs. Want your paper to be reviewed? Leave a review first. Want to tell whether a paper is worthwhile? Read it and evaluate it in terms of article-level metrics and community sentiment (reviews/endorsements). This might seem like science fiction, but these tools already exist. It is simply a matter of putting them together in a way that researchers can use.
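As one illustration of how the matching step might work (a generic text-similarity sketch, not how JANE or any existing service is actually implemented; the function and reviewer names are hypothetical), a draft's abstract can be compared against the recent work of a candidate reviewer pool and the closest matches surfaced:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_reviewers(draft_abstract, candidates, top_n=3):
    """Rank candidate reviewers by textual similarity to the draft.

    candidates: list of (reviewer_name, abstract_of_their_recent_work) pairs.
    Returns the top_n reviewers with their cosine-similarity scores.
    """
    names, abstracts = zip(*candidates)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([draft_abstract, *abstracts])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

# Toy usage: match a draft against a small pool of candidate reviewers.
pool = [
    ("Reviewer A", "Deep learning methods for protein structure prediction."),
    ("Reviewer B", "Altmetrics and article-level measures of research impact."),
    ("Reviewer C", "Citation sentiment analysis for evaluating scientific claims."),
]
print(suggest_reviewers("Article-level metrics that reward the veracity of research claims.", pool))
```

A production system would of course draw on full publication histories, conflict-of-interest checks, and reviewer availability, but the core matching step really is this simple.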
I invite you to help make this a reality with us.