Evolving Open Solutions - Group 1

Workgroup Members

Adyam Ghebre, Director of Outreach, Authorea
Elizabeth Kirk, Associate Librarian for Information Resources, Dartmouth College
Frank Sander, Director, Max Planck Digital Library, Max Planck Society
Geoffrey Bilder, Director of Strategic Initiatives, Crossref
Joshua Nicholson, CEO and Co-Founder, The Winnower
Melinda Kenneway, Executive Director and Co-Founder, Kudos
Matthew Salter, Publisher, American Physical Society
Paul Murphy, Director of Publishing, The RAND Corporation
Robert Kiley, Head of Digital Services, Wellcome Library
Peter Potter, Director of Publishing Strategy, Virginia Tech

Our group started by making a number of general observations about the growth and current state of open scholarship:
Opportunities for open scholarship are greater than ever before, and yet the vast majority of academics still prioritise publishing in pay-for-access journals over open access journals—even though OA journals make scholarship readily available to anyone with access to the internet.
They also prioritise publishing in the traditional article format that has dominated scholarly communications for many centuries, despite the availability of other formats and platforms through which they could make their work known.
The current system of sharing research remains rooted in practices that were built to work in a print-based world and have evolved relatively little over the past decade or more. The online environment has added new features and services to that core process, but underlying core practices remain unchanged.
Key barriers to more open scholarship include:

Flawed incentives
Publication history remains central to tenure and promotion for most academics, judged either by the impact factors of the journals they publish in or, for monographs, the prestige of the Press. These measurements are only loosely linked (by proxy) to the actual quality of an individual scholar and their work.
A host of new metrics are becoming available to assess impact at a more granular level, and yet these emerging metrics (article citations/views/downloads, altmetrics etc.) are not widely used for funding, tenure and promotion decisions – despite initiatives like DORA (the Declaration on Research Assessment), which calls for an end to using journal-based metrics, such as Journal Impact Factors, as a surrogate assessment of the quality of individual research articles.
Until the way we measure reputation changes, scholars – and the many stakeholders that determine their career progression – will continue current practices, even if new, more open practices could be shown to extend the influence and application of their work.
Dysfunctional market
The scholarly communications market has been widely described as ‘dysfunctional’. Journals are – in economists’ terms – complements, not substitutes. Each journal contains original works that are not available in alternative journals, which limits market competition. This is evidenced by huge price disparities among subscription journals, even within the same fields – a clear symptom of inefficiency in the market.
However, the current reputation system continues to encourage researchers to publish in journals with high impact factors regardless of the cost to institutions or ease of access to the content for readers. A more open market with greater transparency around costs and access would create more competition oriented towards the needs of the research community. 
Misalignment of funding

The dysfunction within the market in turn leads to a misalignment of funds. Although some progressive work by funders in the UK (the Wellcome Trust and RCUK) has begun to address this by requiring grant-holders to publish more openly, much of the allocation of the 1 trillion dollars invested in research every year is based on subjective measures – funders struggle to identify the best people and projects to support. The focus remains largely on outputs, with limited attention to outcomes.
Current system too suppressive and slow
A print-based system created a role for publishers as ‘super filters’ – selecting and curating the best and most original content for publication through the peer review process. That high selectivity was required pre-Internet, when the costs of packaging, printing and distributing material were so high. Although online publishing of course brings significant costs, the dissemination portion of those costs has been falling. In addition, alternative peer review systems (post-publication review etc.) offer the potential to publish much more material online without a proportional increase in costs.
The value of publishers’ traditional role as pre-publication filters was much debated within our group, and a difference of opinion emerged. The publisher representative believed that the initial filtration role remained important (particularly for medical information, which needs clear badging of its credibility). The funder and library representatives were less convinced this was necessary in a world where research ideas and discoveries could be presented very rapidly on a pre-print server and vetted through a managed process of post-publication review (much like the F1000 model).
It was agreed by all that a light pre-publication review was desirable to ensure that poor-quality or inaccurate work was not made available. The group discussed the opportunity for publishers to maintain their filtration and curation role by reviewing and selecting content from pre-print servers for re-presentation in branded journals/resources with a particular editorial or quality focus – much as journals make their selections today. Publishers could then build business models around those selected articles, generating revenue by adding value through brand/prestige associations and other value-added services for authors and readers.
Restrictive formats
The research article remains the currency of career progression in STEM and Social Sciences; the monograph continues to dominate in the humanities. Both are historic print-based formats. Although progress has been made in terms of online features and functionality, these basic units of scholarly communication remain much the same. Additional content like data, images, infographics, presentations and other outputs count little towards a researcher’s funding success and career progression.
The availability of data relating to a funded project is a particular problem. Huge value might be gained for the progress of knowledge from sharing data as soon as it is available, but the current system disincentivizes this by rewarding authors instead for ‘salami publication’ (multiple articles based on a single data set over the course of a grant). There is also a critical problem of attribution for data: who gathered it, who analyzed it, and how that value (and the people who provided it) is then recognized in research articles.
There is a vast array of research activities and outputs aside from formal publications that should be better recognized as contributing to scholarship – data sharing; software/cell line/reagent/tools development; peer review; blogs; social media/talks/posters (outreach); training and teaching; pre-prints and essays. Incentives will be required for researchers to produce and share their work in a wider range of formats, and in an open manner. Work also needs to be done to define which of these activities and outputs are most effective in driving impact (and what kind of impact). This will itself act as an incentive, as long as funders and universities subscribe to the same measures of impact.
Lack of normalization of metadata and taxonomies
A key barrier to more openness in scholarly communications remains the relative silos that exist across the scholarly communications industry. Common standards and technical infrastructure are beginning to emerge, but there remains much work to be done here.
Overcoming barriers to openness
The barriers to open scholarship are extensive, but most can be traced back to how tenure and promotion decisions are made. We can’t change researcher behaviors until we change how we reward them, and to change how we reward them we need a replacement for impact factors, which reflect neither the openness nor the impact of a particular researcher and their work. That measure needs – at least initially – to be relatively simple, but it should support fairer, discipline-specific comparisons and embrace a wider array of activities and outputs.
Our group discussion concluded with the following recommendations:
  1. Understand how the system works now. The OSI initiative should fund a research project to review in detail, country by country, how funding, tenure and promotion decisions are made and the role of research outputs and activities within this.

  2. Define an ideal future. A working group should be established to define an ideal system for funding, tenure and promotion and to develop an evaluation framework. Key to this would be recommending simple alternatives to the impact factor as measures of quality/influence. Part of this work would also require analysis of the differential impact and efficacy of various publishing formats and research activities.
  3. Make the evaluation system transparent. As well as proposing new measures of impact to be used when making funding, tenure and promotion decisions, a set of processes should also be developed to ensure complete transparency.
