In response to the COVID-19 pandemic and other global challenges, the pace of change in scholarly publishing has accelerated dramatically.
Since the pandemic’s onset, experiments to vet and disseminate critical research faster, such as the C19 Rapid Review Initiative, have surged. The compounding forces of COVID-19 and Open Access funder mandates are also spurring publishers to launch and accelerate OA journal initiatives.
More broadly, within academia, the pandemic has raised awareness of the role scholarly publishers must play in advancing the UN Sustainable Development Goals (SDGs). Following the launch of the SDG Publishers Compact at the Frankfurt Book Fair in October 2020, there has been a groundswell of support across the scholarly communication ecosystem for efforts to further the SDGs. Facilitating the rapid flow of reliable information and ensuring equitable access to knowledge are among publishers’ key priorities.
Which areas of innovation are fueling these developments, and which will likely continue to do so? Below we round up three we’re watching.
Data driving a more interconnected research ecosystem
In the title of a prescient 2019 Scholarly Kitchen article, Alice Meadows, Director of Community Engagement at NISO, proclaimed, “Better Metadata Could Help Save The World!” Meadows made the case that, in the face of global crises from poverty to climate change to disease outbreaks, publishers must cultivate interoperable metadata and data sharing standards to support the rapid and widespread publication and reuse of vital research. As a starting point for improving metadata quality and data sharing practices, she pointed to the FAIR (Findable, Accessible, Interoperable, and Reusable) data principles.
Fast forward to the present pandemic, and the need for and benefits of machine-readable FAIR data have never been more apparent. Quality metadata, open data, and machine-readable full-text articles that support text and data mining have underpinned some of the most significant coronavirus research linking and discovery tools to date, including CORD-19 and NLM’s LitCovid database. And FAIR data has been key to developing publishing initiatives and new AI tools to facilitate rapid vetting of coronavirus-related scholarship, like C19 Rapid Review and the CCC COVID Author Graph.
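To make “machine-readable metadata” a bit more concrete, here is a minimal Python sketch that pulls an article record from the public Crossref REST API and checks it for a few FAIR-relevant fields (license, abstract, full-text links, references). The DOI is a placeholder, and the checks are illustrative assumptions rather than a formal FAIR assessment.

```python
import requests

# Placeholder DOI purely for illustration; swap in any registered DOI.
DOI = "10.1234/example-doi"

# Crossref's public REST API returns article metadata as JSON.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
record = resp.json()["message"]

# Illustrative (not authoritative) checks for FAIR-relevant fields.
fair_signals = {
    "has_title": bool(record.get("title")),
    "has_license": bool(record.get("license")),        # reuse terms stated
    "has_abstract": bool(record.get("abstract")),      # machine-readable summary
    "has_fulltext_links": bool(record.get("link")),    # supports text and data mining
    "has_references": bool(record.get("reference")),   # interlinked scholarship
}

for field, present in fair_signals.items():
    print(f"{field}: {'yes' if present else 'no'}")
```

Even a simple script like this shows why consistent, well-populated metadata matters: every missing field is a link, license, or dataset a discovery tool cannot act on.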
However, even with so many significant advances in research aggregation and data analysis, the COVID-19 pandemic has revealed just how unFAIR the majority of metadata outputs and datasets remain. Among the primary reasons are inconsistencies in publishers’ article-level metadata and data sharing policies (or the lack thereof), along with the need for greater standardization of metadata and dataset collection among repositories and better interoperability between them.
The best practices resulting from the Metadata 2020 initiative are helping to raise awareness of the roles metadata creators, curators, custodians, and consumers must play in fulfilling existing metadata guidelines and developing new ones. To date, over 100 organizations and 170 individuals have signed the initiative’s metadata pledge. And, as the community-led GO FAIR initiative continues to pick up momentum, it appears publishers and stakeholders are beginning to put more resources towards data sharing in addition to metadata standardization. After declaring 2020 a “research data year,” STM has announced it will prioritize data availability in 2021 with the establishment of a permanent research data program. STM lists the following program focus areas:
- SHARE: Increase the number of journals with data policies and articles with Data Availability Statements (DAS)
- LINK: Increase the number of journals that deposit data links via the Scholix framework (see the illustrative sketch after this list)
- CITE: Increase citations to datasets in line with the Joint Declaration of Data Citation Principles
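To make the LINK and CITE goals more tangible, here is a hedged sketch of what an article-to-dataset link might look like in code. The field names are a simplified approximation of the Scholix idea of pairing a source article with a target dataset, not the official Scholix schema, and the identifiers and provider name are placeholders.

```python
# A simplified, illustrative article-to-dataset link record.
# Field names approximate the Scholix source/target idea; they are
# not the official Scholix schema.
article_dataset_link = {
    "source": {
        "identifier": "10.1234/example-article",   # placeholder article DOI
        "identifier_scheme": "DOI",
        "type": "publication",
    },
    "target": {
        "identifier": "10.5555/example-dataset",   # placeholder dataset DOI
        "identifier_scheme": "DOI",
        "type": "dataset",
    },
    "relationship": "References",                  # the article cites the dataset
    "link_provider": "Example Publisher",          # hypothetical provider name
}


def format_data_citation(link: dict) -> str:
    """Render a minimal, human-readable data citation from a link record."""
    target = link["target"]
    return f"Dataset {target['identifier']} (cited by {link['source']['identifier']})"


print(format_data_citation(article_dataset_link))
```

The point of structures like this is that once article-to-dataset relationships are captured as data rather than prose, they can be exchanged, aggregated, and counted toward the SHARE, LINK, and CITE goals above.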
To what extent publishers will encourage data sharing remains to be seen. And, as noted by COAR, further analysis is needed to determine whether most repositories will be able to support proposed publisher data deposit guidelines, like the 2020 “Data Repository Selection: Criteria That Matter,” in the near future. But initiatives like the “Data Repository Selection Criteria” are pushing forward conversations about making metadata and underlying research data more standardized and shareable among stakeholders, and they are also sparking new repository initiatives like COAR’s working framework for best practices in repositories.
Preprints entering the publishing process
The COVID-19 pandemic is also drastically changing the way research is disseminated, with more scholars than ever opting to post manuscripts to preprint servers and publishers increasingly supporting preprint posting, as in the C19 Rapid Review Initiative. Preprints are serving as a powerful vehicle for early research dissemination and collaboration. But the pandemic has also heightened awareness of how the lines between published research and unvetted preprints can become blurred, potentially resulting in the spread of misinformation.
To leverage the benefits of preprint posting and mitigate risks, stakeholders have begun exploring a variety of ways to separate the signal from the noise, including more stringent screening of potentially “high-risk” research by preprint servers. Academic journal publishers are increasingly entering the conversation and working to develop more formal processes for distinguishing peer-reviewed preprints from unvetted ones, a topic discussed during Scholastica’s recent webinar, “Increasing transparency and trust in preprints.”
Examples include NIH NIGMS and OASPA encouraging scholars to post preprint reviews and comments. The pandemic has also sparked new overlay journal publishing initiatives, including Rapid Reviews: COVID-19 (RR:C19), launched by MIT Press as the first OA journal dedicated to publishing peer reviews of coronavirus-related preprints. The journal is helping to raise awareness of the most promising early COVID-19 research findings and debunk inaccurate scientific claims, combating the spread of false information online and in the mainstream media. The press sees the potential to use the same or similar preprint review publishing models to support more rapid vetting of other critical and time-sensitive research in areas like climate change.
It’s worth noting that FAIR data is also helping to drive RR:C19’s rapid review model, harkening back to the previous section on the role of data in recent publishing innovations. The journal’s editorial team is partnering with researchers at Lawrence Berkeley National Laboratory to use COVID Scholar, an AI tool they developed, to more quickly sift through the thousands of COVID-19 preprints posted and identify novel ones to review.
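As a rough illustration of the kind of metadata-driven triage that a tool like COVID Scholar automates at far larger scale, the sketch below queries the public Crossref REST API for recent preprint-type records (“posted-content”) matching a COVID-19 search term. It is a simplified stand-in under those assumptions, not the journal’s actual pipeline.

```python
import requests

# Query Crossref's public REST API for preprint-type records ("posted-content")
# matching a COVID-19 search term. A simplified stand-in for large-scale,
# AI-assisted preprint triage.
params = {
    "query": "COVID-19",
    "filter": "type:posted-content",  # Crossref's record type for preprints
    "rows": 10,
    "sort": "issued",
    "order": "desc",
}
resp = requests.get("https://api.crossref.org/works", params=params, timeout=30)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["(untitled)"])[0]
    print(f"{item['DOI']}: {title}")
```

Turning the resulting records into a ranked review queue is where the real AI work begins, but none of it is possible without clean, open, machine-readable preprint metadata in the first place.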
Agile principles supporting rapid OA journal development
As noted, the pandemic and the UN SDGs have also heightened awareness of the need to expand access to essential research. And recent OA funder mandates like Plan S are pushing many publishers to embark on OA journal launches and transitions much more quickly than they had anticipated. The challenge that remains for small and medium-sized publishers with limited funding, in particular, is figuring out which OA journal models will be financially sustainable and how to test new publishing approaches without putting all of their eggs in one basket.
Taking an Agile approach to piloting OA journal initiatives is one way small and medium publishers like UC Press and the Electrochemical Society are testing different publishing models while mitigating risk. Agile is an iterative project management methodology that originated in software development as a way for companies to respond to fast-changing technologies. It is becoming increasingly applicable in scholarly publishing as new digital content and research access standards are introduced at a faster pace. As the name suggests, Agile supports flexible planning in the face of unknowns. Its fundamental aim is to break up larger-scale initiatives into smaller projects that can be completed individually, delivering results faster while leaving room to pivot plans early and often as needed.
For example, the UC Press publishing team is starting to use Agile planning for OA journal launches. In developing Advances in Global Health, a new fully OA journal framed around the UN SDGs, the publishing team quickly realized they would have to experiment with different funding options to identify a sustainable mix in line with their publication mission. So they decided to implement Agile planning, focusing at the outset on core journal infrastructure, such as editorial processes and initial funding, rather than mapping out publication plans and pricing models three or even five years in advance as they had in the past. Working this way, the team has been able to get the journal off the ground more quickly and remain open to new publication development options that arise in the evolving OA landscape.
To learn more about applying Agile principles in scholarly publishing, check out Scholastica’s white paper “Iterate to Innovate: How scholarly publishers can use Agile methodologies to respond to change more effectively.”
Putting it together
There’s nothing quite like adversity to fuel innovation. In the face of the pandemic and other global challenges, the scholarly publishing community is exhibiting resilience and identifying new opportunities to support pressing research needs. From harnessing the power of data to connect research outputs and surface relevant findings, to employing Agile principles to test new OA pilots, publishers have been experimenting in recent months more than ever before.
What other scholarly publishing innovation areas are you watching? Share your thoughts in the comments section!