How often do you and your editorial team stop to consider the effectiveness of your peer review process? If you’re like most journals, the answer is probably rarely. It’s understandable: you and your co-editors are busy, and keeping up with the work at hand can be more than enough.
However, the truth is that every journal’s peer review process can use some polishing from time to time. In the past, we’ve talked about the benefits of performing regular operational audits, on an annual or biannual basis, to ensure that your team is working as efficiently as possible. But before you can do an effective audit, you need to proactively set up an audit plan with key performance indicators to track. Remember to approach all data collection with specific areas of optimization and questions to answer in mind, so you avoid going down data rabbit holes with no clear objective.
There are 3 key audit areas that we recommend focusing on as they relate to peer review:
- Production schedule - how well you’re able to go from submission to decision and get new articles and issues out on time
- Journal reputation - how authors and reviewers feel about working with your journal
- Team efficiency - how well your team is able to work together and move manuscripts through peer review
In this blog post, we break down data points to track in these 3 audit areas, along with best practices for assessing the data.
Track production schedule stats
The first step to determining how your peer review process is going is to look at how well you’re sticking to your journal’s production schedule. Whether you publish articles in regular issues or on a rolling basis, your journal likely has a production timeline you’re trying to adhere to in order to publish new research in a timely manner.
Among key stats that will impact your production schedule are:
- Your journal’s average acceptance rate: In order to know whether you’re accepting too many or too few submissions
- The number of submissions you receive each week or month: In order to know if you’re getting enough submissions within the timeframe needed to meet your production deadlines
- Your time to decision: Track the average number of days it takes your editors to make a manuscript decision in order to know if you’re staying on track to hit production deadlines or falling behind
- The average time for reviewers to complete a review: Keep on top of how long it’s taking your reviewers to complete manuscript assignments in order to know if and when you need to expand your current reviewer pool
These stats are primary areas to get you started. If you use peer review software like Scholastica, you should be able to track all of this automatically. If you don’t use peer review software, you can set up a spreadsheet to track and store this data (a short script like the one sketched below can then do the math for you).
Depending on your journal, there may be other metrics that you find useful to track. When choosing stats to track, try ending each with “in order to know…”, as we did above, to ensure that each data point you follow has a clear purpose.
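If you go the spreadsheet route, even a short script can compute these stats from a CSV export. Here’s a minimal sketch in Python, assuming a hypothetical submissions.csv with submitted_date, decision_date, and decision columns (these names are illustrative, not from any particular platform):

```python
import pandas as pd

# Hypothetical CSV export with columns: submitted_date, decision_date, decision
df = pd.read_csv("submissions.csv", parse_dates=["submitted_date", "decision_date"])

# Average time to decision, in days (only manuscripts with a decision recorded)
decided = df.dropna(subset=["decision_date"])
days_to_decision = (decided["decision_date"] - decided["submitted_date"]).dt.days
print(f"Average time to decision: {days_to_decision.mean():.1f} days")

# Acceptance rate among decided manuscripts
acceptance_rate = (decided["decision"] == "accept").mean()
print(f"Acceptance rate: {acceptance_rate:.0%}")

# Number of submissions received per month
per_month = df["submitted_date"].dt.to_period("M").value_counts().sort_index()
print(per_month)
```

The same pattern extends to reviewer turnaround: add invitation and completion date columns and subtract one from the other.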
As you track these production-related stats, remember to consider the nuances of the numbers you’re looking at, as the stories they seem to be telling at face value may not always be accurate. In our interview with Kathey Alexander, a freelance consultant in professional and scholarly publishing who regularly performs audits of society publishing programs, she shared some accounts of journals misreading data. For example, if you receive different types of submissions (e.g., articles, book reviews), make sure you break out your submission volume data to reflect that. You can do this in Scholastica by adding submission category tags to your articles. Alexander recounted working with one journal that failed to categorize its submission numbers and initially thought that its submission volume was growing, only to find that article submissions were actually in decline. In truth, book review submissions were inflating the aggregate number. Without breaking out the data, the journal would not have known there was a problem.
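To see how easily an aggregate count can hide a trend like that, here’s a sketch that breaks monthly submission volume out by type, assuming the same hypothetical CSV plus a category column:

```python
import pandas as pd

df = pd.read_csv("submissions.csv", parse_dates=["submitted_date"])

# Monthly counts broken out by submission type (assumed "category" column,
# e.g., "article" vs. "book review")
by_category = (
    df.assign(month=df["submitted_date"].dt.to_period("M"))
      .groupby(["month", "category"])
      .size()
      .unstack(fill_value=0)
)
print(by_category)  # a rising total can mask a falling "article" column
```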
Take steps to gauge journal reputation among authors and reviewers
As you start tracking quantitative production-related performance data, it’s also a good idea to gather some qualitative peer review insights. Many journals benefit from sending annual or biannual surveys to the authors and reviewers they’ve worked with to get their opinions on working with the journal throughout peer review.
Among questions you can pose are:
- Was it easy to understand manuscript or review submission instructions?
- Were the steps in the journal’s peer review process clear and logical?
- Was the editorial team responsive and efficient?
- Do you think peer review was completed in a timely manner?
- Was the revise and resubmit process clear and efficient?
You can either pose these questions as open-ended or set them up as Likert scale questions (e.g., “How would you rate this from 1-5, with 5 being best?”) and then include an open-ended comments box at the end. The latter approach will likely result in more feedback, because Likert scale questions tend to be quicker to answer.
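To give a sense of how the Likert ratings can be summarized alongside the comments, here’s a minimal sketch, assuming a hypothetical survey.csv with one column of 1-5 ratings per question plus a free-text comments column:

```python
import pandas as pd

responses = pd.read_csv("survey.csv")

# Average 1-5 rating per question (every column except the free-text comments)
likert_cols = [c for c in responses.columns if c != "comments"]
averages = responses[likert_cols].mean().round(2).sort_values()
print(averages)

# Pull comments from respondents who rated the lowest-scoring question a 1 or 2
worst_question = averages.idxmin()
flagged = responses.loc[responses[worst_question] <= 2, "comments"].dropna()
print(flagged.to_string())
```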
Once you’ve sent out the survey, set a time to meet with your editorial team to go over the responses and pull out highlights from the open-ended comments you receive, such as trends (e.g., the majority of respondents said peer review took too long) and comments that stood out (e.g., one reviewer said, “I think the revise and resubmit process took too long because the editor was not responsive”). Use these insights to flag areas of your process that may need work. But remember: don’t be too quick to make adjustments based on qualitative feedback alone. Jason Roberts, Senior Partner at Origin Editorial, reminds journals, “don’t make a policy change until you have carefully studied current data. I always say anecdote is the enemy of effective office management.” Instead, use feedback to influence how you track journal data (where to focus and what questions to ask) in order to determine if and how your qualitative and quantitative findings agree or diverge.
Track editorial team performance
Finally, in addition to overall journal stats related to article production, you’ll also want to drill down and track key performance indicators at the editor level to know how well your team is working, both individually and as a whole.
Among editor stats to track, along with the objective for each, are:
- Assignment speed: In order to know whether editors are getting manuscripts to peer reviewers quickly enough.
- Editor decision ratios (variance across team): In order to know whether some editors are making more decisions than others and if the allocation of manuscripts among editors is fair.
- Acceptance and rejection rate by editor: In order to know whether your editors are accepting around the same proportion of submissions or if some are accepting or rejecting manuscripts at a greater rate. Variances in acceptance rate could be due to chance, or they could indicate that your editors are not all evaluating submissions in the same way.
- Time to decision: In order to know whether editors are making decisions at about the same pace or if some are taking longer than others.
As with the production schedule stats, your team should be able to track these via peer review software. Journals that use Scholastica will find a suite of performance analytics charts for quickly seeing whether their team is on track. These stats can help journals determine how well work is being allocated among the editorial team and identify any areas where editors need support, such as additional training or better use of automation in peer review.
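If you’re computing these yourself rather than using built-in analytics, a per-editor rollup could look something like the following sketch, assuming a hypothetical assignments.csv with editor, assigned_date, reviewer_invited_date, decision_date, and decision columns:

```python
import pandas as pd

df = pd.read_csv(
    "assignments.csv",
    parse_dates=["assigned_date", "reviewer_invited_date", "decision_date"],
)

df["days_to_invite"] = (df["reviewer_invited_date"] - df["assigned_date"]).dt.days
df["days_to_decision"] = (df["decision_date"] - df["assigned_date"]).dt.days

per_editor = df.groupby("editor").agg(
    manuscripts=("editor", "size"),                     # workload balance across the team
    avg_days_to_invite=("days_to_invite", "mean"),      # assignment speed
    avg_days_to_decision=("days_to_decision", "mean"),  # time to decision
    acceptance_rate=("decision", lambda d: (d == "accept").mean()),
)
print(per_editor.round(2))
```

Large gaps between editors in any of these columns are a prompt for a conversation, not a conclusion on their own.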
Always audit with a purpose
Tracking data can become overwhelming if you find yourself trying to measure too many metrics at once without clear goals in mind. That’s why it’s so important to make a journal audit plan that covers the main quantitative and qualitative areas you’ll measure and how you’ll use each data point. A good rule of thumb is to make sure your data follows these 3 best practices. The data should be:
- Specific
- Easy to interpret
- Reproducible
Once you’ve gathered at least 6 months’ worth of journal data, you can start seriously analyzing the results and iterating on your peer review process based on the findings. Your team can also schedule a time for a full operational audit, during which you can review all of the data together to start to see a more holistic picture of your journal’s performance. You’re on your way to a more polished peer review process!