Trying to optimize journal peer review processes while in the throes of daily editorial work isn’t easy. That’s why it’s imperative for journal teams to set aside time to review key publication performance indicators, reflect on what’s going well and what isn’t, and make improvement plans as needed. And one of the best ways to do that is to run regular operational journal audits. Operational audits are holistic analyses of submission quality and editorial workflows, usually conducted at six-month or yearly intervals. Often teams will run audits at the beginning of each year and/or in the spring, since both times are associated with starting new initiatives and general “housekeeping.”
For most journals, the benefits of performing operational audits are apparent (and many), including potentially finding ways to attract more consistent submissions, improve time to first decision, and more. But knowing how to structure a successful audit isn’t always as obvious. Below are three keys to performing effective operational audits that result in desired outcomes.
1. Take a data-driven approach to assessment
The first step in an operational audit is reviewing your journal’s submission quality and how well your peer review process is working, looking for problem areas such as a drop in research article submissions or reviewer delays. As you prepare to do this, keep one mantra in mind: let metrics be your guide. Editors should dig into available data before making any policy changes.
As Jason Roberts, Senior Partner at Origin Editorial, put it in a past blog interview, “anecdote is the enemy of effective office management.” Roberts gave the example of a journal that was planning to stop sending reviewer reminders because the editors believed they were counterproductive, only to check the data and see spikes in completed review assignments on reminder days.
Primary performance areas all journals should focus on and associated metrics include:
- Submission health: Volume by article type (e.g., “research articles” vs. “book reviews” — just tracking your total submission volume can be misleading), acceptance/rejection rate, and submission influx trends
- Submissions process: Track how often you’re sending submissions back to authors for changes before review, and log instances of author confusion, grouping similar scenarios to identify common themes
- Reviewer performance: Track total invitations vs. unique reviewers (e.g., are you sending 1,000 invitations per year to 800 reviewers or to 100 reviewers? If it’s the latter, you may be exhausting your best people; see the sketch below), invitations per reviewer, pending invitations, response rates, late reviews, and average days to complete an assignment
- Editor performance: Equality of assignments (i.e., is anyone being overburdened?), assignment speed, decision ratios (and variance across the team), acceptance/rejection rate, time to decision, and time to publication
Use the data to assess your editorial team’s productivity, reviewer responsiveness, and whether you’re getting decisions to authors on time, among other key factors, so you can spot potential problem areas and address them.
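To make the reviewer-performance numbers concrete, here’s a minimal sketch of how a team might compute them from a spreadsheet export. The file name and column names (reviewer_email, responded, days_to_complete) are hypothetical placeholders; adjust them to whatever your journal management system actually exports.

```python
import csv
from collections import Counter

# Hypothetical export: one row per reviewer invitation, with columns
# reviewer_email, responded ("yes"/"no"), and days_to_complete (blank
# if the review was never finished). Column names are placeholders.
with open("reviewer_invitations.csv", newline="") as f:
    invitations = list(csv.DictReader(f))

per_reviewer = Counter(row["reviewer_email"] for row in invitations)
responded = [row for row in invitations if row["responded"].lower() == "yes"]
turnaround = [float(row["days_to_complete"])
              for row in invitations if row["days_to_complete"]]

print(f"Total invitations: {len(invitations)}")
print(f"Unique reviewers: {len(per_reviewer)}")
print(f"Invitations per reviewer: {len(invitations) / len(per_reviewer):.1f}")
print(f"Response rate: {len(responded) / len(invitations):.0%}")
if turnaround:
    print(f"Average days to complete: {sum(turnaround) / len(turnaround):.1f}")
print("Most-invited reviewers:", per_reviewer.most_common(5))
```

A quick report like this makes it easy to see whether a small group of reviewers is carrying most of the load, which is exactly the kind of pattern an audit should surface.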
As you log and assess journal data, remember that automation and centralization are your friends. You don’t want to be bouncing between spreadsheets or separate report pages to make manual updates. Ideally, find a journal management system that will generate most of your data for you in one place. Also, be proactive about keeping your data clean. Dirty data will slow you down!
Some steps to prevent dirty data include:
- Checking for duplicate contacts in your database or inactive email addresses (see the sketch after this list)
- Establishing naming conventions for data (e.g., pick either “Revise and Resubmit” or “R&R” and use it consistently)
- Importing new data into your system as needed
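As an example of the first item above, here’s a minimal sketch of a duplicate-contact check, assuming a hypothetical CSV export of your contact database with name and email columns (again, placeholder names; match them to your own export):

```python
import csv
from collections import defaultdict

# Hypothetical export of your contact database; column names are placeholders.
with open("contacts.csv", newline="") as f:
    contacts = list(csv.DictReader(f))

# Group contacts by normalized email so trivially different entries collide.
by_email = defaultdict(list)
for contact in contacts:
    by_email[contact["email"].strip().lower()].append(contact["name"])

for email, names in by_email.items():
    if len(names) > 1:
        print(f"Possible duplicate records for {email}: {names}")
```

Even a simple check like this, run before each audit, keeps duplicate or stale records from skewing your reviewer and author metrics.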
Of course, numerical data alone will only get you so far. If you have more qualitative questions, like “how do authors and reviewers feel about working with my journal?”, take steps to gather those insights, such as by sending annual author and reviewer surveys.
2. Set actionable goals
As you identify areas where your journal needs improvement, don’t stop at general problem statements like “we need more research article submissions.” Set associated goals with clear next steps and a barometer for success. One of the best ways to do this is using the SMART goals framework. SMART stands for Specific, Measurable, Achievable, Relevant, and Time-Bound.
For example, a SMART goal might look like this: “Our goal is to get 25% more research article submissions than last year by [insert specific timeframe]. We will accomplish this goal by [insert specific action steps]. Accomplishing this goal will [insert result or benefit].”
The “specific,” “achievable,” and “relevant” components of the SMART framework will help your team set tangible goals, and making goals “measurable” and “time-bound” will help keep everyone accountable with shared markers for success.
If you’re looking for goal-setting inspiration, check out Scholastica’s Resources page for guides to peer review process optimization, reviewer management best practices, and more.
3. Track progress to goals
Finally, as you wrap up your operational audit, don’t forget to make a plan for tracking progress towards your goals. To do this effectively, you’ll need:
- Metrics for every goal
- Historical data or industry standards to benchmark your progress
- The ability to produce the same core reports consistently, so you’re comparing apples to apples
It’s best to track progress toward your goals incrementally, regularly checking how each of your goal-based initiatives is working rather than waiting until your next journal audit to see how you did. That way, if a particular project isn’t going well, you’ll have time to pivot and try something different.
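For instance, an incremental check against last year’s baseline could be as lightweight as the sketch below. The figures and the 25% target are placeholders tied to the hypothetical SMART goal above, not real data:

```python
# Placeholder figures pulled from the same core report each period;
# none of these numbers come from a real journal.
baseline_submissions = 240   # research articles, same months last year
current_submissions = 205    # research articles, same months this year
target_growth = 0.25         # the 25% SMART goal above

growth = (current_submissions - baseline_submissions) / baseline_submissions
print(f"Research article submission growth so far: {growth:+.0%} "
      f"(target {target_growth:+.0%})")
if growth < target_growth:
    print("Behind target: revisit the action steps before the next audit.")
```

Running the same comparison each month or quarter gives you an early warning when an initiative is off track, rather than a surprise at the next audit.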
Remember, workflow optimization is an ongoing process, and it may take a few tries to get things right. Tracking consistent metrics, setting actionable goals, and regularly assessing your progress will keep you on track.