Image credit: Getty Images Pro

AI is inevitable, and it’s increasingly pervasive in academia. This means more paper mill submissions and undeclared AI use in manuscripts and peer review reports, among other challenges for scholarly journal editors and publishers.

We’re faced with the question: How can we distinguish instances of acceptable AI assistance from nefarious cases?

Ignoring potential AI misconduct is not an option, and AI-detection tools return too many false positives to be reliable sources of truth. In the current landscape, the best defense for scholarly publishers against unethical AI practices entering their workflows and publications is implementing clear AI usage policies and guidelines. The imperative to get it right is especially high for small and mid-sized publishers (SMPs), given their teams’ already stretched bandwidth.

For SMPs developing AI policies, it helps to reframe the challenge from reactive and punitive to proactive and supportive:

Look at it this way: authors, reviewers, and editors often use AI to alleviate the pressures of unpaid workloads, language barriers, and the “publish or perish” culture. A few act with malice, but most turn to AI for greater efficiency, deeper insight, and relief from hefty workloads, suggesting that empathy is a more effective starting point than condemnation.

We argue that, for SMPs, AI policies should be centered on author education paired with clear, consistent AI usage guidelines, such as checklists and educational blogs. This two-pronged approach will yield smarter and more ethical authors, less-stressed editors, and more quality publications with lower chances of retraction and other reputational damages.

AI detection tools give too many false positives for reliable use

First, let’s talk about the challenges with AI-detection tools.

The reality, at least for now, is that AI detectors are struggling to keep pace with LLM advances. LLMs predict and apply patterns, and detecting them is like nailing jelly to the wall.

In 2023, OpenAI dropped its own AI detector due to poor performance. Since then, various universities, including MIT and UCLA, have declined or discontinued use of AI-detection software, citing concerns about the potential for false positives.

A 2023 Stanford-based study also found that over 61% of essays by non-native English speakers were misclassified as AI-generated, raising serious concerns about model inequities. For publishers serving global research communities, heavy reliance on these types of tools could create real risks to authors whose first language differs from a journal’s submission language.

AI detectors can be useful to have in a publisher’s toolbelt, as they have basic merits (e.g., finding extensive passages of potentially plagiarized text), but they can’t be your sole strategy. Again, rather than thinking reactively, now is the time to be proactive, and author education is your best defense against (often unintentional) AI misuse.

Clear policies provide structure, if not behavioral change

At the start of this year, STM released a report on how publishers are developing vital research integrity infrastructure, titled “Safeguarding Scholarly Communication.” The report identifies three pillars of publisher practice:

  1. Capacity: dedicated teams and screening technology
  2. Practice: standards, screening protocols, and training
  3. Coordination: collective response mechanisms and shared detection tools

Notice what the STM framework includes under Practice: “training.” The report recognizes that research integrity technology and ethical misconduct response protocols alone aren’t enough.

Here’s what’s needed:

  • Transparent disclosure policies: Require authors to declare their AI use in manuscript preparation and publish this policy clearly in author guidelines. Major publishers now require authors to disclose the use of generative AI beyond basic grammar assistance and to assert that they take full accountability for any AI-assisted work (AI cannot be treated as an author).
  • Human-in-the-loop decision-making: Establish that all AI-generated insights (e.g., screening flags, reviewer suggestions, similarity scores) function as decision support. The final call is with the editor. A “20% AI-writing likelihood” score should prompt human judgment about context, author background, and manuscript substance, not auto-rejection.
  • Staff training on tool interpretation: Train editors to interpret AI tool outputs rather than simply receiving alerts. What does a flagged similarity score actually mean? How should they weigh detection results against other quality indicators? A meta-analysis published in Educational Psychology Review found that experiential learning (where learners engage emotionally in problem-solving with practical application) outperforms purely intellectual deliberation about ethical problems.
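The human-in-the-loop principle above can be sketched as simple triage logic. This is an illustrative sketch only; the field names, thresholds, and recommended actions are hypothetical, not taken from any specific screening tool. The key property is that every branch routes to a human decision rather than an automatic rejection:

```python
from dataclasses import dataclass

@dataclass
class SubmissionSignals:
    """Illustrative decision-support inputs for an editor (hypothetical fields)."""
    ai_likelihood: float      # e.g., 0.20 for a "20% AI-writing likelihood" score
    similarity_score: float   # similarity-checker overlap, 0.0-1.0
    ai_use_disclosed: bool    # did the author declare AI assistance?

def triage(signals: SubmissionSignals) -> str:
    """Return a recommended next step; never an automatic rejection.

    The flags function as decision support -- the final call stays
    with the editor, who weighs context, author background, and
    manuscript substance.
    """
    if signals.similarity_score > 0.40:
        return "editor-review: check flagged passages against cited sources"
    if signals.ai_likelihood > 0.50 and not signals.ai_use_disclosed:
        return "editor-review: ask author to clarify AI use before proceeding"
    return "proceed: no integrity flags requiring follow-up"

# A 20% likelihood alone triggers no action -- context and judgment decide.
print(triage(SubmissionSignals(ai_likelihood=0.20,
                               similarity_score=0.05,
                               ai_use_disclosed=True)))
```

Note that even the “editor-review” outcomes are prompts for human judgment, not verdicts; a policy encoded this way makes the decision-support role explicit and auditable.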

Among the resources publishers and editorial teams can turn to for support in developing research integrity policies and procedures are the STM Integrity Hub (49 organizational members screening over 125,000 papers monthly), COPE (106 publisher members representing more than 14,500 journals), and United2Act (58 organizations coordinating responses to paper mills).

Of course, simply setting the rules won’t change author behavior upstream. That requires education.

The education imperative changes pre-submission behavior

“Education is learning what you didn’t even know you didn’t know.” – Daniel J. Boorstin

By and large, academics inherently value ethics, but they’re human — and these days, they’re increasingly overwhelmed. AI gives academics the option of doing less, and some can’t resist. They slip into bad practices, sometimes without even realizing it.

What if we prioritize AI education over policing and punishment?

Research strongly supports investing in clear author guidelines as a prevention strategy. A Cochrane systematic review of 16,604 randomized controlled trials found that journal endorsement of CONSORT reporting guidelines was associated with 81% better reporting of allocation concealment compared with non-endorsing journals. This improvement comes from clearly communicating expectations, not from detection tools.

A before-and-after study in Research Integrity and Peer Review found that implementing a simple decision-tree tool linking to EQUATOR reporting during submission improved correct guideline identification by 8.4% across 590 manuscripts.
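A decision tree of this kind can be a very small piece of software. The toy sketch below maps a self-reported study design to a standard EQUATOR reporting guideline; it is a simplified illustration of the approach, not the actual tool evaluated in the study:

```python
# Toy mapping from study design to a standard EQUATOR reporting guideline.
# The mappings themselves (CONSORT for RCTs, STROBE for observational
# studies, etc.) are real; the tool around them is a hypothetical sketch.
GUIDELINE_BY_DESIGN = {
    "randomized controlled trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review": "PRISMA",
    "case report": "CARE",
    "diagnostic accuracy study": "STARD",
}

def suggest_guideline(study_design: str) -> str:
    """Point the author at the relevant reporting checklist at submission time."""
    key = study_design.strip().lower()
    return GUIDELINE_BY_DESIGN.get(key, "see the EQUATOR Network library for your design")

print(suggest_guideline("Randomized Controlled Trial"))  # CONSORT
```

Embedding even a lookup this simple in the submission portal puts the right checklist in front of authors at the moment it matters, which is the mechanism behind the improvement the study reports.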

Training programs for researchers also show consistent returns when implemented well and at low or no cost. We’ve seen this in how Sci-Train events for publishers attract thousands of attendees and convert not just to submissions but to good ones, thereby reducing processing times for publishers.

Integrating education into the publishing cycle

Another example: INASP’s AuthorAID program has supported more than 15,000 researchers across the Global South since 2007. This program helps level the playing field for researchers in developing regions through writing support, mentorship, and training. This approach extends naturally to AI ethics.

Publishers can step into the training space, and it need not be DIY. Bentham Science’s webinar series, powered by Sci-Train, offers sessions on topics from manuscript preparation to publication ethics, providing direct support to prospective authors. When publishers train authors directly, authors learn expectations before submission, and editorial teams spend less time on fixable problems. Solutions are available that combine content with training (e.g., MacroLingo Academia’s Integrated Journal Experience).

We wrote about this approach in EditorsCafe. Education covers all necessary topics while fostering brand loyalty among authors. Better content attracts the right authors, and better-trained authors submit manuscripts that reduce preventable desk rejections.

Coordinate training with revised author guidelines (in plain English), active blogs, social media, and online communities, and you’ve built a virtuous cycle of investment and reward.

Sample plan for a university press

Here’s how a university press might implement an education+content model over 3/6/12 months to address AI integrity challenges. This example plan focuses on ensuring ethical AI use, transparent disclosures, and quality submissions.

Phase 1 (months 1-3): Foundation

Objectives:

  1. Establish a clear, accessible AI policy baseline
  2. Reduce undisclosed AI use through education
  3. Begin appearing in AI search for integrity-related queries

Education for authors (theme = transparent, ethical AI use):

  • Webinar: “AI Tools and Research Integrity” (with disclosure guidance)
  • Short guide: “What AI use must be declared and how”

Publisher content:

  • Rewrite AI disclosure policy in plain English (publish prominently)
  • Blog: “Ethical AI Use in Manuscript Preparation”
  • Blog: “How to Avoid Citation Hallucinations”

Phase 2 (months 4–6): Expansion

Objectives:

  • Reduce integrity-related desk rejections by 15%
  • Build an email list of integrity-aware prospective authors
  • Increase proper AI disclosure rates in submissions

Education for authors (theme = preventing unintentional misconduct):

  • Webinar: “Responsible AI for Non-Native English Writers”
  • Webinar: “Avoiding Integrity Pitfalls in Data Presentation”

Publisher content:

  • Update submission portal with integrity reminders
  • Blog: “Reference Integrity: Verifying AI-Assisted Literature Reviews”
  • Blog: “Data Fabrication Red Flags for Authors”

Phase 3 (months 7–12): Cycle & scale

Objectives:

  • Establish a self-sustaining integrity education cycle
  • Develop a reviewer pool trained in AI detection nuance
  • Position press as regional leader in AI governance

Education for authors (theme = community-wide integrity culture):

  • Webinar: “Spotting AI-Generated Content” (for reviewers)
  • Short course: “AI Ethics for Early-Career Researchers”

Publisher content:

  • Blog series: “AI integrity from the reviewer perspective”
  • Case studies: authors who navigated AI disclosure successfully
  • Refresh content based on emerging AI integrity issues

Small publishers can lead in education and move faster

For SMPs, agility is a genuine competitive strength, enabling rapid testing and iteration, as exemplified by Scholastica’s 2024 ALPSP Conference session “The Small But Mighty Journal Publisher.” Further, a 2024 Jisc analysis found that smaller presses can find and use new solutions faster because simpler governance structures enable quicker decisions.

The Society Publishers Coalition notes that “learned societies and community publishers have been at the forefront of innovation in scholarly communication.” For example, the Royal Society’s journals were among the first to introduce ORCID iD requirements.

Large publishers can mandate policies, while SMPs can actually teach and build relationships with researchers. Direct relationships with researchers make ongoing integrity training practical in ways that larger corporate competitors cannot replicate.

The 58% of SMPs that feel “somewhat or very positive” about AI’s potential (per the recent Scholastica/Maverick survey) should channel that energy into specific actions. Educate authors to prevent misconduct, rather than solely focusing on detecting their mistakes reactively. Position yourself as a leader in efficient, ethical, and high-quality scholarly communication.

The will is there – publishers and journals can turn it into action, momentum, and great contributions to human knowledge.

About the authors:

Gareth Dyke, PhD


Gareth Dyke, PhD is a globally recognized evolutionary biologist and paleontologist who has spent more than two decades working at the cutting edge of science. He now works in publishing and is the co-founder of Sci-Train, which helps researchers globally understand the writing and publishing process.


Adam Goulston, PsyD, MBA, ELS


Adam Goulston, PsyD, MBA, ELS, is the US-born, Japan-based owner of MacroLingo. Adam commissions and crafts high-performing articles using a B2B + B2C approach for science. A BELS-certified editor, he’s edited thousands of scientific manuscripts, managed press relations for science and business, and directed localization and writing projects for companies, universities, and NGOs.
