A person tries to hold back a flood of LLM-generated content rushing around him.

Published December 25, 2025 | 6-minute read

The adoption of generative AI tools has pushed major medical and public health journals to rapidly update their editorial policies. The dominant direction is mandatory transparency, but it has drifted into performative detail. For example, JAMA begins with sensible basics like naming the tool and version, then escalates into a compliance obstacle course: prompt sequences and revisions, permissions and licensing language, and extra layers of bias and reproducibility reporting. It may sound rigorous, but it can become a real barrier for authors without time, staff support, or institutional resources.

While well-intentioned, heavy emphasis on disclosure risks becoming an administrative exercise that does little to protect integrity. It distracts from what actually matters: whether human authors stand behind their methods, analyses, and conclusions.

The core concept: disclosure vs. accountability

Disclosure is about process transparency (“I used tool X”). Accountability is about outcome responsibility (“I am responsible for this work”).

In research, accountability has always been the cornerstone. Major journals and international bodies such as the International Committee of Medical Journal Editors (ICMJE) and the Committee on Publication Ethics (COPE) agree on a fundamental principle: AI cannot be an author, because AI cannot be held accountable for published work. Human authors must meet the criteria for authorship and take full responsibility for the accuracy and integrity of the content.

AI does not change that fundamental obligation. It simply changes the tools used along the way.

“Mandatory AI disclosure policies are fast becoming an administrative burden with limited value. What matters is author accountability, transparency of methods, and responsibility for results.”

Why AI disclosure is a weak safeguard

While many journals currently enforce disclosure requirements in the Methods or Acknowledgments sections, relying solely on this approach is insufficient:

  • It is difficult to standardize: Does using a grammar checker require declaration? What about code autocompletion? Some journals permit AI for language editing without full disclosure; others require it even for minor assistance. The line is rarely clear.
  • It shifts focus away from quality: A disclosed-but-flawed analysis remains flawed. Disclosure is not a proxy for validity.
  • It creates false signals of trust: Readers may over-trust disclosed work even though disclosure provides no real evidence of quality.

What actually protects research integrity

If the goal is trustworthy science, stronger levers already exist. All major journal policies emphasize that human authors are ultimately responsible for verifying the accuracy, integrity, and originality of any AI-assisted content.

Instead of policing tool usage, the focus must remain on existing pillars of scientific integrity:

  • Author responsibility statements: Every author must explicitly affirm responsibility for the work’s integrity regardless of tools used.
  • Methods transparency: Clear descriptions of analytic decisions and validation steps matter more than just naming an AI tool version.
  • Misconduct enforcement: Fabricating data or plagiarizing with AI is unacceptable and leads to rejection or retraction, just as it would without AI.

Key Takeaways

  • AI cannot be an author: Journals universally agree that only humans can take responsibility for published work.
  • Accountability over disclosure: While current policies demand transparency about tool usage, human accountability for accuracy remains the foundation of integrity.
  • Disclosure is insufficient: Mandatory reporting of versions and prompts often adds bureaucracy rather than rigor.
  • Equity concerns: Over-disclosure risks stigmatizing researchers who rely on AI for language assistance.
  • Public health needs better standards: rigorous science, not just longer footnotes about software versions.

Final Comment

True scientific integrity isn’t measured by the length of an AI disclosure statement but by the weight of human accountability behind the findings. Let’s retire the AI disclosure obstacle course and start practicing responsibility.

Disclaimer: This blog post was created with the assistance of generative AI tools to summarize information and draft content. The final text has been reviewed and edited by human authors, who take full responsibility for its accuracy and integrity.