Most of the top 100 medical journals provide guidance on the use of artificial intelligence (AI) during the peer review process, with many explicitly prohibiting its use, a study showed.
Of the 78 top journals that provide this guidance, 59% prohibit its use in peer review, while the rest allow its use if confidentiality is maintained and authorship rights are respected, reported Jian-Ping Liu, PhD, of Beijing University of Chinese Medicine, and co-authors.
In addition, 91% of the journals prohibit uploading manuscript-related content to AI tools, and 32% allow restricted use of AI provided that reviewers disclose it in their review reports, they noted in their research letter in JAMA Network Open.
In their introduction, Liu and colleagues pointed out that "the rapid growth of medical research publishing and preprint servers appears to be straining the peer review process, potentially causing a shortage of qualified reviewers and slower reviews."
"Innovative solutions are urgently needed," they added. "Recent advancements in artificial intelligence, particularly generative AI (GenAI), offer potential for enhancing peer review, but its integration into this workflow varies by journal policy."
Co-author Zhi-Qiang Li, MPH, PhD, also of Beijing University of Chinese Medicine, told MedPage Today that "it was striking to discover that, despite AI's potential to augment the efficiency of peer review, a substantial 91% of journals have prohibited the submission of manuscript-related content to AI. This underscores a heightened awareness for safeguarding the confidentiality and integrity of manuscripts."
He noted that there was considerable divergence among different journals' AI policies, with many identifying a few primary reasons for choosing to limit the use of AI, including the desire to protect manuscript confidentiality; concerns about the introduction of incorrect, incomplete, or biased information by AI; and the potential for violating data privacy rights.
"This study indicates that the impact of AI on the scientific publishing process and medical research is a double-edged sword," Li said. "On one hand, AI has the potential to enhance the efficiency of peer review, but on the other hand, it raises concerns about biases and confidentiality breaches."
"The varying stances of journals towards AI use may significantly influence the decisions of researchers when drafting and submitting their papers," he added.
For this study, the authors used data from Scimago.org for the top 100 medical journals to determine the existence and nature of their AI guidance during peer review. They searched the journals' websites for AI-related policies on June 30 and August 10. If a journal did not have its own AI guidance but linked to its publisher's guidance, the authors used that guidance for the analysis.
Of the 78 journals, 41% linked to publisher websites that stated preferences for AI use. Wiley and Springer Nature favored limited use of AI, while Elsevier and Cell Press prohibited any AI use during peer review.
Notably, 22% of journals also provided links to statements from the International Committee of Medical Journal Editors or the World Association of Medical Editors, which allowed for limited use of AI. However, the authors noted that five of those journals had specific guidance that contradicted the statements of those organizations.
Liu and colleagues acknowledged that they considered only the policies of the top 100 medical journals, which could have missed trends or attitudes reflected in lower-ranked journals' policies. They also noted that relying on shared publisher guidance as a proxy for journal-specific policies could have overestimated the number of journals with specific guidance on AI.
Disclosures
The study was funded by grants from the National Administration of Traditional Chinese Medicine.
The authors reported no conflicts of interest.
Primary Source
JAMA Network Open
Li ZQ, et al "Use of artificial intelligence in peer review among top 100 medical journals" JAMA Netw Open 2024; DOI: 10.1001/jamanetworkopen.2024.48609.