Online Publication Date: 25 Nov 2025

Responsible Use of Artificial Intelligence in the National Athletic Trainers’ Association Journals Manuscript Review Process

L. Colby Mangum, PhD, ATC and
Christopher Kuenze, PhD, ATC, FNATA
Article Category: Editorial
Page Range: 754 – 755
DOI: 10.4085/1062-6050-1006.25

As artificial intelligence (AI) tools continue to evolve and permeate academic and clinical environments, their use in scientific publishing is drawing increasing attention. From grammar checks to literature summaries, tools such as ChatGPT, Copilot, Gemini, and Claude have introduced potential efficiencies into writing and analysis. However, their application in the peer-review process, a cornerstone of scientific integrity, raises important ethical questions for all members of the scientific community. In this editorial, we outline key considerations for the Journal of Athletic Training (JAT) and Journal of Athletic Training-Education and Practice (JAT-EP) community as we collectively navigate the responsible use of AI in reviewing scientific manuscripts.

GENERATIVE AI, LARGE LANGUAGE MODELS, AND THEIR RELEVANCE TO PEER REVIEW

Generative AI refers to systems that can create content, such as text or images, based on input data. Large language models (LLMs), a subset of generative AI, are trained on vast data sets to understand, predict, and generate human-like language.1 Although some LLMs excel at tasks including language translation, summarization, and content generation, they are fundamentally unsuited for assessing the scientific quality or rigor of original manuscript submissions.2 These LLMs cannot critically evaluate research methods, assess the validity of or potential biases in findings, or determine a study's significance in the relevant literature.2 Peer review demands the nuanced scientific judgment of experts about the rigor, originality, and lasting influence of a submission. These required skills exceed the current capabilities of commercial generative AI products and, more specifically, LLMs. Relying on LLMs for manuscript evaluation risks undermining review integrity and may fail to maintain the rigorous standards essential to scientific publishing.2

CONFIDENTIALITY AND DATA PRIVACY

Reviewers should assume that any text or documents entered into an AI system are stored and used to further train the model in question. Because the author transfers copyright of the material in the manuscript, including figures, tables, and supplemental files, to the NATA on submission, reviewers should not upload any part of the manuscript to an AI platform (if the manuscript is rejected, the author is then free to submit it elsewhere).3 This prohibition applies even to AI models or platforms that guarantee data privacy or offer opt-outs for data sharing. Reviewers are expected to uphold the confidentiality of the peer-review process and refrain from exposing unpublished content to third-party tools unless explicitly permitted by the journal.

ACCOUNTABILITY AND TRANSPARENCY

The lack of accountability for the output of AI models could strain the peer-review process.3 Unlike a peer reviewer, an AI model cannot be held accountable for the feedback it provides on a document. Trust and transparency are key factors in peer review and scholarly publishing; a reviewer who violates these principles by using an AI product can severely harm the entire publication process. A reviewer who uses an LLM or other AI product as a tool (eg, for verifying references or assessing adherence to author guidelines) must disclose this fact when submitting the evaluation.

BIAS AND JUDGMENT

Due to the complex nature of peer review, using LLMs or other AI products to review manuscripts without regard for the reviewer's expertise is problematic.2 Reviewers are selected by associate editors based on their subject matter expertise, whereas AI models can only draw on the information available to them, which is often largely unregulated. The mission of the JAT is to advance the science and clinical practice of athletic training and sports medicine, and the mission of the JAT-EP is to disseminate scholarly works to advance knowledge about the vitality of the profession, education, and health care competency. The expectation of the editors, authors, reviewers, and readers should be that manuscripts published in the NATA journals reflect these advancements. Because LLMs are designed to address prompts based on what they already know, they can struggle when evaluating new information. The subject matter covered in JAT and JAT-EP also presents a concern for ensuring that clinical knowledge and expertise are represented in peer review. New, innovative information that appears in manuscripts cannot be fully evaluated by AI models because of their reliance on existing data.

NATA JOURNALS’ RECOMMENDATIONS FOR ETHICAL USE OF AI

The JAT leadership has recently updated the Authors’ Guide (https://nata.kglmeridian.com/fileasset/nata/2020-JAT-authors-guidelines-ai.pdf) to include the following language (with permission) from the International Committee of Medical Journal Editors’ “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals” regarding the use of AI in the manuscript writing and submission process4:

Artificial Intelligence (AI)–Assisted Technology. At submission, the Journal of Athletic Training requires authors to disclose whether they used artificial intelligence (AI)-assisted technologies (such as Large Language Models [LLMs], chatbots, or image creators) in the production of submitted work. Authors who use such technology should describe, in both the cover letter and the submitted work, how they used it. Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship (see Section II.A.1). Therefore, humans are responsible for any submitted material that includes the use of AI-assisted technologies. Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI. Humans must ensure there is appropriate attribution of all quoted material, including full citations.

NATA JOURNALS’ GUIDELINES FOR REVIEWERS

Although these policies do not directly discuss peer review, it is important to consider that the peer-review process involves a reciprocal relationship between author and reviewer with the goal of improving the quality and rigor of each submission. As such, the core messages of the authors’ guidelines can be adapted to encompass the expectations of a responsible reviewer. More specifically,

  • Reviewers should not use AI detection when reviewing submissions to JAT or JAT-EP. However, given the frequency with which LLMs and other AI tools have produced text with incorrect or fabricated references, reviewers should scrutinize the references included in submissions to ensure that they both exist and are appropriate.

  • A reviewer who suspects that AI was used in the creation of the submitted manuscript should continue the review and make the handling editor aware of these concerns when submitting the assessment.

  • Neither full manuscripts nor portions of manuscripts should be uploaded to AI platforms, regardless of whether the product is accessed via a personal or institutional license.

  • When submitting assessments, peer reviewers should disclose if and how AI was used in the production or editing of the submitted review. This may include uses such as copyediting, softening of language, or checking the review for internal consistency.

  • Reviewers should be able to assert that there is no plagiarism in their review. They must ensure appropriate attribution of all quoted material, including full citations in their evaluations.

CONCLUSION

The NATA journals embrace innovation, recognizing our responsibility to establish ethical guardrails for emerging technologies. As generative AI tools become increasingly accessible, clear ethical boundaries are essential to preserve the integrity of peer review, which is a cornerstone of scientific publishing. Reviewers must remain fully accountable for their assessments and cannot delegate critical scientific judgment to AI systems. Through our collective commitment to transparency, expertise-driven evaluation, and scientific rigor, the NATA journals will continue upholding the highest standards of scholarly publishing while thoughtfully adapting our guidance as the technology evolves.

Copyright: © by the National Athletic Trainers’ Association, Inc 2025

Contributor Notes

Address correspondence to L. Colby Mangum, PhD, ATC, University of Central Florida, 4364 Scorpius Street, HS II, Room 235, Orlando, FL 32816-2205. Address email to lauren.mangum@ucf.edu. Address correspondence to Christopher Kuenze, PhD, ATC, FNATA, University of Virginia, 550 Brandon Avenue, Charlottesville, VA 22903. Address email to cmk7sq@virginia.edu.