Artificial Intelligence (AI) Policy

Global Medical Reviews (GMR) recognizes the increasing role of artificial intelligence (AI) tools in scientific research and publishing. To safeguard academic integrity, ensure transparency, and maintain accountability, the journal adopts the following policy:

1. AI and Authorship

1.1 Authorship accountability
  • AI tools (e.g., ChatGPT, Bard, Claude, or other large language models) cannot be listed as authors.
  • Authorship requires accountability, which cannot be assigned to AI systems.
1.2 Disclosure of AI use
  • If AI tools were used for drafting, editing, language refinement, or data processing, this must be disclosed in the Methods section or another appropriate part of the manuscript.
1.3 Minor editing
  • Minor AI-assisted copyediting (grammar, spelling, readability improvements) does not require formal disclosure.
1.4 Responsibility
  • Authors remain fully responsible for the originality, accuracy, and ethical standards of the final manuscript.

2. AI-Generated Images and Figures

2.1 Restrictions
  • AI-generated images, graphics, or videos are not permitted as primary scientific data or figures due to unresolved issues of copyright and research integrity.
2.2 Exceptions
Exceptions may be considered if all conditions below are met:
  • The AI system was trained on transparent and verifiable scientific datasets.
  • The output complies with copyright and ethical standards.
  • The figure is explicitly labeled as AI-generated.
2.3 Non-generative AI tools
  • The use of non-generative AI or machine learning tools (e.g., for image enhancement, statistical modeling, data visualization) must be described in the manuscript.
  • Such use must also be acknowledged in the figure/table captions.

3. AI Use in Peer Review

3.1 Confidentiality
  • Reviewers must not upload manuscripts or review reports to AI tools, as doing so compromises confidentiality and data security.
3.2 Declaration of AI use
  • If AI tools are used to support the reviewer’s evaluation (e.g., language polishing of the review text), this must be transparently declared in the review report.
3.3 Accountability
  • Reviewers remain fully responsible for the accuracy, validity, and fairness of their assessments.

4. Editorial Use of AI

4.1 Permitted uses
  • GMR editors may employ internal or publisher-approved AI tools for accessory content such as:
      ◦ plain language summaries,
      ◦ highlights or key points,
      ◦ glossary terms,
      ◦ posts for academic social media channels.
4.2 Verification
  • All AI-assisted editorial content will undergo review and final approval by human editors before publication.
4.3 Transparency
  • Any substantive use of AI in the editorial process will be clearly declared on a case-by-case basis.

5. Policy Review and Updates

5.1 Regular review
  • This policy will be reviewed and updated periodically in accordance with international publishing standards (COPE, WAME, ICMJE) and evolving best practices.
5.2 Transparency of updates
  • Any revisions to this policy will be published on the journal’s official website.