The journal adheres to the principles and recommendations of international organizations, including UNESCO (2021, 2023), the European Commission (2019, 2024), COPE (2023), and WAME (2023), regarding the ethical, transparent, and responsible use of artificial intelligence (AI) tools in the production, review, and publication of scientific works. The use of these tools is permitted under regulated conditions, provided that it is grounded in principles of integrity, transparency, traceability, data protection, and human oversight, and that institutional and academic responsibility rests at all times with the individuals involved in the editorial process.

1. Authorship

AI tools cannot be listed as authors of articles, images, figures, tables, or any other component of a manuscript. As systems without legal personality, they cannot assume responsibility for the work produced, declare conflicts of interest, sign rights assignment agreements, or guarantee the integrity of the data. This criterion is shared by COPE (2023), WAME (2023), and the Heredia Declaration (Penabad-Camacho et al., 2024).

Authors are fully responsible for the content of their manuscripts, including sections produced wholly or partially with AI assistance. This responsibility encompasses factual accuracy, originality, proper citation of sources, data integrity, and the possible presence of biases, errors, or content incorrectly generated by the system (“hallucinations”).

2. Permitted Uses and Mandatory Disclosure

The use of generative AI tools is permitted for auxiliary writing tasks, such as grammar and spelling correction, style improvement, translation, reference formatting, and editorial adjustments. In all cases, such use must be explicitly declared.

When AI tools have been used at any stage of the research process (including literature search and systematization, data analysis, text generation, or figure creation), authors must report this in the article’s methodology section, specifying which tools were used and for what purpose.

In addition, immediately before the references section, a subsection titled “Use of Artificial Intelligence” must be included, detailing which systems were used and for what tasks, or expressly stating that no such tools were used. This statement is mandatory for all submissions.

The use of AI to create, alter, manipulate, or fabricate data, results, images, measurements, or other elements of the original research is expressly prohibited.

3. Peer Review

Reviewers may not use AI tools to review manuscripts they receive as part of the peer review process. The use of these tools in this context would involve the transfer of confidential information to third-party systems, with the consequent risk of breaching the confidentiality, privacy, and data protection obligations that govern the editorial process (COPE, 2023; European Commission, 2024; Oxford University, 2025).

4. Editorial Team

Likewise, the editorial team may not use AI tools to perform checks, analyses, or evaluations of received manuscripts, for the same reasons of confidentiality and protection of authors’ and reviewers’ data.

The use of AI for administrative or internal management tasks that do not involve the content of manuscripts must have the express authorization of the editorial board and be carried out with full awareness of the associated risks.

5. Reference Framework

This policy is based on the following international documents and recommendations:

Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education. Information, 16(4), 296. https://doi.org/10.3390/info16040296

European Commission, Directorate-General for Research and Innovation. (2024, March 20). Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum. https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en

European Commission High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Organisation for Economic Co-operation and Development. (2019). OECD AI Principles. https://oecd.ai/en/ai-principles

Oxford University. (2025). Policy for using generative AI in research: Guidelines for researchers and professional staff. https://www.ox.ac.uk/research/support-researchers/research-practice/policy-generative-ai-research

Penabad-Camacho, L., Penabad-Camacho, M. A., Mora-Campos, A., Cerdas-Vega, G., Morales-López, Y., Ulate-Segura, M., Méndez-Solano, A., Nova-Bustos, N., Vega-Solano, M. F., & Castro-Solano, M. M. (2024). Heredia Declaration: Principles on the use of Artificial Intelligence in Scientific Publishing. Revista Electrónica Educare, 28(S), 1–10. https://doi.org/10.15359/ree.28-S.19967

Committee on Publication Ethics (COPE). (2023). Authorship and AI tools. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa

UNESCO. (2023). Guidance for generative AI in education and research. https://unesdoc.unesco.org/ark:/48223/pf0000386693

UNESCO IESALC. (2023). ChatGPT and Artificial Intelligence in Higher Education: Quick Start Guide. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000385146_spa

WAME. (2023). Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors. https://wame.org/page3.php?id=106