AI policy

Policy on the Use of Generative AI Tools

Our journal supports transparent use of generative artificial intelligence (AI) tools in academic research and publishing.

The journal aligns with COPE’s position on the use of AI in research and follows Elsevier’s policy on the use of generative AI.

While acknowledging the potential of AI tools in research, the journal emphasizes that AI cannot be listed as an author or co-author. Authors bear full responsibility for the content of their submissions, including any material generated with the help of AI. Proper disclosure of AI use is mandatory, and using AI-generated text without appropriate attribution is considered plagiarism and a violation of this policy.

As with all content submitted to the journal, the author(s) must ensure that they have permission to use any third-party content included in the submitted work, including content obtained with the help of AI.


Requirements for authors

  1. If AI tools (e.g., ChatGPT, Claude, Gemini) were used during the research process or in preparing the manuscript, the authors must provide a declaration.
  2. The declaration should specify:
  • the name(s) of the corresponding author(s);
  • the specific tasks delegated to generative AI;
  • the version of the generative AI tool used.


Declaration sample

The author(s) (surname, first name) declare the use of generative AI during the research and writing process. The following tasks were delegated to generative artificial intelligence (GAI) tools under full human supervision: literature search and systematization; data analysis; translation; ethical risk analysis. GAI tool used: ChatGPT-5.


The authors bear full responsibility for the final version of the manuscript.