GENERATIVE AI POLICIES FOR ETNOREFLIKA
These policies are established in response to the rise of generative AI and AI-assisted technologies, which content creators are expected to adopt increasingly. Their purpose is to provide clear guidance and ensure transparency among authors, reviewers, editors, readers, and contributors. The publisher of ETNOREFLIKA will continue to monitor developments in this area and will revise or refine these policies as necessary.
For Authors
Use of Generative AI and AI-Assisted Technologies in Scientific Writing
Please note that this policy applies exclusively to the writing process; it does not cover the use of AI tools to analyze data or draw insights as part of the research process.
If authors choose to use generative AI or AI-assisted technologies in the writing process, such tools should only be employed to enhance language clarity and readability. Their use must be supervised and controlled by humans, and authors must carefully review and revise the output, since AI-generated content, although it may appear authoritative, can be inaccurate, biased, or incomplete. Ultimately, the authors bear full responsibility for the content of their manuscripts.
Authors are required to explicitly disclose the use of generative AI or AI-assisted tools within the manuscript. A corresponding statement will be included in the published article. This practice promotes transparency and fosters trust among stakeholders while ensuring compliance with the terms of use associated with the AI tool.
AI or AI-assisted technologies must not be listed as authors or co-authors, nor should they be cited as such. Authorship carries responsibilities that only human contributors can fulfill, including ensuring the accuracy and integrity of the work, approving the final version, and agreeing to its submission. Furthermore, authors are accountable for confirming the originality of the work, the legitimacy of authorship claims, and that the manuscript does not infringe on third-party rights. All submissions must adhere to ETNOREFLIKA’s publication ethics.
Use of Generative AI in Figures, Images, and Visual Artwork
The use of generative AI or AI-assisted tools to create or manipulate images in submitted manuscripts is strictly prohibited. This includes modifying images by enhancing, removing, obscuring, shifting, or introducing specific elements. Basic adjustments to brightness, contrast, or color balance are acceptable, provided they do not conceal or distort original content. Image analysis tools or forensic software may be used during the editorial process to detect potential irregularities.
The sole exception applies when the use of AI or AI-assisted tools is part of the research design or methodology. In such cases, authors must provide a reproducible explanation in the methods section detailing the role of the AI tools, including the tool’s name, version, extension, and manufacturer. Authors must follow the tool’s terms of use and provide proper content attribution. When requested, authors should also be prepared to submit pre-AI versions of the images and/or the raw image composites used to produce the final version for editorial review.
The use of generative AI in visual content such as graphical abstracts is not permitted. In some instances, AI-generated cover art may be allowed if the author obtains prior approval from both the journal editor and the publisher, secures all necessary rights, and ensures proper attribution of content.
For Reviewers
Use of Generative AI in the Peer Review Process
Manuscripts submitted for peer review must be treated as strictly confidential documents. Reviewers must not upload any portion of the manuscript to a generative AI tool, as this may violate the confidentiality, intellectual property, and, where applicable, the privacy rights of the authors.
This confidentiality requirement also extends to review reports, which may contain sensitive information about the authors or the manuscript. Therefore, reviewers are prohibited from using AI tools to enhance or revise the language of their reports.
The peer review process is central to scientific publishing and must adhere to the highest standards of integrity. The analytical and critical thinking involved in reviewing a scientific manuscript requires human judgment, which cannot be delegated to AI tools. Using AI for peer review may lead to misleading, incomplete, or biased conclusions. As such, reviewers are fully responsible for the accuracy and integrity of their assessments.
Reviewers should note that ETNOREFLIKA’s AI authorship policy permits authors to use generative AI for language editing before submission, provided its use is appropriately disclosed.
For Editors
Use of Generative AI in the Editorial Process
All submitted manuscripts must be treated as confidential. Editors are not permitted to upload manuscripts or parts thereof to generative AI tools, as this could breach author confidentiality, proprietary rights, and potentially data protection laws if identifiable information is present.
This confidentiality obligation also applies to all editorial communications—including decision and notification letters—since these may contain confidential content. Therefore, editors must not use AI tools to edit or rewrite such communications, even for the purpose of improving clarity or style.
Note:
Generative AI refers to artificial intelligence capable of creating diverse content such as text, images, audio, or synthetic data. Notable examples include ChatGPT, NovelAI, Jasper AI, Rytr AI, and DALL·E.