Generative Artificial Intelligence Declarations

1. Artificial Intelligence Policy for Editors

1.1 Policy Overview and Editorial Duties

Smart and Green Materials is committed to maintaining the highest standards of publication ethics and preventing malpractice. This policy outlines clear guidance for editors on the responsible use of artificial intelligence (AI) tools, while ensuring manuscript confidentiality, editorial independence, and scholarly integrity. Editors, as custodians of the scientific record, must rely on professional judgment that cannot be replaced by AI systems.

1.2 Manuscript Confidentiality and AI Restrictions

Editors must not upload or process submitted manuscripts, or any part thereof, using generative AI or AI-assisted systems. This rule safeguards authors’ intellectual property, preserves confidentiality, and prevents potential violations of data privacy regulations. It applies to all editorial materials, including peer review reports, decision letters, and internal correspondence. Limited AI use is acceptable only for polishing the language of an editor’s own text or for administrative tasks that do not involve confidential manuscript content.

1.3 Editorial Judgment and Decision-Making

The peer review process requires domain expertise, contextual understanding, and critical thinking—qualities beyond the capacity of current AI systems. Editors may not use AI systems to evaluate or decide on manuscripts at any stage, as such technologies lack the depth to assess research significance or disciplinary context. Editorial responsibility and authority remain solely with human editors.

1.4 Oversight of Author AI Use

Authors may employ AI for language improvement, provided they include a declaration statement before the reference list. Editors must evaluate such disclosures while prioritizing scientific quality and methodological rigor. Suspected violations of this AI policy should be reported to the editorial office with supporting evidence for investigation.

1.5 Publisher-Approved AI Tools

Smart and Green Materials utilizes publisher-approved AI systems designed with identity protection and responsible AI practices. These tools support workflows such as plagiarism detection, completeness checks, anonymization, and reviewer identification, while maintaining privacy. They are regularly tested to reduce bias and enhance accuracy, complementing—not replacing—human editorial judgment.

1.6 Professional Development and Policy Updates

Editorial staff remain committed to staying informed on emerging AI technologies and their implications for scholarly publishing. Editors seeking clarification should contact editorial leadership. This policy will be reviewed and updated periodically to align with evolving AI capabilities, while preserving confidentiality and editorial independence.

2. Artificial Intelligence Policy for Reviewers

2.1 Policy Overview and Reviewer Duties

The journal emphasizes the integrity of peer review and provides clear guidance on AI use. Reviewers, as guardians of research quality, must uphold confidentiality, impartiality, and academic objectivity.

2.2 Confidentiality in AI Usage

Reviewers may not use generative AI tools to analyze or reprocess any manuscript content, including abstracts, methods, results, figures, or supplementary materials. Such actions would breach confidentiality, copyright, and privacy standards. This obligation continues even after the review process ends.

2.3 Review Report Restrictions

Peer review reports are strictly confidential and must not be processed with AI tools for editing or language refinement. The responsibility for clarity, accuracy, and professionalism lies entirely with reviewers.

2.4 Scientific Evaluation and AI Limits

Scientific assessment requires subject expertise, methodological knowledge, and critical thinking beyond the capacity of current AI systems. Reviewers must judge the soundness of research design, data interpretation, and validity of conclusions without relying on AI.

2.5 Author AI Disclosure

Authors who use AI for language polishing must disclose it in a declaration placed before the References section. Reviewers should consider these statements but base their evaluations solely on scientific quality. If they suspect undisclosed AI use that undermines originality, they should report it confidentially to the editor.

2.6 Publisher-Approved AI Technologies

The journal employs proprietary AI systems, aligned with responsible AI principles, to support editorial checks such as plagiarism detection while safeguarding data security and confidentiality.

2.7 Compliance and Misconduct Reporting

Reviewers violating this policy may face disqualification and possible institutional reporting. Those uncertain about specific cases should consult the editorial office before acting.

3. Artificial Intelligence Policy for Authors

3.1 Policy Overview

Smart and Green Materials recognizes the potential of AI in research and writing but requires responsible use to preserve integrity, originality, and transparency. Human oversight and accountability remain essential.

3.2 AI-Assisted Writing

Authors may use AI tools only to improve readability, grammar, and language. This permission does not extend to generating research content. Human supervision and thorough review of AI-assisted text are required. All use of AI must be disclosed through a declaration in the manuscript.

3.3 Authorship and Accountability

AI systems cannot be listed as authors or contributors. Authorship requires human responsibility, approval of the final version, and accountability for the integrity of the published work.

3.4 Image and Visual Content

The use of AI to create or alter images, figures, or visual elements is strictly prohibited. Only standard image adjustments (e.g., brightness, contrast, or color balance) are permitted, provided they do not misrepresent data.

3.5 Research Applications

AI tools may be employed within research methodologies (e.g., automated image analysis or computer vision) if essential to the study design. Authors must document all such applications in the methods section, including details on the system, version, manufacturer, processing history, and validation.

3.6 Verification and Compliance

The editorial office may use detection tools to identify AI-generated or modified images. Authors may be asked to submit raw data, original images, or documentation for verification. Requests to use AI in graphical abstracts require prior approval.

3.7 Enforcement

Non-compliance may result in rejection, retraction, or other editorial actions. Authors with questions should consult the Editor-in-Chief before submission.