
Policy on Using Large Language Models (LLMs) in Manuscripts Submitted for Publication in Conferences and Journals

The following rules complement the IEEE rules listed in:

Author Guidelines for Artificial Intelligence (AI)-Generated Text

When submitting a manuscript, the authors implicitly confirm that they have read, understood, and followed the rules for acceptable use of LLMs. In particular, the authors confirm that any output of these tools used in the manuscript has been thoroughly checked, including careful text editing, verification of audio/visual content, and testing of any code to ensure correctness.

The basic principle informing the following acceptable-use rules is that authors should take full responsibility for, and ownership of, their research and the content of their submitted manuscript. In particular, it is unacceptable for any section of a manuscript to be produced entirely by an LLM.

  1. Acceptable uses:
    a. Improving language and clarity during the editing process.
    b. Accelerating code development and visualization.
    c. Research and ideation (identifying related work, getting feedback on ideas, etc.).
  2. Unacceptable uses:
    a. Using an LLM to generate most (or significant components) of a manuscript, as opposed to improving the clarity of author-composed text.
    b. Direct use of LLM-generated code without subsequent thorough verification of its correctness.
    c. Direct use of LLM-generated text, without subsequent thorough verification of its correctness and accuracy, in any section of the manuscript, including the introduction, the related work section, and any summary of prior work (distinct from a related work section).