Policies and Regulations Addressing AI Use
In response to the growing concerns about AI and academic integrity, educational institutions worldwide are developing new policies and regulations to guide the use of these technologies in academic work. These policies aim to strike a balance between leveraging the potential benefits of AI and preventing its misuse.
Key components of these policies often include:
- Clear Guidelines: Providing students with clear guidelines on when and how AI tools can be used in their assignments.
- Disclosure Requirements: Requiring students to disclose the use of any AI tools in their work.
- Citation Standards: Establishing standards for citing AI tools and AI-generated content.
- Consequences for Misuse: Defining the consequences for violating academic integrity policies by using AI tools inappropriately.
- Utilizing AI Checkers: Just as institutions use plagiarism checkers, they may use AI checkers to assess whether students created their own content or whether it was AI-generated.
The International Baccalaureate (IB) program has also addressed the use of AI.
The IB explicitly states that if students use text or other products produced by an AI tool, they must clearly reference the AI tool in the body of their work and add it to the bibliography. This includes copying, paraphrasing, or modifying an image. The in-text citation should use quotation marks following the reference style in use by the school, and the citation should also include the prompt given to the AI tool and the date the tool generated the text.
For example, if a student used ChatGPT to generate a paragraph for an essay, they would need to cite ChatGPT in the text and include the prompt they used and the date they generated the text in the bibliography. These standards are in place to prevent students from passing off AI-generated work as their own. They also ensure that students are aware of the limitations of AI and are not relying on it too heavily.
Here’s a helpful example of how to list AI products in your bibliography:
| Source Type | Format |
| --- | --- |
| AI-Generated Text | Author. (Date). Title of text. AI Tool. URL |
| AI-Generated Image | Author. (Date). Title of image. AI Tool. URL |
| AI-Generated Audio/Video | Author. (Date). Title of audio/video. AI Tool. URL |
| AI-Generated Code | Author. (Date). Title of code. AI Tool. URL |
| Personal Communication | Author. (Date). Personal communication. AI Tool. URL |
| Other AI-Generated Work | Author. (Date). Title of work. AI Tool. URL |
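The pattern above can be assembled programmatically, for instance by a school's citation helper. The following is a minimal sketch; the function name and all specific values (author, prompt, date, URL) are hypothetical examples, not official IB requirements.

```python
# Minimal sketch: building a bibliography entry in the
# "Author. (Date). Title. AI Tool. URL" pattern from the table above.

def format_ai_citation(author: str, date: str, title: str, tool: str, url: str) -> str:
    """Return one bibliography entry following the Author. (Date). Title. AI Tool. URL pattern."""
    return f"{author}. ({date}). {title}. {tool}. {url}"

# Hypothetical example: citing a ChatGPT-generated paragraph,
# including the prompt and the generation date as the IB guidance requires.
entry = format_ai_citation(
    author="OpenAI ChatGPT",
    date="15 March 2024",
    title='Response to the prompt "Explain the causes of World War I"',
    tool="ChatGPT",
    url="https://chat.openai.com",
)
print(entry)
```

Note that the exact punctuation and ordering should follow the reference style in use at the school.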
Strategies for Educators to Promote Original Thinking
To combat the challenges posed by AI, educators need to implement strategies that promote original thinking, critical analysis, and authentic learning. These strategies should focus on fostering students' ability to think for themselves, evaluate information critically, and express their own unique perspectives.
Some effective strategies include:
- Emphasis on Process: Shifting the focus from the final product to the learning process, rewarding students for their effort, engagement, and critical thinking skills.
- Authentic Assessments: Designing assessments that require students to apply their knowledge and skills to real-world problems or scenarios.
- Critical Evaluation of AI Content: Teaching students how to critically evaluate AI-generated content, identify biases, and verify information from multiple sources.
- Collaborative Learning: Encouraging students to work together on projects and assignments, fostering teamwork, communication, and shared understanding.
- Promoting Critical Analysis: Students should apply their own knowledge in any context where they use information gathered from AI tools, adding their own analysis and interpretation rather than presenting AI output on its own.
The Role of AI Detection Tools: A Word of Caution
As AI tools become more sophisticated, so do AI detection tools designed to identify AI-generated content. While these tools may seem like a promising solution to combat academic dishonesty, they should be used with caution.
Limitations of AI detection tools include:
- Accuracy: AI detection tools are not always accurate and can produce false positives, wrongly flagging human-written content as AI-generated.
- Evasion: Sophisticated AI tools can often evade detection by using techniques such as paraphrasing, rewording, or adding human-like nuances to the text.
- Privacy Concerns: The use of AI detection tools may raise privacy concerns, as they often require submitting student work to third-party services for analysis.
Educators should use AI detection tools as one piece of evidence in assessing academic integrity, rather than relying on them as the sole determinant. It is crucial to investigate further when a tool indicates AI use and consider other factors, such as the student's overall performance, writing style, and understanding of the subject matter.
Avoid the Race: Students are increasingly capable of finding new ways to evade AI detection technology, so educators who rely on detection alone risk a never-ending race to maintain academic integrity.