PREVENTION OVER PUNISHMENT
However, even with clear AI guidelines, some students will be tempted to use tools and software to circumvent the rules.
For example, students may use “humanising” software to disguise an AI-generated assignment and bypass detection software. Students may also use AI tools during oral exams, as current technologies allow such apps to run on mobile phones and relay answers wirelessly to students through discreet earpieces.
Rather than play detective, institutions should focus on prevention through clear communication. This means writing unambiguous AI policies with concrete examples.
Other prevention strategies include AI literacy training for faculty and students, redesigning assessments to focus on process rather than answers, and verifying students’ understanding through conversational assessments and in-class discussions.
Universities can also consider “AI-transparent” approaches where students document their use of AI tools throughout the assignment, similar to how they cite traditional sources. This creates accountability on the students’ part while avoiding the adversarial effects of detection-based enforcement.
Clear AI guidelines protect the value of university degrees and prepare students for an AI-driven future. They help students develop ethical instincts, emotional intelligence and creative thinking – human skills that AI cannot replace.
University graduates will likely work alongside AI tools throughout their careers. The challenge for universities is not to ban AI outright or simply curb over-reliance on it, but to teach students how to collaborate with it responsibly.
With clear and transparent guidelines, universities can uphold educational integrity while preparing students for an AI-enhanced world.
Dr Caroline Wong is Associate Dean for Teaching & Learning, and Associate Professor of Business, and Mr Harry Klass is Senior Learning Technologies Specialist at James Cook University (Singapore Campus).