The rise of AI detection tools, like GPTZero, compounds our anxieties. I tested my own original work and, while it was not flagged as AI-written, the conclusion that it was “likely written by a human” offers little reassurance, especially after hearing stories of other students’ original work being wrongly flagged as containing “parts written by AI”.
This leaves me worried about being flagged as a false positive in the future – it feels less like an “if it happens” and more like a “when it happens”. When that day comes, how do I prove my innocence?
Without definitive ways to prove our innocence, we students are left vulnerable, unable to clear our names beyond a reasonable doubt.
All this contributes to the erosion of trust between students and educators. The consequences are far-reaching, especially in universities, where collaboration between students and educators is invaluable in advancing innovative ideas in the spirit of problem-solving.
If students are worried about being falsely flagged by an “almost-good-enough” AI detector, and if educators suspect AI involvement in their students’ work, what becomes of this collaborative spirit?
THE JOYS OF LEARNING
To ensure the continuity of trust in the partnership between students and educators, a good first step is for educators to acknowledge the limitations of current AI detection tools.