In response to CNA TODAY’s queries, all six autonomous universities in Singapore said that they viewed artificial intelligence (AI) as a useful and transformative pedagogical tool that requires responsible use.
In general, the universities encourage instructors to explore AI in their teaching and to streamline administrative tasks, though they also stress the need for academic integrity and human oversight.
Improper use of AI will be handled under existing academic misconduct policies, they added.
Associate Professor Tamas Makany, associate provost for teaching and learning innovation at Singapore Management University (SMU), said: “The first step (to academic integrity) is transparency with students through open discussions of mutual expectations about acceptable practices.”
He also said that instructors must manually check all generated materials for accuracy, fairness and alignment with learning goals, and consider legal and ethical issues such as copyright.
Associate Professor Karin Avnit, deputy director of the Teaching and Learning Academy at the Singapore Institute of Technology (SIT), said that irresponsible usage includes using AI to replace essential expert human judgment (such as for final grading), relying on unvalidated AI outputs, or failing to inform students of expectations around AI usage.
Some universities are exploring using AI-assisted grading tools within defined boundaries.
Ms Tammy Tan, chief communications officer at the Singapore University of Technology and Design, said that students use AI graders developed in-house to receive structured, rubric-based feedback on their poster submissions for the university’s flagship first-year course, Design Thinking and Innovation. Final grading is still undertaken by the faculty.
The National University of Singapore said that “approval is required when AI is used to provide instructional responses, feedback or marks, whether as virtual tutors or markers”.
Its publicly available 2024 Policy for Use of AI in Teaching and Learning states that human oversight is mandatory for AI marking, except for objective or closed-ended assessments, and that non-compliance may be deemed academic misconduct.
Closed-ended assessments refer to those with pre-determined response options.
To address improper AI usage, Nanyang Technological University said that it has well-established academic integrity procedures in place.
Similarly, SIT said that should any cases of improper use arise in future, such incidents would be addressed through existing academic and employee disciplinary procedures.
Assoc Prof Makany from SMU said that instructors are briefed on the expectations for using AI in teaching and that, should an incident arise, the university’s first response would be to counsel rather than penalise them.
The Ministry of Education said that “all autonomous universities have institutional policies governing the use of AI, which are aligned to (the ministry’s) position”.