A Microsoft study published this year on knowledge workers’ use of generative AI found the tool had changed “the nature of critical thinking” from “information gathering to information verification”, from “problem-solving to AI response integration” and from “task execution to task stewardship”.
But like many pleasingly neat solutions to complex problems, mine turns out to be a terrible idea.
CRITICAL THINKING ABILITIES AT RISK
Maria Abreu, a professor of economic geography at Cambridge university, told me her department had experimented along these lines. But when they gave undergraduates an AI text and asked them to improve it, the results were disappointing.
“The improvements were very cosmetic, they didn’t change the structure of the arguments,” she said.
Master's students did better, perhaps because they had already honed the ability to think critically and structure arguments. "The worry is, if we don't train them to do their own thinking, are they going to then not develop that ability?"
After the pandemic prompted a shift to assessments in which students had access to the internet, Abreu’s department is now going back to closed exam conditions.
Michael Veale, an associate professor at University College London’s law faculty, told me his department had returned to using more traditional exams, too.