Hinton told the Times he believed that these systems were eclipsing human intelligence in some ways because of the amount of data they were analysing.
“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.
While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.
AI “takes away the drudge work” but “might take away more than that”, he told the Times.
The scientist also warned about the potential spread of misinformation created by AI, telling the Times that the average person would “not be able to know what is true anymore”.
Hinton notified Google of his resignation last month, the Times reported.
Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to US media.
“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” the statement added.
“We’re continually learning to understand emerging risks while also innovating boldly.”
In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.
An open letter, signed by more than 1,000 people including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.
Hinton did not sign that letter at the time, but told the New York Times that scientists should not “scale this up more until they have understood whether they can control it”.