It was at the University of Toronto in 2012 that artificial intelligence researcher Dr. Geoffrey Hinton laid the intellectual foundation for the AI systems now sweeping through the tech industry and beyond.
But now, Hinton has announced his resignation from his AI research role at Google, citing concerns over the future of this fast-moving technology and its ethical implications.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said in an interview with The New York Times on Monday.
Hinton, who is originally from Britain, came to Toronto in the 1980s to pursue research on “neural networks” — systems that analyze data to teach themselves skills. In 2012, Hinton and two Toronto graduate students, Ilya Sutskever and Alex Krizhevsky, built a neural network that analyzed thousands of photos to teach itself to identify common objects such as flowers and dogs.
It was that system that prompted Google to spend $44 million to acquire the company led by Hinton, Sutskever and Krizhevsky. Their neural network laid the groundwork for today’s powerful AI technologies, such as ChatGPT. But as companies including Google began building neural networks that learn from vast amounts of digital text, Hinton grew concerned that these systems might become smarter than the human brain in some ways. He is now worried about where AI will take us in the coming years.
“Look at how it was five years ago and how it is now,” he said. “Take the difference and propagate it forwards. That’s scary.”
Hinton told The New York Times he’s concerned about a flood of false photos, videos and text spreading across the internet, making it difficult for people to distinguish between what is true and what isn’t. He’s also worried about how AI technologies will affect the job market, and he notes that they could eventually pose a threat to humanity if companies allow AI systems to create and run their own computer code.
Hinton is certainly not the first to raise concerns about the ethics of AI technology. In March, a coalition of over 500 technologists, engineers and ethicists signed a letter urging AI labs to pause the training of all AI systems more powerful than GPT-4 for at least six months, stating that the technologies could pose “profound risks to society and humanity.” The letter, however, came with its own controversy: a few of the signatures turned out to be false, including one attributed to OpenAI CEO Sam Altman, and some genuine signatories criticized the letter after it was published. AI experts have said the letter furthers “AI hype,” which ultimately helps those building AI sell their products. Emily M. Bender, co-author of the first paper the letter cites, broke this down in a thread on Twitter.
“The risks and harms have never been about ‘too powerful AI,’” she wrote in the thread. “Instead: They’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”
Now, with Hinton voicing his own concerns and stepping away from his work on AI with Google, the debate over the future of AI will surely continue.