Geoffrey Hinton, one of the pioneers of artificial intelligence (A.I.), has quit his job at Google and joined a growing number of critics concerned about the risks of generative A.I., the technology that powers popular chatbots like ChatGPT. Hinton helped create the intellectual foundation for the A.I. systems that the tech industry now considers key to its future, yet he has come to regret his life’s work. He fears that these new systems, which could prove as important as the introduction of the web browser in the early 1990s, pose profound risks to society and humanity.
Hinton’s journey from A.I. pioneer to doomsayer marks an important inflection point for the technology industry. Industry leaders believe that generative A.I. systems could lead to breakthroughs in fields ranging from drug research to education. But many industry insiders fear they are releasing something dangerous into the wild: generative A.I. is already a tool for misinformation, and in the future it could threaten jobs and even humanity itself. Hinton believes it is hard to see how bad actors can be prevented from using the technology for harm.
After OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.” This was followed by a letter from 19 current and former leaders of the Association for the Advancement of Artificial Intelligence warning of the risks of A.I.
Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning and talked by phone with Sundar Pichai, the CEO of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Pichai. Jeff Dean, Google’s chief scientist, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Hinton is a 75-year-old British expatriate and lifelong academic who has long been driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Hinton embraced an idea called a neural network, a mathematical system that learns skills by analyzing data. Few researchers believed in the idea at the time, but it became his life’s work.
In the 1980s, Hinton was a professor of computer science at Carnegie Mellon University, but he left for Canada because he was reluctant to take Pentagon funding; at the time, most A.I. research in the United States was funded by the Defense Department. Hinton is deeply opposed to the use of artificial intelligence on the battlefield, in what he calls “robot soldiers.”
Google spent $44 million to acquire a company founded by Hinton and two of his students, work that eventually led to the development of new chatbots like ChatGPT and Google Bard. In 2018, Hinton and two longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks. They believed that neural networks, which learn from vast amounts of digital text, were a powerful way for machines to understand and generate language, even if they remained inferior to the way humans handle language.
Last year, however, Hinton’s views changed as Google and OpenAI built systems using much larger amounts of data. He came to believe that while these systems were still inferior to the human brain in some ways, they were surpassing human intelligence in others. That made him worry that as companies improve their A.I. systems, those systems become increasingly dangerous. He warned that the rapid advancement of the technology is a scary prospect and believes it will eventually upend the job market.
Hinton’s immediate concern is that the internet will be flooded with false information, making it difficult for people to tell what is true and what is not. He is also worried about A.I.’s potential to power autonomous weapons, and that future versions of the technology could pose a threat to humanity, because such systems often learn unexpected behavior from the vast amounts of data they analyze.
Hinton believes that the race between Google, Microsoft and other tech giants to develop A.I. will escalate into a global race that will not stop without some form of global regulation. He acknowledges, however, that regulation may be impossible, because there is no way of knowing whether companies or countries are working on the technology in secret.
Instead, Hinton suggests that the world’s leading scientists collaborate on ways of controlling the technology before scaling it up further. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Hinton used to respond to questions about working on potentially dangerous technology by quoting Robert Oppenheimer: “When you see something that is technically sweet, you go ahead and do it.”