Geoffrey Hinton tells us why he’s now scared of the tech he helped build

It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than anything else at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.

One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got the first inklings that this stuff could be amazing,” says Hinton. “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.”

Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.

But Hinton wasn’t convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing the numbers that represent the strengths of those connections, the network can be rewired on the fly. In other words, it can be made to learn.
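To make that idea concrete, here is a minimal sketch, not taken from Hinton’s work, of the simplest possible case: a single artificial neuron that learns the logical AND function by gradient descent. The data, learning rate, and sigmoid activation are all illustrative assumptions; the point is that the only things that change during training are the numbers representing the neuron’s connections.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for logical AND: two inputs and a target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # the "connection strengths"
lr = 0.5                      # learning rate (an illustrative choice)

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        # Gradient of the squared error with respect to the pre-activation.
        grad = (out - target) * out * (1 - out)
        # Learning is nothing more than nudging the connection numbers.
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        bias -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + bias), 2))

After training, the outputs sit close to 0, 0, 0, and 1: the network has learned AND, and the only record of that learning is three changed numbers.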

“My father was a biologist, so I was thinking in biological terms,” says Hinton. “And symbolic reasoning is clearly not at the core of biological intelligence.

“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”

A new intelligence

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”

Hinton’s fears will strike many as the stuff of science fiction. But here’s his case. 

As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
