Bold technology predictions pave the road to humility. Even titans like Albert Einstein own a billboard or two along that humbling freeway. In a classic example, John von Neumann, who pioneered modern computer architecture, wrote in 1949, “It would appear that we have reached the limits of what is possible to achieve with computer technology.” Among the many breakthroughs that have defied von Neumann’s prediction is the social psychologist Frank Rosenblatt’s 1958 model of a human brain’s neural network. Rosenblatt called his system the “Perceptron,” implemented it on an IBM 704 mainframe computer, and trained it to recognize simple patterns. Perceptrons eventually led to deep learning and modern artificial intelligence.
In a similarly bold but flawed prediction, brothers Hubert and Stuart Dreyfus—professors at UC Berkeley with very different specialties, Hubert’s in philosophy and Stuart’s in engineering—wrote in a January 1986 story in Technology Review that “there is almost no likelihood that scientists can develop machines capable of making intelligent decisions.” The article drew from the Dreyfuses’ soon-to-be-published book, Mind Over Machine (Macmillan, February 1986), which described their five-stage model for human “know-how,” or skill acquisition. Hubert (who died in 2017) had long been a critic of AI, penning skeptical papers and books as far back as the 1960s.
Stuart Dreyfus, who is still a professor at Berkeley, is impressed by the progress made in AI. “I guess I’m not surprised by reinforcement learning,” he says, adding that he remains skeptical and concerned about certain AI applications, especially large language models, or LLMs, like ChatGPT. “Machines don’t have bodies,” he notes. And he believes that being disembodied is limiting and creates risk: “It seems to me that in any area which involves life-and-death possibilities, AI is dangerous, because it doesn’t know what death means.”