
Four trends that changed AI in 2023

Existential risk has become one of the biggest memes in AI. The hypothesis is that one day we will build an AI that is far smarter than humans, and this could lead to grave consequences. It’s an ideology championed by many in Silicon Valley, including Ilya Sutskever, OpenAI’s chief scientist, who played a pivotal role in ousting OpenAI CEO Sam Altman (and then reinstating him a few days later). 

But not everyone agrees with this idea. Meta’s AI leaders Yann LeCun and Joelle Pineau have said that these fears are “ridiculous” and the conversation about AI risks has become “unhinged.” Many other power players in AI, such as researcher Joy Buolamwini, say that focusing on hypothetical risks distracts from the very real harms AI is causing today. 

Nevertheless, the increased attention to the technology’s potential to cause extreme harm has prompted many important conversations about AI policy and animated lawmakers all over the world to take action. 

4. The days of the AI Wild West are over

Thanks to ChatGPT, everyone from the US Senate to the G7 was talking about AI policy and regulation this year. In early December, European lawmakers wrapped up a busy policy year when they agreed on the AI Act, which will introduce binding rules and standards on how to develop the riskiest AI more responsibly. It will also ban certain “unacceptable” applications of AI, such as police use of facial recognition in public places. 

The White House, meanwhile, introduced an executive order on AI, plus voluntary commitments from leading AI companies. Its efforts aimed to bring more transparency to and set standards for AI, and gave agencies a lot of freedom to adapt AI rules to fit their sectors. 

One concrete policy proposal that got a lot of attention was watermarks—invisible signals in text and images that can be detected by computers, in order to flag AI-generated content. These could be used to track plagiarism or help fight disinformation, and this year we saw research that succeeded in applying them to AI-generated text and images.
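To make the idea a little more concrete: one popular approach to watermarking text, along the lines of the “green list” schemes academic researchers published in 2023, nudges the model toward a pseudo-random subset of tokens at each step of generation, and a detector later checks whether that subset shows up suspiciously often. The sketch below is a toy Python version of the detection side only; the hashing scheme, green-list fraction, and z-score threshold are illustrative assumptions, not any particular company’s implementation.

```python
# Toy sketch of a "green list" text-watermark detector. The constants and
# hashing scheme here are illustrative assumptions, not a real product.
import hashlib
import math

GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" at each step (assumed)
Z_THRESHOLD = 4.0      # z-score above which we flag text as watermarked (assumed)


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return (int(digest, 16) % 1000) / 1000 < GREEN_FRACTION


def detect_watermark(tokens: list[str]) -> tuple[float, bool]:
    """Count green tokens and test whether the count is improbably high for unwatermarked text."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    z = (hits - expected) / std if std > 0 else 0.0
    return z, z > Z_THRESHOLD


if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog".split()
    z_score, flagged = detect_watermark(text)
    print(f"z = {z_score:.2f}, flagged as AI-generated: {flagged}")
```

A watermarking generator would apply the same green-list rule in reverse, slightly boosting the probability of green tokens while sampling, so that watermarked text racks up far more hits than human-written text would by chance.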

It wasn’t just lawmakers who were busy, but lawyers too. We saw a record number of lawsuits, as artists and writers argued that AI companies had scraped their intellectual property without their consent and with no compensation. In an exciting counter-offensive, researchers at the University of Chicago developed Nightshade, a new data-poisoning tool that lets artists fight back against generative AI by messing up training data in ways that could cause serious damage to image-generating AI models. There is a resistance brewing, and I expect more grassroots efforts to shift tech’s power balance next year. 

Deeper Learning

Now we know what OpenAI’s superalignment team has been up to

OpenAI has announced the first results from its superalignment team, its in-house initiative dedicated to preventing a superintelligence—a hypothetical future AI that can outsmart humans—from going rogue. The team is led by chief scientist Ilya Sutskever, who was part of the group that just last month fired OpenAI’s CEO, Sam Altman, only to reinstate him a few days later.

Business as usual: Unlike many of the company’s announcements, this heralds no big breakthrough. In a low-key research paper, the team describes a technique that lets a less powerful large language model supervise a more powerful one—and suggests that this could be a small step toward figuring out how humans might supervise superhuman machines. Read more from Will Douglas Heaven.
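The setup is easier to picture with a toy analogue: train a small “weak” model on a handful of ground-truth labels, let it label a larger pool of data, train a more capable “strong” model only on those noisy labels, and then see whether the student ends up more accurate than its supervisor. The sketch below does exactly that with off-the-shelf scikit-learn classifiers standing in for language models; the dataset, model choices, and split sizes are all illustrative assumptions, not OpenAI’s actual experiments.

```python
# Toy analogue of weak-to-strong supervision: a small "weak" model labels data
# for a larger "strong" model, and we check whether the strong model beats its
# supervisor on held-out ground truth. Classical models stand in for LLMs here.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic task with held-out ground truth for evaluation (assumed sizes).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=200, random_state=0)
X_unlabeled, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# "Weak supervisor": a simple model trained on a small labeled set.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# "Strong student": a more capable model trained only on the weak model's labels.
pseudo_labels = weak.predict(X_unlabeled)
strong = GradientBoostingClassifier(random_state=0).fit(X_unlabeled, pseudo_labels)

print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
```

The open question the paper gestures at is whether the student can reliably outgrow the errors in its supervisor’s labels once the “student” is a model far more capable than any human grader.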

Bits and Bytes

Google DeepMind used a large language model to solve an unsolved math problem
In a paper published in Nature, the company says it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle—producing verifiable and valuable new information that did not previously exist. (MIT Technology Review)
