Deeper Learning
Catching bad content in the age of AI
In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it’s still surprisingly bad at catching, labeling, and removing harmful content. One need only recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.
But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.
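To make that idea concrete, here is a minimal sketch of what LLM-assisted moderation could look like: the model is prompted to check a post against a written policy and return a structured verdict that a human reviewer can triage. This is an illustration only; the call_llm function, the policy text, and the label format are placeholders, not a description of any platform's actual system.

```python
import json

# Placeholder for whatever LLM API a platform would actually call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real model call here")

POLICY = "Posts must not contain health misinformation, election misinformation, or harassment."

def moderate(post: str) -> dict:
    """Ask the model whether a post violates the policy; return a structured verdict."""
    prompt = (
        "You are a content moderation assistant.\n"
        f"Policy: {POLICY}\n"
        f"Post: {post}\n"
        'Reply with JSON: {"violates": true or false, "category": "...", "reason": "..."}'
    )
    try:
        verdict = json.loads(call_llm(prompt))
    except (NotImplementedError, json.JSONDecodeError):
        # If the model is unavailable or returns malformed output, escalate to a human.
        verdict = {"violates": None, "category": "unknown", "reason": "needs human review"}
    return verdict

if __name__ == "__main__":
    # A flagged post would be routed to human reviewers along with the model's stated reason.
    print(moderate("Vaccines contain microchips that track you."))
```

In practice the interesting design questions are exactly the ones this sketch glosses over: who writes the policy, how the model's judgments are audited, and what happens when it is wrong.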
Bits and Bytes
Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that helped them identify a new antibiotic capable of killing a type of bacteria responsible for many of the drug-resistant infections common in hospitals. It’s an exciting development that shows how AI can accelerate and support scientific discovery. (MIT News)
Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could “cease operating” in the EU if it cannot comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded, and that there were “technical limits to what’s possible.” This is likely an empty threat. I’ve heard Big Tech say this many times before about one rule or another. Most of the time, the risk of losing out on revenue in the world’s second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to have a restrained presence, in China. But that’s also a very different situation. (Time)
Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing & Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with inadequate, easily circumvented safeguards, it was only a matter of time before cases like this emerged. (Bloomberg)
Tech layoffs have ravaged AI ethics teams
This is a nice overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies are racing to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it’s clear that Big Tech views teams dedicated to these issues as expensive and expendable. (CNBC)