How do you solve a problem like out-of-control AI? 

Google’s approach is to introduce these new features into its products gradually. But it is most likely just a matter of time before things start to go awry. The company has not solved any of the common problems with these AI models. They still make stuff up. They are still easy to manipulate into breaking their own rules. They are still vulnerable to attacks. And there is very little stopping them from being used as tools for disinformation, scams, and spam.

Because these sorts of AI tools are relatively new, they still operate in a largely regulation-free zone. But that doesn’t feel sustainable. Calls for regulation are growing louder as the post-ChatGPT euphoria wears off, and regulators are starting to ask tough questions about the technology.

US regulators are trying to find a way to govern powerful AI tools. This week, OpenAI CEO Sam Altman will testify in the US Senate (after a cozy “educational” dinner with politicians the night before). The hearing follows a meeting last week between Vice President Kamala Harris and the CEOs of Alphabet, Microsoft, OpenAI, and Anthropic.

In a statement, Harris said the companies have an “ethical, moral, and legal responsibility” to ensure that their products are safe. Senator Chuck Schumer of New York, the majority leader, has proposed legislation to regulate AI, which could include a new agency to enforce the rules. 

“Everybody wants to be seen to be doing something. There’s a lot of social anxiety about where all this is going,” says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. 

Getting bipartisan support for a new AI bill will be difficult, King says: “It will depend on to what extent [generative AI] is being seen as a real, societal-level threat.” But the chair of the Federal Trade Commission, Lina Khan, has come out “guns blazing,” she adds. Earlier this month, Khan wrote an op-ed calling for AI to be regulated now, to avoid repeating the mistakes of being too lax with the tech sector in the past. She signaled that US regulators are more likely to reach for laws already in their tool kit, such as antitrust and commercial practices laws, than to wait for new ones.

Meanwhile, in Europe, lawmakers are edging closer to a final deal on the AI Act. Last week, members of the European Parliament signed off on a draft of the regulation that includes a ban on facial recognition technology in public places. It would also ban predictive policing, emotion recognition, and the indiscriminate scraping of biometric data online.

The EU is also set to create more rules constraining generative AI, and the parliament wants companies building large AI models to be more transparent. Proposed measures include labeling AI-generated content, publishing summaries of the copyrighted data used to train a model, and setting up safeguards to prevent models from generating illegal content.
