It’s time to talk about the real AI risks

Unsurprisingly, everyone was talking about AI and the recent rush to deploy large language models. Ahead of the conference, the United Nations put out a statement encouraging RightsCon attendees to focus on AI oversight and transparency.

I was surprised, however, by how different the conversations about the risks of generative AI were at RightsCon from all the warnings from big Silicon Valley voices that I’ve been reading in the news.

Throughout the last few weeks, tech luminaries like OpenAI CEO Sam Altman, ex-Googler Geoff Hinton, top AI researcher Yoshua Bengio, Elon Musk, and many others have been calling for regulation and urgent action to address the “existential risks”—even including extinction—that AI poses to humanity. 

Certainly, the rapid deployment of large language models without risk assessments, disclosures about training data and processes, or seemingly much attention paid to how the tech could be misused is concerning. But speakers in several sessions at RightsCon reiterated that this AI gold rush is a product of company profit-seeking, not necessarily regulatory ineptitude or technological inevitability.

In the very first session, Gideon Lichfield, the top editor at Wired (and the ex–editor in chief of Tech Review), and Urvashi Aneja, founder of the Digital Futures Lab, went toe to toe with Google’s Kent Walker.

“Satya Nadella of Microsoft said he wanted to make Google dance. And Google danced,” said Lichfield. “We are now, all of us, jumping into the void holding our noses because these two companies are out there trying to beat each other.” Walker, in response, emphasized the social benefits that advances in artificial intelligence could bring in areas like drug discovery, and restated Google’s commitment to human rights. 

The following day, AI researcher Timnit Gebru directly addressed the talk of existential risks posed by AI: “Ascribing agency to a tool is a mistake, and that is a diversion tactic. And if you see who talks like that, it’s literally the same people who have poured billions of dollars into these companies.”

She said, “Just a few months ago, Geoff Hinton was talking about GPT-4 and how it’s the world’s butterfly. Oh, it’s like a caterpillar that takes data and then flies into a beautiful butterfly, and now all of a sudden it’s an existential risk. I mean, why are people taking these people seriously?”

Frustrated with the narratives around AI, experts like Human Rights Watch’s tech and human rights director, Frederike Kaltheuner, suggest grounding ourselves in the risks we already know plague AI rather than speculating about what might come. 

And there are some clear, well-documented harms posed by the use of AI. They include:

  • Increased and amplified misinformation. Recommendation algorithms on social media platforms like Instagram, Twitter, and YouTube have been shown to prioritize extreme and emotionally compelling content, regardless of accuracy. LLMs contribute to this problem by producing convincing misinformation known as “hallucinations.” (More on that below.)
  • Biased training data and outputs. AI models tend to be trained on biased data sets, which can lead to biased outputs. That can reinforce existing social inequities, as in the case of algorithms that discriminate when assigning people risk scores for committing welfare fraud, or facial recognition systems known to be less accurate on darker-skinned women than white men. Instances of ChatGPT spewing racist content have also been documented.
  • Erosion of user privacy. Training AI models requires massive amounts of data, which is often scraped from the web or purchased, raising questions about consent and privacy. Companies that developed large language models like ChatGPT and Bard have not yet released much information about the data sets used to train them, though they certainly contain a lot of data from the internet. 

Kaltheuner says she’s especially concerned generative AI chatbots will be deployed in risky contexts such as mental health therapy: “I’m worried about absolutely reckless use cases of generative AI for things that the technology is simply not designed for or fit for purpose.” 

Gebru reiterated concerns about the environmental impacts resulting from the large amounts of computing power required to run sophisticated large language models. (She says she was fired from Google for raising these and other concerns in internal research.) Moderators of ChatGPT, who work for low wages, have also experienced PTSD in their efforts to make model outputs less toxic, she noted. 

Regarding concerns about humanity’s future, Kaltheuner asks, “Whose extinction? Extinction of the entire human race? We are already seeing people who are historically marginalized being harmed at the moment. That’s why I find it a bit cynical.”

What else I’m reading

  • US government agencies are deploying GPT-4, according to an announcement from Microsoft reported by Bloomberg. OpenAI might want regulation for its chatbot, but in the meantime, it also wants to sell it to the US government.
  • ChatGPT’s hallucination problem might not be fixable. According to researchers at MIT, large language models get more accurate when they debate each other, but factual accuracy is not built into how they work, as broken down in this really handy story from the Washington Post. If hallucinations are unfixable, we may only be able to reliably use tools like ChatGPT in limited situations. 
  • According to an investigation by the Wall Street Journal, Stanford University, and the University of Massachusetts Amherst, Instagram has been hosting large networks of accounts posting child sexual abuse content. The platform responded by forming a task force to investigate the problem. It’s pretty shocking that such a significant problem could go unnoticed by the platform’s content moderators and automated moderation algorithms.

What I learned this week

A new report by the South Korea–based human rights group PSCORE details the days-long application process required to access the internet in North Korea. Just a few dozen families connected to Kim Jong Un have unrestricted access to the internet, and only a “few thousand” government employees, researchers, and students can access a version that is subject to heavy surveillance. As Matt Burgess reports in Wired, Russia and China likely supply North Korea with its highly controlled web infrastructure.
