The news: Disinformation generated by AI may be more convincing than disinformation written by humans, according to a new study. It found that people were 3% less likely to spot false tweets generated by AI than false tweets written by humans and collected from Twitter.
Why is that? The researchers aren’t exactly sure. But the way GPT-3 orders information could have something to do with it: AI-generated text tends to be more structured and condensed than human-written text.
Why it matters: AI models can produce incorrect text that appears convincing, allowing conspiracy theorists and disinformation campaigns to churn out false narratives quickly and cheaply. In theory, that content could spread further and faster than disinformation from networks run by humans. Read the full story.
—Rhiannon Williams
Lab-grown meat just reached a major milestone. Here’s what comes next.
It’s easy to avoid meat these days if you want to. Alternative products like plant-based meat are becoming more common—you can even get Impossible burgers at Burger King now. And soon we might have new options, like products made with animal cells grown in a lab.
Just last week, the US Department of Agriculture gave the green light to two companies to make and sell cultivated chicken products in the US. This is a major moment for the field—even if a lot of milestones are left ahead. Casey Crownhart, our climate reporter, takes a look at the burgeoning world of lab-grown meat. Read the full story.
Casey’s story is from The Spark, her weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.