AI lie detectors are better than humans at spotting lies

This reliance can shape our behavior. Normally, people tend to assume others are telling the truth. That was borne out in this study—even though the volunteers knew half of the statements were lies, they flagged only 19% of them as such. But that changed when people chose to make use of the AI tool: the accusation rate rose to 58%.
 
In some ways, this is a good thing—these tools can help us spot more of the lies we encounter in our lives, like the misinformation circulating on social media.
 
But it’s not all good. It could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgments is the deterioration of social bonds, is it worth it?
 
And then there’s the question of accuracy. In their study, von Schenk and her colleagues were only interested in creating a tool that was better than humans at lie detection. That isn’t too difficult, given how terrible we are at it. But she also imagines a tool like hers being used to routinely assess the truthfulness of social media posts, or to hunt for fake details in a job hunter’s résumé or interview responses. In cases like these, it’s not enough for a technology to be merely “better than human” if it’s going to be making more accusations.
 
Would we be willing to accept an accuracy rate of 80%, where only four out of every five assessed statements would be correctly interpreted as true or false? Would even 99% accuracy suffice? I’m not sure.
 
It’s worth remembering the fallibility of historical lie detection techniques. The polygraph was designed to measure heart rate and other signs of “arousal” because it was thought that some signs of stress were unique to liars. They’re not. And we’ve known that for a long time. That’s why lie detector results are generally not admissible in US court cases. Despite that, polygraph tests have endured in some settings, and have caused plenty of harm when they’ve been used to hurl accusations at people who fail them on reality TV shows.
 
Imperfect AI tools stand to have an even greater impact because they are so easy to scale, says von Schenk. You can only polygraph so many people in a day. The scope for AI lie detection is almost limitless by comparison.
 
“Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies,” says von Schenk. “However, you really need to test them—you need to make sure they are substantially better than humans.” If an AI lie detector is generating a lot of accusations, we might be better off not using it at all, she says.


Read more from MIT Technology Review’s archive

AI lie detectors have also been developed to look for facial patterns of movement and “microgestures” associated with deception. As Jake Bittle puts it: “the dream of a perfect lie detector just won’t die, especially when glossed over with the sheen of AI.”
 
On the other hand, AI is also being used to generate plenty of disinformation. As of October last year, generative AI was already being used in at least 16 countries to “sow doubt, smear opponents, or influence public debate,” as Tate Ryan-Mosley reported.
 
The way AI language models are developed can heavily influence the way that they work. As a result, these models have picked up different political biases, as my colleague Melissa Heikkilä covered last year.
 
AI, like social media, has the potential for good or ill. In both cases, the regulatory limits we place on these technologies will determine which way the sword falls, argue Nathan E. Sanders and Bruce Schneier.
 
Chatbot answers are all made up. But there’s a tool that can give a reliability score to large language model outputs, helping users work out how trustworthy they are. Or, as Will Douglas Heaven put it in an article published a few months ago, a BS-o-meter for chatbots.

From around the web

Scientists, ethicists, and legal experts in the UK have published a new set of guidelines for research on synthetic embryos, or, as they call them, “stem cell-based embryo models (SCBEMs).” There should be limits on how long they are grown in labs, and they should not be transferred into the uterus of a human or animal, the guidelines state. They also note that if, in the future, these structures look like they might have the potential to develop into a fetus, we should stop calling them “models” and instead refer to them as “embryos.”

Antimicrobial resistance is already responsible for 700,000 deaths every year, and could claim 10 million lives per year by 2050. Overuse of broad-spectrum antibiotics is partly to blame. Is it time to tax these drugs to limit demand? (International Journal of Industrial Organization)

Spaceflight can alter the human brain, reorganizing gray and white matter and causing the brain to shift upwards in the skull. We need to better understand these effects, and the impact of cosmic radiation on our brains, before we send people to Mars. (The Lancet Neurology)

The vagus nerve has become an unlikely star of social media, thanks to influencers who drum up the benefits of stimulating it. Unfortunately, the science doesn’t stack up. (New Scientist)

A hospital in Texas is set to become the first in the country to enable doctors to see their patients via hologram. Crescent Regional Hospital in Lancaster has installed Holobox—a system that projects a life-sized hologram of a doctor for patient consultations. (ABC News)
