—Felix M. Simon is a research fellow in AI and News at the Reuters Institute for the Study of Journalism; Keegan McBride is an assistant professor in AI, government, and policy at the Oxford Internet Institute; Sacha Altay is a research fellow in the department of political science at the University of Zurich.
This year, close to half the world’s population has the opportunity to participate in an election. And according to a steady stream of pundits, institutions, academics, and news organizations, there’s a major new threat to the integrity of those elections: artificial intelligence.
The internet is full of doom-laden stories proclaiming that AI-generated deepfakes will mislead and influence voters, as well as enable new forms of personalized and targeted political advertising.
Though such claims are concerning, it is critical to look at the evidence. With a substantial number of this year’s elections concluded, it is a good time to ask how accurate these assessments have been so far. The preliminary answer seems to be not very. Read the full story.
Here’s how ed-tech companies are pitching AI to teachers
This back-to-school season marks the third year in which AI models like ChatGPT will be used by thousands of students around the globe. A top concern among educators remains that when students use such models to write essays or come up with ideas for projects, they miss out on the hard and focused thinking that builds creative reasoning skills.
But this year, educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down its use in the classroom, these companies are coaching teachers on how to use AI tools to cut down on the time they spend on tasks like grading, providing feedback to students, and planning lessons.