I met a police drone in VR—and hated it

This matters because police departments are racing ahead and starting to use drones anyway, for everything from surveillance and intelligence gathering to chasing criminals.

Last week, San Francisco approved the use of robots, including drones, that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK, most police drones have thermal cameras that can be used to detect how many people are inside houses, says Pósch. This has been used for all sorts of things: catching human traffickers and rogue landlords, and even targeting people suspected of holding parties during covid-19 lockdowns.

Virtual reality will let the researchers test the technology in a controlled, safe way among lots of test subjects, Pósch says.

Even though I knew I was in a VR environment, I found the encounter with the drone unnerving. My opinion of these drones did not improve, even though I’d met a supposedly polite, human-operated one. (There are even more aggressive modes in the experiment, which I did not experience.)

Ultimately, it may not make much difference whether drones are “polite” or “rude,” says Christian Enemark, a professor at the University of Southampton who specializes in the ethics of war and drones and is not involved in the research. That’s because the use of drones itself is a “reminder that the police are not here, whether they’re not bothering to be here or they’re too afraid to be here,” he says.

“So maybe there’s something fundamentally disrespectful about any encounter.”

Deeper Learning

GPT-4 is coming, but OpenAI is still fixing GPT-3

The internet is abuzz with excitement about AI lab OpenAI’s latest iteration of its famous large language model, GPT-3. The latest demo, ChatGPT, answers people’s questions via back-and-forth dialogue. Since its launch last Wednesday, the demo has crossed over 1 million users. Read Will Douglas Heaven’s story here.

GPT-3 is a confident bullshitter and can easily be prompted to say toxic things. OpenAI says it has fixed a lot of these problems with ChatGPT, which answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests. It even refuses to answer some questions, such as how to be evil, or how to break into someone’s house. 
