“If we really want to address these issues, we’ve got to get serious,” says Farid. For example, he wants cloud service providers and app stores such as those operated by Amazon, Microsoft, Google, and Apple, all of which are part of the PAI, to ban services that allow people to use deepfake technology with the intent to create nonconsensual sexual imagery. Watermarks on all AI-generated content should also be mandated, not voluntary, he says.
Another important element missing from the guidelines is how the AI systems themselves could be made more responsible, says Ilke Demir, a senior research scientist at Intel who leads the company’s work on the responsible development of generative AI. This could include more details on how the AI model was trained, what data went into it, and whether generative AI models have any biases.
The guidelines make no mention of ensuring that there is no toxic content in the data sets of generative AI models. “It’s one of the most significant ways harm is caused by these systems,” says Daniel Leufer, a senior policy analyst at the digital rights group Access Now.
The guidelines include a list of harms that these companies want to prevent, such as fraud, harassment, and disinformation. But a generative AI model that only ever depicts white people also causes harm, and that kind of harm is not currently listed, Demir adds.
Farid raises a more fundamental issue. Since the companies acknowledge that the technology could lead to some serious harms and offer ways to mitigate them, “why aren’t they asking the question ‘Should we do this in the first place?’”