Mason encourages any artists who don’t want their works in the data set to contact LAION, which is independent of the startup. LAION did not immediately respond to a request for comment.
Berlin-based artists Holly Herndon and Mat Dryhurst are working on tools to help artists opt out of being in training data sets. They launched a site called Have I Been Trained, which lets artists search to see whether their work is among the 5.8 billion images in the data set used to train Stable Diffusion and Midjourney. Some online art communities, such as Newgrounds, are already taking a stand and have explicitly banned AI-generated images.
An industry initiative called the Content Authenticity Initiative, whose members include Adobe, Nikon, and the New York Times, is developing an open standard that would create a sort of watermark on digital content to prove its authenticity. It could help fight disinformation as well as ensure that digital creators get proper attribution.
“It could also be a way in which creators or IP holders can assert ownership over media that belongs to them or synthesized media that’s been created with something that belongs to them,” says Nina Schick, an expert on deepfakes and synthetic media.
Pay-per-play
AI-generated art poses tricky legal questions. In the UK, where Stability.AI is based, scraping images from the internet without artists’ consent to train an AI tool could constitute copyright infringement, says Gill Dennis, a lawyer at the firm Pinsent Masons. Copyrighted works can be used to train an AI under “fair use,” but only for noncommercial purposes. While Stable Diffusion is free to use, Stability.AI also sells premium access to the model through a platform called DreamStudio.
The UK, which hopes to boost domestic AI development, wants to change laws to give AI developers greater access to copyrighted data. Under these changes, developers would be able to scrape works protected by copyright to train their AI systems for both commercial and noncommercial purposes.