I think there’s an important set of lessons for us about what the next decade’s going to be like for AI. The first is where it came from: a team of three people poking at an idea in, like, a random corner of the OpenAI building.
This single idea about diffusion models, just a small breakthrough in algorithms, took us from making something that wasn’t very good to something that can have a huge impact on the world.
Another thing that’s interesting is that this was the first AI that everyone used, and there are a few reasons why. One is that it creates full, finished products. If you’re using Copilot, our code generation AI, it needs a lot of help from you. But with DALL-E 2, you tell it what you want and it’s like talking to a colleague who’s a graphic artist. I think it’s the first time we’ve seen this with an AI.
3/ What DALL-E means for society
When we realized that DALL-E 2 was going to be a big thing, we wanted it to be an example of how we’re going to deploy new technology: get the world to understand that images might be faked, so people think, ‘Hey, pretty quickly you’re going to need to not trust images on the internet.’
We also wanted to talk first to the people who are going to be most negatively impacted, and have them get to use it. It’s not the current framework, but the world I would like us, as a field, to get to is one where, if you are helping train an AI by providing data, you somehow own part of that model.