AI models that generate stunning imagery from simple phrases are evolving into powerful creative and commercial tools.
Who: OpenAI, Stability AI, Midjourney, Google
When: Now
OpenAI introduced a world of weird and wonderful mash-ups when its text-to-image model DALL-E was released in 2021. Type in a short description of pretty much anything, and the program spat out a picture of what you asked for in seconds. DALL-E 2, unveiled in April 2022, was a massive leap forward. Google also launched its own image-making AI, called Imagen.
Yet the biggest game-changer was Stable Diffusion, an open-source text-to-image model released for free by UK-based startup Stability AI in August 2022. Not only could Stable Diffusion produce some of the most stunning images yet, but it was designed to run on a (good) home computer.
By making text-to-image models accessible to all, Stability AI poured fuel on what was already an inferno of creativity and innovation. Millions of people have created tens of millions of images in just a few months. But there are problems, too. Artists are caught in the middle of one of the biggest upheavals in a decade. And, just like language models, text-to-image generators can amplify the biased and toxic associations buried in training data scraped from the internet.
The tech is now being built into commercial software, such as Photoshop. Visual-effects artists and video-game studios are exploring how it can fast-track development pipelines. And text-to-image technology has already advanced to text-to-video. The AI-generated video clips demoed by Google, Meta, and others in the last few months are only seconds long, but that will change. One day movies could be made just by feeding a script into a computer.
Nothing else in AI grabbed more of people’s attention last year—for the best and worst reasons. Now we wait to see what lasting impact these tools will have on creative industries—and the entire field of AI.
No one knows where the rise of generative AI will leave us.
Cover art by Matthijs Herzberg
© 2023 MIT Technology Review