AI's magical text-to-image generators, like Dall-E 2, have sparked fears of unemployment among professional illustrators — but Adobe, the leading maker of software tools for designers, sees AI as more of a creative assistant.
Driving the news: At a conference this week, Adobe showed how it could build generative AI tools into Photoshop, Lightroom and other products.
Why it matters: Fanciful images generated by the likes of Dall-E 2 and Stable Diffusion have raised thorny legal and ethical questions, but Adobe's early work provides a glimpse at an alternative vision of humans and AI working together.
Details: At its MAX conference in Los Angeles this week, Adobe showed off a number of ways generative AI could help creative workers without entirely supplanting artists.
The big picture: Adobe's approach offers a contrast to the projects put out by OpenAI, Google, Meta, Stability AI and others, which have largely focused on what AI programs can do on their own given text prompts from users.
What they're saying: While some see generative AI as a threat to artists' livelihoods, Adobe chief product officer Scott Belsky told Axios he sees it as eliminating mundane chores, similar to the way GitHub's Copilot helps programmers code faster.
Yes, but: Legal issues remain, particularly around intellectual property and rights to the data used to train these AI programs. Belsky said that Adobe is wrestling with the same questions as others in the industry, and also trying to develop business models that support artists.
Then there is the question of who owns an AI-generated image.
What's next: While much of the industry's early work has focused on image generation (and simple videos), Adobe sees generative AI also being useful in everything from 3D design to texture creation to making logos.