Photo Illustration: Mae Decena. Sources: Getty Images/DALL-E
New apps spotlight the best and worst of humanity.
You’ve likely seen them around social media — demented faces, fantastical landscapes and futuristic hellscapes produced by artificial intelligence-powered image generators like DALL-E 2 and Midjourney.
These platforms, which are slowly opening up to the public, all follow a similar model. A user enters a prompt, from a single word to a sentence, and the AI spits out an image to represent it. And yes, it gets weird.
But while the images can be fascinating, and the process of creating them almost addictive, they also come with a host of questions about how these platforms may be used, and what they may be asked to create. The latter includes images that are lewd or potentially illegal, such as virtual child sexual abuse material.
“We are telling AI who we are, we’re feeding it data sets of who we are, and it’s just spitting it right back at us,” said Julie Carpenter, a research fellow in California Polytechnic State University’s Ethics and Emerging Sciences Group. “In some ways it’s a funhouse mirror, and sometimes, depending on the medium and what it spits back, it’s not very fun.”
Some AI image generators, like DALL-E 2 and WOMBO, are still in limited beta release — meaning that only a certain number of people are allowed to use them. Others, including Craiyon and Midjourney, are open to anyone who wants to take part. Midjourney allows 25 free queries before users have to pay for a license to do more, while Craiyon offers unlimited queries.
These systems are trained on millions of real images, which the AI analyzes for patterns that it uses to respond to user queries.
What an AI sees: How Midjourney interpreted Grid’s headline “The purebred dogs craze has led to dognappings and puppy scams.” (Midjourney)
A visitor to the Midjourney Discord server, the forum where images are generated, will normally see thousands of images simultaneously sharpening into focus. During one recent visit by Grid, users’ prompts ranged from “clown with black eyes 8k ultra realistic bad weather in new york” to “volcano shooting out pies.” The output ranges from the cheerfully surreal to the downright sinister, as Grid staff found out when we fed a few of our headlines into Midjourney. (The results are embedded throughout this piece.)
Some AI-image sites, such as Craiyon and Midjourney, also have a social component that has helped lure early adopters. Midjourney’s Discord has channels where people show off the images they’ve generated, give feedback to the developer team and one another, and even share photos of a new pet lizard. On Craiyon’s precursor, DALL-E mini, people could post their images, give likes and leave comments.
During a recent Midjourney “office hour” where founder David Holz fielded questions from users, one user said they had struggled with social media since the mid-2000s, when Facebook was “the place to be.”
“But on Midjourney not so much,” said the user. “I picked it up immediately and on my second day I was like whoa, this is one of the most miraculous things that’s ever happened to me.”
The person said they’d generated more than 6,000 images in less than a month.
The headline for Grid’s story on the 2022 “summer from hell” of climate disasters produced even more apocalyptic AI art. (Midjourney)
“I mean, it’s fun,” said Carpenter. “You could go back to really arcade games or other games. Social media reminds me of a lot of childhood games, like the game of telephone.”
Just as with telephone — where one child whispers a message to another, repeating the process down a line of participants — it’s often unclear what will come out the other side for users of AI image generators. And some experts see profound implications as machines inch closer to demonstrating one of the qualities that define humanity.
“People are seeing ways in which this is calling into question or at least asking us to be a little more precise in the way that we define human creativity,” said David Gunkel, a professor of media studies at Northern Illinois University who specializes in ethics of emerging technology. “Because if the machines now can start pumping out images that are photorealistic and that are this entertaining, then it’s called into question the whole idea of the uniqueness of human creativity.”
An AI interpretation of Grid’s story, “Americans hate masks, but there’s no safely ending the pandemic without them.” (Midjourney)
The images people try to generate with these platforms aren’t always fun. Some are potentially illegal.
For example, when a Grid reporter was in Midjourney’s Discord forum, one user asked the system to generate child sexual abuse imagery. The request was explicit. While Midjourney did not produce exactly what the user requested, it did generate a general image of a small child. The incident illustrates the extent to which bad actors will try to use such platforms for their own ends.
When Grid shared the user’s name with Midjourney, Holz said that “it looks like they had already been detected, banned and wiped from our system.”
“We have filters that try to prevent many forms of inappropriate content. If someone tries to bypass them, the moderators will either warn or ban the user (depending on the type of content), and then the team will update the filters,” Holz said.
Other requests, for things like “photorealistic elves in bikinis” and “kathryn winnick, insanely realistic, hyber detailed, hot, body shot,” did produce images, with varying degrees of success.
Craiyon, for its part, has a section in its frequently asked questions list about the potential for limitations and bias in its AI model.
“While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases,” reads the section. “Because the model was trained on unfiltered data from the Internet, it may generate images that contain harmful stereotypes. The extent and nature of the biases of the DALL·E mini model have yet to be fully documented.”
Experts told Grid recently that it’s not clear whether Vladimir Putin is sick, but this is what it might look like, according to Midjourney. (Midjourney)
Both Gunkel and Carpenter said they are concerned about whether there will be enough content moderation in place as these systems become more popular. While filters and other measures built into these systems can provide some degree of moderation, Carpenter said human review is essential for understanding images in context and for catching bad actors who try new strategies to get around existing safeguards. But just how that will work, given how quickly image generators can spit out new images, is unclear.
Gunkel is also worried about the photorealistic images and how those may be misused or manipulated. Midjourney, for its part, does not create photorealistic images for this very reason.
“I think the real concern here, and those things I think we’ve got to really keep our eye on, is the extent to which these image generation systems are able to be employed to create deepfakes because of the photorealism,” said Gunkel. That has implications not just for AI-generated images’ use in politics, but also as tools of defamation or libel if users create deepfakes to harm others, he said.
“As users, we can feel like the content moderation that’s being done by some social media or all social media sites is not enough,” said Carpenter. “There’s even less of it with these emerging technologies around creating images.”
Thanks to Lillian Barkley for copy editing this article.
Benjamin Powers is a technology reporter for Grid where he explores the interconnection of technology and privacy within major stories.