The ability to remix historical movements and styles without context raises questions about the future of art.
By Jo Lawson-Tancred
In recent months, strange collages have been popping up on Twitter feeds featuring improbable and occasionally gruesome images that each represent some kind of farcical prompt. Examples include “the demogorgon from Stranger Things holding a basketball,” “plague doctor onlyfans,” and “archaeologists discovering a plastic chair.”
Many will recognize these posts as the result of text-to-image generation, the latest craze in machine learning, which gained mainstream attention earlier this year with artificial intelligence company OpenAI’s announcement of an image generator called DALL-E 2. Some of the pictures it became known for include the avocado armchair and an astronaut riding a horse through space.
[Embedded tweet: Weird Dall-E Mini Generations (@weirddalle), June 8, 2022]

This year has also seen the arrivals of Midjourney, from a lab of the same name, and Google’s version, Imagen. Some possible practical uses include producing specific images to illustrate news items or creating cheap prototypes during design research and development.
The latest influx of viral images on Twitter is plainly not as impressively rendered as those that accompanied the official announcements, some of which are photorealistic. These much scrappier images, with barely discernible faces, have been produced by the replica Craiyon, known until recently as DALL-E Mini.
A horde of these unofficial, open source text-to-image algorithms has surfaced online—including the Russian-language ruDALLE, Disco Diffusion, and Majesty Diffusion—produced by programmers in their spare time as an impatient response to the reluctance of tech giants to make their algorithms public.
OpenAI has explained that before it does so, it plans to study DALL-E 2’s limitations and the safety mitigations that would be necessary to make sure it’s deployed responsibly.
Beyond the obvious meme-making potential of these open source replicas, they have also proved a huge hit with a growing community of self-identifying, largely self-taught A.I. artists. 

[Instagram post shared by Michael Carychao (@michaelcarychao)]

Michael Carychao, whose A.I. generations have won him 26,800 followers on TikTok, had previously worked in games, VFX, V.R., and animation when he came across an open source algorithm trained by the programmer and A.I. artist Katherine Crowson. “Instantly I dropped everything and have been making A.I. art every day since,” he said.
Much of the appeal seems to be the unprecedented quantity of images produced at high speed that generative A.I. can offer an artist looking to experiment with any idea that pops into their head.
“The most surprising things can be conjured up in seconds,” said Carychao, who also noted that since becoming a member of this community his social media feeds are filled with “quirky, beautiful, curated visions that blend old and new styles in provocative ways; it’s like going to a dozen galleries a day.”
OpenAI, Google, and Midjourney are justified in waiting to unleash the power of their creations, the full implications of which are far from apparent, but the decision has been controversial and their waitlists have created a huge gap in access.
For these artists, making code open source is not only about lowering barriers to entry; it is a fundamental value of democratic digital production. Members of the community dedicate a huge amount of their time to improving the algorithms currently available via Google Colab, congregating on the relevant Discord server to crowdsource their expertise.
“Every week, new models and new techniques appear, and what is perceived as good quality is ever-changing,” said Apolinário Passos, who set up the platform multimodal.art as a comprehensive guide to the scene. He is working to make the tools accessible to non-coders with the newly developed MindsEye interface that allows users to pilot different available algorithms.
Artists hoping to make use of the current resources available for producing A.I. art are best off being able to make at least some sense of Python and having access to a decent GPU. Aside from these hurdles, however, the tools have opened up digital art production to many more aspiring creatives and proved to be a great leveler in terms of artistic skill.
Unlimited Dream Co., Stained Glass (2022). Courtesy of Unlimited Dream Co.
According to London-based artist Unlimited Dream Co., who produces lyrical, semi-abstracted dreamlike compositions, the best comparison for text-to-image generation is the advent of photography in terms of its democratizing potential. It gives “anyone the ability to create perfect representations of anything they can imagine just by writing a sentence.”
Writing this sentence, technically known as “prompt engineering,” may be the key skill that replaces the artist’s hand. These carefully worded instructions can be seen as an interface between ideas floating in an artist’s head and the final images that show up on screen.
For artists, a crude command like “the demogorgon from Stranger Things holding a basketball” is unlikely to suffice. Instead, sophisticated prompt engineering requires adding stylistic references to mediums, like Polaroid, stained glass, or drawing, and to movements or makers.
The label of the popular 3D creation tool Unreal Engine, for example, cues the algorithm to output images that are more hyperrealistic. 
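The technique described here—layering medium, artist, and rendering-engine cues onto a base subject—amounts to careful string composition. The sketch below is a purely hypothetical illustration of that idea (the `build_prompt` helper is not part of any of the tools mentioned in this article), showing how a crude command differs from an engineered one.

```python
def build_prompt(subject, medium=None, style=None, engine=None):
    """Compose a text-to-image prompt from a base subject plus
    optional medium, artist-style, and rendering-engine cues."""
    parts = [subject]
    if medium:
        parts.append(f"as a {medium}")   # e.g. Polaroid, stained glass, drawing
    if style:
        parts.append(f"in the style of {style}")  # e.g. an artist or movement
    if engine:
        parts.append(f"rendered in {engine}")     # e.g. Unreal Engine
    return ", ".join(parts)

# A crude prompt versus an engineered one:
crude = build_prompt("a lighthouse at dusk")
detailed = build_prompt(
    "a lighthouse at dusk",
    medium="stained glass window",
    style="Barbara Hepworth",
    engine="Unreal Engine",
)
```

Here `crude` stays as the bare subject, while `detailed` becomes “a lighthouse at dusk, as a stained glass window, in the style of Barbara Hepworth, rendered in Unreal Engine”—the kind of layered instruction practitioners feed to these generators.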
The names of artists or designers will also produce the desired effects. The prompt “Barbara Hepworth” summons her trademark smoothly modeled, bulbous shapes with open cavities. This raises the question of what it means for a traditional artist to spend a lifetime cultivating a style that can be aped in just a few seconds.
Unlimited Dream Co. said his abstract compositions explore the contrasts this makes possible; for example, he might “mix Brutalist architecture with intricate Arts and Crafts patterns or organic flowing shapes made of hard plastic.”
“AI is being trained on the collective works of humanity and in my opinion that makes it our collective inheritance,” according to Carychao. “We can co-create with the greats of the past, we can riff with our ancestors, and future creators will be able to riff with the bodies of work we leave behind.”
This mode of creating—dubbed the “Recombinant Art” movement by creative technologist and writer Bakz T. Future—reflects our digital age, in which vast troves of context-free content are endlessly reordered and reused. It makes sense that our mode of creation might start to mimic our mode of consumption.
“The conjuring of thoughts into visuals enables the exploration of creativity,” said Passos. “It can be a powerful tool for new artists to figure out that they are artists.”

©2022 Artnet Worldwide Corporation. All Rights Reserved.