In just a few years, Artificial Intelligence (AI) engines seem to have gone from clunky and robotic to smooth and remarkably sophisticated. We’re seeing it in text generators as well as in image generators. So how good are they really, and what does this mean for the world?
We’ve known for some time that AI can be a great disruptor not just for the tech sphere, but for the entire world. From helping us make elections fairer to changing how we learn and even protecting us online, the potential applications for AI are truly endless. Even when you make a food order or look for tickets online, the odds are that an AI algorithm is running somewhere behind the scenes.
But one place where we thought it wouldn’t make a dent is art.
Surely, art is something entirely human, right? Well, about that.
We recently wrote about GPT-3, OpenAI’s powerful text-generating model that can produce some strikingly human-like texts. Now, OpenAI has announced a new update to this model.
“We’re excited to announce the release of the newest addition to the GPT-3 model family: `text-davinci-003`. This model builds on top of our previous InstructGPT models, and improves on a number of behaviors that we’ve heard are important to you as developers,” the company announced.
Text-davinci-003 includes several improvements that can produce more engaging and compelling content, as well as enable the input of more complex instructions. For instance, here’s a small limerick one user produced:
Here’s another poem about why we should love AIs (not creepy at all, by the way):
OpenAI also shared an example of how the new version of the model works compared to the old one:
Granted, these are specially picked examples, and the engine undoubtedly still fails for some prompts, but in some instances, at least, it’s practically indistinguishable from human input.
This type of engine could be used in a number of applications, from summarizing large bodies of text to producing new text, maybe even fiction.
But before you get creeped out, it’s important to keep in mind that the AI doesn’t truly understand what it’s doing.
GPT-3 is a neural network trained on a vast body of internet text, in which it spots statistical patterns. It then uses these patterns to predict new text. The “intelligence” in artificial intelligence is a bit of a misnomer in that sense. Does it make any difference? Sort of. It means the AI doesn’t have “common sense” as we humans understand it, and it can sometimes spew seemingly random or nonsensical things.
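To get a feel for the “learn patterns, then predict the next word” idea, here is a toy sketch in Python. It is a drastic simplification and an illustration only: GPT-3 is a transformer with billions of learned parameters, not a table of word counts. The corpus string and function names below are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed follower of `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny made-up "internet" to learn from
corpus = "the cat sat on the mat and the cat slept near the cat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" — the most common follower
```

The model has no idea what a cat is; it only knows that, in its training data, “cat” tends to follow “the”. Scale that statistical trick up by many orders of magnitude and you get something like GPT-3’s fluent but understanding-free text.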
Its results are nonetheless impressive. But the results of image-generating AIs are perhaps even more impressive.
They say an image is worth a thousand words, but what about an ‘artificial’ image?
In the image-generating battlefield, several companies have made striking progress this year, and there’s already an array of tools you can use to produce images that bear an incredible likeness to photos or art. It’s not just bland or generic images — the algorithms can now mimic the style of artists.
Here too, however, we have to make the same distinction: the AI can crunch data, but it doesn’t truly know what it’s doing. Or at least, not in a way we can understand. That being said, the results look pretty convincing.
Here too, the possibilities seem endless. You can create your own art, or art for your project (whatever that project may be), without needing any specialized artistic skills. All you need to do is input a good enough prompt and the algorithm will create the art for you. This can democratize the field and make it more accessible to everyone.
But not everyone is really thrilled.
As always, there are unavoidable concerns with AI. In this case, it’s not hard to see why.
For starters, people writing text and creating art for a living may have their livelihoods threatened. This was painfully clear when an AI-generated image won an art competition, leaving many human artists unhappy.
There’s also the problem of amplifying inequality and discrimination — AIs need to be trained on existing data, and if that data has biases, they will only amplify those biases. Several experiments have already shown that AIs tend to be racist and sexist if these data biases are not addressed. Similarly, data can be biased to amplify censorship. Notably, an image-generating AI in China completely ignores all inputs about Tiananmen Square and other censored subjects. The same thing can happen with text-generating algorithms: you can use them to favor a particular ideology and bury another.
Then, there’s the problem of disinformation. Let’s say you hate a politician. You can create unflattering images of that politician, or paint them in whatever situation you’d like, and it’s not hard to see how that can go awry. For now, only a handful of companies offer these tools, and they are very restrictive about this type of potentially malicious input, but as more and more companies rush to release their products, these restrictions will undoubtedly loosen.
But these AIs are coming whether we like it or not. They’re not going to disappear anytime soon; in fact, we can expect the opposite: a flourishing in the near future. So what can we do?
The problem is that technology seems to evolve quicker than our social and ethical norms do these days. We don’t really have clear ethical frameworks or mechanisms to regulate and define what is and isn’t acceptable for this type of technology.
Giada Pistilli, principal ethicist at AI startup Hugging Face, one of the big players in the machine learning industry, says it’s hard to identify a clear line between censorship and moderation due to differences between cultures and legal regimes.
“When it comes to religious symbols, in France nothing is allowed in public, and that’s their expression of secularism,” says Pistilli. “When you go to the US, secularism means that everything, like every religious symbol, is allowed.”
If we look at other technologies that were revolutionary in their time — say, cars — something similar happened: the cars appeared first, and then they were regulated and safety measures were implemented (though you could argue we’re still trying to safely integrate cars into society).
But AI development now happens at breakneck speed, and the environment is often a technological wild, wild west. Can we do the same, or will we be swept away by revolutionary AIs that produce content in text and visual form?
For now, the answer is up in the air. But the solution is unlikely to come from technologists. It will likely be fueled by us, the consumers, and the people we elect as policymakers.

Alexandra is a naturalist who is firmly in love with our planet and the environment. When she’s not writing about climate or animal rights, you can usually find her doing field research or reading the latest nutritional studies.
© 2007-2021 ZME Science – Not exactly rocket science. All Rights Reserved.