When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Most people buy powerful gaming laptops to, well, play games. Some buy them to play and stream games. But now there’s an entirely new reason to buy a powerful PC: to create your own AI art, right on your own machine.
AI art is fascinating. Enter a prompt, and the algorithm will generate an image to your specifications. Generally, this all takes place on the web, with services like DALL-E, Craiyon, Latitude Voyage, Midjourney, and more. But all of that cloud computing comes at a price: either your request sits in a queue, or you’re limited to a certain number of requests. Midjourney, an excellent AI art algorithm, costs $10 per month for 200 images, for example.
Generating revenue from AI art has been one of the reasons that the algorithmic models haven’t been released to the public. (Another is that their creators feared they could be used for disinformation, violent images, or defamatory representations of celebrities.) Stability.Ai and its Stable Diffusion model broke that mold this week, with a model that is publicly available and can run on consumer GPUs.
Stable Diffusion is also available via a credit-based service, DreamStudio, as well as a public demo on HuggingFace, the home of many AI code projects. However, you can also download the model itself for unlimited art generation right on your own PC. It takes some doing, though: you’ll need to sign up for a free HuggingFace account, which will only then give you access to the Stable Diffusion code itself.
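Once you’ve accepted the license terms on HuggingFace, downloading the model weights typically looks something like the following. (This is a sketch: the repository name reflects the v1.4 release at launch, and the exact URL may change over time.)

```shell
# Install Git LFS first, so the multi-gigabyte checkpoint file
# downloads properly instead of as a tiny pointer file
git lfs install

# Clone the gated model repository; Git will prompt for your
# HuggingFace username and an access token in place of a password
git clone https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
```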
Stability.Ai released the model under the CreativeML OpenRAIL-M license, listed in the Readme file that accompanies the code. Essentially, it states that you accept responsibility for what you generate, and that downloading the model means sharing your contact information with its creators via HuggingFace. You also agree not to use the model to create hostile or alienating environments for people, to generate images of violence or gore, and so on. The model includes a content filter, which has already been circumvented by various forks of the code.
To install Stable Diffusion, we’d recommend following either AssemblyAI’s tutorial to install the “actual” Stable Diffusion code, or separate instructions to install a fork of the code that’s been optimized to use less VRAM at the expense of longer inference times. (Note that the latter code is a third-party fork, so there’s theoretically some risk in installing unknown code on your PC.)
Either way, you’ll need to download the model itself (about 4GB) and a few supporting files. You’ll also need to either install a third-party Python environment or use the Windows Subsystem for Linux, which gained GPU compute capabilities in 2020. Essentially, installation boils down to copying a few Linux commands and tweaking some file names.
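On Windows, the WSL route looks roughly like this. The repository URL, the `environment.yaml` file, and the `ldm` environment name all come from the official CompVis code; if you’re following a fork or a particular tutorial instead, the details may differ slightly.

```shell
# Install WSL with the default Ubuntu distribution
# (requires Windows 10 version 2004 or later, or Windows 11)
wsl --install

# Inside WSL: fetch the Stable Diffusion code
git clone https://github.com/CompVis/stable-diffusion.git
cd stable-diffusion

# Create and activate the conda environment the repo defines
conda env create -f environment.yaml
conda activate ldm
```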
For now, Stability.Ai recommends that you have a GPU with at least 6.9GB of video RAM. Unfortunately, only Nvidia GPUs are currently supported, though support for AMD GPUs will be added in the future, the company says.
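If you’re not sure how much video RAM your Nvidia GPU has, you can check from the command line with `nvidia-smi`, which ships with the Nvidia driver:

```shell
# Report the GPU name, total VRAM, and VRAM currently in use
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```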
It seems pretty clear that, eventually, all of this will be bundled into a GUI-driven application, whether for Linux, as a native Windows app, or at least as a Windows front end. But for now, prompts are entered via the Linux command line. This isn’t as intimidating as it may seem: you can simply enter the full command once, then tap the Up arrow to bring back the previous entry and edit the prompt.
Prompts will therefore look something like this:
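Assuming the official CompVis scripts, a basic prompt looks like this (the `scripts/txt2img.py` path and the `--plms` sampler flag come from that repository, so a fork may use different names):

```shell
# Generate images from a text prompt using the PLMS sampler
python scripts/txt2img.py --prompt "a red apple floating in space, digital art" --plms
```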
You can also add modifiers such as the size of the resulting image, how many iterations the algorithm will use to generate it, and so on, using the tutorial instructions.
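For example, again assuming the official CompVis script, the output size, sampling iterations, and image count are controlled by flags like these:

```shell
# 512x512 output, 50 sampling steps, three images per batch
python scripts/txt2img.py \
  --prompt "a red apple floating in space, digital art" \
  --H 512 --W 512 \
  --ddim_steps 50 \
  --n_samples 3 \
  --plms
```

Higher resolutions and more steps improve quality but, as noted below, quickly increase the demand on your GPU.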
Beware, however: it’s at this point that Stable Diffusion can begin taking a real toll on your PC. Creating more images, creating higher-resolution images, and more iterations all require additional processing power. The algorithm appears to put the most load on your system memory, SSD, and especially your GPU and its video RAM.
I tried loading Stable Diffusion on a Surface Laptop Studio (H35 Core i7-11370H, 16GB RAM, GeForce RTX 3050 Ti with 4GB GDDR6 VRAM) and not surprisingly ran into “out of VRAM” errors. Running it on a separate gaming laptop with a Core i7-11800H, 16GB of RAM, and an RTX 3060 laptop GPU with 6GB of GDDR6 VRAM worked, however, with the code fork optimized for lower VRAM. (I didn’t have a desktop PC on hand to test.)
Even then, generating a series of five images (the default) took about ten minutes apiece, at 512×512 resolution with 50 iterations. By contrast, DreamStudio, the same algorithm hosted in the cloud, completed the job in about two seconds, though of course you only receive a limited allotment of credits to generate images.
Of the AI algorithms I’ve tried, I still consider Midjourney and Latitude Voyage to be the best AI art generators; I wasn’t that impressed with my Stable Diffusion results. Still, quite a lot of AI art depends on “promptcraft”: entering the right commands to generate something truly cool. What’s great about Stable Diffusion, however, is that if you own a powerful PC, you can take all the time you’d like to fine-tune your algorithmic art and come up with something truly impressive.
As PCWorld’s senior editor, Mark focuses on Microsoft news and chip technology, among other beats. He has formerly written for PCMag, BYTE, Slashdot, eWEEK, and ReadWrite.