Wait, AI art is good now?
How AI art generators went from awful to photorealistic.
What is AI art?
AI art is art that has been generated using artificial intelligence. Generally, the mechanism is a text-to-image generator: you describe the image you want created, and the AI sets about making your dreams come true.
Some of these tools still aren’t brilliant. For instance, I typed “conservative MP weeping” into a free AI art generator. After about 20 seconds, I was given images of ships sailing into a gloomy horizon and some sad-looking people. It might be forecasting something, and it is sort of evocative of a distressed government, but it’s not exactly Rishi Sunak blowing into a Kleenex, is it?
However, not all AI is created equal. Some tools are actually pretty good at hitting briefs now, and they’re improving rapidly. DALL·E – like Salvador Dalí, but also like WALL‑E – was among the first widely known AI models that could generate images from text descriptions. It arrived in 2021 and was developed by OpenAI, a research lab in Silicon Valley. There’s now a DALL·E 2, which is even more capable, and the tech is available to the public for free.
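For the curious, the “you write a description, it sends back pictures” mechanism is just an API call under the hood. Here’s a minimal sketch of how a request to OpenAI’s public image-generation endpoint is shaped – treat it as illustrative rather than gospel, since you’d need your own API key and field names can change between API versions.

```python
import json

# OpenAI's image-generation endpoint (as publicly documented for DALL-E).
API_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt, n=1, size="1024x1024"):
    """Assemble the JSON body for a text-to-image request:
    the prompt text, how many images to generate, and the size."""
    return json.dumps({"prompt": prompt, "n": n, "size": size})

body = build_image_request("conservative MP weeping")
# To actually send it, you would POST `body` to API_URL with an
# "Authorization: Bearer <your-api-key>" header; the response
# contains URLs for the generated images.
print(body)
```

That’s genuinely all the user-facing machinery there is: a sentence in, pictures out.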
There have been entire exhibitions of art created using DALL·E 2, such as the one currently on display at San Francisco’s Bitforms Gallery, featuring work made with DALL·E 2 as well as other AI. Some AI-generated art has sold for huge sums: Edmond de Belamy, created by having an AI study a dataset of 15,000 portraits painted between 1300 and 1900, sold at Christie’s for $432,500 in 2018.
Speaking to Art Basel Miami Beach Magazine, Kelani Nichole, founder of the L.A.-based experimental media art gallery TRANSFER, said that if artists use AI, “they have an ability to present the ideas of their time in a way that can help shape culture in a way that is hopefully more magical, ethical, equitable, subversive or challenging.” Which is quite good, surely?
So, why is AI art so good now?
In short, because it is very good at learning, and specifically at machine learning: the name given to how AI learns from data and experience, as opposed to being explicitly programmed. The more it is used, the more data it sees, and the better it becomes.
Speech recognition, for example, improves through machine learning. If the AI hasn’t heard a word before, it might struggle to recognise it, but hear that word over and over and it can learn it from the data, without anyone having to manually program in an understanding of it. AI has learned to improve its art-making abilities in the same way.
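The learn-from-exposure idea above can be boiled down to a toy. This isn’t a real speech recogniser (those involve a lot more maths), just a sketch of the principle: nobody programs in a rule for each word; the system keeps counts of what it has heard, and its guesses get better as the examples pile up.

```python
from collections import Counter

class ToyRecogniser:
    """A deliberately tiny stand-in for a learning system: it has no
    built-in rules, only a tally of the examples it has been given."""

    def __init__(self):
        self.counts = Counter()

    def hear(self, word):
        # Every new example becomes training data.
        self.counts[word] += 1

    def guess(self):
        # Predict the most frequently heard word so far.
        if not self.counts:
            return None  # no experience yet, nothing to recognise
        return self.counts.most_common(1)[0][0]

model = ToyRecogniser()
print(model.guess())  # None: it has never heard anything

for heard in ["corgi", "corgi", "corgie", "corgi"]:
    model.hear(heard)
print(model.guess())  # "corgi": the majority spelling wins over the typo
```

Scale that tallying up to billions of images and captions and you have, very roughly, why the art generators keep getting better the more we use them.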
And of course, the better AI becomes, the more people are going to use it. So the improvement accelerates rapidly. This is why we now see incredibly photorealistic work generated by AI, such as this corgi inside a sushi house.
Is this a good thing or a bad thing?
Well, it’s good in terms of the AI being able to generate very accurate images based on what you ask for. Morally, there are questions: some around how the AI is used, others around who uses it.
For example, advancement in AI is a large part of why deepfakes have become increasingly hard to spot with the human eye, which poses risks for the spread of misinformation. This is partly because deepfakes are created by running fake images through AI over and over until they become incredibly realistic.
There’s also bias due to the people using the AI. Because there are more men using it than women, AI tends to have a male gaze and “the dataset may be biased toward presenting women in more sexualised contexts,” OpenAI has said.
There’s also worry that it will cause some image-makers and creators to lose their jobs. Shutterstock is selling AI-generated images now, for instance. It will apparently reimburse creators whose art is used to train AI, but still, it’s a bit rude to replace humans with some code. We tried that with Liz Truss and we all saw what happened there.