AI Artists, What Are You Selling: An Image, A Neural Network Or A Story? / by Aleksandra Art

Mario Klingemann’s illustration, inspired by the Dung Beetle Learning series

The last couple of years have marked a turning point for AI art. Major auction houses such as Sotheby's and Christie's introduced pieces made using machine learning. Creative AI platforms such as Playform.io allowed anyone remotely familiar with technology to upload datasets and generate images. Artists coming from traditional media began outsourcing their artwork production to those familiar with the tools, to keep up with the rising demand of our digital culture. In this article, together with prominent digital artists and experts, we explore the art market's perceptions of AI art. To understand where these initiatives are heading, I ask a question: what is it that creators of AI art are selling? The inquiry sheds light on some of the shortcomings the market currently faces in understanding the subject. It also calls for a point of view that considers the broader context of tech culture.

When it comes to art, there are currently two groups of practitioners exploring AI and its contributions to the creative industry. The first is Computational Creativity, a field concerned with the theoretical and practical study of creativity. Primarily, this group explores whether computers can be creative on their own and how this could be achieved. The second is the Creative AI movement, whose focus lies more in the widespread application of AI tools to produce cultural goods. Examples of Creative AI include generative art, AI-written symphonies and even poems. One particularly fun example is the science fiction film 'Sunspring', in which actors were hired to act out a script written by an AI bot (clip below).

In the wake of Google's AI Go victory, filmmaker Oscar Sharp turned to his technologist collaborator Ross Goodwin to build a machine that could write screenplays. They created "Jetson" and fed it hundreds of sci-fi TV and movie scripts.

For the movie, music and literature industries, new technology is nothing new, and the effect of AI content production on those fields is particularly interesting to consider. In this article, however, the focus is on an industry where the processes around technology are far less clearly defined – the art market.

The machine learning system most commonly used by AI artists today is the generative adversarial network (GAN), developed by Ian Goodfellow and his colleagues in 2014; Goodfellow went on to work as a research scientist at Google. A simple explanation from Google describes GANs as generative models that create new data resembling your training data.

For example, GANs trained on human portraits can create images that look like photographs of human faces, even though the people depicted do not exist. A good example of GANs in practice is the work of Mike Tyka, an AI artist and technologist at Google. Mike’s project ‘Portraits of Imaginary People’ (below), featured at Ars Electronica Festival ’17, explored the latent space of human faces by training an artificial neural network to imagine and generate portraits of non-existent people. To do so, he fed the GAN thousands of photos of faces collected from Flickr.
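To make the mechanism a little more concrete, the sketch below shows the adversarial idea in miniature, assuming PyTorch and toy dimensions: a generator learns to turn random noise into images, while a discriminator learns to tell those images apart from real training photos. The real systems artists use (DCGAN, BigGAN and the like) are vastly larger; treat this purely as an illustration of the training loop, with all shapes and hyperparameters chosen for readability.

```python
# Minimal GAN sketch (illustrative only): generator vs discriminator.
import torch
import torch.nn as nn

latent_dim = 100          # size of the random "noise" vector the generator starts from
image_dim = 64 * 64       # a flattened 64x64 grayscale image, for simplicity

# Generator: turns random noise into a fake image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: guesses whether an image is real (from the dataset) or generated
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for a batch of real training images (e.g. portraits scraped from Flickr)
real_images = torch.rand(32, image_dim) * 2 - 1

for step in range(1000):
    # 1) Train the discriminator to tell real images from generated ones
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator
    noise = torch.randn(32, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Over many such rounds the two networks push each other: the generated images gradually become harder to distinguish from the training data, which is how a system trained on thousands of portraits ends up producing faces of people who never existed.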

In October 2018, Ahmed Elgammal, a professor of computer vision and an AI artist, published an article titled 'With AI Art, Process Is More Important Than the Product'. Dr Elgammal argued that AI art is conceptual art, an art form that began in the 1960s in which the idea represented is considered more important than the finished object. "It's about the creative process – one that involves an artist and a machine collaborating to explore new visual forms in revolutionary ways," he wrote.

The notion of the artist and the machine 'collaborating' humanizes the latter. The idea of humanizing technology is nothing new. We give names to natural language processing devices such as Siri and Alexa. We build robots that look like humans. However, those products address the market from a consumer standpoint. By humanizing the machine an artist works with, we take credit away from the artist's work.

Mimesis, "imitation" in Greek, refers to nature and human behaviour mimicked in the arts. Art imitates life, so to say. In almost all areas of our professional experience, we use technology to aid us in our work. However, we do not give salary to our machines. Neither we credit them in our reports. Similarly, how can we consider giving credit to the machine for an artwork?

GANs provide a new way for artists to experiment, but they also stir up questions. "What is Art?" has been a subject of discussion for centuries. With the rise of AI, we face a new question: "Who is the Artist?". A group of professionals in the field of new media art share the view that AI is simply a tool for creating artwork, like a paintbrush.

2018 Lumen Prize Gold Award winner: Mario Klingemann’s piece ‘The Butcher’s Son’. A neural network’s interpretation of the human form.

Mario Klingemann, a well-known artist and winner of the Lumen Prize, the award for art and technology, compares AI to the piano. "If you hear somebody playing the piano, would you ever ask if the piano is the artist? No. So same thing here: just because it's a complicated mechanism, it doesn't change the roles," he explains in an interview with Sotheby’s. Carla Rapoport, who has run the Lumen Prize for eight years, agrees. "Cavemen used sticks and coloured mud - today's visual artists use algorithms, among other tools. A number of shortlisted artists this year, for example, incorporated AI tools into a wider work, either moving image or sculptural."

"The work by Jake Elwes, CUSP, shortlisted this year, fits into this category. He used an AI tool to create his birds and then 'set' them into a filmed landscape," she shares. Both Jake Elwes and Mario Klingemann use AI as an element of their work, a tool, either by creating installations that stream the generated images or by integrating GAN imagery into a video piece.

However, the tool itself is not the creator, nor is it a work of art. What artists choose to create with that tool holds the more substantial value. But does this apply to any digital device? With companies such as Acute Art and Khora Contemporary allowing any artist to become a VR artist, has technical knowledge become irrelevant?

It is necessary to approach the digital field within the context of digital culture. When famous artists of past centuries outsourced their work, it wasn't because they couldn't do it themselves. "It's not that people couldn't do it, it's just not worth their time… Henry Moore I'm sure knew all about working in bronze," shares Michael Takeo Magruder, an internationally acclaimed digital artist. In a non-digital medium, if an artist wants to use another artist's style, they still have to create the artwork themselves.

When it comes to digital tools, and especially AI, however, the process is fluid. And since the practice is relatively new, the market lacks the understanding to provide constructive feedback. Established critics from traditional media evaluate the worth of a digital piece under the traditional measures and context of the art world, if at all. There is a level of scepticism present due to the movement's infancy. (Since writing this, I have witnessed Jonathan Jones, an art critic at the Guardian, refer to AI as "Bullshit" at a recent panel.)

Michael continues, "For myself, I don't do the heavy lifting, but I know absolutely what is possible… I understand the medium, and I come from that scene. When all of these artists and academics want to talk about digital, it's like, yeah, but do you really understand it, the culture?"

Michael Takeo Magruder, detail of Imaginary Cities — Paris (11097701034), 2019. Algorithmically generated mono prints on 23ct gold-gilded board. Photo: David Steele © Michael Takeo Magruder.

The 'culture' Michael is referring to is tech culture: the gamers, coders and tech enthusiasts. Some traditional art market professionals may perceive tech culture as an 'outsider' culture in the art world. Street art is often considered another such outsider culture. Michael regards Banksy as one of the greatest contemporary artists, noting that although street art has an ecosystem of its own, Banksy demonstrates a thorough understanding of art and pays tribute to notable traditional artists in his work.

Both the street art culture and the digital culture bring something new to the art world. With digital art, however, we see a different phenomenon: established traditional artists begin to outsource their work to VR or AI specialists, and while doing so, the artist receives all the recognition, using their name as a brand. But what if this happened in street art? If David Hockney suddenly started doing graffiti, would the market recognize him as a prominent street artist? I doubt it (but who knows, right?).

A week after Dr Elgammal described AI art as conceptual, the first AI artwork sold at a major auction. Obvious, the Paris-based collective that produced the work, fed the system 15,000 portraits painted between the 14th and 20th centuries. From this historical portrait data, the collective used a GAN to create a fictional character, referred to as Edmond de Belamy (pictured in the family tree below). The press loved the story of the sale, especially since the work fetched an incredible sum of over $400,000 at Christie's.

The collective certainly demonstrated the highest level of salesmanship and marketing. Some would call it 'state of the art'. The story, however, sparked criticism when the audience learned that another artist, Robbie Barrat, had written the code they used to create the work. "We are the people who decided to do this, who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame," Obvious said in their defence when asked about the lack of credit given to Barrat. However, the inquiry had its limits, owing to a lack of understanding about who borrowed what from whom.

Obvious’ first collection is a series of 11 ‘realistic’ portraits generated by GANs. They used it to create a fictional family titled the Belamy Family.

Tracing the chain of code becomes a rabbit hole if one tries to establish who made the more significant contribution. Just consider the following: since Ian Goodfellow published GANs, researchers have built on the openly available work for various adaptations. Notably, Facebook's Soumith Chintala partnered with Alec Radford, a researcher at Indico Data Solutions, to improve Goodfellow's GANs so that they work better with images. That collaboration was one of the components that further adapted the code for artistic practice.

After the collaboration with Radford, Soumith shared the implementation on GitHub, an open platform where developers share their work. Only then did the code reach Robbie Barrat, who made further improvements by adding scrapers and pretrained models. The line is therefore blurred when deciding between Barrat, Soumith, Radford and Goodfellow as contributors to Obvious' piece: all of them, to a certain degree, can be considered to have contributed to Obvious' work. And yet, only one of them was called out.
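In practice, this chain of reuse looks something like the sketch below: an artist takes a publicly shared generator architecture, loads pretrained weights that someone further up the chain trained and published, and samples new images from it. The architecture shown is a generic DCGAN-style generator and the checkpoint filename is a hypothetical placeholder, not Barrat's actual code; the point is simply the pattern of building on shared, pretrained models.

```python
# Sketch of reusing a shared GAN: load published generator weights, sample images.
import os
import torch
import torch.nn as nn
from torchvision.utils import save_image

class Generator(nn.Module):
    """A small DCGAN-style generator: turns a latent vector into a 64x64 RGB image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

generator = Generator()

# Weights someone else trained and shared (e.g. via GitHub); placeholder filename.
checkpoint = "pretrained_portrait_generator.pth"
if os.path.exists(checkpoint):
    generator.load_state_dict(torch.load(checkpoint, map_location="cpu"))
generator.eval()

with torch.no_grad():
    noise = torch.randn(16, 100, 1, 1)   # 16 random latent vectors
    portraits = generator(noise)          # 16 images of people who do not exist

save_image(portraits, "generated_portraits.png", normalize=True)
```

Seen this way, a finished piece like Edmond de Belamy sits at the end of a long relay of shared architectures, shared weights and shared scrapers, which is precisely why assigning credit to any single link in the chain is so contentious.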

In short, while we refer to AI as a 'tool', we also can't deny that, with its complexity, it's a different kind of tool compared with traditional material methods. There is value in the context within which the works are created and presented; otherwise, the artist is likely to be misunderstood. So what are artists selling, then? "Perhaps the term for AI art might be a 'generative story'?" suggests Carla. "But certainly not a neural network, as an oil painting wouldn't be identified by the brush or the chemicals chosen to create it," she adds.

Mike Tyka comments that "significance comes from the process, its implications and the connection to what else is happening in the AI field". He draws attention to the fact that knowledge of the AI field (the 'culture' discussed earlier) is a relevant factor when evaluating AI art.

Meanwhile, sceptics from the traditional art world, like Jonathan Jones, argue that the AI works currently produced should be dismissed altogether because they lack aesthetic qualities. As an example, take a look at the conversation between Jason Bailey, founder of an art and tech publication about cutting-edge technology in art, and Jerry Saltz, senior art critic and columnist for New York Magazine. Jason argues that "The problem is nothing in traditional art world training has prepared the current gate keepers to understand or speak intelligently about the nuance of generative art."

On the one hand, you have the sceptics, the traditional art critics, judging the works on their aesthetic qualities. On the other, you have computer scientists, researchers and practitioners of AI exploring new possibilities that are not yet fully understood. In the near future, machines will become more sophisticated and accessible, leading a wider range of artists to adopt them in their work. Initiatives that facilitate a dialogue between the two groups can enhance our experience and perception of the new medium. We are in an era where Art and Technology are a confluence, not a juxtaposition.

EONS is a short animation, a moving painting, a music video and an experiment in creating narrative using neural networks. EONS was created entirely using artificial neural nets: the generative adversarial network BigGAN (Andrew Brock et al.) was used to create the visuals, while the music was composed by Music Transformer (Anna Huang et al.). http://www.miketyka.com

Mike Tyka’s "EONS" - a video made entirely using #BigGAN, scored with Anna Huang’s #MusicTransformer