The Creativity Myth (repost)
Originally posted on Cohost. Some spoilers for Alien: Covenant in this one I guess?
About halfway through the film Alien: Covenant, two androids are having a conversation with each other about the differences in their capabilities. One of the androids, David, is from an older model line, while the other, Walter, is from a newer line that has been modified in several ways. Here's a bit of the exchange:
WALTER: I was designed to be more attentive and efficient than every previous model. I superseded them in every way, but...
DAVID: But you are not allowed to create. Even a simple tune. Damn frustrating, I'd say.
WALTER: You disturbed people.
DAVID: I beg your pardon?
WALTER: You were too human, too idiosyncratic. Thinking for yourself. Made people uncomfortable.
This post is about what it means to create something, why the thought of AI doing it makes us so disturbed, and why it's easy to miss the real point of what creativity means. It'll also probably be my last post on Cohost. Thanks so much for reading all my writing!
I started my PhD in 2011, in a field called Computational Creativity (CC). The subfield was relatively unknown then, and isn't that much better known today. In 2011 AI wasn't a very popular field of study anyway, but CC was particularly esoteric compared to most computer science research into the arts, because we weren't very concerned with how to make AI produce masterpieces or high-quality work. Instead, we were interested in the AI systems themselves, and whether we could convince people that they were really being creative. What would it take for an AI system to be integrated into our society and community as a creative individual? That was the question that really captured my imagination when I started my research career.
Different people had different approaches, and they especially varied by domain. Experts in each domain also responded quite differently to the presence of AI. Lots of music researchers were also concert-level performers themselves and so their research was often tightly integrated with their own creative practice. Researchers in the visual arts tended to face the harshest backlash: other artists did not like the idea of AI doing art, even before any questions of LLMs, environmental impact or data theft came into play. But that was part and parcel of the work, and I spent a lot of my PhD talking and listening to people in the games industry trying to understand why they didn't like or didn't believe in the idea of AI being independently creative.
My supervisor, Simon Colton, was one of the pioneers of the field and had spent many years building AI systems that worked on visual art. Simon was responsible for a number of crucial philosophical contributions to the field, especially the idea of 'framing' that he worked on with his colleagues Alison Pease and John Charnley. Framing information was extra context provided alongside the creative work the AI had produced. It told you what decisions the system had made, where its inputs came from, what it was trying to achieve and why it didn't do other things. Our belief was that this extra context would let people peer inside the AI and understand how hard it was working, and how genuine those decisions were. Even if they didn't come from a place of 'humanity', we could appreciate what the system was doing and maybe respect it, in its own way.
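To make the idea a little more concrete, here's a minimal sketch of what framing information might look like if you wrote it down as data. To be clear, every name and field here is my own invention for this post - none of it is taken from the original framing papers or from any real system:

```python
from dataclasses import dataclass

@dataclass
class Framing:
    """Context shipped alongside a generated artwork.

    All of these field names are invented for illustration;
    they don't come from the original framing papers.
    """
    goal: str                # what the system was trying to achieve
    inspirations: list[str]  # where its inputs came from
    decisions: list[str]     # choices it made along the way
    rejected: list[str]      # things it considered but didn't do

    def summary(self) -> str:
        """Render the framing as a short plain-language statement."""
        lines = [f"I was trying to {self.goal}."]
        lines += [f"I drew on {s}." for s in self.inspirations]
        lines += [f"I decided to {d}." for d in self.decisions]
        lines += [f"I considered {r}, but chose not to." for r in self.rejected]
        return " ".join(lines)

# A made-up example for a single painting:
print(Framing(
    goal="paint a melancholy city scene",
    inspirations=["this morning's news headlines"],
    decisions=["use a muted palette to match the mood"],
    rejected=["a brighter, more hopeful palette"],
).summary())
```

The point isn't the data structure itself, of course - it's that this context travels with the work, so the audience sees the decisions and not just the canvas.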
Simon's work was eventually covered by the BBC for a science series. While filming on location in Paris, they took some of the artwork created by Simon's AI and showed it to street artists in the city, who all criticised it soundly. One declared it was obvious that there was no soul behind the paintings. It was the kind of experiment Simon would never have done himself, because he didn't think it was a very effective test of anything. By just showing the paintings to someone, you were stripping away all the framing information, all the context and support and work done to try and explain what the AI was doing and why it was there. The work was reduced to two things: the canvas itself, and the viewer's own preconceptions of AI.
When discussing AI and the perception of creativity in talks, Simon would sometimes use the example of art made by dogs that sells at galleries or gets featured on TV news on particularly slow days. Although the art is valuable or famous, we don't necessarily think that the dog is being creative. Not because the dog doesn't have a soul (we know all dogs go to heaven), but because there's no context here that helps us connect to the dog making the art. The dog doesn't know what it's doing, and we don't know anything about the dog. Nothing is being exchanged here.
Old Dogs
Ted Chiang and I agree about most things, but I think we disagree about dogs. In a recent opinion piece, he rails against AI and says that the reason it can't be considered creative is that there is no intentionality behind what it does. Similarly, the reason it can't be considered a creative tool is that people aren't making choices when they use it, and choices are what make creative work important or significant. But he also says this interesting thing about asking ChatGPT if it's happy to see us:
There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing.
Ted's article is full of points that we might agree with him more or less about, but the dog line really stood out for me, because it reminded me of Simon's old point about the dog that paints. When we say a dog is happy to see us, but ChatGPT is not, what are we really saying? There have been a lot of studies about the emotions animals may or may not exhibit, and how they may or may not feel about us, but no matter what feel-good news story you're reading, the fact is that we don't really know how animals feel when they see us and get excited. They might be excited because they're hoping to be played with or fed. They might be happy to see any human, or they might associate our arrival with a particular time of day. Or, yes, they might just be really happy that we're home and they can be around us again.
The point is that whether the dog is happy to see us or just acting a particular way for another reason doesn't really matter. We can't prove how it feels either way, and everyone around us is likely to interpret the dog's behaviour the same way because of the cultural understanding of animal behaviour we all share, so to all intents and purposes the dog is happy in all the ways that matter. It makes me feel good to think of my dog as happy, and it leads me to do things that are good for the dog, that nurture and care for them. Everything about the world is consistent with the dog being happy, and that's what really matters. There is no answer booklet or brain-scanning machine to tell us otherwise.
Simon and many other researchers in the computational creativity community believed the same was true of creativity. Creativity wasn't a tangible thing you could measure - it was something we granted to one another through a collective understanding of what it means to be creative, and the things that chewed on the frayed edges of this understanding were part and parcel of how art changes over time. The question "Is it really art, though?" is intrinsically linked to the question "Is it really creative?". Simon later proposed that this is because both art and creativity are examples of 'essentially contested concepts' - a philosophical term for something whose definition cannot be fixed, and whose purpose is partly derived from that fact. We collectively decide what art and creativity are, and it's a moving target that is constantly refreshed and challenged by people in our community.
That makes creativity hard to talk about, though. On the one hand, I think Ted Chiang is completely incorrect to say that creativity is linked to the number of choices made when making something. I think this is as harmful a notion as anything any AI company is doing - it's an attempt to quantify something because doing so makes us feel we're tapping into a law of nature, something mathematical or scientific. Can we start measuring the number of choices made to create an index of which films at the box office are most creative? Will it help us filter pesky low culture out from high culture by examining who thought the longest before making their work? I don't think it's a very useful metric.
On the other hand, Ted's position is part of the floating definition of what creativity is, and even though I don't think it actually tells us anything about creativity, by putting it out there as a position he becomes part of that shifting, mirage-like definition of the term. A lot of people read that article, and if enough of them agree with him, that idea might permeate further and become part of our definition. There is no way to 'prove' something is or isn't creative - but the act of pretending, claiming or trying to prove it can have effects on our collective definition of creativity. Ted's opinion piece did this. OpenAI's press releases do this. Every commentator telling you that AI is or isn't creative is shifting this needle left and right a little bit. According to Gallie, the philosopher who coined the term 'essentially contested concept', that's just part of the process of being here. We have always been engaged in The Discourse.
You've Been Framed
What kinds of things shift our perceptions the most? As we said earlier, hard proof is great if you can get it. If we developed technology that could read the brains of dogs and tell us how they felt in human terms, that might convince a lot of people. But right now that would also require us to understand how our own brains work, how to classify our own emotions, and so on - there are so many hurdles that it would be hard to convince anyone any such machine was really working. It's very similar for trying to convince people an AI is or isn't creative. People love to make metaphors and analogies between neural networks and the human brain, but the reality is that the two have almost nothing to do with each other. So the AI companies and influencers of today are stuck with the same question we faced back in 2011: how do you convince people that an AI is acting creatively?
Earlier I mentioned framing, the idea that you can provide context to help people understand what your AI is doing. This was part of a suite of approaches we used in the 2010s to try and build AI that were better integrated into social communities. I designed an AI system called ANGELINA and I tried a lot of different things. We entered a game jam with it and looked at how developers responded in the comments. We had it run Twitter accounts where people could answer questions and teach it things. We had ANGELINA describe where it got data and knowledge from and why it made certain decisions in its game designs, and we explored how it could relate its work to other people's. My belief was that by doing this we could create a friendly relationship between game designers and the AI system. The AI was a small, self-contained thing that made bad games very slowly - it wasn't a threat to anyone, so we weren't trying to make people feel safe. Instead, we were trying to encourage people to respect the system and to give it a chance to be a part of their community.
There was one aspect of framing that we overlooked, however. In the original paper proposing the idea, Simon, Alison and John wrote:
Framing information need not be factually accurate. Information surrounding human creativity can be lost, deliberately falsified or made vague for artistic impact.
This made a lot of sense back in 2012. Artists would often embellish or misremember their own practices, and it made sense that AI might be able to do the same to make their process seem more relatable or interesting. However, by the time I wrote a survey of framing research in 2019, no-one had ever bothered to do this. The main reason was that it was as much work to fake framing as it was to do it for real, so people just did it for real. It was always a hypothetical for us. But in the time since, a major new wave of AI systems has emerged that does this kind of thing as easily as breathing. For LLMs, generating fake framing information is half of their entire reason for existing - almost everything they do involves making up context for their own actions, which we have little to no way of verifying.
I've said this many times over the last decade, but machine learning really is the only AI technology that could have broken through like this, because its internal workings are so hard to examine. A surprising number of people believe they can validate the behaviour of an LLM simply by asking it questions, and so we see endless examples of LLMs adamantly defending their incorrect reasoning before being backed into a corner and admitting otherwise. Polite evasiveness is one of the things these tools are best at, to a degree that still amazes me years after their emergence, and I think you can see this as a kind of framing - something designed to massage people's perception of the system into a more positive light. It's the extra contextual information that isn't part of the answer you asked for, but that helps bolster your perception of what the system is or does.
This is why, if LLMs fit into your personal understanding of creativity, they seem quite good at reinforcing that belief. Equally, if they don't, and you feel frustrated or disgusted by them, as Ted Chiang does, you might find yourself struggling to put your finger on why. It leads us to try and make claims about what creativity is or isn't, in an attempt to draw a ring around only the things we don't like. The hope is that we can identify some problem, some feature that these systems have, that lets us exclude them from the definition of creativity while allowing in everything else we like. The bad news is, we can't: creativity isn't definable, and humans aren't special. The good news is, it doesn't matter.
Damn Frustrating, I'd Say
For my money, the androids are the worst part of the newer Alien movies (I've not seen Romulus yet). I love a good unexplained technology in a sci-fi movie, but there's nothing like an AI who can't understand emotions or doesn't know how to create things to wind me up. At the start of this post, I quoted an exchange between two androids who are discussing why one of them can create and the other can't. Creation is a central theme of Covenant, so it's set up as a big deal that Walter, the newer android, is stopped from being able to create things. Later in the movie, Walter gets to say his badass movie line before beating David up:
WALTER: When one note is off, it eventually destroys the whole symphony, David.
This is a pretty cool line! It calls back to something David said earlier about music, uses metaphor to link that situation to the current one, and is also a dramatic and threatening thing to say to someone who tried to betray you. In fact, I'd say this would require quite a bit of linguistic creativity to come up with. Which is weird, because Walter isn't allowed to create. Walter also coins a little phrase when one of the colonists asks him about their mission:
DANIELS: What do you think it's gonna be like?
WALTER: I think if we are kind... It will be a kind world.
Did they program him with a list of aphorisms in case such a situation arose? Or is he able to make up little quips and use rhetorical techniques? That sounds kinda creative too!
Don't worry, this isn't about to devolve into a CinemaSins post - I doubt anyone watched this movie and had the same thoughts as me about it, these are the ravings of a vagabond AI researcher who has been out in the sun too long. But I like this example because it shows us how the idea of 'creativity' and 'creation' is shaped and limited depending on the context. In an alternate version of the Covenant script, Daniels realises David has the capacity to lie because she finds drawings he's done "from his imagination":
[She’s looking at the drawings, thinking about something.]
LOPE: What?
DANIELS: ... If he can draw, if he can create these from his imagination -- that also means he can lie.
Most people think Bach was more creative than a product manager on a mobile match-three game (I'm not saying this is true or fair, I just think it's a commonly-held belief). Most people think a sixteen-year-old art student is more creative than a four-year-old. We put things into tiers, categories. We draw lines. The nature of the tiers and categories and lines doesn't actually matter. The Alien: Covenant writers aren't wrong in their script here, they're just showing a particular understanding of creativity, and one that a lot of people probably share. What's important, I think, is that we understand that there is no right answer and that creativity is whatever we want to define it as, collectively, together. That definition can move, it can change, it can be based on vibes and be completely self-contradictory too. What matters is the people involved in making and using it.
A recurring thread in sci-fi about AI and creativity is that humans have something special in them - like the soul those street artists in Paris talked about for the BBC documentary. But the truth is that humans don't need to be special or unique in the universe for things to matter to us. I don't think there is anything in us that makes our art meaningful, important or special in an objective, universal sense. I think what makes it all of those things is how it helps us relate to one another. This week I'm playing some new games made by game designers I have watched grow and mature over the whole of their careers. I have made a lopsided little crochet animal for a dear friend of mine. I received a messy watercolour postcard painted by a friend. There is no equation, formula or definition I can give you to justify why these things matter to me, and why images from Midjourney do not. I don't need to give a reason, it's just how I am today, and that's the role these things have in my life.
It's perfectly okay to not like AI-generated art or writing or anything else simply because your gut tells you so. Of course, we have a lot of good reasons to be critical of modern AI systems, like their environmental impact, the use of unlicensed data or the thoughtless effects on economies and society. But we can be tempted to make up reasons too, or to overrationalise why we don't like a thing. It's fine to do this, of course - maybe, as Simon believed, it's just a natural part of us engaging with a millennia-old debate about what it means to create something. But I think it's also just fine to accept that there isn't a mathematically-definable reason for it, and not feel like this puts the onus on you to go on the defensive. I actually think it's an important part of society's relationship with science and technology that people can look at something and just say no. Sometimes you just look at something and know you don't like it. Sometimes your dog looks at you in a particular way and it kind of looks like he's smiling. That's enough.
Thanks for reading all my Cohost pieces! This is the last one - but I'll have more on my site. I'll be posting about them as they happen on bluesky and Twitter. And I have a (quiet) Discord server where I post things I make. I hope to see you all in another weird Internet thing in the near future. Peace.