Posted: June 16th 2024
You Don't Hate AI, You Hate Capitalism
This is a two-part post, both parts of which went up on Cohost over the same weekend. I deleted them due to the commentary getting a little aggro, but some people have asked for an easy-to-cite copy of it, so I'm putting it up here. I don't think the piece is the most perfect or elegant way of expressing the ideas, but I'm going to post it unedited so people can weigh it up as it originally stood. If you'd like to explore other writing of mine about AI and my criticism of it, you can find it on my Cohost or in this recent series I wrote for Rock, Paper, Shotgun.
Part 1 (You Don't Hate AI, You Hate Capitalism)
No-one has enjoyed being on the Internet since around 2003. We all understand this to be true. Even so, lately it's become an especially dire place to be, and I'm not talking about "enshittification" or "dead internet theory" or whatever phrase someone is trying to coin this week to describe Technology Bad. This is a more personal hell, a sort of bespoke, algorithmically-prepared hell just for me, where everyone is having every possible take about AI, all the time, simultaneously, and they are all being broadcast into my eyeballs at the exact frequency that makes my brain explode. So I'm writing a short post to try and break down a couple of things I've seen recently, and also to maybe give some people a different way of thinking about what they're seeing.
And then I'm going to log off and go lie down in a dark room.
Why Do Cats Eat Grass
In 2013 I entered the 7-Day Roguelike Competition and made a prototype called A Rogue Dream. At the start of the game you type in a noun you want to play as, like cat or journalist, and the game then tries to theme its elements around your chosen noun. If you choose to be a cat, the enemies are water droplets, the food turns into cat grass, and you have abilities like scratch and bite. It did this through a really smart method I had learned about the year before - it tapped into the Google Autocomplete API (which was really easy back then, it was just a thing you could do), asked an incomplete question like "Why do cats hate..." and then grabbed whatever the most common autocomplete response was. Cats hate water, so the enemies became water. I had different question prompts for different kinds of information I wanted to put in the game.
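For the curious, here's a minimal sketch of what that kind of script can look like. This is not the actual A Rogue Dream code: it assumes the old unofficial suggest endpoint (suggestqueries.google.com), which isn't a supported API and may not return the same data today.

```python
# A minimal sketch of the autocomplete trick, assuming the old unofficial
# Google suggest endpoint, which returned JSON shaped like
# ["query", ["suggestion 1", "suggestion 2", ...]]. Not a supported API,
# and not the code A Rogue Dream actually used.
import json
import urllib.parse
import urllib.request

def autocomplete(prefix):
    """Fetch autocomplete suggestions for an incomplete phrase."""
    url = ("https://suggestqueries.google.com/complete/search?client=firefox&q="
           + urllib.parse.quote(prefix))
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8", errors="replace"))
    return data[1]

def theme_for(noun):
    """Guess enemies and food for a player-chosen noun, A Rogue Dream style."""
    hates = autocomplete(f"why do {noun}s hate")   # naive pluralisation
    eats = autocomplete(f"why do {noun}s eat")
    return {
        # Take the tail of the top suggestion: "why do cats hate water" -> "water"
        "enemies": hates[0].split()[-1] if hates else "rats",
        "food": eats[0].split()[-1] if eats else "bread",
    }

print(theme_for("cat"))  # e.g. {'enemies': 'water', 'food': 'grass'}
```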
A Rogue Dream was quite a successful prototype for me. I showed it at a few events, and I got invited to turn it into a physical installation at the National Videogame Museum. I ended up cancelling this plan, though, because the ethical flaws in it meant there just wasn't a way to make it safe for a museum setting while also keeping the interesting parts of it intact. But it was a lot of fun, the exact kind of hacky prototype I really enjoy making, and it generated a lot of fun memories for me too (if you chose to play as a schoolkid, the food you ate to regain health was 'boogers').
Today, this project would be pretty easy to replicate in some ways, although not in others. I made it during something of a golden age for web data - there was more of it than ever before, and most of it was easily accessible with a single Python script. Yet the communities I was in were also very aware of the ethical responsibilities they had, and how to be a good citizen on the Internet. Nowadays it wouldn't be possible to make a game with the same approach I used back then. If you were making it today, instead of using live web data like I did, you might make it using ChatGPT. You could easily make a more capable version (on the face of it) - I asked Google Gemini what it would suggest to use as enemies or food in a cat-themed roguelike; here's what I got back:
Then you could feed this automatically into Midjourney or something similar to generate some assets for the game based on the model's ideas. Of course, you couldn't easily release this online as a fun little prototype for everyone to use, because these services have paid APIs and get expensive fast. AI Dungeon, which used OpenAI's GPT API as part of its narrative game system, was burning hundreds of thousands of dollars a month at peak just from making API calls to OpenAI. My 2013 game used a few lightweight calls to web APIs that were still open and easy to use, meaning it cost nothing, and ran quickly with little resource usage. I wrote a tool called Spritely that could query Google Images for art relating to a word (like cat, or water) and then crunch it into a pixel representation. It worked fine!
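If you're curious what "crunch it into a pixel representation" might look like, here's a rough illustration of just that step using Pillow. The image-search half is left out (the API Spritely relied on is long gone), and this is a guess at the general idea rather than Spritely's actual code; the filename cat.jpg is just a placeholder.

```python
# A sketch of the image-crunching step only: shrink a found image down to a
# tiny sprite and flatten its palette. Illustrative, not Spritely's real code.
from PIL import Image

def crunch_to_sprite(path, size=16, colours=8):
    """Downscale an image to a tiny sprite with a reduced palette."""
    img = Image.open(path).convert("RGB")
    # Shrink to a sprite-sized grid; NEAREST keeps hard pixel edges.
    small = img.resize((size, size), Image.NEAREST)
    # Quantise to a handful of colours for a chunky, pixel-art look.
    return small.quantize(colors=colours)

sprite = crunch_to_sprite("cat.jpg")  # placeholder: an image fetched for "cat"
# Blow it back up (still blocky) so it's visible on screen, then save it.
sprite.resize((128, 128), Image.NEAREST).save("cat_sprite.png")
```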
If a startup or research lab presented a game like A Rogue Dream today, whether it was powered by ChatGPT or something else entirely, the project would land very differently. This project tries to automate the design of a particular type of game, it uses online data - which I didn't really have the rights to use, to be clear[1] - and it's very easy to manipulate into creating offensive content (for a very light example of this: if you ask to play as a woman, your enemies are men, because Google Autocomplete thinks women hate men[2]). For a lot of people that would raise a lot of red flags and generate angry tweets and messages; for others it would result in an invitation to speak as an expert on generative AI and games at several conferences and be heralded as someone ushering in a new era of game design. The underlying problem in both cases is the same: we've completely lost perspective, on both sides of the debate, on what matters about these projects.
This Cohost Article Is Sponsored By Squarespace #ad
When we look at a new headline that involves AI, whether it's for a small experiment someone posted on Reddit, or a big tool a startup is pitching, we can't just look at what is right in front of us. Just like my example of A Rogue Dream at the start, the project itself is not the only information we need to make a judgement on whether something should worry us or not - we also need to look at who is doing it, and why. A lot of AI projects that I see criticised on Twitter recently would have been considered pretty interesting or fun if they had been posted online a decade ago by some lone student who cooked them up in their bedroom. The objection is not to the idea itself, but to what it implies, and to what it suggests the next step will be.
For example, my research centers around using AI to design games. I've been working on this since 2011, a time when no-one cared about AI, and they definitely didn't care about AI in games. I started this research because building AI game designers is a great way to study and think about games; because it has the potential to discover game designs that I could not; because it helps us think about why creativity matters to us and how we can support people in new ways; because of a whole heap of other reasons. I definitely have no interest in creating technology that is actually used in place of other game designers. Back in 2011 that was taken for granted because, I mean, suggesting anything else would've gotten me laughed out of whatever room I was in. But in 2024 I've tried to make this messaging clearer - by thinking about how I can pivot my research further away from that, for instance.
On the other hand, if Ubisoft announced they were looking into building AI systems that could design simple games on their own, even if they described a project very similar to my research, it would (and should) read very differently. That's because Ubisoft have different motivations behind work like this, and thus it is likely to be developed in different directions and applied in different ways. The two projects would not be the same, even if the initial objectives were the same on paper. There are a lot of ways to solve a problem, a lot of ways to develop an idea, and a lot of ways to communicate solutions to the world. That doesn't mean that my research is risk-free, nor that Ubisoft's attempt at doing the same would be purely evil and have no benefits for game design. My research carries risks with it, and it has the potential to cause harm, which is one of the reasons I write things like this and think a lot about how best we can proceed. But there is a qualitative difference between this hypothetical project and my own.
All new work is motivated by something. We might be motivated by money, by fame, by curiosity or spite, but work always has an interesting set of drivers behind it. Publicly-traded companies are motivated by making money. They can't escape this motivation - and any attempt to do so would be automatically corrected by their own shareholders and market forces. That doesn't mean that these companies can't also do things that benefit society - just that, over time, they will tend towards the most efficient way of increasing profit, whether that benefits society or not. One of the beliefs that underpins a lot of liberal politics about science and technology is that we can somehow design our economy such that the best way for companies to make profit is to do good things for society. A lot of your political beliefs about economics are probably based on whether you believe that's possible or not.
Most big names in AI today understand the importance of managing their image, and a big part of this is managing what you appear to be motivated by. It's why PR is more important to OpenAI than almost anything else - it affects government stances, investor enthusiasm, public confidence and much more. It's why tech CEOs talk about saving humanity, why governments talk about "unlocking potential", why commentators prefer to present their impartial academic credentials to the press rather than their investments or consultancy work. I have a feeling that this is one of the reasons so many people who are critical of AI react so strongly to almost any AI news now - because there is so much misdirection and image management that it is simply easier to assume that any AI news is bad news. And who can blame them?
I think this is leading to some harmful assumptions forming about AI as a whole though, and I think they gloss over some important questions we need to be asking. What I'm getting a little tired of - with all due respect to Shaun, he's far from the only one doing this - is tweets like this:
There are a lot of very stupid people working in and around AI right now. But there are also a lot of very brilliant artists who care deeply about creativity and are excited by new technology. Do I wish they weren't excited about generative AI? Of course I do. Absolutely. Are many of them actively contributing to problems in the industry (and the world) by doing so? They sure are. But I'm not going to pretend that they aren't creative or artistic or interested in self-expression. It doesn't make sense as a criticism, and it simply reinforces the idea that critics of AI are just angry and don't have a point. This isn't a "you shouldn't insult people" thing - as I say, there are some absolute feckless gits in AI right now. Just the worst human beings imaginable. A sort of black hole where ideas and inspiration go to die. You get the idea. But to me, the rightness and wrongness of an AI tool isn't just about what a technology is. It's about how it's applied, where, by whom, and to what end.
As the title of this post suggests, what I find mostly is that people hate capitalism, not AI. The other day I saw a tweet about an AI tool that could automatically colour in anime line art. The tweet criticised it as a soulless attempt to bypass an expressive and beautiful human process, and they might be right; I don't know who made it or why. But what I saw from the tweet wasn't that they didn't like the technology itself, but rather that they were worried it would be used to put people out of work, because capitalism will inevitably see this as a route to increased profit. In isolation, if someone posted a janky tech demo of anime getting automatically coloured in a decade ago, it would've gone viral on Reddit, everyone would have thought it was kind of neat, and then moved on with their lives. Our fear isn't that automatically colouring anime is possible, or that it lessens what it means to be human - our fear is that an executive who earns a thousand times as much as us in a year will see it, realise it can make them richer, and use it as a weapon against us.
For the most part, the distinction between these two things doesn't really matter. A lot of AI products you see today are from startups and companies that are purely looking to make money. It's probably safe if you just assume every AI announcement like this is part of a project aiming to make the world worse somehow. But I think for us to find a way forward, or to envision a tech industry and an AI field that exist post-LLMs (or even alongside them), we have to get better at articulating where these problems actually come from. For a lot of projects, the actual functionality of an AI system is not the main reason we end up afraid, angry or upset about it. Really, we're objecting to two things: the conditions required to create this new technology in the first place; and the worst-case scenarios it implies about the future[3].
Part of what spurred me to write this article was seeing more people discover the game 1001 Nights recently, and seeing the varied reactions to it. 1001 Nights is a game developed partly as a research project, using large language models as part of its storytelling. It's on Steam and - theoretically at least, I don't understand the business model - will be released one day to buy and download. For now it mostly exists as a demo that travels around events being shown off. I've met the lead developer, Yuqian Sun, several times and she's a really lovely person, full of creative energy and enthusiasm for other people's work. It's very clear to me that she's someone who cares a lot about being creative and using technology in different ways, just like I did a decade ago when I was making things like A Rogue Dream.
How should we feel about 1001 Nights as a project? It's not by a major game publisher, and it's not trying to set out a template for replacing game writers. The Steam page explains how many elements of its workflow have been open-sourced, how they avoid using artist names in search terms, and they even invite people to contact them with concerns and criticisms - a pretty bold move given the level of discourse usually happening on Steam. Yet at the same time, I'm sad to see an increasingly high-profile use of ChatGPT in narrative design, and I know other people have complicated feelings about it too. It's clearly an increasingly influential part of the narrative that generative AI is going to change all game production, even though the developers may not have intended it as such. And although the team is a small group of independent artists and academics, they are heavily invested in the promotion of the game and now, especially, are also invested in the broader success of generative AI. Why do we still feel uneasy about projects like this, even though - on paper - they're just small, fun experiments?
Collective (In)action
I think there's another problem that goes beyond just understanding who is doing something and why, and it's due to a fundamental difference between how most technology works today, versus how it worked ten or twenty years ago. In my example of A Rogue Dream at the start of this article I laid out the entire structure of the software: a couple of scripts that pulled some text data from the web when you played the game. I wrote the systems myself, using everyday APIs to pull together and process resources, and I could easily share the code or teach it to someone else - and I did, all the time. But this is not how most new AI demos and prototypes are built today. When I see a pitch for a new AI startup or product, nine times out of ten it's a wafer-thin layer built on top of ChatGPT or another bloated, cutting-edge LLM. 1001 Nights isn't quite wafer-thin, but it extensively uses GPT-4 and Stable Diffusion to generate text and images for the game - tools that the developer customised but didn't create, that they don't have any deep control over, and that they are entirely dependent on for their game to work, forever.
In addition to considering the near-term impact that 1001 Nights has and the motivations and aims of its individual developers, we also have to consider the broader impact that this technology has as we keep using it. If thousands of small experimental projects use it, we provide fuel to keep the industry going, and normalise its presence in games, art and digital creativity spaces. One of the reasons we're hearing about 1001 Nights more now is that a lot of events want to have talks about generative AI and how it's going to change the games industry. Unfortunately it's not possible to just use this technology speculatively - either you're using it, or you aren't, and if you are using it then you inevitably end up being dragged into a much larger political struggle about the use of these tools in the industry at large, even though you yourself may not be interested in that at all. There is a huge amount of message manipulation going on right now about generative AI in the games industry - from weird stat-padding of surveys to "all of my coworkers are just out of shot, also being positive about using AI too".
I have another blog post I've been meaning to write about a recent experience sitting in a session where Amazon tried to pitch the benefits of generative AI to some non-technical university staff. To paraphrase what I'm going to say there: I was shocked at how empty the sales pitch was. I firmly believe that the top companies who are most invested in this technology have absolutely no idea how they're going to turn it into a big enough product to justify its costs. For companies like OpenAI, games like AI Dungeon didn't actually need to succeed long-term or have any positive impact on the industry. They just needed to carry a headline until they found the next project to shift their PR focus to.
This entanglement of projects and organisations complicates how we assess individual AI projects. The underlying ideas might be experimental, provocative, exploratory or interesting, but that work does come at its own cost. At what point does an art project start contributing to the economic instability in the creative industries? How do we signal to the public the difference between an experimental one-off idea, and an attempt to transform a labour market? Is 1001 Nights good because it shows how AI can create new experiences that complement traditional games, or bad because it's being used to signal that we can create narrative experiences without writers? And what about the individual humans in this, many of whom are young students and artists - do we not also have some responsibility to help and nurture them, for whatever future game-making community we want to build?
One of the biggest things that I struggle with is related to this: to what degree should young engineers, artists and other AI experimenters feel responsible for the future that is being built by these large companies? In the 2010s I made Twitterbots and played with web data because it was exciting and fun. I know that for a lot of young creative coding people today, LLMs feel the same way, something that is in reach, full of potential, and easy to use. I like to think that if LLMs had been around in 2010 I wouldn't have used them, but I don't really know if that's true. Should I expect more from them just because the technology is a lot more dangerous? Or is it the responsibility of other people - the people actually building this stuff, astroturfing it, lobbying for it - to wake up and change course?
As I get involved in more education, supervision and mentoring, I find these questions harder to answer. We can't lecture everyone into doing the right thing, and we definitely can't when a lot of people are incentivised to promote these things in direct opposition. I know that a lot of what I say is automatically dismissed as overly biased by a lot of people, which limits what I can actually communicate now. In any case, we don't have the political, legal or social capital to brute force our way to a solution. I feel like the best I can do is create a supportive environment for the people around me who want to build something different and better, and make it as easy as possible for people to explore alternatives, learn about the issues, and invest in different solutions.
We need to get a lot smarter about how we talk about AI. We cannot leap on every AI project and treat them all as equally guilty of causing the situation we are in today. Equally, we can't give a free pass to technology without asking questions about who employs it, how they employ it, and what their motivations for employing it are. No-one is going to provide us with this information - in fact, in many cases people are explicitly paid to hide it from us. It's up to us to dig a bit deeper and ask what other factors are at play, and to think about what it actually means for an AI project to cause harm today, or to contribute towards future harms heading towards us.
Enormous gratitude to Kenti and Peter for their feedback on the piece.
Part 2 - You Don't Hate Me (I Think)
Hey everyone! Normally I reply to comments one-by-one but given that many of them are the same comment, and that some of the responses would probably be best surfaced, I'm gonna write a follow-up here instead. I appear to have made almost everyone replying unhappy in some form or another - sorry about that! I hope some of my replies help clarify my position, and if not, well... uh. Sorry again!
I Can Hate Two Things!
My titles are usually jokes or things that are a bit playful, and I'm aware you can hate two things! I suspect this probably came across to some people as me being defensive of all AI technology. When I say that I don't think people hate AI, what I mean is that the underlying methods being used are often not what we have a problem with. For example, LLMs use similar techniques to the ones that have been powering language translation for many, many years. We've been discussing the problems with machine translation for a long time (for example, sexist tendencies when translating between languages with and without gendered pronouns), longer than GPT-1 has been around, but we did not discuss them as a sign that the underlying architecture was morally bankrupt. That's because of the context it is being used in: the data it is being fed, the ways it is being sold, the scale on which it is being leveraged. When I say you don't hate AI, what I mean is that the reason you feel differently about, say, autocorrect on your phone, and ChatGPT, is not because you are a hypocrite or inconsistent or stupid. It's because the conversations we have about AI are about more than just what a piece of software does.
I am not saying "technology is neither good nor bad". Technology can be bad. Most fundamental research into LLMs today is trying to enable them to work at bigger scales. I don't think there's an ethical or environmental way to plough billions of text documents through a cluster of GPUs larger than the moon. But I also think that from the perspective of science communication, or public policy, or a dozen other things where what you're saying matters, it's important to think and talk beyond "does the phrase 'machine learning' appear anywhere?".
Actually All Automation Is Bad
One sub-chost (is that a word?) I saw about this post said that they actually hate all generated work now because it reminds them of LLM-style churn. I don't really have a 'response' to this because it's a perfectly understandable way to feel, but it did make me sad to read. I had a paper rejected earlier this year that I spent a long time on, talking about how I feel that all game design is generative design of a sort (by 'generative' here I mean in the sense of little random numbers and noise functions, not Midjourney). Games are spaces of experiences that we shape and design with care - there is no way to hand-craft an experience. Even an entirely linear experience with no choices can be hacked and modded, might be played on an airplane or drunk at 2am. We design experiences in the full knowledge that they can and will be experienced in many ways, and as the player explores the game they branch off more and more from where we assumed they might be.
For me, designing generative systems is beautiful, and as fundamental an act of creativity as any other design discipline that goes into making games. I've dedicated my whole life to it, and I don't think I will do anything else. I don't need anyone to know or care about it - when the AI winter comes it'll collapse my work as much as it collapses everyone else's. But I like making these things and I hope more people do it too.
Yet You Continue To Participate In Academia! Curious!
I thought that my position on AI and its surrounding issues was clear in the post, if not more generally, but, uh...
... that clearly was not the case, I guess. I understand that most people don't know (or want to know) about my work outside of stumbling across a cohost post, but to give a bit more context: I've been an AI researcher since 2010. I joined the field because I liked it, despite it being pretty unpopular at the time (unpopular as in unfashionable and boring, not unpopular as in 2024), and because I wanted to do work with games and creativity. I spent the 2010s doing a lot of outreach, community-building, advocacy work, science communication. I cared, and care, deeply about the impact my research has on the world, and about how the public (and game developers) understand and use AI.
I've been a vocal critic of the broader trends in the AI industry ever since the boom took off. I've written papers advocating for new paradigms in AI that move away from illegal data use, large-scale compute, and automation. My students gave a beautiful hour-long talk at GDC about the ethical issues in AI long before the AI summit got flooded with GenAI talks. I do not use LLMs in my work. I don't like listing this stuff like I'm trying to earn a merit badge or ask for a pat on the head, but I think it's important context, because some people seem to have taken my article as a defence of this technology, when in fact I've tried at every opportunity to take the responsibilities of my job and position and platform really seriously. It's also why I am so exhausted - because I have been doing this for a decade, since before Lee Sedol lost to AlphaGo. I have been listening and thinking and talking to people for a long, long time, and slowly watching it become a losing battle as major AI companies dominate the conversation. No-one is more tired or angry here than me.
Why Words Matter
I cut this example from my original post (which was almost twice as long) but I'm going to include it here as I think it's a useful example of why the points I was trying to make matter. One of the things that spurred me to originally write this post was the responses to this tweet:
I was completely bewildered by the quote-tweets I saw of this. Some of the confusion was over terminology - some people claimed the use of the term AI was incorrect and that it should be called machine learning "like we used to call it", because AI was bad and machine learning was good. But the thing that really got me was that some people (self-identifying as AI critics) pointed out that this was a good example of AI in games - it was using AI to automate something that was "boring", and was an example of how AI can be used without affecting people's jobs or creativity.
It's a vibes-based criticism that partitions AI into two buckets based on whether they feel it's harming what they perceive as "real creative work" or not. Lip-synching sounds boring and fiddly and not expressive, and thus it's perfectly fine to automate this, but automating writing is bad because a writer might lose their job. All technology has an impact on labour. It's not our job to simply say if a technology sounds nice or not. If we want to engage with AI properly, and think about the consequences it actually has for society, we need to ask more fundamental questions: do people want this? Is it likely to change the value of certain skills in the workforce? Is the aim of the technology to increase the scale of production? To lower its operating costs? Who gets to decide which jobs are boring?
I think this is a good example of why engaging only with surface-level ideas can be harmful - it can lead us to accept harmful things uncritically as much as anything else.
1001 Notifications
By far the most resounding comment is that my post is a defence of 1001 Nights, or its developers, or that I am somehow a cog in a machine enabling this, or that I am shilling for big tech just because I happen to have an incredible offer for my readers today to get 25% off their first order with HelloFresh. I don't know what to tell you here. I think I'm pretty clear: I don't think 1001 Nights is a good idea, and I think its rise to significance is partly down to organisations hungry for prominent examples of generative AI to embrace. I spend several hundred words talking about how the prominence of projects like this creates a platform of support for this technology. I've seen people write my exact points back at me as a criticism of my original post, which I'm going to assume is on me for not being clear enough in my original writing. So to be clear: I don't like it. I am not advocating for it.
@vectorpoem wrote a comment I really liked, here's just a small bit of it:
so yeah my conclusion is that these are good conversations to have in the future but the air is too thick with capitalist poison to talk about anything other than removing the poison.
I totally understand what they mean, and I kind of agree. I've spent a lot of time thinking about how I can contribute to removing the poison. I don't know what the best course of action is, but I think better science communication is a big part of it, because one of the main tools that big tech companies use against us is obfuscation and confusion. And I worry that by talking less about what's actually wrong, and reducing our analysis to a surface-level vibe-check, we're opening ourselves up to even bigger problems in the discourse down the line.
I'm not going to be reading and responding to any more comments on this or the OP, because honestly they got really aggro really fast and I get enough of that on every other social media site, so it was a bit of a shame to discover the same here. But thanks for taking the time to read both of these posts, and (hopefully) writing something thoughtful and nice. Have a good weekend, I'm gonna go back to the dark room I was lying down in.
1. Arguably as an academic I have a slightly broader remit than a lot of people, and I wasn't profiting off it either, but as I say that isn't necessarily the point.
2. Incidentally, if you ask Gemini to make suggestions about a roguelike where the main character is a woman, it initially ignores the prompt entirely and makes generic roguelike suggestions. If you then ask it to specifically deal with the main character being a woman, it makes very funny suggestions that are either overly worried about being sexist ("Instead of debuffs like 'weakened', consider debuffs like 'distracted'") or actually are kinda sexist ("The protagonist could have combat skills that rely on agility rather than brute force"), which is a perfect encapsulation of why I hate these tools.
3. This post is already quite long, but it's important to note here that how an AI tool is produced is still important. ChatGPT isn't just rough because of the errors it makes or the jobs it might harm - it's also bad because of the costs involved in creating it. However, in this instance I'm mainly talking about smaller experiments like the anime colouring system, A Rogue Dream, or the lip-synching system I mention at the end of this article.