AI Is Here To Stay
It's always interesting speaking to people about AI these days, especially as a lot of people don't quite know how to pitch their opinions to me. People know that I describe myself as an 'AI researcher', but if they know me they might also have seen me write critically about AI, which leads to some hedging their opinions a little, or equally being very honest. A very common thing I hear people say, both critics and advocates, is that AI is "here to stay", "well, it's not going anywhere" and so on. I most commonly see it used by people who are critical of some aspects of AI, but who also either want or feel they need to engage with it. I've been thinking a lot about this phrase lately, and how often it's now used, and I've realised I agree. AI is here to stay. But in what way is it here to stay, and what exactly do we mean when we say that? Let's explore it from a few different angles.
Email
Email is here to stay. I registered for my first email account sometime around the year 2000 (I was going to write 'turn of the century' but then realised how bad that sounds). It was a Hotmail account, which I now sadly don't have access to, but it was very exciting at the time to have a way for people on the internet to contact me, a place of my own that I could access anywhere. By that time email was already bedded into our lives, and we were at the end of the dot-com bubble that had led to a surge in websites and internet presences for businesses and people.
I have only lived as an adult in a world where email has long been considered the standard, but occasionally I get a glimpse into what a time before that must have been like, particularly when I come across reports of academics from earlier in the 20th century who sent typewritten manuscripts to conferences via the post, or who exchanged ideas with colleagues on the other side of the world by letter. Email brought huge increases in productivity for businesses of all kinds, both in the sense that it reduced the costs of miscommunication and of physically distributing correspondence, and in the sense that people could now work faster - they could get responses quicker, they could send responses quicker, and they could do so regardless of distance or time.
Short of some kind of telepathic communication, it's now hard to imagine a world without email. Some technologies have threatened it at times - Microsoft Teams, for example, allows me to send a short message to a colleague, which is sometimes preferable to a full email, and it allows me to create group discussion areas (when it works). But no particular communication method has replaced email, and for many purposes I doubt one ever will.
Who did email benefit? I suspect I would find it frustrating to have to go to the mail room in my office every time I wanted to send a report or note to a colleague, and it certainly wouldn't allow me to work from home as often as I do today. Yet I also don't know of anyone who speaks kindly about email as a technology - mostly we complain about our inboxes. I received twenty emails today, four of which need replies, three of which are notifications about other apps that want me to log in and respond to messages inside them, and three of which are departmental circulars which themselves contain several other notices I might or might not need to read. Several more work-related emails are waiting in my personal inbox too.
Something I often hear people say about AI is that it will hopefully make our working lives more interesting, by automating the drudgery and boring tasks. In fact, one of the tasks that people use AI for is summarising emails and composing replies. Email did the same thing, in a way, by removing the need for 'boring' tasks relating to communication, or eliminating 'boring' jobs like working in a mailroom. Do I think I have a more enjoyable, fulfilling or easy job today compared to academics who worked before email? Absolutely not. I think most people would say the same. We intuitively know this is true for two reasons: one, technological improvements don't outweigh the fact that workers are more exploited than ever before; and two, relatedly, companies find ways to push workers to the maximum limit they can get away with. If your employer isn't giving you an easier time now, with all the benefits of email, word processors, spreadsheets and spellcheckers, why would they change their mind tomorrow?
So AI might be here to stay in the sense that email is - as something hardwired into our economy, but something that only really brings benefits to a minority of people who profit from productivity. For the rest of us, it's more likely to change the nature of our work rather than improve it. Email is absolutely here to stay, but it's hard to say where exactly it's benefitted us, and it seems to have brought as many problems with it as it solved (I think most of us would probably argue it brought more). "Here to stay" doesn't always mean it's a net good - or any good at all.
Asbestos
Asbestos is here to stay, at least for now. If you don't know what asbestos is, it's a naturally occurring mineral that has been used in building materials, probably for thousands of years, all over the world. It has a number of really amazing properties, including being an excellent insulator of heat, and an electrical insulator too. It was used extensively throughout the 20th century in particular - until, in the 1970s and 80s, it became clear that it was killing people. Inhaling fibres from asbestos can cause a number of deadly conditions, including cancer, and it's now illegal to use in most countries around the world. Unfortunately, despite these changes, it was used so extensively in construction that you are probably closer than you think to some asbestos as you read this. Asbestos is considered sufficiently dangerous that it has to be disposed of carefully, using specific processes and safety procedures, since breaking or damaging it is one of the easiest ways to release fibres into the air.
What would happen to the internet if the AI bubble burst tonight and every AI model, startup and founder disappeared overnight? One problem is that, like asbestos, AI-generated content is everywhere now, all across the internet and seeping into the real world beyond, most of it unlabelled. There are several different estimates online for how much content on the web is AI-generated, some peer reviewed, some not, and some seemingly made up entirely. A widely-cited 2024 study was somewhat misleadingly reported as saying that 57% of content on the internet was AI-generated. It didn't actually say this - it studied how much textual content on the web had been translated into other languages using AI - but the numbers are still pretty staggering. A somewhat less reliable study claims to have analysed 900,000 recently-created web pages and found that 74% of them contain AI-generated text of some kind. These are less reliable because they rely on AI detection (and aren't peer-reviewed), but even a conservative estimate of 10% of the web's textual content being AI-generated is a phenomenal amount.
It's a similar story with images. Some stock image websites now allow users to label content as explicitly AI-generated. One of these is Adobe Stock, which has had to put upload limits on AI-generated content because it ballooned so quickly. This blog post suggests that around 15% of Adobe Stock's portfolio is now AI-generated - but this is only labelled, public images. On places like imgur there is no need to declare an image as AI-generated, and social media spaces such as Facebook are rife with intentionally mislabelled AI-generated content. I regularly receive emails from a major press organisation asking for input on detecting AI content in videos, normally pulled from Instagram, TikTok or Facebook, and it's been staggering to see how bold people are in creating misleading content. Even if we only consider the recreationally-created fake content - people messing around in Midjourney - we are talking about millions upon millions of images and text passages, with video potentially following soon too. We will never, ever inhabit an internet that does not contain AI-generated content, no matter what we do.
One of the reasons for this is that AI-generated content is actually considerably harder to get rid of than asbestos. While you don't need special protective gear to remove ChatGPT-written blog posts, the major advantage asbestos has is that we know what it looks like and can identify it with confidence once we find it. AI content detection is an incredibly hard problem, and one that we are nowhere near solving. What makes it harder is that content detection depends on us having a good understanding of how many generative models are out there (which we don't), having access to them (which we also don't), and being able to act fast enough to keep up in the arms race against new models (which we can't). To make matters worse, because it is such a tricky and valuable problem, there are a lot of startups selling products to do this who are incentivised to make stuff up, exaggerate their capabilities, and generally muddy the waters around detection.
So AI might be here to stay in the sense that asbestos is - embedded so deeply and broadly in our world that even if we were to discover tomorrow that it was literally killing us, getting rid of it would be an enormous task, and one we are not equipped to tackle. Technology can be here to stay not because anyone benefits from it at all, not even out of habit, but because we have made decisions that we can no longer reverse. We're seeing more and more institutions make similarly irreversible decisions with each passing month.
(Edit: I saw Casey make a similar comparison on bluesky as I was finishing up this blog!)
Virtual Reality
Virtual reality is here to stay. The most recent wave of VR headsets began around 2012 with the Oculus Rift, which was swiftly followed by products like the HTC Vive, the Sony PSVR, and VR-adjacent technology like Google Cardboard. In August of 2015, Time put Palmer Luckey on their front cover looking like a complete idiot, and declared VR was about to change the world. Ten years later we now live in that bravely changed world - a world in which almost no-one I know plays or talks about VR, except to make fun of Mark Zuckerberg. We have a few headsets in the department offices for the occasional research application.
In 1997, Joe Tidd and Martin Trewhella published a study of technology adoption by British and Japanese companies. They identified two major factors in whether a new technology would be taken up: comfort and credibility. Comfort is about how easy the technology is to adopt: what needs to change, who needs to retrain, how easily it fits into the daily life of the person using it. Credibility is about what it brings to the person or company: why we would want to adopt it in the first place, and what it gives us that we don't already have. I use VR a lot as an example of a technology that had credibility, but not comfort. If you used a VR headset at any point in the 2010s, I would guess you were probably quite impressed by it. VR provides interesting, unique experiences. However it lacks any sense of comfort - most people do not have spaces to use a VR headset in, it isolates you from the environment you're in, it is tiring to use for long periods of time, and for a long time it was a luxury device above and beyond the cost of a new games console.
Artificial intelligence has something of an opposite problem. The major breakthroughs in AI at the end of the 2010s and beginning of the 2020s were mostly about comfort. Being able to prompt AI models with natural language made it easy for people to interact with this technology and not feel like they were talking to a computer. However it lacked - and still lacks - credibility. Credibility is something AI companies manage very carefully, through sponsorships, advertising, endorsements and careful announcements. AI is sold as the future of everything, just like VR was, but unlike VR it's easy for people to get access to and use for themselves, which has allowed it to spread much faster. Because of this, discussions about its credibility are much more fragmented. Everyone has access to ChatGPT, and so a huge proportion of people have tried to use it, for everything from writing wedding speeches to advising on government policy. Some people swear it has transformed their lives, while others are confused at why it doesn't do what they were promised.
Something that Tidd and Trewhella don't mention in their paper, probably because it's more focused on companies than society at large, is how credibility is measured. You can be mis-sold a new technology, but in general businesses are good at measuring credibility, because executives love to measure productivity and performance using metrics, and if the new technology moves those metrics then that's a good sign. The way AI is used makes this trickier. Some people use it for tasks they are already experts in - they often seem to report that the AI makes a lot of mistakes, but that they work around them. The greyer area is people using it for tasks they know nothing about. They generally report either incredible performance (for tasks they don't have the ability to critique or evaluate) or terrible performance (in my experience often for creative tasks where they know what they want - and I don't mean AI critics here, either). Credibility is something that is still settling for AI, and big tech companies are in a race against time to keep raising expectations of the future to combat declining evaluations of the present.
In the 2010s I was pretty sure that VR would evaporate entirely, but it hasn't. I do have a couple of friends who have VR headsets and sometimes tell me about a new game they've played on it. I know some people who work on VR games and sometimes they do pretty well. I know researchers who use VR for some applications. VR hasn't changed the world, it hasn't replaced all forms of entertainment, arguably it wasn't worth the money that was poured into it - but it has found its niche, as a stable and usable product that has some effective use-cases. The same could be said for AI. The last decade of research has led to important advances in certain areas of medicine, for example. Regardless of how you feel about AI generally, it would be hard to write off the last decade of work in the field as entirely worthless (even if we might agree that the cost and harm overall wasn't worth it).
So maybe AI is here to stay in the sense that VR is - in niches where it has a measurable benefit (whatever that benefit is), where people can get around its limitations and failure modes. It won't transform the world completely, but in some cases it'll be worth the cost to certain people, and will persist because of it. Even if most people find a reason to reject it, it's likely at least some places will find the tradeoffs worth it to them. Technology can be here to stay without being all or nothing, and just because something looks and sounds like a sci-fi movie concept doesn't mean it has the same effect on the world.
AI
AI is here to stay, as people like to tell me, and I agree. When they say it, they often use it as an explanation for why they're using it, advocating for it, or getting more involved with it. That's totally understandable. But I think we should stop using 'here to stay' as an empty slogan. Lots of things are here to stay, but some of them don't necessarily make our lives better, and many of them make them actively worse. I do think AI is here to stay, because it has always been here, and because too much money has been invested in it for it to entirely collapse now. If the bubble burst tomorrow, we would still see the remnants of AI embedded deep in our society for decades to come - in governments, in corporations, in schools, in mass-produced cheap t-shirts with AI-generated images on them, in jokes about people with too many fingers, in the new boom startups founded by people who got rich off the last ones. People would still run models, and they would still train their own. Google would still translate languages for you.
But if we want to talk about 'here to stay', I think we need to be more specific about what we mean by it, and what aspect of it is significant to us. If you tell me that we need to incorporate AI into our university policy because it is 'here to stay', does that mean we should uncritically invite it in to every aspect of our education and operation? Does 'here to stay' mean that a new technology gets a free pass and full capitulation? If you tell me that the next generation needs training in AI (whatever that means) because it is 'here to stay', does that mean we are not planning for any other eventuality? Does 'here to stay' mean we bet our future society on a technology that is 99% owned by a handful of private corporations? 'Here to stay' can't be a gloss for giving up. Criticising technology doesn't begin and end at abolition - it is an ongoing process of analysis, reflection and dialogue.
Thanks for reading. This is a new blog format I'm trying out, as I was getting a bit tired of the inconsistencies in the style of the old one. It's built with a static site generator called Strawberry Starter, which I found thanks to Izzy Kestrel (who has her own SSG called Bimbo).