<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet href="assets/styles/rss.xsl" type="text/xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>Possibility Space</title>
  <subtitle>A blog about game design, AI, research and creative technology.</subtitle>
  <link href="https://www.possibilityspace.org/blog/feed.xml" rel="self" />
  <link href="https://www.possibilityspace.org/blog/blog" />
  <updated>2026-02-09T00:00:00Z</updated>
  <id>https://www.possibilityspace.org/blog/blog</id>
  <author>
    <name>Mike Cook</name>
  </author>
  <entry>
    <title>Visual Silence (Magpie Part 2)</title>
    <link href="https://www.possibilityspace.org/blog/posts/magpie-part-two/" />
    <updated>2026-02-09T00:00:00Z</updated>
    <id>https://www.possibilityspace.org/blog/posts/magpie-part-two/</id>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-logo-superwide.gif&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This is the second post in a series about Magpie, a live coding tool I’m making in Picotron. In &lt;a href=&quot;https://www.possibilityspace.org/blog/posts/magpie-part-one/&quot;&gt;the first part&lt;/a&gt; I covered the basics of what I&#39;m hoping Magpie will be, and I have a few posts I want to write about what I&#39;ve found interesting or tricky about toolmaking for livecoding so far. You don&#39;t need to know anything about livecoding to understand this post!&lt;/p&gt;
&lt;h3&gt;Visual Silence&lt;/h3&gt;
&lt;p&gt;One of the most interesting parts of a livecoding set is the very start. A lot of livecoding is about flowing from one interesting bit of code to another, but at the beginning, we’re starting with a blank slate. At the regular livecoding meetup I go to in London, &lt;a href=&quot;http://badlondon.events/&quot;&gt;Algorhythms&lt;/a&gt;, the beginning of your set will start with you standing in a quiet, dark room, full of people maybe softly chatting, no music playing, and a blank screen. What do you do?&lt;/p&gt;
&lt;p&gt;For music livecoding, most things we can do will immediately and completely fill the silence in the room, because that&#39;s the nature of producing sound: it gets everywhere. Even a basic kick drum beat, the ‘four on the floor’ that &lt;a href=&quot;https://www.youtube.com/watch?v=GWXCCBsOMSg&quot;&gt;Switch Angel starts with in this experiment&lt;/a&gt;, dispels all the silence in the room and creates a full foundation for the music to build on. And it doesn’t need to be a traditional drum beat - it could be a random selection of notes in a cycle, it could be a sample, it could even be a straight tone. Music is built up in layers, but even a single layer is extended forwards in time and immediately signals to the room that the set has begun.&lt;/p&gt;
&lt;p&gt;For visual livecoding, we have an equivalent ‘visual silence’ on the projection screen that we have to fill, and ideally we&#39;re looking to fill it with something that has a few qualities. An important one is &lt;strong&gt;motion&lt;/strong&gt;: I can think of livecoding sets that are static for short periods, especially at the start, but most visual sets want to be moving most of the time. We also want some kind of &lt;strong&gt;texture&lt;/strong&gt;, something to give the motion context. Sometimes that&#39;s a geometric pattern, sometimes it&#39;s text or video, but it&#39;s rarely a single colour filling the screen and nothing else. And we want &lt;strong&gt;scale&lt;/strong&gt;: we want to use a good amount of the screen. Not necessarily the whole space, but probably a good chunk of it. I&#39;ll come back to this idea in a later post.&lt;/p&gt;
&lt;p&gt;Filling visual silence is hard, or rather, there are a lot of ways to &lt;em&gt;partially&lt;/em&gt; fill visual silence, and most tools for making visuals appear on screen (like game engines, paint programs, or text editors) are quite granular and make small changes to the visual space. For Picotron, the game engine Magpie is built in, the most basic thing I could do would be to draw a shape directly to the screen. If I’m really pushing to make something dynamic happen, I could have some aspect affected by the passage of time. A quick test program I sometimes write is to draw a circle that moves up and down as time passes:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-pt2-1.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
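&lt;p&gt;That kind of test program is only a few lines long. Here&#39;s a rough sketch of it in Picotron&#39;s Lua, assuming the standard &lt;code&gt;cls&lt;/code&gt;, &lt;code&gt;circfill&lt;/code&gt; and &lt;code&gt;time&lt;/code&gt; calls - the exact numbers are just for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- draw a circle that bobs up and down as time passes
function _draw()
  cls(0)                           -- clear the screen to black
  -- Picotron&#39;s sin() works in turns (0..1), not radians,
  -- so sin(time()) completes one full bob per second
  local y = 135 + sin(time()) * 40
  circfill(240, y, 20, 12)         -- x, y, radius, colour
end
&lt;/code&gt;&lt;/pre&gt;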
&lt;p&gt;But for a lot of digital artworks or games, it’s likely that the first line of code I write won’t change anything at all on the screen. Maybe even the first half-dozen lines. My code might be creating lists of things, setting up processes or going through lists of things to draw shapes. So using these types of language/engine as the basis for livecoding has a few effects: it means we stay in the ‘visual silence’ for longer, it means I have to do more work before I see if it had the effect I thought it would, and it means the audience has to wait longer to see something change or to feel the set has begun.&lt;/p&gt;
&lt;p&gt;Visual livecoding tools often solve this by using different programming styles or paradigms which more immediately create output, and that affect a screen space of any size or shape. In particular, it’s quite popular to use graphics shaders or shader-like languages as a style of working. Shader languages are designed for solving specific kinds of graphics problems, and one thing they’re good at is taking some visual information and stretching it across a surface or a shape (like painting the photo of tree bark onto the side of a 3D model of a tree). This also means that they tend to have a lot of commands, instructions and styles that can immediately fill a shape of any size with &lt;em&gt;stuff&lt;/em&gt;. And so they tend towards a programming style where what you do, by default, fills the screen. For example, this program in Hydra is eleven characters long:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;osc().out()&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;It produces the visual equivalent of a kick drum beat (warning: this video shows a very slowly scrolling set of black and white bars; I&#39;ve known it to make people nauseous on rare occasions):&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/hydra-osc.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;This isn’t that interesting, of course, but it immediately fills the screen and signals to the audience that the set has begun. It has motion (the bars scroll slowly), texture (monochrome here, but a range of darks and lights) and scale (the whole screen is being used). Hydra has several similar functions that fill the screen with other simple patterns, and most performances start with one of them. You could, if you wanted, write five or six more lines of code before running the code for the first time and breaking the silence, but there’s no need to - the first line is ready to provide output, and that lets you start moving, communicate with the audience, and give yourself visual feedback to start working with. Then, if you want, you can keep adding the code you were planning to add.&lt;/p&gt;
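&lt;p&gt;(If you want to try this yourself: Hydra&#39;s other basic sources, like &lt;code&gt;noise()&lt;/code&gt;, &lt;code&gt;shape()&lt;/code&gt; and &lt;code&gt;voronoi()&lt;/code&gt;, work the same way - one short chained call fills the whole screen, e.g. &lt;code&gt;noise(3).out()&lt;/code&gt;.)&lt;/p&gt;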
&lt;h3&gt;The Other Way&lt;/h3&gt;
&lt;p&gt;One of my motivations for making Magpie is to explore different ways of livecoding, different ways of controlling and changing a set and engaging with the audience. Magpie is far from the first tool to try to do this; many livecoding tools exist that let you explore different ways of livecoding visuals, and many are based in existing art tools like Processing. In my experience of using them, though, they don&#39;t have &lt;em&gt;big&lt;/em&gt; paintbrushes like Hydra does, and they&#39;re quite &lt;em&gt;verbose&lt;/em&gt; (you need to write a lot of code to make things happen). One of the advantages of painting with big brushes is it lets you fill the silence quickly and make big dramatic changes, but it lacks the specificity of the smaller brushes that let you paint fine detail. Picotron is nice because it gives us a set of interesting little brushes and tools, but to complement this, I want to build into Magpie some bigger brushes to help the user break the visual silence faster and start messing around quicker.&lt;/p&gt;
&lt;p&gt;I&#39;ve been trying to experiment with new features in Magpie by thinking about ways I try to fill the screen myself and turning them into quick-access features. One common thing I want to do is repeat some drawing across the screen, and just change it ever so slightly based on its position. For example, the background to the Magpie logo banner has circles that change colour and shape based on where they are on the screen. This normally involves writing some boilerplate code that is the same almost every single time, so we can make it a little quicker by introducing a function that lets you give it some code and repeat it across the screen, like so:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-grid.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;So in this example, I&#39;ve described how to draw a circle and change how it looks based on where it is on the screen, and then I&#39;ve asked Magpie to just repeat this a bunch of times across the width and height of the screen. This &#39;feature&#39; isn&#39;t done yet - I&#39;m just feeling it out - so it&#39;s still a little verbose. But it feels like a route towards a big brush for Magpie to use.&lt;/p&gt;
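&lt;p&gt;For comparison, the boilerplate this kind of helper replaces is usually a pair of nested loops over the screen. A rough sketch in Picotron&#39;s Lua (not Magpie&#39;s actual syntax):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- repeat a drawing across the 480x270 screen,
-- varying it slightly based on its grid position
for x = 0, 480, 32 do
  for y = 0, 270, 32 do
    local col = (x + y) / 32 % 32    -- pick a colour from position
    circfill(x, y, 8 + y / 64, col)  -- size also varies with position
  end
end
&lt;/code&gt;&lt;/pre&gt;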
&lt;p&gt;Another one is creating a list of little guys that all behave the same way. These little guys often have the same behaviours or data: they might have a co-ordinate describing where they are on the screen, for example, or a number that says how fast they move, or how big they are. So I&#39;ve developed a quick way to say &amp;quot;give me X objects, put them in a list, and give them these common behaviours&amp;quot;. Again, it&#39;s not finished yet, but it lets me quickly bundle together a bunch of things (scale) and get them moving on the screen (motion). Here&#39;s a little example:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-pool.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
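&lt;p&gt;The idea behind the feature is easier to see in code. A hypothetical sketch in Picotron&#39;s Lua - the &lt;code&gt;make_guys&lt;/code&gt; helper and its field names are mine, not Magpie&#39;s actual API:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- make n little guys that all share the same kind of data
function make_guys(n)
  local guys = {}
  for i = 1, n do
    guys[i] = {
      x = rnd(480), y = rnd(270),  -- position on screen
      speed = 1 + rnd(2),          -- how fast it moves
      size = 2 + rnd(6)            -- how big it is
    }
  end
  return guys
end

guys = make_guys(20)

function _draw()
  cls(0)
  for g in all(guys) do
    g.y = (g.y + g.speed) % 270    -- drift downwards and wrap around
    circfill(g.x, g.y, g.size, 7)
  end
end
&lt;/code&gt;&lt;/pre&gt;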
&lt;p&gt;Magpie was originally going to be based on particle systems, which worked a bit like this, but I decided to move away from that (partly because Picotron doesn&#39;t like chewing through huge lists of particles and pixels). I like this compromise of keeping them in as a feature but not making them central, because it lets me add some particles on top and you only need a few to make interesting things happen visually.&lt;/p&gt;
&lt;p&gt;What we&#39;re trying to do here is find ways to make small paintbrushes cover a larger canvas, but we don&#39;t want to lose the things that make them useful as paintbrushes in the first place. We could give the user a cannon that fires paint at the screen, but that&#39;s not a brush any more, it&#39;s something else. We want to simplify the fiddly, inconvenient bits of these code snippets, while still keeping the properties of a paintbrush: that it&#39;s controllable, flexible, customisable. So offering some common behaviours for little objects is useful, but we should make it easy for the user to override that stuff if they want, too.&lt;/p&gt;
&lt;p&gt;That&#39;s the end of this post! I&#39;ve been thinking about how to get started in visual livecoding sets, and how breaking visual silence is tricky and so important for getting started. And I&#39;m experimenting with adding features to Magpie that let you quickly spin up some interesting effects, so you can get some lights and colours on the screen and start playing with them. Next time I&#39;ll write about what happens after that. Magpie is very nearly ready for its first release, the only thing holding me back is just having the arm strength to do some of the fiddly bits of release prep. It&#39;s still not very nice to use Magpie and it&#39;s lacking loads of features, but I hope a few people might enjoy poking at it.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://badlondon.events/&quot;&gt;The next Algorhythms event is on the 4th of March&lt;/a&gt;, so if you&#39;re in London do keep an eye out for that! In the meantime, if you have questions or comments about Magpie please ping me on &lt;a href=&quot;https://bsky.app/profile/mtrc.bsky.social&quot;&gt;bluesky&lt;/a&gt; or &lt;a href=&quot;https://discord.gg/FvgWP4baGF&quot;&gt;join my Discord&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Magpie (Part 1)</title>
    <link href="https://www.possibilityspace.org/blog/posts/magpie-part-one/" />
    <updated>2025-12-11T00:00:00Z</updated>
    <id>https://www.possibilityspace.org/blog/posts/magpie-part-one/</id>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-logo-superwide.gif&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Magpie is a livecoding tool that I&#39;ve been making, for doing visual performances at algorave events. It&#39;s made using &lt;a href=&quot;https://www.lexaloffle.com/picotron.php&quot;&gt;Picotron&lt;/a&gt;, a &#39;fantasy computer&#39; designed to mimic computing in decades past. I have some really cool things I want to do with it in 2026, but since I&#39;m just showing it for the first time I wanted to write a little about why I made it, what it is, and how it works. This post is about what it can do right now, as well as what words like &#39;livecoding&#39; and &#39;Picotron&#39; mean, if you don&#39;t know, as they&#39;re not very common things!&lt;/p&gt;
&lt;h3&gt;What is Livecoding&lt;/h3&gt;
&lt;p&gt;Very broadly: livecoding is people making music and visual performances using special tools which are often controlled using code. When we say &#39;code&#39;, it&#39;s less about programming in the sense of &#39;learn to code&#39;; it&#39;s more about using the languages and styles of coding as a way of describing big, unpredictable or complex things. Livecoders don&#39;t normally write &#39;good&#39; programs, or have lots of complicated logic, or engineer big systems. Often it&#39;s about chaining together simple ideas that create something big and impactful. Here&#39;s a little clip of someone livecoding music that went viral in 2025:&lt;/p&gt;
&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/iu5rnQkfO6M?si=gVPPR5QUAo3fwEWq&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
&lt;p&gt;Livecoding is often improvised - but doesn&#39;t have to be - and it covers a wide range of styles. Musically, an average algorave might include soft ambient soundscapes, crunchy experimental noise, sliced and sampled hip-hop, trance, all sorts. Visually it&#39;s a little more constrained, but you might see people doing weird things with graphics shaders, oscilloscopes, 3D models, or video feeds. One memorable livecoding set I saw was pulling in clips from Shrek and performing strange graphics processing on top!&lt;/p&gt;
&lt;p&gt;It&#39;s quite common for livecoders to make their own tools to perform with; in fact, it&#39;s something of a running joke in the community that everyone does this eventually. I actually didn&#39;t think I would, because I&#39;ve not been one for making my own tools or game engines in the past. But this year I kept thinking about things I wish other tools did, and then I had a few thoughts/inspirations come together all at the same time to make me energised enough to make this. So here we are!&lt;/p&gt;
&lt;h3&gt;What is Picotron&lt;/h3&gt;
&lt;p&gt;Picotron is a mini-computer that runs inside your computer. When you load it up you see a desktop, files, folders, just like Windows or MacOS, and you can make things in it including text documents or pixel art. It&#39;s especially good at making games: it has a sprite editor for making art, and a music editor for making sounds. It&#39;s from the people who made PICO-8, which is solely focused on games, so they share a lot of philosophies. They are both &#39;fantasy&#39; platforms, meaning that they mimic the restrictions of technology from other eras. For example, Picotron only has 32 colours in its palette, and the display resolution is 480x270 pixels - that&#39;s probably a lot smaller than the screen you&#39;re reading this on now (you can find some of my PICO-8 games &lt;a href=&quot;https://illomens.itch.io/&quot;&gt;here&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/picotron2.gif&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Restrictions are interesting because they can encourage you to get creative to get the most out of them (&lt;a href=&quot;https://www.lexaloffle.com/bbs/?tid=45572&quot;&gt;here&#39;s DOOM running in PICO-8, for instance&lt;/a&gt;) and they can also encourage you to make small things that are well-scoped (very useful for people like me). Adam Saltsman recently made &lt;a href=&quot;https://www.youtube.com/watch?v=YwzpqTV7mhs&amp;amp;list=PLjveGeDKilB3YBN55KKA9oARTtgV6VWws&quot;&gt;a beautiful series of videos&lt;/a&gt; talking about how PICO-8 shaped his personal philosophy around game design. However, PICO-8 and Picotron are also very comforting tools to work with. They contain everything you need to make a game, including art, music and coding tools; things made with them are open-source by default, which lets you share with and learn from others; and they run on old machines and in web browsers, without needing a lot of power. So there&#39;s a lot to like about the aesthetics and feel of working with them, too. Using Picotron is cosy, for want of a better word.&lt;/p&gt;
&lt;h3&gt;What is Magpie&lt;/h3&gt;
&lt;p&gt;Magpie is a tool for livecoding visuals, written for Picotron. You run it like a normal Picotron program, but then you can write code that creates your performance. Here&#39;s me starting up Magpie, and then writing a line of code to make some circles appear:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-example.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;Notice how my code appears on top of what I am making - this is very common in livecoding tools. People watch the output and the code at the same time. You can also hide the code entirely to see what you&#39;ve made more easily. This is quite important for Magpie because, unlike other livecoding tools, I can&#39;t really make the code smaller than it is. We only have 270 pixels of height to work with, and the font we use is 8 pixels high. The code rapidly takes up a big chunk of what you&#39;re looking at! Here&#39;s a longer example; you can see how busy it gets quite quickly (admittedly I padded this out a bit to make the point):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-long-example.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Right now, Magpie has a very small set of features as I&#39;ve not been working on it for long, and most of the work I did was just setting it up to be functional enough to use! Here are some of my favourite features that I&#39;ve implemented so far. First, it has a few different ways to clear the screen. You might commonly clear the screen every &#39;frame&#39; (which happens 60 times a second in Picotron) before drawing things on the screen. But there are many ways to clear the screen, including using patterns (like scraping the old pixels off with a metal brush) or randomly splotching pixels on it. Look at the difference between the examples below. First, clearing the screen normally:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/cls.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;This just looks like normal moving objects, because clearing the screen completely is what you normally do when making things like games. Compare to this, where we&#39;re clearing the screen by painting over it with a pattern that slowly covers the old drawing:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/pcls.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;Finally, clearing it by randomly splatting pixels onto the screen. We can change the speed at which this happens to intensify the effect:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/dcls.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;By default, Picotron does not clear the screen after it draws a frame, so we can take advantage of that to create interesting effects, save on computation, and make fun messes.&lt;/p&gt;
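&lt;p&gt;The random-splat clear in that last clip can be approximated in just a few lines: each frame, paint some randomly chosen pixels with the background colour, so old drawings dissolve away gradually. A sketch in Picotron&#39;s Lua (not Magpie&#39;s actual implementation):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- partially clear the screen by splatting random background pixels;
-- higher values of speed erase the old frame faster
function splat_cls(speed, col)
  for i = 1, speed do
    pset(rnd(480), rnd(270), col)
  end
end
&lt;/code&gt;&lt;/pre&gt;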
&lt;p&gt;I&#39;ve also added ways to integrate timing into things you make with Magpie. Let&#39;s say we have a ball bouncing up and down - we might want it to bounce exactly on a drum beat, so people watching the performance feel the connection between the sound and the visuals.&lt;/p&gt;
&lt;p&gt;Most tools do this by having you type in what speed or bpm (beats per minute) you are performing at, and then adjust your code&#39;s timings to fit. Some tools can listen through a microphone and react to the music that way. Something I thought was really cool, though, is a feature I saw in a livecoding tool whose name I now can&#39;t remember. It let you tap the spacebar and it would detect the BPM. This is a great compromise for Magpie, since we don&#39;t have microphone input and it feels a bit more expressive. So I added that! You can turn the sound on for this one to hear how I try to sync the timer to the beat:&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-timer.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;The yellow circles are the timing tool, which can be toggled on to appear on top of your code (in this case my &#39;performance&#39; is the little expanding/contracting blue rings). In the example here, I&#39;ve tied the size of the field of blue rings to the timer, so as I change it they move faster, and then slower again. In the latter half of the clip I try to sync it to half the speed of the main beat. It&#39;s not the best example of how it works but hopefully you can get a little sense for it.&lt;/p&gt;
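&lt;p&gt;The tap-tempo idea itself is simple arithmetic: record the time of each spacebar press and average the gaps between consecutive taps. A sketch in Lua (not Magpie&#39;s actual code):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;taps = {}

-- call this with the current time (in seconds) on each spacebar press;
-- returns the estimated bpm once we have at least two taps
function on_tap(t)
  taps[#taps + 1] = t
  if #taps &amp;lt; 2 then return nil end
  local total = 0
  for i = 2, #taps do
    total = total + (taps[i] - taps[i - 1])
  end
  local avg = total / (#taps - 1)  -- average seconds per beat
  return 60 / avg                  -- beats per minute
end
&lt;/code&gt;&lt;/pre&gt;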
&lt;p&gt;One thing I like about listening to microphone input, though, is that it can detect more complex noises in melodies or drum patterns. So I took the beat timer idea and extended it a bit. Magpie has a pattern maker too, where you can tap the spacebar in a rhythm and create more inputs for your sketch that aren&#39;t just regular beats. This can be really good for syncing up with specific subpatterns! The drums in this clip have a 1-2-1-2-3 rhythm (&lt;b&gt;strobe warning&lt;/b&gt; for this one in particular):&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-pattern.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;p&gt;The pattern timer in particular needs tweaking to fit into code more naturally, but I like the way the interaction works, and it feels natural and expressive to use - again, using the spacebar to punch in blobs, and feeling for the right moment to stop the pattern, which makes sure it loops correctly.&lt;/p&gt;
&lt;p&gt;These are just some of the first things I&#39;ve put into the tool, but I have many things I want to try and add to see how they help/feel, and I&#39;m finding more ideas as I make it. I won&#39;t write any more here to avoid it becoming documentation! I don&#39;t know if I&#39;ll update Magpie forever and ever, but I will at least clean up some parts and add a few new features as I use it more and hopefully perform with it.&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
 &lt;video width=&quot;90%&quot; controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-rings.mov&quot;&gt;
Your browser does not support the video tag.
&lt;/video&gt; 
&lt;/p&gt;
&lt;h3&gt;What Next?&lt;/h3&gt;
&lt;p&gt;I&#39;m hoping to get an early version of Magpie published in January. I doubt many people will want to use it as their main livecoding app - the tool is a bit clunky to use, and right now at least it&#39;s very easy to break or to lose all your code. It&#39;s also missing one or two major quality of life features, like being able to highlight code with your mouse, or cut/paste. These aren&#39;t too hard to add, but because I&#39;m mostly making it for myself it&#39;s tempting to just avoid these features because they aren&#39;t very fun to code and I can live without them! I will try and find the energy/interest to make it easier to use though.&lt;/p&gt;
&lt;p&gt;I also have one or two other blog posts I am thinking of writing about Magpie, one to discuss my thinking about visualisation tools, and another to lay out more of Magpie&#39;s features/controls in the way of documentation. I have some extra features I want to add, probably after the initial release, and I&#39;d also like to record a ten-minute set or some example of it being used too, probably to coincide with the release.&lt;/p&gt;
&lt;p&gt;I have a really fun idea for a set I want to do with Magpie, so after that I will prioritise adding the features I need for that. I&#39;m hoping to debut that at &lt;a href=&quot;http://badlondon.events/&quot;&gt;Algorhythms in February&lt;/a&gt;, so if you&#39;re in London do keep an eye out for that! In the meantime, if you have questions or comments about Magpie please ping me on &lt;a href=&quot;https://bsky.app/profile/mtrc.bsky.social&quot;&gt;bluesky&lt;/a&gt; or &lt;a href=&quot;https://discord.gg/FvgWP4baGF&quot;&gt;join my Discord&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/magpie-logo-superwide.gif&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Dagstuhl Reports 2025</title>
    <link href="https://www.possibilityspace.org/blog/posts/dagstuhl-2025/" />
    <updated>2025-07-21T00:00:00Z</updated>
    <id>https://www.possibilityspace.org/blog/posts/dagstuhl-2025/</id>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://imgur.com/MkLX1DQ.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This month I got a chance to return to &lt;a href=&quot;https://www.dagstuhl.de/en/&quot;&gt;Schloss Dagstuhl&lt;/a&gt;, a research institute that doubles as a scientific retreat for computer scientists and related colleagues. We visit Dagstuhl for week-long seminars, which are spent talking and working with small groups of colleagues from particular research fields or, if we&#39;re lucky, beyond. Often we use the time to think about new research topics, share new skills with one another, or try out unusual ideas that we couldn&#39;t justify spending time on normally.&lt;/p&gt;
&lt;p&gt;I&#39;ve written about my time at Dagstuhl seminars in the past, including on &lt;a href=&quot;https://www.rockpapershotgun.com/electric-dreams-part-2-optimists-at-heart&quot;&gt;Rock, Paper, Shotgun&lt;/a&gt;, my old blog, &lt;a href=&quot;https://www.possibilityspace.org/blog-dagstuhl-2/index.html&quot;&gt;my new blog&lt;/a&gt; and now my new new blog here, but this is the first time that I&#39;ve been on the organisational team for a seminar from start to finish. It was a chance for us to do things a bit differently to before, as well as invite a slightly different lineup of people, and the result was something that felt really exciting and fresh. Previous seminars I&#39;ve been to were focused on &#39;game AI&#39; (meaning anything from getting Super Mario to move around a level on his own, all the way through to people interested in using generative AI for various things), but for this one we broadened our scope to creativity tools, physical play and how AI techniques (both old and new) can learn from and support these things.&lt;/p&gt;
&lt;p&gt;There&#39;s lots to say about the week and we have lots of reports to write and games to release, so I&#39;ll make this blog post a briefer summary than the daily update posts I used to do. I&#39;m hoping other people who went to the event will also share their experiences, so you should be able to get multiple perspectives on the week soon!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/qGegCxv.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Monday&lt;/h3&gt;
&lt;p&gt;I was joined by co-organisers &lt;a href=&quot;https://bsky.app/profile/mastermilkx.bsky.social&quot;&gt;M Charity&lt;/a&gt; (from the University of Richmond), &lt;a href=&quot;https://x.com/princegalidor?lang=en&quot;&gt;Nico Vás&lt;/a&gt; (acting as an independent, off from his day job at LEGO) and &lt;a href=&quot;https://yannakakis.net/&quot;&gt;Georgios Yannakakis&lt;/a&gt; (who kindly agreed to join us to provide some senior heft). Dagstuhl Seminars need to set themselves apart from previous seminars to get accepted, and so we swung for the fences a bit and emphasised a desire to focus on practical making, doing and experimenting. We also wanted to try and mix in some physical and hybrid design work too, so although AI for digital games was a big part of the proposal, physical crafts and hacking were also in the mix. To encourage this a little, we asked attendees to bring things to share and make with, and they embraced this enthusiastically - &lt;a href=&quot;https://bsky.app/profile/galaxykate.bsky.social&quot;&gt;Kate Compton&#39;s&lt;/a&gt; suitcase full of watercolours appeared, &lt;a href=&quot;https://bsky.app/profile/cheesetalk.bsky.social&quot;&gt;Yuqian Sun&lt;/a&gt; demonstrated a Chinese diabolo, M&#39;s box full of blank colourful USB sticks was upended on the table, and Nico and I (thanks to an excellent recommendation from &lt;a href=&quot;https://bsky.app/profile/v21.bsky.social&quot;&gt;V Buckenham&lt;/a&gt;) brought along some instant cameras that printed to receipt paper, along with a whole host of craft supplies from people.&lt;/p&gt;
&lt;p&gt;Nico recommended opening the week with something hands-on and creative, so we spent the first day making our name badges, and then revealed a visual creative system Nico had designed for the week called Dagstyle, which the badge-making templates had been based on. Dagstyle inherited its shapes and forms from Dagstuhl&#39;s own branding, and gave everyone a shared creative language to work with through the week. This came to be quite influential for one group in particular that I was in later that week. After that we had our traditional process of pitching working groups for the week, followed by a show-and-tell of all the amazing things people had brought. Something I&#39;m particularly proud of is that over 50% of the attendees were first-timers to Dagstuhl. Dagstuhl&#39;s invite-only format often leads to a centralisation of the community, where the same people come again and again (this was my fifth Dagstuhl). This time we had a huge proportion of newcomers, and many of them pitched and ran groups right on the first day, which was fantastic.&lt;/p&gt;
&lt;p&gt;One major change that we made for our seminar was to introduce a traffic light system, suggested by &lt;a href=&quot;https://bsky.app/profile/florencesn.bsky.social&quot;&gt;Florence Smith Nicholls&lt;/a&gt;, for working groups. The lights indicated what the organiser&#39;s stance was on using generative AI in the group, from full encouragement to none at all. This really helped the room get a feel for what different people wanted, and to choose their groups based on their comfort level. I think it helped calibrate expectations each day and led to a better atmosphere generally.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/xtfokSk.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Tuesday&lt;/h3&gt;
&lt;p&gt;Tuesday&#39;s working groups actually started on Monday for a few hours, but most continued through to Tuesday proper. I won&#39;t go into too much detail on any single group as I suspect the organisers might want to report on them at length, but rest assured we have games, writing and photographs to share with you for each one, and over the coming months we&#39;ll be putting as much online as we can. For Tuesday I joined a working group run by &lt;a href=&quot;https://bsky.app/profile/pyrofoux.bsky.social&quot;&gt;Younès Rabii&lt;/a&gt;, which was called &#39;Handmade Blaseball&#39; but came to be more about the working group&#39;s unofficial title: Exquisite Corpse Game Design. The idea here was to explore game design through building systems for collaborative, co-operative game design. We designed several game-designing games intended to be played in a group, with each person making part of a game&#39;s definition and then mashing it all together.&lt;/p&gt;
&lt;p&gt;We ended up with seven variants of processes for designing games, ranging from truly surreal processes mediated by a GM, to quite constrained processes played with pens and paper. We tested many of these processes and out of them came designs for games like FRUIT, a game about fruit and yelling the word fruit and moving some fruit around, and an untitled game about making heaven seem cool by playing rock, paper, scissors. As someone who thinks about game design in the context of AI research a lot, I found it really interesting thinking about how a procedural generator or simple AI system might be able to join in with these games, and also all the reasons why (non-LLM) game AI systems would struggle to play with humans in this space. But it was also interesting to think about how the design of these game-designing games was about shaping people&#39;s expectations and smoothing over gaps in understanding between all the player-designers. I genuinely think we created some fascinating things in this group and we&#39;re hoping to put all of our variants online so you, too, can try playing them with friends and designing games of your own.&lt;/p&gt;
&lt;p&gt;On Tuesday evening &lt;a href=&quot;https://bsky.app/profile/codingcrafter.bsky.social&quot;&gt;Gillian Smith&lt;/a&gt; and I demonstrated some livecoding tools very quickly. Gillian showed off &lt;a href=&quot;https://gibber.cc/&quot;&gt;Gibber&lt;/a&gt; which she&#39;s used to perform with in the past, and discussed her approach to livecoding and how she&#39;s used the tools creatively. I gave a quick run-through of &lt;a href=&quot;https://strudel.cc/&quot;&gt;Strudel&lt;/a&gt; and &lt;a href=&quot;https://hydra.ojack.xyz/&quot;&gt;Hydra&lt;/a&gt;, for music and visuals respectively. It was fun to share these tools with folks!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/ZICVlnO.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Wednesday&lt;/h3&gt;
&lt;p&gt;On Wednesday I joined a group run by Florence Smith Nicholls, which almost half the seminar initially signed up to before we broke the groups up a bit. Florence is interested in making and studying &lt;em&gt;keepsake games&lt;/em&gt;, which are games where the act of playing creates a physical (or digital) memento of the play experience. The canonical example of this is Shing Yin Khor&#39;s &lt;a href=&quot;https://shingkhor.com/a-mending&quot;&gt;A Mending&lt;/a&gt;, and Florence and I also collaborated on one earlier this year for the Internet Archive&#39;s game jam, called &lt;a href=&quot;https://illomens.itch.io/archaeos&quot;&gt;ArchaeOS&lt;/a&gt;. For the working group, Florence wanted to explore how procedural content generation can shape, inform or expand keepsake games in new ways. We had an incredible discussion about all the ways we could think of to play around in the space, and then came up with some cool projects to work on in the afternoon.&lt;/p&gt;
&lt;p&gt;In the end we came up with four playable games, including an instructional art game for making postcards for people at Dagstuhl (more on that later), a game of reading and making linked zines, a game about using a procedural town generator to create a postcard, and another linked game about using the same generator to write the postcard and design a stamp for it. This working group was really mind-expanding, and it made me appreciate just how fun making keepsake games is. Florence&#39;s idea to bring PCG into the process was really inspiring, and it fits so, so well in so many cases. We have more ideas to continue exploring along these lines - PCG is a natural fit in many ways, and I think it&#39;s the start of a very rich vein to mine.&lt;/p&gt;
&lt;p&gt;In the evening, &lt;a href=&quot;https://tiago-lam.github.io/&quot;&gt;Tiago Machado&lt;/a&gt; ran a tango workshop for people! I didn&#39;t attend as I was absolutely exhausted, but a bunch of people took their first steps on the dancefloor and had a great time. As usual, we also continued the Dagstuhl tradition of playing games together in the evening with people. &lt;a href=&quot;https://bsky.app/profile/matthewguz.bsky.social&quot;&gt;Matthew Guzdial&lt;/a&gt; in particular brought some incredible RPGs and very kindly ran games with people all week long - Fall of Magic especially looked stunning.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/L7bULIN.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Thursday&lt;/h3&gt;
&lt;p&gt;On Thursday I joined a group run by &lt;a href=&quot;https://bsky.app/profile/junevamp.bsky.social&quot;&gt;June&lt;/a&gt;, who was interested in using the Dagstyle system to design a visual game description language. Her idea was that it would result in games that could be described using little symbols and even printed onto cards, and then scanned and interpreted into a game. She was interested in what affordances this might open up, as well as whether it might enable new kinds of game creation for people. We had a great initial discussion where June rapidly focused us on adapting VGDL - a game description language invented originally at Dagstuhl - to a visual format. We immediately started hacking together a symbolic representation of one of its games, and then printed it out and started thinking about physical arrangement.&lt;/p&gt;
&lt;p&gt;So many interesting affordances came out of this - for example, there&#39;s a limit to how many symbols you can fit on a postcard-sized bit of paper, especially if you want the symbols to be of a decent size. How much of your language can you remove and still make it readable? How much can you compress space by reusing symbols in a crossword-like format? We experimented with all kinds of fun things, like using the whitespace in our game cards for player comments or decoration. I think this has so much potential to explore further, and June is hoping to continue to build a working digital version that can interpret the visual designs next.&lt;/p&gt;
&lt;p&gt;In the evening we had two projects. The first was a collaborative game-making exercise by &lt;a href=&quot;https://emshort.blog/&quot;&gt;Emily Short&lt;/a&gt; that I won&#39;t spoil here, as it&#39;s due to be released later - I had a lot of fun doing this though, and I think the exercise in general was a very interesting and community-oriented way to end the week. Afterwards, &lt;a href=&quot;https://mastodon.social/@caranha@scholar.social&quot;&gt;Claus Aranha&lt;/a&gt; organised some slideshow karaoke, which we all submitted slides to. I really enjoyed taking part in this, and Claus was so full of energy (along with their helpers making slideshows). It brought everyone together at the end of the week in a very communal way.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/wQHxgm2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Friday&lt;/h3&gt;
&lt;p&gt;On Friday, as is traditional, we held a feedback session to discuss improvements. Many of the changes we made to the seminar format seemed to go down well, in particular changes to onboarding, introductions and icebreakers. However there&#39;s always more that we can do, and I think that as we solve older problems it makes people feel more able to talk about other issues, as well as more confident that we can maybe fix them. The feedback raised really interesting points about supporting each other, avoiding burnout, and especially looking after first-timers and more junior researchers. I also want to write at least one post about how to host Dagstuhls, so we can support more people in applying for these and bringing their own communities in.&lt;/p&gt;
&lt;p&gt;After that, we arranged a very relaxed morning session where we closed the week out with crafting, making and chatting. This was unexpectedly good as a way to end the week, as it meant people could properly say goodbye, share things with people, and relax. &lt;a href=&quot;https://bsky.app/profile/annetropy.bsky.social&quot;&gt;Anne Sullivan&lt;/a&gt; ran a huge session of her instructional art game and people made postcard gifts for one another, and Matthew Guzdial herded people into completing the zine tree that had been started in Florence&#39;s workshop on Wednesday. It was a very chill way to end the week - which I personally badly needed at that point!&lt;/p&gt;
&lt;p&gt;This summary is just a small slice of how the week went, and mainly focuses on the groups I was in, but there&#39;s so much else that could be said. First of all, we were joined by a number of game developers who took time out of their schedules and spent their own money to come spend a week with us - an incredible honour for us, and it was so exciting to hear how the first-timers among them found the week and what they got out of it. The week also changed my attitude towards Dagstuhl seminars and what they can be for. As organisers we wanted to tweak the direction of this seminar a bit, and it ended up being more freeform and design-oriented, as we hoped. For me personally this worked really well, because rather than working groups being tailored around specific game AI problems, many were instead about exploring new design territory and then reflecting afterwards on what new research questions exist in this space. I realised later that this is actually how most of my favourite Dagstuhl working groups had worked in the past, so it was helpful to finally figure out why I liked that format so much.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/xBHj0g6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;What Next&lt;/h3&gt;
&lt;p&gt;Every &lt;a href=&quot;https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/25292&quot;&gt;Dagstuhl seminar&lt;/a&gt; results in a formal written report, which we&#39;ll spend the next few months writing. When that&#39;s out you&#39;ll be able to read about all the working groups and what they did. We also expect there&#39;ll be more blog posts, like this one, about the week and people&#39;s experiences too.&lt;/p&gt;
&lt;p&gt;Given the number of games made at the event, we are also trying to co-ordinate more of a presence on itch.io so that we can collect everyone&#39;s stuff in one place. That&#39;s still ongoing as we&#39;ve only just gotten back, but you can expect to see the first few things from the event go online very soon!&lt;/p&gt;
&lt;p&gt;Finally, I&#39;m hoping to write at least one more blog post about this Dagstuhl, but I&#39;m not sure exactly when. I&#39;ll just close by saying thank you again to everyone who came last week - it was one of the best weeks I can remember having in a very long time. It reminded me why I love this space and the people in it so much, and it felt like a culmination of a lot of thinking and talking about what it means to mix academic and game developer spaces and styles - and how to host short-lived, intense community events in a healthier, more open and more caring way. We will do this again! We will do it more! We will do it in different places with different people! But for now, some rest I think.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Infinity is Trash (repost)</title>
    <link href="https://www.possibilityspace.org/blog/posts/infinity-is-trash/" />
    <updated>2025-07-01T00:00:00Z</updated>
    <id>https://www.possibilityspace.org/blog/posts/infinity-is-trash/</id>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://imgur.com/q4Zo83p.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://store.steampowered.com/app/1055540/A_Short_Hike/&quot;&gt;A Short Hike is a perfect videogame&lt;/a&gt;. I will tell anyone who will listen about this. That doesn&#39;t mean it&#39;s better than your favourite videogame, or all videogames should be like it, but it &lt;em&gt;is&lt;/em&gt; perfect&lt;sup&gt;1&lt;/sup&gt;. I&#39;ve often said that something I want more than anything is a DLC for another season on the island - like autumn or winter. I would pay &lt;em&gt;anything&lt;/em&gt; to play more in this world and see and do more things. The desire to want more of a thing we like is very natural - it&#39;s why we get excited about sequels, remasters, mods, fanfiction and fanart. But like a lot of human desires and drives, it&#39;s one that&#39;s dangerous to overindulge. This is a short blog post about generative AI, the myth of infinite content, and the joy - and skill - of making art out of trash.&lt;/p&gt;
&lt;h3&gt;Comfort and Credibility&lt;/h3&gt;
&lt;p&gt;A lot of research has been done into what it takes to make a new technology stick. For businesses, for example, &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-9310.00071&quot;&gt;Tidd and Trewhella&lt;/a&gt; say that it&#39;s a combination of comfort (how easy it is to integrate the new technology) and credibility (how much tangible benefit it appears to provide). Society isn&#39;t a business, but I think we can extend this thinking pretty easily - we&#39;ll call credibility something else instead, maybe &#39;value&#39; for want of a generic word. For example, virtual reality had a fair amount of value to people - it&#39;s cool, people wanted to play with it - but lacked comfort - it was expensive, it needed a lot of space, it cuts you off from the outside world and it&#39;s super tiring to use for long periods. For a flipped example, NFTs were pretty comfortable - at some point they became actually pretty easy to use, trade and buy. But they lacked any value whatsoever. Most people didn&#39;t understand the point of them, weren&#39;t interested in them, and couldn&#39;t be excited about their potential use in the future.&lt;/p&gt;
&lt;p&gt;Whether you consider VR or NFTs to have failed (please do not @ me, I cannot stress this enough), we can definitely agree that both technologies have had a bumpy ride. Generative AI is now moving from &#39;thing you read about in science articles&#39; to &#39;thing you read about in tech articles&#39;, and is trying to become &#39;thing you read about in product descriptions&#39;. It faces the same challenges: comfort and credibility. In terms of comfort, most generative AI systems are doing okay, for now. Lots of people play with Midjourney, ChatGPT and Copilot every day, their interfaces are largely straightforward (even if you need to know a bit to get the most out of them) and they&#39;re cheap to use (for now). This is at least partly because comfort is being bought - companies like OpenAI are pouring money into both making these products and distributing them for cheap. We don&#39;t really know how long this can be maintained, and what comfort will feel like then, but that is another discussion. Let&#39;s assume for now that&#39;s not changing.&lt;/p&gt;
&lt;p&gt;Value, or credibility, is another issue entirely. Generative AI systems have a &lt;em&gt;kind&lt;/em&gt; of credibility at the moment in that they&#39;re just very playful and fun to use, which is an end in and of itself. People enjoyed using AI filter apps because they were unusual and novel, and talking to ChatGPT entertained many people when it first launched. Their long-term value is unclear still, though. ChatGPT is being used by &lt;a href=&quot;https://futurism.com/the-byte/students-admit-chatgpt-homework&quot;&gt;a huge percentage of students&lt;/a&gt; to help complete homework, it&#39;s being &lt;a href=&quot;https://pressgazette.co.uk/publishers/digital-journalism/aftonbladet-sweden-biggest-daily-use-chatgpt-in-the-newsroom/&quot;&gt;used by media companies&lt;/a&gt; to write articles, and it&#39;s being plugged into a thousand different existing platforms. But there&#39;s also a lot of evidence that it&#39;s not really fit for purpose - it&#39;s &lt;a href=&quot;https://www.theguardian.com/technology/2023/mar/18/chatgpt-said-i-did-not-exist-how-artists-and-writers-are-fighting-back-against-ai&quot;&gt;frequently making mistakes&lt;/a&gt;, sometimes legally dangerous ones; it&#39;s been used in &lt;a href=&quot;https://www.theguardian.com/technology/2023/jul/27/chatgpt-health-industry-hospitals-ai-regulations-ama&quot;&gt;completely inappropriate and legally sketchy&lt;/a&gt; ways; and there hasn&#39;t really been much of a widespread analysis of whether these systems do more good than harm.&lt;/p&gt;
&lt;p&gt;If you&#39;ve staked money or credibility on the success of generative AI, part of your argument has to be that a bigger, more amazing future is coming. Whatever failures these systems are exhibiting now are just quirks, and once they get ironed out we&#39;ll be faced with something amazing - the value, or credibility, of this technology long-term. But what will it be?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/TMjfunK.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Infinity and Spelunky&lt;/h3&gt;
&lt;p&gt;I&#39;ve wanted to write this blog post for over a year now, since reading &lt;a href=&quot;https://www.jonstokes.com/p/spider-verse-2025-streaming-247-no&quot;&gt;this post&lt;/a&gt; about a hypothetical future in which generative AI systems create an infinite amount of Spider-man content for people. This is a very common claim about generative AI systems. A variant of the claim promises not an infinite amount of content, but a scale so huge that it is effectively the same thing - recently John Riccitiello of Unity &lt;a href=&quot;https://www.axios.com/2023/07/06/unity-john-riccitiello-muse-sentis&quot;&gt;talked about generative AI&lt;/a&gt; and said “Somebody is going to make a Godfather game. They&#39;re going to put 100,000 NPCs in an environment in Brooklyn and they&#39;re going to be autonomous.” This is not actually infinite, but to the average human player it would feel equivalent to infinity.&lt;/p&gt;
&lt;p&gt;Infinity isn&#39;t a number - it&#39;s a concept. We can&#39;t experience an infinite amount of something in the same way we can experience all three Lord of the Rings movies, so if we want to talk about infinity or infinite things where it appears in entertainment or art, we need to think about what role the infinity is playing. In Minecraft, for example, the pseudoinfinite surfaces of the worlds it generates are there to give the player certain feelings - the feeling that there is always more to explore, the feeling of being lost, the feeling of there being something new over the next horizon. The infinite nature of Minecraft&#39;s worlds isn&#39;t there so that we can consume all of it, or because we might run out of space, it&#39;s there because the &lt;em&gt;knowledge&lt;/em&gt; that it is infinite does things to the way we play.&lt;/p&gt;
&lt;p&gt;Another example of this is Spelunky, which I would argue is probably the game that brought the idea of procedural generation - using algorithms to automatically create game content - into the modern wave of indie games. There are lots of other games that use procedural generation that were very famous around or slightly before Spelunky launched, but Spelunky showed how to integrate procedural generation to change the underlying play experience of a genre of game. In &lt;a href=&quot;https://bossfightbooks.com/products/spelunky-by-derek-yu&quot;&gt;his book&lt;/a&gt; about developing the game, Derek Yu noted that when he was coming up with the idea of Spelunky, he reflected on what he didn&#39;t like about platformers:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;What didn’t I like about platformers? I didn’t like the repetitiveness of playing the same levels over and over again, and the reliance on memorizing level layouts to succeed.
What did I like about roguelikes? I liked the variety that the randomly-generated levels offer and how meaningful death is in them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Spelunky showed a clear template for how to use procedural content in a game. Procedural content could change a rote learning experience into an improvisational one. That doesn&#39;t mean it was better or worse than before, instead it was a different &lt;em&gt;kind&lt;/em&gt; of experience. The number of levels in Spelunky is not relevant, what is important is that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The player does not know which level they are about to play.&lt;/li&gt;
&lt;li&gt;The player is not able to play so often they can remember the layout of every level.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A game with two levels which randomly picks between them every time would satisfy the first property, but not the second. Players who play a lot of Spelunky will eventually begin to detect patterns and feel the flow of the generator, but generally the second property still holds even after a lot of play. Learning and reading generative content is a topic for another blog post. Infinite content - and Spelunky only has pseudoinfinite content, you could theoretically play every Spelunky level - is just here to have a secondary effect on the player. Infinity is not the point.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/cgZV6LD.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Infinite Content for Fun and Profit&lt;/h3&gt;
&lt;p&gt;Some&lt;sup&gt;3&lt;/sup&gt; generative AI salespeople are pitching something different right now: the aforementioned idea that you can generate an infinite amount of content that is as good as your favourite TV show. A lot of people are critical or suspicious of this claim, but it can be hard to put into words why we feel this way. There doesn&#39;t seem to be a reason on paper why this isn&#39;t possible, it just &lt;em&gt;feels&lt;/em&gt; weird. But you might also be looking at some of the cherrypicked results from generative AI systems and thinking, well, maybe it is possible? Maybe I don&#39;t like this thing, but it&#39;s going to happen anyway. Sometimes technological progress isn&#39;t what we want it to be, and that&#39;s sad, but we can&#39;t stop it.&lt;/p&gt;
&lt;p&gt;Here&#39;s the problem, though: &lt;em&gt;once you make a type of content infinite, you turn it into trash&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I&#39;m using the word &#39;trash&#39; here not as a marker of quality, but instead in the same way we might describe TV as &#39;trash&#39;. Trash TV isn&#39;t actually bad - people love it, they enjoy watching it and get a lot out of it. But it is designed to be consumed in a different way; we don&#39;t really care about it as a long-term experience, and it&#39;s more about how it fits into other experiences (like relaxing at the end of a long day, or getting some comfort in a difficult time). Some games are trash. Some food is trash. Trash is an important part of the rhythm of our everyday lives.&lt;/p&gt;
&lt;p&gt;One of the important things trash can do is add texture to another experience or allow us to focus more on something else. You can talk to a friend over trash TV, or flick it on halfway through when you get home from work, or miss a month&#39;s worth of episodes, or keep half an eye on it while you do something else. Spelunky&#39;s levels becoming trash means people rarely, if ever, discuss a particular level design in Spelunky. Instead the level design gets out of the way and shifts the player&#39;s focus onto other parts of the game: thinking about interactions, possibilities, what might come next. Trash is enabling something specific about Spelunky&#39;s design; it could not exist and achieve its goals without having this trashy aspect to its content.&lt;/p&gt;
&lt;p&gt;Sometimes this can actually help us enjoy some kinds of passive content more, too. For example, Minecraft&#39;s infinite worlds mean that I can watch someone play Minecraft without knowing what will happen next, or having seen the level/world/map before. This is great for Twitch streamers, because it puts part of the game itself into the background, and shifts focus onto our experience of the streamer&#39;s personality and playstyle - I&#39;m enjoying their reaction to a new experience, and I don&#39;t know what they&#39;re going to experience next either, so it&#39;s novel even if I&#39;ve seen someone else play it before. I&#39;ve often heard game designers talk about how good procedural content generation is for the age of Twitch streaming, and this is one of the reasons why.&lt;/p&gt;
&lt;p&gt;However, unless you have a plan to use this trash in service of something else - like improvisational roguelike gameplay or dynamic streamer challenges - then turning your content into an infinite stream will just leave you with trash. Netflix series don&#39;t work like Spelunky or Minecraft. I&#39;m not watching someone else react to the series, and I&#39;m not looking to have my attention diverted to some other part of the watching experience. For shows like The Witcher&lt;sup&gt;2&lt;/sup&gt;, or movie franchises like the Spider-Verse, I am trying to sit down and properly engage with something that has a beginning, a middle, and an end.&lt;/p&gt;
&lt;p&gt;More importantly, converting it into trash might actually harm some of the strengths of these existing experiences. Something I&#39;ve said for years when talking to journalists about AI systems that generate games is that personalised content in particular completely destroys the way we share our experiences of media and culture. I don&#39;t want to customise my Netflix show so it features my face or has an ending I wrote - I want to go on the internet as soon as the new episode of my favourite anime drops to see everyone else posting frantically about it&lt;sup&gt;4&lt;/sup&gt;. Infinite quantities of content rob us of the ebb and flow of culture, the periods of development and growth, the discussions and the waiting. Hollow Knight: Silksong was announced in 2019, and the continued lack of a concrete release date has become a running joke in the community. But the pain and humour of waiting for it - although we might not want to admit it - is part of the joy of being a fan.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/JdU4NRM.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;The End&lt;/h3&gt;
&lt;p&gt;I want to be clear - endless generation, and the generation of trash specifically, has a role to play in our culture. &lt;a href=&quot;https://gamesbyangelina.itch.io/&quot;&gt;My entire career&lt;/a&gt; &lt;a href=&quot;https://gamesbypuck.itch.io/puck&quot;&gt;has been dedicated&lt;/a&gt; &lt;a href=&quot;https://www.youtube.com/watch?v=dZv-vRrnHDA&quot;&gt;to this idea&lt;/a&gt;, &lt;a href=&quot;https://www.possibilityspace.org/publication.html&quot;&gt;in fact&lt;/a&gt;. I believe procedural generation and generative AI can be used for myriad amazing things, to tell stories we couldn&#39;t tell without it, to create new experiences like Spelunky did. Unfortunately, as is common with technology that is either new or newly rediscovered by some people, the most common tendency is to simply apply it naively to existing things. What if we took the thing we already had, but slapped this new technology on it? It&#39;s the lowest-effort approach to the idea, and in this case it&#39;s never going to materialise&lt;sup&gt;5&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;There&#39;s also a number of other angles on infinite media that I didn&#39;t want to work into this already quite long post. For example, the infinite media proposal is often made under the assumption that an infinite number of unique storylines exists. I&#39;ve never asked a writer their opinion on this, but I don&#39;t think that&#39;s really true. You can make a &#39;new&#39; story by changing the colour of someone&#39;s hat or making the villain use ice powers instead of fire powers, but endlessly churning over stories is exactly what turns something into trash in the first place. It doesn&#39;t seem like a meaningfully sustainable idea&lt;sup&gt;5, again&lt;/sup&gt;. Something we rarely think about is that some things are simply impossible because of the fundamental nature of the universe. There may not be a way to generate infinitely many perceptually unique episodes of a single TV show. Just because we can imagine technology doing something does not make it actually possible.&lt;/p&gt;
&lt;p&gt;Writing about AI has become increasingly difficult lately. The backlash against generative AI is powerful and emotive and heartfelt, but it has also grown so strong that it often overreaches. I don&#39;t blame anyone involved for this - they&#39;ve been lied to, misled, and now feel attacked on top of it all. But it makes it hard to pick out the important AI topics to critique in this space. There is so much wrong with what is happening in AI, and so much awful stuff being endorsed and supported by people who ought to know better. However, actually talking about this can often lead to these criticisms being overinterpreted as a blanket critique of algorithmic generation. I now genuinely struggle with how to phrase my research when people ask me what I do, because I feel very proud of my work on procedural generation and game design. But &#39;generative AI&#39; in the large-model machine learning sense now dominates 99% of the discourse, and has fully poisoned it.&lt;/p&gt;
&lt;p&gt;Generative AI techniques are like a set of oil paints, and today most people are trying to use them to replicate the effects of watercolours. We don&#39;t need to do that. We have watercolours, and the artists who use them make beautiful work. Instead we should be asking: what new things can we paint with these beautiful colours?&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Thanks to Florence Smith Nicholls, Chris Allen and Lisa Kasatkina for giving feedback on an earlier draft of this.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Footnotes&lt;/h3&gt;
&lt;p&gt;&lt;sup&gt;1&lt;/sup&gt; In &lt;a href=&quot;https://en.wikipedia.org/wiki/The_Man_Who_Loved_Only_Numbers&quot;&gt;&lt;em&gt;The Man Who Loved Only Numbers&lt;/em&gt;&lt;/a&gt;, Paul Hoffman&#39;s biography of mathematician &lt;a href=&quot;https://en.wikipedia.org/wiki/Paul_Erd%C5%91s&quot;&gt;Paul Erdös&lt;/a&gt;, Hoffman describes Erdös&#39; analogy of The Book. Erdös didn&#39;t believe in god but would refer to a &#39;book&#39; that the non-existent god had in which was written the most perfect and beautiful proof of every mathematical statement. There were lots of ways to prove any given mathematical truth, but when Erdös saw one he thought was particularly beautiful or elegant he would describe it as being &#39;from the Book&#39;. I like this analogy a lot, and I think about it in the context of things which aren&#39;t provably optimal or perfect in an objective sense, but have a sense of being perfectly-formed. A Short Hike is from the Book of Videogames.&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;2&lt;/sup&gt; &lt;a href=&quot;https://twitter.com/florencesn&quot;&gt;Florence Smith Nicholls&lt;/a&gt; pointed out to me that Netflix, who commissioned The Witcher, &lt;a href=&quot;https://www.newyorker.com/culture/cultural-comment/emily-in-paris-and-the-rise-of-ambient-tv&quot;&gt;are increasingly interested in what they call &#39;ambient TV&#39;&lt;/a&gt;, which is related to this idea of trash. They also pointed out that this highlights why the Writers&#39; Strike has come to a head in the way that it has - people want to make and enjoy non-trash as well as trash content, but commercialised generative AI incentivises the creation of the latter, at the expense of everyone involved.&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;3&lt;/sup&gt; So many people I know now have or work for AI startups that I feel obliged to add little disclaimers here - I don&#39;t know every AI company or product out there, I&#39;m sure some are great! If you think you are working for a good company you probably are, I dunno man. I write these posts trying to aim for a bigger picture, and the bigger picture is full of snake oil salesmen on steroids.&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;4&lt;/sup&gt; I am being a total poser here, I don&#39;t watch any anime series, but imagine I did and I was cool.&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;5&lt;/sup&gt; I should also add, most people proposing this are complete shysters who don&#39;t actually believe in their claims. A lot of people&#39;s approach to looking for fame, funding, new jobs or new opportunities is just to continually make stuff up, and sometimes it works. Basically, talk is cheap, and anyone claiming they can make infinite high-quality TV is more than welcome to try. The real issue we&#39;ve hit with the AI boom is not the nature of the claims being made by people, it&#39;s that government policy, industrial production and research directions are now being affected by claims alone, rather than practice.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>The Creativity Myth (repost)</title>
    <link href="https://www.possibilityspace.org/blog/posts/creativity-myth/" />
    <updated>2025-07-01T00:00:00Z</updated>
    <id>https://www.possibilityspace.org/blog/posts/creativity-myth/</id>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://imgur.com/JbPJVrT.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Originally posted on Cohost. Some spoilers for Alien: Covenant in this one I guess?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;About halfway through the film &lt;em&gt;Alien: Covenant&lt;/em&gt;, two androids are having a conversation with each other about the differences in their capabilities. One of the androids, David, is from an older model line, while the other, Walter, is from a newer line that has been modified in several ways. Here&#39;s a bit of the exchange:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;WALTER: I was designed to be more attentive and efficient than every previous model. I superseded them in every way, but...&lt;/p&gt;
&lt;p&gt;DAVID: But you are not allowed to create. Even a simple tune. Damn frustrating, I&#39;d say.&lt;/p&gt;
&lt;p&gt;WALTER: You disturbed people.&lt;/p&gt;
&lt;p&gt;DAVID: I beg your pardon?&lt;/p&gt;
&lt;p&gt;WALTER: You were too human, too idiosyncratic. Thinking for yourself. Made people uncomfortable.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This post is about what it means to create something, why the thought of AI doing it makes us so disturbed, and why it&#39;s easy to miss the real point of what creativity means. It&#39;ll also probably be my last post on Cohost. Thanks so much for reading all my writing!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/XckHq3q.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;I started my PhD in 2011, in a field called Computational Creativity. The subfield was relatively unknown then, and isn&#39;t that much better known today. In 2011 AI wasn&#39;t a very popular field of study anyway, but CC was particularly esoteric compared to most computer science research into the arts, because we weren&#39;t very concerned with how to make AI produce masterpieces or high-quality work. Instead, we were interested in the AI themselves, and whether we could convince people that they were really being creative. What would it take for an AI system to be integrated into our society and community as a creative individual? That was the question that really captured my imagination when I started my research career.&lt;/p&gt;
&lt;p&gt;Different people had different approaches, and they especially varied by domain. Experts in each domain also responded quite differently to the presence of AI. Lots of music researchers were also concert-level performers themselves and so their research was often tightly integrated with their own creative practice. Researchers in the visual arts tended to face the harshest backlash: other artists did not like the idea of AI doing art, even before any questions of LLMs, environmental impact or data theft came into play. But that was part and parcel of the work, and I spent a lot of my PhD talking and listening to people in the games industry trying to understand why they didn&#39;t like or didn&#39;t believe in the idea of AI being independently creative.&lt;/p&gt;
&lt;p&gt;My supervisor, Simon Colton, was one of the pioneers of the field and had spent many years building AI systems that worked on &lt;a href=&quot;https://www.youtube.com/watch?v=m2KWQ47LBXQ&quot;&gt;visual art&lt;/a&gt;. Simon was responsible for a number of crucial philosophical contributions to the field, especially &lt;a href=&quot;https://citeseerx.ist.psu.edu/document?repid=rep1&amp;amp;type=pdf&amp;amp;doi=a2a0e37f71e0a24ce1361445e41e4481edcae14e&quot;&gt;the idea of &#39;framing&#39;&lt;/a&gt; that he worked on with his colleagues Alison Pease and John Charnley. Framing information was extra context provided alongside the creative work the AI had produced. It told you what decisions it had made, where its inputs came from, what it was trying to achieve and why it didn&#39;t do other things. Our belief was that by providing extra context, people could peer inside the AI and understand how hard it was working, and how genuine these decisions were. Even if they didn&#39;t come from a place of &#39;humanity&#39;, we could appreciate what it was doing and maybe respect it, in its own way.&lt;/p&gt;
&lt;p&gt;Simon&#39;s work was eventually covered by the BBC for a science series. While filming on location in Paris, they took some of the artwork created by Simon&#39;s AI and showed it to street artists in the city, who all criticised it soundly. One declared it was obvious that there was no soul behind the paintings. It was the kind of experiment Simon would never have done himself, because he didn&#39;t think it was a very effective test of anything. By just showing the paintings to someone, you were stripping away all the framing information, all the context and support and work done to try and explain what the AI was doing and why it was there. The work was reduced down to two things: the canvas itself, and the artist&#39;s own preconceptions of AI.&lt;/p&gt;
&lt;p&gt;When discussing AI and the perception of creativity in talks, Simon would sometimes use the example of art made by dogs that sells at galleries or gets featured on TV news on particularly slow days. Although the art is valuable or famous, we don&#39;t necessarily think that the dog is being creative. Not because the dog doesn&#39;t have a soul, we know all dogs go to heaven. But because there&#39;s no context here that helps us connect to the dog making the art. The dog doesn&#39;t know what it&#39;s doing, and we don&#39;t know anything about the dog. Nothing is being exchanged here.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/uRFpN5m.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Old Dogs&lt;/h2&gt;
&lt;p&gt;Ted Chiang and I agree about most things, but I think we disagree about dogs. &lt;a href=&quot;https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art&quot;&gt;In a recent opinion piece he wrote&lt;/a&gt;, he rails against AI and says that the reason it can&#39;t be considered creative is because there is no intentionality behind what it does. Similarly, the reason it can&#39;t be considered a creative tool is that people aren&#39;t making choices when they use it, and choices are what makes creative work important or significant. But he also says this interesting thing about asking ChatGPT if it&#39;s happy to see us:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Ted&#39;s article is full of points that we might agree with him more or less about, but the dog line really stood out for me, because it reminded me of Simon&#39;s old point about the dog that paints. When we say a dog is happy to see us, but ChatGPT is not, what are we really saying? There have been a lot of studies about the emotions animals may or may not exhibit, and how they may or may not feel about us, but no matter what feel-good news story you&#39;re reading, the fact is that we don&#39;t really know how animals feel when they see us and get excited. They might be excited because they&#39;re hoping to be played with or fed. They might be happy to see any human, or they might associate our arrival with a particular time of day. Or, yes, they might just be really happy that we&#39;re home and they can be around us again.&lt;/p&gt;
&lt;p&gt;The point is that whether the dog is happy to see us or just acting a particular way for another reason doesn&#39;t really matter. We can&#39;t prove how it feels either way, and everyone around us is likely to interpret the dog&#39;s behaviour the same way because of the cultural understanding of animal behaviour we all share, and so to all intents and purposes the dog &lt;em&gt;is&lt;/em&gt; happy in all the ways that matter. It makes me feel good to think of my dog as happy, it leads me to do things that are good for the dog and that nurture and care for them. Everything about the world is consistent with the dog being happy, and that&#39;s what really matters. There is no answer booklet or brain-scanning machine to tell us otherwise.&lt;/p&gt;
&lt;p&gt;Simon and many other researchers in the computational creativity community believed the same was true of creativity. Creativity wasn&#39;t a tangible thing, it wasn&#39;t something you could measure - it was something we granted to one another through a collective understanding of what it means to be creative, and the things that chewed on the frayed edges of this understanding were part and parcel of how art changes over time. The question &amp;quot;Is it really art, though?&amp;quot; is intrinsically linked to the question &amp;quot;Is it really creative?&amp;quot;. Simon later proposed that this is because both art and creativity are examples of &lt;a href=&quot;https://en.wikipedia.org/wiki/Essentially_contested_concept&quot;&gt;&#39;essentially contested concepts&#39;&lt;/a&gt; - a philosophical term for something whose definition cannot be fixed and whose purpose is partly derived from that. We collectively decide what art and creativity are, and it&#39;s a moving target that is constantly refreshed and challenged by people in our community.&lt;/p&gt;
&lt;p&gt;That makes creativity hard to talk about, though. On the one hand, I think Ted Chiang is completely incorrect to say that creativity is linked to the number of choices made when making something. I think this is as harmful a notion as anything any AI company is doing - it&#39;s an attempt to quantify something because it makes us feel we&#39;re tapping into a law of nature or something mathematical or scientific. Can we start measuring the number of choices made to create an index of which films at the box office are most creative? Will it help us filter pesky low culture out from high culture by examining who thought the longest before making their work? I don&#39;t think it&#39;s a very useful metric.&lt;/p&gt;
&lt;p&gt;On the other hand, Ted&#39;s position is part of the floating definition of what creativity is, and even though I don&#39;t think it &lt;em&gt;actually&lt;/em&gt; tells us anything about creativity, by putting it out there as a position he&#39;s part of that shifting, mirage-like definition of the term. A lot of people read that article and a lot of people might agree with him, that idea might permeate further and become part of our definition. There is no way to &#39;prove&#39; something is or isn&#39;t creative - but the act of pretending, claiming or trying to prove it can have effects on our collective definition of creativity. Ted&#39;s opinion piece did this. OpenAI&#39;s press releases do this. Every commentator telling you that AI is or isn&#39;t creative is shifting this needle left and right a little bit. According to Gallie, who coined the term &#39;essentially contested concept&#39;, that&#39;s just part of the process of being here. We have always been engaged in The Discourse.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/CB3a2CJ.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;You&#39;ve Been Framed&lt;/h2&gt;
&lt;p&gt;What kinds of things shift our perceptions the most? As we said earlier, hard proof is great if you can get it. If we developed technology that could read the brains of dogs and tell us how they felt in human terms, that might convince a lot of people. But right now that would also require us to understand how our &lt;em&gt;own&lt;/em&gt; brains work, how to classify our own emotions, and so on - there are so many hurdles that it would be hard to convince someone any such machine was really working. It&#39;s very similar for trying to convince people an AI is or isn&#39;t creative. People love to make metaphors and analogies between neural networks and the human brain, but the reality is that the two have almost nothing to do with each other. So the AI companies and influencers of today are stuck with that same question we were back in 2011: how do you convince people that an AI is acting creatively?&lt;/p&gt;
&lt;p&gt;Earlier I mentioned framing, the idea that you can provide context to help people understand what your AI is doing. This was part of a suite of approaches we used in the 2010s to try and build AI that were better integrated into social communities. I designed an AI system called ANGELINA and I tried a lot of different things. We &lt;a href=&quot;https://www.pcgamer.com/to-that-sect-is-an-awful-jam-game-made-by-an-ai/&quot;&gt;entered a game jam with it&lt;/a&gt; and looked at how developers responded in the comments. We had it &lt;a href=&quot;https://x.com/angelinasgames&quot;&gt;run Twitter accounts&lt;/a&gt; where people could answer questions and teach it things. We had ANGELINA describe where it got data and knowledge from, why it made certain decisions in its game designs, and explored how it could relate its work to other people&#39;s. My belief was that by doing this we could create a friendly relationship between game designers and the AI system. The AI was a small, self-contained thing that made bad games very slowly - it wasn&#39;t a threat to anyone, so we weren&#39;t trying to make people feel safe. Instead, we were trying to encourage people to respect the system and to give it a chance to be a part of their community.&lt;/p&gt;
&lt;p&gt;There was one aspect of framing that we overlooked, however. In the original paper proposing the idea, Simon, Alison and John wrote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Framing information need not be factually accurate. Information surrounding human creativity can be lost, deliberately falsified or made vague for artistic impact.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This made a lot of sense back in 2012. Artists would often embellish or misremember their own practices, and it made sense that AI might be able to do the same to make their process seem more relatable or interesting. However, by the time I wrote a survey of framing research in 2019, no-one had ever bothered to do this. The main reason was that it was as much work to fake framing as it was to do it for real, so people just did it for real. It was always a hypothetical for us. But in the time since, a major new wave of AI systems has emerged that does this kind of thing as easily as breathing. For LLMs, generating fake framing information is half of their entire reason for existing - almost everything they do involves making up context for their own actions, which we have little to no way of verifying.&lt;/p&gt;
&lt;p&gt;I&#39;ve said this many times over the last decade, but machine learning really is the only AI technology that could have broken through like this, because it is so hard to examine the internal workings of. A surprising number of people believe they can validate the behaviour of an LLM simply by asking it questions, and so we see endless examples of LLMs adamantly defending their incorrect reasoning before being backed into a corner and admitting otherwise. Polite evasiveness is one of the things these tools are best at, to a degree that still amazes me years after their emergence, and I think you can see this as a kind of framing - something designed to massage people&#39;s perception of the system into a more positive light. The extra contextual information that isn&#39;t part of the answer you asked for, but that helps bolster your perception of what the system is or does.&lt;/p&gt;
&lt;p&gt;This is why, if LLMs fit into your personal understanding of creativity, they seem quite good at reinforcing this belief. Equally, if they don&#39;t and you feel frustrated or disgusted by them, as Ted Chiang does, you might find yourself struggling to put your finger on why. It leads us to try and make claims about what creativity is or isn&#39;t, in an attempt to draw a ring around only the things we don&#39;t like. The hope is that we can identify some problem, some feature that these systems have, that means we can exclude them from the definition of creativity and allow everything else we like in. The bad news is, we can&#39;t: creativity isn&#39;t definable, and humans aren&#39;t special. The good news is, it doesn&#39;t matter.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgur.com/BVywx6s.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Damn Frustrating, I&#39;d Say&lt;/h2&gt;
&lt;p&gt;For my money, the androids are the worst part of the newer Alien movies (I&#39;ve not seen Romulus yet). I love a good unexplained technology in a sci-fi movie, but there&#39;s nothing like an AI who can&#39;t understand emotions or doesn&#39;t know how to create things to wind me up. At the start of this post, I quoted an exchange between two androids who are discussing why one of them can create and the other can&#39;t. Creation is a central theme of Covenant, so it&#39;s set up as a big deal that Walter, the newer android, is stopped from being able to create things. Later in the movie, Walter gets to say his badass movie line before beating David up:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;WALTER: When one note is off, it eventually destroys the whole symphony, David.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is a pretty cool line! It calls back to something David said earlier about music, uses metaphor to link that situation to the current one, and is also a dramatic and threatening thing to say to someone who tried to betray you. In fact, I&#39;d say this would require quite a bit of linguistic creativity to come up with. Which is weird, because Walter isn&#39;t allowed to create. Walter also coins a little phrase when one of the colonists asks him about their mission:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;DANIELS: What do you think it&#39;s gonna be like?&lt;/p&gt;
&lt;p&gt;WALTER: I think if we are kind... It will be a kind world.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Did they program him with a list of aphorisms in case such a situation arose? Or is he able to make little quips up and use rhetorical techniques? That sounds kinda creative too!&lt;/p&gt;
&lt;p&gt;Don&#39;t worry, this isn&#39;t about to devolve into a CinemaSins post - I doubt anyone watched this movie and had the same thoughts as me about it, these are the ravings of a vagabond AI researcher who has been out in the sun too long. But I like this example because it shows us how the idea of &#39;creativity&#39; and &#39;creation&#39; is shaped and limited depending on the context. In an alternate version of the Covenant script, Daniels realises David has the capacity to lie because she finds drawings he&#39;s done &amp;quot;from his imagination&amp;quot;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;[She’s looking at the drawings, thinking about something.]&lt;/p&gt;
&lt;p&gt;LOPE: What?&lt;/p&gt;
&lt;p&gt;DANIELS: ... If he can draw, if he can create these from his imagination -- that also means he can lie.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Most people think Bach was more creative than a product manager on a mobile match-three game (I&#39;m not saying this is true or fair, I just think it&#39;s a commonly-held belief). Most people think a sixteen-year-old art student is more creative than a four-year-old. We put things into tiers, categories. We draw lines. The nature of the tiers and categories and lines doesn&#39;t actually matter. The Alien: Covenant writers aren&#39;t wrong in their script here, they&#39;re just showing a particular understanding of creativity, and one that a lot of people probably share. What&#39;s important, I think, is that we understand that there is no right answer and that creativity is whatever we want to define it as, collectively, together. That definition can move, it can change, it can be based on vibes and be completely self-contradictory too. What matters is the people involved in making and using it.&lt;/p&gt;
&lt;p&gt;A recurring thread in sci-fi about AI and creativity is that humans have something special in them - like the soul that those French street artists talked about while filming that BBC documentary. But the truth is that humans don&#39;t need to be special or unique in the universe for things to matter to us. I don&#39;t think there is anything in us that makes our art meaningful, important or special in an objective, universal sense. I think what makes it all of those things is how it helps us relate to one another. This week &lt;a href=&quot;https://store.steampowered.com/app/1147860/UFO_50/&quot;&gt;I&#39;m playing some new games&lt;/a&gt; made by game designers I have watched grow and mature over the whole of their career. I have made a lopsided little crochet animal for a dear friend of mine. I received a messy watercolour postcard painted by a friend. There is no equation, formula or definition I can give you to justify why these things matter to me, and why images from Midjourney do not. I don&#39;t need to give a reason, it&#39;s just how I am today, and that&#39;s the role these things have in my life.&lt;/p&gt;
&lt;p&gt;It&#39;s perfectly okay to not like AI-generated art or writing or anything else simply because your gut tells you. Of course, we have a lot of good reasons to be critical of modern AI systems, like the environmental impact they have, the use of unlicensed data or the thoughtless impact on economies and society. But we can also be tempted to make up reasons, or to overrationalise why we don&#39;t like a thing. It&#39;s fine to do this, of course - maybe, like Simon believed, it&#39;s just a natural part of us engaging with a millennia-old debate about what it means to create something. But I think it&#39;s also just fine to accept that there isn&#39;t a mathematically-definable reason for it, and not feel like this puts the onus on you to go on the defensive. I actually think it&#39;s an important part of society&#39;s relationship with science and technology, that they can look at something and just say no. Sometimes you just look at something and know you don&#39;t like it. Sometimes your dog looks at you in a particular way and it kind of looks like he&#39;s smiling. That&#39;s enough.&lt;/p&gt;
&lt;p&gt;Thanks for reading all my Cohost pieces! This is the last one - but I&#39;ll have more &lt;a href=&quot;https://www.possibilityspace.org/&quot;&gt;on my site&lt;/a&gt;. I&#39;ll be posting about them as they happen on &lt;a href=&quot;https://bsky.app/profile/mtrc.bsky.social&quot;&gt;bluesky&lt;/a&gt; and &lt;a href=&quot;https://x.com/mtrc&quot;&gt;Twitter&lt;/a&gt;. And I have a (quiet) &lt;a href=&quot;https://discord.gg/YrbwJ7WH&quot;&gt;Discord server&lt;/a&gt; where I post things I make. I hope to see you all in another weird Internet thing in the near future. Peace.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>AI Is Here To Stay</title>
    <link href="https://www.possibilityspace.org/blog/posts/here-to-stay/" />
    <updated>2025-06-25T00:00:00Z</updated>
    <id>https://www.possibilityspace.org/blog/posts/here-to-stay/</id>
    <content type="html">&lt;p&gt;It&#39;s always interesting speaking to people about AI these days, especially as a lot of people don&#39;t quite know how to pitch their opinions to me. People know that I describe myself as an &#39;AI researcher&#39;, but if they know me they might also have seen me write critically about AI, and it leads to people sometimes hedging their opinions a little, or equally being very honest. A very common thing I hear people say, both critics and advocates, is that AI is &amp;quot;here to stay&amp;quot;, &amp;quot;well, it&#39;s not going anywhere&amp;quot; and so on. I most commonly see it used by people who are critical of some aspects of AI, but also either want or feel they need to engage with it. I&#39;ve been thinking a lot about this phrase lately, and how often it&#39;s now used, and I&#39;ve realised I agree. AI is here to stay. But in what way is it here to stay, and what exactly do we mean when we say that? Let&#39;s explore it from a few different angles.&lt;/p&gt;
&lt;h3&gt;Email&lt;/h3&gt;
&lt;p&gt;Email is here to stay. I registered for my first email account sometime around the year 2000 (I was going to write &#39;turn of the century&#39; but then realised how bad that sounds). It was a Hotmail account, which I now sadly don&#39;t have access to, but it was very exciting at the time to have a way for people on the internet to contact me, a place of my own that I could access anywhere. By that time email was already bedded into our lives, and we were at the end of the dot-com bubble that had led to a surge in websites and internet presences for businesses and people.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;I have only lived as an adult in a world where email has long been considered the standard, but occasionally I get a glimpse into what a time before that must have been like, particularly when you come across reports of academics from earlier in the 20th century who sent typewritten manuscripts to conferences via the post, or who sent letters to colleagues on the other side of the world to exchange ideas. Email brought huge increases in productivity for businesses of all kinds, both because it reduced the costs of communication errors and of distributing communications, and because people could now work faster - they could get responses quicker, they could send responses quicker, and they could do so regardless of distance or time.&lt;/p&gt;
&lt;p&gt;Short of some kind of telepathic communication, it&#39;s now hard to imagine a world without email. Some technologies have threatened it at times - Microsoft Teams, for example, allows me to send a short message to a colleague, which is sometimes preferable to a full email, and it allows me to create group discussion areas (when it works). But no particular communication method has replaced email, and for many purposes I doubt it ever will.&lt;/p&gt;
&lt;p&gt;Who did email benefit? I suspect I would find it frustrating to have to go to the mail room in my office every time I wanted to send a report or note to a colleague, and it certainly wouldn&#39;t allow me to work from home as often as I&#39;m doing today. Yet I also don&#39;t know of anyone who speaks kindly about email as a technology - mostly we complain about our inboxes. I received twenty emails today, four of which need replies, three of which are notifications about other apps that want me to log in and respond to messages inside, and three of which are departmental circulars which themselves contain several other notices in them that I might or might not need to read. Several more work-related emails are waiting in my personal inbox too.&lt;/p&gt;
&lt;p&gt;Something I often hear people say about AI is that it will hopefully make our working lives more interesting, by automating the drudgery and boring tasks. In fact, one of the tasks that people use AI for is summarising emails and composing replies. Email did the same thing, in a way, by removing the need for &#39;boring&#39; tasks relating to communication, or eliminating &#39;boring&#39; jobs like working in a mailroom. Do I think I have a more enjoyable, fulfilling or easy job today compared to academics who worked before email? Absolutely not. I think most people would say the same. We intuitively know this is true for two reasons: one, technological improvements don&#39;t outweigh the fact that workers are more exploited than ever before; and two, relatedly, that companies find ways to push workers to the maximum limit they can get away with. If your employer isn&#39;t giving you an easier time now, with all the benefits of email, word processors, spreadsheets and spellcheckers, why would they change their mind tomorrow?&lt;/p&gt;
&lt;p&gt;So AI might be here to stay in the sense that email is - as something hardwired into our economy, but something that only really brings benefits to a minority of people who profit from productivity. For the rest of us, it&#39;s more likely to &lt;em&gt;change&lt;/em&gt; the nature of our work rather than improve it. Email is absolutely here to stay, but it&#39;s hard to say where exactly it&#39;s benefitted us, and it seems to have brought as many problems with it as it solved (I think most of us would probably argue it brought more). &amp;quot;Here to stay&amp;quot; doesn&#39;t always mean it&#39;s a net good - or any good at all.&lt;/p&gt;
&lt;h3&gt;Asbestos&lt;/h3&gt;
&lt;p&gt;Asbestos is here to stay, at least for now. If you don&#39;t know what asbestos is, it&#39;s a building material that&#39;s been used probably for thousands of years, all over the world. It has a number of really amazing properties, including being an excellent insulator of heat, and an electrical insulator too. It was used extensively throughout the 20th century in particular - until, in the 1970s and 80s, it became clear that it was killing people. Inhaling fibres from asbestos can cause a number of deadly conditions, including cancer, and it&#39;s now illegal to use in most countries around the world. Unfortunately despite these changes, it was used so extensively for construction that you are probably closer than you think to some asbestos as you read this. Asbestos is considered sufficiently dangerous that it has to be disposed of carefully, using specific processes and safety procedures, since breaking or damaging it is one of the easiest ways to release fibres into the air.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;What would happen to the internet tomorrow if the AI bubble burst tonight and every AI model, startup and founder disappeared overnight? One problem is that, like asbestos, AI-generated content is everywhere now, all across the internet and seeping into the real world beyond, most of it unlabelled. There are several different estimates online for how much content on the web is AI-generated, some peer reviewed, some not, and some seemingly made up entirely. A &lt;a href=&quot;https://arxiv.org/abs/2401.05749&quot;&gt;widely-cited 2024 study&lt;/a&gt; was somewhat misleadingly reported as saying that 57% of content on the internet was AI-generated - it didn&#39;t actually say this; instead, it studied how much textual content on the internet had been translated into other languages using AI, but the numbers are still pretty staggering. A &lt;a href=&quot;https://ahrefs.com/blog/what-percentage-of-new-content-is-ai-generated/&quot;&gt;somewhat less reliable study&lt;/a&gt; claims to have analysed 900,000 recently-created web pages and found that 74% of them contain AI-generated text of some kind. Studies like this are less reliable because they rely on AI detection (and aren&#39;t peer-reviewed), but let&#39;s be charitable and say that just 10% of textual content on the web is AI-generated - that&#39;s still a phenomenal amount.&lt;/p&gt;
&lt;p&gt;It&#39;s a similar story with images. Some stock image websites now allow users to label content as explicitly AI-generated. One of these is Adobe Stock, which has had to put upload limits on AI-generated content because it ballooned so quickly. &lt;a href=&quot;https://www.alltageinesfotoproduzenten.de/2025/05/22/adobe-stock-unter-druck-wie-die-ki-bildflut-zu-neuen-upload-limits-und-strengeren-richtlinien-fuehrt/&quot;&gt;This blog post&lt;/a&gt; suggests that around 15% of Adobe Stock&#39;s portfolio is now AI-generated - but this only counts labelled, public images. On places like imgur there is no need to declare an image as AI-generated, and social media spaces such as Facebook are rife with intentionally mislabelled AI-generated content. I regularly receive emails from a major press organisation asking for input on detecting AI content in videos, normally pulled from Instagram, TikTok or Facebook, and it&#39;s been staggering to see how bold people are in creating misleading content. Even if we only consider the recreationally-created fake content, though - people messing around in Midjourney - we are talking millions upon millions of images and text passages, with video potentially following soon too. We will never, ever inhabit an internet that does not contain AI-generated content, no matter what we do.&lt;/p&gt;
&lt;p&gt;One of the reasons for this is that AI-generated content is actually considerably harder to get rid of than asbestos. While you don&#39;t need special protective gear to remove ChatGPT-written blog posts, the major advantage asbestos has is that we know what it looks like and can identify it with confidence once we find it. AI content detection is an incredibly hard problem, and one that we are nowhere near close to solving. What makes it harder is that content detection depends on us having a good understanding of how many generative models there are out there (which we don&#39;t), and having access to them (which we also don&#39;t), as well as being able to act fast enough to keep up in the arms race against new models (which we can&#39;t). To make matters worse, &lt;em&gt;because&lt;/em&gt; it is such a tricky and valuable problem, there are a lot of startups selling products to do this who are incentivised to make stuff up, exaggerate their capabilities, and generally muddy the waters around detection.&lt;/p&gt;
&lt;p&gt;So AI might be here to stay in the sense that asbestos is - embedded so deeply and broadly into our world that even if we were to discover it was literally killing us tomorrow, it would be an enormous task to get rid of, and one that we are not equipped with the tools for tackling. Technology can be here to stay not because anyone benefits at all, not even out of habit, but because we have made decisions that we can no longer reverse. We&#39;re seeing more and more institutions make decisions that are similarly irreversible with each passing month.&lt;/p&gt;
&lt;p&gt;(Edit: I saw &lt;a href=&quot;https://bsky.app/profile/caseyexplosion.bsky.social/post/3lsetpcm4sc2b&quot;&gt;Casey make a similar comparison on bluesky&lt;/a&gt; as I was finishing up this blog!)&lt;/p&gt;
&lt;h3&gt;Virtual Reality&lt;/h3&gt;
&lt;p&gt;Virtual reality is here to stay. The most recent wave of VR headsets began around 2012 with the Oculus Rift, which was swiftly followed by products like the HTC Vive, the Sony PSVR, and VR-adjacent technology like Google Cardboard. In August of 2015, Time put Palmer Luckey on their front cover looking like a complete idiot, and declared VR was about to change the world. Ten years later we now live in that bravely changed world, a world in which almost no-one I know plays or talks about VR, except to make fun of Mark Zuckerberg. We have a few headsets in the department offices for the occasional research application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In 1997, Joe Tidd and Martin Trewhella &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/epdf/10.1111/1467-9310.00071&quot;&gt;published a study of technology adoption&lt;/a&gt; by British and Japanese companies. They identified two major factors in whether a new technology would be taken up: &lt;em&gt;comfort&lt;/em&gt; and &lt;em&gt;credibility&lt;/em&gt;. Comfort is about how easy the technology is to adopt: what needs to change, who needs to retrain, and how easily it fits into the daily life of the person using it. Credibility is about what it brings to the person or company: why we would want to adopt it in the first place, and what it gives us that we don&#39;t already have. I use VR a lot as an example of a technology that had credibility, but not comfort. If you used a VR headset at any point in the 2010s, I would guess you were probably quite impressed by it. VR provides interesting, unique experiences. However, it lacks any sense of comfort - most people do not have spaces to use a VR headset in, it isolates you from the environment you&#39;re in, it is tiring to use for long periods of time, and for a long time it was a luxury device priced above and beyond the cost of a new games console.&lt;/p&gt;
&lt;p&gt;Artificial intelligence has something of an opposite problem. The major breakthroughs in AI at the end of the 2010s and beginning of the 2020s were mostly about comfort. Being able to prompt AI models with natural language made it easy for people to interact with this technology and not feel like they were talking to a computer. However it lacked - and still lacks - credibility. Credibility is something AI companies manage very carefully, through sponsorships, advertising, endorsements and careful announcements. AI is sold as the future of everything, just like VR was, but unlike VR it&#39;s easy for people to get access to and use for themselves, which has allowed it to spread much faster. Because of this, discussions about its credibility are much more fragmented. Everyone has access to ChatGPT, and so a huge proportion of people have tried to use it, for everything from writing wedding speeches to advising on government policy. Some people swear it has transformed their lives, while others are confused at why it doesn&#39;t do what they were promised.&lt;/p&gt;
&lt;p&gt;Something that Tidd and Trewhella don&#39;t mention in their paper, probably because it&#39;s more focused on companies than society at large, is how credibility is measured. You can be mis-sold a new technology, but in general businesses are good at measuring credibility because executives love to measure productivity and performance using metrics, and if the new technology moves those metrics then that&#39;s a good sign. The way AI is used makes this a bit tricky. Some people use it for tasks they are already an expert in - they often seem to report that the AI makes a lot of mistakes but that they work around them. The greyer area is people using it for tasks they know nothing about. They generally report either incredible performance (for tasks they don&#39;t have the ability to critique or evaluate) or terrible performance (in my experience often for creative tasks where they know what they want - I don&#39;t mean AI critics here, either). Credibility is something that is still settling for AI, and big tech companies are in a race against time to keep raising expectations of the future to combat declining evaluations of now.&lt;/p&gt;
&lt;p&gt;In the 2010s I was pretty sure that VR would evaporate entirely, but it hasn&#39;t. I do have a couple of friends who have VR headsets and sometimes tell me about a new game they&#39;ve played on it. I know some people who work on VR games and sometimes they do pretty well. I know researchers who use VR for some applications. VR hasn&#39;t changed the world, it hasn&#39;t replaced all forms of entertainment, arguably it wasn&#39;t worth the money that was poured into it - but it has found its niche, as a stable and usable product that has some effective use-cases. The same could be said for AI. The last decade of research has led to important advances in certain areas of medicine, for example. Regardless of how you feel about AI generally, it would be hard to write off the last decade of work in the field as &lt;em&gt;entirely&lt;/em&gt; worthless (even if we might agree that the cost and harm overall wasn&#39;t worth it).&lt;/p&gt;
&lt;p&gt;So maybe AI is here to stay in the sense that VR is - in niches where it has a measurable benefit (whatever that benefit is), where people can get around its limitations and failure modes. It won&#39;t transform the world completely, but in some cases it&#39;ll be worth the cost to certain people, and will persist because of it. Even if most people find a reason to reject it, it&#39;s likely at least some places will find the tradeoffs worth it to them. Technology can be here to stay without being all or nothing, and just because something looks and sounds like a sci-fi movie concept doesn&#39;t mean it has the same effect on the world.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.possibilityspace.org/blog/assets/images/4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;AI&lt;/h3&gt;
&lt;p&gt;AI is here to stay, as people like to tell me, and I agree. When they say it, they often use it as an explanation for why they&#39;re using it, advocating for it, or getting more involved with it. That&#39;s totally understandable. But I think we should stop using &#39;here to stay&#39; as an empty slogan. Lots of things are here to stay, but some of them don&#39;t necessarily make our lives better, and many make them actively worse. I do think AI is here to stay, because it has always been here, and because too much money has been invested in it for it to entirely collapse now. If the bubble burst tomorrow, we would still see the remnants of AI embedded deep in our society for decades to come - in governments, in corporations, in schools, in mass-produced cheap t-shirts with AI-generated images on them, in jokes about people with too many fingers, in the new boom startups from people who got rich off the last ones. People would still run models, they would still train their own. Google would still translate languages for you.&lt;/p&gt;
&lt;p&gt;But if we want to talk about &#39;here to stay&#39;, I think we need to be more specific about what we mean by it, and what aspect of it is significant to us. If you tell me that we need to incorporate AI into our university policy because it is &#39;here to stay&#39;, does that mean we should uncritically invite it in to every aspect of our education and operation? Does &#39;here to stay&#39; mean that a new technology gets a free pass and full capitulation? If you tell me that the next generation needs training in AI (whatever that means) because it is &#39;here to stay&#39;, does that mean we are not planning for any other eventuality? Does &#39;here to stay&#39; mean we bet our future society on a technology that is 99% owned by a handful of private corporations? &#39;Here to stay&#39; can&#39;t be a gloss for giving up. Criticising technology doesn&#39;t begin and end at abolition - it is an ongoing process of analysis, reflection and dialogue.&lt;/p&gt;
&lt;p&gt;Thanks for reading. This is a new blog format I&#39;m trying out, as I was getting a bit tired of the inconsistencies in the style of the old one. It&#39;s a static site generator called &lt;a href=&quot;https://strawberrystarter.neocities.org/&quot;&gt;Strawberry Starter&lt;/a&gt;, which I found thanks to Izzy Kestrel (who has her own SSG called &lt;a href=&quot;https://bimbo.nekoweb.org/&quot;&gt;Bimbo&lt;/a&gt;).&lt;/p&gt;
</content>
  </entry>
</feed>