Visual Silence (Magpie Part 2)

This is the second post in a series about Magpie, a live coding tool I'm making in Picotron. In the first part I covered the basics of what I'm hoping Magpie will be, and I have a few more posts planned about what I've found interesting or tricky about toolmaking for livecoding so far. You don't need to know anything about livecoding to understand this post!

Visual Silence

One of the most interesting parts of a livecoding set is the very start. A lot of livecoding is about flowing from one interesting bit of code to another, but at the beginning we're starting with a blank slate. At Algorhythms, the regular livecoding meetup I go to in London, your set starts with you standing in a quiet, dark room full of people maybe softly chatting, no music playing, and a blank screen. What do you do?

For music livecoding, most things we can do immediately and completely fill the silence in the room, because that's the nature of producing sound: it gets everywhere. Even a basic kick drum beat, the 'four on the floor' that Switch Angel starts with in this experiment, dispels all the silence in the room and creates a full foundation for the music to build on. And it doesn't need to be a traditional drum beat - it could be a random selection of notes in a cycle, it could be a sample, it could even be a straight tone. Music is built up in layers, but even a single layer extends forwards in time and immediately signals to the room that the set has begun.

For visual livecoding, we have an equivalent 'visual silence' on the projection screen that we have to fill, and ideally we're looking to fill it with something that has a few qualities. An important one is motion: I can think of livecoding sets that are static for short periods, especially at the start, but most visual sets want to be moving most of the time. We also want some kind of texture, something to give the motion context. Sometimes that's a geometric pattern, sometimes it's text or video, but it's rarely a single colour filling the screen and nothing else. And we want scale: we want to use a good amount of the screen. Not necessarily the whole space, but probably a good chunk of it. I'll come back to this idea in a later post.

Filling visual silence is hard. Or rather, there are a lot of ways to partially fill visual silence, and most tools for making visuals appear on screen (like game engines, paint programs, or text editors) are quite granular, making small changes to the visual space. In Picotron, the game engine Magpie is built in, the most basic thing I could do is draw a shape directly to the screen. If I'm really pushing to make something dynamic happen, I could have some aspect of it change with the passage of time. A quick test program I sometimes write is a circle that moves up and down as time passes:
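
(A minimal sketch in Picotron's PICO-8-flavoured Lua - the exact numbers are just placeholders.)

-- draw a circle that bobs up and down over time
function _draw()
  cls()
  -- sin() works in turns in Picotron, so this is one full bob per second
  local y = 135 + sin(time()) * 60
  circfill(240, y, 20, 8)
end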

But for a lot of digital artworks or games, it's likely that the first line of code I write won't change anything at all on the screen. Maybe even the first half-dozen lines. My code might be creating lists of things, setting up processes, or looping over those lists to draw shapes. So using these kinds of language or engine as the basis for livecoding has a few effects: we stay in the 'visual silence' for longer, I have to do more work before I see whether my code had the effect I thought it would, and the audience has to wait longer to see something change or to feel that the set has begun.
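
Here's a rough sketch (again in Picotron's Lua) of the kind of setup code I mean - it runs fine, but nothing on the screen changes:

-- build a list of twenty things to draw later;
-- none of this touches the screen yet
things = {}
for i = 1, 20 do
  add(things, {x = rnd(480), y = rnd(270), speed = 1 + rnd(2)})
end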

Visual livecoding tools often solve this by using programming styles or paradigms that create output more immediately, and that affect a screen space of any size or shape. In particular, it's quite popular to use graphics shaders or shader-like languages as a style of working. Shader languages are designed for solving specific kinds of graphics problems, and one thing they're good at is taking some visual information and stretching it across a surface or a shape (like painting a photo of tree bark onto the side of a 3D model of a tree). This means they tend to have a lot of commands, instructions and idioms that can immediately fill a shape of any size with stuff, and so they tend towards a programming style where what you do, by default, fills the screen. For example, this program in Hydra is eleven characters long:

osc().out()

It produces the visual equivalent of a kick drum beat (warning: the video shows a set of slowly scrolling black and white bars, which I've known to make people nauseous on rare occasions):

This isn't that interesting, of course, but it immediately fills the screen and signals to the audience that the set has begun. It has motion (the bars scroll slowly), texture (monochrome here, but a range of darks and lights) and scale (the whole screen is being used). Hydra has several similar functions that fill the screen with other simple patterns, and most performances start with one of them. You could, if you wanted, write five or six more lines before running the code for the first time and breaking the silence, but there's no need to - the first line is ready to provide output, and that lets you start moving, communicate with the audience, and give yourself visual feedback to work with. Then, if you want, you can keep adding the code you were planning to add.
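
For instance (a sketch using a few of Hydra's standard chainable transforms), that first line might grow into something like:

osc(10, 0.1, 1.2).rotate(0.5).kaleid(4).out()

Each call in the chain reshapes the same full-screen output, so the screen never drops back into silence while you build.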

The Other Way

One of my motivations for making Magpie is to explore different ways of livecoding: different ways of controlling and changing a set and engaging with the audience. Magpie is far from the first tool to try this; many livecoding tools let you explore different ways of livecoding visuals, and many are based on existing art tools like Processing. In my experience of using them, though, they don't have big paintbrushes the way Hydra does, and they're quite verbose (you need to write a lot of code to make things happen). One of the advantages of painting with big brushes is that it lets you fill the silence quickly and make big dramatic changes, but you lose the specificity of the smaller brushes that let you paint fine detail. Picotron is nice because it gives us a set of interesting little brushes and tools, but to complement these, I want to build some bigger brushes into Magpie to help the user break the visual silence faster and start messing around sooner.

I've been trying to experiment with new features in Magpie by thinking about the ways I try to fill the screen myself and turning them into quick-access features. One common thing I want to do is repeat some drawing across the screen, changing it ever so slightly based on its position. For example, the background to the Magpie logo banner has circles that change colour and shape based on where they are on the screen. This normally involves writing boilerplate code that is almost identical every single time, so we can make it a little quicker with a function that takes some code and repeats it across the screen, like so:
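
(A sketch with made-up names - repeat_across here stands in for whatever the real Magpie function ends up being called.)

-- hypothetical helper: call the given function at every point
-- on a 24x24 pixel grid across the 480x270 screen
repeat_across(24, 24, function(x, y)
  local r = 2 + (x / 480) * 8        -- circles grow left to right
  local col = 8 + flr((y / 270) * 8) -- colour shifts top to bottom
  circfill(x, y, r, col)
end)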

So in this example, I've described how to draw a circle and how its look changes based on where it is on the screen, and then I've asked Magpie to repeat that a bunch of times across the width and height of the screen. This 'feature' isn't done yet - I'm just feeling it out - so it's still a little verbose. But it feels like a route towards a big brush for Magpie to use.

Another one is creating a list of little guys that all behave the same way. These little guys often share the same behaviours or data: they might each have a co-ordinate describing where they are on the screen, for example, or a number that says how fast they move, or how big they are. So I've developed a quick way to say "give me X objects, put them in a list, and give them these common behaviours". Again, it's not finished yet, but it lets me quickly bundle together a bunch of things (scale) and get them moving on the screen (motion). Here's a little example:
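
(Again, a sketch with made-up names - spawn stands in for the real call, and the convention that each default is a function called once per object is invented for this sketch.)

-- hypothetical helper: make 30 objects sharing the same defaults
guys = spawn(30, {
  x = function() return rnd(480) end, -- each guy gets its own position
  y = function() return rnd(270) end,
  speed = function() return 0.5 + rnd(2) end,
})

function _draw()
  cls()
  for g in all(guys) do
    g.x = (g.x + g.speed) % 480 -- drift right, wrap around the screen
    circfill(g.x, g.y, 3, 12)
  end
end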

Magpie was originally going to be based on particle systems, which worked a bit like this, but I decided to move away from that (partly because Picotron doesn't like chewing through huge lists of particles and pixels). I like this compromise of keeping them in as a feature without making them central: you can still layer some particles on top, and you only need a few to make interesting things happen visually.

What we're trying to do here is find ways to make small paintbrushes cover a larger canvas, but we don't want to lose the things that make them useful as paintbrushes in the first place. We could give the user a cannon that fires paint at the screen, but that's not a brush any more, it's something else. We want to simplify the fiddly, inconvenient bits of these code snippets, while still keeping the properties of a paintbrush: that it's controllable, flexible, customisable. So offering some common behaviours for little objects is useful, but we should make it easy for the user to override that stuff if they want, too.
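
Sketching on from the spawn example above (still with made-up names), overriding could be as simple as reaching into the list and changing one object's data:

guys[1].speed = 8 -- this one zooms
guys[2].speed = 0 -- this one stays put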

That's the end of this post! I've been thinking about how visual livecoding sets get started, and how breaking visual silence is tricky but so important for getting going. And I'm experimenting with adding features to Magpie that let you quickly spin up some interesting effects, so you can get some lights and colours on the screen and start playing with them. Next time I'll write about what happens after that. Magpie is very nearly ready for its first release; the only thing holding me back is having the arm strength to do some of the fiddly bits of release prep. Magpie still isn't very nice to use and it's lacking loads of features, but I hope a few people might enjoy poking at it.

The next Algorhythms event is on the 4th of March, so if you're in London do keep an eye out for that! In the meantime, if you have questions or comments about Magpie please ping me on bluesky or join my Discord.