Posted July 25th, 2019
Last month, Steam announced Steam Labs, a series of three experiments they were making public, all related to game discovery. Some of them use generative systems, while others leverage machine learning (in Valve's own words, "all the cool kids are doing it"). In this blog post I want to talk to you about how Valve's corporate philosophy complicates these experiments, and potential issues this might cause in the future.
Before we dive into the Steam Labs experiments themselves, I want to take a brief sidetrack to talk to you about Valve's current approach to game discovery, and how their corporate philosophy shapes this. If you've heard it all before, you can feel free to skip this bit.
Content warning: this post references a game about sexual assault.
Discovery is a longstanding problem on Steam, partly because its catalogue of games has grown substantially over the last few years. There are around 28,000 games on Steam today, but over 9,000 of those were released in 2018 alone, with a further 6,500 released in 2017. More games make it more important to give users tools to browse the store effectively, and they also make the problem of recommendation and curation harder. Valve's approach to store management is to prefer a hands-off approach that offloads the task of discovery onto other parties, whether that's other users (through Steam reviews), influencers (through Steam curators) or algorithms (like Steam's algorithmically curated front page or its Discovery queues).
A large part of their motivation for this seems to come from Valve's emphasis on personal liberty and free markets. Here's a quote from their Store Curation policy which they had to write in June 2018 after a number of controversies:
"If you're a player, we shouldn't be choosing for you what content you can or can't buy. If you're a developer, we shouldn't be choosing what content you're allowed to create. Those choices should be yours to make. Our role should be to provide systems and tools to support your efforts to make these choices for yourself, and to help you do it in a way that makes you feel comfortable."
This is a convenient position for a large corporation to take. It means they never turn away a potential sale, and also means they never have to take a political stance unless forced to by law, which is great if you're trying to retain the custom of both extremely progressive and extremely reactionary audiences. You can see a similar philosophy in the leaks about Valve's internal structure, years back, which suggested Valve had an almost totally flat management structure (although anecdotal reports from ex-employees suggest this was largely theoretical).
Valve also has a lot of faith in large decision-making systems, especially ones which appear bias-free or apolitical. Valve's current recommendation systems, for example, leverage user data like playtime, wishlists and purchases, as well as data that a user consciously created, like tags or reviews. Valve's aim here is made crystal clear by this slide from Jan-Peter Ewert, speaking at the White Nights conference in 2018:
A slide from Jan-Peter Ewert's talk at White Nights.
The first three bullet points say a lot about how Valve views their systems. They believe they don't pick winners or losers because their discovery and recommendation systems simply respond to user preferences and trends - they don't see their own intent in any of the systems they build, hence no picking. They aren't the taste police - they allow anything onto the store that isn't illegal, as per their curation policy we mentioned earlier, thus they aren't taking a stance on what is acceptable. Great games find their audience - the market is perfectly wise, and if your game does not find an audience, that means it simply wasn't good enough.
Valve is at pains to convince you here that it does not have any influence on the process of selling games, they're just a neutral middleman selling shelf space. It's important to sell this idea because the alternative is that Valve is actually shaping the entirety of the games industry, from the kind of games that can be sold, right down to the kind of player communities that have a voice. If this were true, it would mean that Valve had a huge amount of responsibility, and would be forced to make decisions that would be unpopular with some segments of their audience, or worse, unprofitable. With all this in mind, let's look at Steam's latest experiments, and see how this corporate philosophy affects them.
Steam Labs consists of three experiments currently: a six-second trailer generator; a generated live show about games; and a machine learning-powered game recommender. The first two experiments stemmed from something called Steam Trailers in 6s, a Twitterbot that chopped up videos from Steam product pages into supercuts. It was a really fun way of exploring the store, as it provided a quick glimpse into the atmosphere of a game that you might not otherwise see (Twitter was also the perfect place for it, as a piece of throwaway media you might want to scroll past). Valve hired the bot's creator to expand it, and you can see some of the results online.
The Twitterbot is a fun experiment that slots into a non-games context (Twitter) without making any claims about what it is useful for. A key feature of many Twitterbots is that they fail often, and that their failures are usually unimportant because you can just skim past them. Valve, on the other hand, describe the app as follows: "Micro Trailers are six-second looping videos designed to quickly inform viewers about titles on Steam with a presentation that's easy to skim". And while the @microtrailers bot posts maybe one or two games an hour at most, Valve suggest that "you could absorb a couple dozen RPGs or a hundred of the latest titles over your lunch break".
As a casual Twitter experiment, Steam Trailers in 6s is fantastic, but I don't think anyone would've advocated for its use as a primary method of understanding games. It's unclear to what degree this experiment is still in development, but its description suggests the trajectory Valve sees for the feature. I've already seen developers express concerns that their games won't work in the six-second format, and on the Labs page there are many examples of games that are chopped in odd ways because of the automated nature of the system. The Twitterbot was a fun bonus for people to follow; if it becomes a primary marketing tool on Steam, these small issues will become serious for many developers.
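To see why automated chopping goes wrong, it helps to sketch what a system like this might do. Neither the bot's author nor Valve have published their method, so the following is purely a hypothetical heuristic: slide a six-second window over per-frame "activity" scores (say, inter-frame pixel difference) and keep the busiest window.

```python
def busiest_window(frame_scores, fps=30, seconds=6):
    """Return the start frame of the highest-activity six-second window.

    `frame_scores` is one number per frame, e.g. how much each frame
    differs from the previous one. This is an invented heuristic, not
    how @microtrailers or Valve actually pick their clips.
    """
    window = fps * seconds
    if len(frame_scores) <= window:
        return 0  # trailer shorter than the clip: take it all
    best_start = 0
    best_sum = current = sum(frame_scores[:window])
    for start in range(1, len(frame_scores) - window + 1):
        # slide the window one frame: drop the left edge, add the right edge
        current += frame_scores[start + window - 1] - frame_scores[start - 1]
        if current > best_sum:
            best_start, best_sum = start, current
    return best_start
```

Even this toy version shows the failure mode: a slow, atmospheric game scores low everywhere, so the "best" window lands essentially at random, which is exactly the kind of odd chop you see on the Labs page.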
The other problem is that trailer generation is a transformational process - Valve is creating something new from a game's marketing without developer intervention, which I don't think they've done before. All of Valve's internal advertising of games is done using assets made by the developer, but these six-second trailers are edits of that work, done by an algorithm. This drastically changes the tone of Valve's involvement. The same process can be seen in the other generative Steam Labs piece, The Automatic Show, a live show about new game releases. This is a twist on the microtrailer idea, where the generative system cuts together different games that are related, with links to store pages popping up. They say they'd like to use generated speech in the future too, if they can make it sound natural enough. In their prototype, the video shows footage of PUBG and reads out the store description of the game with some relevant statistics.
Valve's vision here is to automatically create video content that can target any niche, with no human involvement ("And why stop at one show? How about many? An Indie channel? Endless micro trailers?"). Transformational generative systems always make a mess. I've been caught out countless times, by Twitterbots that put words in real peoples' mouths, or videogame prototypes that pulled raw opinions from Google. Corporations most commonly slip up by allowing people to put arbitrary words in their mouths, and Valve's vision of automated video content has a similar interaction with their hands-off curation policy. Valve's curation policy states that "if we allow your game onto the Store, it does not mean we approve or agree with anything you're trying to say with it". This is a nice get-out clause for lawyers, but in practice it doesn't really mean anything if you own one of the biggest storefronts in the world. Generative video makes it even more difficult to defend.
For example, a few months ago there was an outcry about a game called Rape Day, and after immense pressure Valve was forced to drop the game from the store. If you look at screenshots of Rape Day's deleted Steam page, you can see the description text that Valve's automatic show generator would've read out: "Rape Day is a game where you can rape and murder during a zombie apocalypse." We know, from many examples of corporate goofs, that when automatic systems repurpose content and put it in their own voice, with their own branding, people are more likely to perceive them as owning and endorsing the message (and rightly so). Even with games far less abhorrent than Rape Day, Valve will find it increasingly hard to maintain the veneer that they do not endorse the games that appear on their store once their own software starts cutting together flashy promos with humanlike voiceovers for them.
Steam Labs' final experiment is a recommendation system powered by machine learning. The description emphasises the lack of almost any outside input to the system - "the only information about a game that gets explicitly fed into the process is the release date" they proudly explain. They emphasise that they don't include the tags users assign to games, or 'reviews' (which I take to mean Steam user reviews), instead "the model infers properties of games by learning what users do, not by looking at other extrinsic data."
In practice, what this seems to mean is that the system looks at how a game is played - how long play sessions are, when they happen, how many of them there are, how they are spaced apart, things like that - and tries to link games together that have similar profiles. Some of the details aren't especially clear, such as the incorporation of playtime into the system, which Valve says is 'normalised' to account for shorter or longer games. It's unclear to me how you normalise my playtime in a single-shot narrative game like Firewatch versus an open-ended multiplayer game like DOTA 2. But the overall message is that this system can provide better recommendations with less specific information, and less specific information means less opportunity for bias or human influence to creep in, which in Valve's eyes makes the system purer and thus better.
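Valve haven't said what "normalised" means here, but one plausible reading is a percentile rank: rather than raw hours, the system could ask where your playtime falls within that game's own playtime distribution, so "finished a short narrative game" and "plays a lot of an open-ended one" become comparable signals. This is a guess at their approach, with invented numbers:

```python
from bisect import bisect_left

def playtime_percentile(hours, game_distribution):
    """Where a player's hours fall within one game's playtime distribution.

    `game_distribution` is a sorted list of playtimes across all owners of
    that game. Returns a value in [0, 1]: 0.9 means the player has played
    longer than 90% of owners, whether the game is a four-hour story or a
    thousand-hour MOBA. This is a hypothetical reading of 'normalised',
    not Valve's published method.
    """
    rank = bisect_left(game_distribution, hours)
    return rank / len(game_distribution)

# Invented distributions: five hours means very different things here.
firewatch_hours = sorted([1, 2, 3, 3, 4, 4, 4, 5, 6, 8])       # short narrative game
dota2_hours = sorted([5, 20, 80, 150, 400, 900, 1500, 3000])   # open-ended MOBA
```

Five hours puts you near the top of the first distribution and at the very bottom of the second, which illustrates the comparison the system might be making, but also the problem: the percentile says nothing about whether those five hours meant "loved it and finished it" or "bounced off early".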
One problem with the system is that it is simultaneously far more opaque than Valve's existing recommender system, while appearing to be less opaque because of a nice user interface. Valve's current recommendations are pretty poor, but crucially they will explain why a game has been recommended to you, even if it's simply "because you play Free To Play games". This gives people information to help them process and respond to the suggestion - for example, if you only play one free to play game, and have no interest in F2P otherwise, then you can safely ignore this set of recommendations, because you know where they come from.
The machine learning recommender isn't capable of doing this currently. At the same time, the new recommender has a lot of knobs and levers that allow you to sort by release date or 'popularity'. This gives the impression that the user is exerting a lot of control over the system, but most of these controls are just straightforward filters applied to the main unexplained list. The only interesting slider - "popularity" - is not explained at all in the blog post. I can't tell if it's a feature of the machine learning system, or a human-authored filter that sits on top of it. Sliding it from maximum to minimum causes some games to barely move in the top five, suggesting they are somehow both popular and niche at the same time.
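The distinction between "control" and "filtering" is worth making concrete. We can't see inside Valve's system, but a filter over an opaque ranking looks something like this (all names and numbers invented): the controls drop entries, but they never re-score or re-order anything.

```python
# A black-box ranking: a fixed ordered list of (game, year, owner_count).
# These entries are made up for illustration.
opaque_ranking = [
    ("Game A", 2019, 5_000_000),
    ("Game B", 2014, 12_000_000),
    ("Game C", 2019, 40_000),
    ("Game D", 2018, 900_000),
]

def filter_ranking(ranking, min_year=None, max_owners=None):
    """Post-hoc filters over an unexplained list.

    Entries are hidden or shown, but the underlying order - produced by
    the model for reasons the user never sees - is untouched.
    """
    result = []
    for game, year, owners in ranking:
        if min_year is not None and year < min_year:
            continue
        if max_owners is not None and owners > max_owners:
            continue
        result.append((game, year, owners))
    return result
```

Whatever sliders sit on top, the user is choosing which slice of the model's opinion to look at, not changing the opinion itself.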
The main question this experiment raises, for me, is whether the user interface is doing a lot of the work. If we were to take Valve's existing recommendations, and plug them into the same filter-based interface, would people notice the difference between the two systems? Perhaps Valve have already tried this and seen a difference, but it would not surprise me if there wasn't one. Giving people a sense of control is powerful, and the opaque nature of some machine learning systems makes their outputs feel slightly magical.
But beyond the interface, the bigger problem is that adaptive recommendation systems are one of the worst things to come out of the age of computing. Everyone has had experience of how recommendation systems fail, whether it's Amazon suggesting you might want to buy a second bed because you seem to have an interest in beds, or YouTube gradually leading every single user into a black hole of alt-right video content. Beyond these very obvious and visible failures, one of the problems with algorithmic recommendation is that it shapes the way people perceive things, which in turn reinforces the patterns identified by algorithmic recommenders in the first place. In the end, they have little to do with how actual recommendations work - think about your friend telling you about their favourite game, a journalist writing a review, or a curator putting together a collection.
Instead, Valve's system seeks to draw a comparison between you and a hidden n-dimensional cluster of people you have never met and have nothing meaningful in common with. Rather than fostering the kind of human connections that naturally give rise to recommendations and curation, Valve are running in the opposite direction, towards total automation that absolves them both of the financial burden and the legal and moral responsibility of managing Steam. This recommendation system places you into a dozen invisible communities, defined by vague points of data in an infinite cloud, that you will only communicate with via your purchases.
Ultimately, trying to build a single system that can solve recommendation for all users, starting with zero knowledge, is a futile effort. The very idea that such a system could be made is deeply rooted in the same kind of technolibertarianism that a vast swathe of the tech industry is gripped by. This interest in automating more of the discovery process encourages Valve to rely further on algorithms and less on connections between people, making us easier to reduce down to a series of play sessions and purchases. Which, from a more cynical angle, is perhaps what Valve would prefer anyway.
This was a long one, I know. When I first spoke critically of Steam Labs shortly after it went live, a Steam employee messaged me to sarcastically thank me for the vote of confidence. I understand the frustration (in the same talk that Ewert declared Valve was "not the taste police", he closed by suggesting developers "[get] your game in front of people who aren't afraid to hurt your feelings.") I'm sure there are a lot of good individuals who work for Valve, and I've stressed to the creator of @microtrailers that I really like the project as an isolated experiment. The problem is not necessarily down to any single person or group, but to Valve's philosophy in general, one that is shared by many tech companies and entrepreneurs today. The idea that technology can be created in a vacuum, imbued with purely rational thought, and act without taking a side. This is at best a pipe dream, and at worst is a deliberate attempt to mislead people in order to gain more control over them.
Not every problem has a technological solution, and the ones that do rarely find their solution solely in the domain of computer science. Human problems demand solutions which take humans into account, not as points of data but as real people who are self-directed, who value other people, who are creative and emotional and unpredictable. Our solutions should uplift and empower individuals, not direct or manage them. That doesn't mean we can't find elegant applications for new AI techniques, or find brilliant ways of enhancing and repurposing knowledge and data. It just means that we need to not be tunnel-visioned on science fiction dreams and technolibertarian fantasies. It's fine for theoretical physicists or mathematicians to get lost in abstractions in search of universal truth, but most of computer science, and artificial intelligence in particular, does not deal in absolutes. There is no neutral position, no middle ground, and when an entity as large as Valve tries to sit on the fence, it always falls off.
I hope Steam Labs continues to talk openly about the things they're trying, and the progress they're making. Perhaps we'll even see some writeups at conferences or workshops explaining their approaches, or some open-sourced code. In order to truly innovate, though, and to change the way people make and play games, Valve first need to experiment with new ways of running their company.