Saturday, 24 November 2018

µNote: Respecting the Player's Time

MicroNotes are something new I'm going to try (possibly through 2019). Rather than aiming for a long post once a month, which I am prone to kill (if it doesn't come together or looks slight after poking at a draft), I'm going to post some shorter writing here. I've been writing some thoughts elsewhere for a while and so might go back and put some of those posts here (and even go through my old drafts that I never fully shelved). This should also give room for smaller coding & rendering thoughts (too big to fit on Mastodon or Twitter) to actually get posted here.

I have always found "respecting the player's time" to be a useful lens through which to consider game design. What needs to be here; what do I think makes a positive contribution to the core arc; just how much of a structure can be made optional and how do we signpost it so players can best dive into only the content they want? Extracting the most from the interactive nature of games means building structures that react to each player and customise how they experience a game, attempting to give the best progression to the most people we can reach (and even respecting that some players will not enjoy what we are making and should not be strung along). The thing is, this phrasing has become extremely common in games criticism, to the point of dilution.

“I don’t like this game as I feel like it is not respecting my time (and here are 6 paragraphs on exactly how)” is commentary on where I feel a game does not align with how I value the different activities it offers and what it considers core vs optional - I can’t fast travel in a game where I don’t agree with a design decision to make it more immersive and exploratory by not having those systems; I can’t sample just the narrative content I find engaging and think that the game should flag more content (as optional, as less important) that I don’t find core to the experience; I see mountains of “content” without enough signposts to let me understand it and a progression through it and am simply overwhelmed in a way I do not think benefits the game or possibly even was the design intent.

“This is one of those doesn’t-respect-your-time games” generalises specific criticisms about various systems vs a personal view of what it could be into almost a genre - the too-long game. It flattens a meaningful discussion from which the speaker can make clear what their values are and so why a game doesn’t align with them (valuable information a reader can compare to their own values) into basically nothingness. It’s a topic where you need to be detailed because otherwise you’re just repeating the ancient “game long so good value” vs “game long so necessarily boring” war.

To some Assassin's Creed Odyssey provides the latitude to inhabit Kassandra’s life and soak in the world, to others the content is a slog and they’d much rather have a far shorter core experience that the game does not appear to offer. Respect-your-time is extremely respect-your-time and the forming of critical consensus around a game will always flatten that even if the critics who voice their views are individually detailed in their analysis.

Every player is different and ideally games would all have a certain level of give to help accommodate as many people as possible but we’re not yet at that point. I’m also not sure the tools for signposting & flagging are sufficiently developed to be universally readable by the audience even when we add them. As we pick through the big lessons of the last decade or so of game design advances, the big open worlds and structures for repeatable content that span genres and platforms, this is hopefully something that will develop.

Friday, 21 September 2018

The Free DLC: Driving Customer Delight

Quick aside: the recent posts about Rust have done very well. Enough that some of you are probably following the RSS feed just for that. I'm going to try and link some interesting Rust posts if I'm writing about a different topic that month. Today, Bryan Cantrill's Falling in love with Rust: both a good introduction and extensive thoughts on why Rust is worth caring about.

On to the main topic: Games are increasingly being updated after release as part of a "living game" strategy to continue to sell the game and any additional content. This is nothing new except in how common it has become. Back at the tail of the 1990s, if you wanted players to keep coming back and knew that the internet was now good enough to distribute patches then free content was how you drove sales and interest between boxed expansions. Total Annihilation was offering optional additional units back in 1997 on top of the balance patches and we'd come to each LAN and make sure everyone was up to date before jumping into the game. TA wasn't the first game to do it but additional maps, scenarios, and units were still notable in 1997 and became more common as games embraced online multiplayer as a primary focus (eg Quake 3 and Unreal Tournament). Even things like mods are part of this, being community-developed free DLC. Counter-Strike drove Half-Life sales at the development cost of building and maintaining those mod tools (which had originally been used to develop the game).

A decade later and consoles embraced online balance and even feature patches to expand games and eventually started pushing paid piecemeal DLC that wasn't just a different way to buy those traditional boxed expansion releases. Fancy a single new item for your RPG? What if you paid for it? What if you paid enough that, at those prices, a full game would cost at least several thousand dollars to buy? Hello the pricing model of many current F2P titles, which offer to sell content (beyond the models that just sell temporary cheat codes to skip the grindiest activities or boost your stats).

Today, paid DLC of all types (from cheat codes to full expansions that massively extend the volume of content in the game) is everywhere. In a change from those original online games, multiplayer modes often sell additional maps which require you to keep paying to continue to play the current rotation of popular maps, somewhat like how MMOs require continued payments. But things are starting to swing back the other way with cosmetics being sold while the maps are given away to everyone who paid for the base game so they can continue to play. As someone with an interest in the longevity of the industry, I really hope this swing back continues. We need to delight customers, to make them feel like they got more than they expected. If we are making living games, we should avoid making things that feel like you've purchased a $60 storefront that's constantly asking for money rather than offering enjoyment. Over the last year, my feelings on two console exclusive (formerly tent-pole?) driving games have really driven this home.

I may have played more hours of Forza Motorsport 7 in the last year than GT Sport but by the end of Forza, I felt it was a slog; I saw the treadmill and only the proximity of the finish line kept me even considering continuing the career mode. I had access to over 500 of the 700 cars the game launched with; I'd become familiar with all 32 locations; but I had zero interest in paying for more cars (their plan for DLC). Regular challenges and (eventually) some more substantial patches to add back a working drag mode and fix the track limits were unable to really make me feel like I was engaged with the series continuing. The feast with a list of paid extras just made me feel bloated.

Forza was the series that got me to love cockpit mode, assists off, actually feeling like it was driving. But if it wasn't for the rental option then I'd probably not even look at the next release (I'll definitely be playing Forza Horizon 4 as a rental next month - once and done for pennies rather than the increasingly expensive bundled launch day editions that don't even guarantee access to all DLC content over the next year+ of updates; the last Horizon game being unable to run offline on Windows 10 and so effectively being a rental anyway). The monthly FM7 DLCs, heavily advertised in the game and covered in branding for TVs and snack food, offered extra cars on top of already so many, but with an order of magnitude higher price per vehicle and no extra locations. A handful of freebie cars (which appeared later on) are invisible when surrounded by the number the game launched with. But the lock icons stand out, as does the way the Forza series now includes day-one paid DLC in the form of a launch car pack (James Bond cars for Horizon 4) that only the most expensive edition ($100 up front) gets access to.

Meanwhile, Gran Turismo launched with under 200 cars (but most of them feeling distinct - no 10 cars just with different advertisers and identical handling/options) and with only 19 locations from which the various tracks (plus reverses) are assembled. A full career mode did not exist at all in the launch game but all of this was clearly messaged in the advertising (and a temporary sale price really helped push it from something ignored to a worthwhile gamble). As I talked about in my GotY discussion in December, GT Sport really appealed to me. It also found a way to keep me engaged with monthly free DLC that regularly added a new location and track options plus around a dozen cars each pack. The career mode arrived and every single area of the game received additional attention and content. All for free.

While Sony may be holding the next GT game for their next console, I feel extremely positive about the series (and buying that next game) and where it might be going, for a game that launched with less to do but then constantly gave me reasons to come back and try the new additions. The experience was of deciding that the game was probably worth taking a risk on, finding what was purchased was genuinely interesting to me, and then having my interest rewarded in the long term with new content all for the price I'd paid up front. There was enough content at launch, there wasn't a massive treadmill that ever felt full of filler just there to extend the total play time (without the variety to make it interesting), and then when I was ready to go back for more there had been patches that added the more I was looking for and didn't demand additional payments (with the associated question of whether that value proposition was better or worse than the one offered by the base game or by choosing a different game).

There are many models for how to do this sort of continuous expansion based on player feedback and a long term plan. I'd call almost all games in Early Access an example of releasing a core experience and then iterating and expanding that with an engaged community who expect those additions to be included in the price they paid up front. We have two decades of experience distributing feature patches and new content over the internet, and of seeing how audiences react to that. The Paradox model ties a lot of paid DLC to base game feature patches that expand the core systems, so they can milk a release while never letting someone feel like they paid up front for a game and are now left to just watch it progress without them.

My experiences certainly make me lean towards saying that a launch game shouldn't try to contain absolutely everything possible with a plan to aggressively monetise post-launch content. The audience will feel fatigued at too much content or start to divide it up into what feels fresh and what feels like filler (and when overwhelmed with content but lacking the tools to understand where it all can fit in as unique, it may increasingly look to be filled with filler). You can't under-deliver, but over-delivery at the cost of having content to give away for free during the lifespan of the game feels like something to be considered carefully. The option is always there to hold something back for an additional polish pass; rather than monetising it, use it to drive sales of the base game and delight your existing customers.

Friday, 31 August 2018

Rust: Fail Fast and Loudly

So recently I was chatting to some Rustaceans about library code and their dislike of a library that can panic (panic! being the Rust macro to unwind the stack or abort, depending on your build options). The basic argument put forth was that a library should always pass a Result up to the calling code because it cannot know if the error is recoverable. The relevant chapter of the Rust Programming Language book even lays out this binary: Unrecoverable Errors panic! while Recoverable Errors return a Result. As a researcher in debugging, I reached the point where I basically banned these terms from my lectures because they can lead to exactly this thinking: that libraries cannot know they're in an unrecoverable state and so can only defer to what calls them.


Throughout CompSci literature, some terms relating to debugging are not used consistently. I'll start with the words I use (so I never have to write this in a blog post again). To illustrate the scale of the terminology issue, enjoy this quote from the 2009 revision to the IEEE Standard Classification for Software Anomalies:
The 1993 version of IEEE 1044 characterized the term “anomaly” as a synonym for error, fault, failure, incident, flaw, problem, gripe, glitch, defect, or bug, essentially deemphasizing any distinction among those words.
A defect (also called a fault, error, coding error, or bug) in source code is a minimal fragment of code whose execution can generate incorrect behaviour. This is for some input (which includes the environment of execution), against whatever specification exists to declare what is and is not correct behaviour for the program. A defect is still a defect even if it is not exercised or does not cascade into error / failure during a test case. Defects can be repaired by substituting a replacement block of code for the block reported as defective; this returns the program to executing in a way that does not violate the specification.

An error (sometimes also called a fault or infection) in program execution or modelled / simulated execution is the consequence of executing a defective block of code and the resulting creation of an erroneous state for the program. This effect may not be visible and not all errors will be exposed by surfacing as a failure. An error is the result of a defect being exercised by an execution which is susceptible to that defect.

A failure in program execution or modelled / simulated execution is the surfacing of an error state by the observation of behaviour in violation of the program specification. It is therefore correct to say that a failure was experienced for a given test case due to a chain of erroneous states that originated with the execution of a defect that caused the error.
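
To make those three terms concrete, here is a tiny illustrative sketch (entirely hypothetical code, not from any real library) in which a defect is always present, an error only arises on some inputs, and the failure is only observed when the erroneous state surfaces:

```rust
// DEFECT: a minimal fragment of wrong code - this should be len - 1.
fn last_index(len: usize) -> usize {
    len
}

// For any non-empty slice, executing the defect creates an ERROR: `i` is
// an erroneous (out-of-bounds) state. The erroneous state only surfaces
// as a FAILURE when it is observed - here, get() returning None where
// the specification says the last element should be returned.
fn get_last(v: &[i32]) -> Option<i32> {
    let i = last_index(v.len());
    v.get(i).copied()
}

fn main() {
    // The failure for this test case: None instead of Some(3).
    assert_eq!(get_last(&[1, 2, 3]), None);
}
```

Note how the defect executes on every call, but the empty-slice input never exposes it as a failure - exactly why "a defect is still a defect even if it is not exercised".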

Setting a Trap

Having muttered about the language choices made in the Rust book at the top, I'm going to also praise how they actually resolve that chapter. The final section goes into detail about the pros and cons of calling panic from your code. It defines bad state in a way I might write myself in a practical programming guide. It even offers up the type system as a way to ensure your specifications for input aren't violated, with as much of the burden placed on compile time as possible.

It's good writing but it can also potentially be read by those who really wish panics didn't exist as saying you just need to make sure every possible input into your library is valid. My stance is that an occasional small slice of invalid input being possible is actually important for code quality when writing in a language that can fail (it can also make some things a lot easier to write in practice). However, it must be clearly labelled as such, with no question about when you might panic. This is the contract you're writing and every good library should fully document the interface so there is no possibility of an unexpected panic.

To give an example from the Rust standard library (which is totally just a library and we should expect other libraries to conform to the same standards it uses - this is even more true of Rust than in other languages as Rust splits out the really core library code into the Rust core library): when you've got a vector and you need to divide it in two, split_off(at) is what you need.
Splits the collection into two at the given index.
Returns a newly allocated Self. self contains elements [0, at), and the returned Self contains elements [at, len).
Note that the capacity of self does not change.
Panics if at > len.
Here we have a clearly defined operation that does exactly what we want and comes with some important guarantees about how it operates. One of those details is that if we ask to split beyond the end of the array then it will panic.
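
A minimal sketch of that contract in action:

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];

    // Within the contract (at <= len): cannot panic.
    let tail = v.split_off(2);
    assert_eq!(v, [1, 2]);       // self keeps elements [0, at)
    assert_eq!(tail, [3, 4, 5]); // the returned Vec holds [at, len)

    // v.split_off(10) would panic here: the request is outside the
    // documented contract, so the library refuses to guess.
}
```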

Why does this panic rather than returning a Result and letting us decide if the error is recoverable or not? I can imagine many places where trying to split an array may not be the only thing a program can do to continue: a backup path could be constructed to keep operating under some circumstances if the split failed, but this library decision means the calling code cannot decide that. If you ask for a split at an invalid point then you get a panic.

It is because the library set a trap. It asks the calling code to know something about the object it wants manipulated. Because there is no reasonable way of asking for the array to be split in two beyond the end of the array, the only conclusion the library can draw about such a request is that it is unreasonable. We are past the point of executing a defect, we are swimming through an erroneous state, and it is time to fail so this can be caught and fixed. That also means no room to let the erroneous state accidentally ask to zero the entire storage medium it has access to and trash a last known-good state that might be used to recover later (or debug the defect). Any calling code that wishes to avoid this should catch the erroneous state and recover (if possible) before calling over the library boundary. The potential to get unreasonable requests allows our blocks of code to keep each other more honest by surfacing errors as failures.
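
What the caller's side of that bargain can look like - a hedged sketch using a hypothetical try_split helper that validates before crossing the library boundary:

```rust
// Hypothetical helper: our code owns the decision about whether an
// out-of-range split is recoverable, rather than asking the library to.
fn try_split(v: &mut Vec<u32>, at: usize) -> Option<Vec<u32>> {
    if at <= v.len() {
        Some(v.split_off(at)) // within the contract: cannot panic
    } else {
        None // erroneous state caught on our side of the boundary
    }
}

fn main() {
    let mut v = vec![1, 2, 3];
    assert!(try_split(&mut v, 5).is_none()); // recovered, no panic
    assert_eq!(try_split(&mut v, 1), Some(vec![2, 3]));
    assert_eq!(v, [1]);
}
```

The check lives in the caller because only the caller knows whether a backup path exists; the library's panic remains as the trap for any caller that didn't do this thinking.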

It is a restriction that provides a higher chance of fixing a defect before we ship a product. We must strive to fail fast and sometimes that means using some small gaps between what is possible and what is permitted as traps to catch when errors have occurred. A library can be poorly constructed to panic when not expected (and declared) but the existence of panics should not itself be used as a sign that a library is of poor quality or to be avoided.

Saturday, 28 July 2018

Empty Rust File to Game in Nine Days

I've been doing Rust coding for a bit now. Recently that's involved briefly poking at the Core Library (a platform-agnostic, dependency-free library that builds some of the foundations on which the Standard Library is constructed) to get a feel for the language under all the convenience of the library ecosystem (although an impressive number of crates offer a no_std version mainly for use on small embedded platforms or with OS development). I'm taking a break from that level of purity but it inspired me to try writing a game just calling to the basic C APIs exposed in Windows.

So I'm going to do something a bit different for this blog: this post is going to be an incremental post over the next nine days. I'm going to make a very small game for Windows (10 - but hopefully also seamlessly on previous versions as long as they have a working Vulkan driver), avoiding using crates (while noting which ones I'd normally call to when not restricted to doing it myself). I'm not going to rewrite the external signatures of the C APIs I have to call but I'll only be importing the crates that expose those bare APIs.

Day 1

The day of building the basic framework for rendering. Opening with a Win32 surface (normally something you'd grab from eg Winit) and then implementing a basic Vulkan connection (which would normally be done via a host of different safe APIs from Vulkano to Ash, Gfx-rs to Dacite).

The Win32 calls go via Winapi, which is a clean, feature-gated listing of most of the Win32 APIs. The shellscalingapi is a bit lacking as it doesn't expose all the different generations of the Windows HiDPI API (and Rust programs don't include a manifest by default so you typically declare your program DPI-aware programmatically), which means you have to declare a few bindings yourself to support previous editions of Windows. But generally it makes calling into Win32's C API as quick as if you were writing a native C program including the required headers. You could probably generate it yourself via Bindgen but the organisation here is good and it's already been tested for potential edge cases.

Vulkano exposes the underlying C binds via the Vk-sys crate. It has no dependencies (so it's what we want: just something to avoid having to write the signatures ourselves without obfuscating anything going on) and while it's not updated to the latest version of Vulkan (1.1), we're only doing a small project here (so it shouldn't matter at all). The function pointers are all grabbed via a macro, which is a bit cleaner than my previous C code that called vkGetInstanceProcAddr individually whenever a new address was required (to be cached). Of course, other areas are down to just the barest API which means looking up things like the version macro.

So at the end of day 1, we've got a basic triangle on the screen working (with a 40kB compressed executable, most of which is Rust runtime/stdlib as Rust libraries default to static linking).

Thursday, 28 June 2018

Mini-Review: Slay the Spire

So early last year, it was already clear that we were getting a lot of extremely good games (and going on the number of February release dates announced at E3, 2019 looks like it's going to be similar). This year has started with fewer tent-pole releases (most notably, God of War) and far less focus on RPGs overflowing with content (which gave early 2017 a very specific feel) but there certainly have been some great games like Mashinky building up in Early Access and BattleTech getting a full release. Into the Breach is another game from earlier in the year that I've not written about yet but is very nice. There's something in the strategy/tactical water this year and it tastes like roguelike-likes. The genres have always been somewhat mingled, what with 4X games (or even solitaire games) being about semi-random runs which build their own story through the mechanics (and that's where Mashinky fits in), but much of 2018's output (They Are Billions entered Early Access at the very tail of 2017, I'm counting it) feels explicitly part of the current roguelike-like wave. Sometimes it's unclear which side of the line games are aiming for (Frostpunk is probably going for something more scenario-based rather than the endless replayability of rogue).

Slay the Spire, currently in Early Access with plans for a release sometime this Summer, is a deckbuilder game. If you're not familiar with the genre, it's the assembling of a card deck as in CCGs (like draft format) without that pesky monetisation of acquiring the required cards. You fight battles (here entirely PvE against clockwork enemies who have predictable patterns and compositions rather than branching AI) and work out how everything synergises with the simple core mechanics, but without having to buy hundreds of dollars of cardboard or, in our terrible digital future, virtual cardboard.

In order to ensure the game doesn't devolve into simply selecting the best deck from the current meta discussions and throwing it at the enemies, the format here is solidly a roguelike-like. Semi-randomised runs where the expectation is to eventually be weakened to the point of death and have to restart with a new random seed from the very beginning. As you work through a run, you'll be offered various card choices (as well as handed out limited potions and rule modifiers in the form of relics) from which to build your deck. One of the key things here is that card removal is actually hard (not often offered and rarely for free) so building a deck is very much about what you don't select. The only cards you can't turn down are used to good effect in a curses category. Negative outcomes can add cards to your deck you don't want and as they are hard to remove, they will stay with you and mess with your flow. Slay the Spire is very clean in how everything works like this - full of smart decisions to keep the game compact without feeling stale.

Unfortunately also absent from this, compared to one of my previous favourites - FTL, is much story development. There is a pool of random events with flavour text but not to the extent that FTL felt like it assembled a story. Even the Magic: the Gathering standard of flavour text for cards is missing here, with only artwork and name working beyond mechanics as narrative. But what you do get from a standard run is 50 events, mainly fights, as you scale up through three main bosses and a few elites (with your exact path somewhat flexible, so you can pick when to fight an elite or rest at a campsite to replenish your health). As with all enemies in the game, each individual boss is clockwork so part of the learning curve is internalising their moves, but there is some variety in which boss you encounter (the final boss is randomly selected from a pool of three and you can see who it is during the final third of the run to help build your deck towards beating them).

So far the Early Access is going well, now with three different characters (changing the starting relic, some core mechanics, and card availability) that all feel sufficiently different. Beyond the standard roguelike-like, there is some permanent unlocking of extra cards/relics that will randomly appear in the game to expand your options over time as well as a difficulty staircase called Ascension that adds new difficulty modifiers once you've grokked the mechanics and how to steer yourself towards a synergistic deck despite the RNG offering you new cards. Spelunky fans will recognise the Daily Challenge, here adding three daily modifiers to a daily seed and then offering a leaderboard scoring how far you got and what feats you achieved during your first run. There is also now an Endless mode, although I am yet to even try it.

Currently the buzz is positive and the sections of the game with 'coming soon' written over them have almost all been swapped out for new features (sitting at Weekly Patch 30 at time of writing) so a final release is probably on track. I do hope the game becomes a living game after 1.0 with plenty of balance tweaks but also a slowly expanding enemy and elite selection (as they are clockwork and so it can feel like you're eventually solving them for almost all competent decks you may have when you encounter them). Expansions for new cards, new bosses, and even a new character also seem like a good long-term future for the game. For now, not even at 1.0, it's an extremely easy way to burn through hours of play and feel like you're getting a deep appreciation for the various mechanics and synergies available.

Tuesday, 29 May 2018

Evolving Rust

At the start of the year I talked about using Rust as a tool to write code that was safe, easy to understand, and fast (particularly when working on code with a lot of threads, which is important in the new era of mainstream desktops with up to 16 hardware threads).

Since then I've been working on a few things with Rust and enjoying my time - especially in some cases where I just wanted to check basic parallelism performance (taking advantage of a language where you can go in and do detailed work but also just call to high-level conceptual stuff for a fast test). If you're looping through something and want to know the minimum benefit of threading it, just call to Rayon and you'll get a basic idea. In practice, that usually means changing the iterator from an .into_iter() to .into_par_iter() and that's it.

I finally upgraded my old i5-2500K desktop (on a failing motherboard from 2011) to a new Ryzen 7, so it's been very useful to quickly flip slow blocks of code to parallel computation. When I'm just building some very basic tool programs I'd probably not even think about threading in C, but here it is so easy that I've been quick to drop a typical 30ms loop down to 3.5ms. One of the things I've been somewhat missing is easy access to SIMD intrinsics, but this brings me to something else I've been enjoying this year: Rust is evolving.

I'm used to slowly iterating standards with only slight upgrades between them as tools like compilers improve and the std lib slowly grows. Clang warnings and errors were a massive step forward that didn't rely on a new C standard and libraries can offer great features (you'd otherwise not have time to code yourself) but when I think of C features, I generally think of language features that are fixed for quite some time (about a decade).

Rust is currently working on the next big iteration (we're in the Rust-2015 era, which is what Mozilla now calls 1.0 onwards, with Rust-2018 planned before the end of the year) but that's via continuous updates. Features are developed in the nightly branch (or even in a crate that keeps it in a library until the design is agreed as a good fit for integration into the std lib) and only once they're ready are they deployed into stable. But that's happening all the time, even if a lot of people working with Rust swear by nightly as the only way to fly (where you can enable anything in development via its associated feature gate rather than waiting for it to hit stable).

For an example of that, SIMD intrinsics are currently getting ready to hit stable (probably next release). That's something I'm extremely eager to see stabilised, even if I'm going to say the more exciting step is when a Rayon-style library for it exists to make it easier for everyone to build for, maybe even an ispc-style transformation library.
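
To show the shape of what's landing, a hedged sketch using the raw SSE intrinsics under std::arch (written against the stabilised form, gated to x86_64 where SSE is baseline):

```rust
// Four f32 additions in one SIMD instruction via std::arch intrinsics.
// The intrinsics are unsafe because the compiler can't verify the
// pointer and target-feature requirements for us.
#[cfg(target_arch = "x86_64")]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::x86_64::*;
    unsafe {
        let va = _mm_loadu_ps(a.as_ptr()); // load 4 unaligned f32 lanes
        let vb = _mm_loadu_ps(b.as_ptr());
        let sum = _mm_add_ps(va, vb);      // one instruction, 4 adds
        let mut out = [0.0f32; 4];
        _mm_storeu_ps(out.as_mut_ptr(), sum);
        out
    }
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    assert_eq!(add4([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]), [5.0; 4]);
}
```

The unsafe blocks and per-architecture gating are exactly why a Rayon-style wrapper library on top would be the more exciting step for everyday code.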

The recent Rust 1.26 update is a great example of how the language is always evolving (without breaking compatibility). 128-bit integers are now in the core types; inclusive ranges mean you can easily create a range that spans the entire underlying type (without overflow leading to unexpected behaviour); main can return an error with an exit code; match has elided some more boilerplate and works with slices; and the trait system now includes existential types.
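
A quick tour of a few of those 1.26 additions in one place (a sketch compiled against a current toolchain):

```rust
// Basic slice patterns in match: fixed-length shapes of a slice.
fn describe(s: &[u8]) -> &'static str {
    match s {
        [] => "empty",
        [_] => "one element",
        [a, b] if a == b => "matched pair",
        _ => "something longer",
    }
}

fn main() {
    // 128-bit integers are now core types.
    let big: i128 = i64::max_value() as i128 + 1; // no longer overflows
    assert!(big > 0);

    // ..= includes the upper bound, so a u8 range can span the whole
    // type without the old overflow-on-increment problem.
    assert_eq!((0..=255u8).count(), 256);

    assert_eq!(describe(&[9, 9]), "matched pair");
}
```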