Saturday 29 December 2018

Games of the Year 2018

So in recent years I've been doing this based on games that released (1.0 versions) in a given year. But as we talk of living games and Early Access, there's a lot more going on that maybe makes this harder to do right. I'm going to generally try to keep to that broad concept - a list of new, complete games of that year that stood out to me - while maybe considering a lot of games that are for sale but didn't necessarily announce a 1.0 version number. The big addition is games that feel close to release and could be seen as the equivalent of many 1.0 titles that go on to get bigger via incremental patches.

The previous requirement to see a 1.0 number has led to some slight weirdness. When considering RimWorld's 1.0 release in the later part of this year, I'm considering a game I mainly played (and even wrote about) almost two years ago. Maybe it would have made more sense to consider it last year or even in 2016. I haven't talked about Factorio much here but I've been enjoying it once more and it's getting close to that 0.17 release which is basically feature complete and paves the way for a 0.18 update in early 2019 that will be the official 1.0. That seems like a game that can be considered for this year's list. Slay the Spire missed its expected 1.0 release window but it's been close to done since I first talked about it.

On the other side of the fence, Mashinky continues to see new development along a roadmap but is very much not done in the way those earlier games are. I think it's not up for this year's list despite my having really enjoyed the game as it exists so far. There is a clear point in each playthrough where you reach the end of the line and find the "coming soon" notice - it's not just a game that will get deeper with patches, it feels like the core progression is still very much being built (as a normal part of Early Access). The final balance of the progression may well change significantly during polishing because we don't even know how a full run plays out yet. Rounding out the examples, Life is Strange 2 is in no way finished yet so will be considered for next year. While Half-Life 2 had 'Episodes' as standalone expansions, LiS follows the more modern model of seasons: episodes that work as one cohesive package, released at a reasonable pace and purchased as a season bundle.


    Strategic 'Mech of the Year:

BattleTech

My first taste of BattleMechs made digital was the oft-forgotten Westwood RPG (back before they were known only as an RTS studio) BattleTech: The Crescent Hawk's Inception from 1988. Two years later the turn-based combat gave way to real-time tactics battles in the sequel, The Crescent Hawks' Revenge, and paved the way for Dune II and ultimately Command & Conquer. That universe became lore I absorbed over decades of various games as the original wargame designers built their own FASA Interactive team and then Microsoft acquired them (leading to a slow decline). But Jordan Weisman is back crowdfunding all his old IPs and showing there is value left in them (previously: the recent Shadowrun RPGs). So BattleMechs are back on the menu and it's time to go back to basics with a turn-based tactics RPG.

The individual skirmishes in BattleTech offer some of the chewiest tactical decisions around, with plenty of damage output and just as much armour that'll evaporate as you soak that incoming fire. As I said when it came out, you can just play and play that layer but it's the arc of the strategic layer that really binds it all together and turns the grinder of each mission into a desperate search for the next contract between the story missions. Much of that has been further tweaked over time and several mods (and an official expansion) have really refined everything so you can get the challenge you demand to match the tone of the game. My view hasn't changed much since I wrote that review (and the news that the developer is now a Paradox subsidiary means a lot of piecemeal DLC is potentially planned after this current season of expansions) and I expect to head back for even more in 2019 as they slowly expand the combat scenarios and develop the strategic layer with DLCs.


    Platformer of the Year:

Astro Bot: Rescue Mission

I'm not a huge platformer fan. They're fine but I started out on hardware that was never great at them - a lack of hardware support for scrolling meant each screen was much more about what we now call (action-)adventure than the purity of smooth platforming. I've enjoyed myself some Sonic recently but since the industry move to 3D, I've not found much that really pushes me towards the purity of jumping (when I can get more from games that incorporate it into a larger genre as good traversal). But there's something about VR that really helps ground you, and physically peering round for secrets and hidden locations is a delight that I'll probably not even start to tire of for a few more years. Collecting all the things while bashing the baddies in a kid-friendly environment is just more fun as we wait for VR to become capable of supporting bigger games (Homeworld: VR when?) where that's just one element (Assassin's Creed isn't likely to become a VR title any time soon, given the short play sessions required to avoid simulator sickness).

While still constrained by the limited GPU of the PS4 (you're not getting nearly the anti-aliasing you need for great VR visuals), Astro Bot is a really clean looking game that oozes light charm and playfulness. The sound helps set that stage but it's little things like bumping the playable character with your controller as you both float in space between levels that are not just playful but help fix you in the world. Oh, a glowing orb during the short loading screens? I bet I can bounce that with my controller... *orb bounces with a satisfying chime sound*. The game itself is pretty traditional for a platformer, with enough variety to keep it engaging for the eight-odd hours it takes to play through. Hit checkpoints, collect coins, rescue the other robots (who fly up and into the controller you're holding - VR is still somewhat magical when the tracking works perfectly so you see the controller exactly as you're holding it but with the freedom to render it however the game wants), figure out the puzzles and secrets. That rarely used touchpad on the DS4 gets a good workout (it will be a shame if the PS5 comes out and cost-saves it out of existence). You probably already know what this is; it's a very good one of those and one of the best games to showcase VR (sorry Lucky's Tale, the bar has risen significantly).


    Tomb Raiding of the Year:

God of War

I love me some Tomb Raider. But as we come to the conclusion of the most recent trilogy with Shadow of the Tomb Raider, I'm left feeling like the initial step forward in the reboot was never followed by enough further progress, and the narrative left me pining for 2013's ensemble cast (and what they might have done with developing those bonds). But this is a God of War sequel, so why start with Lara's recent escapades? Surprisingly enough, this is the sequel to that 2013 game I wanted: an action-adventure around a semi-open world; quite linear chapters with wide paths and a lot more open areas and reasons to return to collect items or complete side-quests in any order. The world feels more concrete and open (and is certainly less linear in how you approach the different quests) without sacrificing the crafted story and potential for an evolving environment (with some gating and areas that transform as the main plot changes, without making anything inaccessible). As we've seen Uncharted: The Lost Legacy start to poke at wider paths, Lara is starting to look outdated five years after popularising this transition (the one that avoids falling fully into open worlds, which feel like a different genre).

That's a lot of words about other games but it's important to understand this new God of War as being very much in a new lineage as well as a sequel to the previous games in the series. The director previously worked on the Tomb Raider reboot before taking on this project with a list of ideas that didn't make it there (like the camera that never cuts), and that history is how to understand what is on offer here. The major genre switch from those games is the primarily "stylish action" melee combat in place of ranged weapons or playing for stealth.

The story of family bonds, while leaning on a small mainly-male cast of often-stereotypes, is elevated by the performances and quality of rendering. Christopher Judge perfectly leads the VO cast in a game always ready to fill traversal with anecdotes (which have plenty of space to trail off and get continued later as you break for action) and then provide useful information during combat (where the close camera makes calling out attacks as vital as the danger indicator arrows). The core narrative feels like something much more reasonable (20 hours?) than what you're finding in open worlds this year (eg RDR2, AC:O) while still leaving stuff worth doing if you've got 20-30 more hours you want to put into getting deep into the craftable equipment and ability upgrade trees or just hearing some more anecdotes and side stories while catching the amazing sights - Sony continuing to show how far devs can push their system. Huge areas of the game world are left for optional quest lines, on top of a roguelike-like zone (a reconfiguring dungeon with a poison mechanic, offering a poison-resist gear progression to finally crack it with a run to the treasures at the core) and a combat (challenge) arena zone.

I usually open my GotY post with (or give the top picture slot to) my top game of the year (everything else is not even trying to be ordered by preference) and this year I've flipped between this and BattleTech a few times. Ultimately, the performances here elevate a story that is too reliant on clichés (and women who are dead or wish to die - the small cast doesn't help allow a diversity of traits for any marginalised voice included) so despite being a gorgeous game I really enjoyed almost every moment with, it misses out on my top spot. But I really look forward to what comes from this team next and finding out if they course correct on some of the narrative stuff (or do a David Cage and double down on the issues with each iteration).


    Lumines of the Year:

Tetris Effect

Lumines was my most played PSP game. It was designed to be something like Tetris (without access to that license) while mixing in the synaesthetic reactive music and visuals the team had worked on a few years earlier with Rez. Now that team is back with the actual license (after offering an update to Rez in VR two years ago) and it's Tetris like you've never seen before, as the surroundings explode around you in VR.

I had a Game Boy back around 1990 so I gave the original game some energy (but never experienced the Tetris effect with it, something I did experience with Arkanoid: Revenge of Doh and Columns around that time) and this takes that gameplay (with the few updates to the official formula now considered standard) to another level. Something about the expertly chosen music, sequenced to grow and adapt to the length of your session and button presses, and the visuals keeps everything fresh. So does the decision to make the levels contain slower and faster drop speeds based on progression to a line-clear goal, giving the music slower and faster sections, rather than the constantly increasing speed of the classic game. Beyond the many Journey (campaign) levels that tie different visuals and music to the game, there are also plenty of challenges and leaderboards that ask you to focus on slightly different modifiers, which can give you a more chilled experience or force you to deal with a single aspect of the game. In particular, the Mystery challenge offers some extremely non-standard randomised elements that mix up the Tetris formula.

I'm sure it's great in 2D on a TV but to me the game is another that shows off what VR can do with low latency, making anything musical just perfectly timed and where a fully enclosed environment makes even a classic puzzle game swim around you and put you inside the virtual space.


    Management Sim of the Year:

Factorio

How have I not written about this on here before? It's been a good year for builder games, often incorporating detailed pawn simulation in new ways. The bitter cold and series of bad or worse decisions in Frostpunk built an atmosphere and progression that mixed in a lot of how survival games can weigh you down, all with very impressive visuals (and patched in some additional scenarios beyond the initial campaign). Meanwhile, Surviving Mars randomised the tech tree and offered a range of event chains rather than a limited set of fixed scenarios to mix up each playthrough. And later in this list I'll get to RimWorld.

But Factorio has none of that pawn management; it's a builder with a single (controllable) protagonist who you inhabit while doing much of the construction stuff you'd normally do in a god game. You develop robot helpers in the later stages of each run to provide remote construction and management (usable via the map view that zooms into something that looks like the normal view in areas with radar coverage), at which point you can start playing it more like a builder. It's almost like the modded (huge) tech trees of Minecraft, except this retains the top-down view of classic management games. Ultimately, as I iteratively design these huge chains of machines that take in raw material and convert them to final products which can be used to fund more research or placed to expand the factory, it feels a lot like a more open ended version of the puzzles in many Zachtronics games like Infinifactory.

I always keep an eye on the power use of everything, watching the pollution which will waft out and enrage the warrens of hostile creatures that live on the planet I've crashed on. My playstyle leans on efficiency and relatively green operations (while still fundamentally mining out the local resources), unlike the really expansive players who ramp up and consume the world with their endless smog and high-power factories. You can dump hour after hour into tweaking designs and working out exactly how best to feed various intermediate stages in the factory process, which is where I feel it sits next to a Zachtronics game (almost offering a way to gamify circuit layout or similar tasks without overlapping with existing programming-style games around that theme - it's all conveyors, loading arms, basic detectors, circuit conditions, and very simple logic here). It's almost at version 1.0 with a load of great mods (and a simple interface to enable them) and a final rebalancing and polishing pass currently underway, so now is a great time to jump into a game that's been in Early Access for years.


    Mouse of the Year:

Moss

Sometimes you just want to listen to a story of a mouse as you play through some action-adventure dioramas that VR allows to look perfectly scaled, letting you understand the tiny world you're looking into. Weirdly, this wasn't the only mouse game that arrived earlier in the year; Ghost of a Tale was far more of a full (if unexceptional) RPG but lacked the way VR allows you to really feel the scale of everything. This second VR game on the list (also somewhat of a platformer) also exudes charm, but in a completely different way to Astro Bot. The cuteness is strong here and the mix of direct controller controls and reaching into the scene continues a theme this year of games getting more confident with the additional inputs VR offers while staying with a sitting-down, controller-led experience.


    Runaway Card Combo of the Year:

Slay the Spire

This year Valve brought in MtG creator Richard Garfield to design their new digital card game (which is unfortunately pay to play on top of an initial price tag) and Magic: the Gathering itself got a new digital version with MtG: Arena that more closely mirrors the booster-packs-as-play-rewards model used to great effect in Hearthstone (moving away from the buy and trade model used by Artifact and MtG: Online). But the digital card game I really enjoyed this year isn't about buying boosters; it's the purity of roguelike-like deckbuilding. When Slay the Spire left its small beta and arrived on Steam Early Access, I mentioned how complete it already felt. Back then the plan was to call it 1.0 later in the Summer but they decided that 52 weekly patches was what it needed before calling it gold, which means the final release is now imminent.

I didn't spend the entire year going back to slowly refine my card preference and climb up the Ascension ladder (unlocked difficulty modifiers) but I've had a good time with this and been back to enjoy some daily challenges. It's exactly the sort of solid design that means you can dive in and spend a few dozen hours and feel satisfied, not the eternal treadmill of new seasons and boosters (and corresponding demand for more money). That said, now it is almost released, we are hopefully going to see some expansions that add a new character and cards at some point - a good excuse to dive back in once more.


    Pawn (Sim) of the Year:

RimWorld

The current generation of Dwarf Fortress-inspired games has managed to move away from being far shallower clones and into the territory of actually offering something different. RimWorld is probably the most popular and it finally fully released this year, quite a different beast from the very early version that just looked like a Prison Architect asset rip.

Crashing onto an inhabited planet, a bunch of pawns who all have their own wants and needs must be wrangled into actually doing the important task of living and building up a colony that can survive raider attacks, wild creatures, and natural disasters, all while the seasons turn and friendly factions offer trade (and the opportunity to kidnap new members if you can woo them sufficiently). What if The Sims, except occasionally a bunch of marauders turn up and you go into a direct-control battle mode? Oh, and less time spent dictating hobbies and more time making sure there are enough crops harvested and refrigerated before winter, or actually constructing the items the commune needs from raw resources, all based on the skill levels of each pawn. Some of the rougher edges and design decisions around pawn psychology have been tweaked via the vibrant mod community and as the game developed towards 1.0 (hitting it a few months ago) a lot of the older usability mods became core features in the base game. Today, it's a huge story generator without the learning curve cliff that Dwarf Fortress is best known for.


    Oscar for Just Falling Short Every Time:

Forza Horizon 4

My view of the original Forza Horizon has slightly improved with time (especially as the expansions were fun and the micro-transactions that I cited against it have now infected all of AAA). But my fear of it becoming a semi-annual franchise that fails to evolve sufficiently has really cursed the entire Forza franchise (the main series still looks to Forza Motorsport 4 as a pinnacle of design, features, and progression that later games have repeatedly missed). It's a series I keep not quite giving a space on these lists.

In a year where Need for Speed is totally absent without being missed (after repeated failed attempts to release something meaty) and The Crew 2 arrived as a step back rather than a confident second title (diversifying the vehicles while failing to capture the dynamic challenges that made cruising round Weird USA in the first game such fun, or even to retain the online racing or the bad F&F knock-off story elements), it's time to evaluate the Horizon spin-off series within the crumbling ruins of open city/world driving games.

Horizon games are still good driving, and that means they're by far the best out there now. A new landscape every two years; enough stuff to dump hour after hour into, becoming familiar with every bend; and some nice tech updates each time (pushing visuals ahead of the 60fps main series, while the move to PC has - sometimes imperfectly - enabled 60fps with those shiny new dynamics). This year the new tech involved repainting the landscape for each season, on top of the previous dynamic weather and time of day updates. Unfortunately the weight of all these iterations feels like it has almost crushed the core game, which now lacks any sense of progression (outside the carefully paced tutorial hours) as everything rests on the roulette wheel of unlocks and endless treadmills of unlocking events that adapt to whatever vehicle you happen to have unlocked and bring along.

Everything adapts. So nothing ever gets harder or feels like a better vehicle would make it easier. Whatever you drive up in will work, unless you pick a road car for an off-road challenge. The slight wrinkle here is winter and how that makes a few more cars unsuitable for the slippery conditions. It also means all of the small challenges dotted around the map (which all share a single leaderboard that doesn't account for the seasonal conditions) become pointless as you wait for summer to actually beat your friends. It feels like a live game (the seasons cycle weekly based on server time and come with "new" events that are actually just recycled existing events with a new marker over them) and the hourly challenges give a reason to meet up with other actual players and all contribute to some shared goals, but it quickly runs out of new challenges to share (many of them are already just the existing small challenges with a new communal total that every individual attempt adds onto, so it's repetition rather than high-score chasing as a group activity).

The lack of a feeling of real progression extends to every event, which are in the classic categories and almost too numerous to count. They all bring up a list of "balanced" opponents (the algorithm used feels like it needs some tweaking as I've noticed the same buggy selections that infested Forza Motorsport 7 last year) based on whatever you care to drive from your roulette wheel of winnings (gone is selling cars back to the garage, so you can't even make some money on a car you don't want unless you're prepared to spend hours on the Auction House selling it to another real player and undercutting everyone else trying to offload yet another car they don't care about because it's all random). You don't even have to try and win an event or turn down the assists, as the small extra credit bonus for fewer assists doesn't really matter (when looking at $15m player housing with bonus effects) and the map doesn't even track whether you actually won anything. It's a mess of inconsistent iconography where only the small map challenges even award stars for doing more than just finishing something in any time.

And this is the best we have right now, by quite some margin. Hopefully the console generation transition (and being a now wholly owned studio inside the Microsoft beast) will give Playground the impetus and security to try and do more than iterate because if Forza Horizon 5 doesn't radically change the formula, it might be time to put a fork in the open world driving genre. We can all go back to playing remasters of the ten year old Burnout Paradise.



    2017 overflowed with so many games that I missed a lot of 2018; these are all pending:
Exapunks, Mutant Year Zero: Road to Eden, Marvel's Spider-Man, Hitman 2 (not Silent Assassin), Assassin's Creed: Odyssey (Xena Simulator), Vampyr, Paratopic, The Missing: J.J. Macfield and the Island of Memories, Into the Breach, Red Dead Redemption 2, Pillars of Eternity II: Deadfire, Artifact, Valkyria Chronicles 4, Yakuza 6: The Song of Life, Phantom Doctrine, Frozen Synapse 2, Ashen, & Ni no Kuni II.

Saturday 24 November 2018

µNote: Respecting the Player's Time

MicroNotes are something new I'm going to try (possibly through 2019). Rather than aiming for a long post once a month, which I am prone to kill (if it doesn't come together or looks slight after poking at a draft), I'm going to post some shorter writing here. I've been writing some thoughts elsewhere for a while and so might go back and put some of those posts here (and even go through my old drafts that I never fully shelved). This should also give room for smaller coding & rendering thoughts (that are bigger than fit on Mastodon or Twitter) to actually get posted here.

I have always found "respecting the player's time" to be a useful lens through which to consider game design. What needs to be here; what do I think makes a positive contribution to the core arc; just how much of the structure can be made optional and how do we signpost it so players can best dive into only the content they want? Extracting the most from the interactive nature of games means building structures that react to each player and customise how they experience a game, attempting to give the best progression to the most people we can reach (and even respecting that some players will not enjoy what we are making and should not be strung along). The thing is, this phrasing has become extremely common in games criticism, to the point of dilution.

“I don’t like this game as I feel like it is not respecting my time (and here are 6 paragraphs on exactly how)” is commentary on where I feel a game does not align with how I value the different activities it offers and what it considers core vs optional - I can’t fast travel in a game where I don’t agree with a design decision to make it more immersive and exploratory by not having those systems; I can’t sample just the narrative content I find engaging and think that the game should flag more content (as optional, as less important) that I don’t find core to the experience; I see mountains of “content” without enough signposts to let me understand it and a progression through it and am simply overwhelmed in a way I do not think benefits the game or possibly even was the design intent.

“This is one of those doesn’t-respect-your-time games” generalises specific criticisms about various systems vs a personal view of what it could be into almost a genre - the too-long game. It flattens a meaningful discussion from which the speaker can make clear what their values are and so why a game doesn’t align with them (valuable information a reader can compare to their own values) into basically nothingness. It’s a topic where you need to be detailed because otherwise you’re basically just repeating the ancient “game long so good value” vs “game long so necessarily boring” war.


To some, Assassin's Creed Odyssey provides the latitude to inhabit Kassandra's life and soak in the world; to others, the content is a slog and they'd much rather have a far shorter core experience that the game does not appear to offer. Respect-your-time is extremely respect-your-time and the forming of critical consensus around a game will always flatten that, even if the critics who voice their views are individually detailed in their analysis.

Every player is different and ideally games would all have a certain level of give to help accommodate as many people as possible but we’re not yet at that point. I’m also not sure the tools for signposting & flagging are sufficiently developed to be universally readable by the audience even when we add them. As we pick through the big lessons of the last decade or so of game design advances, the big open worlds and structures for repeatable content that span genres and platforms, this is hopefully something that will develop.

Friday 21 September 2018

The Free DLC: Driving Customer Delight

Quick aside: the recent posts about Rust have done very well. Enough that some of you are probably following the RSS feed just for that. I'm going to try and link some interesting Rust posts if I'm writing about a different topic that month. Today, it's Bryan Cantrill's Falling in Love with Rust - a good introductory link with extensive thoughts on why Rust is worth caring about.

On to the main topic: games are increasingly being updated after release as part of a "living game" strategy to continue to sell the game and any additional content. This is nothing new except in how common it has become. Back at the tail of the 1990s, if you wanted players to keep coming back and knew that the internet was now good enough to distribute patches, then free content was how you drove sales and interest between boxed expansions. Total Annihilation was offering optional additional units back in 1997 on top of the balance patches and we'd come to each LAN and make sure everyone was up to date before jumping into the game. TA wasn't the first game to do it but additional maps, scenarios, and units were still notable in 1997 and became more common as games embraced online multiplayer as a primary focus (eg Quake 3 and Unreal Tournament). Even things like mods are part of this, being community-developed free DLC. Counter-Strike drove Half-Life sales at the development cost of building and maintaining those mod tools (which had originally been used to develop the game).

A decade later, consoles embraced online balance and even feature patches to expand games, and eventually started pushing paid piecemeal DLC that wasn't just a different way to buy those traditional boxed expansion releases. Fancy buying a single new item for your RPG? What if you paid for it? What if you paid enough that, at those prices, a full game would cost at least several thousand dollars to buy? Hello, the pricing model of many current F2P titles which offer to sell content (beyond the models that just sell temporary cheat codes to skip the most grinding activities or boost your stats).

Today, paid DLC of all types (from cheat codes to full expansions that massively extend the volume of content in the game) is everywhere. In a change from those original online games, multiplayer modes often sell additional maps which require you to keep paying to continue to play the current rotation of popular maps, somewhat like how MMOs require continued payments. But things are starting to swing back the other way with cosmetics being sold while the maps are given away to everyone who paid for the base game so they can continue to play. As someone with an interest in the longevity of the industry, I really hope this swing back continues. We need to delight customers, to make them feel like they got more than they expected. If we are making living games, we should avoid making things that feel like you've purchased a $60 storefront that's constantly asking for money rather than offering enjoyment. Over the last year, my feelings on two console exclusive (formerly tent-pole?) driving games have really driven this home.


I may have played more hours of Forza Motorsport 7 in the last year than GT Sport but by the end of Forza, I felt it was a slog; I saw the treadmill and only the proximity of the finish line kept me even considering continuing the career mode. I had access to over 500 of the 700 cars the game launched with; I'd become familiar with all 32 locations; but I had zero interest in paying for more cars (their plan for DLC). Regular challenges and (eventually) some more substantial patches to add back a working drag mode and fix the track limits were unable to really make me feel like I was engaged with the series continuing. The feast with a list of paid extras just made me feel bloated.

Forza was the series that got me to love cockpit mode, assists off, actually feeling like it was driving. But if it wasn't for the rental option then I'd probably not even look at the next release (I'll definitely be playing Forza Horizon 4 as a rental next month - once and done for pennies rather than the increasingly expensive bundled launch day editions that don't even guarantee access to all DLC content over the next year+ of updates; the last Horizon game being unable to run offline on Windows 10 and so effectively being a rental anyway). The monthly FM7 DLCs, heavily advertised in the game and covered in branding for TVs and snack food, offered extra cars on top of already so many, but with an order of magnitude higher price per vehicle and no extra locations. A handful of freebie cars (which appeared later on) are invisible when surrounded by the number the game launched with. But the lock icons stand out, as does the way the Forza series now includes day-one paid DLC in the form of a launch car pack (James Bond cars for Horizon 4) that only the most expensive edition ($100 up front) gets access to.

Meanwhile, Gran Turismo launched with under 200 cars (but most of them feeling distinct - no 10 cars just with different advertisers and identical handling/options) and with only 19 locations from which the various tracks (plus reverses) are assembled. A full career mode did not exist at all in the launch game but all of this was clearly messaged in the advertising (and a temporary sale price really helped push it from something ignored to a worthwhile gamble). As I talked about in my GotY discussion in December, GT Sport really appealed to me. It also found a way to keep me engaged with monthly free DLC that regularly added a new location and track options plus around a dozen cars each pack. The career mode arrived and every single area of the game received additional attention and content. All for free.

While Sony may be holding the next GT game for their next console, I feel extremely positive about the series (and buying that next game) and where it might be going, for a game that launched with less to do but then constantly gave me reasons to come back and try the new additions. The experience was of deciding that the game was probably worth taking a risk on, finding what was purchased was genuinely interesting to me, and then having my interest rewarded in the long term with new content, all for the price I'd paid up front. There was enough content at launch, there wasn't a massive treadmill that ever felt full of filler just there to extend the total play time (without the variety to make it interesting), and then when I was ready to go back for more there had been patches that added the more I was looking for and didn't demand additional payments (with the associated question of whether that value proposition was better or worse than the one offered by the base game or by choosing a different game).

There are many models for how to do this sort of continuous expansion based on player feedback and a long term plan. I'd call almost all games in Early Access an example of releasing a core experience and then iterating and expanding it with an engaged community who expect those additions to be included in the price they paid up front. We have two decades of experience using the internet to distribute feature patches and new content, and of seeing how audiences react to that. The Paradox model ties a lot of paid DLC to base game feature patches that expand the core systems, so they can milk a release while never letting someone feel like they paid up front for a game and are now left to just watch it progress without them.

My experiences certainly make me lean towards saying that a launch game shouldn't try to contain absolutely everything possible with a plan to aggressively monetise post-launch content. The audience will feel fatigued at too much content or start to divide it up into what feels fresh and what feels like filler (and when overwhelmed with content but lacking the tools to understand where it all fits in as unique, it may increasingly look to be filled with filler). You can't under-deliver, but over-delivery at the cost of having content to give away for free during the lifespan of the game feels like something to be considered carefully. The option is always there to hold something back for an additional polish pass; rather than monetising it, use it to drive sales of the base game and delight your existing customers.

Friday 31 August 2018

Rust: Fail Fast and Loudly

So recently I was chatting to some Rustaceans about library code and their dislike of a library that can panic (the Rust macro that unwinds the stack or aborts, depending on your build options). The basic argument put forth was that a library should always pass a Result up to the calling code because it cannot know if the error is recoverable. The error handling chapter of the Rust Programming Language book even lays out this binary: Unrecoverable Errors panic! while Recoverable Errors return a Result. As a researcher in debugging, I reached the point where I basically banned these terms from my lectures because they can lead to exactly this thinking: that libraries cannot know they're in an unrecoverable state and so can only defer to what calls them.
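A minimal sketch of the position they were arguing for (parse_port is my hypothetical example, not anything from that discussion):

    use std::num::ParseIntError;

    // The "always Result" style: the library only reports the problem;
    // the caller decides whether the error is recoverable.
    pub fn parse_port(s: &str) -> Result<u16, ParseIntError> {
        s.trim().parse::<u16>()
    }

    fn main() {
        // One caller recovers with a default...
        assert_eq!(parse_port("nope").unwrap_or(8080), 8080);
        // ...another would treat the same error as fatal and panic itself:
        // parse_port("nope").expect("config requires a valid port");
    }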

Terminology

Throughout CompSci literature, some terms relating to debugging are not used consistently. I'll start with the words I use (so I never have to write this in a blog post again). To illustrate the scale of the terminology issue, enjoy this quote from the 2009 revision to the IEEE Standard Classification for Software Anomalies:
The 1993 version of IEEE 1044 characterized the term “anomaly” as a synonym for error, fault, failure, incident, flaw, problem, gripe, glitch, defect, or bug, essentially deemphasizing any distinction among those words.
A defect (also called a fault, error, coding error, or bug) in source code is a minimal fragment of code whose execution can generate an incorrect behaviour. This is for some input (which includes the environment of execution), against whatever specification exists to declare what is and is not correct behaviour for the program. A defect is still a defect even if it is not exercised or does not cascade into error / failure during a test case. Defects can be repaired by substituting a replacement block of code for the block reported as defective; this returns the program to executing in a way that does not violate the specifications.

An error (sometimes also called a fault or infection) in program execution or modelled / simulated execution is the consequence of executing a defective block of code and the resulting creation of an erroneous state for the program. This effect may not be visible and not all errors will be exposed by surfacing as a failure. An error is the result of a defect being exercised by an execution which is susceptible to that defect.

A failure in program execution or modelled / simulated execution is the surfacing of an error state by the observation of behaviour in violation of the program specification. It is therefore correct to say that a failure was experienced for a given test case due to a chain of erroneous states that originated with the execution of a defect that caused the error.
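To pin those three terms to something concrete, here's a tiny (entirely hypothetical) Rust sketch - the defect is one bad line, the error is the erroneous state some executions create from it, and the failure is the moment that state is observed:

    // Defect: `max` is seeded with 0 instead of the first element, so the
    // block is wrong for all-negative inputs (against a spec of "returns
    // the largest value in the slice").
    fn largest(values: &[i32]) -> i32 {
        let mut max = 0;
        for &v in values {
            if v > max {
                max = v;
            }
        }
        max
    }

    fn main() {
        // This input does not exercise the defect: no error, no failure.
        assert_eq!(largest(&[3, 9, 2]), 9);

        // This input exercises the defect: `max` holds an erroneous 0
        // (the error), and the assert observes it (the failure).
        assert_eq!(largest(&[-3, -9, -2]), -2); // panics: 0 != -2
    }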

Setting a Trap

Having muttered about the language choices made in the Rust book at the top, I'm going to also praise how they actually resolve that chapter. The final section goes into detail about the pros and cons of calling panic from your code. It defines bad state in a way I might write myself in a practical programming guide. It even offers up the type system as a way to ensure your specifications for input aren't violated, with as much of the burden placed on compile time as possible.

It's good writing but it can also potentially be read by those who really wish panics didn't exist as saying you just need to make sure every possible input into your library is valid. My stance is that an occasional small slice of invalid input being possible is actually important for code quality when writing in a language that can fail (it can also make some things a lot easier to write in practice). However, it must be clearly labelled as such, with no question about when you might panic. This is the contract you're writing and every good library should fully document the interface so there is no possibility of an unexpected panic.
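As a sketch of what that labelling looks like (the function is hypothetical, but the # Panics rustdoc section is the standard convention for writing this contract down):

    /// Returns the element exactly in the middle of `values`.
    ///
    /// # Panics
    ///
    /// Panics if `values` is empty or has an even number of elements,
    /// because no single middle element exists in either case.
    pub fn middle(values: &[u32]) -> u32 {
        assert!(
            values.len() % 2 == 1,
            "middle() requires a non-empty, odd-length slice"
        );
        values[values.len() / 2]
    }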

To give an example from the Rust standard library (which is totally just a library, and we should expect other libraries to conform to the same standards it uses - this is even more true of Rust than of other languages as Rust splits the really core library code out into the Rust core library): when you've got a vector and you need to divide it in two, split_off(at) is what you need.
Splits the collection into two at the given index.
Returns a newly allocated Self. self contains elements [0, at), and the returned Self contains elements [at, len).
Note that the capacity of self does not change.
Panics if at > len.
Here we have a clearly defined operation that does exactly what we want and comes with some important guarantees about how it operates. One of those details is that if we ask to split beyond the end of the array then it will panic.

Why does this panic rather than returning a Result and letting us decide if the error is recoverable or not? I can imagine many places where trying to split an array may not be the only thing a program can do to continue - a backup path could be constructed to continue operating under some circumstances if that call failed - but this library decision means the calling code cannot decide that. If you ask for a split at an invalid point then you get a panic.

It is because the library set a trap. It asks the calling code to know something about the object it wants manipulated. Because there is no reasonable way of asking for the array to be split in two beyond the end of the array, the only conclusion the library can make about such a request is that it is unreasonable. We are past the point of executing a defect, we are swimming through an erroneous state, and it is time to fail so this can be caught and fixed. That also means no room to let the erroneous state accidentally ask to zero the entire storage medium it has access to and trash a last known-good state that might be used to recover later (or debug the defect). Any calling code that wishes to avoid this should catch the erroneous state and recover (if possible) before calling over the library boundary. The potential to get unreasonable requests allows our blocks of code to keep each other more honest by surfacing errors as failures.
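Sketching the calling side (take_tail is my hypothetical wrapper), the guard belongs to the caller and happens before crossing the library boundary:

    fn take_tail(mut v: Vec<u8>, at: usize) -> Option<Vec<u8>> {
        // split_off documents "Panics if at > len", so a caller that can
        // receive unreasonable requests checks and recovers here...
        if at > v.len() {
            return None; // our recovery path; the bad state never crosses over
        }
        Some(v.split_off(at))
    }

    fn main() {
        let data = vec![1, 2, 3, 4, 5];
        assert_eq!(take_tail(data.clone(), 3), Some(vec![4, 5]));
        // ...while an unguarded v.split_off(10) would panic, surfacing the
        // erroneous state as a failure instead of letting it propagate.
        assert_eq!(take_tail(data, 10), None);
    }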

It is a restriction that provides a higher chance of fixing a defect before we ship a product. We must strive to fail fast and sometimes that means using some small gaps between what is possible and what is permitted as traps to catch when errors have occurred. A library can be poorly constructed to panic when not expected (and declared) but the existence of panics should not itself be used as a sign that a library is of poor quality or to be avoided.

Saturday 28 July 2018

Empty Rust File to Game in Nine Days

I've been doing Rust coding for a bit now. Recently that's involved briefly poking at the Core Library (a platform-agnostic, dependency-free library that builds some of the foundations on which the Standard Library is constructed) to get a feel for the language under all the convenience of the library ecosystem (although an impressive number of crates offer a no_std version, mainly for use on small embedded platforms or in OS development). I'm taking a break from that level of purity but it inspired me to try writing a game just calling to the basic C APIs exposed in Windows.

So I'm going to do something a bit different for this blog: this post is going to be an incremental post over the next nine days. I'm going to make a very small game for Windows (10 - but hopefully also seamlessly on previous versions as long as they have a working Vulkan driver), avoiding crates (while noting which ones I'd normally call to when not restricted to doing it myself). Rather than rewriting the external signatures of the C APIs I have to call, I'll import only the crates that expose those bare APIs.

Day 1

The day of building the basic framework for rendering. Opening with a Win32 surface (normally something you'd grab from eg Winit) and then implementing a basic Vulkan connection (which would normally be done via a host of different safe APIs from Vulkano to Ash, Gfx-rs to Dacite).

The Win32 calls go via Winapi, which is a clean, feature-gated listing of most of the Win32 APIs. The shellscalingapi is a bit lacking as it doesn't expose all the different generations of the Windows HiDPI API (and Rust programs don't include a manifest by default so you typically declare your program DPI-aware programmatically), which means you have to declare a few bindings yourself to support previous editions of Windows. But generally it makes calling into Win32's C API as quick as if you were writing a native C program that includes the required headers. You could probably generate it yourself via Bindgen but the organisation here is good and it's already been tested for potential edge cases.
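For illustration, the kind of binding you end up hand-rolling looks something like this (SetProcessDpiAwareness is the Windows 8.1 generation of the API from shellscalingapi.h; properly supporting even older Windows would mean loading shcore.dll dynamically rather than linking it like this):

    #[allow(non_camel_case_types)]
    type PROCESS_DPI_AWARENESS = u32;
    #[allow(non_camel_case_types)]
    type HRESULT = i32;

    // Value 2 from the PROCESS_DPI_AWARENESS enum in shellscalingapi.h.
    const PROCESS_PER_MONITOR_DPI_AWARE: PROCESS_DPI_AWARENESS = 2;

    #[link(name = "shcore")]
    extern "system" {
        fn SetProcessDpiAwareness(value: PROCESS_DPI_AWARENESS) -> HRESULT;
    }

    fn declare_dpi_aware() -> bool {
        // S_OK is 0; anything else means the call failed (eg awareness was
        // already set by a manifest or an earlier call).
        unsafe { SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE) == 0 }
    }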

Vulkano exposes the underlying C bindings via the Vk-sys crate. It has no dependencies (so it's what we want: just something to avoid having to write the signatures ourselves without obfuscating anything going on) and while it's not updated to the latest version of Vulkan (1.1), we're only doing a small project here (so it shouldn't matter at all). The function pointers are all grabbed via a macro, which is a bit cleaner than my previous C code that called vkGetInstanceProcAddr individually whenever a new address was required (to be cached). Of course, other areas are down to just the barest API, which means looking up things like the version macro.
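As an example of those lookups, VK_MAKE_VERSION from vulkan.h is simple enough to port by hand - the version is just bit-packed (10 bits major, 10 bits minor, 12 bits patch):

    // Port of the C macro: VK_MAKE_VERSION(major, minor, patch).
    fn vk_make_version(major: u32, minor: u32, patch: u32) -> u32 {
        (major << 22) | (minor << 12) | patch
    }

    // For the apiVersion field of VkApplicationInfo (we target Vulkan 1.0).
    fn vulkan_1_0() -> u32 {
        vk_make_version(1, 0, 0)
    }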

So at the end of day 1, we've got a basic triangle on the screen working (with a 40kB compressed executable, most of which is Rust runtime/stdlib as Rust libraries default to static linking).

Thursday 28 June 2018

Mini-Review: Slay the Spire

So early last year, it was already clear that we were getting a lot of extremely good games (and going on the number of February release dates announced at E3, 2019 looks like it's going to be similar). This year has started with fewer tent-pole releases (most notably, God of War) and far less focus on RPGs overflowing with content (which gave early 2017 a very specific feel) but there certainly have been some great games, like Mashinky building up in Early Access and BattleTech getting a full release. Into the Breach is another game from earlier in the year that I've not written about yet but is very nice. There's something in the strategy/tactical water this year and it tastes like roguelike-likes. The genres have always been somewhat mingled, what with 4X games (or even solitaire games) being about semi-random runs which build their own story through the mechanics (and that's where Mashinky fits in), but much of 2018's output (They Are Billions entered Early Access at the very tail of 2017, I'm counting it) feels explicitly part of the current roguelike-like wave. Sometimes it's unclear which side of the line games are aiming for (Frostpunk is probably going for scenario-based play rather than the endless replayability of rogue).

Slay the Spire, currently in Early Access with plans for a release sometime this Summer, is a deckbuilder game. If you're not familiar with the genre, it's the assembling of a card deck from CCGs (like draft format) without the pesky monetisation of acquiring the cards. You fight battles (here entirely PvE against clockwork enemies who have predictable patterns and compositions rather than branching AI) and work out how everything synergises with the simple core mechanics, but without having to buy hundreds of dollars of cardboard or, in our terrible digital future, virtual cardboard.


In order to ensure the game doesn't devolve into simply selecting the best deck from the current meta discussions and throwing it at the enemies, the format here is solidly a roguelike-like. Semi-randomised runs where the expectation is to eventually be weakened to the point of death and have to restart with a new random seed from the very beginning. As you work through a run, you'll be offered various card choices (as well as handed limited potions and rule modifiers in the form of relics) from which to build your deck. One of the key things here is that card removal is actually hard (not often offered and rarely free) so building a deck is very much about what you don't select. The only place I've seen cards you can't turn down is the curses category, where it's used to good effect: negative outcomes can add cards to your deck that you don't want and, as they are hard to remove, they will stay with you and mess with your flow. Slay the Spire is very clean in how everything works like this - full of smart decisions to keep the game compact without feeling stale.

Unfortunately also absent, compared to one of my previous favourites - FTL - is much story development. There is a pool of random events with flavour text but not to the extent that it feels like FTL assembled a story. Even the Magic: the Gathering standard of flavour text for cards is missing here, with only artwork and name working beyond mechanics as narrative. But what you do get from a standard run is 50 events, mainly fights, as you scale up through three main bosses and a few elites (with your exact path somewhat flexible, so you can pick when to fight an elite or rest at a campsite to replenish your health). As with all enemies in the game, each individual boss is clockwork, so part of the learning curve is internalising their moves, but there is some variety in which boss you encounter (the final boss is randomly selected from a pool of three and you can see who it is during the final third of the run to help build your deck towards beating them).

So far the Early Access is going well, now with three different characters (changing the starting relic, some core mechanics, and card availability) that all feel sufficiently different. Beyond the standard roguelike-like, there is some permanent unlocking of extra cards/relics that will randomly appear in the game to expand your options over time, as well as a difficulty staircase called Ascension that adds new difficulty modifiers once you've grokked the mechanics and how to steer yourself towards a synergistic deck despite the RNG offering you new cards. Spelunky fans will recognise the Daily Challenge, here adding three daily modifiers to a daily seed and then offering a leaderboard scoring how far you got and what feats you achieved during your first run. There is also now an Endless mode, although I am yet to even try it.

Currently the buzz is positive and the sections of the game with 'coming soon' written over them have almost all been swapped out for new features (sitting at Weekly Patch 30 at time of writing) so a final release is probably on track. I do hope the game becomes a living game after 1.0, with plenty of balance tweaks but also a slowly expanding enemy and elite selection (as they are clockwork, it can feel like you're eventually solving them for almost all competent decks you may have when you encounter them). Expansions for new cards, new bosses, and even a new character also seem like a good long-term future for the game. For now, not even at 1.0, it's an extremely easy way to burn through hours of play and feel like you're getting a deep appreciation for the various mechanics and synergies available.

Tuesday 29 May 2018

Evolving Rust

At the start of the year I talked about using Rust as a tool to write code that was safe, easy to understand, and fast (particularly when working on code with a lot of threads, which is important in the new era of mainstream desktops with up to 16 hardware threads).

Since then I've been working on a few things with Rust and enjoying my time - especially in some cases where I just wanted to check basic parallelism performance (taking advantage of a language where you can go in and do detailed work but also just call to high-level conceptual stuff for a fast test). If you're looping through something and want to know the minimum benefit of threading it, just call to Rayon and you'll get a basic idea. In practice, that usually means changing the iterator from an .into_iter() to .into_par_iter() and that's it.
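A minimal sketch of that flip, assuming rayon is already in your Cargo.toml (this is basically the canonical Rayon example):

    use rayon::prelude::*;

    fn sum_of_squares(input: &[u64]) -> u64 {
        // The sequential version is input.iter().map(|&x| x * x).sum();
        // swapping in par_iter() is the whole change - Rayon splits the
        // work across its thread pool and joins the partial sums.
        input.par_iter().map(|&x| x * x).sum()
    }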

I finally upgraded my old i5-2500K desktop (on a failing motherboard from 2011) to a new Ryzen 7 so it's been very useful to quickly flip slow blocks of code to parallel computation. When I'm just building some very basic tool programs, I'd probably not even think about threading in C, but here it is so easy that I've been quick to drop a typical 30ms loop down to 3.5ms. One of the things I've been somewhat missing is easy access to SIMD intrinsics, but this brings me to something else I've been enjoying this year: Rust is evolving.

I'm used to slowly iterating standards with only slight upgrades between them as tools like compilers improve and the std lib slowly grows. Clang warnings and errors were a massive step forward that didn't rely on a new C standard, and libraries can offer great features (that you'd otherwise not have time to code yourself), but when I think of C features, I generally think of language features that are fixed for quite some time (about a decade).

Rust is currently working on the next big iteration (we're in the Rust-2015 era, which is what Mozilla now calls 1.0 onwards, with Rust-2018 planned before the end of the year) but that's via continuous updates. Features are developed in the nightly branch (or even in a crate that keeps them in a library until the design is agreed as a good fit for integration into the std lib) and only once they're ready are they deployed into stable. But that's happening all the time, even if a lot of people working with Rust swear by nightly as the only way to fly (where you can enable anything in development via its associated feature gate rather than waiting for it to hit stable).

For an example of that, SIMD intrinsics are currently getting ready to hit stable (probably next release). That's something I'm extremely eager to see stabilised, even if I'm going to say the more exciting step is when a Rayon-style library for it exists to make it easier for everyone to build for, maybe even an ispc-style transformation library.
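For a taste of what's landing, here's a sketch of the std::arch intrinsics as they currently look (AVX2 adding eight i32 lanes at once, with runtime detection guarding the unsafe call - don't hold me to the exact details until this actually stabilises):

    #[cfg(target_arch = "x86_64")]
    fn add_lanes(a: &[i32; 8], b: &[i32; 8]) -> [i32; 8] {
        if is_x86_feature_detected!("avx2") {
            unsafe { add_lanes_avx2(a, b) }
        } else {
            // Scalar fallback for CPUs without AVX2.
            let mut out = [0; 8];
            for i in 0..8 {
                out[i] = a[i] + b[i];
            }
            out
        }
    }

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn add_lanes_avx2(a: &[i32; 8], b: &[i32; 8]) -> [i32; 8] {
        use std::arch::x86_64::*;
        // Unaligned loads, one vector add, then store the result back out.
        let va = _mm256_loadu_si256(a.as_ptr() as *const __m256i);
        let vb = _mm256_loadu_si256(b.as_ptr() as *const __m256i);
        let mut out = [0i32; 8];
        _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, _mm256_add_epi32(va, vb));
        out
    }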

The recent Rust 1.26 update is a great example of how the language is always evolving (without breaking compatibility). 128-bit integers are now in the core types; inclusive ranges mean you can easily create a range that spans the entire underlying type (without overflow leading to unexpected behaviour); main can return an error with an exit code; match has elided some more boilerplate and works with slices; and the trait system now includes existential types.
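A quick tour of those additions in one small snippet (impl Trait being the stabilised slice of "existential types"):

    use std::num::ParseIntError;

    // impl Trait in return position: "some type implementing Iterator"
    // without having to name the concrete adapter chain.
    fn evens_up_to(n: u128) -> impl Iterator<Item = u128> {
        // 128-bit integers and inclusive ranges, both new in 1.26.
        (0..=n).filter(|x| x % 2 == 0)
    }

    fn describe(slice: &[u8]) -> &'static str {
        // Basic slice patterns in match, also new in 1.26.
        match slice {
            [] => "empty",
            [_] => "one element",
            _ => "many elements",
        }
    }

    // main returning a Result: an Err prints via Debug and sets a
    // non-zero exit code.
    fn main() -> Result<(), ParseIntError> {
        let max: u128 = "340282366920938463463374607431768211455".parse()?;
        println!("u128 max: {}", max);
        println!("evens up to 10: {}", evens_up_to(10).count());
        println!("{}", describe(&[1, 2, 3]));
        Ok(())
    }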

Monday 30 April 2018

BattleTech: Just One More Mission

So, this has rapidly taken over all of my free time. Who knew that almost 30 years after I was playing those early BattleTech computer games (including some very early Westwood Studios titles), there would be a tactics game that captures the magic of detailed combat between 'Mech miniatures, simplified down without losing the charm and weight of those mechanics?

When X-Com (originally UFO: Enemy Unknown to me) was rebooted into a new tactics game, I just could not get into the simplified systems. Maybe this was made worse by my continuing to go back to that original and throwing dozens of hours into the campaign every few years but something about moving from action points to move & fire phases didn't click with me. I knew how this game worked and a steady shot came from moving less and having more time to aim properly. It was all a complex set of choices that set the pace of progression and the chances of coming back with most of your squad in good health (or at least alive). Without that as the backbone of the tactics game, I just couldn't get into the larger strategic layer.


For whatever reason, I don't feel similarly constrained in Harebrained Schemes' latest game. Maybe it's the secondary systems like heat management and armour facing, or that all of that stuff comes from detailed loadout decisions made in the strategic layer, but the simplifications here feel necessary and improve the flow of each mission (which can sometimes finish in minutes but normally runs closer to an hour). There was never going to be a time when 'Mechs could shoot more often (if it used APs) because managing the heat generated already restricts your actions as much as the turn counter. I've also not been going back to a different BattleTech tactics game and getting my fix there in the years up to this release, so each mission feels like fresh air; every dodge and answering body-block feels like the taste of metal behemoths becoming mangled for my enjoyment.

I could probably play just the tactical layer for another 40 hours without anything else to draw me in. Keep that random scenario generator running to build missions, plus some fresh 'Mech loadouts to keep things interesting and my playbook changing, and I'd be set. But here we get a full set of scripted story missions and universe building which situates you inside the world some of us have been diving into for decades.

As a mercenary, you're responsible for making payroll every month and ensuring your equipment is replaced after every mission. It can genuinely feel desperate when you're trying to make enough from contracts to keep going and you know that the damage you take can eat through your profits. Far worse, injuries and repairs are going to prevent you from jumping into another contract for some time, and that payroll deadline is only getting closer. Time is money and even if you win a scenario, you could still come out with a loss. That's where the fiction continues to meet the mechanics: unless you're on a story mission, you're encouraged to consider cutting your losses and abandoning a contract. Optional objectives can increase your pay but none of that is worth it if you're stuck for a month repairing the damage you took completing them. Even before you're done with the core objectives, sometimes it's time to evac and write it off. There are dozens of little things that mesh the narrative and the mechanics like this.

The production values are somewhat mixed (there is a bit of the "Kickstarter budget constraints" visible in spots), with some functional-if-Telltale-Games(ish) characters for a lot of the dialogue between story missions giving way to the occasional but far more evocative animated painting cutscenes backed by excellent music. In the tactical layer, some of the lighting, atmospheric effects, and 'Mechs look excellent, but it's also easy to note some rather variable detail levels, dodgy action camera shots, the odd framerate canyon, and something that seems straight up broken about the loading system (it hasn't crashed, it's just taking ages to load the loading screen). I grabbed a new Ryzen this month and BattleTech is possibly the only place where I've not noticed the improvement (something is going on during those load screens, if they even render in, but it's not taxing CPU cores doing it). But these are minor blemishes on what is often a gorgeous game that oozes a coherent style.

This is an exceptional tactics game that simplifies the miniatures rules without stripping away the character of huge 'Mech combat in a crumbling universe of fiefdoms. Come for the tactical mission encounters, stay to play mercenaries trying to make ends meet while being pawns in much larger events.

Sunday 18 March 2018

The Asset Fidelity Arms Race

So there has been a lot of discussion about the cost of game development recently. Unfortunately a lot of that has been used to defend questionable business practices (there is another gaming industry and I have absolutely no interest in ever being part of it) or extremely short-term views of economic expansion (eg increasing new-release unit prices for a medium that is already one of the most expensive ways of purchasing a single piece of mass-produced entertainment, and one where the successful transition to digital has already shrunk unit costs and value - the loss of resale, lending, etc).

Of course, while there are billions in revenue to be made from a single project, massive corporations will continue to greenlight projects whose scopes grow to a decent percentage of the potential rewards. The biggest budgets will always grow to fill the maximum potential returns, which means a hit-driven industry trends towards ever larger projects. This gives me a rather fatalist view of that original discussion (and concern about the "solutions" proposed, which point at gambling mechanics and increased unit prices as if they could not trigger a market crash or reverse decades of market growth).

But let's step back a second. Asset costs are going up and games are getting bigger (if not longer - not a bad trend as we balance the endless replayability of something like chess against the expectation that you can tell most stories in much less than 100 hours, be that in a book, movie, or TV series). We've been talking about this for as long as I've been involved in video games (~1999 onwards, first as press, then adding indie development).

We're about to watch another GDC where there should be a great selection of technical talks, many of which propose paths out of an increasingly expensive asset fidelity arms race. But are we going to listen, or go back and just use these techniques to build even more detailed worlds? Even on an indie project (where the project decisions are usually made by an in-the-trenches dev), we tend to scope for the most that we think we can do. Doesn't that say something about how this arms race only exists because we aren't threatened by it? That we're already engaged in a careful process of ensuring the incline is just right for stable growth?

Forza Motorsport 4 - Xbox 360 (2011)

Seven years ago, this was the detail level for Forza, except this shot used an offline renderer (photo mode) to really make the most of those assets. To my eye, the asset stands up a generation and a half of consoles later. When I look back at titles no longer considered cutting edge on game photography sites like DeadEndThrills, there is a lot to like about the actual assets, even when just tweaking the real-time renderer to push the limits of what it can offer. And the cost of making assets at that fidelity level (as our tools advance) is only going down with time. Not to mention, the potential for reuse grows (especially with more component-based design from workflows promoted by stuff like PBR).

When I'm working on level-of-detail systems, chasing asset fidelity today only buys an incremental improvement in the potential density of the very local area - the rest of the scene is already juggling far more assets/detail than we have the ability to render in 16ms. Is the asset fidelity arms race over if we want it to be? Long term, are we looking towards one-off costs (R&D: new rendering technology and hardware advances) and larger budgets building bigger worlds (for the projects that need it) rather than major increases in the fidelity of assets? Not to say there is no point in increasing fidelity, but how quickly will this hit diminishing returns? So much of the very recent increase in visual fidelity seems to come from rendering advances that provide things like rich environmental lighting or better utilisation of existing assets (combining high pixel counts with genuinely super-sampled anti-aliasing).
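
To put a toy number on that (everything here is invented for illustration): with distance-based LOD selection, only the nearest slice of the scene ever pays for the maximum-fidelity asset, so raising that ceiling barely moves the total frame cost.

    // Toy distance-based LOD pick; thresholds and triangle counts made up.
    struct Lod {
        max_distance: f32, // use this level while the camera is nearer
        triangles: u32,
    }

    fn pick_lod(levels: &[Lod], distance: f32) -> &Lod {
        levels
            .iter()
            .find(|l| distance <= l.max_distance)
            .unwrap_or_else(|| levels.last().expect("need at least one LOD"))
    }

    fn main() {
        let levels = [
            Lod { max_distance: 10.0, triangles: 500_000 }, // hero fidelity
            Lod { max_distance: 50.0, triangles: 50_000 },
            Lod { max_distance: std::f32::INFINITY, triangles: 2_000 },
        ];
        // Only the closest objects render the expensive version.
        for d in &[5.0f32, 30.0, 300.0] {
            println!("{}m -> {} triangles", d, pick_lod(&levels, *d).triangles);
        }
    }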

Sometimes I feel like we're being sold a false choice between sustainable development costs and expensive-looking games. As we slowly ride the silicon advances (the rendering potential of a $150 to $500 device - quite a narrow window that is constantly throwing extra FLOPS at us) and develop new real-time rendering algorithms, it is far from as clear-cut as it can sometimes sound. When we look at the photo modes that have come to games, which often produce extremely clean and detailed versions of what the game actually looks like in action, we should remember that this is already the potential visual detail of current game assets. We're just a bit of hardware performance and a few real-time techniques away from realising it. These are long-term advances that lift all projects up, sometimes with major increases in asset-creation productivity (eg integrating various procedural assists and, more recently, the potential from moving to PBR). In addition, expecting users to buy new hardware for a few hundred dollars every four to seven years is a lot more reasonable (and sustainable, as we chase the affordable silicon cutting edge) than pushing unit prices to $100 or beyond.

GT Sport - PlayStation 4 Pro (2017)

So, as I look to GDC, I'm looking forward to hearing about a load of exciting advances. I always look forward to SIGGRAPH for the same reason. Even if the budget to expand asset fidelity dries up tomorrow, we should be able to continue to make amazing things. Video games are built on innovation. Let's not allow our concerns about the asset fidelity arms race to lead us down a path of thinking the people who buy games are a resource to be strip-mined as rapidly as possible. Sustainability is just as much about ensuring we can offer something at a price everyone can afford and which enriches their lives, providing delight rather than cynically tapping into gambling-like addictions or experiences that feel hollowed out.

Saturday 17 February 2018

Building the Scaffolding: Mashinky

The last month has involved a lot of building the scaffolding required for a large project - the unsexy code that allows you to build things quickly in the future. Core services: debug displays, logging, state save & load, program execution playback. Basic tasks: data structures, juggling between CPU & GPU, job systems for organising work, etc. It's the work that comes from not starting out with an existing engine, and only some of it can be avoided by looking into the Rust ecosystem and picking out the crates that best satisfy my requirements.

The benefit of getting this stuff done in Rust is that I'm not too worried about my threaded code, and I'm getting used to the details of the language and compiler tools before I start any heavy experimenting or profiling to optimise what I'm building. There is always a period of getting up to speed, and sometimes you can overlap it with writing the basic code that is more about getting plans down than doing anything novel or particularly complex.
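
As a flavour of that basic-but-needs-to-be-right code, here's a minimal sketch of the job-system shape I mean, using only std threads and channels (illustrative, not my actual implementation):

    use std::sync::mpsc;
    use std::thread;

    // Split work into chunks, farm them out to threads, gather results
    // over a channel; the compiler checks every capture is safe to move.
    fn run_jobs(jobs: Vec<u64>) -> u64 {
        let workers = 4;
        let chunk = ((jobs.len() + workers - 1) / workers).max(1);
        let (tx, rx) = mpsc::channel();
        for slice in jobs.chunks(chunk) {
            let tx = tx.clone();
            let slice = slice.to_vec(); // owned copy moves into the thread
            thread::spawn(move || {
                let total: u64 = slice.iter().map(|j| j * j).sum();
                tx.send(total).expect("receiver still alive");
            });
        }
        drop(tx); // close our sender so the receive loop can finish
        rx.iter().sum() // blocks until every worker has reported in
    }

    fn main() {
        println!("{}", run_jobs((1..=1000).collect()));
    }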


I've been playing quite a bit of Mashinky recently, which is an early release of what could turn into a pretty exciting spin on the TTD model. A seven-year solo development project is being brought in for a final landing, and it's interesting to see such a clear vision, which hopefully the team (presumably paid via the Early Access monetisation of this final part of development) will be able to polish off. It's good to find inspiration in the work of others, especially when you're building something that has yet to really create anything worth showing off.

The real party trick in this game right now is flipping between a stylised orthographic (polygonal) projection and a more modern, realistic perspective projection. You build on the classic grid but can flip back to enjoy the scenery at any point. It's a good way of providing both the underlying detail you need to construct line extensions and a visually interesting view into the virtual world you're shaping.
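
The difference is easy to show in miniature (numbers invented): the orthographic view ignores depth entirely, which is what keeps the build grid readable, while the perspective view divides by depth so distant scenery shrinks.

    // Toy 1D projection of a point's x coordinate under both cameras.
    fn project_ortho(x: f32, _z: f32, half_width: f32) -> f32 {
        x / half_width // depth never affects the screen position
    }

    fn project_persp(x: f32, z: f32, focal: f32) -> f32 {
        focal * x / z // divide by depth: distant tiles shrink
    }

    fn main() {
        for &z in &[10.0f32, 20.0, 40.0] {
            println!(
                "x=5 at depth {}: ortho {:.2}, perspective {:.2}",
                z,
                project_ortho(5.0, z, 32.0),
                project_persp(5.0, z, 2.0)
            );
        }
    }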

Speaking of scaffolding, the Early Access version is very much incomplete and, beyond the features that are completely missing, what is here feels a lot like scaffolding for a pretty engaging problem engine (as most builder games are; for example, Cities: Skylines lives on the traffic problem engine that drives many of the decisions you have to make). In the case of this Mashinky alpha, it provides the template for dynamic and authored quests along with three eras of technology that slowly advance the available equipment and the sites of resource creation and processing. An interesting decision is to cost lots of the technology and upgrades in resource tokens. You deliver felled trees to be processed into planks, but then need those delivered to the workshop before you get lumber tokens to spend on better rolling stock (and all these locations are randomly populated based on the map seed). Almost everything is extended throughout the game, so expect all of those locations to also be heavily upgraded (again, using a mix of token types) as you progress. But there are plenty more eras not in the game and the quest system is extremely light in what it offers you.

If you want to build train tracks and work a slowly deepening optimisation problem of moving resources around randomly generated locations, then you're ready to go. But Mashinky right now is very clearly only showing hints of a potentially great game. It is exactly the type of game that has previously done very well being refined in public for a couple of years before hitting a 1.0 release. There are always lots of quality-of-life ideas that a small community can quickly highlight. Right now I would really enjoy being able to disable production buildings: they build up to having lots of different production lines, each with different efficiencies and output, and the current workaround is to destroy extensions if you really want to turn them off.

The next thing coming is a big scenario builder update, which I can see the point of (vs the current random-generation-only maps), but I'll probably wait for some major updates (like a new era or two) to arrive before jumping back in. With 30 hours already logged on Steam, it has already been plenty of fun. By the time it leaves Early Access, it could be something to recommend to anyone who finds Factorio too daunting or violent.

Saturday 20 January 2018

Why I'm Trying Rust in 2018

Last year, when considering a long overdue upgrade for my home desktop, I pondered the end of quad-core desktop dominance. Since then we've seen 8 hardware threads on mainstream "thin" laptop CPUs, so even at 15 Watts (including the iGPU) you can't expect everyone to have only two or four (possibly via SMT) threads. Intel are still charging extra to enable 2-way SMT on desktop, but even there the mainstream is now solidly six threads, possibly 12 via SMT. AMD are running away with 12 hardware threads for pennies and 16 for not much more, with a slight speed boost expected in April to eat into the remaining single-threaded gap (while Intel are busy reacting to slightly bad news for their real-world performance).

At this point, if we want to maximise the performance of our code (ok, not the sole focus), I think it goes without saying that a single-threaded process is typically not going to cut it. If we're statically slicing our workload into two to four blocks ("main loop here, audio subsystem and this on a second thread...") then we're also not building something that scales to the mainstream consumer hardware we should expect to be running on in the next few years. Even ignoring SMT (maybe our tasks leave very few unused execution units in each core, so running two threads per core just adds overhead and reduces cache per thread), we're going to have to expect six to eight cores that all run fast enough that using them always beats running fewer cores faster (due to boost/TDP-limited processor design - which doesn't even appear to be much of a factor on desktop). When thrown 32 hardware threads, we should be laughing with joy, not worried about having the single-threaded performance to run our code. We have to work under the assumption that basically everything we do has to be thread-safe and able to split into chunks all working on different cores. Yes, some tasks have to be worked on sequentially, but in areas like game development we are juggling a lot of tasks and many of them are not so constrained, so we've got to adapt.

It's 2018 and when I think about some of the most successful threaded code I've written in recent years, it's mainly in Python. Yes, GIL-containing, dynamically-typed (static analysis nightmare) Python. It was never going to be the fastest but I had expressiveness and a great set of libraries behind me. I also have no doubt that a subtle defect is probably still sitting in that code which we never found. But if I was to rewrite it in C, those odds only go up to an almost absolute certainty. At this point, I'd say my ability to debug most defects is ok but, even assuming I catch everything, the time lost to that task is a significant percentage of my development budget. I started looking around for something that retained the system programming power I expect from C (knowing that I'd be doing FFI binds into C ABIs) but with better tools for writing threaded code I could feel more confident was correct.

Enter, stage left, Rust. A system programming language that's just about old enough to have gotten the first round of real-world testing out of the way and started to build an ecosystem, while also having one feature you'll not find in most of the other multi-paradigm, high-performance languages: a borrow checker as part of the compiler. It has the expressiveness you'd expect from a modern high-level language (eg Python or Scala), with enough bits of Functional or OO, but it has no time for handing out mutable references to memory like candy. The compiler requires you to always consider how you interact with your memory, which I found very useful for threaded work, along with ensuring you've got no GC overheads (you only count references for memory where you really need it). Once I'd gotten my head around it, the constraints no longer felt onerous.
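
A tiny example of what that buys (all names invented): mutating shared state from several threads without declaring how it's shared simply doesn't compile, so you reach for types that encode the sharing (Arc) and the locking (Mutex) explicitly.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared, mutable state has to say so in its type.
        let counter = Arc::new(Mutex::new(0u32));
        let mut handles = Vec::new();
        for _ in 0..4 {
            let counter = Arc::clone(&counter);
            handles.push(thread::spawn(move || {
                // The lock is released when the guard drops at scope end.
                *counter.lock().unwrap() += 1;
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
        // Handing a bare &mut counter to two threads instead is a
        // compile-time error, not a latent data race.
        assert_eq!(*counter.lock().unwrap(), 4);
    }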

This is certainly not the only way of doing things, but Rust feels like it is just close enough to C-like for me to feel comfortable (and know what my code will most likely actually do) while offering almost everything I expect from the integration of various modern/functional features, all wrapped in low-overhead (often completely free at run-time) safety. You can still shoot yourself in the foot, but there's not a book of undefined behaviour you have to carefully avoid and which the compiler will often not warn you about. So I'm changing my traditional wait-and-see approach: 2018 will be a year where I explore a new language and iterate on projects while the tools are still very much developing around it. Rust may not quite be there yet, but it's getting close enough that I can't see myself jumping into a large independent project in C and not regretting it in a year. I'm also currently between major contracts and probably looking to relocate, so now is an ideal time to use a bit of that freedom to investigate new ideas. Even if it doesn't work out, operating with this ownership model will probably push how I think about (and aim for safety in) concurrent programming in the future.

How is the current state of play? They've got a (mostly) working Language Server for Visual Studio etc, with advancing integration into some other IDEs (JetBrains stuff, vim). I've spent quite a lot of time in PyCharm recently so it feels natural to stick with that (using an early plugin that's developed enough to offer the standard niceties, which is handy for the type hints in a language that doesn't require many explicit type declarations). The installer doubles as the library manager (kinda turbo pip) and the solution manager/build system (and installs optional tools so they can tightly integrate into the default workflow). If you've got the C runtime libraries (there is an option to build without ever calling into the CRT, but the default is a standard library built on it) then you're basically ready to go.

The ecosystem makes a lot of stuff seamless. You add an external library by name (or git repo); the solution manager finds the latest version in the central repository, downloads it, builds it, and notes the version (so you don't get caught out by an update unless you ask for an upgrade). Documentation auto-builds, examples in documentation automatically get included in the test suite, and there's just some light TOML config plus conventions on file locations to keep your solution managed. There's a default style and a decent source formatter (with customisation options) to enforce it if desired. Once you're up and running, VS Code's C/C++ debugger works fine (I've not had much luck debugging with JetBrains, but this is the Community edition and CLion retail apparently is where you need to be), or try your favourite tools based on GDB or LLDB. Want to share your library code with the community? The library manager just needs your user key and will package it up to upload (so others can include it in their projects).
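
That documentation-test loop deserves a quick example - a minimal sketch assuming a library crate named scaffolding (a hypothetical name): the example inside the doc comment is compiled and run by cargo test alongside the unit tests, so documentation can't silently rot.

    /// Adds one to the input.
    ///
    /// This example is executed by the test suite, not just rendered:
    ///
    /// ```
    /// assert_eq!(scaffolding::add_one(41), 42);
    /// ```
    pub fn add_one(x: u32) -> u32 {
        x + 1
    }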

There is still work ongoing (a year ago the Language Server sounded like it was pretty iffy and even now it's flagged as preview for a reason) but there's enough working that you'll not feel like you're dealing with researchware. 2017 was the year of getting a lot of stuff basically there (the community went through and cleaned up many of the important libraries into SemVer 1.0.0 releases). 2018 is a year for sanding off more rough edges (adding a few convenience features to the language, cutting away verbosity the compiler can reason around in most cases, polishing the development experience) and it sounds like 2019 will be the next major revision. I've got a few things I'd like to see (faster compilation times would be extremely valuable - I don't want to feel like I'm back in C++ land) but so far I've yet to find a deal-breaker and the potential is evident. I'm not sure I entirely agree with all these executables carrying stdlib code with them (static linking is the norm) - I've got 110MB made up of 14 binaries just for the Rust CLI package/build manager, and I'm betting a lot of that is duplicated stdlib code that really could be called from DLLs. Maybe this is the C coder in me talking, but it feels like the compiler could be emitting smaller binaries in general, even ignoring static linking.

So far I've been getting used to the language, the tools, the ecosystem; building lots of smaller projects so I can see how things are done and what performance looks like compared to the same work done (idiomatically) in other languages. I'm expecting to move to Ryzen in April or May (based on availability, price, and whether the revised parts are worth getting or just make the current parts even more affordable), so that'll be an interesting jump in performance potential. It's time to see what developing a large project feels like in Rust; it's time to start from...



Starting resources

The Official Documentation (includes links to all the books, a couple highlighted below)
The Rust Programming Language (2nd ed intro book - draft being finalised)
The Rust FFI Omnibus (detailed notes on ensuring you can call into your Rust code)
Cookin' with Rust (various common tasks using major bits of the ecosystem's libraries)
Rust by Example (what it says on the tin)
Rustlings (small exercises for getting used to writing Rust)
The Unstable Book (for documentation of what's not yet in the stable feature set)
The Rustonomicon (for those deeper questions about the language)