Monday, 14 October 2019

Cheat Engine: Dev Basics

This year I've been playing through a lot of premium computer games that came out around the time Facebook was a platform where social-idle games were making serious money and getting serious attention from game designers - to the point that some of those positive-feedback-loop economic models were being dumped into $60 games or linked via external social games on another platform. Even in the current era, where that stuff is less in fashion, many of those changes still shape in-game economies (when not tied to micro-transactions and the quest for "AAA whales") compared to how economies were balanced in the era before. Some games have been patched to grant all the bonuses from engaging in external social games now that those social games have been taken down; others simply expect players to do more grinding, as that was always one of the play styles considered viable at launch.

Personally, I've been approaching it from a different angle: that of someone who always wanted to know the cheat codes for games, even if I ended up not using them much during a first playthrough. As a developer who has always believed that my code is a guest on someone else's hardware, the cheats available to me are rather broad. I don't feel the need to limit myself to the dev/debug commands that ship in a solo game (where I have not signed up to an agreed set of rules for play in a multiplayer environment). As I see it, the means by which games are protected from players editing memory values to play content they do not own is called copyright law (pasting into memory any part of the game you have not sold them would be an obvious violation of copyright) - knowing this makes the technical means by which you should operate clear. (And the ESA, or anyone else shilling DRM, are not your friends.)

I occasionally have lively discussions with other devs on this topic, but I'm against anti-consumer snooping or memory obfuscation having any place in solo experiences that have been sold to consumers (who should then expect to be able to tweak their play experience, as long as it doesn't involve grafting on copyrighted content that was not included in the sale). Including source code is a way of assuring players that you have not hidden any anti-consumer systems in the thing they purchased (and that, given some expertise, they can explore and modify their experience however they like); some modding tools even approach this level of access.

Which brings us to today's topic: Cheat Engine. This is quite an advanced tool with a long history of updates so I'm only going to talk about a few of the simpler things that a lot of people use it for. If you make games but have never played with CE then this may be a good primer for what people are talking about when they discuss Cheat Tables for your game.

The simplest function of Cheat Engine is to scan the memory of a running application to find any instances of a certain value (a bit pattern that could be read as that value, optionally including fuzzy scans that find anything that might be interpreted as the value, or within a delta of it for floats) and save the list of those addresses. The Next Scan function allows this value to be edited and another scan run over only the addresses already found. A player can use in-game systems to tweak a number and then find all fixed memory locations that mirror that change by repeatedly scanning for addresses doing what the in-game value does. A canny player can even deduce that certain values in the UI are not immediately saved back to a permanent location (and the save process may not read from the same location the UI is using) and so only rescan the memory at certain points (like after backing out of a buy screen into the main game UI, completing a virtual transaction).
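The scan/rescan loop is simple enough to sketch in a few lines of Python. This is a toy model, not how Cheat Engine is actually implemented: a dict stands in for process memory, and every address and value below is invented for illustration.

```python
def first_scan(memory, value):
    """Return every address currently holding the target value."""
    return [addr for addr, v in memory.items() if v == value]

def next_scan(memory, candidates, value):
    """Keep only previously found addresses that now hold the new value."""
    return [addr for addr in candidates if memory[addr] == value]

# Player has 500 gold; unrelated memory happens to contain 500 as well.
memory = {0x1000: 500, 0x2000: 500, 0x3000: 7, 0x4000: 500}

candidates = first_scan(memory, 500)   # three hits, only two of them real

# Player spends 120 gold in-game; only the real locations follow the change.
memory[0x1000] = 380
memory[0x4000] = 380

candidates = next_scan(memory, candidates, 380)
print([hex(a) for a in candidates])    # the addresses mirroring the gold value
```

Each in-game change the player triggers on demand shrinks the candidate pool, which is why an unusual value that's easy to change repeatedly converges so quickly.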

Knowing where in memory the values are being updated, the player can track those locations and even lock their values to prevent them changing. This is very useful if the game is updating a handful of locations with the same value and the player wants to know which one is the master value used for future calculations and which are just mirrors (or if they missed something in a previous scan and so don't have the core location they want in their current pool of addresses - as developers we have an advantage in how we think about memory and in knowing which processes can cause data to be moved, but plenty of players doing this also have that knowledge). Often applying a lock and then trying to change the value in-game will show which location is key and which can be ignored. At this point the player can basically save-file hack the live game and change any value they can isolate. The scans are very fast so it's quite easy to do this at any point, especially if you're looking for an unusual bit pattern (eg not 0, 1, 2, 16, etc) that's easy to repeatedly change via in-game actions on demand.
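Locking a value is conceptually just a rewrite loop. A toy sketch of how a freeze beats the game's own writes (the address, tick function, and 25-damage figure are all invented for illustration):

```python
def game_tick(memory):
    """One frame of game logic: the game applies 25 damage."""
    memory[0x1000] -= 25

def freeze(memory, addr, value):
    """The 'lock': stomp whatever the game just wrote back to the wanted value."""
    memory[addr] = value

memory = {0x1000: 100}   # invented address holding player health
for _ in range(10):
    game_tick(memory)
    freeze(memory, 0x1000, 100)   # health snaps back every frame

print(memory[0x1000])    # still 100 after ten hits
```

If this address were only a UI mirror, locking it would change nothing in-game - which is exactly the diagnostic the lock-and-test trick relies on.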

November Edit: one nuance of this is that scanning for an exact bit pattern is only one of various modes. Within a range, has not changed since last scan, has decreased (optionally by a specific value)... there are a lot of ways of doing a refinement. With a bit more time (we're talking only a few seconds to index it on a modern system with a typical binary) you can even start from "I don't know the initial value", which makes it surprisingly fast to find where player health etc are being stored in a lot of games - and then lock that memory area (having Cheat Engine repeatedly write the wanted value to the location to erase any changes by the game). The versatility of the system was something I'd not considered before giving it a poke - average users really can find ammo, health, etc extremely quickly, because they control when the value changes or stays fixed and can refine to the memory locations mirroring their expectations.

But it's not common for these offsets to be fixed, so players would have to do this whenever they want to change something, and maybe that's enough friction to be annoying. Which is where a slower but fancier trick Cheat Engine has comes in: once a player has an address, they can look for any memory in the running application that looks like an offset or pointer to that address, then do the same iteration and look for that value not changing. A player will note that after a while (or a load, or a game restart) the location of some in-game value moves, and can then check whether any of those suspected pointers now point at the new location they've found for that in-game value. Advanced use can even follow a chain of pointers. These saved pointer locations are often stable between level loads, game loads, and even some minor patch revisions (although the last one is uncommon, which is why Cheat Tables usually have the associated patch version tied to them). There is more complex stuff with code injection and advanced tweaks that can be done for fancy tables and reactive cheats (halving damage taken, boosting XP) but the bog-standard DIY stuff is usually more limited. Even so, this is clearly powerful enough to work out where your CharacterInfo struct lives, follow the pointers, and edit various values.
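Following a pointer chain is just repeated dereference-plus-offset. A minimal sketch with a completely invented memory layout (the 0x0100 static pointer and the 0x08/0x10 offsets are assumptions for illustration, not any real game's layout):

```python
def resolve(memory, base, offsets):
    """Follow base -> deref -> +offset -> deref ... to a final address."""
    addr = memory[base]                # read the static pointer
    for off in offsets[:-1]:
        addr = memory[addr + off]      # dereference each intermediate link
    return addr + offsets[-1]          # last offset is into the final struct

# Invented layout: a static pointer at 0x0100 -> PlayerInfo; PlayerInfo+0x08
# holds a CharacterInfo pointer; CharacterInfo+0x10 is the credits field.
memory = {
    0x0100: 0x9000,    # static pointer, stable across restarts
    0x9008: 0xA000,    # PlayerInfo + 0x08 -> CharacterInfo
    0xA010: 1000,      # CharacterInfo + 0x10 = credits
}

credits_addr = resolve(memory, 0x0100, [0x08, 0x10])
memory[credits_addr] = 1_000_000   # the classic million-credit edit
print(hex(credits_addr), memory[credits_addr])
```

Because only the base address and offsets are recorded, the chain keeps working when the allocator moves the structs - which is what makes a shared Cheat Table possible at all.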

If a player wants a million credits to break your in-game economy, it's probably reasonably easy for them to hack it without much expertise (almost anyone could follow a tutorial on this stuff, even if it's sometimes faster with the expertise to understand the underlying systems moving data around in memory). Once upon a time, it was standard for computer games to include cheats and some development or debug tools that would make those extra credits something that didn't require an external tool. In recent years it has become a lot less common (maybe in part due to GTA Hot Coffee and similar "scandals" related to leaving assets and tools the player was never meant to encounter in the release version; maybe also the console push for Achievements/Trophies as "verified played good" permanent records for player profiles).

I think this stuff is good for games. Especially a few years after release, when players are going to want to really poke at all the systems in a game and find the limits of how things work. Obfuscation work to frustrate players trying to do this is a waste of resources that could be spent making a better final product and, often, isn't even entirely successful - it just takes one smart hacker to figure out what's going on and work out how to get round it by writing memory at a certain point or injecting a bit of extra code at just the right location. It's the user's memory, so you can't guarantee they won't lift it from under you. Embrace the chaos, and kindly ask players not to submit bug reports if they've been editing memory while playing, because that is far, far outside of developer-supported play.

Friday, 30 August 2019

The Sharpening Curse

I should start this off by saying that there are times when sharpening filters are absolutely standard. Playing with local contrast using an unsharp mask or clarity tool is a stock part of most digital photo development (barring skin, where the clarity tool is used in the opposite direction to reduce contrast and suppress wrinkles) and something like Adobe Lightroom even applies an automatic (mild) sharpen on export for printing (in the default configuration).
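For anyone who hasn't poked at one, an unsharp mask is just `original + amount * (original - blurred)`. A minimal 1D sketch in Python (using a 3-tap box blur where real tools use a Gaussian):

```python
def box_blur(signal):
    """3-tap box blur with edge clamping (a crude stand-in for a Gaussian)."""
    padded = [signal[0]] + signal + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]

def unsharp_mask(signal, amount=1.0):
    """sharpened = original + amount * (original - blurred)"""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
sharpened = unsharp_mask(edge)
print(sharpened)
# Contrast rises at the transition, but values also overshoot past the
# original 0..1 range - that overshoot is the halo artefact around edges.
```

Run it and the samples either side of the step land below 0 and above 1: local contrast has increased, at the cost of a ring the original signal never had.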

That said, I welcome anyone to look at freeze frames from any 4K film print and tell me what you see. Watch it in motion and pay attention to any sub-pixel scale elements as they move through the scene. Watch it on a neutrally (professionally) configured screen that's accurately presenting the source input, not a TV that's doing its own mess of sharpening because it's configured for a showroom with everything dialled up to 11. Even if aggressively sharpened (and most films are not), there is a lack of aliasing, thanks in part to the ubiquitous use of an optical low-pass filter in front of the camera sensor during light capture, and because an optical sensor is capturing a temporal and spatial integral (light hitting anywhere on the 2D area of each sub-pixel sensor, at any time during the shutter being open, contributes to the pixel value). Cinematic (offline) rendering simulates these features, even when not aiming for a photo-realistic or mixed (CG with live action) final scene.

When we move to real-time rendering, we're still not that far away from the early rasterisers - constructing a scene where the final result effectively takes a single sample at the centre of each pixel, at a fixed point in time, and calculates the colour value. We're missing a low-pass filter (aka a blur or soften filter) and the anti-aliasing effect of temporal and spatial averaging (even when we employ limited tricks to try and simulate them extremely cheaply).
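A toy 1D example makes the difference concrete: sample a hard edge once per pixel centre versus averaging several spatial samples across each pixel (a crude stand-in for the integral a camera sensor performs; the edge position and sample counts are arbitrary):

```python
def scene(x):
    """Ground truth: a hard edge at x = 2.3, fully lit to the left of it."""
    return 1.0 if x < 2.3 else 0.0

def single_sample(px):
    """Early-rasteriser style: one sample at the pixel centre."""
    return scene(px + 0.5)

def supersample(px, n=8):
    """Average n spatial samples spread across the pixel's width."""
    return sum(scene(px + (i + 0.5) / n) for i in range(n)) / n

centre = [single_sample(px) for px in range(5)]
average = [supersample(px) for px in range(5)]
print(centre)    # pixel 2 snaps fully off: a jagged, aliased edge
print(average)   # pixel 2 gets 0.25 partial coverage: a softened edge
```

The single-sample row can only ever be fully on or fully off, so any sub-pixel movement of the edge makes whole pixels flip - which is the crawling/shimmering aliasing the averaged version suppresses.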

Assassin's Creed III using early TXAA
Assassin's Creed IV with TXAA

Even when using the current temporal solutions to average out and remove some aliasing (and the more expensive techniques like MSAA for added spatial samples, which doesn't work well with deferred rendering and so has fallen out of fashion), the end result is still a scene with far fewer samples of the underlying ground truth (the output you would expect if filming an actual scene with a real camera) than we would like, and a tendency for aliasing to occur. When TXAA (an early nVidia temporal solution) was introduced, it sparked a mild backlash from some who wanted a sharper final result - mainly because they are so used to the over-sharp mess that is the traditional output of real-time rendering. The result has been that various engines using temporal solutions now also offer a sharpening filter as a post-process, and AMD (& nVidia) are starting to advertise driver-level sharpening filters (as an enhancement to be applied to games for "greater fidelity").

While AMD are talking about their FidelityFX as an answer to nVidia's DLSS AI upscaling (using those Tensor Cores to upscale and smooth based on training against 64xSSAA "ground truth" images for each game - an effect I sometimes like in theory more than I love the final result), DLSS actually removes more high-frequency aliasing than it adds local contrast (it is primarily adding anti-aliasing to a low-res aliased frame while also picking up some additional details that the AI infers from the training set). Technically AMD's FidelityFX contains two differently branded techniques, one for Upscaling and another for Sharpening, but these two tasks pull in opposite directions (so combining them is something to be attempted with extreme care, possibly not without something as complex as AI training to guide it) and the marketing seems to treat them under a single umbrella. Shader upscaling can certainly be better than the cheapest resize filter you care to run, but really, in the current era, I think temporal reconstruction is showing itself to be the MVP now that issues of ghosting and other incorrect contributions are basically fixed (outside of points of very high motion, where we are very forgiving of slight issues - just look at a static screenshot in the middle of what motion blur effects look like in ~2014 games; because we only see it as a fleeting streak, we don't notice how bad it can be). Unless DLSS steps up (while AMD and Intel also start shipping GPUs with dedicated hardware acceleration for this computation type), I think we should expect advancing temporal solutions to offer the ideal mix of performance and fidelity.

Edit: As I was writing this, nVidia Research posted a discussion of DLSS research, including: "One of the core challenges of super resolution is preserving details in the image while also maintaining temporal stability from frame to frame. The sharper an image, the more likely you’ll see noise, shimmering, or temporal artifacts in motion." - that's a good statement of intent (hopefully Intel plan to launch their discrete GPUs with acceleration of "AI" - something even a modern phone SoC dedicates more PR (and silicon area?) to than current AMD or Intel efforts).

So far we are seeing a lot of optional sharpening effects (optional on PC - I think stuff like The Division actually retained the user-selectable sharpening strength on consoles but not every console release includes complexity beyond a single "brightness" slider) but I'm worrying about the day that you load up a game and start seeing sharpening halos (oh no, not the halos!) and notice additional aliasing that cannot be removed.

A very mild level of sharpening absolutely can have a place (doing so via variable strength that adapts to the scene? ideal!) and is probably integrated into several game post-processing kernels we don't even notice, but a sharpening arms race seems like the opposite of what real-time rendering needs. We are still producing final frames that contain too much aliasing and should continue to lean on the side of generating a softer final image when weighing detail vs aliasing.

Wednesday, 31 July 2019

AAA Rental

When I was young, we used to go to the local video rental store in the nearest town and rent games. Initially this was computer games, including manuals etc in a plastic sleeve that allowed you to enter the correct code to start the game (back when code wheels or typing in a word on a page of the manual confirmed you weren't a pirate). A few years later it was mainly consoles, renting both hardware and a video game for the weekend. The store purchased games and then more than made the money back renting them out - all thanks to the concept of the first sale doctrine (which lobbying from software developers means isn't actually part of the legal framework in many places when it comes to games (?) but still guides what many think of as legal interactions with copyrighted material). Years later, when economic realities made collecting a proper library of games impossible, I used to rent AAA console games via post (many of which finally got into my library via used sales on last-gen titles no longer sold new).

One of the things that the recent transition to digital has done is really slow down those rental markets. Along with eroding used sales, the game rental services have also found it hard to operate in a world where publishers looked to things like Project $10 (EA making it so a one-time key unlocked content in a new game) and now look to digital as the primary platform to sell games (where there is no physical token to rent out which enables play). But never fear, publishers are stepping into the gap with their own rental offerings.

The biggest player right now is probably Microsoft with GamePass. Some might consider this "the Netflix model", with a mix of their own brand new content and content they're buying in from 3rd party publishers. Others have pointed to Spotify. I've previously said that Spotify (+ Apple Music + Google Music) could actually pay for the music industry as it is (artists are being ripped off by bad contracts, not a lack of consumer cash pumping into the system) but I'm somewhat concerned that gaming (an industry roughly an order of magnitude bigger) may not actually be able to be sustained by subscriptions in the short to medium term.

What makes me doubly concerned on that front is that some publishers have extremely deep pockets right now and so could lose money on subscriptions for a long time before pumping up the price to consumers once many other avenues for playing games had been eroded by artificially cheap subscriptions. That is the model of "disruption" used by plenty in tech with VC backing. As of right now, it's hard to argue with the value on offer (especially as something you subscribe to for a specific game, dive into the archives, and then unsubscribe from - not really analogous to TV or music you like to have playing in the background, where you always want to be subscribed to at least one service with all the classics you enjoy).

As a player of games right now, it seems great to be able to jump through a large archive of games for about $10 per month. With that including the latest releases from the publisher offering the subscription, I don't see why I'd pay $60-100 for a AAA release on launch. With EA even offering a cheaper option if you're not interested in their latest releases and Ubisoft saying their upcoming service will also include all DLC and premium editions - it's starting to look like quite a poor option to give $60 for a brand new game and miss out on DLC when you could rent it once at launch and again when the DLC has all come out while still having more than enough cash in your pocket left to buy it on sale eventually if you want a permanent copy for your library.

This year I've been playing a lot of older games in between trying out subscription services. Sometimes I'm even doing so based on wanting to see credits roll in a game I've owned for a while but never completed before jumping into a sequel I never got round to buying (but is now available on these rental platforms). I've also noticed that once a game appears on a subscription list, I'm probably taking it off a store wishlist - I'll get round to it next time I subscribe rather than watching for an attractive sale price to buy it now. Another thing I've watched myself doing is treating everything like it's on a clock when you're subscribed and that ends up helping to keep me going (rather than getting distracted by reading or something else and not playing anything for a few weeks) - very Battle Pass energy but for games that aren't so multiplayer focussed or reliant on F2P hooks.

It's probably too early to predict how everything shakes out but I certainly think we're in for some turbulent times as everyone figures out how gaming adapts to publisher-driven rentals vs ownership. Ubisoft seem to be doing extremely well with maintaining extended support for their online games and providing several seasons (Year 4 Pass for Siege? Sounds a lot like a slow-mode Battle Pass) of updates for premium games - that likely maps well to pushing a subscription service, although I'm not sure their price point is ideal (lacking the cheap tier that EA has for people who only want older content). Will EA finally resurrect their proposed TV model of narrative? Games as a Service (as they currently do it) has maybe not been working out ideally at EA (without the huge revenue from gambling-like experiences in FIFA etc, disappointments like Anthem would probably be a lot harder for EA to work through) so it might be time for another strategy (as their subscription service finally arrives on the biggest console after Sony have agreed to let it onto their platform).

Thursday, 20 June 2019

Moving to Firefox

I was a big fan of Firefox from approximately the introduction of Live Bookmarks (before Google Reader or even my own use of Bloglines - literally all three of these RSS tools are now dead, so RIP RSS in general: push notifications for new website content seems like the obviously right way to do things and yet support is slipping away) up until some decisions I considered strange (eg removing the ability to restrict which websites ran JavaScript unless you installed a plugin to manage what I consider a core task of a browser interested in basic security). When Firefox still hadn't added back those basic security tools but decided to lock down running unsigned plugins (like the ones I'd written myself and didn't need external security audits for) in the stable release branch, I had already mainly moved to Chrome as my daily browser (which retains the ability to decide which plugin code needs to be signed and offers granular whitelist support for managing locally executed website code). Android has been the one place where I've continued to keep FF around as an option (although recently I had also basically moved to exclusively using Chrome because of how it syncs history, bookmarks, tabs, and settings between versions).

But recently my use of Chrome for daily browsing and Edge for occasional tasks needing a different rendering engine (to avoid bugs) has been defeated by MS giving up on their own rendering engine and deciding that Chrome is the standard. Everything close to mainstream is a child of KHTML now (WebKit & Blink are not identical, but they're both derived from a common ancestor and just steered in slightly different directions by Apple and Google). Sticking with the Blink renderer in 2019 is starting to feel like abetting a Microsoft-style EEE plan; and I also have an ecosystem interest in Servo (built as one of the tentpole projects for Rust). But moving to Firefox wasn't entirely painless, so it's time for a quick rundown for anyone else making the move. I'm starting from the Firefox Developer Edition (because they still force you to get your plugin code signed for the main stable branch); Waterfox's Servo-derived version sounds like it is still early, so I'm not yet considering projects that have forked from the main Firefox path.

Save often

An early crash seemed to wipe out FF's settings database, which includes most plugin configuration data, so make good use of the Export to File options that most plugins seem to offer. I'd personally prefer if all settings were stored in flat files which were easy to back up and sync between devices but it seems like FF prefers a central database which also stores most of the settings for the browser itself.

Outside of that one disaster of a crash (which ate customisation data and forced me to configure things twice, this time saving backups once I was done), everything seems stable. I was also leaving Chrome due to some rare stability issues that seemed to be triggered while several video streams were running at once, and so far none of those issues have happened in FF. A tab has crashed once or twice, but with about the same frequency as Chrome, and the isolation (so it doesn't take out any other tabs) seems to be just as solid. Discord introduced a bug (that I only saw in FF) for about two days that caused its internal engine to detect a failure state and require refreshing, which indicates the major concern with moving away from the market leader: sites will not be as well tested in FF. On the other hand, a long-standing bug in TweetDeck (making scrolling a column jump around) is simply not an issue on FF, so it's good to keep an open mind about which gripes you're just accustomed to.

Customise everything

One of the nice visual updates to Chrome some time ago was to drop the OS stock scrollbars and give us something a bit cleaner and often narrower (using a style extension to manage it). Unfortunately FF does not pick up on that extension but rather has its own extension with which you can request a skinny scrollbar (or the complete removal of one). I had to tweak some of my old CSS injections to customise the pages I often visit (eg TweetDeck) to look more like they do by default under Chrome. I'll write my own CSS injector for FF (as I did in Chrome - it's an ideal "my first plugin" learning experience) but right now I'm using Stylus.

Because I have a 4K desktop and so run my Windows UI above 100% zoom (in the mess that is the various HiDPI APIs in current Windows 10) there have been a few times I've needed to prod the page zoom settings to get everything feeling the same as before. The standout glitch was Discord, where the visible scrollbars are fake (elements drawn by the website, not the browser itself) but the code to hide the real scrollbars doesn't work perfectly outside of 100% zoom in FF. But as they're not the actual scrollbars you're looking at or interacting with, the above extension can also be used to completely hide them and clean up the visuals (making it look just like in Chrome). Basically it's a lot easier to adapt when you're used to poking CSS to your satisfaction for certain web-apps anyway. I even caught up to modern CSS and the more recently added attribute selectors to catch all the Discord elements in a single line: div[class^="scroller-"] {scrollbar-width: none;}

Basically all of the actual browser experience customisation maps directly from Chrome to Firefox, from font preferences to interface layout. You can even tweak the "density" of the main UI to adjust whitespace, something I don't think Chrome offers, which leaves you with a narrower tab bar and more vertical space on a 16:9 screen for the actual website. A really nice stock feature is Reader View, which toggles a clean article view when it detects a main text block (far from unique, but it's a clean stock implementation, unlike Dom Distiller or a plugin). I think we're at a point where the stock features are pretty comparable, even if you do have to do the occasional search to translate things over (as I did for the scrollbars) or find a plugin on one platform to reach parity.

Plugin list

Most of the plugins I had in Chrome also exist for Firefox. Here is the list I'm currently running until I've moved most of my internal stuff to the new ecosystem. I'm not saying I've audited the code, but I did at least do basic checks to avoid obvious snooper extensions (eg Stylus is designed to be the non-telemetry alternative to Stylish). There is currently no way to restrict which pages each plugin can read and modify, something I'm shocked hasn't been copied from Chrome by a browser that advertises its security (FF only just added restricting plugins from working in Private/Container tabs).

Facebook Container - Keep your logged in FB session in a special container so it's slightly harder for FB to track you elsewhere on the web.
Privacy Badger - EFF tracker blocker & url click-tracker remover for Google search etc links.
HTTPS Everywhere - Another EFF classic: make https the default for websites which haven't made the switch yet.
NoScript - IMO this should be a core feature in Firefox. In previous versions this was a stock feature. For now I'll use this to whitelist the few sites that do need client-side code execution rights.
uBlock Origin - I'm mainly using this as an easy way to suppress certain page elements as I read until I port over my plugin that does that job (I typically do not go for "Adblock" plugins but it's easy to configure & you can turn most of it off). It's a good extra line of security until I get comfortable with NoScript & my own plugins properly protecting me from JavaScript nasties.
Stylus - As mentioned above, this makes CSS injection really quick and easy until I port my own plugin over to customise how regular websites look.
Awesome RSS - Firefox took the classic RSS icon out of the address bar (so did Chrome: Google made an official plugin to add it back). Weep for RSS, an idea that made the web so much nicer to use that they tried to kill it!
Snap Links - This is the equivalent of the most esoteric plugin I love in Chrome: Linkclump. My index of RSS feeds in Feedly: sometimes I want my browser to open lots of links in several tabs ("I've got half an hour, give me 5 articles I've put a pin in as worth reading fully"; "Open all webcomics that have updated since last I checked") and this makes that as easy as dragging a box over all the nicely lined up links.

Friday, 31 May 2019

Co-ops: Sharing the Spoils

For quite some time this blog has been a dumping site for thoughts about how to operate as independent software creators while being fair to the users and developers we work with. Recently those thoughts have turned to the co-operative model, including the focus on giving back to a wider community (not exactly an uncommon consideration for an industry with so much FOSS foundation) while still aiming to operate as a commercially viable entity inside the capitalist hellscape we currently operate in (until the seas boil).

Even with the new funding models around donations (eg Patreon and Kickstarter), there has been little movement on changing the deal for users (from offering source code and unbaked assets as standard, to taking investment as ownership and creating consumer co-operatives) or for developers (eg moving to a worker co-operative to democratise the office that is now funded by thousands of small individual donations rather than an investor who takes ownership of the company and chooses the boss). Meanwhile, every week there is a story about workplace conditions, and we all kinda know the only reason no indie teams are getting the negative press is that stories only do the numbers when tied to well-known corporate brands. The EA Spouse blog post is almost 15 years old and things only change at the slowest speed those in power think they can get away with (once again, see boiling oceans); that's mirrored in how we push ourselves into early burnout to keep up with a competitive marketplace filled with so many products.

The big play with a worker co-operative is that it's democratically owned. Every worker buys into the institution and so becomes a co-owner. Big decisions usually require consensus votes; smaller things can be majority or even left to individuals. As a large company, you still have the same management tiers, but ultimately they answer to all the workers rather than shareholders or a small group of private owners. The details are somewhat fluid - maybe in one place you can increase your share through time worked (while most places do it so that, after a trial period, everyone buys in with an equal vote/share) - but fundamentally all workers can buy in and democratically control the institution while also receiving the full returns from their combined labour.

Some places are particularly precious about one vote/share per person. I think we're all aware of how soft power works and that every person having one vote does not mean everyone has equal power. As long as you're being rewarded (eg for time dedicated to the co-op, which increases institutional cohesion) and it has a low share ceiling then I feel those rules make enough sense. I'm actually somewhat more concerned by the other running decisions and initial investment, which is great if you're building a co-op by and for devs who all have $50k cash (and a lot of time to invest that we could value at market rate $10k/month) to create a viable business but becomes less great when you look at who that excludes and how the final system works (often with the aim to move to a salary system to even out income but at the cost of decoupling project profit from remuneration).

It's not helped by software as a product. Work several years on a video game with zero revenue and then you've got a source of cash: an IP bundle that can be duplicated basically for free as buyers are found for additional copies, which may or may not pay for the next development cycle. It's all a bit luck-based because the wider games industry is a hit-driven market. If you've got personal reserves to self-fund then you're buying those lottery tickets. Tying remuneration entirely to a project rather than a salary system also seems inadvisable. I have been through that decade-of-obscurity process and I'm not convinced that the co-op model automatically does anything to ensure those who built the foundations are fairly rewarded.

Buying a company

There are many ways of organising this (or fewer, depending on your local legal landscape) but in general you buy your slice of the company when you join and everyone else has to collectively buy back that slice when you leave. I imagine it would be advisable to minimise the value of the company if you're doing direct ownership, because otherwise buying a slice could become prohibitive for new hires and difficult when someone leaves, although with so much of the value being IP rights, minimising it isn't trivial.

My preference would be to put the co-operative's ownership into a trust run for the benefit of all employees. That means the buy-in can be a dollar or a similar symbolic value while a contract requires trustees to operate the co-operative for all workers, under the rule of those workers' decisions, without requiring workers to have the net worth of where they work inexorably linked to their own finances. The company as a store of value to be exploited is an unhealthy model that pushes market cap maximisation and other unsustainable growth models which the co-op model already rejects (along with the potential to raise investment that way).

Dividing up the IP

What we've started to build here is a company that splits revenue between paying sustainable salaries to all workers and a bonus based on project contributions. Ultimately that's based on the agreement of all the workers, as they are co-owners, but an initial split would be for everyone who contributes to a project to vote on how the bonus is divided. That's how the model works: there are suggested structures, recruitment is based on them, but if all workers agree on a different system then even the core bylaws can change with a unanimous vote. The co-op can adapt and change over time.
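To make the arithmetic concrete, here's a minimal sketch of one possible split: salaries come out of revenue first, a cut of the surplus funds the co-op's core operations, and the remainder is a bonus pool divided by agreed shares. The function name, the "salaries first" ordering, and the share values are all my assumptions for illustration, not anything the model mandates.

```python
def split_revenue(revenue, salary_cost, core_rate, shares):
    """Split project revenue: salaries are covered first, then a
    core-funding cut of the surplus, then the remainder is divided
    as a bonus pool according to each worker's agreed share count."""
    surplus = max(revenue - salary_cost, 0.0)
    core = surplus * core_rate          # keeps the co-op running
    pool = surplus - core               # the contribution bonus pool
    total = sum(shares.values())
    bonuses = {name: pool * s / total for name, s in shares.items()}
    return core, bonuses

# Hypothetical numbers: $100k revenue, $60k already paid as salaries,
# half the surplus retained centrally, shares agreed by project vote.
core, bonuses = split_revenue(100_000, 60_000, 0.5, {"ana": 2, "bo": 1, "cy": 1})
# core funding: 20000.0; bonuses: ana 10000.0, bo 5000.0, cy 5000.0
```

The interesting knob is `core_rate`: set it high and the co-op accumulates a buffer to pay salaries on unfinished projects; set it low and remuneration tracks project success more directly.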

The system I'm currently thinking through, and the impetus for this blog post, is to tie the IP and projects to the workers rather than the co-op as a whole. Clearly the co-op needs central funding to continue to operate and pay out salaries on unfinished projects. Without that, it all falls apart. Traditionally you'd assign IP ownership to the co-op and then it, as an entity, would divide out the bonus to workers on a project; keeping the rest as core funding and slowly increasing the accumulated IP the co-op owns.

But if we go back to basic copyright law, there is already a suggested construction for IP which is worked on by several people and is indivisible: shared copyright ownership, where all contributors own the IP and either must agree any license together or must equally compensate every owner for any individual deal done (the default behaviour changes depending on local legislation). That's the framework for assigning the IP created on a project to the workers of that project and licensing it to the co-op as an entity for commercial exploitation and future development. This goes beyond the original model of splitting the company between all workers: the IP is also split between workers, not just via indirect ownership of the company but directly, on a per-project basis. While we're working on this, we could also attempt to spread our values even if the workers on a project leave the co-op with their IP - something copyleft licenses are an example of.

A viral license

So what do we need this system to do and prevent? (Consider this working on top of our previously stated general rules for creating software, so the license will already include terms that automatically transfer the IP to the public domain after a certain number of years of commercial exploitation or after a high return on investment is achieved.)
  • The co-op must be given a reasonable ability to commercialise the project, which repays it for day-to-day costs, salary payments made during development, ongoing platform services, and ensures the future operation of the co-op. This may require it have the exclusive rights for some years to prevent competition from project members operating outside of the co-op. It should probably also have rights to develop new IP on top of the existing IP (sequels, use of the codebase in new projects etc).
  • The individuals on a project should be fairly compensated for commercialisation of their work, around an agreed bonus split. Future work to maintain ongoing development (patches etc) may need to be accounted for in this agreement or allow renegotiation of the original split.
  • To prevent IP becoming inaccessible due to disagreement between shared owners (something several commercial games currently are stuck with), the contract should err on the side of providing every individual with the ability to further commercialise the IP after any initial exclusivity, as long as the returns are split back to all individuals in a way considered fair (a new unanimous agreement) or along the lines of the bonus split (the original agreement).
  • To ensure the IP does not calcify, only able to be duplicated and sold as a fixed product, a viral license should allow new IP to be constructed on top of the existing IP by those who own a share of it. The value of the viral component is to ensure that any project member who takes the shared IP with them will also be constructing new projects that value shared ownership. This will require some sort of agreed structure for how various derivatives built on top of the IP are required to return some cut of their revenue to the original bonus split or find unanimous agreement in drafting a new split and cut amount.
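That last requirement - derivatives returning a cut to the original split - can be sketched in the same hypothetical terms as before. A derivative project pays an agreed fraction of its revenue back through the original project's bonus split before dividing the rest among its own contributors. The `cut` value and both share tables are invented for illustration; the post leaves the actual structure to unanimous agreement.

```python
def derivative_payout(revenue, cut, original_shares, derivative_shares):
    """Divide a derivative project's revenue: an agreed 'cut' flows
    back to the original IP's owners along their bonus split, and the
    remainder is divided among the derivative's own contributors."""
    def divide(pool, shares):
        total = sum(shares.values())
        return {name: pool * s / total for name, s in shares.items()}
    returned = revenue * cut        # owed to the original shared owners
    kept = revenue - returned       # retained by the derivative project
    return divide(returned, original_shares), divide(kept, derivative_shares)

# Hypothetical: a sequel earns $10k and owes a 20% cut upstream.
original, derivative = derivative_payout(
    10_000, 0.2, {"ana": 1, "bo": 1}, {"cy": 3, "dee": 1}
)
# original owners: ana 1000.0, bo 1000.0; derivative team: cy 6000.0, dee 2000.0
```

Chained derivatives would presumably apply this recursively, which is exactly where the "viral" character of the license comes from: each new layer inherits the obligation to pay backwards.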
As you can maybe sense, this is very much still a rough outline. I'd love to find projects already working along similar lines: ones that take the idea of worker ownership and split the difference between the project and the co-op office as the thing primarily owned by all workers; also a way of sharing IP with a framework for future exploitation by the shared owners, designed with the expectation that everyone working on the project would be a shared owner (the existing things I've read around this are closer to classic music contracts, where various things are not considered indivisible and are never expected to scale out to a full team all getting shared ownership of a project).

Worker co-ops are already heavily marketed as part of a wider social movement promoting more co-operatives, which would seem to be a great match for nailing down a form of shared IP ownership that also brings with it various restrictions that mean anyone exploiting it outside of the original co-op would still be bound to the principles of democratic shared ownership between every contributor and fair remuneration.

Sunday, 14 April 2019

Dragon Age(d): the Inquisition

So after finishing Dragon Age 2 in the last post, we're now up to 2014. In fact, the last DLC for Inquisition came out only three and a half years ago - not old enough for us to be diving back to recontextualise the game but also not new enough for this to be a stock review. And yet, if we're evaluating the Dragon Age series in 2019, this is the biggest entry and probably the foundations on which the teased Dragon Age 4 builds (quite clearly narratively but also probably mechanically - whatever that ultimately means for a game rebooted under new directors at a studio imploding under mismanagement & crunch and possibly pivoting to connected "live" experiences built directly on the Anthem code base).

Dragon Age initially went from an attempt to recapture the old BioWare WRPG spark (before crowdfunded revivals offered players a lot of choice there) to a more console-focused character-driven affair on a limited budget. But for the fourth campaign in the setting (and third standalone game), EA couldn't keep ignoring the siren call of Skyrim.

When this series was first in development, Oblivion was already showing where console-friendly RPGs could land commercially. The previous Dragon Age games each did well enough (into the several millions sold) but they didn't manage to compete with Oblivion's incredible long tail and certainly couldn't stand in the same pantheon as a break-out hit like Skyrim (now well beyond 30 million sales thanks to another huge tail and many ports to new platforms). Open worlds were not just for big budget action games that took a few elements of RPGs, and so BioWare chose to take a stab at the big money.

Technically speaking

Moving to a modern engine, it's immediately clear that a GTX1070 can't run close to 4K native or max-settings with MSAA (or higher than native with downsampling) and expect locked v-sync (unlike earlier games). What's worse, the mild shader aliasing that supersampling fixed in previous games becomes terrible shimmer here with more advanced material shaders, an HDR pipeline, and bokeh-simulating depth of field (enlarging any shimmering overbright pixel into a fat blob that would almost look like glimmer if it wasn't so clearly strobing at the frequency of an aliasing artefact). MSAA is still an option but it's not going to do anything about shader aliasing (also you probably don't have the GPU headroom to turn it on at high resolutions anyway - especially as on PC you can and should force a 60Hz mode everywhere) and the post-AA is typically somewhat inconsistent. A modern temporal AA solution is sorely missed here, even if it's no worse than many contemporary titles from this dark era for temporal stability.

The game's technical issues were never fully patched and I had more than a few DXGI_ERROR_DEVICE_HUNG crashes early on (which seemed to get nailed down to some resource management issues that became less prevalent with patches after release but clearly never got completely fixed). Early on I also encountered animation stuttering (especially in cutscenes, which should play back at a perfect 30Hz but clearly don't, with some scene elements updating correctly while others stalled for several frames) before deciding that a 60Hz SimRate couldn't be worse. Despite being officially unsupported, it seemed much better than the default and provided pretty consistent frame pacing.

Dark (default) indoor
Metals in shadow
Imagine the DoF glints strobing

Also holy clipped brightness values! How was the monitor configured on which this tone-mapping was agreed? A few blown highlights outdoors and, far more significantly, severely crushed shadows inside; you're often entirely reliant on the phantom light your protagonist emits onto the nearby dungeon walls. I ended up pushing the brightness up a few notches, although there is no proper gamma setting in-game so any slight improvement in the blacks also makes blown highlights more common - I could find no satisfactory setting and literally the only screenshot with the default brightness is the one immediately above, from the very first dungeon.

There are other areas where a visual step forward leads to inconsistent results. The rather mechanical facial animation of the earlier games is gone and we enter the era of modern BioWare. Not as bad as Andromeda's "automated animation while management failed to schedule any time to hand-tweak the output", but the higher fidelity certainly pushes towards uncanny in a way the previous games didn't. There may also be an element of so many returning characters, with their previous visual representation so fresh in my mind. Playing the games back to back, it's striking - initially I almost wanted to look away to enjoy the vocal performances without the distraction (before getting mostly used to it and then missing seeing face close-ups at all in the many dialogue scenes where the camera doesn't even zoom in).

DAI "default" Hawke
My attempt at a custom job
My near-default Hawke in DA2

The technical chops of the new engine are clear (at the cost of GPU requirements per pixel rendered) and despite the tone-mapping issues, the lighting and material system does a great job of bringing the scenery up to where other Frostbite Engine games can reach. But the move to more realistic skin shaders is possibly a step away from what I think BioWare have traditionally done so well - the painted portraits in Baldur's Gate right up to Dragon Age 2's very stylish designs (going as far as to lock the party visuals and heavily push the default look for Hawke, same as had been done for Shepard in Mass Effect). This series playthrough, I'd gone with a basically stock Hawke (modding in the option to tweak a few things but generally sticking to that iconic face you got from selecting the default) and then saw what BioWare put in Inquisition as the default Hawke: that's really not an aged version of the previous protagonist's facial features. Thinking more about how I couldn't make a custom character that looked like Hawke, it's not just the lacking options - you simply can't recreate the more cartoonish faces of Dragon Age 2 in Inquisition's more realistic rendering palette.

It's worth remembering that this title spanned the console generations (also releasing on PS360) so was always going to straddle the visual expectations of both and Frostbite is a lot fancier today - yet more reasons for a full trilogy remaster for the upcoming consoles. The stories here are worth another stab at; the voice performances may need to be augmented (especially if Dragon Age 2 is to be expanded to a full-length middle chapter in the saga) but are still extremely good; and there's the kernel of some extremely good visual flair here (especially if slightly reworked towards a cohesive, less realistic, style that spanned the series).

When 'all things to all players' fails

Dragon Age 2 felt like it streamlined the RPG mechanics and removed busy-work; Inquisition adds extra busy-work like the "ping" button to reveal resources/loot while removing strategic choices like manually assigning character base attributes to customise a build. Ability trees have been simplified to fit the limit of only eight hotkeyed active abilities per character (including an 'ultimate' ability), similar to how some MMOs have streamlined their abilities in recent years, removing the need for large hotbars to play some of the most interesting classes.

A cavalcade of secondary systems have been added (expanded crafting, exploration goals, a million different progress bars, etc) so a desire to cut back on old systems makes sense. Players only have so much mental bandwidth to consider each system and their potential interactions. You can see where every decision comes from, but when put together it often feels like it doesn't lead to a great final experience and certainly doesn't flow from the previous games.

The tactical view returns, the PC UI does not. But it's not the Origins tactical view that allows playing the game as if it was an Infinity Engine game and it barely feels like it fits the more action-oriented combat modelled on Dragon Age 2. As a continued progression into only adding the merest facade of PC niceties, the ability tooltips now fail to actually provide details of what anything does (hover over the toolbar to get the name of an ability and literally nothing more). On such a large project, creating a PC UI seems like it would have been a reasonable task.

When you're using a mouse, so many of the menus require you to drill down into a new layer to edit something rather than having edit buttons to switch stuff at the level of a list of items. I spent quite a while getting comfortable with both keyboard and controller support. Unfortunately you have to exit to the main menu to change between them so hotswapping is out of the question - I feel like a lot of us who played Battlefield from the early PC days got good at quickly migrating from on-foot keyboard to a stick or pad for vehicles and, despite the technical challenges to providing the correct UI, more games should expect people to dynamically move between them.

Inquisition is clearly built primarily for controllers (even with the 8 "face button" actions not being as nice a fit as most action games manage). The movement with WASD feels clunky (once you rebind the comically outdated keyboard turning default); the auto-attack from Dragon Age 2 feels severely scaled back and no longer automatically deals with facing/collision/movement as elegantly; the menus are actually slower without keyboard shortcuts that are bound on a pad to a quick press while a menu is open; and on and on. But what eventually got me to stick to keyboard was the (patched in after release) Unsheathe button - if you're exploiting unlimited fast stealth* then you need your weapons out. Normally you'd ping or jump quite often to ensure the cooldown never detects you're out of combat, otherwise you're stuck having to hunt a new mob to initiate infinite stealth off. Not so on keyboard, where you can return to a combat stance without swinging an attack (which breaks stealth and so ends your unlimited stealth). Without this, I might say gamepad is the better option on PC but, as with so many things in this game, it feels like you're always being denied the best solution.

An open world filled with stuff

I was almost 25 hours into my playthrough of Inquisition when I gave up on the slow mount speed (with no ability to pick up crafting materials or ping for points of interest while mounted) and exploited being able to get unlimited stealth* with a speed-boost dagger to make running faster than mounted travel. It's a symptom of the world being too large and the traversal options feeling too limited. The only real downsides: the mount system despawns your party, and sprinting means they regularly teleport in front of you as you run around.
* A rogue's Skirmisher upgraded Flank Attack, when it connects, puts you into a stealth mode that's not got a duration timer as long as you don't attack afterwards; Lost in the Shadows upgrade means even running through enemies doesn't reveal you. Mages can beeline for the Ring of Doubt to get stealth. Stealth means creeping to get the mats for the crafting of a 1.75x speed-boosting Masterwork weapon. Warriors on PC may want to replicate this combat speed (that is possible with the unmodded game for two classes so is bordering on not even a cheat) via mods or just switch to a party member of a different class for traversal.

This is an impressively huge world (even cut into many zones), especially coming directly from the very restrained Dragon Age 2. But sometimes impressing and being readable are at odds with each other. It's impressive to not know where the edge of a player-explorable area is - a potentially infinite world - but that lack of readability makes it hard to efficiently explore. Earlier games showed you on the map that you'd reached the edge of the traversable area; Inquisition puts cliffs (looking just like the ones you can climb) or some rare invisible walls in the way rather than letting you understand the space as a floorplan. The corridor linearity of those zones and dungeons in previous games gave the feeling of a space without the navigational hurdle of actually working out how to get between any two locations on what looks to be a huge open expanse.

When you're constructing an RPG out of a more open world design, you start to hit those pain points that the previous Dragon Age campaigns had rarely encountered. "Here's a big huge Dwarven door that I've previously found another of unlocked on this map by collecting the draw-the-star puzzles. I'm level 4ish as this is one of the areas you can unlock very early on. Nothing in the game indicates why this doorway (marked as if it is a cave and currently showing there are things for a quest I'm currently on inside) is something I should ignore and come back to later." The quest markers are accurate, the quest objectives are absolutely inside but you need to be level 16, far later in the game, to unlock a different quest that unlocks this particular doorway. You need to look that up on a wiki or forum, thankfully now filled with hints from players who've already done everything.

We can go back to some of the early impressions of the game, where completionists (used to the previous campaigns and doing all the quests in a zone) just burned out on the very first large zone unlocked and its endless minor quests with little to no flavour. One of the things a chatty party offers designers is the option to add barks suggesting you head back to base (and trigger some more plot development). Huge open worlds require a lot more careful planning around how they introduce and guide the player to everything. There are a lot of points where the previous games had offered a clear map of the dungeon with which to navigate, while the open spaces in Inquisition demand a completely different way to parse traversal - it's hard not to pine for the old ways when you're trying to work out how to jump up a cliff to get to the collectable that's probably in reach.

The rewards at the end of the new collectathons also leave a sour taste. "I can't wait for Solas to have a big speech back at base about all those shards we collected and the Pride demon we took down once the final door unlocked in the zone that's basically just there to give you doors to feed the collected items into that gets introduced as important at the very start of the game." There is no follow-up dialogue or quest; no narrative reward for finding all the shards. An achievement pings when you cross the final door. The same total lack of fanfare occurred when I helped Solas with 10 stabilising widgets and got given a location of a standard (high level) rift closure that was "special" & "worth investigating" in the mission text but not in any real dialogue or narrative conclusion; not even an achievement dinged for that companion quest. I'm left wondering if this is still a BioWare RPG with so much less signature BioWare narrative tied to progression. Worse, I wonder if there is actually less good stuff or I just feel like that because it's watered down by so much more filler? Looking at hours played, Inquisition would need at least as many character moments and narrative developments as all three of the previous campaigns I'd played through to compare, simply due to just how many hours it takes to complete all the quests here.

Developers can say "just don't engage with it" about the less narrative-driven content, but the game design doesn't flag that there will be no payoff to the narrative setup they wrote to start those quest lines, so how do you know what to ignore? And how often have BioWare killed off a character when players failed to engage with their optional quests over the last decade? It's not exactly unreasonable that players have been trained to exhaust the quests and even dialogue trees (even asking borderline transphobic questions just in case it's vital to some progression that Krem gets asked something that shows you've got no clue) to try to avoid missing some critical but optional path. Dragon Age is a series where fan parlance discusses "hardened" and "softened" characters over the arc of the entire narrative to track potential changes to character attitudes and what that means for where the story can go. In 2019, we know a new Dragon Age is coming and will almost certainly import the world state from the Dragon Age Keep (the online world state checker/editor).

The real killer I felt on this playthrough, during which I used a mod to turn the real-time waitathon mechanics off (instantly finishing quests on the "war table"), was how the large open spaces and walking round to chat were broken up by so many trips to trigger a text-dump "mission" on a glorified map you can only access by running to an area (with no fast-travel point just outside) in your base area. Timers making sure you don't play through the main missions or unlock new areas too quickly (even though there is already a currency that gates unlocking missions behind doing the less narrative content). And when you remove those blocks then the true absurdity becomes apparent: running from a companion spot to trigger a cut-scene and back to the map to start a "mission" you don't actually play that does what they suggested then immediately back to their location to trigger the continuation of the cut-scene.

Going forward

We're approaching the end of this series on a bit of a downer. To be clear, I very much enjoy Dragon Age as a series and Inquisition as a bit-too-Skyrim-y big-budget entry in it. If I didn't care about the characters (new and old) then I wouldn't be so invested in wanting more character moments. The technical issues (some of which we might generously call "era appropriate real-time rendering limitations") and stylistic choices, along with the zone readability and collectathon issues, stand to hinder some truly lovely spaces that could be filled with excellent gameplay and stories. It's the gap to greatness that makes me feel like Dragon Age should get another chance - just like Mass Effect 1 needs the combat and inventory stuff reworked or Mass Effect 2 deserved a better ending. All of this modern BioWare era feels like it's so close to something not just extremely special but timeless. Nothing is perfect but some things stand out, even ten years later.

The first two big Inquisition DLCs - a dungeon and a new zone - are very similar to the base game, and if there's something this game wasn't desperate for, it's even more content (providing more lore that BioWare will have to assume many players don't know about in the next game). The capstone Trespasser DLC, however, significantly improved on a lot of my concerns with Inquisition, so the team were already moving in a good direction. We now know the next project from that team got cancelled or rebooted with the loss of the project lead, so the future is less certain. One thing the teaser (which was for the new project) did make clear is that the narrative hooks at the end of Inquisition are definitely the jumping off point for the next game.

It seems likely that next game will arrive at some point on a new generation of consoles. So we still have some time to wait. Luckily there are four campaigns here that are very worth playing through.

Plug: why not help me justify spending over 200 hours replaying old games recently and writing up my thoughts by *jangling tip jar* becoming a patron.

Friday, 29 March 2019

Dragon Age(d): 10 Years Later

A decade after the first Mass Effect was released, I went back to the series to see how it held up. In the same vein, it's been almost 10 years since we first got to step into the Dragon Age setting (yes, Thedas) and I've just completed a full (all available quests in a single run) end-to-end replay of each major campaign in the series (Origins, Awakening, 2, Inquisition). Two months well spent.

Extremely broadly, Origins [~45 hours] and the full expansion Awakening [~20 hours] brought BioWare back towards their Infinity Engine early prime, right down to the interface on PC (which allows for an overhead camera for tactical decisions) and real-time with pause combat, all while retaining the more modern cinematic presentation and focus on character relationships (the prototypes for which started in Baldur's Gate and have become increasingly prominent in the more recent BioWare oeuvre).

The rushed full sequel, Dragon Age 2 [~35 hours], completely drops the tactical perspective (and PC-only interface) while retaining most of the mechanics and narrative focus. It doesn't feel great to click through 720p console UIs on a 4K PC and the budget constraints are everywhere, but it's a reasonably tight RPG due to the limited scope (not as compact as Awakening but also not feeling like an expansion) - the updated engine may be the same tech under the hood but the new UI does give it a clean feel, as does the forced closer camera and renderer tweaks. DA2 introduced many quality of life tweaks (a button to run to & auto-collect loot is huge) that make going back to Origins and Awakening a bit of an ask, much like going back to Mass Effect 1, but also it's hard to not applaud how much the first game got extremely right.

Finally, Inquisition [over 100 hours] went big (budget) and tried to please everyone: constructing the largest campaign by some margin (and unfortunately unlocking a lot of the MMO-y fetch quests early on, which turned off some completionist players); a tactical view returns while moving towards more of an action combat style (somewhat negating the value of the perspective, which also can no longer be used to play the rest of the game); generally more MMO-y sensibilities for side quests and map layout abound; gearing all party members came back (missing in DA2); but goodbye spending attribute points when levelling up. The new rendering engine (Frostbite) was also a radical departure and in 2019, I can't say I'm totally convinced the massively higher requirements (per pixel rendered) justify the visual upgrade (clearly a more modern approach to materials and shaders, but plagued with aliasing issues you can't brute-force around yet at 4K & the new tech had teething issues with animations, which you can also see striking Andromeda several years later).

Start at the beginning...

Jumping into Dragon Age Origins, it's not doing badly for a game from 2009 (that probably only had a mid-tier budget for EA at the time - especially as a game that started out development before the BioWare acquisition). No one was trying to make this within a crowd-funded budget, even if CRPGs aren't known for bleeding edge visuals. The one thing that you do have to live with is that the PC interface was never intended for 4K screens, certainly not for beyond-4K internal resolutions which clean up any light shader aliasing.

The classic Infinity Engine games it harkens back to have all had their re-releases in more recent years that have basically fixed up their interfaces to work at higher resolutions but Origins is new enough to not get that work and also didn't have the best modding options so it doesn't have the fan-made UI overhaul of a Bethesda game. There is a mod to change the font sizes, which also boosts some of the UI boxes to take up the full screen and basically makes it something you can live with but it's not the best experience. Like Mass Effect 1, it feels ripe for a remake (Resident Evil 2 style) that retains the character and progression while rethinking some design decisions and updating the narrative details towards modern expectations.

It initially felt strange to play a "modernised" attempt at "Baldur's Gate, but without the D&D license" now that we've had so many teams making explicitly retro RPGs (eg Pillars of Eternity) that are takes on those Infinity Engine games. There have always been other WRPG series, like (Divine) Divinity, that never totally left the classic mould, but Origins feels like it comes from a less certain time when BioWare knew they wanted to modernise a classic design but weren't quite sure what that meant. 2009 was before a consensus formed on what modern audiences wanted from their retro-compatible WRPG, while in 2019 you can't throw a stone without hitting a crowdfunded RPG directly evoking those classic games.

Despite offering the classic perspective that removes the roof and offers great tactical control in battle, I quickly locked to the closer 3rd person view to enjoy the skyboxes and character details anywhere outside the most hectic of battles. It's not the up-close shooter that Mass Effect always was, but there is an extra layer of immersion when the camera gets down towards character eye level. The basic programmable team AI - ranking abilities and setting preconditions for when they activate - actually did just enough to allow me to mainly play my character and let the party get on with it (again, reducing reliance on the overhead view that I'd originally used to play the entire game). One of the more amusing defaults (which sticks throughout the series) is keyboard turning - WASD doesn't include strafe options unless you rebind it. For a series that increasingly aimed for fluidity with a controller, it feels extremely dated to default to the old MMO standard that was widely derided even back when WoW launched in 2004.
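That "rank abilities, set preconditions" tactics system boils down to a simple rule-based priority list: evaluate slots top to bottom and fire the first ability whose condition passes. A minimal sketch of how such a system can work (all names here are illustrative, not taken from the actual game code):

```python
# A toy ranked-tactics system: each slot pairs a precondition with an ability,
# slots are evaluated in priority order, and the first passing slot fires.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Character:
    name: str
    health: float = 1.0      # fraction of max HP
    mana: float = 1.0        # fraction of max mana
    enemies_nearby: int = 0

@dataclass
class Tactic:
    condition: Callable[[Character], bool]
    ability: str

def pick_ability(actor: Character, tactics: list[Tactic]) -> Optional[str]:
    """Return the highest-ranked ability whose precondition holds."""
    for tactic in tactics:          # slot order *is* the priority order
        if tactic.condition(actor):
            return tactic.ability
    return None                     # no slot matched; fall back to defaults

# Example slots: heal when badly hurt, AoE when swarmed, otherwise attack.
tactics = [
    Tactic(lambda c: c.health < 0.25 and c.mana > 0.1, "Heal"),
    Tactic(lambda c: c.enemies_nearby >= 3, "Fireball"),
    Tactic(lambda c: True, "Attack"),
]

wounded = Character("Wynne", health=0.2, enemies_nearby=1)
print(pick_ability(wounded, tactics))  # -> Heal
```

The appeal for the player is that the priority order is the entire mental model: reorder the slots and the character's behaviour changes predictably, with no scripting required.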

But outside of the perspective, everything feels just like the classic BioWare WRPGs from a previous era. Yes, cynically this can be viewed as trying to make a new D&D setting without the license (the same way Mass Effect builds a sci-fi setting without the Star Wars license) but the world is rich enough to support it. You travel an overworld map engaging in random battles between the detailed (fake magical middle ages) rural & urban quest hubs and the more combat heavy dungeons. As you play, you collect companions and, in the modern style, decide who to party with and eventually do affinity quests with. Banter is on tap no matter who you bring along, and sometimes new dialogue options unlock based on your party composition for a quest. The scale is epic and nations may rise or fall based on your decisions while most of the actual quests retain a very human scale (even surrounded by demons and the rest of the fantasy accoutrements).

I really appreciated how much was added with a whole new cast of companions within the shorter Awakening campaign - fleshing out concepts and themes from the main campaign while still being a meaty narrative itself. After Origins concludes with the definitive defeat of the Big Bad, there could have been a hole in the expansion but it's filled well; even if BioWare then completely drop the ball and have to plagiarise their own work during DA2's DLC to set up a new main antagonist for Inquisition (it's not like BioWare don't resurrect characters you can optionally kill elsewhere in the series). The other Origins DLC chapters are far more slight, adding companion backstory (some with heavy location reuse) and an ultimately disposable capping story in 1 to 2 hour micro-campaign chunks. Even these least satisfying blocks were something I enjoyed though - at no point does the storytelling feel like it lands totally flat.

However, replaying it now, the Person of Mass Destruction trope feels so, so tired; not helped by it being a core theme for the entire series. It was always tired, but it's not getting any better with age (mine or the game's). We can do better than a marginalised and brutalised subgroup who really are as dangerous as the dominant group claims, so that it becomes "morally grey" whether they have to be exterminated (or lobotomised) the second they become "too dangerous". Considering how the real-world oppression of people painted as sub-human, or as possessing super-human strength etc, has been used to "justify" violence, we should expect more from these stories - this was a brand new setting, it could have been about anything. And Dragon Age loves to play within this trope, often being pretty inconsistent on how it handles things like Blood Mages in what seems like an attempt to let the player character choose a morality, but often it just leads to NPC dialogue and choices that paint a deeply inconsistent tone and characterisation (even when the player tries to pick a consistent stance).

I'm not expecting Dragon Age 4 to come out and completely deconstruct the trope but it would be nice to step a bit away from the Templars vs Mages focus that is never far from the core of the series. In spots, Origins and Awakening are probably the best the series gets on this front, often because of how self-contained many of the chapters feel and how you play as part of an outsider cult. I started out as a mage for this playthrough (one of the several opening origin acts that give the game its subtitle) and was pleasantly surprised whenever it got referenced back through the main campaign. Clearly that's a lot of work, making many playable origin stories and then hooking them all into the main narrative, but it's possibly less work than is required in the sequels to hook in player choice events from the previous games. Throughout those origins and the wider game, there's some very smart reuse of locations; which brings us on to...

Travelling to Kirkwall (not in Orkney)

Of all the Dragon Age campaigns, this feels like the one most in need of a critical reappraisal in 2019. Although by no means panned on release, DA2 did not get the same gushing praise that met the rest of the series and so is remembered as the weak middle chapter. There was a player backlash (to the production values? to the total focus on characters? to devs talking "diversity" in interviews?) and a corresponding oscillation of DA2 super-fans pushing back, but my recollection of 2011 was mainly of a muted reception and jokes about how the entire game took place in a single location with one (cave) dungeon because it was developed behind EA's back (in case it's unclear, part of the joke is how ridiculous that would be).

Coming directly from Origins and Awakening (we'll consider that long enough to be its own thing rather than "DLC" - much longer than the hour or so for each of the Origins DLC offerings and maybe half the length of Dragon Age 2), it's extremely apparent that DA2 was pushed out the door not much more than a year after production started (that one isn't a joke). As I've commented on Mass Effect, BioWare can sometimes do a touch too much copy & paste with distinctive (and theoretically unique in the fiction of the game world) art assets, but the constraints on making DA2 really push that into a whole new realm. As the devs have since said in interviews, asset reuse isn't the problem; it's the lack of smart reuse that quickly becomes apparent as you get deeper into DA2.

Reuse is meant to clone meshes or props, not lock a player into repeatedly walking the same 20m loop and telling them they're on an epic journey - if you don't look carefully then lots of trees do kinda look the same but near-exact reuse at a floorplan scale is immediately uncanny to most players. It's an area where procgen or tileable block construction in the tools pipeline (eg Sucker Punch using hex tiles to rapidly assemble the inFamous map) can quickly help you bake variety into your floorplans (even without going full rogue-like procgen and randomising the elements for each user - just a tool for rapidly creating many similar but distinct floorplans); the absence of it is immediately obvious and disappointing. When you're going through cave after cave between narrative payloads then a varied skybox or surroundings helps offer something beyond the mechanics of combat to the player (just ask Bungie).
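To make the tileable-block idea concrete, here's a toy sketch of tool-time floorplan variety: assemble each location from a small palette of compatible tiles, seeded per location, so every "cave" gets a distinct layout even though every individual tile is reused. This is purely illustrative of the technique, not anyone's actual pipeline; all names are made up:

```python
# Toy floorplan generator: a per-location seed deterministically picks a tile
# sequence from a shared palette, giving many distinct layouts from few assets.

import random

TILES = {
    "corridor": ["straight", "bend_left", "bend_right"],
    "room":     ["small_round", "long_hall", "split_cavern"],
}

def generate_floorplan(seed: int, length: int = 6) -> list[str]:
    """Deterministically pick a tile sequence for one location."""
    rng = random.Random(seed)   # per-location seed -> stable, repeatable map
    plan = []
    for i in range(length):
        # Simple structure rule: every third tile opens into a room.
        kind = "room" if i % 3 == 2 else "corridor"
        plan.append(rng.choice(TILES[kind]))
    return plan

# Two seeds -> two distinct-but-related caves built from the same tile palette.
print(generate_floorplan(seed=1))
print(generate_floorplan(seed=2))
```

Because the seed lives with the location data rather than being rolled at runtime, this stays a content-creation tool (many fixed, distinct floorplans shipped to every player) rather than full rogue-like procgen.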

The use of a single city with day/night and several acts (progressing various things over several years, not unlike how some of the Origins areas are reused, especially the partially destroyed variants) is great; I have always been a fan of attempting Warren Spector's idea of a One City Block RPG. But when the story calls for dozens of different caves and other areas, and you've only built two very distinctive cave layouts with a few doorways that can be closed off to change the walking lines, they're too recognisable to pass for dozens of locations. Reuse happens everywhere else too, with additional floors added to areas that just copy floorplans which couldn't possibly stack on top of each other. There are probably fewer dungeon maps in the entirety of DA2 than there are random encounter maps in Origins, which is an easy thing to ding the game for and many reviewers did. Going back, I'm less annoyed by it (expectations are everything, I knew it was coming for a replay) but it's such an unforced and obvious error (of budget or planning).

The three acts tie DA2 together better than Origins, which generally lacked the feeling of structure that even Mass Effect achieved (around a similar broadly linear narrative structure with a meaty central chunk where several locations could be handled in any order, plus side-quests for flavour which also created the feeling of non-linear choice). Once again, the production schedule unfortunately robs it of some of the feeling that this was a choice rather than just not being able to get any more done to make it feel sprawling. Things like the random side-quest drops that have no narrative hook and generic "thanks for returning this" barks when handed in feel like they could and should have been expanded to a proper dialogue or cut as busywork. I'm not convinced by the acquisition of invisible armour perks for the single item of clothing every companion wears (with a couple of new outfit unlocks for plot progression) in a game that's still very much about checking the loot that drops from combat and finding chests. Especially as this time through I knew I should just grab the armour sets for Hawke, so there was absolutely no dressing up or finding cute clothing combinations to be had.

The back-to-back nature of my playthrough meant I moved from not really liking Anders but enjoying Justice in Awakening to just straight up not caring for new Anders. It's a shame that I still don't really appreciate the arc of such a core character to this story, but there will always be a companion or two whose motives and arc aren't all you want them to be. I think there are better ways of doing the righteous terrorist/freedom fighter but I'm not sure I could do better while still weighed down with the Person of Mass Destruction trope. My joy at being propelled through the events of the story with most of the characters outweighed what reservations I had around certain plot devices or tropes, which isn't to say I never cringed at a scene or few.

Some of the writing around romances feels of an era we're trying to escape (especially setting up a binary of either completely naive or very promiscuous). Ultimately it gets immature-attempting-maturity in spots, much like when the Mass Effect writing falls down. A lack of outfit changes doesn't help when characters go into melee combat with little more armour than a Dead or Alive character. The entire apparatus of "hardened" romance chains, gifts to buy affection, etc (which continues here barely changed from Origins) comes over as being from an era before serious criticism had really pushed progressive teams to tighten up what that messaging says to players. Some choices are still up in the air: should all romanceable characters be implicitly bisexual (DA2), or risk the Mass Effect 2 issue (where the Paramour romance options demanded a straight protagonist) by defining sexualities more concretely for party members (with limited bandwidth to write full romance dialogue chains)? I can't say I preferred where Inquisition went the other way, but I would like at least some range of queer representation and for all characters who are attracted to Hawke of either gender to be more clearly bisexual during a single playthrough. There sure are a lot of bisexual men in this series who give little indication of that unless you play two complete playthroughs with different genders.

I don't actually remember liking DA2 back in 2011 nearly as much as I did on this replay. Maybe that's because playing Origins and Awakening with the closer camera meant it wasn't a shocking change, or down to better expectations for what DA2 was trying to do with a smaller scope than Origins. But it's real good. My issues with Mass Effect 2 around forgetting the impending universe-ending events and doing a smaller character piece are less of an issue here (that said, both endings are still kinda a mess in spots). Thanks to how Origins resolved, there's space for a new protagonist to have a more reactive story based around a cast of characters over time in a single city state (where the Mass Effect series didn't make that change, keeping a single protagonist for the original trilogy despite being just as happy to actually kill off the protagonist!) and DA2 really pulls that off in 2019. It's not unparalleled storytelling but it's pretty good where I found it.

After 3000 words, we're going to take a break before wrapping this up. In part two of this blog series, it's time to dive into by far the longest Dragon Age campaign to date. The Inquisition, teased throughout the framing of DA2's story, takes centre stage...

Finally, if you've been enjoying this blog for a while, why not help me justify spending over 200 hours replaying old games recently and writing up my thoughts by *jangling tip jar* becoming a patron.

Friday, 15 March 2019

Good Enough Meets Extremely Fast

I've been playing a lot of games in the last month that are getting on for a decade old. Some of that is for a longer post (series of posts? my notes, not yet having finished the final Dragon Age game, are 3500 words) but I wanted to do something shorter about how these games (that exported their artist assets expecting most users to play them at 720p) stand up rather well on modern systems. There may be a touch of riffing on this recent blog post too.

Almost exactly a year ago I was asking similar questions to that linked post, but about the asset fidelity arms race and the last decade of progress measured in pure asset comparisons (that is, taking eg a 2011 game and comparing it to today by rendering both sets of assets with roughly equivalent, modern real-time renderers). Playing through this series of games from 2008 to 2011 in quick succession was a great visualisation of how those old assets hold up in 2019 with 4K60 output.

None of the screenshots that I'm embedding here are doing anything fancy like injecting alternative shaders or swapping out the stock assets with higher poly community mods or more detailed textures. Dragon Age 2 has the "High Resolution Texture Pack" (advertised as for GPUs with a massive 1GB of VRAM) which is an optional official download on Origin, but I'm pretty sure that was released on the same day as the base game (and is official anyway). Everything was captured looking for a ~60fps experience so it's not a DeadEndThrills approach of turning everything up to 11 even if it broke the framerate and then capturing and downsampling purely for the photography. These are faithful captures of the internal framebuffer for the game as played.

If you click through to the screenshots in this post, you'll notice some unusual resolutions involved because today DSR/VSR (super-sampling at the driver level - exposing fake higher resolutions to any game and then downsampling for output to the actual screen) is an absolutely stock technique. Something like a modern GTX1070 (my card will turn 3 years old next quarter - so not even that modern) has more than enough power to turn on any existing AA technique (MSAA hadn't totally died to deferred renderers in this era; FXAA etc had started to be imported from the consoles) and then also boost beyond 4K to help control some of the shader aliasing. The shaders aren't that complex so there is plenty of performance to play with and often no one is getting fancy with HDR to really explode everything (compared to games around 2015, which seem like they're going to be a dark period of high shader complexity but not great management of artefacts & defects in edge cases; not to mention not having good enough temporal anti-aliasing yet while most everyone had migrated to deferred where MSAA isn't viable).
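The downsampling half of DSR/VSR is conceptually simple: render at a higher internal resolution, then filter blocks of the oversized framebuffer down to one output pixel each. A minimal sketch of that step on a grey-scale buffer, using a plain 2x2 box filter (real drivers can use smarter filters; this is just to show the mechanism):

```python
# Minimal DSR/VSR-style downsample: average each 2x2 block of a 2x-resolution
# grey-scale framebuffer into a single output pixel (box filter).

def downsample_2x(framebuffer: list[list[float]]) -> list[list[float]]:
    """Average 2x2 blocks of an even-dimensioned framebuffer into one pixel each."""
    h, w = len(framebuffer), len(framebuffer[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (framebuffer[y][x] + framebuffer[y][x + 1] +
                     framebuffer[y + 1][x] + framebuffer[y + 1][x + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

# A hard jagged edge at the internal resolution becomes a softened edge after
# the downsample - the same mechanism that tames shader aliasing.
internal = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
]
print(downsample_2x(internal))  # -> [[0.0, 1.0], [0.5, 1.0]]
```

The cost is the obvious one: a 2x factor per axis means rendering 4x the pixels, which is exactly why it's only now trivial to throw at decade-old shaders.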

Despite expecting most users to see these decade-old games on much lower resolution screens, the push around this era was for textures good enough for up-close inspection. When the textures are good enough up close, you've also got decently detailed textures for 4K output at medium distance. I'm not going to say all of these games are perfect, as you do clearly get some muddy visuals even in the mid-ground in some places (especially stuff like a large flat repeating floor texture). But it holds up surprisingly well and even the primitive dynamic shadows are often so primitive as to be easy enough to ignore (if you can't brute force it by poking at config files and demanding the GPU just throw a GB at huge shadow maps). The several years of continued development from where Half-Life 2 (including the Episode 2 refresh) had left us - in terms of getting a reasonably coherent result while juggling multiple different systems (this is before a unified PBR push) - is often impressive. You can nitpick the results, just as you can often point to comically low polygon density you'd not see today (outside of maybe indie games, and even those often push their polygon budget quite well), but it's only a few spots rather than the entire scene looking out of place on a modern system.

While clearly miles from photo-realism, there is enough detail to know what everything is meant to be and for things like a poster or sign to get close to being the actual poster or sign without lashings of artefacts or needing a special rendering technique to achieve it (here I'm thinking of Doom 3's in-world UI, done so well back in 2004 that it remains the exception even today). There is nowhere near the level of detritus you would see in a real world, but there is enough to make it look lived in. Those props look close enough to what they're meant to represent that we're not in the situation of years previous, where it was a muddy texture and often a mess of polygons that you had to work at to understand once you looked at them at a far higher resolution than was originally intended. There are enough assets that there is cruft on a desk rather than only the props required for the interactions and one fake bottle, avoiding the artifice totally collapsing once interactable objects started to get glowing highlights or arrows above them.

Also, the lack of PBR in this era for things like human(oid) characters means the artists seemed freer to push a more cartoon-y, stylised approach (before you defaulted to starting from a skin shader with sub-surface scattering and working from there), which certainly helps in avoiding the uncanny valley. Some of the animation systems from this era are clearly reaching towards a fluidity the tech did not make easy, and the animators were not given the budget to hand-tweak them to perfection from whatever performance capture they may have started with. I'd say it does show an "emotive gap" - the puppetry onscreen trying to convey subtle emotions via expressions but often not quite getting there - but even today this doesn't seem like a totally solved issue, and I find the difference from studio to studio is far more significant than simply the progression of technology. Even in this era we've got stand-out work from Naughty Dog showing you could do that stuff really well with the technology of the time.

I am still energised by rendering questions. The introduction of real-time ray tracing makes this such an exciting time to be thinking about the next generation of engine designs (and even just what the new console generation will bring in terms of a baseline performance we can expect many many millions of users to have reasonably affordable access to). The same goes for more invisible things like a continuing focus on code quality and reliability engineering, with several studios talking about how they're looking at using Rust to really enforce higher coding standards in their work (banning some design patterns as too risky, which the Rust borrow checker enforces at compile time).

How do I feel going back a decade and enjoying all these games that still look good enough today (thanks to the extremely fast GPUs we've got)? Well, it makes me think about what we're working on today and the hardware we'll be able to use to replay it in another decade. What slight visual deficiencies we'll be able to brute force around; just how detailed things might look on 8K TV panels with amazing contrast/brightness options (and maybe some Deep Learning algorithm tweaking the game output to enhance it without the horrible results from previous generations of "TV enhancements" to the input signal) or with VR headsets that sit us inside recreated 3D spaces and give us effectively even higher pixel counts (head movements letting us be truly surrounded by a scene on 4K VR panels).

Games have longer shelf lives than ever before and can continue to grow even long after we've stopped actively working to develop them. We should probably think about making sure all our sliders can be unlocked to go up to 12 so that players in ten years can continue to poke the settings up as they get the hardware to run it.
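One way to "let the sliders go up to 12" without redesigning today's UI: treat the slider range as what current hardware comfortably supports, but let values loaded from a config file pass through with only a sanity check rather than being clamped to the UI maximum. A hypothetical sketch (all names and numbers here are illustrative, not any real engine's settings):

```python
# Sketch of future-proof settings: the UI slider shows a "supported" range for
# today's hardware, but config-file values are only sanity-checked, not
# clamped, so players in ten years can push past the current maximum.

UI_RANGE = {"shadow_map_size": (512, 4096)}     # what today's slider exposes
HARD_LIMIT = {"shadow_map_size": 65536}         # only reject the absurd

def load_setting(name: str, value: int) -> int:
    """Accept config values beyond the UI range, up to a generous hard limit."""
    if value <= 0 or value > HARD_LIMIT[name]:
        lo, _ = UI_RANGE[name]
        return lo                                # nonsense -> safe default
    return value                                 # beyond-UI values pass through

print(load_setting("shadow_map_size", 4096))     # within today's UI -> 4096
print(load_setting("shadow_map_size", 16384))    # future hardware -> 16384
print(load_setting("shadow_map_size", -1))       # invalid -> safe default 512
```

The hard limit is there to catch corrupted or nonsense values, not to anticipate future GPUs, so it should be set generously above anything the current renderer can actually use.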