Monday, 14 October 2019

Cheat Engine: Dev Basics

This year I've been playing through a lot of premium computer games that came out around the time Facebook was a platform where social-idle games were making a lot of money and getting a lot of attention from game designers. To the point that some of those positive-feedback-loop economic models were being dumped into $60 games or linked via external social games on another platform. Even in the current era, where that stuff is less in fashion, many of those changes are still being applied to in-game economies (when not tied to micro-transactions and the quest for "AAA whales"), compared to how game economies were balanced in the era before. Some games have been patched to grant all the bonuses from engaging with external social games after those social games were taken down; others simply expect players to do more grinding, as that was always one of the play styles considered viable at launch.

Personally, I've been approaching it from a different angle: that of someone who always wanted to know the cheat codes for games, even if I ended up not using them much during a first playthrough. As a developer who has always believed that my code is a guest on someone else's hardware, the cheats available to me are rather broad. I don't feel the need to limit myself to the dev/debug commands that ship in a solo game (where I have not signed up to an agreed set of rules for play in a multiplayer environment). As I see it, the means by which games are protected from players editing memory values to play content they do not own is called copyright law (pasting into memory any parts of the game you have not sold them would be an obvious violation of copyright) - knowing this makes the technical means by which you should operate clear. (And the ESA, or anyone else shilling DRM, are not your friends.)


I occasionally have lively discussions with other devs on this topic, but I'm against anti-consumer snooping or memory obfuscation having any place in solo experiences that have been sold to consumers (who should then expect to be able to tweak their play experience, as long as it doesn't involve grafting on copyrighted content that was not included in the sale). Including source code is a way of assuring players that you have not hidden any anti-consumer systems in the thing they purchased (and, given some expertise, they can explore and modify their experience however they like); some modding tools even approach this level of access.

Which brings us to today's topic: Cheat Engine. This is quite an advanced tool with a long history of updates so I'm only going to talk about a few of the simpler things that a lot of people use it for. If you make games but have never played with CE then this may be a good primer for what people are talking about when they discuss Cheat Tables for your game.


The simplest function of Cheat Engine is to scan the memory of a running application to find any instances of a certain value (a bit pattern that could be read as that value, optionally including fuzzy scans that find anything that might be interpreted as the value, or within a delta of it for floats) and save the list of those addresses. The Next Scan function allows this value to be edited and another scan run over only the addresses already found. A player can use in-game systems to tweak a number and then find all fixed memory locations mirroring that change by repeatedly scanning for addresses doing what the in-game value does. A canny player can even deduce that certain values in the UI are not immediately saved back to a permanent location (and the save process may not read the same location the UI is using), and so only rescan the memory at certain points (like after backing out of a buy screen into the main game UI, completing a virtual transaction).
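As a sketch of that scan-and-refine loop, here's a toy model (the addresses and values are invented for illustration; the real tool scans actual process memory, not a dict):

```python
# Toy model of Cheat Engine's First Scan / Next Scan loop.
# "memory" stands in for an address -> value snapshot of a process.

def first_scan(memory, value):
    """Return all addresses currently holding `value`."""
    return [addr for addr, v in memory.items() if v == value]

def next_scan(memory, candidates, value):
    """Keep only previously-found addresses that now hold `value`."""
    return [addr for addr in candidates if memory.get(addr) == value]

# Player has 250 gold; some unrelated locations coincidentally match.
memory = {0x1000: 250, 0x1004: 250, 0x2000: 7, 0x3000: 250}
hits = first_scan(memory, 250)          # three candidates

# Player spends 50 gold in-game; only the real location follows the change.
memory[0x1000] = 200
hits = next_scan(memory, hits, 200)     # narrowed to [0x1000]
```

Each in-game change the player triggers discards candidates that don't follow along, which is why the refinement converges so quickly.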

Knowing where in memory the values are being updated, the player can track those locations and even lock their values to prevent them changing. This is very useful if the game is updating a handful of locations with the same value and the player wants to know which is used as the master value for future calculations and which are just mirrors (or if they missed something in a previous scan and so don't have the core location they want in their current pool of memory locations - as developers we have an advantage here in how we think about memory and in knowing what processes can cause data to be moved, but plenty of players doing this stuff have that knowledge too). Often applying a lock and then trying to change the value in-game will show which location is key and which can be ignored. At this point the player can basically save-file hack the live game and change any value they can isolate. The scans are very fast, so it's quite easy to do this at any point, especially if you're looking for an unusual bit pattern (eg not 0, 1, 2, 16, etc) that's easy to repeatedly change via in-game actions on demand.

November Edit: one nuance of this is that scanning for the bit pattern is only one of the various modes. Within a range, has not changed since last scan, has decreased (optionally by a specific value)... there are a lot of ways of doing a refinement. With a bit more time (we're talking only a few seconds to index it on a modern system with a typical binary) you can even start from "I don't know the initial value", which makes it surprisingly fast to find where player health etc are stored in memory in a lot of games - and then lock that memory area (having Cheat Engine repeatedly write the value back to the location to erase any changes made by the game). The versatility of the system was something I'd not considered before giving it a poke - average users really can find ammo, health, etc extremely quickly, since they control when the value changes or stays fixed and then refine to the memory locations mirroring their expectations.
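The "unknown initial value" workflow can be sketched the same way: snapshot everything, then refine by how the value should have moved (again a toy model; the addresses and damage numbers are invented for illustration):

```python
# Sketch of the "unknown initial value" mode: snapshot all memory,
# then keep addresses whose value decreased (e.g. after taking damage).

def snapshot(memory):
    """Record the current value at every address."""
    return dict(memory)

def scan_decreased(memory, previous, delta=None):
    """Addresses whose value dropped since `previous` (optionally by exactly `delta`)."""
    hits = []
    for addr, old in previous.items():
        new = memory.get(addr)
        if new is None or new >= old:
            continue                      # unchanged or increased: discard
        if delta is None or old - new == delta:
            hits.append(addr)
    return hits

memory = {0x1000: 100, 0x2000: 55, 0x3000: 100}   # 0x1000 is "health"
before = snapshot(memory)
memory[0x1000] -= 25                               # player takes 25 damage
memory[0x2000] += 1                                # unrelated counter ticks up
hits = scan_decreased(memory, before, delta=25)    # hits == [0x1000]
```

The player controls exactly when the target value decreases (walk into an enemy, take fall damage), so each refinement is cheap to trigger on demand.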

But it's not common for these offsets to be fixed, so players would have to do this whenever they want to change something, and maybe that's enough friction to consider it annoying. Which is where a slower but fancier trick Cheat Engine has comes in: once a player has an address, they can look for any memory in the running application that looks like an offset or pointer to that address. Then they can do the same iteration and look for that value not changing. A player will note that after a while (or a load, or a game restart) the location of some in-game value moves, and can then check whether any of those suspected pointers now point at the new location they've found for that in-game value. Advanced use can even follow a chain of pointers. These saved pointer locations are often stable between level loads, game loads, and even some minor patch revisions (although the last one is uncommon, which is why Cheat Tables usually have the associated patch version tied to them). There is more complex stuff with code injection and advanced tweaks that can be done for fancy tables and reactive cheats (halving damage taken, boosting XP), but the bog-standard DIY stuff is usually more limited. Even so, this is clearly powerful enough to have worked out where your CharacterInfo struct is and to know how to follow the pointers and edit various values.
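The pointer-chain idea can be sketched as follows (a toy model: the fake memory map and offsets are invented, and a real Cheat Table resolves these against actual process memory):

```python
# Minimal sketch of resolving a pointer chain ("base + offsets"), the way
# a Cheat Table records a stable path to a value that moves between loads.

def resolve_chain(memory, base, offsets):
    """Follow base -> [+off1] -> [+off2] ... returning the final address."""
    addr = base
    for off in offsets[:-1]:
        addr = memory[addr + off]   # dereference: read a pointer from memory
    return addr + offsets[-1]       # last offset points at the value itself

# Fake process memory: a static base pointer leads to a heap-allocated
# player object whose health field sits 0x10 bytes in.
memory = {
    0x400000: 0x900000,             # static ptr -> player object
    0x900000 + 0x10: 1337,          # health field
}
health_addr = resolve_chain(memory, 0x400000, [0x0, 0x10])
print(memory[health_addr])          # 1337
```

The static base survives restarts even though the heap allocation moves, which is exactly why chains anchored there stay valid until a patch shifts the binary's layout.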

If a player wants a million credits to break your in-game economy, it's probably reasonably easy for them to hack it without much expertise (almost anyone could follow a tutorial on this stuff, even if it's sometimes faster with the expertise to understand the underlying systems moving data around in memory). Once upon a time, it was standard for computer games to include cheats and some development or debug tools that would make those extra credits something that didn't require an external tool. In recent years it has become a lot less common (maybe in part due to GTA Hot Coffee and similar "scandals" related to leaving assets and tools the player was never meant to encounter in the release version; maybe the console push for Achievements/Trophies as "verified played good" permanent records for player profiles).

I think this stuff is good for games. Especially a few years after release, when players want to really poke at all the systems in a game and find out the limits of how things work. Obfuscation work to frustrate players trying to do this is a waste of resources that could be spent making a better final product and, often, isn't even entirely successful: it just takes one smart hacker to figure out what's going on and work out how to get round it by writing memory at a certain point or injecting a bit of extra code at just the right location. It's the user's memory, so it's not like you can guarantee they won't lift it from under you. Embrace the chaos, and kindly ask players not to submit bug reports if they've been editing their memory while playing, because this is far, far outside of developer-supported play.

Friday, 30 August 2019

The Sharpening Curse

I should start this off by saying that there are times when sharpening filters are absolutely standard. Playing with local contrast using an unsharp mask or clarity tool is a stock part of most digital photo development (barring skin, where the clarity tool is used in the opposite direction to reduce contrast and provide wrinkle suppression) and something like Adobe Lightroom even does an automatic (mild) sharpen on export for printing (in the default configuration).
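For the curious, the unsharp mask itself is simple: subtract a blurred copy from the original and add the difference back, scaled. A minimal 1-D sketch (using a box blur where real tools use a Gaussian; the signal values are invented):

```python
# Unsharp masking: sharpened = original + amount * (original - blurred).

def box_blur(signal, radius=1):
    """Crude low-pass filter: average each sample with its neighbours."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 10, 10, 10]            # a soft step edge
result = unsharp_mask(edge, amount=1.0)
# The values overshoot on the bright side of the edge and undershoot on
# the dark side - exactly the "halo" you see when sharpening is overdone.
```

Turning `amount` up is all it takes to go from pleasant local contrast to visible ringing, which is why the strength slider matters so much.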

That said, I welcome anyone to look at freeze frames from any 4K film print and tell me what you see. Watch it in motion and pay attention to any sub-pixel scale elements as they move through the scene. Watch it on a neutrally (professionally) configured screen that's accurately presenting the source input, not a TV that's doing its own mess of sharpening because it's configured for a showroom with everything dialled up to 11. Even if aggressively sharpened (and most films are not), there is a lack of aliasing, thanks in part to the ubiquitous use of an optical low-pass filter in front of the camera sensor during light capture, and because an optical sensor is capturing a temporal and spatial integral (light hitting anywhere on the 2D area of each sub-pixel sensor, at any time while the shutter is open, contributes to the pixel value). Cinematic (offline) rendering simulates these features, even when not aiming for a photo-realistic or mixed (CG with live action) final scene.

When we move to real-time rendering, we're still not that far away from the early rasterisers - constructing a scene where the final result effectively takes a single sample at the centre of each pixel, at a fixed point in time, and calculates the colour value. We're missing a low-pass filter (aka a blur or soften filter) and the anti-aliasing effect of temporal and spatial averaging (even when we employ limited tricks to try and simulate them extremely cheaply).
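The difference between a centre sample and a proper spatial integral is easy to demonstrate in 1-D (a toy model: the stripe function and sample counts are invented for illustration):

```python
# Why one sample at the pixel centre aliases: sampling fine stripes at
# pixel rate vs averaging sub-samples across each pixel's footprint
# (a crude spatial integral, like a camera sensor performs optically).
import math

def scene(x):
    """Ground truth: stripes finer than the pixel grid can represent."""
    return 1.0 if math.sin(x * 37.0) > 0 else 0.0

def render(width, samples_per_pixel):
    pixels = []
    for px in range(width):
        total = sum(scene(px + (s + 0.5) / samples_per_pixel)
                    for s in range(samples_per_pixel))
        pixels.append(total / samples_per_pixel)
    return pixels

hard = render(16, 1)    # centre samples: hard 0/1 values, aliased "stripes"
soft = render(16, 64)   # averaged: values settle near the true ~0.5 coverage
```

The single-sample render produces arbitrary on/off pixels that shimmer as the pattern moves; the averaged render converges on the grey a real camera would record.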

Assassin's Creed III using early TXAA
Assassin's Creed IV with TXAA

Even when using the current temporal solutions to average out and remove some aliasing (and the more expensive techniques like MSAA for added spatial samples, which doesn't work well with deferred rendering so has fallen out of fashion), the end result is still a scene with far fewer samples into the underlying ground truth (or the output you would expect when filming an actual scene with a real camera) than we would like, and a tendency for aliasing to occur. When TXAA (an early nVidia temporal solution) was introduced, it sparked a mild backlash from some who wanted a sharper final result, but mainly because they were so used to the over-sharp mess that is the traditional output of real-time rendering. The result has been that various engines using temporal solutions now also offer a sharpening filter as a post-process, and AMD (& nVidia) are starting to advertise driver-level sharpening filters (as an enhancement to be applied to games for "greater fidelity").
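The core of those temporal solutions can be sketched as an exponential moving average over jittered frames (a toy model; real implementations also reproject the history buffer with motion vectors and clamp it against neighbours, which this skips):

```python
# The heart of a temporal AA accumulator: blend each new (jittered,
# noisy) frame into a running per-pixel history.

def accumulate(frames, alpha=0.1):
    """history = lerp(history, current, alpha) each frame, per pixel."""
    history = list(frames[0])
    for frame in frames[1:]:
        history = [h + alpha * (c - h) for h, c in zip(history, frame)]
    return history

# One pixel flickering 0/1 between jittered frames converges toward its
# true coverage (~0.5) instead of shimmering every frame.
frames = [[1.0], [0.0]] * 50
result = accumulate(frames)     # settles near the true ~0.5 coverage
```

A small `alpha` means more frames of history and a softer, more stable result - which is exactly the softness the post-process sharpening filters are then bolted on to fight.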

While AMD are talking about their FidelityFX as an answer to nVidia's DLSS AI upscaling (using those Tensor Cores to upscale and smooth based on training against 64xSSAA "ground truth" images for each game - an effect I sometimes like in theory more than I love the final result), DLSS actually removes more high-frequency aliasing than it adds local contrast (it is primarily adding anti-aliasing to a low-res aliased frame while also picking up some additional details that the AI infers from the training set). Technically AMD's FidelityFX contains two differently branded techniques, one for Upscaling and another for Sharpening, but these two tasks operate in opposite directions (so combining them is something to be attempted with extreme care, possibly requiring something as complex as AI training to guide it), and the marketing seems to treat them under a single umbrella. Shader upscaling can certainly be better than just the cheapest resize filter you care to run, but really, in the current era, I think temporal reconstruction is showing itself to be the MVP now that issues of ghosting and other incorrect contributions are basically fixed (outside of points of very high motion, where we are very forgiving of slight issues - just look at a static screenshot in the middle of what motion blur effects look like in ~2014 games; because we only see it as a fleeting streak, we don't notice how bad it can be). Unless DLSS steps up (while AMD and Intel also start shipping GPUs with dedicated hardware acceleration for this computation type), I think we should expect advancing temporal solutions to offer the ideal mix of performance and fidelity.

Edit: As I was writing this, nVidia Research posted this discussion of DLSS research, including: "One of the core challenges of super resolution is preserving details in the image while also maintaining temporal stability from frame to frame. The sharper an image, the more likely you’ll see noise, shimmering, or temporal artifacts in motion." - that's a good statement of intent (hopefully Intel plan to launch their discrete GPUs with acceleration of "AI" - something even a modern phone SoC devotes more dedicated PR (and silicon area?) to than current AMD or Intel efforts).

So far we are seeing a lot of optional sharpening effects (optional on PC - I think stuff like The Division actually retained the user-selectable sharpening strength on consoles but not every console release includes complexity beyond a single "brightness" slider) but I'm worrying about the day that you load up a game and start seeing sharpening halos (oh no, not the halos!) and notice additional aliasing that cannot be removed.

A very mild level of sharpening absolutely can have a place (doing so via variable strength that adapts to the scene? ideal!) and is probably integrated into several game post-processing kernels we don't even notice, but a sharpening arms race seems like the opposite of what real-time rendering needs. We are still producing final frames that contain too much aliasing and should continue to lean on the side of generating a softer final image when weighing detail vs aliasing.

Wednesday, 31 July 2019

AAA Rental

When I was young, we used to go to the local video rental store in the nearest town and rent games. Initially this was computer games, including manuals etc in a plastic sleeve, which allowed you to enter the correct code to start the game (back when code wheels or typing in a word from a page of the manual confirmed you weren't a pirate). A few years later it was mainly consoles, renting both hardware and a video game for the weekend. The store purchased games and then more than made the money back renting them out - all thanks to the concept of the first sale doctrine (which, thanks to lobbying from software developers, isn't actually part of the legal framework in many places when it comes to games (?), but still guides what many think of as legal interactions with copyrighted material). Years later, when economic realities made collecting a proper library of games impossible for some years, I used to rent AAA console games via post (many of which finally got into my library via used sales on last-gen titles no longer sold new).

One of the things that the recent transition to digital has done is really slow down those rental markets. Along with eroding used sales, the game rental services have also found it hard to operate in a world where publishers looked to things like Project $10 (EA making it so a one-time key unlocked content in a new game) and now look to digital as the primary platform to sell games (where there is no physical token to rent out which enables play). But never fear, publishers are stepping into the gap with their own rental offerings.

The biggest player right now is probably Microsoft with Game Pass. Some might consider this "the Netflix model", with a mix of their own brand-new content and content they're buying in from 3rd party publishers. Others have pointed to Spotify. I've previously said that Spotify (+ Apple Music + Google Music) could actually pay for the music industry as it is (artists are being ripped off by bad contracts, not by a lack of consumer cash pumping into the system), but I'm somewhat concerned that gaming (an industry roughly an order of magnitude bigger) may not actually be sustainable on subscriptions in the short to medium term.

What makes me doubly concerned on that front is that some publishers have extremely deep pockets right now and so could lose money on subscriptions for a long time before pumping up the price to consumers, once many other avenues for playing games had been eroded by artificially cheap subscriptions. That is the model of "disruption" used by plenty in tech with VC backing. As of right now, it's hard to argue with the value on offer (especially as something you subscribe to for a specific game, dive into the archives, and then unsubscribe from - not really analogous to TV or music you like to have playing in the background, where you always want to be subscribed to at least one service with all the classics you enjoy).

As a player of games right now, it seems great to be able to jump through a large archive of games for about $10 per month. With that including the latest releases from the publisher offering the subscription, I don't see why I'd pay $60-100 for a AAA release on launch. With EA even offering a cheaper option if you're not interested in their latest releases and Ubisoft saying their upcoming service will also include all DLC and premium editions - it's starting to look like quite a poor option to give $60 for a brand new game and miss out on DLC when you could rent it once at launch and again when the DLC has all come out while still having more than enough cash in your pocket left to buy it on sale eventually if you want a permanent copy for your library.

This year I've been playing a lot of older games in between trying out subscription services. Sometimes I'm even doing so based on wanting to see credits roll in a game I've owned for a while but never completed before jumping into a sequel I never got round to buying (but is now available on these rental platforms). I've also noticed that once a game appears on a subscription list, I'm probably taking it off a store wishlist - I'll get round to it next time I subscribe rather than watching for an attractive sale price to buy it now. Another thing I've watched myself doing is treating everything like it's on a clock when you're subscribed and that ends up helping to keep me going (rather than getting distracted by reading or something else and not playing anything for a few weeks) - very Battle Pass energy but for games that aren't so multiplayer focussed or reliant on F2P hooks.

It's probably too early to predict how everything shakes out but I certainly think we're in for some turbulent times as everyone figures out how gaming adapts to publisher-driven rentals vs ownership. Ubisoft seem to be doing extremely well with maintaining extended support for their online games and providing several seasons (Year 4 Pass for Siege? Sounds a lot like a slow-mode Battle Pass) of updates for premium games - that likely maps well to pushing a subscription service, although I'm not sure their price point is ideal (lacking the cheap tier that EA has for people who only want older content). Will EA finally resurrect their proposed TV model of narrative? Games as a Service (as they currently do it) has maybe not been working out ideally at EA (without the huge revenue from gambling-like experiences in FIFA etc, disappointments like Anthem would probably be a lot harder for EA to work through) so it might be time for another strategy (as their subscription service finally arrives on the biggest console after Sony have agreed to let it onto their platform).

Thursday, 20 June 2019

Moving to Firefox

I was a big fan of Firefox from approximately the introduction of Live Bookmarks (before Google Reader or even my own use of Bloglines - literally all three of these RSS tools are now dead, so RIP RSS in general: push notifications for new website content seems like the obviously right way to do things and yet support is slipping away) up until some decisions I considered strange (eg removing the ability to restrict which websites ran JavaScript unless you installed a plugin to manage what I consider a core task of a browser interested in basic security). When Firefox still hadn't added back those basic security tools but decided to lock down running unsigned plugins (like the ones I'd written myself and didn't need external security audits for) on the stable release branch, I had already mainly moved to Chrome as my daily browser (which retains the ability to decide which plugin code needs to be signed and offers granular whitelist support for managing locally executed website code). Android has been the one place where I've continued to keep FF around as an option (although recently I had also basically moved to exclusively using Chrome there because of how it syncs history, bookmarks, tabs, and settings between versions).

But recently my use of Chrome for daily browsing and Edge for occasional tasks needing a different rendering engine (to avoid bugs) has been defeated by MS giving up on their own rendering engine and deciding that Chrome is the standard. Everything close to mainstream is a child of KHTML now (WebKit and Blink are not identical, but they're both derived from a common ancestor and just steered in slightly different directions by Apple and Google). Sticking with the Blink renderer in 2019 is starting to feel like being complicit in a Microsoft-style EEE plan; and I also have an ecosystem interest in Servo (built as one of the tentpole projects for Rust). But moving to Firefox wasn't entirely painless, so it's time for a quick rundown for anyone else making the move - I'm starting from the Firefox Developer Edition (because they still force you to get your plugin code signed for the main stable branch), as Waterfox's Servo-derived version sounds like it is still early, so I'm not yet thinking about projects that have forked from the main Firefox path.

Save often

An early crash seemed to wipe out FF's settings database, which includes most plugin configuration data, so make good use of the Export to File options that most plugins seem to offer. I'd personally prefer if all settings were stored in flat files which were easy to back up and sync between devices but it seems like FF prefers a central database which also stores most of the settings for the browser itself.

Outside of that one disaster of a crash (which ate customisation data and forced me to configure things twice, this time saving backups once I was done), everything seems stable. I was also leaving Chrome due to some rare stability issues that seemed to be triggered while several video streams were running at once, and so far none of those issues have happened in FF. A tab has crashed once or twice, but with about the same frequency as Chrome, and the isolation (so it doesn't take out any other tabs) seems to be just as solid. Discord introduced a bug (that I only saw in FF) for about two days that caused its internal engine to detect a failure state and require refreshing, which indicates the major concern with moving away from the market leader: sites will not be as well tested in FF. On the other hand, a long-standing bug in TweetDeck (making scrolling a column jump around) is simply not an issue on FF, so it's good to keep an open mind about which gripes you're just accustomed to.

Customise everything

One of the nice visual updates to Chrome some time ago was to drop the OS stock scrollbars and give us something a bit cleaner and often narrower (using a style extension to manage it). Unfortunately FF does not pick up on that extension but rather has its own, with which you can request a skinny scrollbar (or its complete removal). I had to tweak some of my old CSS injections that customise the pages I often visit (eg TweetDeck) to look more like they do by default under Chrome. I'll write my own CSS injector for FF (as I did in Chrome - it's an ideal "my first plugin" learning experience) but right now I'm using Stylus.

Because I have a 4K desktop and so run my Windows UI above 100% zoom (in the mess that is the various HiDPI APIs in current Windows 10), there have been a few times I've needed to prod the page zoom settings to get everything feeling the same as before. The standout glitch was Discord, where the visible scrollbars are fake (elements drawn by the website, not the browser itself) but the code to hide the real scrollbars doesn't work perfectly outside of 100% zoom in FF. But as they're not the actual scrollbars you're looking at or interacting with, the above extension can also be used to completely hide them and clean up the visuals (making it look just like in Chrome). Basically it's a lot easier to adapt when you're used to poking CSS to your satisfaction for certain web-apps anyway. I even caught up to modern CSS and the more recently added wildcards to catch all the Discord elements in a single line: div[class^="scroller-"] {scrollbar-width: none;}

Basically all of the actual browser experience customisation maps directly from Chrome to Firefox, from font preferences to interface layout. You can even tweak the "density" of the main UI to adjust whitespace, something I don't think Chrome offers, which leaves you with a narrower tab bar and more vertical space on a 16:9 screen for the actual website. A really nice stock feature is the Reader View, which toggles a clean article view when it detects a main text block (far from unique, but it's a clean stock implementation, unlike Dom Distiller or a plugin). I think we're at a point where the stock features are pretty comparable, even if you do have to do the occasional search to translate things over (as I did for the scrollbars) or find a plugin on one platform to reach parity.

Plugin list

Most of the plugins I had in Chrome also exist for Firefox. Here is the list I'm currently running until I've moved most of my internal stuff to the new ecosystem. I'm not saying I've audited the code, but I did at least do basic checks to avoid obvious snooper extensions (eg Stylus is designed to be the non-telemetry alternative to Stylish). There is currently no way to restrict which pages each plugin can read and modify, something I'm shocked hasn't been copied from Chrome by a browser that advertises its security (FF only just added restricting plugins from working in Private/Container tabs).

Facebook Container - Keep your logged in FB session in a special container so it's slightly harder for FB to track you elsewhere on the web.
Privacy Badger - EFF tracker blocker & url click-tracker remover for Google search etc links.
HTTPS Everywhere - Another EFF classic: make https the default for websites which haven't made the switch yet.
NoScript - IMO this should be a core feature in Firefox. In previous versions this was a stock feature. For now I'll use this to whitelist the few sites that do need client-side code execution rights.
uBlock Origin - I'm mainly using this as an easy way to suppress certain page elements as I read until I port over my plugin that does that job (I typically do not go for "Adblock" plugins but it's easy to configure & you can turn most of it off). It's a good extra line of security until I get comfortable with NoScript & my own plugins properly protecting me from JavaScript nasties.
Stylus - As mentioned above, this makes CSS injection really quick and easy until I port my own plugin over to customise how regular websites look.
Awesome RSS - Firefox took the classic RSS icon out of the address bar (so did Chrome: Google made an official plugin to add it back). Weep for RSS, an idea that made the web so much nicer to use that they tried to kill it!
Snap Links - This is the equivalent of the most esoteric plugin I love in Chrome: Linkclump. My index of RSS feeds in Feedly: sometimes I want my browser to open lots of links in several tabs ("I've got half an hour, give me 5 articles I've put a pin in as worth reading fully"; "Open all webcomics that have updated since last I checked") and this makes that as easy as dragging a box over all the nicely lined up links.

Friday, 31 May 2019

Co-ops: Sharing the Spoils

For quite some time this blog has been a dumping site for thoughts about how to operate as independent software creators while being fair to the users and developers we work with. Recently those thoughts have turned to the co-operative model, including the focus on giving back to a wider community (not exactly an uncommon consideration for an industry with so much FOSS foundation) while still aiming to operate as a commercially viable entity inside the capitalist hellscape we currently operate in (until the seas boil).

Even with the new funding models around donations (eg Patreon and Kickstarter), there has been little movement on changing the deal for users (from offering source code and unbaked assets as standard, to taking investment as ownership - creating consumer co-operatives) or for developers (eg moving to a worker co-operative to democratise the office that is now funded by thousands of small individual donations rather than an investor who takes ownership of the company and chooses the boss). Meanwhile, every week there is a story about workplace conditions, and we all kinda know the only reason indie teams aren't getting the negative press is because stories do the numbers when tied to well-known corporate brands. The EA Spouse blog post is almost 15 years old and things only change at the slowest speed those in power think they can get away with (once again, see boiling oceans); and that's mirrored in how we push ourselves into early burnout (to keep up with a competitive marketplace filled with so many products).

The big play with a worker co-operative is that it's democratically owned. Every worker buys into the institution and so becomes a co-owner. Big decisions usually require consensus votes; smaller things can be majority or even left to individuals. As a large company, you still have the same management tiers, but ultimately they answer to all the workers rather than to shareholders or a small group of private owners. The details are somewhat fluid, so maybe in one place you can increase your share through time worked (while most places do it so that, after a trial period, everyone buys in with an equal vote/share), but fundamentally all workers can buy in and democratically control the institution while also receiving the full returns from their combined labour.

Some places are particularly precious about one vote/share per person. I think we're all aware of how soft power works and that every person having one vote does not mean everyone has equal power. As long as you're being rewarded (eg for time dedicated to the co-op, which increases institutional cohesion) and there is a low share ceiling, then I feel those rules make enough sense. I'm actually somewhat more concerned by the other operating decisions and the initial investment, which is great if you're building a co-op by and for devs who all have $50k cash (and a lot of time to invest, which we could value at a market rate of $10k/month) to create a viable business, but becomes less great when you look at who that excludes and how the final system works (often with the aim of moving to a salary system to even out income, but at the cost of decoupling project profit from remuneration).

It's not helped by software as a product. Work several years on a video game with zero revenue and then you've got a source of cash - an IP bundle that can be duplicated for basically free as buyers are found for additional copies - that may or may not pay for the next development cycle. It's all a bit luck-based because the wider games industry is a hit-driven market. If you've got personal reserves to self-fund then you're buying those lottery tickets. Tying remuneration entirely to a project rather than a salary system also seems inadvisable. I have been through that decade-of-obscurity process and I'm not convinced that the co-op model automatically does anything to ensure those who built the foundations are fairly rewarded.

Buying a company

There are many ways of organising this (or fewer, depending on your local legal landscape) but in general you buy your slice of the company when you join and everyone else has to collectively buy back that slice when you leave. I imagine it would be advisable to minimise the value of the company if you're doing direct ownership, because otherwise buying a slice could become prohibitive for new hires and difficult when someone leaves, although with so much of the value being IP rights, minimising it isn't trivial.

My preference would be to put the co-operative's ownership into a trust run for the benefit of all employees. That means the buy-in can be a dollar or similar symbolic value while a contract requires trustees to operate the co-operative for all workers, under rule by the decisions of those workers, without requiring that workers have the net worth of where they work inexorably linked to their own finances. The company as a focus of value to be exploited is an unhealthy model that pushes market cap maximisation and other unsustainable growth models which the co-op model already rejects (along with the potential to raise investment that way).

Dividing up the IP

What we've started to build up here is a company that splits revenue between paying sustainable salaries to all workers and a bonus based on project contributions. Ultimately that's based on the agreement of all the workers, as they are co-owners, but an initial split would be for everyone who contributes to a project to vote on how the bonus is divided. That's how the model works: there are suggested structures, and you recruit based on them, but if all workers agree on a different system then even the core bylaws can change via a unanimous vote. The co-op can adapt and change over time.

The system I'm currently thinking through, and the impetus for this blog post, is to tie the IP and projects to the workers rather than the co-op as a whole. Clearly the co-op needs central funding to continue to operate and pay out salaries on unfinished projects. Without that, it all falls apart. Traditionally you'd assign IP ownership to the co-op and then it, as an entity, would divide out the bonus to workers on a project; keeping the rest as core funding and slowly increasing the accumulated IP the co-op owns.

But if we go back to basic copyright law, there is already a suggested construction for IP that is worked on by several people and is indivisible: shared copyright ownership, where all contributors own the IP and either must jointly agree any license or must equally compensate every owner for any individual deal done (the default behaviour varies with local legislation). That provides the framework for assigning the IP created on a project to the workers of that project and licensing it to the co-op as an entity for commercial exploitation and future development. This goes beyond the original model of splitting the company between all workers: the IP is now split between workers not just via indirect ownership of the co-op but also via direct ownership on a per-project basis. While we're at it, we could also attempt to spread our values even when the workers on a project leave the co-op with their IP - copyleft licenses are an example of that sort of construction.

A viral license

So what do we need this system to do and prevent? (Consider this working on top of our previously stated general rules for creating software, so the license will already include terms that automatically transfer the IP to the public domain after a certain number of years of commercial exploitation or after a high return on investment is achieved.)
  • The co-op must be given a reasonable ability to commercialise the project, which repays it for day-to-day costs, salary payments made during development, and ongoing platform services, and ensures the future operation of the co-op. This may require it to have exclusive rights for some years to prevent competition from project members operating outside of the co-op. It should probably also have rights to develop new IP on top of the existing IP (sequels, use of the codebase in new projects etc).
  • The individuals on a project should be fairly compensated for commercialisation of their work, around an agreed bonus split. Future work to maintain ongoing development (patches etc) may need to be accounted for in this agreement or allow renegotiation of the original split.
  • To prevent IP becoming inaccessible due to disagreement between shared owners (something several commercial games currently are stuck with), the contract should err on the side of providing every individual with the ability to further commercialise the IP after any initial exclusivity, as long as the returns are split back to all individuals in a way considered fair (a new unanimous agreement) or along the lines of the bonus split (the original agreement).
  • To ensure the IP does not calcify, only able to be duplicated and sold as a fixed product, a viral license should allow new IP to be constructed on top of the existing IP by those who own a share of it. The value of the viral component is to ensure that any project member who takes the shared IP with them will also be constructing new projects that value shared ownership. This will require some sort of agreed structure for how various derivatives built on top of the IP are required to return some cut of their revenue to the original bonus split or find unanimous agreement in drafting a new split and cut amount.
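To make the money flow in those rules concrete, here is a toy sketch of the proposed split: project revenue is divided between the co-op's central funding and a per-project bonus pool, and a derivative built on the shared IP returns an agreed cut to the original bonus split. Every number, name, and function shape here is a hypothetical illustration for this post's model, not a figure from any real co-op agreement.

```python
# Toy model of the proposed revenue flow. All percentages and names are
# hypothetical illustrations, not values suggested anywhere in the post.

def distribute_revenue(revenue, coop_cut, bonus_split):
    """Split project revenue between the co-op's central funding and the
    per-project bonus pool, then divide the pool by the agreed shares.

    bonus_split maps each contributor to their fraction of the bonus
    pool (the split all project workers voted on); fractions sum to 1.
    """
    assert abs(sum(bonus_split.values()) - 1.0) < 1e-9
    coop_funding = revenue * coop_cut          # day-to-day costs, salaries paid
    bonus_pool = revenue - coop_funding        # remainder goes to contributors
    payouts = {person: bonus_pool * share for person, share in bonus_split.items()}
    return coop_funding, payouts

def derivative_return(derivative_revenue, ip_cut, original_split):
    """The 'viral' term: a derivative project built on the shared IP
    returns an agreed cut of its own revenue to the original split."""
    return {person: derivative_revenue * ip_cut * share
            for person, share in original_split.items()}

# A three-person project with a voted 50/25/25 bonus split.
split = {"alice": 0.5, "bob": 0.25, "carol": 0.25}
funding, bonuses = distribute_revenue(100_000, 0.5, split)
print(funding)            # 50000.0 retained as co-op central funding
print(bonuses["alice"])   # 25000.0 from the bonus pool
# A later derivative returns a 25% cut to the original owners:
print(derivative_return(40_000, 0.25, split)["bob"])  # 2500.0
```

The interesting design question this makes visible is that `coop_cut` and `ip_cut` are exactly the levers the unanimous-agreement clauses above would renegotiate; the arithmetic itself is trivial, the governance around changing those two numbers is the hard part.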
As you can maybe sense, this is very much still a rough outline. I'd love to find projects already working along similar lines: ones that take the idea of worker ownership and split the difference between a project being owned primarily by its contributors and the co-op office being owned by all workers. I'd also love a framework for sharing IP that covers future exploitation by the shared owners and was designed with the expectation that everyone working on the project would be a shared owner (the existing things I've read around this are closer to classic music contracts, where various things are not considered indivisible and are never expected to scale out to a full team on a project all getting shared ownership).

Worker co-ops are already heavily marketed as part of a wider social movement promoting more co-operatives, which would seem to be a great match for nailing down a form of shared IP ownership that also brings with it various restrictions that mean anyone exploiting it outside of the original co-op would still be bound to the principles of democratic shared ownership between every contributor and fair remuneration.

Sunday, 14 April 2019

Dragon Age(d): the Inquisition

So after finishing Dragon Age 2 in the last post, we're now up to 2014. In fact, the last DLC for Inquisition came out only three and a half years ago - not old enough for us to be diving back to recontextualise the game but also not new enough for this to be a stock review. And yet, if we're evaluating the Dragon Age series in 2019, this is the biggest entry and probably the foundations on which the teased Dragon Age 4 builds (quite clearly narratively but also probably mechanically - whatever that ultimately means for a game rebooted under new directors at a studio imploding under mismanagement & crunch and possibly pivoting to connected "live" experiences built directly on the Anthem code base).

Dragon Age initially went from an attempt to recapture the old BioWare WRPG spark (before crowdfunded revivals offered players a lot of choice there) to a more console-focused character-driven affair on a limited budget. But for the fourth campaign in the setting (and third standalone game), EA couldn't keep ignoring the siren call of Skyrim.

When this series was first in development, Oblivion was already showing where console-friendly RPGs could land commercially. The previous Dragon Age games each did well enough (into the several millions sold) but they didn't manage to compete with Oblivion's incredible long tail and certainly couldn't stand in the same pantheon as a break-out hit like Skyrim (now well beyond 30 million sales thanks to another huge tail and many ports to new platforms). Open worlds were not just for big budget action games that took a few elements of RPGs, and so BioWare chose to take a stab at the big money.


Technically speaking

Moving to a modern engine, it's immediately clear that a GTX1070 can't run close to 4K native or max-settings with MSAA (or higher than native with downsampling) and expect locked v-sync (unlike earlier games). What's worse, the mild shader aliasing that supersampling fixed in previous games becomes terrible shimmer here with more advanced material shaders, an HDR pipeline, and bokeh-simulating depth of field (enlarging any shimmering overbright pixel into a fat blob that would almost look like glimmer if it wasn't so clearly strobing at the frequency of an aliasing artefact). MSAA is still an option but it's not going to do anything about shader aliasing (also you probably don't have the GPU headroom to turn it on at high resolutions anyway - especially as on PC you can and should force a 60Hz mode everywhere) and the post-AA is typically somewhat inconsistent. A modern temporal AA solution is sorely missed here, even if it's no worse than many contemporary titles from this dark era for temporal stability.

The game's technical issues were never fully patched and I had more than a few DXGI_ERROR_DEVICE_HUNG crashes early on (which seemed to get nailed down to some resource management issues that became less prevalent with patches after release but clearly never got completely defeated). Early on I also encountered animation stuttering (especially in cutscenes, which should play back at a perfect 30Hz but clearly don't, with some scene elements updating correctly while others stalled for several frames) before deciding that a 60Hz SimRate couldn't be worse. Despite being officially unsupported, it seemed much better than the default and provided pretty consistent frame pacing.

Dark (default) indoor
Metals in shadow
Imagine the DoF glints strobing

Also holy clipped brightness values! How was the monitor configured on which this tone-mapping was agreed upon? A few blown highlights outdoors and, far more significantly, severely crushed shadows inside; you're often entirely reliant on the phantom light your protagonist emits onto the nearby dungeon walls. I ended up pushing the brightness up a few notches, although there is no proper gamma setting in-game so any slight improvement in the blacks also makes blown highlights more common - I could find no satisfactory setting and literally the only screenshot with the default brightness is the one immediately above from the very first dungeon.

There are other areas where a visual step forward leads to inconsistent results. The rather mechanical facial animation of the earlier games is gone and we enter the era of modern BioWare. It's not as bad as Andromeda's "automated animation while management failed to schedule any time to hand-tweak the output" but the higher fidelity certainly pushes towards uncanny in a way the previous games didn't. There may also be an element of so many returning characters, with their previous visual representation so fresh in my mind. Playing the games back to back, it's striking - initially I almost wanted to look away to enjoy the vocal performances without the distraction (before getting mostly used to it and then missing seeing face close-ups at all in the many dialogue scenes where the camera doesn't even zoom in).

DAI "default" Hawke
My attempt at a custom job
My near-default Hawke in DA2

The technical chops of the new engine are clear (at the cost of GPU requirements per pixel rendered) and, despite the tone-mapping issues, the lighting and material system does a great job of bringing the scenery up to where other Frostbite Engine games can reach. But the move to more realistic skin shaders possibly moves away from what I think BioWare have traditionally done so well - the painted portraits in Baldur's Gate right up to Dragon Age 2's very stylish designs (going as far as to lock the party visuals and heavily push the default look for Hawke, same as had been done for Shepard in Mass Effect). This series playthrough, I'd gone with a basically stock Hawke (modding in the option to tweak a few things but generally sticking to that iconic face you got from selecting the default) and then saw what BioWare put in Inquisition as the default Hawke: that's really not an aged version of the previous protagonist's facial features. Thinking more closely about why I couldn't really make a custom character that looked like Hawke, it's not just the lacking options - you simply can't create the more cartoonish faces of Dragon Age 2 in Inquisition's more realistic rendering palette.

It's worth remembering that this title spanned the console generations (also releasing on PS360) so was always going to straddle the visual expectations of both and Frostbite is a lot fancier today - yet more reasons for a full trilogy remaster for the upcoming consoles. The stories here are worth another stab at; the voice performances may need to be augmented (especially if Dragon Age 2 is to be expanded to a full-length middle chapter in the saga) but are still extremely good; and there's the kernel of some extremely good visual flair here (especially if slightly reworked towards a cohesive, less realistic, style that spanned the series).

When 'all things to all players' fails

Dragon Age 2 felt like it streamlined the RPG mechanics and removed busy-work; Inquisition adds extra busy-work like the "ping" button to reveal resources/loot while removing strategic choices like manually assigning character base attributes to customise a build. Ability trees have been simplified to fit the limit of only being able to hotkey eight active abilities for any character (including an 'ultimate' ability), similar to how some MMOs have streamlined their abilities in recent years and removing the need for large hotbars to play some of the most interesting classes.

A cavalcade of secondary systems have been added (expanded crafting, exploration goals, a million different progress bars, etc) so a desire to cut back on old systems makes sense. Players only have so much mental bandwidth to consider each system and their potential interactions. You can see where every decision comes from, but when put together it often feels like it doesn't lead to a great final experience and certainly doesn't flow from the previous games.


The tactical view returns, the PC UI does not. But it's not the Origins tactical view that allows playing the game as if it was an Infinity Engine game and it barely feels like it fits the more action-oriented combat modelled on Dragon Age 2. As a continued progression into only adding the merest facade of PC niceties, the ability tooltips now fail to actually provide details of what anything does (hover over the toolbar to get the name of an ability and literally nothing more). On such a large project, creating a PC UI seems like it would have been a reasonable task.

When you're using a mouse, so many of the menus require you to drill down into a new layer to edit something rather than having edit buttons to switch stuff at the level of a list of items. I spent quite a while getting comfortable with both keyboard and controller support. Unfortunately you have to exit to the main menu to change between them so hotswapping is out of the question - I feel like a lot of us who played Battlefield from the early PC days got good at quickly migrating from on-foot keyboard to a stick or pad for vehicles and, despite the technical challenges to providing the correct UI, more games should expect people to dynamically move between them.

Inquisition is clearly built primarily for controllers (even with the 8 "face button" actions not being as nice a fit as most action games manage). The movement with WASD feels clunky (once you rebind the comically outdated keyboard turning default); the auto-attack from Dragon Age 2 feels severely scaled back and no longer automatically deals with facing/collision/movement as elegantly; the menus are actually slower without keyboard shortcuts that, on a pad, are bound to a quick press while a menu is open; and on and on. But what eventually got me to stick with keyboard was the (patched in after release) Unsheathe button - if you're exploiting unlimited fast stealth* then you need your weapons out. Normally you'd ping or jump quite often to ensure the cooldown never detects you're out of combat, otherwise you're stuck having to hunt a new mob to initiate infinite stealth from. Not so on keyboard, where you can return to a combat stance without swinging an attack (which would break stealth and so end your unlimited stealth). Without this, I might say gamepad is the better option on PC but, as with so many things in this game, it feels like you're always being denied the best solution.

An open world filled with stuff

I was almost 25 hours into my playthrough of Inquisition when I gave up on the slow mount speed (and the inability, while mounted, to pick up crafting materials or ping for points of interest) and exploited being able to get unlimited stealth* with a speed-boost dagger to make running faster than mounted travel. It's a symptom of the world being too large and the traversal options feeling too limited. The only real downside is that the mount system despawns your party, while sprinting means they regularly teleport in front of you as you run around.
* A rogue's Skirmisher upgraded Flank Attack, when it connects, puts you into a stealth mode with no duration timer as long as you don't attack afterwards; the Lost in the Shadows upgrade means even running through enemies doesn't reveal you. Mages can beeline for the Ring of Doubt to get stealth. Stealth also lets you creep past enemies to gather the mats for crafting a 1.75x speed-boosting Masterwork weapon. Warriors on PC may want to replicate this combat speed via mods (it's possible in the unmodded game for two classes, so it's bordering on not even a cheat) or just switch to a party member of a different class for traversal.

This is an impressively huge world (even cut into many zones), especially coming directly from the very restrained Dragon Age 2. But sometimes impressing and being readable are at odds with each other. It's impressive to not know where the edge of a player-explorable area is - a potentially infinite world - but that lack of readability makes it hard to efficiently explore. Earlier games showed you on the map that you'd reached the edge of the traversable area; Inquisition puts cliffs (looking just like the ones you can climb) or some rare invisible walls in the way rather than letting you understand the space as a floorplan. The corridor linearity of those zones and dungeons in previous games gave the feeling of a space without the navigational hurdle of actually working out how to get between any two locations on what looks to be a huge open expanse.

When you're constructing an RPG out of a more open world design, you start to hit those pain points that the previous Dragon Age campaigns had rarely encountered. "Here's a big huge Dwarven door that I've previously found another of unlocked on this map by collecting the draw-the-star puzzles. I'm level 4ish as this is one of the areas you can unlock very early on. Nothing in the game indicates why this doorway (marked as if it is a cave and currently showing there are things for a quest I'm currently on inside) is something I should ignore and come back to later." The quest markers are accurate, the quest objectives are absolutely inside but you need to be level 16, far later in the game, to unlock a different quest that unlocks this particular doorway. You need to look that up on a wiki or forum, thankfully now filled with hints from players who've already done everything.

We can go back to some of the early impressions of the game, where completionists (used to the previous campaigns and doing all the quests in a zone) just burned out on the very first large zone unlocked and its endless minor quests with little to no flavour. One of the things a chatty party offers designers is the option to add barks suggesting you head back to base (and trigger some more plot development). Huge open worlds require a lot more careful planning of how they introduce and guide the player to everything. There are a lot of points where the previous games had offered a clear map of the dungeon with which to navigate, while the open spaces in Inquisition demand a completely different way to parse traversal, and it's hard not to pine for the old ways when you're trying to work out how to jump up a cliff to get to the collectable thing that's probably in reach.


The rewards at the end of the new collectathons also leave a sour taste. "I can't wait for Solas to have a big speech back at base about all those shards we collected and the Pride demon we took down once the final door unlocked in the zone that's basically just there to give you doors to feed the collected items into that gets introduced as important at the very start of the game." There is no follow-up dialogue or quest; no narrative reward for finding all the shards. An achievement pings when you cross the final door. The same total lack of fanfare occurred when I helped Solas with 10 stabilising widgets and got given a location of a standard (high level) rift closure that was "special" & "worth investigating" in the mission text but not in any real dialogue or narrative conclusion; not even an achievement dinged for that companion quest. I'm left wondering if this is still a BioWare RPG with so much less signature BioWare narrative tied to progression. Worse, I wonder if there is actually less good stuff or I just feel like that because it's watered down by so much more filler? Looking at hours played, Inquisition would need at least as many character moments and narrative developments as all three of the previous campaigns I'd played through to compare, simply due to just how many hours it takes to complete all the quests here.

Developers can say "just don't engage with it" about the less strongly-narrative content but the game design doesn't flag that there will be no payoff to the narrative setup they wrote to start the quest lines, so how do you know what to ignore? And how often have BioWare killed off a character in the last decade when players failed to engage with their optional quests? It's not exactly unreasonable that players have been trained to exhaust the quests and even dialogue trees (even asking borderline transphobic questions just in case it's vital to some progression that Krem gets asked something that shows you've got no clue) to try and avoid missing some critical but optional path. Dragon Age is a series where the fan parlance discusses "hardened" and "softened" characters over the arc of the entire narrative to track potential changes to character attitudes and what that means for where the story can go. In 2019, we know a new Dragon Age is coming and will almost certainly import the world state from the Dragon Age Keep (the online world state checker/editor).

The real killer I felt on this playthrough, during which I used a mod to turn the real-time waitathon mechanics off (instantly finishing quests on the "war table"), was how the large open spaces and walking round to chat were broken up by so many trips to trigger a text-dump "mission" on a glorified map you can only access by running to an area (with no fast-travel point just outside) in your base area. Timers making sure you don't play through the main missions or unlock new areas too quickly (even though there is already a currency that gates unlocking missions behind doing the less narrative content). And when you remove those blocks then the true absurdity becomes apparent: running from a companion spot to trigger a cut-scene and back to the map to start a "mission" you don't actually play that does what they suggested then immediately back to their location to trigger the continuation of the cut-scene.


Going forward

We're approaching the end of this series on a bit of a downer there. To be clear, I very much enjoy Dragon Age as a series and Inquisition as a bit-too-Skyrim-y big-budget entry in that series. If I didn't care about the characters (new and old) then I wouldn't be so invested in wanting more character moments. The technical issues (some of which we might generously call "era appropriate real-time rendering limitations") and stylistic choices along with the zone readability and collectathon issues stand to hinder some truly lovely spaces that could be filled with excellent gameplay and stories. It's the gap to greatness that makes me feel like Dragon Age should get another chance - just like Mass Effect 1 just needs the combat and inventory stuff reworked or Mass Effect 2 deserved a better ending. All of this modern BioWare era feels like it's so close to something not just extremely special but timeless. Nothing is perfect but some things stand out, even ten years later.

The first two big Inquisition DLCs - a dungeon and a new zone - are very similar to the base game, and if there's one thing this game wasn't desperate for it's even more content (providing more lore that BioWare will have to assume many players don't know about in the next game). The capstone Trespasser DLC, however, felt like it significantly improved on a lot of my concerns with Inquisition, so the team were already moving in a good direction. We now know the next project from that team got cancelled or rebooted with the loss of the project lead, so the future is less certain. One thing the teaser (which was for the new project) did make clear is that the narrative hooks at the end of Inquisition are definitely the jumping off point for the next game.

It seems likely that next game will arrive at some point on a new generation of consoles. So we still have some time to wait. Luckily there are four campaigns here that are very worth playing through.


Plug: why not help me justify spending over 200 hours replaying old games recently and writing up my thoughts by *jangling tip jar* becoming a patron.