Friday 15 March 2019

Good Enough Meets Extremely Fast

I've been playing a lot of games in the last month that are getting on for a decade old. Some of that is for a longer post (series of posts? my notes, not yet having finished the final Dragon Age game, are 3500 words) but I wanted to do something shorter about how these games (that exported their artist assets expecting most users to play them at 720p) stand up rather well on modern systems. There may be a touch of riffing on this recent blog post too.


Almost exactly a year ago I was asking similar questions to that linked post, but about the asset fidelity arms race and the last decade of progress measured in pure asset comparisons (that is, taking e.g. a 2011 game's assets and rendering them alongside today's in roughly equivalent modern real-time renderers). Playing through this series of games from 2008 to 2011 in quick succession was a great visualisation of how those old assets hold up in 2019 with 4K60 output.

None of the screenshots that I'm embedding here are doing anything fancy like injecting alternative shaders or swapping out the stock assets for higher-poly community mods or more detailed textures. Dragon Age 2 has the "High Resolution Texture Pack" (advertised as being for GPUs with a massive 1GB of VRAM), which is an optional official download on Origin, but I'm pretty sure that was released on the same day as the base game (and it's official anyway). Everything was captured looking for a ~60fps experience, so it's not a DeadEndThrills approach of turning everything up to 11 even if it broke the framerate and then capturing and downsampling purely for the photography. These are faithful captures of the internal framebuffer of the game as played.


If you click through to the screenshots in this post, you'll notice some unusual resolutions involved, because today DSR/VSR (super-sampling at the driver level - exposing fake higher resolutions to any game and then downsampling for output to the actual screen) is an absolutely stock technique. Something like a modern GTX 1070 (my card will turn three years old next quarter - so not even that modern) has more than enough power to turn on any existing AA technique (MSAA hadn't totally died to deferred renderers in this era; FXAA etc had started to be imported from the consoles) and then also boost beyond 4K to help control some of the shader aliasing. The shaders aren't that complex, so there is plenty of performance to play with, and often no one is getting fancy with HDR to really explode everything (compared to games around 2015, which seem likely to be remembered as a dark period of high shader complexity without great management of artefacts & defects in edge cases; not to mention not yet having good enough temporal anti-aliasing while almost everyone had migrated to deferred rendering, where MSAA isn't viable).
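The core of that driver-level trick is simple: render internally at a multiple of the output resolution, then filter each block of supersampled pixels down to one screen pixel. Here's a minimal sketch of the downsample half in Rust, using a plain 2x box filter over single-channel `u8` pixels for clarity (a real framebuffer would be RGBA and drivers use fancier filters; the function name and layout are my own, not any driver API):

```rust
// Box-filter a supersampled buffer down by 2x in each dimension:
// each output pixel is the average of a 2x2 block of input pixels.
fn downsample_2x(src: &[u8], width: usize, height: usize) -> Vec<u8> {
    assert!(width % 2 == 0 && height % 2 == 0);
    assert_eq!(src.len(), width * height);
    let (out_w, out_h) = (width / 2, height / 2);
    let mut out = Vec::with_capacity(out_w * out_h);
    for y in 0..out_h {
        for x in 0..out_w {
            // Index of the top-left pixel of this 2x2 block in the source.
            let i = 2 * y * width + 2 * x;
            let sum = src[i] as u32
                + src[i + 1] as u32
                + src[i + width] as u32
                + src[i + width + 1] as u32;
            out.push((sum / 4) as u8);
        }
    }
    out
}

fn main() {
    // A 4x2 "supersampled" buffer downsamples to 2x1.
    let src = [10u8, 20, 30, 40, 50, 60, 70, 80];
    let out = downsample_2x(&src, 4, 2);
    println!("{:?}", out); // prints [35, 55]
}
```

The averaging is why supersampling tames both geometry edges and shader aliasing: every kind of sub-pixel detail gets integrated over, not just triangle coverage like MSAA.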

Despite expecting most users to see these decade-old games on much lower resolution screens, the push around this era was for textures good enough for up-close inspection. When the textures are good enough up close, you've now got some decently detailed textures even for 4K output at medium distance. I'm not going to say all of these games are perfect, as you do clearly get some muddy visuals even in the mid-ground in some places (especially stuff like a large flat repeating floor texture). But it holds up surprisingly well, and the dynamic shadows are often so primitive as to be easy enough to ignore (if you can't brute force them by poking at config files and demanding the GPU just throw a GB of VRAM at huge shadow maps). The several years of continued development from where Half-Life 2 (including the Episode Two refresh) had left us - in terms of getting a reasonably coherent result while juggling multiple different systems (this is before a unified PBR push) - is often impressive. You can nitpick the results, just as you can often point to comically low polygon density you'd not see today (outside of maybe indie games, and even those often spend their polygon budget quite well), but it's only a few spots rather than the entire scene looking out of place on a modern system.


While clearly miles from photo-realism, there is enough detail to know what everything is meant to be, and for things like a poster or sign to get close to being the actual poster or sign without lashings of artefacts or having to use a special rendering technique to achieve it (here I'm thinking of how well Doom 3 did the in-world UI stuff back in 2004, which remains the exception even today). There is nowhere near the level of detritus you would see in a real world, but there is enough to make it look lived in. Those props look close enough to what they're meant to represent that we're not in the situation of years previous, where it was a muddy texture and often a mess of polygons that you had to work at to understand once looking at them at a far higher resolution than was originally intended. There are enough assets that there is cruft on a desk, rather than only the props required for the interactions plus one fake bottle to stop the artifice totally collapsing once interactable objects started to get glowing highlights or arrows above them.

Also, the lack of PBR in this era for things like human(oid) characters meant artists seemed freer to push a more cartoon-y, stylised approach (before you defaulted to starting out with a skin shader with sub-surface scattering and working from there), which certainly helps avoid the uncanny valley. Some of the animation systems from this era are clearly reaching towards a fluidity the tech did not make easy, and the animators were not given the budget to hand-tweak them to perfection from whatever performance capture they may have started with. I'd say it does show an "emotive gap" - looking at the puppetry onscreen trying to convey subtle emotions via expressions but often not quite getting there - but even today this doesn't seem like a totally solved issue, and I find the difference from studio to studio far more significant than simply the progression of technology. Even around this era of games, we've got stand-out stuff from Naughty Dog showing you could do it really well with the technology back then.


I am still energised by rendering questions. The introduction of real-time ray tracing makes this such an exciting time to be thinking about the next generation of engine designs (and even just what the new console generation will bring in terms of a baseline performance we can expect many, many millions of users to have reasonably affordable access to). The more invisible things excite me too, like a continuing focus on code quality and reliability engineering, with several studios talking about how they're looking at using Rust to really enforce higher coding standards in their work (banning some patterns of design as too risky, which the Rust borrow checker enforces at compile time).
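To make that concrete, here's an illustrative sketch (my own toy example, not from any studio's codebase) of the kind of pattern the borrow checker bans outright: holding a reference into a `Vec` while also mutating it, which in C++ could dangle after a reallocation but in Rust simply won't compile.

```rust
// The borrow checker turns a classic use-after-reallocation bug into a
// compile error rather than a runtime hazard.
fn safe_read_then_push(frames: &mut Vec<f32>, extra: f32) -> f32 {
    // This version would NOT compile -- the immutable borrow is still
    // live when push() needs a mutable one:
    //
    //     let first = &frames[0];  // immutable borrow of frames...
    //     frames.push(extra);      // error[E0502]: cannot borrow as mutable
    //     *first
    //
    // The checked alternative copies the value out first, so the read is
    // finished before the mutation happens.
    let first = frames[0];
    frames.push(extra);
    first
}

fn main() {
    let mut frame_times = vec![16.6f32, 16.7, 16.5];
    let first = safe_read_then_push(&mut frame_times, 16.8);
    println!("first: {first} ms, samples: {}", frame_times.len());
}
```

That's the appeal for reliability engineering: entire categories of "too risky" designs are rejected at compile time rather than policed by code review.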

How do I feel going back a decade and enjoying all these games that still look good enough today (thanks to the extremely fast GPUs we've got)? Well, it makes me think about what we're working on today and the hardware we'll be able to use to replay it in another decade. What slight visual deficiencies we'll be able to brute force around; just how detailed things might look on 8K TV panels with amazing contrast/brightness options (and maybe some deep learning algorithm tweaking the game output to enhance it without the horrible results from previous generations of "TV enhancements" to the input signal), or with VR headsets that sit us inside recreated 3D spaces and give us effectively even higher pixel counts (via head movements allowing us to be truly surrounded in a scene, plus 4K VR panels).

Games have longer shelf lives than ever before and can continue to grow even long after we've stopped actively working to develop them. We should probably think about making sure all our sliders can be unlocked to go up to 12 so that players in ten years can continue to poke the settings up as they get the hardware to run it.
