Wednesday, 15 November 2017

Tabs vs Spaces is the Wrong Question

We've all been there: let's agree upon a style guide for this source code so we can all work on it without it decaying into an inconsistent mess, or the commit logs becoming an endless back and forth of style changes. From the most basic choices (CR+LF or LF - because no one wants line endings flipping back and forth in a document's commits as people work on it from various systems) to K&R vs 1TBS vs Allman braces. If we can find agreement on brace styles, can we also find agreement on indentation: spaces (2? 4? 3?) vs tabs?

The problem is that these are the wrong questions. Why does my view of this source code need to be the same as yours? Why is our version control system working so hard to force us all to agree on the most minute of details (even totally invisible ones like line breaks)? I had always said tabs were the right answer to indentation, because that way the reader can set their own indentation width without changing the underlying text, but that is not actually the answer. The answer is that we shouldn't be sharing source code as (pure, versioned, character-by-character) text in the first place. We should look at what this source code actually is to the machine: the underlying meaning, onto which we layer some decoration (comments, which we should preserve at their attachment points in the source code). That is the AST.

Maybe I spent a year or two too many in the metaprogramming/code analysis/compiler dev space recently, but this seems like not only the obvious solution (held back only by inertia) but also something where everyone wins. We standardise a bit, because you'll no longer be messing around with fixed-width spacing to align notes and will have to get better at both self-documenting code and writing meaningful, self-contained comments, and we share our source code for comparison as ASTs. From there the actual storage medium can still be text documents (or some AST intermediate format), but before anyone views anything it goes through their own pretty printer. Whatever style guide makes the code clearest to you, you apply it, and that's what you see and edit (with your own settings keeping your style consistent). When you share it again, it gets turned back into the AST view, so only the actual changes are committed (to be shared back to everyone else).

Do you find K&R braces so much easier to parse? Then that's what you set your pretty printer to generate for your local copy. There's no reason why we should all be forced to view the same text representation of source code. We all have different views on the various formatting choices and which errors they make easier or harder to spot; we all have different susceptibilities to inserting those defects. We should each find the way of viewing source code that best reduces our chances of making mistakes when writing it. We can start this today. Rather than spending time in meetings and maintaining or teaching the house style guide, spend the time writing tools to make your sharing layer style-agnostic: start out by always pretty printing the code into a common style before commit, and pretty printing changed documents into the user's preferred style after a pull. If you have someone on the team who tinkers with LLVM/Clang then you've probably already got the expertise to build those tools (even if you can't find something suitable already out there that can reformat your programming language). If this idea gains traction, it's something that should be extended into the basic functionality of VCSs.
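As a concrete starting point, git's clean/smudge filters can already act as that style-agnostic sharing layer for C-family code: reformat into the common style on the way into the repository and into your own style on the way out. This is only a sketch, assuming clang-format is installed; the filter name `housestyle` and the choice of `Mozilla` as the personal style are placeholders:

```shell
# In .gitattributes, route the sources through the filter:
#   *.c *.h filter=housestyle
# "clean" runs when staging (commit side), "smudge" when checking out (pull side).
git config filter.housestyle.clean  'clang-format --style=file'    # repo's .clang-format = common style
git config filter.housestyle.smudge 'clang-format --style=Mozilla' # your preferred local style
```

Because the clean filter normalises everything to one style before it is stored, diffs only ever show real changes.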

Once tools start to support this, we can look at moving the VCSs to never needing the text version at all, just storing an AST representation. Your commit system then automatically gains static code analysis options, since it's already parsing the text into an AST - no excuse for allowing commits that your compiler will choke on.

We can then integrate this new freedom more deeply. While I may like mixedCase names/ids, another person may prefer snake_case - and why do we need a common naming convention at all, when we're really just using different ways of compounding a list of words (the actual unique identifier)? We can store each name as a list of words and pretty print them into the actual source code to suit your preference. As long as we're not tripping over any language restrictions, that should be totally fine. We don't need to fight over so much in these style guides; the compiler certainly doesn't care.

Edit: with thanks for links to projects already working to provide the tools to do exactly this: SwiftFormat & Esprima's Source Rewrite demo. I'll add that my own source rewriting tools were always built on pycparser (which is a lot easier to throw something together with than the transformation machinery in LLVM/Clang, but also C99-only unless you write your own grammar). I see Clang now has a mature ClangFormat tool, so C/C++/Obj-C code doesn't need to poke around in the guts of (potentially moving) Clang APIs for formatting.

Also very good to hear of development teams who are already doing exactly what I've suggested above (with rewriting to a house style on commit & option to custom pretty print on pull).

Tuesday, 31 October 2017

Guild Wars 1: Halo Edition

I thought I was going to write about Forza again this month. New game; my first time back on the circuit-driving side of the series in some time, so all of those new tracks & cars would be fresh (and lovingly rendered in 4K HDR with the arrival of the Motorsport series on Windows). The series had finally caught up with the competition (dynamic conditions etc) and this has always been the best semi-sim racing to be had with a controller.

It was not to be. The actual racing still got me where it needed to be: start out with assists off (barring a braking point indicator so I don't need to memorise tracks) and cockpit cam; overtake the pack for a few laps of racing, then go into driving mode and chase that top clean lap for the leaderboards at the end. But I'd decided to play around with longer races in the career (a nice option) and, beyond the LoD system seeming to remove all the LoD levels at once (possibly now patched), the game's stability simply wasn't there. There are no in-race checkpoints that save your progress, so if you select a three-hour race (as the long version of the very first endurance race in the career is) and the game crashes after two hours, you have to start again. I asked myself, "what if it crashes again?" and realised I didn't know if I wanted that groundhog experience. Guess I'm waiting for them to nail down a proper release (still no Forzathon events, auction house, or leagues) that works 100% of the time, or to add mid-race save points for endurance races.

What have I been doing instead? Catching up on some backlog stuff and then jumping into that Destiny 2 PC release. I'd generally not been that bothered by the release of the first game, especially with the crash-landing of the launch version they originally scraped together from their cancelled initial vision. Basically, the idea of a Halo game without even the framework of a low-budget SciFi story to keep it all going forward sounded like something I'd find OK but not remarkable (and the bits of it I did play confirmed it wasn't quite sticky enough for me to keep going with that gameplay loop of shooting packs of enemies).

What changed? I guess I finally had a break from playing Halo games (Halo 4 was fine but maybe I'm done until they find something new to engage me with and Halo 5 was not that) which gave me time to look forward to more of that sort of SciFi shooting. I also found something in the RPG loop of The Division that scratched an itch I wasn't convinced you could scratch well (with years of FPS RPGs showing that anything but the actual shooting or gun collecting was where the real game was - the point of classic Deus Ex is not to constantly find the guns fun to shoot with, they're where you go when a plan is unravelling).

So Destiny 2 is a shooter based around being part of a class of people who are special but during the plot it turns out that actually you are the one true extremely special person (or person in a robot body? some of the world details are really not explained if you didn't dig into the website descriptions provided with the first game) and so get to do a load of heroic missions where everyone speaks in your ear to say what a good person you are for being good at shooting and literally unable to die. Except for the bits where you're not allowed to die and you get checkpointed back to the last save point. It's very Halo possibly by way of Bioshock's Vita-Chambers. But it tells a basic complete story which is accompanied by a load of good skyboxes and enough dialogue to keep you bouncing along that ten hours of main narrative. If you have played AAA solo FPSs, you know roughly what to expect.

There is also, in an almost Guild Wars 1 style move to bring a story to an online setting, lots of side missions (many co-op, others solo) and hidden treasure in the four large open areas that much of the story campaign is built around. There's probably another ten hours of narrative there (moving away from cut-scenes into purely audio and scenarios - some repurposing the mission areas with new enemies and set dressing) and a lot of it doesn't really get pointed out while you play the main campaign story (beyond being map icons to run past).

The story pushes players from level 1 to 20, which ties into a gear (power) level of 10 to 200. After that the gear goes to around 300, but you'll have finished the main campaign so it's an ideal time to do all the other side missions. I ended up at around 275 after enjoying those additional stories and hunting for some hidden caches in each open area. I even bumped into some other players and did some of the co-op encounters (not story, just a scenario that plays out against waves of enemies and some props you have to destroy to capture within the time limit of the public event) while I was moving between locations. It's all very as expected for an online game like this, without making it a game you need to constantly group up to enjoy. They put a full Halo in there, even if they did make better gear essentially stop dropping at 265 in order to leave a tier for people who want to group up and do "end-level" activities (some PvP and PvE content, including a single raid, for which there is a bit of story setting it up in the game but not that much - they did kill the big bad at the end of the story, so you can't fight them here).

One of the weirder things is that The Division (and, from what I understand, Destiny 1) built guns from random perks and archetypes/names with a bit of variety, giving that Diablo dice roll for exactly what you get. Here, a gun has a name and a power level; that's enough to completely define it. From there you get to equip your own paint skin and a mod (a gem system for attaching a small extra stat to a single slot), and pick from the options that weapon type has. But you're not rolling a "Gun X with Red Dot 3" - all Gun Xs either have the option (ready to be enabled) to equip Red Dot 3 or they don't. A few guns have no options at all, and for many it's just a choice of short or medium scope (which changes the effective range far more critically than the actual optics of the weapon - if you're outside the effective range then it doesn't matter if you always hit the weak point, you'll be doing next to no damage).

It's a fun game moment to moment and there's just enough variety to build a strong preference for certain play styles, and so for the weapons that enable them (eg by the end I had decided that I always wanted range in my kinetic slot and would leave the 3-shot pulse rifles at home, because I'll always take a scout rifle for the DMR-style experience). If you're looking for slightly more evolved combat than just replaying a Halo game, Destiny 2 isn't a bad option.

Tuesday, 19 September 2017

Forza Motorsport 7: Demo

About a year ago I wrote about my history with the Forza series of games and how I found the state of the series with the PC beta/test release of Forza Motorsport 6: Apex.

We are two weeks away from Forza Motorsport 7 finally providing a full track experience outside of the closed console ecosystem and a demo (a three event series which has been used to show off the game at events but now runs outside controlled environments) has just been released. There are already a few concerns about FM7, from loot crates (including exclusive player customisation rewards) taking a prominent role to the Auction House and several other features being missing at launch (they are coming later but this does feel like an XB1X launch game they're pushing out early), but I'm sure we'll be able to get into that next month when we actually know exactly what's in that final retail experience.

I am no Forza pro - the sharpest end of the global leaderboards is beyond my skill level - but I do enjoy a bit of a spin with assists off and the AI tuned up towards the top difficulty. The rewind option in solo play allows fast development of skills, because you can constantly work at the edge of your ability for that car class. The assists mean you can slowly turn off automation and get used to something new to master, while the way those assists limit your performance gives you a reason to want to drive without them, even at the sharper end where reaction times are being tested. None of that can really be drilled into in a three-track demo that fixes the vehicles (this is the introduction that tutorialises the start of the campaign in FM7), but we can get a feel for how it runs (on my ageing PC - hurry up Intel, release those mainstream hexacores) and whether it feels significantly different to the previous Motorsport entries.

From a technical standpoint, the dynamic settings allow for a smooth 60fps with v-sync on my system at 4K with 4xMSAA (you can decide which settings are static, so you can enforce MSAA constantly rather than having it tweak on the fly - there is no temporal AA here to accumulate sub-pixel accuracy over time), but something about it feels off. I can view my replays and see the locked 60fps presentation, often without dropped frames around the areas of concern (and what looks like even time-steps on my fixed-refresh screen), yet it feels like micro-stutter. My best guess at this point is that at the points where I feel it, the car is starting to shake on the track (at speed or over corners) and the more reactive camera causes this oscillation to combine with the framerate in a way that feels like stutter. I have tried changing the Camera Motion Effects, but that only appears to change how the chase-cam and HUD shake - inspecting the replays shows the cockpit camera still bouncing around inside the car whatever this setting is configured as. Something about it feels new, and possibly something I'll need to get used to in the high-end vehicles before I can feel comfortable.

There are also some areas of actual stutter that I'm finding under extended play; perfect 60fps frame pacing that suddenly stalls for 5-6 frames in a row before catching back up, which is hopefully an issue they're tracking and not a sign my CPU isn't fast enough for the simulation. You can see a slightly concerning stall (10 frames long) just before the lap change in the video above but it's very rare that it drops more than an occasional single frame - completely different to the mess that was Forza Horizon 3's launch.

Edit 21/9: So I actually looked at the performance logs to see if I could find what was happening when the GPU stalled out for these up-to-a-dozen-frame blocks in the demo and just having task manager open made it obvious - I need more RAM. Due to my PC being over six years old, the motherboard no longer accepts multiple RAM sticks (again, weeks away from Intel responding to Ryzen and then I pick which system to buy) so I'm stuck on 8GB. Forza (at least with the settings on Ultra-Dynamic) plus OS (nothing else in memory, even video capture turned off) seems to want at least 10GB of RAM so these stutters (as seen above) are from RAM evictions/shuffling to pagefile.

The new more-forward cockpit position option is great, giving just enough of the dash to read the instruments (although a custom-FoV option would be appreciated to give a bit more control) without losing 50% of the screen to rendering the inside of the car. Everything feels good. In this demo there is no customisation, no sense of how the progression is, but as a small taste then it runs, if anything, better than Apex does. There are a few tweaks that should happen before release (icons for the vehicles on the mini-map rather than opaque rectangles is presumably a texture load issue) but this is looking like a solid platform on which they have hopefully built enough tools to keep everyone happy.

Thursday, 31 August 2017

Engine Change: Life is Strange

There has been a lot of discussion of the "fingerprint" of an engine recently. Does it matter to people who play games which engine has been used? Are we beyond the era when Unreal Engine games were desaturated brown blobs that slowly streamed in the high detail textures several frames after the camera got there, or Unity Engine games were limited by lacking level geometry options for efficient map construction? Generally the answer is yes, but there are still defaults you can choose to use from an engine's standard configuration that make its use visible (beyond licence deals that save money by showing the engine's splash screen when you start the game - seriously publishers, pay to not show the dang logo; you can afford a proper premium licence).

It's important to note that anything can be rebuilt on top of or in replacement of anything that ships by default in a major engine. Most of them now offer source code access rather than forcing developers to work in a scripting language to do their gameplay code and locking up the engine as a binary blob you can't poke and rewrite (traditionally something that was prohibitively expensive to license access to). Everything on the GPU is just a shader and you can write your own, the fingerprints can come from which shaders are in the examples directory or part of the effects package that is offered with an engine. If you want a depth of field effect, there's probably a well optimised version tuned in your engine options rather than rolling your own and possibly making some trivial mistakes (or some incredibly subtle ones that only pop up on unusual hardware platforms). If you're using physically based lighting then your engine probably has one specific way of doing that with a load of default parameters for exactly how it looks.

With the series' move from Unreal to Unity, let's explore that in Life is Strange. The interface immediately shows where Unreal Engine is an extremely mature option: everything feels good to just move around, with either mouse or joypad. Opening Before the Storm, Unity's UI (at least the iteration being used here) shows less polish. The deadzones that come as sensible defaults in Unreal are here unable to deal with the slight wobble on my old 360 stick. Is the deadzone still only 0.001 here? It does not feel like the 0.1-0.2 stick deadzone of something like Halo or Gears - and it should arguably be set even wider in a UI where you're only using the stick as a digital four-way most of the time.

From the very opening screens it is clear that these two games in the series are not going for the same look. We can highlight a few things here from a technical perspective. LiS has a very strong directional blur around the edge of the scene with strong chromatic aberration effect (the blur size is different for each colour, causing fringes) that's missing in BtS. Meanwhile, BtS immediately shows a high quality blur effect being used to create depth of field (which we see in LiS itself but not in this menu, where a thick fog effect really paints a haze onto the bay). There are also a thousand little things about how the lighting is calculated and the tone mapping used to resolve that to the final image which lead to very different results.

But outside of the technical, look at the art. These scenes differ by more than slightly different technology and effect choices. LiS is always painted with a style that invokes a rectangular brush, and that's already apparent here. Leaves, tree shapes, houses, the white cliffs - it's all textured for this very specific impression of a certain painting style, faking that brushstroke character despite being a clearly polygonal construction. Just look at the leaves on that main tree on the left in BtS - speckled with perfectly circular dots to give the impression of detail - and the tree bark, while showing some of the rectangular blocks of colour, breaks them with far too much heavy detail texturing, so the brush effect is lost. This is an artistic difference we will see throughout the comparison. It's not the engine that's providing the biggest cues to a changing style but the choices of the art director and other artists on the projects.

Here we finally pick up on that chromatic aberration in BtS but toned down to the point where we only just pick it up in the bottom of the scene (and will continue to only just catch it in the rest of these screenshots). But we can also see that the high quality blur is being used to good effect in this dark scene along with some decent light cones to make this 1337 jump out in the very first scene of the game.

But I still miss the visual style of the original game. Despite the night vs day difference here, I find that things like the heavy fog over a high quality blur make for a more satisfying result which highlights the texture work. It's rough and, like the chromatic aberration, it has absolutely no interest in even pretending to hide itself. It charms me, much like the characters in the game.

When blur is used here in LiS, it's that smearing effect that gets caught out on the polygon edges in the foreground. I know the technical reasons why, and I'd avoid it 99% of the time as long as I could afford a more expensive DoF system, but here it reinforces the blocky presentation. Yes, a deep blur is used in spots, but it never seems to totally overwhelm the angular stroke edges it smoothes over. I will always look at the BtS depth of field and feel the roundness of it bringing out the smoothness of the scene, in contrast to the more angular, blocky LiS texture.

Clearly, with objective eyes on the technical details, Chloe is improved in BtS. The eyes immediately show more of a spark of life; there's an attempt to bring the characters towards a slightly cartoon end of realism, and it comes with better facial animation, far closer to what you'd expect from CG actors. We will have to see how the quality is received, after LiS ran into quite a lot of comments about the lip-sync that showed some had trouble connecting with the characters while getting caught up on technical details of the presentation.

LiS opened front and centre with some strong technical effects that highlighted the art direction and brush style being invoked. Water fell and left rectangular patches of wet to reflect the light, the simple models and texture style jumped out and reinforced that opening menu vista while showing off a familiarity with real-time effects around lighting that merged well with the more classic approaches like using fog to tint the scenes in a less photo-realistic style that built on those base textures. Effects were rarely subtle but it provided a very distinct final look. I am interested to see where BtS goes with this new approach which still nods to the old visuals while removing any rough edges.

Monday, 31 July 2017

Platform-Agnostic Hot-Swapping for C

A quick post discussing a coding technique that's pretty old, but possibly not obvious to people who are new or who are used to modern tools (IDEs) that remove the need for doing it yourself.

I've been really busy but tried to jump into a quick game jam to bash out a new renderer in Vulkan over a weekend. The core requirements were rapid iteration and low overhead (verbose, explicit-everything Vulkan was an interesting constraint). As I'm back on Windows (after five-plus years basically exclusively doing serious work on Linux) I'm using Visual Studio and have access to Edit and Continue. But what if I didn't, or was using an unsupported programming language, or wanted this to work across several IDEs (including ones without this feature)?

We're used to building our project, running it, and then rebuilding it for another test. But that's not rapid. Here's the old alternative that's possibly as old as shared libraries (maybe even older). Start your main program loop setting up some data storage and then:

[do all your initialisation here - don't leave it in your dll load]
while (notQuitting) {
  newTimestamp = getDllModifiedTimestamp(dllPath);
  if (newTimestamp != currentTimestamp) {
    if (dllLoaded) { gameUnload(&data); FreeLibrary(dllHandle); } // empty out the old library.
    CopyFile(dllPath, tempDllPath, FALSE); // copy, so the compiler can write dllPath later without being blocked.
    dllHandle = LoadLibrary(tempDllPath); // get the new library.
    gameLoad = (loadCall*)GetProcAddress(dllHandle, "load");
    gameTick = (tickCall*)GetProcAddress(dllHandle, "tick");
    gameUnload = (unloadCall*)GetProcAddress(dllHandle, "unload");
    gameLoad(&data); // let the new code fix up any persisted state.
    dllLoaded = true;
    currentTimestamp = newTimestamp;
  }
  gameTick(&data); // run the actual game.
}

Which gives us a rather simple implementation of a program that waits for us to swap out the dll it uses to actually do the work, and then reloads that new dll. Use the storage you pass in as somewhere to hold any data you want to persist (ideal for the persistent game state if you're making a jam project). Just make sure you encode any tweaks to your data structures into the updated load() function, so you can reshape anything. No, it's not as powerful as stop-anywhere Edit and Continue with the automatic editing of locals to allow the program to resume, but if you're writing a game then you really don't need that granularity - just reshape things as needed outside the loop, between game ticks.

Oh, and one final thing: when Visual Studio is actively debugging the loader application you're in, it doesn't want to let you build another project in the same solution. The command line gets round this issue but I have no idea why anyone thought that compiling a different project should be blocked when anything in the solution is being debugged.

msbuild gameDll.vcxproj /p:configuration=release /p:platform=x64

Monday, 19 June 2017

The End of Quad-Core Dominance

Quad-core CPUs on desktops have been the dominant PC configuration for a long time. Long enough that my old early-2011 system is finally reaching the point where the motherboard is probably dying and the CPU cannot be overclocked any higher to fix poorly optimised shipping games. In fact, the crashes and beeps from the motherboard are quite insistent that that overclock is now beyond the system.

However, console-followers will have noted that octo-cores are now the hot thing. This isn't hyperthreading (hardware schedulers that can shuffle two threads onto the execution units inside a single core without evicting either) but eight genuine Jaguar cores running around 1.6GHz in both the main consoles. The caveat being that a Jaguar core has about half as many execution units (count Int and FP ALUs that can be scheduled vs Ryzen above) in which to do the actual maths the code requires and is clocked at about half the frequency of modern desktop processors. Even the decode and dispatch front-end can only chew through half as much to feed the core when compared to the Ryzen's design - everything is relatively balanced. Effectively, there are eight cores but only about as much work can be done (with the maximum throughput) as with two cores on a high-end desktop CPU. This requires game engines be optimised to work well with low single-threaded performance (apparently unless you're porting Forza Horizon 3 to Win10/UWP!) when tuned to each console (where there is far less overhead from the OS/other tasks running).

My old Sandy Bridge's cores actually sit somewhere between Jaguar and Ryzen in terms of execution units. That's one of the reasons why a new CPU may not clock any higher than my processor (especially at the limits of overclocking) but can still do significantly more work: each core is bigger and can do more each cycle. But, eventually, four cores is simply not something you can keep making wider without leaving resources underutilised. This is one reason why hyperthreading becomes a really good move - juggling two threads on each core increases the chances of being able to dispatch work to every execution unit. The big rumour (basically all but confirmed) is that by this time next year even Intel will have moved to six cores in their upper-end mainstream processors. If you're buying new hardware today (which is where I am) then you must consider this push to increasingly threaded work, the benefit of thread scheduling for wide cores, and the expected future where four cores is something you find in laptops and lower-end desktops.

The i7-7700K may offer the fastest single core, but it appears that Intel's new High-End DeskTop platform (with beta motherboard firmware) is offering many cores without holding back single-threaded performance. With enough money, you can now buy six, eight, or ten cores (up to 20 threads with hyperthreading) with that supreme Intel single-threaded performance. Competition will only increase when AMD's Threadripper (four partially disabled Ryzen dies on a single socket) appears in August. What do these HEDT platforms offer that the current Ryzen (octo-core with those cores we already described as wide) doesn't? Twice as much RAM bandwidth from extra memory controllers and more dedicated PCI-Express 3.0 lanes (rather than lanes bottlenecked off the motherboard controller) to connect graphics cards and other high-speed devices. That becomes more of a concern for a future-looking platform as M.2 SSDs already push to use 4-lanes of bandwidth each. The short load times on PC continue to look like they'll go down, even without new SSD memory types.

CPU                   Launch       Cores/Threads  CBr15 ST  CBr15 MT  CPU+mobo
Threadripper 1950X    August 2017  16/32          170       3000      $1,200
Threadripper 1920X    August 2017  12/24          160       2400      $1,000
i9 7900X              June 2017    10/20          195       2200      $1,200
Threadripper 1910X??? Late 2017?   10/20?         165?      1950?     $850?
i7 6950X              2016         10/20          165       1850      $2,000
i7 7820X              June 2017    8/16           195       1800      $800
Ryzen7 1800X          2017         8/16           160       1650      $575
Ryzen7 1700X          2017         8/16           155       1550      $500
i7 6900K              2016         8/16           155       1500      $1,200
i7 8700K?             Sept 2017?   6/12           195?      1400?     $500?
i7 7800X              June 2017    6/12           185       1350      $600
Ryzen5 1600X          2017         6/12           160       1150      $450
i7 7700K              2016         4/8            190       950       $475
Ryzen5 1500X          2017         4/8            155       800       $350
i5 7600K              2016         4/4            170       650       $375

If we assume that RAM will cost what it costs (4x8GB sticks is not priced significantly differently to 2x16GB sticks; everything uses DDR4), the platform differences come down to CPU costs and motherboard costs. The HEDT platforms are both going to lack value motherboard offerings, which inflates the platform cost beyond simply buying a premium CPU - but those boards will also provide more connectivity, making use of the extra PCI-Express lanes. The full picture will only emerge in August when Threadripper launches, but we can already look at some initial data. I've made a few guesstimates where we've yet to see initial results; AMD's HEDT is definitely the far more speculative section, as we don't even have pricing, let alone beta performance numbers.

Edit: Shortly after writing this, the main reviews (taken after the weekend BIOS updates) landed, so those speculated scores for Intel HEDT have been replaced with solid data - the estimates were basically on the money, except the 7820X is actually slightly stronger in single-threaded tests than expected.

Edit 2: By late July, it had become clear that Intel was likely to react with a new desktop i7 (with six cores) earlier than 2018, and that the Threadripper models on offer at launch were not the full range speculated upon earlier. Rather than being two Ryzen on a chip, they are parts that failed EPYC server testing and so have half their cores disabled, and there may not be a low-end cheap variant (1910X). The table has been updated again (with finalised 1920X/1950X data confirmed in August to be as expected, and no 1910X on the horizon).

Threadripper will all have 60 PCI-Express 3.0 lanes, giving effectively unlimited bandwidth for anything that will fit on a motherboard. The top of Intel's range is also not going to worry anyone who isn't buying several GPUs (44 lanes on the 7900X, 40 on the 6950X & 6900K). Where Intel start to differentiate their offerings is the 7820X & 7800X, which only have 28 lanes - not even enough to fully saturate two 16x GPUs, although current GPUs rarely use the full bandwidth offered. The Ryzen and quad-core Intel mainstream CPUs all have 16 lanes for the GPU connection and then mainly rely on their chipset to provide anything else. Ryzen does have four extra lanes that can be dedicated to an M.2 SSD as well as the chipset connection, while the mainstream Intels generally shuffle far more lanes off the chipset than X370 motherboards do - but you can't use them all at the same time, as they'll just bottleneck. The issue is when motherboards mask lanes: for example, where you have several 16x slots but using them cuts bandwidth or disables other connections like M.2 ports. It's not an immediate concern, as everything should be able to drive a high-end GPU and SSD for now, but expandability may be more limited than the selection of ports (several 16x slots, multiple M.2 ports) on the motherboard implies - a second M.2 port may well be a 2x PCI-Express 2.0 connection and so offer a quarter of the bandwidth (2.0 is half the speed of 3.0) of a full M.2 port.
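The bandwidth arithmetic behind that last point is easy to check. A small sketch, assuming the usual usable per-lane throughput after encoding overhead (roughly 985 MB/s per PCI-Express 3.0 lane with 128b/130b encoding, 500 MB/s per 2.0 lane with 8b/10b - figures from the PCI-Express specs, not from this post):

```python
# Approximate usable bandwidth per lane in GB/s, after encoding overhead
# (3.0: 8 GT/s with 128b/130b encoding; 2.0: 5 GT/s with 8b/10b encoding).
PER_LANE_GBPS = {"3.0": 0.985, "2.0": 0.500}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Total usable bandwidth of a PCI-Express link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

full_m2 = link_bandwidth("3.0", 4)  # a full 4x 3.0 M.2 port: ~3.9 GB/s
cut_m2 = link_bandwidth("2.0", 2)   # a masked 2x 2.0 port: ~1.0 GB/s
print(f"A 2x 2.0 port offers {cut_m2 / full_m2:.0%} of a full 4x 3.0 M.2 port")
```

The ratio lands at roughly a quarter, as the text says: half the lanes at half the per-lane speed.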

We can certainly see how a future hexa-core mainstream i7 may offer extremely good value next year, with both single-threaded performance and enough cores to compete with the brand new 7800X, even if the RAM bandwidth will be reduced - potentially starving cores in workloads that are mainly about fetching data. It is clear that for threaded tasks the Ryzen 1700X already offers even more performance at a similar price thanks to eight cores, and Threadripper should offer a lot more. However, if we look at single-threaded performance, the gap becomes apparent and that is what leads to some issues. CineBench R15 isn't the perfect test but it's illustrative of the gap, one that Threadripper is unlikely to dent. The 7820X retains most of the value of its 10-core cousin that costs $400 more and offers performance in every use case for an expensive but attainable price (no worse than a premium laptop). Of course, all of this changes if Threadripper has some secret sauce that provides single-threaded results beyond those of Ryzen; in less than two months we should have all of the data. The 7820X offers twice the performance (in tests that can spread the work) for less than twice the price of Intel's mainstream i7 option, without sacrificing any single-threaded performance or overclocking ability. For those who don't require the maximum single-threaded performance (especially overclocked), the current Ryzens already offer a significantly more attractive package at a similar price to the quad-core Intels.
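That "twice the performance for less than twice the price" claim can be checked directly against the table's figures:

```python
# Compare the 7820X against the mainstream i7-7700K using the table's numbers.
mt_ratio = 1800 / 950     # CineBench R15 multithreaded score ratio: ~1.9x
price_ratio = 800 / 475   # CPU+motherboard cost ratio: ~1.7x
assert mt_ratio > price_ratio  # more performance gained than extra money spent
print(f"7820X vs 7700K: {mt_ratio:.2f}x the MT score for {price_ratio:.2f}x the price")
```

Not quite a full 2x on the score, but comfortably ahead of the price increase, which is the point being made.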

Last year's $1,200 Intel HEDT offering is certainly looking like a very bad choice, while the $2,000 premium combination looks to be made completely redundant by Threadripper. Hopefully, by speculating about where the mainstream goes next year, we can avoid bad choices if we need to buy a new system this year.