DirectX 11 vs. 12 Comparison Essay

Battlefield 1 launched in October of 2016, so yeah, you could say our analysis of the game’s performance is fashionably late. But we’re making up for it by including copious amounts of data. How thorough did we get?

Well, we tested 16 different graphics cards across two resolutions on a mainstream gaming platform using all four of Battlefield 1’s quality presets, then tested 19 more cards on a high-end gaming platform at three resolutions, again using all four presets. To wrap up, we benchmarked five host processing configurations with two, four, six, eight, and 10 cores at three resolutions to compare CPU scaling. All told, we have 267 runs of the same sequence charted in various ways.

Battlefield 1: A Brief Recap

At this point, Battlefield 1 is a mature AAA title. DICE even released the game’s first major expansion, They Shall Not Pass, last month. Believe it or not, BF1 is considered the 15th installment in a franchise dating back to 2002’s Battlefield 1942. All but three were available for the PC, and excluding that trio, we’ve seen Battlefield evolve across five versions of the Refractor and Frostbite game engines. Frostbite 3.0, upon which Battlefield 1 was built, previously powered Battlefield 4 and Battlefield Hardline. Naturally, then, DirectX 12 is supported, and that’s where we focus our testing efforts.

EA’s minimum and recommended system requirements are fairly stout for a game with such mass appeal. In fact, they look a lot like the requirements for Mass Effect: Andromeda, also built on the Frostbite 3 engine.

Minimum Configuration

  • Processor
  • Memory: 8GB
  • Graphics Card
  • Operating System: Windows 7, 8.1, 10 (64-bit only)
  • Disk Space: 50GB
  • Online: 512 Kb/s or faster Internet connection

Recommended Configuration

  • Processor
  • Memory: 16GB
  • Graphics Card
  • Operating System
  • Disk Space: 50GB
  • Online: 512 Kb/s or faster Internet connection

Unfortunately, EA does not officially support multi-GPU configurations in Battlefield 1. This has become a point of contention of late, since two big game updates apparently caused problems for owners of CrossFire- and SLI-equipped rigs in DX11 (neither technology works at all under DX12). As of this writing, you’re best off with a fast single-GPU setup in this game, regardless of the API you choose (though AMD did just release Radeon Software Crimson ReLive Edition 17.4.2, which supposedly addresses CrossFire scaling in Battlefield 1).

Graphics Settings

The first screen you’re presented with upon clicking More -> Options -> Video includes settings for screen mode, device, resolution, brightness, vertical sync, field of view, motion blur, weapon depth of field, and a colorblind mode. As far as our testing goes, we toggle between 1920x1080, 2560x1440, and 3840x2160, and leave v-sync disabled. Everything else remains at its default.

All of the quality-oriented options live on the Advanced tab. There, you can toggle DirectX 12 support on or off, and alter the resolution scale, UI scale factor, and maximum frame rate. There’s a GPU Memory Restriction setting to keep the game from using more memory than your graphics card physically has, and a Graphics Quality field where the preset is selected and individual quality settings can be specified. The five options are Low, Medium, High, Ultra, and Custom.

The Low preset sets Texture Quality, Texture Filtering, Lighting Quality, Effects Quality, Post Process Quality, Mesh Quality, Terrain Quality, and Undergrowth Quality to Low, and disables Antialiasing Post and Ambient Occlusion.

The Medium preset dials all of those options up one notch, also setting Antialiasing Post to TAA (temporal anti-aliasing) and Ambient Occlusion to HBAO (horizon-based ambient occlusion). TAA goes a long way in eliminating the nasty shimmer artifacts that affect objects like barbed wire, so the feature is recommended whenever your horsepower budget allows for it.

High bumps each setting up once more, maintaining TAA and HBAO.

Ultra does the same, and again leaves TAA and HBAO active.
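
To make that preset behavior easier to see at a glance, here is a minimal sketch of the mapping as described above. This is purely illustrative Python, not data extracted from the game’s files, and the option names simply mirror the in-game menu labels.

    # Illustrative mapping of Battlefield 1's quality presets to individual settings.
    # A sketch for reference, not anything pulled from the game itself.
    QUALITY_OPTIONS = [
        "Texture Quality", "Texture Filtering", "Lighting Quality", "Effects Quality",
        "Post Process Quality", "Mesh Quality", "Terrain Quality", "Undergrowth Quality",
    ]

    def preset(level):
        """Return the settings applied by the Low/Medium/High/Ultra presets."""
        settings = {option: level for option in QUALITY_OPTIONS}
        # Low disables post-process anti-aliasing and ambient occlusion;
        # every higher preset enables TAA and HBAO.
        settings["Antialiasing Post"] = "Off" if level == "Low" else "TAA"
        settings["Ambient Occlusion"] = "Off" if level == "Low" else "HBAO"
        return settings

    for level in ("Low", "Medium", "High", "Ultra"):
        print(level, "->", preset(level))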

How We Test Battlefield 1

This performance exploration is much more in-depth than the game coverage we typically try to publish days after a new title launches. It involves 29 unique graphics cards and two distinct platforms.

For our mainstream platform, we wanted to get as close to Battlefield 1’s minimum requirements as possible. Originally we had an FX-4350 installed, but swapped it out in favor of the lower-frequency FX-8320 (apparently our old six-core chips are no longer where they’re supposed to be). Eight gigabytes of DDR3-1333 from G.Skill on MSI’s 990FXA-GD80 motherboard is right in line with EA’s lowest spec. Moreover, Windows 10 Pro is necessary for testing under DirectX 12.

The higher-end platform needed to be powerful, but not budget-breakingly so. A Core i7-6700K on MSI’s Z170A Gaming M7 with 16GB of G.Skill DDR4-2133 is plenty fast to illustrate any differences between mid-range and enthusiast-oriented graphics hardware. Ryzen wasn't ready yet when testing commenced, so we miss out on AMD's latest and greatest. As you'll see shortly, though, the quality presets gamers really want to use are predominantly GPU-bound anyway.

Of course, then there are the graphics cards. Notably missing is the GeForce GTX 1080 Ti, which wasn't out when our data was collected. Titan X (Pascal) comes close enough to that board's performance, though.

AMD

1st-Gen GCN

  • Radeon R9 270 2GB
  • Radeon R9 280X 3GB

2nd-Gen GCN

  • Radeon HD 7790 2GB
  • Radeon R9 290 4GB
  • Radeon R9 290X 4GB
  • Radeon R9 390 8GB
  • Radeon R9 390X 8GB

3rd-Gen GCN

  • Radeon R9 380 4GB
  • Radeon R9 Fury 4GB
  • Radeon R9 Fury X 4GB

4th-Gen GCN

  • Radeon RX 460 4GB
  • Radeon RX 470 4GB
  • Radeon RX 480 8GB

Nvidia

Kepler

  • GeForce GTX 760 2GB
  • GeForce GTX 770 2GB
  • GeForce GTX 780 3GB
  • GeForce GTX 780 Ti 3GB
  • GeForce GTX Titan 6GB

Maxwell

  • GeForce GTX 950 2GB
  • GeForce GTX 960 2GB
  • GeForce GTX 970 4GB
  • GeForce GTX 980 4GB
  • GeForce GTX 980 Ti 6GB
  • GeForce GTX Titan X 12GB

Pascal

  • GeForce GTX 1050 Ti 4GB
  • GeForce GTX 1060 6GB
  • GeForce GTX 1070 8GB
  • GeForce GTX 1080 8GB
  • Titan X 12GB

There is no built-in benchmark, so we had to find a sequence that could be reproduced hundreds of times without much risk of death. The opening sequence from Episode 4, O La Vittoria, gives us 80 seconds between the second artillery piece firing and reaching the barbed wire fence to collect data. Performance is captured using the tools detailed in PresentMon: Performance In DirectX, OpenGL, And Vulkan. Check out the complete sequence below:

Battlefield 1 Test Sequence

Bear in mind that this is but one slice of action from a long and varied single-player campaign. Moreover, the multi-player experience is much more frenetic, and based on what we’ve seen from Battlefield games in the past, we know it makes thorough use of fast multi-core CPUs.
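
For reference, PresentMon writes a per-frame CSV, and reducing that capture to the average and minimum-style figures charted in articles like this one looks roughly like the sketch below. The frame-time column MsBetweenPresents matches PresentMon's default output as we understand it, and the file name is just a placeholder.

    # Rough sketch: turn a PresentMon per-frame capture into summary numbers.
    # Assumes a CSV with a MsBetweenPresents frame-time column; the file name
    # below is an example, not a real capture.
    import csv
    import statistics

    def summarize(path):
        with open(path, newline="") as f:
            frame_times = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
        frame_times.sort()
        avg_fps = 1000.0 / statistics.mean(frame_times)
        # The 99th-percentile frame time (slowest 1% of frames) converted to FPS
        # is a common proxy for "minimum" frame rate.
        p99 = frame_times[int(0.99 * (len(frame_times) - 1))]
        return avg_fps, 1000.0 / p99

    if __name__ == "__main__":
        avg, p99_fps = summarize("bf1_presentmon_capture.csv")
        print(f"Average: {avg:.1f} FPS, 99th percentile: {p99_fps:.1f} FPS")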



For the last few years, there’s been an ongoing debate about the benefits and advantages (or lack thereof) surrounding DirectX 12. It hasn’t helped any that the argument has been bitterly partisan, with Nvidia GPUs often showing minimal benefits or even performance regressions, while AMD cards have often shown significant performance increases.

[H]ardOCP recently compared AMD and Nvidia performance in Ashes of the Singularity, Battlefield 1, Deus Ex: Mankind Divided, Hitman, Rise of the Tomb Raider, Sniper Elite 4, and Tom Clancy’s The Division. Bear in mind that this was specifically designed as a high-end comparison, pitting the two APIs against each other in GPU-limited scenarios at high resolutions and detail levels, with a Core i7-6700K clocked at 4.7GHz powering the testbed. The GTX 1080 Ti was tested at 4K, while the less powerful GTX 1080 and RX 480 were tested at 1440p. Before you squawk about comparing the GTX 1080 and the RX 480, keep in mind that each GPU was only compared against itself in DX11 versus DX12.

Sniper Elite 4 was a rare game with very strong results in DX12 versus DX11 on Polaris. Data and graph by [H]ardOCP.

The answer to whether DirectX 12 was better or worse than DirectX 11 boils down to “It depends.” Specifically, it depends on whether you’re using an AMD or an Nvidia GPU, and it depends on the game itself. AMD GPUs were less likely to show a performance delta between the two APIs, while Nvidia cards still tended to tilt towards DX11 overall. [H]ardOCP notes in its conclusion that DX11 is still the better overall API option, but that DX12 support has improved from both companies, performance deltas between the two APIs have dropped, and in a few cases, DX12 pulls out strong wins.

Why DirectX 12 hasn’t transformed gaming

A few years ago, when low-overhead APIs like DirectX 12 and Vulkan hadn’t been released and even Mantle was in its infancy, there were a lot of overconfident predictions about how these upcoming APIs would fundamentally transform gaming and unleash the latent power in all of our computers. The truth, thus far, has been more prosaic. How much a game benefits from DirectX 12 depends on the CPU you’re testing with, how GPU-limited your quality settings are, how much experience the developer has with the API, and whether the title was built from the ground up to take advantage of DX12 or had support for the API patched in at a later date.

And the components you choose can have a significant impact on what kind of scaling you see. Consider the graph below, from TechSpot, which compares a variety of CPUs while using the Fury X.

Intel’s Core i7-6700K barely twitches, while the Core i3-6100T’s average frame rate rises 1.14x under DX12 even as its minimum frame rate falls to less than half of its DX11 figure. AMD’s FX-6350 and FX-8370 both see average frame rates rise by nearly 27%, but, again, minimum frame rates drop severely.

A similar point is demonstrated below with a graph of Hitman results. The 6700K drives the Fury X almost as fast in DX11 as it does in DX12, while the FX-8370 improves enormously under the newer API.

One reason things play out the way they do is that the goal and the performance-improving functions of low-overhead APIs have been misunderstood. It’s been known for years that Nvidia GPUs are often faster than AMD’s own GPUs when paired with lower-end Intel or AMD CPUs (pre-Ryzen). Part of the reason is that Nvidia’s DX11 driver implements multi-threading, whereas AMD’s does not. That’s one reason why, in games like Ashes of the Singularity, AMD’s GPU performance skyrocketed so dramatically in DX12. But fundamentally, DX12, Vulkan, and Mantle are methods of compensating for weak single-threaded performance (or of spreading a workload out more evenly so it isn’t bottlenecked by a single thread).

This article from Eurogamer is older, but it still makes an important point — the improvements to performance shown by Mantle and DX12 come from allowing the CPU to process more draw calls per second. If the GPU is already saturated with all the processing it can handle, stuffing more draw calls into the pipe isn’t going to improve anything.
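
A toy model makes both points concrete. Treat frame time as whichever of two stages is slower: the CPU time spent submitting draw calls (which a low-overhead API can shrink and spread across threads) and the GPU time spent rendering. Every number below is invented purely for illustration; none of it is a measurement.

    # Toy model: a frame is gated by the slower of CPU submission and GPU rendering.
    # All numbers are invented for illustration; they are not measurements.
    def frame_time_ms(draw_calls, cpu_us_per_call, submit_threads, gpu_ms):
        cpu_ms = draw_calls * cpu_us_per_call / 1000.0 / submit_threads
        return max(cpu_ms, gpu_ms)

    scenarios = [
        # (label, draw calls, CPU us per call, submission threads, GPU ms per frame)
        ("Slow CPU, DX11-style (1 submit thread)",               10_000, 5.0, 1, 12.0),
        ("Slow CPU, DX12-style (4 threads, lower overhead)",     10_000, 2.0, 4, 12.0),
        ("Fast CPU, DX11-style",                                 10_000, 1.5, 1, 12.0),
        ("Fast CPU, DX12-style",                                 10_000, 0.6, 4, 12.0),
    ]

    for label, calls, per_call, threads, gpu in scenarios:
        ms = frame_time_ms(calls, per_call, threads, gpu)
        print(f"{label}: {ms:.1f} ms/frame ({1000.0 / ms:.0f} FPS)")

In this model the slow CPU goes from badly CPU-bound to GPU-bound and gains enormously, while the fast CPU was already close to the GPU ceiling and barely moves. And once the GPU stage sets the ceiling, extra draw-call throughput on the CPU side stops mattering entirely, which is exactly the GPU-limited behavior the high-resolution tests above exercise.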

Now, having said all this, was there any point to DirectX 12 at all? Absolutely. Games, as a category of applications, have been among the slowest to embrace and benefit from multi-core processors. Even today, the number of games that can scale beyond four cores is quite small. Giving lower-end CPUs the freedom to use their resources more effectively can absolutely pay dividends for consumers on modest hardware. DirectX 12 is also still fairly new, with just a handful of supporting titles. It’s not unusual for a new API to take several years to find its feet and for developers to begin supporting it as a primary option. Game engines have to be developed to work well with it. Developers have to become comfortable using it. AMD, Nvidia, and Intel need to release drivers that use it more effectively and, in some cases, may make hardware changes to their own GPUs so that low-overhead APIs run more efficiently.

Neither the fact that DX12’s gains over DX11 are less dramatic than many would prefer nor its limited adoption at this point is unusual for a new API that makes as many fundamental changes as DX12 does relative to DX11. How those changes will shape the games of the future remains to be seen.
