AMD’s marketing materials show its new generation of processors taking a double-digit lead in FPS, but as we can see below, when these processors are mounted in a real PC, things change a bit.
AMD Ryzen 9 5900X vs. Intel Core i9-10900K in gaming
The tests we will show you below were carried out by TechPowerUp (link at the bottom of this article). They were run with a simulation built on the Unreal Engine 4 graphics engine with the DirectX 12 API, one that keeps both the processor and the GPU under a constant load without ever changing the scene; in other words, it always stresses the hardware to the same degree no matter how much time passes (unlike the graphics benchmarks we all know, which have dynamic scenes that vary the load the system is subjected to).
We’ll start by explaining the charts you can see below. The vertical (Y) axis shows frames per second (FPS), and obviously higher is better (vertical sync is off, of course). The horizontal (X) axis shows the load factor as a percentage, which is relatively arbitrary, but think of it this way: further to the left, each frame is easier to render, so higher FPS is achieved, while further to the right, each frame is more complex and places greater demands on the GPU. The idea is to sweep across the CPU-GPU bottleneck to see where it occurs and what happens when it does.
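To make the idea of sweeping across the bottleneck concrete, here is a minimal toy model of it. The function and all the numbers in it are our own illustrative assumptions, not TechPowerUp's data: we simply model achieved FPS as being capped by whichever component is slower.

```python
# Hypothetical toy model of the CPU-GPU bottleneck described above.
# All names and numbers here are illustrative assumptions.

def achieved_fps(cpu_fps_limit: float, gpu_fps_at_base: float, load_factor: float) -> float:
    """FPS is capped by whichever component is slower.

    load_factor scales the per-frame GPU cost: at 2.0 each frame
    costs the GPU twice as much, halving its attainable FPS.
    """
    gpu_fps_limit = gpu_fps_at_base / load_factor
    return min(cpu_fps_limit, gpu_fps_limit)

# At a light load the CPU limit dominates (CPU-bound)...
print(achieved_fps(cpu_fps_limit=200, gpu_fps_at_base=400, load_factor=1.0))  # 200.0
# ...while at a heavy load the GPU limit dominates (GPU-bound).
print(achieved_fps(cpu_fps_limit=200, gpu_fps_at_base=400, load_factor=4.0))  # 100.0
```

Moving right along the X axis in the charts corresponds to raising `load_factor`: at some point the GPU term drops below the CPU term, and from there on the CPU no longer matters.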
In the charts above you can see the results of what we have just explained, using a GeForce RTX 2080 Ti. AMD achieves much higher FPS than Intel in the left area of the graph (around a 20% improvement), and that is the benefit of the Zen 3 microarchitecture’s IPC. In this CPU-limited scenario, there is no doubt that AMD is clearly the winner.
As we move to the right on the graph, the FPS decreases, an expected behavior because, as we indicated previously, the GPU is more and more loaded and slows things down. Interestingly, the drops are almost symmetrical for Intel and AMD, but at very different FPS rates, and only above 40% does the gap between the two platforms begin to narrow.
At 80%, AMD and Intel are already achieving almost the same FPS; the GPU is becoming more and more of a limitation. As the GPU load increases, there is less work for the processor, because there are fewer frames to compute, and game logic has no reason to update its state more than once per frame, since only one update per frame can ever be displayed. This reduces CPU usage, and that’s why the lines representing Intel and AMD end up converging.
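The "one update per frame" pattern can be sketched as a simple game loop. This is a hypothetical illustration of the coupling the paragraph describes (the function names are ours, not from any real engine): state updates are tied to rendered frames, so a lower frame rate directly means fewer update calls and less CPU work.

```python
# Hypothetical sketch: game-state updates tied one-to-one to rendered
# frames, as described above. Names and structure are our assumptions.

def run_loop(frames_to_render: int) -> int:
    """Simulate a loop where each iteration renders one frame and
    performs exactly one state update; returns the update count."""
    updates = 0
    for _ in range(frames_to_render):
        # update_game_state(): runs once per frame; extra updates
        # would never be displayed, so they would be wasted CPU work.
        updates += 1
        # render_frame(): when GPU-bound, this step dominates the
        # iteration time and throttles how often updates can happen.
    return updates

# Fewer frames per second -> proportionally fewer CPU-side updates.
print(run_loop(60))   # 60 updates for 60 frames
print(run_loop(30))   # 30 updates for 30 frames
```

This is why a GPU bottleneck "drags down" CPU usage with it: the processor simply has nothing left to do between frames.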
With the GPU load still increasing, there is a small window between 80% and 120% where the Intel Comet Lake platform outperforms AMD (marked with a red arrow). It’s quite surprising, but the data is too clear for this to be a random event; it happens for a reason. One theory is that the aggressive idle-state logic of AMD’s processors powers down some of their cores when they are no longer fully loaded; in theory, Zen 3 does this very quickly and with no interaction from the operating system (for power management), and wakes them very quickly too, but not instantly, and that ultimately shows.
Once we hit 140% on the graph, the gaming FPS is practically the same for Intel and AMD. This should surprise no one, because the GPU is the bottleneck (on purpose): whatever CPU you have, if the GPU can’t deliver more, you won’t get more FPS in games, because performance no longer depends on the processor.
How RAM speed influences performance
The following test shows the effect of memory speed. Instead of DDR4-3200 CL14 RAM, DDR4-3800 CL16 RAM was used, with the Infinity Fabric running at a 1:1 (IF:DRAM) ratio. This is the best possible scenario for AMD, as a Ryzen 9 5900X cannot run the Infinity Fabric at 2000 MHz.
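For readers unfamiliar with the 1:1 ratio, the arithmetic is simple, and we can sketch it (the helper function is ours, purely for illustration). DDR4 transfers data twice per clock, so "DDR4-3800" means a 1900 MHz memory clock; at a 1:1 FCLK:MCLK ratio the Infinity Fabric also runs at 1900 MHz.

```python
# Illustrative arithmetic for the 1:1 Infinity Fabric ratio mentioned
# above; the function name is our own, hypothetical helper.

def fabric_clock_mhz(ddr4_rating: int, ratio: float = 1.0) -> float:
    """Infinity Fabric clock for a given DDR4 transfer rate and IF:DRAM ratio."""
    memory_clock = ddr4_rating / 2  # DDR = double data rate, so MT/s / 2
    return memory_clock * ratio

print(fabric_clock_mhz(3800))  # 1900.0 MHz at 1:1
# DDR4-4000 at 1:1 would require a 2000 MHz fabric clock, which,
# as noted above, the Ryzen 9 5900X cannot sustain:
print(fabric_clock_mhz(4000))  # 2000.0 MHz
```

That 2000 MHz ceiling is why DDR4-3800 is effectively the sweet spot here: faster memory would force the fabric off the 1:1 ratio and add latency.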
Here we can see that the shape of the curve is almost the same as the previous scenario, but there are slight differences: the area where Intel is faster than AMD is still there, but the advantage is much less pronounced.
What if a less powerful graphics card is used? Or a more powerful one?
In the previous tests, as we mentioned, an RTX 2080 Ti was used, a graphics card that, given its price, not many users can afford, so let’s see what would happen with a less powerful GPU, such as an RTX 2060, the top seller of NVIDIA’s previous generation.
Again, the curve is almost the same, but in this case Intel comes out even better than before. When the processor is the limit, AMD still registers huge gains over Intel in games; the “bump” in favor of Intel is still there, but instead of spanning 80% to 120%, it comes much earlier, between 50% and 90%. This is because the RTX 2060 offers much lower base performance than the RTX 2080 Ti, which means the workload becomes GPU-bound much sooner. It’s also worth mentioning that the window where Intel leads is much wider.
Now let’s see what happens if we use an RTX 3090, NVIDIA’s high-end graphics card today.
Intel’s performance advantage has completely disappeared, and AMD takes the lead across the entire graph. Here, AMD’s and Intel’s gaming FPS don’t converge until 125%.
AMD is (much) better than Intel but only in ideal scenarios
After extensive testing, we can see that AMD outperforms Intel in gaming, but only in the best cases: using faster memory and a very powerful graphics card. Sure, that’s a reasonable setup, but the reality is that the biggest market niche is the mid-range (especially when you factor in the price of high-end graphics cards), which means the majority of users will find themselves in a position where Intel gives them better gaming performance than AMD.
Note that AMD’s AM4 platform makes upgrades much easier thanks to its extensive backward compatibility, so gamers are likely to upgrade the processor independently of the rest of the hardware, keeping the motherboard, memory, and graphics card. In those cases the gains will still be substantial, but only if games don’t push the GPU to 100%, because then they would be in a position where an Intel processor would give them a better result.