Build the Best Gaming PC Your Money Can Buy: A Detailed Guide (Updated Sep 2014)

Posted by ILikeLSUToo (Central, LA) on 9/29/13 at 2:10 am
------------------------
++++ALERT: You are reading an out-of-date version of the guide and wasting your time. Read the PDF for the most accurate, up-to-date info. It’s best to download the PDF and use a proper PDF reader, as Google’s formatting of PDFs breaks all of the links. Link to directly download the PDF. I have stopped updating the text in the thread because the forum’s limited code makes it far too time-consuming to change images and add text.++++
------------------------

The CPU

In the graphics card arena, the AMD vs. NVIDIA war can get ugly. But it still doesn’t hold a candle to the vitriol exchanged in CPU debates: AMD vs. Intel. Those are the two CPU companies under consideration here, and I won’t attempt to convince you that one is better than the other in all situations. My goal is simply to explain why the choice isn’t that big of a deal in a gaming PC at the budgets we’ve set here. For that reason, I won’t spend much time discussing all of the different CPU options out there.

First, let’s clear the air regarding any preconceived notions about clock speed. A common argument among uninformed buyers is that AMD’s $200 4GHz CPU must outperform Intel’s $320+ 3.5GHz CPU because of the higher clock. False.

When you consider everything that makes up a “CPU” (cores, cache, pathways, registers, etc. etc.), there's no point in comparing clock speeds. Even fairly educated enthusiasts, me included, find it difficult to make a purchase decision based solely on a CPU’s technical specifications. It’s far more practical to compare CPUs by looking at quantifiable real-world performance, benchmarking, and pricing.

====///====Working Harder vs. Working Smarter====\\\====
When a processor does its job (the workload), it breaks the load into stages/phases, then further divides those stages into units that can be processed in parallel. Software might provide commands/instructions in a linear manner, but a CPU can prioritize those instructions differently. Let’s say a program sends out instructions X, Y, and Z. The CPU can divide those instructions into subunits (say, X1, X2, X3, Y1, Y2, and so on) and execute them in the most efficient order (which might be X2, Z1, X3, X1, Y2, etc.). That’s an oversimplified example, of course; it’s much more complex in reality.
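
To make that concrete, here’s a rough sketch in C (the variable names and values are invented for illustration): the CPU doesn’t have to run these operations strictly in the order written, only in an order that respects which results depend on which.

```c
#include <stdio.h>

/* Hypothetical instruction stream: the program issues these in
   order, but an out-of-order CPU tracks data dependencies and
   runs whatever is ready first. */
static int work(int a, int b, int c, int d, int e) {
    int x1 = a * b;   /* X1: ready immediately                     */
    int z1 = d + e;   /* Z1: independent of X1; the CPU may run it
                         alongside X1, or even before it           */
    int x2 = x1 * c;  /* X2: must wait for X1's result             */
    int y1 = x2 - z1; /* Y1: must wait for both X2 and Z1          */
    return y1;
}

int main(void) {
    printf("%d\n", work(2, 3, 4, 5, 6)); /* prints 13 either way */
    return 0;
}
```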

This is why clock speeds aren’t realistically comparable between different CPU designs. Performance depends on how much can be done in what order, the divisions and subdivisions of labor the design allows, and the speed at which each of those units/subunits can be executed (a lot of it comes down to the registers, which hold the instructions being worked on, and the cache, which stores the data associated with those instructions).

Harder to explain is how one CPU’s design can break commands down into more, and therefore smaller, subunits than another’s; it extends beyond the number of cores. The end result, though, is that clock speed dictates how quickly those subunits are executed, while the architecture/design dictates how many can be executed per cycle, i.e., instructions per cycle (a blurrier distinction, because I’m not an engineer). More subunits per instruction means a smaller workload per subunit, which means each subunit finishes faster at the rated clock speed. On top of that, a smaller workload per subunit also means more subunits fit into each clock cycle.

At this point, Intel takes the lead in doing more per clock cycle, and the differences can boil down to material choices, die size, transistor count, and overall design. But it can require a trade-off: more work at a slower pace, or less work at a faster pace? Which wins largely depends on how efficiently the software can use the architecture, which is why CPU specs must be taken with a grain of salt and why it’s necessary to examine relevant benchmarks. And which benchmarks are relevant to us? The gaming ones, of course. More on that later.
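
As a back-of-the-envelope illustration of why raw clock speed misleads (the clock and instructions-per-cycle numbers below are made up for the example, not real chip figures):

```c
#include <stdio.h>

/* Effective throughput = clock speed (GHz) x instructions per
   cycle (IPC). Both sets of numbers are invented for illustration. */
int main(void) {
    double clock_a = 4.0, ipc_a = 1.0;  /* higher-clocked chip */
    double clock_b = 3.5, ipc_b = 1.3;  /* lower-clocked chip  */

    printf("A: %.2f billion instructions/sec\n", clock_a * ipc_a); /* 4.00 */
    printf("B: %.2f billion instructions/sec\n", clock_b * ipc_b); /* 4.55 */
    return 0; /* the "slower" chip actually gets more done per second */
}
```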

====How Many Cores?====
So, what about core count? That 4GHz AMD has 8 physical cores, while the Intel 3770K/4770K has only 4 physical cores.

More cores are certainly better, but there is a common misconception about AMD’s core count. The current 8-core AMD FX series are not exactly “true” 8-core processors. Their architecture uses what’s called a modular design, with 4 modules containing 2 cores each. Each of those 4 modules contains only a single floating-point unit (FPU), which is responsible for carrying out computations, so the two cores on a module must share the FPU, as well as other components of the module, such as its cache. That said, this does not mean it’s really a 4-core CPU. It still has 8 physical cores; they are just somewhat crippled by the need to share certain components.

A 4-core Intel CPU with hyperthreading has 4 physical cores and 4 virtual cores. As I said before, Intel’s current CPUs are able to do more work per clock cycle per individual physical core. Hyperthreading squeezes even more out of them: each physical core presents itself to the operating system as two logical cores, so whenever part of a core sits idle waiting its turn (on a memory access, for example), it can immediately be fed the next set of instructions from another thread. This ensures that no core resources are needlessly idle in applications that are actually coded to use multiple cores, or in general multitasking.
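
One way to see hyperthreading from the software side: the operating system simply sees twice as many logical processors as physical cores and schedules threads onto them. A minimal sketch for Linux (it only queries the count; it doesn’t say anything about performance):

```c
#include <stdio.h>
#include <unistd.h>

/* On a hyperthreaded quad-core i7, this typically prints 8:
   two logical processors per physical core. */
int main(void) {
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("Logical processors: %ld\n", logical);
    return 0;
}
```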

If we were building a 3D modeling, video editing/encoding machine, a quad-core hyperthreaded Intel i7 would perform better than AMD’s 8-core FX CPU and use less power—but it would also cost over $100 more than the AMD. However, since we’re building a gaming PC, all we really need is a CPU that won’t be the performance bottleneck in the games you enjoy.

====///====What Makes a CPU Bottleneck Game Performance?====\\\====
A CPU bottleneck means the processor is holding back the graphics card from rendering more frames per second. In other words, the graphics card is not living up to its potential, and therefore neither is your gaming experience.

This bottleneck can occur because while your graphics card is working hard to render what you see on the screen, your CPU is handling what’s actually happening in the game. CPUs are particularly good at handling physics, so that includes character movements, explosions, crashes, shooting, etc., in a dynamic environment.

Some games are CPU-intensive simply because they were coded that way. Skyrim, for example, relies heavily on your CPU because certain rendering components (such as shadows) were coded to run on the CPU.
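
A crude way to model it: a frame can’t finish until both the CPU work (game logic, physics) and the GPU work (rendering) are done, so FPS is capped by whichever is slower. A toy sketch with invented per-frame timings:

```c
#include <stdio.h>

/* Toy frame-time model: FPS is limited by the slower of the CPU
   and GPU workloads. All timings are invented for illustration. */
int main(void) {
    double cpu_ms = 12.0; /* per-frame simulation cost */
    double gpu_ms = 8.0;  /* per-frame rendering cost  */

    double frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
    printf("FPS: %.0f\n", 1000.0 / frame_ms); /* 83: CPU-bound */

    gpu_ms = 4.0;         /* a GPU twice as fast changes nothing... */
    frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
    printf("FPS: %.0f\n", 1000.0 / frame_ms); /* still 83 */
    return 0;
}
```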


====What Kind of CPU Will You Need to Avoid a Bottleneck?====
For most games, nothing special, as the GPU is the primary source of processing power. That’s the short answer. A better answer is “It depends.” Look at these results published by The Tech Report (techreport.com) in 2012. They tested 18 different CPUs in 4 popular games using a Radeon 7950 (a fairly high-end card, as you already know).

First, let’s look at Batman: Arkham City, a GPU-dominant game with a lot of physics involvement. It’s also a game known for being poorly coded for PC.

[Chart: Batman: Arkham City average FPS by CPU (The Tech Report)]
Most of these CPUs are a bit dated, and I’ll add up front that the CPUs I will suggest for the sample budgets perform at around the i5-2400 to i5-3570K level in gaming.

To put things into perspective, the top CPU in that chart (the i7-3960X) is a $1,000 CPU, and the one right below it, i7-3770K, is a $300 CPU. Go down two rungs to the i5-3570K, and you have a $200 CPU. While those expensive CPUs are incredible pieces of hardware for other applications, they just don’t have much of an added benefit in PC games.

The Tech Report’s test also measured frame times (frame latency), which improved in direct correlation with frame rate (i.e., the better CPUs had higher FPS and lower frame latency). I mentioned frame latency briefly in an earlier section; to read more about it, check out these links: Extremetech article and Tech Report article.

Now, let’s look at the results from a notoriously CPU-dependent game, Skyrim, including a similar test conducted by Tom’s Hardware:

[Charts: Skyrim average FPS by CPU (The Tech Report and Tom’s Hardware)]
Tom’s Hardware used more aggressive settings and tested weaker CPUs, which revealed a wider disparity between the top-end CPUs and some of the lower-end chips, including the i3-2100.

Next up, we have the results from Crysis 2. The Crysis series is known for its GPU-intensive engines; at amped-up settings, they’ll bring even the strongest GPU to its knees.

[Chart: Crysis 2 average FPS by CPU (The Tech Report)]
The Crysis 2 results are interesting, given that the CPUs ranging from $200 to $1,000 have virtually zero performance difference in such a GPU-intensive game. Now, look at Battlefield 3 (single player). The performance is nearly identical across every CPU.

[Chart: Battlefield 3 single-player average FPS by CPU (The Tech Report)]
The multiplayer mode of Battlefield 3 is far more CPU-intensive, however, due to the dynamic environment, and the fact that you can have up to 64 players at once in a server (requiring lots of real-time physics calculations).

Have you noticed something about all of these results? The Intel CPUs come out on top. Every single time. As I said, Intel’s architecture in those CPUs allows for far better performance per core. This still holds true in today’s CPUs despite AMD’s improved architecture. Tests show Intel’s i5-4670K ($220) outperforming AMD’s FX-8350 by nearly 50% in single-core performance. However, the 8350 (an 8-core CPU) is slightly ahead of the 4670K (a quad-core CPU with no hyperthreading) in multi-threaded applications. Basically, the more threads an application uses, the smaller the 4670K’s performance advantage over the 8350.

So, what does that have to do with gaming? Right now, not much. Many games are single-threaded (meaning they only send one set of instructions to the CPU at a time), and even the multi-threaded games only use 3 to 4 threads at a time in parallel. In other words, multi-threaded games aren’t using enough of the AMD FX-8350’s 8 cores for its core count to be an advantage.
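
As a sketch of why the extra cores sit unused (the thread count and task function are invented stand-ins, not how any particular game engine works):

```c
#include <pthread.h>
#include <stdio.h>

#define GAME_THREADS 4 /* typical of games at the time; on an
                          8-core FX-8350, 4 cores would sit idle */

/* Stand-in for one slice of per-frame game work. */
static void *game_task(void *arg) {
    (void)arg;
    /* ... physics, AI, audio, etc. ... */
    return NULL;
}

int main(void) {
    pthread_t workers[GAME_THREADS];

    /* Only GAME_THREADS tasks exist, so no more than that many
       cores can be busy, regardless of how many the CPU has. */
    for (int i = 0; i < GAME_THREADS; i++)
        pthread_create(&workers[i], NULL, game_task, NULL);
    for (int i = 0; i < GAME_THREADS; i++)
        pthread_join(workers[i], NULL);

    printf("Ran %d parallel tasks\n", GAME_THREADS);
    return 0;
}
```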

What this ultimately translates to is a negligible performance difference between these two CPUs for gaming. That much is obvious for GPU-dependent games, but even in CPU-intensive games, the GPU is still doing enough of the work to make up for much of the difference in per-core performance between the two.

There are some who may argue that the next generation of consoles will bring more games coded for 8 cores—the reason being, the new consoles have 8-core CPUs. To be honest, I don’t see that happening in the near future. It’s been stated in numerous articles that the 8-core Jaguar CPU shipping with the consoles is a weak one, amounting to only about half of the performance of an FX-8350. It’s likely at least half of the cores will be devoted to the operating system and processes that must constantly run in the background. That essentially leaves a disproportionately weak quad-core CPU for gaming, meaning that coding games to take advantage of GPU power will still be important, perhaps even more important than ever.

However, I’m not saying that games won’t be coded to use more threads. On the contrary, since the console CPU is so weak, taking advantage of multi-threading will also be important, and this will certainly extend to benefit multi-core utilization on PCs as well. But this is good news for anyone with a modern PC, and especially great news for anyone planning to build one. This can mean more GPU-intensive games coupled with optimized code to take better advantage of any multi-threaded CPU.
