re: Build the Best Gaming PC Your Money Can Buy: A Detailed Guide (Updated Sep 2014)

Posted by ILikeLSUToo
Central, LA
Member since Jan 2008
18018 posts
Posted on 9/29/13 at 2:10 am to
------------------------
++++ALERT: You are reading an out-of-date version of the guide and wasting your time. Read the PDF for the most accurate, up-to-date info. It's best to download the PDF and use a proper PDF reader, as Google's formatting of PDFs breaks all of the links. Link to directly download the PDF. I have stopped updating the text in the thread because the forum's limited code makes it far too time-consuming to change images and add text.++++
------------------------

====Printed Circuit Boards (PCBs)====
This is what I mean by a PCB:


Technically, the PCB is the actual board that holds everything else: capacitors, MOSFETs, the GPU, memory modules, conductive pathways, etc. But for the purposes of this discussion, PCB refers to the board plus everything on it except the GPU and VRAM.

For simplicity's sake, there are two types of PCBs to consider in video cards: reference and non-reference.

A reference PCB is one that was designed by the GPU manufacturer (AMD or NVIDIA) specifically to run your GPU and VRAM—the factory standard. This design is provided to their Add-in-Board (AIB) Partners, which include the companies mentioned earlier. Some companies simply sell the GPUs with reference PCBs and reference coolers, merely slapping their brand name on the factory design. Other companies design better air coolers and sell them with reference PCBs. And then there are those that design their own PCBs for these GPUs. These “non-reference” PCBs may have added enhancements such as better power phase designs and custom coolers, all of which lead to better overclocking potential (and some PCBs are simply redesigned to accommodate the AIB partner’s custom cooler).

On the other hand, there is a dirtier side to non-reference PCBs, because some AIB partners design them with the exact opposite goal in mind: cutting costs by making those PCBs as cheaply as possible, with the bare minimum required for the card to run at its stock specification. When AMD and NVIDIA design their reference PCBs, every component on the card has a definite purpose. But an AIB attempting to be highly competitive will put the cards through a “component reduction” process, where the corporate bean counters decide which components (voltage regulators, MOSFETs, capacitors, etc.) aren’t completely necessary, or which ones can be replaced by cheaper, inferior parts.

You see more and more of these cheap non-reference cards pop up as a generation of GPUs gets older, so always be diligent about checking reviews of specific models when looking for a deal, especially ones that appear to significantly undercut the prices of the others. This cost-cutting extends to the cooling solution as well, and these cards tend to have heat and stability problems under extended workloads; they certainly aren’t going to overclock very well, if at all. In fact, quite a few non-reference cards come with locked voltage.

Reference PCBs also overclock fairly well and accommodate a variety of after-market air coolers. If you only plan to air cool, a non-reference card with an enhanced PCB and better cooling would be a good choice, as long as the price isn’t too far above reference designs.

====Warranty====
Video card warranties range from 1 year to lifetime. A typical warranty will be 3 years. Don’t overpay just because a card has a longer warranty, as not all companies honor their warranties as readily as others. Do, however, be wary of cards with shorter warranties, as they may be one of those “cost cutter” designs. In general, you can expect a defective card to show symptoms within the first 90 days anyway.

====Overall Reputation====
How well does the company honor its warranties? How poor is their customer service, phone support, etc.? How long does it take to get a replacement from them if your card fails? Here’s a hint: They’re all pretty bad. I guarantee you there are an endless number of horror stories posted on the internet about every single company and its customer service. Your best bet is to read reviews and consider the rest of the criteria mentioned above, so that you buy a card with a statistically lower chance of needing warranty replacement. Companies that have to deal with fewer replacements may be more likely to provide a faster replacement if something does happen.

====///====My Video Card Picks====\\\====
$1,000–$1,100 budget—AMD Radeon HD 7970 (3GB)
Why the 7970: It comes in at the right price, around $300-$320 for decent brands as of August 2013. NVIDIA’s 2GB 770 may be a slightly better performer, but it doesn’t justify the price. Look for a 7970 under $320 (after rebates and promos) that has the best reviews. Even better if it’s a factory overclocked (GHz) model, but you may not find one at this budget. If you have the extra money, I’d recommend spending as much as $330-340 on a GHz model with well-reviewed custom cooling. If you opt for NVIDIA, I’d recommend the 4GB 770 over the 2GB version. It’s a $440 card, so you’d have to extend your budget by over $100. I honestly cannot make an educated assessment on whether or not it’s worth it.

$800 budget—AMD Radeon HD 7950 (3GB)
Why the 7950—While not as powerful as a 7970, it’s remarkably close. And given its price tag of $240 and below, it easily holds the title of best bang for buck. There have been some recent sales putting some of these cards under $200, including good brands such as the MSI Twin Frozr. If you prefer, $250 will get you an NVIDIA 760 (2GB). These cards are about even in current game performance, so the 7950 was chosen due to price and the extra GB of VRAM.

$600 budget—AMD Radeon HD 7870 (2GB) or AMD Radeon HD 7950 (3GB)
Why the 7870—Quite simply, it’s the only current gen card that can compete for performance at this budget. Many of them are comparable to a 7950’s performance. On the other hand, with the recent sales of 7950s, there is barely a $20 difference in price. You might as well opt for a well-reviewed 7950 at a $200 price point if possible. If the 7870 prices fall further to create a larger price gap between it and the 7950, I have no problem recommending the 7870.

Yes, I realize the above recommendations make it seem like I’m AMD biased. Believe it or not, I was looking for a reason to recommend the NVIDIAs first and foremost in the $1000 and $800 budget categories. However, the current pricing does not justify the performance increase obtained through a mere refresh of old architecture. They simply do not fit within these predefined budgets without making other hardware compromises that I believe to be unacceptable.
This post was edited on 3/20/14 at 3:34 pm
Posted by ILikeLSUToo
Posted on 9/29/13 at 2:10 am to

The CPU

In the graphics card arena, the AMD vs. NVIDIA war can get ugly. But it still doesn’t hold a candle to the vitriol exchanged in CPU debates—AMD vs. Intel. Those are the two CPU companies under consideration here, and I won’t attempt to convince you that one is better than the other in all situations. My goal will be simply to educate you on why it’s not that big of a deal in a gaming PC at the budgets we’ve set here. For that reason, I won’t spend much time discussing all of the different CPU options out there.

First, let’s clear the air regarding any preconceived notions about clock speed. A common argument among uninformed buyers is that AMD’s $200 4GHz CPU must outperform Intel’s $320+ 3.5GHz CPU because of the higher clock. False.

When you consider everything that makes up a “CPU” (cores, cache, pathways, registers, etc.), there's no point in comparing clock speeds in isolation. Even fairly educated enthusiasts, me included, find it difficult to make a purchase decision based solely on a CPU’s technical specifications. It’s far more practical to compare CPUs by looking at quantifiable real-world performance, benchmarks, and pricing.

====///====Working Harder vs. Working Smarter====\\\====
When a processor does its job (the workload), it breaks the load into stages/phases, then further divides those stages into units that can be processed more than one at a time. Software might provide commands/instructions in a linear manner, but a CPU can prioritize those instructions differently. Let's say a program sends out instructions X, Y, and Z. The CPU can divide those instructions into subunits (say, X1, X2, X3, Y1, Y2, and so on) and execute them in the most efficient order (which might be X2, Z1, X3, X1, Y2, etc.). That’s an oversimplified example, of course. It’s much more complex in reality.
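To make that oversimplified example a little more concrete, here is a toy scheduler in Python. The instruction names and dependencies are entirely made up; real hardware tracks readiness per execution port and does far more, but the core idea is the same: issue whichever sub-unit is ready rather than following strict program order.

```python
# Toy "out-of-order" scheduler: issue whichever sub-unit is ready
# (all of its dependencies finished) instead of strict program order.
# Instruction names and dependencies are hypothetical.
def schedule(units, deps):
    done, order = set(), []
    while len(order) < len(units):
        # Grab the first sub-unit whose dependencies are all finished.
        ready = next(u for u in units
                     if u not in done and deps.get(u, set()) <= done)
        done.add(ready)
        order.append(ready)
    return order

units = ["X1", "X2", "X3", "Y1", "Z1"]
deps = {"X1": {"X2"}, "X3": {"X1", "X2"}}  # hypothetical data dependencies
print(schedule(units, deps))  # ['X2', 'X1', 'X3', 'Y1', 'Z1']
```

Note that X2 executes before X1 here because X1 depends on X2's result, even though the program listed X1 first. That reordering is what the CPU does millions of times per second.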

The design of the CPU is what makes clock speeds meaningless to compare between different CPUs. How much a chip actually gets done depends on how much work it can do in what order, the divisions and subdivisions of labor it can manage, and the speed at which each of those units/subunits can be executed (a lot of this has to do with the registers, which store commands, and the cache, which stores data associated with those commands).

Perhaps more difficult to explain are the design differences that let one CPU break commands down into more (and therefore smaller) subunits than another, because it extends beyond the number of cores. The end result, though, is that the clock speed dictates how quickly those subunits are executed, while the architecture/design dictates how many can be executed per cycle, i.e., instructions per cycle (a blurrier distinction, because I'm not an engineer). More subunits per instruction means a smaller workload per unit, which means faster processing of each subunit at the rated clock speed. On top of that, less work per subunit also means more subunits fit into each clock cycle.

At this point, Intel takes the lead in doing more per clock cycle, and the differences can boil down to material choices, die size, transistor count, and overall design. But it can sometimes require a trade-off: more work at a slower pace, or less work at a faster pace? It largely depends on how efficiently the software can use the architecture, which is why CPU specs must be taken with a grain of salt and why you need to examine relevant benchmarks. And which benchmarks are relevant to us? The gaming ones, of course. More on that later.
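The "clock speed isn't everything" point reduces to simple arithmetic: rough throughput is clock speed times instructions per cycle. A sketch, with IPC figures invented purely for illustration (real IPC varies wildly with the workload):

```python
# Rough throughput = clock speed (GHz) x instructions per cycle (IPC).
# IPC values below are invented for illustration only.
def billions_of_instructions_per_sec(clock_ghz, ipc):
    return clock_ghz * ipc

cpu_a = billions_of_instructions_per_sec(4.0, 1.0)  # higher clock, lower IPC
cpu_b = billions_of_instructions_per_sec(3.5, 1.4)  # lower clock, higher IPC
print(cpu_b > cpu_a)  # True: the "slower" 3.5GHz chip gets more done
```

This is why a spec-sheet comparison of GHz alone tells you almost nothing; the benchmark measures the product of both factors for your actual workload.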

====How Many Cores?====
So, what about core count? That 4GHz AMD has 8 physical cores, and the Intel 3770K/4770K only has 4 physical cores.

More cores are certainly better, but there is a common misconception about AMD’s core count. The current 8-core AMD FX series are not exactly “true” 8-core processors. Their architecture uses what’s called a modular design: 4 modules containing 2 cores each. Each module contains only a single floating-point unit (FPU), which is responsible for carrying out floating-point computations, and the two cores on each module must share that FPU, as well as other components of the module, such as its cache. That does not mean it’s really a 4-core CPU; it still has 8 physical cores, they are just somewhat crippled by the need to share certain components.

A 4-core Intel CPU with hyperthreading has 4 physical cores and 4 virtual (logical) cores. As I said before, Intel’s current CPUs are able to do more work per clock cycle per individual physical core. Hyperthreading squeezes even more out of them: each physical core can work on a second thread whenever parts of the core would otherwise sit idle waiting their turn, so the next set of instructions gets processed immediately. It ensures that core resources aren't needlessly idle in applications that are actually coded to use multiple cores, or in general multitasking.

If we were building a 3D modeling, video editing/encoding machine, a quad-core hyperthreaded Intel i7 would perform better than AMD’s 8-core FX CPU and use less power—but it would also cost over $100 more than the AMD. However, since we’re building a gaming PC, all we really need is a CPU that won’t be the performance bottleneck in the games you enjoy.

====///====What Makes a CPU Bottleneck Game Performance?====\\\====
A CPU bottleneck means the processor is holding back the graphics card from rendering more frames per second. Meaning, the graphics card is not living up to its potential, and therefore neither is your gaming experience.

This bottleneck can occur because while your graphics card is working hard to render what you see on the screen, your CPU is handling what’s actually happening in the game. CPUs are particularly good at handling physics, which includes character movements, explosions, crashes, shooting, etc. in a dynamic environment.
Some games are CPU-intensive simply because they were coded that way. Skyrim, for example, relies heavily on your CPU because certain rendering components (such as shadows) were coded to run on the CPU.
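A back-of-the-envelope way to think about a bottleneck: each frame, the CPU prepares the work and the GPU renders it, so whichever chip takes longer per frame sets your frame rate. The millisecond timings below are invented for illustration:

```python
# Toy frame-rate model: the slower of the two chips limits your fps.
# Per-frame timings are made-up numbers for illustration.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

print(fps(8.0, 12.0))   # GPU-bound: ~83 fps; a faster GPU raises this
print(fps(20.0, 12.0))  # CPU-bound: 50 fps; a faster GPU changes nothing
```

In the second case, upgrading the graphics card buys you exactly zero frames, which is why pairing a high-end GPU with a weak CPU is wasted money.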

This post was edited on 3/20/14 at 3:34 pm