Overclocking on Devil’s Canyon

There is a strange dichotomy within Intel’s product line. On the one hand, Intel offers a CPU engineered for better overclocking and encourages its motherboard partners to invest in overclocking features; on the other, the warranty is technically void if the CPU fails while overclocked. A fair number of regular end-users (business aside), such as my father, who knows how to build a computer but not enough to overclock, can be wary of overclocking because of the warranty.

To put the concept of 'overclocking death' into perspective: I have been an amateur overclocker for almost a decade, competing in national and international competitions both live and over the internet. I mostly focus on air/water (i.e. 24/7 system) overclocking, especially when it comes to AnandTech CPU and motherboard reviews. Out of the 250+ CPUs I own, I have only ever had one CPU fail. This was because I thought I had a different processor in the system and input inappropriate numbers. Those were not random numbers; I simply recalled an erroneous list from memory and past experience, and not something a user overclocking a single system would end up with. As part of our motherboard reviews here at AnandTech, I try to offer a scale showing how the overclock is built up over time, rather than jumping in at the deep end.

One of the easiest ways to do so is to leverage any automatic overclocking options on the motherboard. For example, ASUS offers OC Tuner and a new OC Wizard mode in their Z97 BIOSes to help with overclocking. ASUS’ software has an auto-tuning mode where you can select the maximum temperature or power draw you want. Overclocking can be as simple as using that automatic overclock feature, or as complex as you like. I typically use those automatic overclocking options as a reference point for manual overclocks.

Manual overclocking has advantages similar to using a manual transmission in a racing car. A manual transmission lets the driver select the gears, potentially with better precision than the automatic system. A racing driver can also use different gears for torque or engine braking, and lap times in manual race cars are usually quicker than those in automatic transmission vehicles. With manual overclocking, we can adjust the system to use the overclock we want/need at the lowest possible voltage. An automatic system may apply a lot of voltage to ensure stability, which matters all the more if the CPU is marginal; manual selection can override this behavior.

Manually overclocking a CPU is not a dark art. With practice, guidance and reasonable expectations, a small overclock can be nurtured into a bigger one without fear of decreasing the longevity of a daily system. Most processors with a mild overclock will most likely be replaced before they become irreparably damaged. Venture into extreme, real seat-of-the-pants overclocking and damage becomes possible, but that is not recommended without experience.

Overclocking also introduces this strange notion of ‘stability’ – how stable is your overclock? The word ‘stable’ means different things to different people, but the basic assumption is that the system should be stable for everything you do. Intel and AMD ship their CPUs at a voltage and frequency that keeps them stable no matter the situation. Some users attempt to match that stability by stress testing their system, whereas others are satisfied with gaming stability and have no need for video transcoding stability. Testing the stability of a system typically requires some form of stress test, and again users will select a test that either emulates the real world (video transcoding, PCMark8, 3DMark) or attempts to find any small weakness (Prime95, XTU). The downside of this latter testing philosophy is that a bad stress test has the potential to break a system. Personally, I shudder when a user suggests a system is not stable unless it passes ‘72hr Large FFT Prime95’, because I have seen users irreparably damage their CPUs with it.

My stress tests here at AnandTech typically consist of a run of the POV-Ray benchmark (3 minutes, probes CPU and memory) and a test using OCCT (5 minutes, probes mainly CPU). If there is weakness in the memory controller, POV-Ray tends to find it, whereas if the CPU does not have enough voltage for video transcoding, OCCT will throw up an error. There are outlier circumstances where these tests are not enough for 100% stability, but when my systems are stable with these tests, they tend to devour any gaming or non-AVX transcoding for breakfast.
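This short-and-targeted philosophy can be sketched in Python. To be clear, this is an illustrative stand-in for the POV-Ray/OCCT runs, not the actual tools; `stress_worker` and `quick_stability_check` are hypothetical names, and the geometric-series workload is just an easy computation whose correct answer is known in advance, so a wrong result under load signals instability:

```python
import time
from multiprocessing import Pool, cpu_count

def stress_worker(seconds):
    """Spin the FPU for `seconds`, verifying a known result on every pass.
    A wrong answer under load is the classic symptom of too little voltage."""
    deadline = time.monotonic() + seconds
    passes = 0
    while time.monotonic() < deadline:
        # Geometric series: sum of 1/2^k for k = 0..49 is 2 - 2^-49
        total = sum(1.0 / 2 ** k for k in range(50))
        if abs(total - 2.0) > 1e-9:
            return passes, False  # computational error: unstable
        passes += 1
    return passes, True

def quick_stability_check(seconds=300.0):
    """Load every logical core for `seconds`; True if no worker saw an error."""
    with Pool(cpu_count()) as pool:
        results = pool.map(stress_worker, [seconds] * cpu_count())
    return all(ok for _, ok in results)
```

Five minutes of this sort of load will not match a 72-hour Prime95 run, but per the philosophy above, it catches the gross weaknesses without cooking the chip.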

Overclocking blurb aside, my usual procedure for the i7-4770K is as follows:

  1. In the BIOS, set the DRAM to XMP.
  2. Set the CPU to 4.0 GHz (40x multiplier), with the CPU Core Voltage set to Manual at 1.000 volts.
  3. Save and exit, and see if the system boots to the OS.
  4. If entering the OS fails, go back to the BIOS, raise the voltage by +0.025 volts, and return to step 3.
  5. Once in the OS, run the POV-Ray multithreaded benchmark and the OCCT test for five minutes. If either test fails, go back to the BIOS, raise the voltage by +0.025 volts, and return to step 3.
  6. Monitor temperatures during the OCCT test. If the temperature is at the top of the user's limit, stop overclocking.
  7. If temperatures are low and both tests complete, this CPU frequency/Core Voltage combination is stable and noted. To continue overclocking, go back to the BIOS, raise the CPU multiplier by +1, and return to step 3.

For programmers, in pseudocode:

System.Restart();
BIOS.XMP = true;
CPU.Multiplier = 40;
CPU.CoreVoltage = 1.000;
do {
    try {
        OS.Boot();
        OS.PovRay.Test();
        OS.OCCT.Test();
    } catch {
        CPU.CoreVoltage += 0.025; // failure: add voltage and retry this multiplier
        continue;                 // back to the top of the do-while loop
    }
    OS.Log(CPU.Multiplier, CPU.CoreVoltage);
    CPU.Multiplier++;             // stable: try the next multiplier
} while (CPU.Temperature < 85);
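As a runnable sketch, the same search can be simulated in Python. The per-multiplier voltage requirements below are invented purely for illustration (real silicon varies chip to chip), the voltages are held as integer millivolts to avoid floating-point drift on the +0.025 V steps, and a real run still needs the human temperature check from step 6:

```python
# Simulated silicon: minimum stable core voltage, in millivolts, per
# multiplier. These figures are made up for illustration only.
REQUIRED_MV = {40: 1000, 41: 1025, 42: 1050, 43: 1100,
               44: 1200, 45: 1300, 46: 1400}

def boot_and_test(multiplier, voltage_mv):
    """Stand-in for 'boot to OS, run POV-Ray and OCCT for five minutes'."""
    return voltage_mv >= REQUIRED_MV[multiplier]

def find_stable_overclocks(max_mv=1400, step_mv=25):
    """Walk the multiplier upwards, feeding in extra voltage on failure."""
    multiplier, voltage_mv = 40, 1000
    stable = []
    while voltage_mv <= max_mv and multiplier in REQUIRED_MV:
        if boot_and_test(multiplier, voltage_mv):
            stable.append((multiplier, voltage_mv))  # log the stable combo
            multiplier += 1        # step 7: another +1 on the multiplier
        else:
            voltage_mv += step_mv  # steps 4/5: +0.025 V and retry
    return stable
```

Running this against the invented table walks from 40x at 1.000 V up to 46x at 1.400 V, logging each stable frequency/voltage pair along the way, which is exactly the shape of the results tables below.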

Intel i7-4790K Results

Our i7-4790K sample actually had a high stock voltage – 1.273 volts at load. In chatting with other reviewers and overclockers, it would seem that 1.190 volts is another variant. This has repercussions for overclocking headroom, as it makes the CPU warm at stock settings. By contrast, with manual overclocking we were able to achieve 4.4 GHz on all cores at 1.200 volts, suggesting that Intel was ultra-conservative with its stock voltage decisions.

For consistency with the other Intel CPUs, we underclocked the sample to 40x for all cores to begin, and changed Load-Line Calibration to Level 8 for the ASUS Z97-Pro.

The results were as follows:

From these results, we can see the large voltage jump from 4.6 GHz to 4.7 GHz, similar to what launch Haswell CPUs seem to require. This causes an upswing in both temperatures and power draw, meaning that the user really needs thermal headroom, or a nice CPU, to move closer to 5.0 GHz. For 4.7 GHz, we also added some CPU Cache voltage (+0.050 volts) to ensure benchmark stability, a secondary technique when pushing the voltage limits.

From the stock settings, manually overclocking the system to 4.4 GHz gave a 23W drop in power draw and an 8C drop in temperature. I would happily take that for a daily system. The sweet spot for users with good cooling would seem to be 4.6 GHz with this CPU.

We asked Intel where this CPU sat in terms of their internal testing. It would seem our sample is actually below average (so were our i7-4770K and i7-3770K samples, incidentally). The couple of users I have spoken to who reached 4.8 GHz suggest that CPUs capable of those frequencies are going to be more common than they were with launch Haswell.

If we compare the i7-4790K alongside our i7-4770K launch sample, in the same motherboard with the same cooling:

The temperature delta is awesome. We have a 10-16C delta up to 4.5 GHz, which is still 8C at 4.6 GHz. This gives us enough headroom for 4.7 GHz, showing that Intel has partly solved the problem of heat generation. I am sure that some users will want more and still delid their CPU to get another few degrees, or ask why Intel has not gone all the way and directly soldered the IHS onto the CPU. The businessman in me says that it is all a matter of future headroom. Should Intel face competition, they can perform ‘simple’ tweaks to get the best out of the situation and perhaps stay in the lead. A sort of ‘never show your full hand unless you need to’ mentality. The other argument is one of progress, and we could wonder how many extra adjustments remain in Intel’s bag of tricks.

Intel i5-4690K Results

Although we never had a sample of the i5-4670K in to test, I was less excited about the i5-4690K as it follows more of the Haswell Refresh line of being a small speed bump over the CPU it replaces. A move of 100 MHz means 2-3% in absolute terms, although changing the package to allow for more thermal headroom might make it more interesting. The i5 overclocking CPU makes more sense in bang-for-buck terms if you are not running CPU-limited workloads, but it clearly has to match up to the i7 in overclocking; otherwise that price difference could be justified by a +10% increase in clock speed and a +100% increase in threads.

Thankfully, our i5 sample overclocked as well as the i7 did – actually even more so. With no Hyper-Threading to deal with, we cannot load up a core with two simultaneous AVX threads to heat the CPU as quickly as an i7, and that plays to the i5's advantage. It would seem that the voltage/frequency characteristics of our i5 sample were also better than those of our i7 sample.

Our i5 overclocking results were as follows:

Because this CPU turbos to only 4.0 GHz, the temperatures and load voltage should be a lot lower than the i7's, as shown above. The voltage scale follows a similar trend to the i7, although the jump from 4.7 GHz to 4.8 GHz is greater than +0.125 V, as the system was still giving a BSOD when booting into the operating system. The temperature readings are still nice and low, at 79C for 4.7 GHz. Whereas with the i7 we were hitting the real upper limits and suggesting that 4.6 GHz was a nicer position to be in, this CPU makes 4.7 GHz seem quite easy indeed. It is worth noting that the power draw at 4.7 GHz for the i5 matches the stock power draw of the i7, but do not forget that while both the i5 and the i7 have four cores, Hyper-Threading on the i7 can really drive up power consumption because it is doing more work (26.3% more POV-Ray performance for 26.9% more power at stock).
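That last comparison is easy to sanity-check: dividing the relative performance gain by the relative power gain gives the i7's performance-per-watt relative to the i5. The helper name below is mine, and the two percentages come straight from the stock figures quoted above:

```python
def relative_perf_per_watt(perf_gain_pct, power_gain_pct):
    """Performance-per-watt of the faster chip relative to the slower one.
    A value near 1.0 means the extra work costs a proportionate amount of power."""
    return (1 + perf_gain_pct / 100) / (1 + power_gain_pct / 100)

# i7-4790K vs i5-4690K at stock: +26.3% POV-Ray for +26.9% power
print(round(relative_perf_per_watt(26.3, 26.9), 3))  # → 0.995
```

At roughly 0.995, Hyper-Threading's extra throughput is almost exactly power-neutral per unit of work here, which is why the i5 at 4.7 GHz drawing i7-stock power is not as alarming as it first sounds.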

Devil’s Canyon Review: Intel Core i7-4790K and i5-4690K CPU Benchmarks

  • Spirall - Friday, July 11, 2014 - link

    Thanks for the article! I understand Intel has no chance to go back on the improvements (including base frequencies) with Broadwell Ks. Would like to see memory speeds (including BCLK change impact on them). These are what the Haswell Ks should have been from the beginning.
  • CrystalBay - Friday, July 11, 2014 - link

    Thanks Dr. Ian, it was worth the wait! I had already bought a 4790K, but your breakdown hypothesis of the TIM has put this article above others...
  • Kevin G - Friday, July 11, 2014 - link

    This is the Haswell Intel should have introduced a year ago for the enthusiast. There is a bit more tangible benefit for owners of Sandy Bridge and Ivy Bridge chips to jump to Haswell now. Still the main reason for the upgrades in my view is the improved chipsets with Z97 being rather nice over the older Z68 and Z77.

    As for coming Haswell-E chips, it'll remain rather niche like the current socket 2011 chips. You either have an explicit need for more cores, more PCI-e lanes or more than 32 GB of memory.

    The delays of Broadwell make me wonder just how long it'll be on the market before SkyLake arrives. At first Intel was targeting Broadwell as a pure mobile part but eventually recanted. Then the delays hit, with desktop parts seemingly appearing in 2015. If SkyLake is a late 2015 part as envisioned, then Broadwell will have a short life span and may ultimately be worth passing on to jump directly to SkyLake.
  • FlushedBubblyJock - Thursday, November 20, 2014 - link

    I can see the nm performance wall hitting... for general use gamers... some fancy software specialty is going to be the near future selling points... looks like it to me anyway.
  • Muyoso - Friday, July 11, 2014 - link

    I wish there was an overclocked 3770k @ 4.5ghz in the charts so that I could compare my current setup to what is new. I have a feeling that it would be fairly competitive still.
  • beginner99 - Friday, July 11, 2014 - link

    Why don't you test multiplayer games? Seriously, single-player games at 1080p will basically always be GPU-limited with the current top-end Intel chips, and these 5 fps differences are pretty much irrelevant. What would be interesting is BF4 64-player maps, for example, with Mantle or DX. These benchmarks are useless for choosing a gaming CPU, or even worse, they can be misleading in some cases.
  • ZeDestructor - Friday, July 11, 2014 - link

    It's harder to make multiplayer completely reproducible.

    Then again, now that I think of it.. if you have a LAN server, it should be possible to do that...

    Hmmm...
  • Ian Cutress - Friday, July 11, 2014 - link

    Benchmarks have to be consistent. Just applying FRAPs to a few online games of BF4 for each CPU has no guarantee of consistency and it is an apples to oranges comparison. One match could be heavy in explosions for example, or if I decide to camp out as a sniper. There is also local variability based on the server you connect to and the number of individuals logged in.

    Until an intensive online multiplayer game has the option to record a time demo and emulates network delay accurately, the only way we can test gaming is via single player. BF2 came pretty close back in the day, but no modern titles that I know of have this feature and remain consistent.
  • et20 - Friday, July 11, 2014 - link

    Starcraft 2 games can be quite intense and the replay functionality is solid.
    I doubt it provides network delay emulation. Why do you want that anyway?
