CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core count of modern CPUs grows, we are reaching a point where the latency to access one core from another is no longer constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could exhibit different latencies depending on whether two cores sit next to each other or at opposite ends of the interconnect. This rings especially true in multi-socket server environments.

But modern CPUs, even desktop and consumer parts, can have variable access latency between cores. For example, the first-generation Threadripper CPUs had two active eight-core dies on the package, and the core-to-core latency differed depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and while we know there are competing tests out there, we feel ours most accurately reflects how quickly an access between two cores can happen.

All three CPUs exhibit the same behaviour: one core seems to be given high priority, while the rest are not.
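
To make the methodology concrete, below is a minimal sketch of how a core-to-core "ping-pong" latency test can be built: two threads are pinned to a pair of cores and bounce an atomic counter back and forth, and the round-trip time is averaged. This is not Andrei’s in-house tool; the Linux-only affinity calls, the choice of cores 0 and 1, and the iteration count are all illustrative assumptions.

    // Minimal core-to-core ping-pong latency sketch (Linux-only affinity calls).
    // Not the in-house AnandTech tool; core IDs and iteration count are illustrative.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <pthread.h>
    #include <sched.h>
    #include <thread>

    static std::atomic<int> token{0};

    static void pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    // Each thread waits for its turn (its values of the counter), then hands the
    // cache line holding 'token' to the other core by incrementing it.
    static void bounce(int core, int first, int iters) {
        pin_to_core(core);
        for (int v = first; v < 2 * iters; v += 2) {
            while (token.load(std::memory_order_acquire) != v) { /* spin */ }
            token.store(v + 1, std::memory_order_release);
        }
    }

    int main() {
        const int core_a = 0, core_b = 1;  // the pair of cores being measured
        const int iters = 200000;

        std::thread peer(bounce, core_b, 1, iters);
        auto t0 = std::chrono::steady_clock::now();
        bounce(core_a, 0, iters);
        peer.join();
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        // One loop iteration is a full round trip, so halve it for a one-way figure.
        std::printf("cores %d<->%d: %.1f ns one-way\n", core_a, core_b, ns / iters / 2);
    }

Sweeping core_a and core_b over every pair produces the familiar latency matrix, where same-die (or same-cluster) pairs stand out from cross-die ones.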

Frequency Ramping

Both AMD and Intel have introduced features to their processors over the past few years that reduce the time it takes a CPU to move from idle into a high-powered state. This means users get peak performance sooner, but the biggest knock-on effect is for battery life in mobile devices: if a system can turbo up and turbo down quickly, it stays in its lowest and most efficient power state for as long as possible.

Intel’s technology is called Speed Shift, although it was not enabled until Skylake.

One of the issues with this technology is that the frequency adjustments can be so fast that software cannot detect them. If the frequency changes on the order of microseconds but your software only probes it every few milliseconds (or seconds), then quick changes will be missed. Moreover, as an observer probing the frequency, you could be affecting the actual turbo performance: when the CPU changes frequency, it essentially has to pause all compute while it aligns the clock of the whole core.

We wrote an extensive review analysis piece on this, called ‘Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics’, due to an issue where users were not observing the peak turbo speeds for AMD’s processors.

We got around the issue by making the frequency probe itself the workload that causes the turbo. The software is able to detect frequency adjustments on a microsecond scale, so we can see how quickly a system reaches its boost frequencies. Our Frequency Ramp tool has already been used in a number of reviews.
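
As a rough illustration of the "probe is the workload" idea, the sketch below times a fixed chunk of serially dependent integer work over and over. Each chunk takes a fixed number of core clocks, so the per-chunk wall time drops as the core ramps from idle to turbo, and the sampling resolution is set by the chunk length (tens of microseconds) rather than by an OS counter. The chunk size, run length, and output format are our assumptions, not the parameters of the actual Frequency Ramp tool.

    // Frequency-ramp sketch: the measurement loop is itself the turbo-triggering
    // load. Chunk size and run length are illustrative guesses, not the actual
    // tool's parameters.
    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    int main() {
        using clk = std::chrono::steady_clock;
        constexpr int kChunk = 100000;   // dependent multiplies per sample
        volatile std::uint64_t sink = 1; // keeps the compiler from deleting the work

        const auto start = clk::now();
        while (clk::now() - start < std::chrono::milliseconds(50)) {
            const auto t0 = clk::now();
            std::uint64_t x = sink;
            for (int i = 0; i < kChunk; ++i)
                x = x * 6364136223846793005ULL + 1442695040888963407ULL; // serial chain
            sink = x;
            const auto t1 = clk::now();
            const double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
            const double ms = std::chrono::duration<double, std::milli>(t0 - start).count();
            std::printf("%7.3f ms  %8.2f us/chunk\n", ms, us); // falls as clocks ramp
        }
    }

Plotting chunk time against elapsed time gives the ramp curve; the knee marks the point where the core reaches its boost frequency.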

From an idle frequency of 800 MHz, it takes ~16 ms for both the i9 and the i5 to boost to their top frequency. The i7 was most of the way there in the same time, but took an additional 10 ms or so.

Comments

  • macakr - Tuesday, March 30, 2021 - link

    really? that bad? I can get that on a 15w Ryzen 4700u!
  • Slash3 - Tuesday, March 30, 2021 - link

    The 4700u mobile APU has a much stronger iGPU core, similar to that of Rocket Lake.
  • Alistair - Wednesday, March 31, 2021 - link

    Yeah, it is that bad. Generally, if you keep the resolution at 900p or 720p (or 50 percent scaling of 1080p, which is ~768p) the performance is OK. But it falls off dramatically at 1080p; no linear scaling here. Basically, it is MUCH worse than laptop parts. I have DDR4-3600 CL16, so I was expecting better. Oh well.

    Runeterra was barely playable at 1440p, just a basic card game, but the FPS shoots up dramatically at 1080p or lower, so that's fine. Would be nice to play Hearthstone and Runeterra with integrated graphics one day...
  • Tom Sunday - Thursday, April 8, 2021 - link

    I am getting on in years and would like to finally replace my 13-year-old Dell XPS 730x. It's time, after being forced to replace (three times) PSUs, motherboards, AIOs, GPUs and RAM. The new Intel i5 11600K holds interest. Will the integrated graphics be good enough for just browsing the net and watching old western or war movies on YouTube, with no gaming at all? How good is the iGPU in this regard? Once I have more money I can hopefully buy a used discrete GPU 'over the table' next year at the local computer show. Will probably have my new system cobbled together by the local strip-center PC shop and by one of the Bangladesh boys. So it will be good to sound somewhat intelligent discussing the hardware and not be pushed into whatever is cheap and in stock that day. Thoughts?
  • Spunjji - Friday, April 9, 2021 - link

    The iGPU on Rocket Lake will be fine for those purposes. However, so would the iGPU on the cheaper Comet Lake processors out there - they may be a better (cheap) option if you're going to buy now and upgrade later.

    Another option would be to go for a system based around the AMD Ryzen 5 3600 and re-use an existing GPU, which would also give you the option to upgrade the CPU again to something like a 5800X or even 5900X later. Personally, I'd go with that approach.
  • 0ldman79 - Friday, April 16, 2021 - link

    The integrated GPU is fine for movies and web.

    I've got a Skylake laptop with GTX 960M, it uses the iGPU until I fire up a game.

    H.264 and H.265 playback is accelerated through the iGPU, so it barely draws any power at all for video playback; the screen draws all the power. It'll play back 1080p60 H.264 or H.265 all day long at under 2 W. There are no issues using the integrated graphics for the web or anything else, and it'll even play some games at lower settings, at roughly 1/4 of a 750 Ti (~960M) in gaming, though the newer chips will be slightly better.
  • Alexvrb - Tuesday, March 30, 2021 - link

    Vega 11 is actually a bit slower than the latest 8 CU Vega found in Renoir/Cezanne. Not enough to catch up to Iris Xe, I don't think... but impressive given the smaller GPU and same power (or better). That's still GCN, too. If they release an APU with a ~10 CU RDNA2 GPU, it should give them a substantial boost... as long as bandwidth doesn't cripple it. Next gen memory should help, but they might also integrate a chunk of Infinity Cache. It has proven effective on larger RDNA2 siblings, giving them good performance with a relatively narrow memory bus.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    Good ole iGPU distraction.

    How about the most important stuff? How about having it appear on the first page?

    • performance per watt

    • performance per decibel

    Apples-to-apples comparison, which means the same CPU cooler in the same case for Intel and AMD.

    That is important, not this obsession over a pointless sort-of GPU.
  • Jezwinni - Saturday, April 3, 2021 - link

    I agree the iGPU is a distraction, but disagree on what you declare the important things to be.

    Personally, performance for the price is the important thing.

    Any extra power draw isn't going to blow up my PSU, make my electricity bills unmanageable, or save the world.

    Why do you consider performance per watt the most important?
  • 0ldman79 - Friday, April 16, 2021 - link

    Performance per watt on iGPU only matters in mobile devices, even then it's barely measurable.

    The iGPU is only going to pull 10W max, normally they peak around half that.
