SPEC CPU - Single-Threaded Performance

SPEC2017 and SPEC2006 are series of standardized tests used to probe overall performance across different systems, architectures, microarchitectures, and setups. The code has to be compiled, and the results can then be submitted to an online database for comparison. The suites cover a range of integer and floating-point workloads and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
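
As a rough sketch of that compile-then-run workflow under SPEC CPU 2017's standard tooling (the runcpu command and its flags are SPEC's own; the config file name here is our placeholder):

runcpu --config=wsl-llvm.cfg --copies=1 --action=run intrate fprate

The --copies=1 switch is what produces the single-instance 'Rate-1' figures quoted below.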

We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks, with one test not running due to WSL's fixed stack size, but it is good enough for like-for-like testing. SPEC2006 is deprecated in favor of 2017, but it remains an interesting comparison point in our data. Because our scores aren't official submissions, as per SPEC guidelines we have to declare them as internal estimates on our part.
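
As an aside on that stack-size quirk: on a bare-metal Linux install, the usual workaround for stack-hungry SPEC workloads is to raise the shell's per-process stack limit before launching the harness, a knob which (to our understanding) the affected WSL builds simply don't honor. A minimal sketch, assuming a POSIX shell and the same placeholder config name as above:

ulimit -s unlimited    # lift the default per-process stack cap
runcpu --config=wsl-llvm.cfg intrate    # then launch the run as usual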

For compilers, we use LLVM for both the C/C++ and Fortran tests; for Fortran we're using the Flang compiler. The rationale for using LLVM over GCC is better cross-platform comparisons to platforms that only have LLVM support, as well as future articles where we'll investigate this aspect more. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2
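
To make the toolchain concrete, here is a minimal sketch of how these compilers and flags would be wired together in a SPEC CPU 2017 config file (the option names follow SPEC's documented config format; the section label and values are our illustration, not the exact config used for these runs):

default:
   CC        = clang
   CXX       = clang++
   FC        = flang
   OPTIMIZE  = -Ofast -fomit-frame-pointer -march=x86-64 -mtune=core-avx2 -mfma -mavx -mavx2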

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2, which puts the limit at Haswell for how old a CPU we can go before the testing falls over. This also means we don't have AVX-512 binaries, primarily because getting the best performance out of AVX-512 requires the intrinsics to be hand-tuned by a proper expert, as with our AVX-512 benchmark.
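
To illustrate what the AVX2 baseline means in practice, here is a small standalone C example of ours (not from the harness) using the kind of 256-bit fused multiply-add these flags allow the compiler to emit; a binary containing such instructions will fault on anything older than Haswell:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* 256-bit fused multiply-add: r = a*b + c, one fused operation per
       lane. FMA and AVX2 both arrived with Haswell, hence the cutoff. */
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("%.1f\n", out[0]);   /* prints 7.0 */
    return 0;
}

Built with the switches above (e.g. clang -Ofast -mfma -mavx -mavx2 fma_demo.c, where fma_demo.c is our hypothetical file name), the multiply-add maps to a single vfmadd instruction rather than a separate multiply and add.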

To note, the requirements of the SPEC license state that any benchmark results from SPEC have to be labelled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

Single-threaded performance of TGL-H shouldn't be drastically different from that of TGL-U; however, there are a few factors which can come into play and affect the results: the i9-11980HK TGL-H system has a 200MHz higher boost frequency compared to the i7-1185G7, and a single core now has access to up to 24MB of L3 instead of just 12MB.

SPECint2017 Rate-1 Estimated Scores

In SPECint2017, the one result which stands out the most is 502.gcc_r, where the TGL-H processor lands in at +16% ahead of TGL-U, undoubtedly due to the increased L3 size of the new chip.

Generally speaking, the new TGL-H chip outperforms its brethren and AMD competitors in almost all tests.

SPECfp2017 Rate-1 Estimated Scores

In the SPECfp2017 suite, we also see generally small improvements across the board. The 549.fotonik3d_r test sees a regression which is a bit odd, but I think it is related to the LPDDR4 vs DDR4 discrepancy between the systems, which I'll get back to on the next page where we'll see more multi-threaded results related to this.

SPEC2017 Rate-1 Estimated Total

From an overall single-threaded performance standpoint, the TGL-H i9-11980HK adds around +3.5-7% on top of what we saw on the i7-1185G7, which lands it amongst the best performing systems, not only amongst laptop CPUs but all CPUs. The performance lead against AMD's strongest mobile CPU, the 5980HS, is even a little higher than against the i7-1185G7, but the chip loses out against AMD's best desktop CPU, and of course against the Apple M1 CPU and SoC used in the latest MacBooks. This latter comparison is apples-to-apples in terms of compiler settings, and it is impressive given that the M1 does it at around 1/3rd of the package power under single-threaded scenarios.

Comments

  • mode_13h - Monday, May 17, 2021

    That's clearly not an M.2 drive and therefore not a laptop. Please reread my question.
  • xpclient - Tuesday, May 18, 2021

    It is from an M.2 Gen 4 drive in a gaming laptop - ASUS ROG Zephyrus S17 GX703 (GX703HS model – Core i9 11900H + RTX 3080 140W, 4K 120Hz screen) but I get your point - at the moment, not many will need such speeds. Gen 3 will serve them fine. Please do not question the benchmark itself. Personally, I got an AMD Ryzen 7 5800H-based machine without waiting for Tiger Lake H45.
  • mode_13h - Tuesday, May 18, 2021

    xpclient> It is from an M.2 gen 4 drive in a gaming laptop - ASUS ROG Zephyrus S17 GX703
    xpclient> (GX703HS model – Core i9 11900H + RTX 3080 140W, 4K 120Hz screen)

    BS. Look at the numbers: you cannot do 10.5 GB/s read or 9.8 GB/s write over PCIe 4.0 x4.

    I don't know what it's from, but it's no mere x4 drive. Maybe a 4-drive RAID-0 or something like that.
  • Spunjji - Tuesday, May 18, 2021

    @mode_13h Those scores are from a laptop that comes with a 3-drive RAID-0 config, which is - quite frankly - an absurd setup to have by default for a gaming system.
  • mode_13h - Tuesday, May 18, 2021

    Spunjji> Those scores are from a laptop that comes with a 3-drive RAID-0 config

    Wow, so I was actually close!

    > which is - quite frankly - an absurd setup to have by default for a gaming system.

    Yeah, I'd say a 3-drive RAID-5 might make sense in a mobile workstation for editing digital cinema footage on-location.
  • xpclient - Wednesday, May 19, 2021

    @mode_13h and @Spunjji, my apologies. I didn't notice the 3 drive RAID config in that review article of an ASUS laptop and missed that completely. My bad.
  • mode_13h - Thursday, May 20, 2021

    > my apologies.

    No problem. It did spark an interesting tangent about RAID in laptops.

    Thanks for the follow-up. It's a good idea to sanity-check the numbers, since that's what first caught my attention.
  • Spunjji - Thursday, May 20, 2021

    @xpclient - no harm no foul!
  • Bagheera - Tuesday, May 18, 2021

    let's not forget that Intel was 2 years late to PCIe4 on DESKTOP and Intel fans didn't seem to mind.
  • mode_13h - Tuesday, May 18, 2021

    > let's not forget that Intel was 2 years late to PCIe4 on DESKTOP

    It's there now. If you weren't in the market for a new PC in the past 2 years, what does it matter?

    > and Intel fans didn't seem to mind.

    Not even just Intel fans. AMD was early with their PCIe 4, on the desktop. That's why they caught Intel by surprise. Because, at the time, PCIe 3 was good enough. We're only just starting to see some minor advantages for PCIe 4, on the desktop. It's no game changer.

    Honestly, of all the criticisms you could make of Intel, this is one of the weaker ones.
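
As a sanity check on the storage numbers discussed in the thread above (a back-of-the-envelope calculation using PCIe 4.0's standard signaling rate and encoding): a PCIe 4.0 lane carries 16 GT/s with 128b/130b encoding, i.e. 16 × 128/130 ≈ 15.75 Gb/s ≈ 1.97 GB/s per lane per direction. An x4 link therefore tops out around 4 × 1.97 ≈ 7.9 GB/s before protocol overhead, which is why a single x4 drive cannot reach 10.5 GB/s. A three-drive RAID-0, on the other hand, only needs 10.5 / 3 = 3.5 GB/s per drive to hit that figure, well within a single Gen 4 SSD's capability.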
