Final Words

Two laptops. Two platforms. It is rare to see a manufacturer offer such equal footing to both AMD and Intel by outfitting a premium laptop with processors from each, and it gave us an equally rare opportunity to test the latest processors from the two companies in a genuinely apples-to-apples fashion.

In the laptop space, design, cooling, and a manufacturer's requirements can play a big part in how a particular chip performs, thanks to adjustable power limits, surface temperature targets, and more. We have seen the lowest-tier CPU outperform the highest-tier CPU just by virtue of a better cooling system, so having processors from AMD and Intel, both launched in 2019, in the same chassis is a wonderful opportunity.

There aren't too many ways to sugarcoat the results of this showdown, though. AMD's Picasso platform, featuring Zen+ cores coupled with a Vega iGPU, has been a tremendous improvement for AMD, but Intel's Ice Lake platform runs circles around it. Sunny Cove cores coupled with the larger Gen11 graphics have proven to be too much to handle.

On the CPU side, no one should be too surprised by the results. We've already seen on the desktop that AMD's Zen+ cores were competitive with, but slightly slower than, Intel's Skylake-era parts, and the new Sunny Cove microarchitecture is a big step forward in IPC for Intel. On purely CPU-bound tasks, Ice Lake really stretched its legs: despite being a 3.9 GHz chip, in single-threaded SPEC 2017 it managed to come very close to the 5.0 GHz Core i9-9900K, a chip with a massively higher TDP. Zen+ is outclassed here, and that showed in the benchmark results, and especially in the total run time: on our 8-thread SPEC 2017 run, the Ice Lake platform finished just a hair over two hours ahead of Picasso.
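
Spelling out that frequency-normalized comparison is instructive. Below is a back-of-the-envelope sketch in Python; the clock speeds are the two chips' peak single-core turbos, and since Ice Lake only came close rather than matching, the result is an upper bound on the per-clock gain:

```python
# If a 3.9 GHz Ice Lake core nearly matches a 5.0 GHz Core i9-9900K in
# single-threaded SPEC 2017, equal scores would imply a per-clock advantage
# equal to the frequency ratio; "nearly" makes this an upper bound.
ice_lake_ghz = 3.9      # Ice Lake laptop chip's peak single-core turbo
coffee_lake_ghz = 5.0   # Core i9-9900K peak single-core turbo (Skylake-class core)

upper_bound_ipc_gain = coffee_lake_ghz / ice_lake_ghz - 1
print(f"Implied Sunny Cove per-clock gain: up to ~{upper_bound_ipc_gain:.0%}")
# -> up to ~28%, workload-dependent
```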

But things fare better for AMD on the GPU side of matters. Even though Intel has certainly closed the gap with Ice Lake's iGPU, AMD continues to hold an advantage, especially with the 11 Compute Unit Ryzen Surface Edition processor found in the Surface Laptop 3. Intel has dedicated a lot more die area to the GPU, and the results put it almost on equal footing with the Vega-based GPU on Picasso. On the more complex GPU tasks AMD tends to have a slight lead, and AMD's low-level driver support also seems to benefit it in DirectX 12-based workloads. But Ice Lake's GPU is helped by the much quicker CPU it is coupled to, so depending on the specific test it can come out ahead.

Ice Lake does all of this with much better power efficiency as well. Overall battery life is quite a bit longer, and idle power draw is notably lower. Case in point: at minimum screen brightness, the Ice Lake system was merely sipping power, drawing around 1.7 Watts, versus 3.0 Watts for the AMD system.
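
To translate that idle delta into runtime, here is a minimal sketch assuming a nominal 46 Wh battery (an assumed round figure for illustration; real-world runtime depends on the full load curve, not just the idle floor):

```python
# Rough runtime implied by the measured idle floor power.
# The 46 Wh capacity is an assumed round figure, not a spec quote.
battery_wh = 46.0
idle_watts = {"Ice Lake": 1.7, "Picasso": 3.0}

for platform, watts in idle_watts.items():
    print(f"{platform}: ~{battery_wh / watts:.0f} h at the idle floor")
# Ice Lake: ~27 h, Picasso: ~15 h -- an idle-floor ceiling, not real-world use
```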

It was fantastic to see AMD get a design win in a premium laptop this year, and the Surface Laptop 3 is going to turn a lot of heads over the next year. AMD has long needed a top-tier partner to really help its mobile efforts shine, and it now has that strong partner in Microsoft, with the two in a great place to make future designs even better. Overall, AMD has made tremendous gains in its laptop chips with the Ryzen launch, but the company has been focusing more on the desktop and server space, especially with the Zen 2 launch earlier this year. For AMD, the move to Zen 2 in the laptop space can't come soon enough, and will hopefully bring much closer power parity with Intel's offerings as well.

Meanwhile for Intel, Ice Lake has been years in the making, and after a long delay it is finally here. Having dug into the platform in depth, it's clear that Ice Lake is an incredibly strong offering from Intel. The CPU performance gains are significant, particularly because they were made in the face of a frequency deficit. But the biggest gains were on the GPU side, where Intel's Gen11 GT2, in its full 64 Execution Unit configuration, likely represents the biggest single increase in GPU performance since the company started integrating GPUs. It pulls very close to AMD's Vega, closing the performance gap to almost zero.
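
For a sense of scale on that 64 EU configuration, a rough peak-FLOPS comparison is below. The per-clock rates follow from the two architectures; the peak clock speeds are assumptions for these specific parts:

```python
# Theoretical peak FP32 throughput. Per-clock rates are architectural:
# a Gen11 EU issues 16 FP32 FLOPs/clock (2 x 4-wide FMA pipes), a Vega CU
# 128 (64 shaders x 2 for FMA). Peak clocks of ~1.1 GHz (Gen11) and
# ~1.4 GHz (Vega 11) are assumptions; sustained clocks at 15 W are lower.
gen11_gflops = 64 * 16 * 1.1        # 64 EUs @ ~1.1 GHz
vega11_gflops = 11 * 64 * 2 * 1.4   # 11 CUs @ ~1.4 GHz

print(f"Gen11 64 EU: ~{gen11_gflops:.0f} GFLOPS")   # ~1126
print(f"Vega 11:     ~{vega11_gflops:.0f} GFLOPS")  # ~1971
# Vega leads on paper; Ice Lake's memory bandwidth edge narrows it in practice.
```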

2019 has been a big year in the laptop space, with both Intel and AMD bringing new tools to the game. 2020 should be just as exciting, and if we’re lucky, we’ll get another chance to do this all over again.

 
Comments

  • TheinsanegamerN - Friday, December 13, 2019 - link

    It isn't just speed; the Intel chip uses LPDDR4X. That's an entirely different beast from LPDDR4, let alone normal DDR4.

    AMD would need to redesign their memory controller, and they have just... not done it. The writing was on the wall, and I have no idea why AMD didn't put LPDDR4X compatibility in their chips; hell, I don't know why Intel waited so long. The sheer voltage difference makes a huge impact in the mobile space.

    You are correct, pushing those speeds at normal DDR4 voltage levels would have tanked battery life.
  • ikjadoon - Friday, December 13, 2019 - link

    Sigh, it is just speed. DDR4-2400 to DDR4-3200 is simply speed: there is no "entirely new controller" needed. The Zen+ desktop counterpart is rated between DDR4-2666 and 2933.

    LPDDR4X is almost identical to LPDDR4: "LPDDR4X is identical to LPDDR4 except additional power is saved by reducing the I/O voltage (Vddq) to 0.6 V from 1.1 V." Whoever convinced you that LPDDR4X is "an entirely different beast" from LPDDR4 is talking out of their ass, and I caution you to believe anything else they ever say.

    And, no: DDR4-3200 vs DDR4-2400 wouldn't have tanked battery life; it would simply have made it somewhat worse. DDR4-3200 can still run at the stock 1.2 V that SO-DIMM DDR4 relies on, but it's pricier and you'd still pay the MHz power penalty.

    I don't think RAM speed/voltage has ever "tanked" a laptop's battery life: shaking my head here...
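
The voltage and frequency claims in the comments above reduce to the first-order CMOS switching-power relation P ∝ C·V²·f. A minimal sketch of that arithmetic, covering I/O switching power only (it ignores DRAM array power, termination, and controller overhead):

```python
# First-order dynamic power model: P ~ C * V^2 * f (I/O switching only).
def relative_power(v: float, f: float, v0: float, f0: float) -> float:
    """Dynamic power relative to a (v0, f0) baseline."""
    return (v / v0) ** 2 * (f / f0)

# LPDDR4X's 0.6 V I/O rail vs LPDDR4's 1.1 V, at the same data rate:
print(f"LPDDR4X vs LPDDR4 I/O power: ~{relative_power(0.6, 1, 1.1, 1):.0%}")  # ~30%

# DDR4-3200 vs DDR4-2400, both at the stock 1.2 V:
print(f"DDR4-3200 vs 2400 I/O power: ~{relative_power(1.2, 3200, 1.2, 2400):.0%}")  # ~133%
```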
  • mczak - Friday, December 13, 2019 - link

    I'm quite sure you're wrong here. The problem isn't the memory itself (as long as you get default 1.2 V modules, which exist up to DDR4-3200), but the CPU. Zen(+) CPUs require a higher SoC voltage for higher memory speeds (memory frequency is tied to the on-die interconnect frequency). And as far as I know, this makes quite a sizeable difference - not enough to really matter on the desktop, but enough to matter on mobile. (Although I thought Zen+ could use the default SoC voltage up to DDR4-2666, but I could be wrong on that.)
  • Byte - Friday, December 13, 2019 - link

    Ryzen had huge problems with memory speed and even compatibility at launch. No doubt they had to play it safe on laptops. They should have it mostly sorted out with Zen 2 laptops; it's why the notebooks are a gen behind, whereas Intel notebooks are usually a gen ahead.
  • ikjadoon - Saturday, December 14, 2019 - link

    We both agree it would be bad for battery life and a clear AMD failure. But, the details...more errors:

    1. Zen+ is rated up to DDR4-2933, so 3200 is a short jump. Even then, AMD couldn't even rate this custom SKU to 2666 (the bare minimum for Zen+). AMD put zero work into this custom SKU (whose only saving grace is graphics, and even that was neutered). It's obviously a low-volume part (relative to what AMD sells otherwise) for such a high-profile design win.

    2. If AMD can't rate (= bin) *any* of its mobile SoC batches to support even DDR4-2666 at normal voltages, I'd be shocked.

    For any random Zen+ silicon, sure, it'd need more voltage. The whole impetus for my comments is that AMD created an entire SKU for Microsoft and seemed to take it out of the oven half-baked.

    Or perhaps they had binned the GPU side so hard that very few of those 11 CU units could've survived a second binning on the memory controller.
  • azazel1024 - Monday, December 16, 2019 - link

    So, all that being said: yes, it had a huge impact. GPU-based workloads are heavily memory-speed dependent. Going from 2400 to 3200 MT/s likely would have brought a 10-25% increase in the various GPU benchmarks (toward the lower end for those that are a bit more CPU-biased). That would take AMD from being slightly better overall in GPU performance to holding a commanding lead.

    On the CPU side of things, many of the Intel wins were in workloads that lean heavily on memory performance. Going from 2400 to 3200 would probably have only moved the AMD chip up 3-5% in many workloads (20-40% in the more memory-subsystem-dependent SPEC INT tests), but that would still have evened the playing field a lot more.

    Going to 3733 like the Intel chip would have been even more of the same.

    Zen 2 and much higher memory bandwidth can't come soon enough for AMD.
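
The bandwidth ratios behind those estimates are easy to check, since peak bandwidth scales linearly with data rate at a fixed bus width (actual iGPU gains are sub-linear, as the comment's 10-25% range suggests):

```python
# Peak DRAM bandwidth scales linearly with data rate at a fixed bus width.
base_mts = 2400
for mts in (3200, 3733):
    print(f"{base_mts} -> {mts} MT/s: +{mts / base_mts - 1:.0%} peak bandwidth")
# 2400 -> 3200: +33%; 2400 -> 3733: +56%
```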
  • Zoolook - Saturday, December 21, 2019 - link

    It's not about binning; they couldn't support that memory and stay within their desired TDP, because they would have had to run the Infinity Fabric at a higher speed.
    They could have used faster memory and lower CPU and/or GPU clocks, but this is the compromise they settled on.
  • Dragonstongue - Friday, December 13, 2019 - link

    AMD makes/designs what a client wants; in this case that client is MSFT, who are well known for making sure they get (and hopefully pay a lot for) exactly what they want, for reasons only they understand.

    In this case AMD really can't say "we are not doing that," as that would likely mean losses running into the millions (or more), versus just saying "not a problem, what would you like?"

    MSFT is very well known for catering to INTC's and NVDA's whims (they have, and still do, even when it costs everyone).

    Still, AMD and MSFT should have made sure not to hold back the chip's potential performance by using the minimum-spec memory speed, instead choosing the highest speed they know (through testing) it will support.

    I imagine AMD (or others) could have gone with an LP memory option; I call BS on claims that AMD would have had to rearchitect their design to use LP over standard-voltage memory, since LP likely needs very few changes (if any), unlike a ground-up redesign for an entirely different memory type.

    They should have stepped up to the next speed level instead of the 2400 baseline (2666, 2933, 3000, 3200), as the power draw difference is negligible with proper tuning (which MSFT likely would have done... but then again MSFT pulls stupid moves all the time, as long as it keeps their "buddies" happy, and who cares about the consumers themselves).
  • mikeztm - Friday, December 13, 2019 - link

    LPDDR4/LPDDR4X is not related to DDR4.
    It's an upgraded LPDDR3, which likewise is not related to DDR3.

    The LPDDR family, much like the GDDR family, is a totally different type of DRAM standard. LPDDR parts draw almost zero power when not in use, though during active RAM access they do not draw significantly less power than DDR4.

    LPDDR4 first shipped in the iPhone 6s in 2015, and it took Intel 4 years to finally catch up.
    BTW, this article has an intentional typo: LPDDR4-3733 on Intel is actually quad-channel, because each channel is half-width (32-bit) instead of DDR4's 64-bit.
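
That channel-width point works out as follows. Both platforms total 128 bits of memory bus, so the bandwidth gap comes from the data rate, not extra width (a sketch using the configurations in these two laptops):

```python
# Peak bandwidth = channels x (bus width in bytes) x MT/s.
lpddr4x_gbs = 4 * (32 // 8) * 3733 / 1000   # Ice Lake: 4 x 32-bit @ 3733 MT/s
ddr4_gbs = 2 * (64 // 8) * 2400 / 1000      # Picasso: 2 x 64-bit @ 2400 MT/s

print(f"LPDDR4X-3733, 4 x 32-bit: ~{lpddr4x_gbs:.1f} GB/s")  # ~59.7
print(f"DDR4-2400,    2 x 64-bit: ~{ddr4_gbs:.1f} GB/s")     # ~38.4
# Same 128-bit total width; the data rate makes the difference.
```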