The Cortex-A77 µarch: Going For A 6-Wide* Front-End

The Cortex-A76 represented a clean-sheet design in terms of its microarchitecture, with Arm applying years of accumulated CPU design knowledge and lessons to a from-scratch implementation. This allowed the company to design a new core that was forward-thinking in terms of its microarchitecture. The A76 was meant to serve as the baseline for the next two designs from the Austin family: today's new Cortex-A77 as well as next year's "Hercules" design.

The A77 introduces new features with the primary goal of increasing the IPC of the microarchitecture. Arm's goal this generation is a continuation of its focus on delivering the best PPA in the industry, meaning the designers were aiming to increase the performance of the core while maintaining the excellent energy efficiency and die area characteristics of the A76.

In terms of frequency capability, the new core remains in the same range as the A76, with Arm targeting 3GHz peak frequencies in optimal implementations.

As an overview of the microarchitectural changes, Arm has touched almost every part of the core. Starting from the front-end, we're seeing higher fetch bandwidth with a doubling of the branch predictor's capability, a new macro-op cache structure acting as an L0 instruction cache, a 50% wider middle-core (at the rename stage and beyond), a new integer ALU pipeline, and revamped load/store queues and issue capability.

Delving deeper into the front-end, a major change is that the branch predictor's run-ahead bandwidth has doubled from 32B/cycle to 64B/cycle. The reason for this increase is the wider and more capable middle-core: the branch predictor's speed needed to improve in order to keep the rest of the machine sufficiently fed. Arm instructions are 32 bits wide (16 bits for Thumb), so the branch predictor can run ahead by up to 16 instructions per cycle. This is considerably more (2.6x) than the 6-wide middle-core can consume, and the reason for this imbalance is to allow the front-end to catch up as quickly as possible whenever there are branch bubbles in the core.
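As a quick sanity check on those figures, here's the arithmetic as a minimal sketch (the 6-wide consumption figure is the middle-core's rename width, covered further down):

```python
# Branch predictor run-ahead bandwidth on the A77.
BYTES_PER_CYCLE = 64     # doubled from the A76's 32B/cycle
INSTR_BYTES = 4          # fixed 32-bit Arm instruction encoding

fetch_per_cycle = BYTES_PER_CYCLE // INSTR_BYTES
print(fetch_per_cycle, "instructions/cycle")               # -> 16

MIDDLE_CORE_WIDTH = 6    # rename width on a MOP-cache hit
print(round(fetch_per_cycle / MIDDLE_CORE_WIDTH, 2), "x")  # -> 2.67x overprovisioned
```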

The branch predictor’s design has also changed, lowering branch mispredicts and increasing accuracy. Although the A76 already had a very large Branch Target Buffer capacity at 6K entries, Arm has increased this by a further 33% to 8K entries in the new design. Arm also appears to have flattened the BTB hierarchy: the A76 had a 16-entry nanoBTB and a 64-entry microBTB, and on the A77 these look to have been replaced by a single 64-entry L1 BTB with a 1-cycle latency.
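To make the flattened hierarchy concrete, here's a toy model of a two-level BTB lookup. The entry counts and the L1's 1-cycle latency come from Arm's disclosure; the dictionary-based lookup, the main BTB's latency, and the absence of capacity/eviction handling are simplifying assumptions for illustration:

```python
# Toy two-level BTB: try the small, fast level first, fall back to the big one.
class BTBLevel:
    def __init__(self, entries, latency):
        self.table = {}          # pc -> predicted branch target
        self.entries = entries   # capacity (eviction not modelled here)
        self.latency = latency   # cycles to produce a prediction

    def lookup(self, pc):
        return self.table.get(pc)

l1_btb   = BTBLevel(entries=64,   latency=1)  # replaces the A76's nano/micro BTBs
main_btb = BTBLevel(entries=8192, latency=2)  # 8K entries; latency is an assumption

def predict_target(pc):
    for level in (l1_btb, main_btb):
        target = level.lookup(pc)
        if target is not None:
            return target, level.latency
    return None, None            # miss: redirect resolved later in the pipeline
```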

Another major feature of the new front-end is the introduction of a macro-op cache structure. For readers familiar with AMD's and Intel's x86 processor cores this might sound familiar, akin to the µOP/MOP cache structures in those designs, and indeed one would be correct in assuming they serve similar functions.

In effect, the new macro-op cache serves as an L0 instruction cache, containing already decoded and fused instructions (macro-ops). In the A77's case the structure holds 1.5K entries; if one assumes macro-ops have a density similar to 32-bit Arm instructions, that equates to about 6KB.
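The back-of-the-envelope conversion, under that same density assumption:

```python
MOP_ENTRIES = 1536            # "1.5K entries"
ASSUMED_BYTES_PER_MOP = 4     # assumption: same density as a 32-bit Arm instruction
print(MOP_ENTRIES * ASSUMED_BYTES_PER_MOP / 1024, "KB")   # -> 6.0 KB
```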

The peculiarity of Arm’s implementation of the cache is that it’s deeply integrated with the middle-core. The cache is filled after the decode stage (in a decoupled manner), after instruction fusion and optimisations. On a cache hit, the front-end feeds the rename stage of the middle-core directly from the macro-op cache, shaving a cycle off the core's effective pipeline depth. What this means is that the core’s branch mispredict penalty has been reduced from 11 cycles down to 10 cycles, even though it has the frequency capability of a 13-cycle design (+1 decode, +1 branch/fetch overlap, +1 dispatch/issue overlap). While we don’t have directly comparable figures for the newest competing cores, Arm’s number here is outstandingly good, as other cores have significantly worse mispredict penalties (Samsung M3, Zen 1, Skylake: ~16 cycles).
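As a rough illustration of what a one-cycle lower penalty is worth, here's a first-order model; the penalty figures are from the article, while the base IPC, branch density, and mispredict rate are illustrative assumptions rather than Arm numbers:

```python
# Amortise the flush penalty of mispredicted branches over all instructions.
def effective_ipc(base_ipc, branch_density, mispredict_rate, penalty):
    cpi = 1.0 / base_ipc + branch_density * mispredict_rate * penalty
    return 1.0 / cpi

for name, penalty in [("A77", 10), ("A76", 11), ("M3/Zen1/Skylake", 16)]:
    print(f"{name}: ~{effective_ipc(3.0, 0.20, 0.02, penalty):.2f} effective IPC")
```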

Arm’s rationale for going with a 1.5K-entry cache size is that it was aiming for an 85% hit-rate across its test suite workloads. Less capacity would reduce the hit-rate significantly, while a larger cache would bring diminishing returns. Compared against a 64KB L1 cache, the 1.5K-entry MOP cache is about half the area.

What the MOP cache also enables is higher bandwidth into the middle-core. The structure is able to feed the rename stage with 64B/cycle – again significantly more than the rename/dispatch capacity of the core, and again this imbalance towards a "fatter" front-end bandwidth allows the core to quickly hide branch bubbles and pipeline flushes.

Arm talked a bit about “dynamic code optimisations”: here the core will rearrange operations to better suit the back-end execution pipelines. It should be noted that “dynamic” doesn’t mean the behaviour is actually programmable (akin to Nvidia’s Denver code translation); the logic is fixed in the design of the core.
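For a sense of what fixed-function instruction fusion looks like, here's a toy fusion pass. The cmp + b.cond pattern is a common fusion candidate in other cores and is purely an assumption here; Arm hasn't disclosed the A77's actual fusion rules:

```python
# Merge adjacent cmp + b.cond pairs into a single macro-op occupying one slot.
def fuse(instrs):
    out, i = [], 0
    while i < len(instrs):
        cur = instrs[i]
        nxt = instrs[i + 1] if i + 1 < len(instrs) else None
        if cur[0] == "cmp" and nxt is not None and nxt[0] == "b.cond":
            out.append(("cmp+b.cond",) + cur[1:] + nxt[1:])   # fused macro-op
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

print(fuse([("add", "x0", "x1", "x2"),
            ("cmp", "x0", "#0"),
            ("b.cond", "ne", "loop")]))
# -> [('add', 'x0', 'x1', 'x2'), ('cmp+b.cond', 'x0', '#0', 'ne', 'loop')]
```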

Finally getting to the middle-core, we see a big uplift in the core's width – with one important caveat: the Cortex-A77's decoder remains 4-wide, and the increase in middle-core width lies solely at the rename stage and afterwards. The core can feed 6 instructions per cycle into rename, but only on a MOP-cache hit, which bypasses the decode stage; on a MOP-cache miss the limiting factor is still the decoder, at 4 instructions per cycle.
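Blending the two paths with Arm's stated 85% MOP-cache hit-rate target gives a feel for the average sustainable width (a linear blend is a simplification, since the fetch paths are decoupled in practice):

```python
HIT_RATE = 0.85      # Arm's stated design target
MOP_WIDTH = 6        # hit: bypasses decode, feeds rename directly
DECODE_WIDTH = 4     # miss: limited by the 4-wide decoder

avg = HIT_RATE * MOP_WIDTH + (1 - HIT_RATE) * DECODE_WIDTH
print(round(avg, 2), "instructions/cycle on average")   # -> 5.7
```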

The increased width also warranted an increase in the core's reorder buffer, which has grown from 128 to 160 entries. Notably, such a change was already present in Qualcomm’s variant of the Cortex-A76, although we were never able to confirm the exact size employed. As Arm was still in charge of the RTL changes, it wouldn’t surprise me if it was the exact same 160-entry ROB.
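One rule-of-thumb way to relate width and window size is Little's law: a W-wide core needs roughly W × L instructions in flight to cover L cycles of latency. Only the ROB sizes and widths below come from the article; the framing is a generic heuristic, not Arm's stated reasoning:

```python
for width, rob in [(4, 128), (6, 160)]:
    print(f"{rob}-entry ROB at {width}-wide ≈ {rob // width} cycles of latency covered")
```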


  • abufrejoval - Monday, May 27, 2019

    While extending all that prefetching seems such a great thing for performance, it also expands the attack surface for side-channel attacks. So I wonder if public awareness came too late in the game for the team to review or even correct this design.
  • SaberKOG91 - Monday, May 27, 2019

    And I suppose we should stop using branch prediction too because anything speculative is inherently insecure? /s

    We've known about prefetch side-channel attacks for quite a while now, and have developed techniques to mitigate many of them. Do you really think ARM are so unaware of these issues?
  • rahvin - Tuesday, May 28, 2019

    Although side-channel attacks have been known about for a long time, the first viable attack wasn't developed until last year and there has been a steady stream of additional viable attacks since the first. In fact it's arguable that the severity of the attacks is going up with each discovery.

    They called it Spectre for a reason: they figured it was going to haunt computer design for the foreseeable future. Side-channel attacks are here to stay, and researchers aren't done finding new ones. Though we can't abandon speculative execution, any company that didn't take this attack method into account in its future designs would be foolish.
  • SaberKOG91 - Wednesday, May 29, 2019

    I don't disagree with any of your statements. I was annoyed by their ignorant attitude towards the importance of speculative features of modern CPU design. We need them for the sake of performance and there shouldn't need to be a steep trade-off between security and performance in order to keep these features. It's clear that AMD were able to design a more secure core that isn't affected by many of the side-channel attacks that Intel are vulnerable to, without a huge performance tradeoff. I doubt that ARM would willingly repeat these mistakes now that these attacks are known.
  • rocky12345 - Wednesday, July 3, 2019

    Just a question, I guess: wasn't Spectre just what they called it when they found that someone could exploit the CPU like that? Up until then, I don't think there had been any real attacks that came out of this. My point is, what if they had not made Spectre public and had just fixed it behind closed doors? Then neither us nor the attackers would have been clued in to it all, and there might not have been all these other attack discoveries since then. I guess what I am saying is that sometimes the public does not need to know some of these things, because it just might also inform the low-lifes that like to do these exploits. Just my 2 cents worth and opinion on it all.
  • syxbit - Monday, May 27, 2019

    I know you brought up Apple multiple times, but I really wish it got brought up even more. Every Q&A with ARM should say "Hey, stop boasting until you catch up to Apple"
    It's frustrating as an Android user to have seriously inferior SoCs....
  • Nozuka - Monday, May 27, 2019

    Even more frustrating to have one of these SoCs, but nothing actually uses its power...
    Unless you count bragging rights
  • Wilco1 - Monday, May 27, 2019

    Define inferior. Phone SoCs are not just about maximum single-threaded performance. And Apple users found out that peak performance cannot be sustained on battery...

    Android users get better power efficiency and lower-cost phones thanks to the smaller die sizes of their SoCs. As Andrei has shown, simply chasing high benchmark scores (like Samsung did with their custom cores) does not seem to work out so well.
  • syxbit - Monday, May 27, 2019

    Lower cost phones?
    Flagships from Google and Samsung cost the same as an iPhone, and the iPhone trounces them in performance.
  • Wilco1 - Monday, May 27, 2019

    There are plenty of high-end Android phones which cost about half of an equivalent iPhone. E.g. the OnePlus 7 Pro is $699 with 256GB flash vs. $1249 for a similarly specced Xs Max.
