Amazon Makes AMD Rome EC2 Instances Available
by Andrei Frumusanu on June 5, 2020 8:00 AM EST
After many months of waiting, Amazon today has finally made available its new compute-oriented C5a AWS cloud instances, based on AMD's 2nd generation EPYC Rome processors with Zen 2 cores.
Amazon had announced its intention to adopt AMD's newest silicon designs back in November. The new C5a instances scale up to 96 vCPUs (48 physical cores with SMT) and were advertised to clock up to 3.3GHz.
The instance offerings scale from 2 vCPUs with 4GB of RAM, up to 96 vCPUs, with varying bandwidth to elastic block storage and network bandwidth throughput.
The actual CPU being used here is the AMD EPYC 7R32, a custom SKU that's seemingly only available to Amazon and other cloud providers. Due to the nature of cloud instances, we don't actually know the exact core count of the part, or whether this is a 64- or 48-core chip.
We quickly fired up an instance to check the CPU topology, and we're seeing that the chip has two quadrants fully populated with 2 CCDs (four CCXs) each, and two quadrants with seemingly only a single CCD (two CCXs each) populated.
I quickly ran some tests: the CPUs idle at 1800MHz and boost up to a maximum of 3300MHz. All-core frequencies (96 threads) can reach 3300MHz, but throttle down to 3200MHz after a few minutes. Compute-heavy workloads such as 456.hmmer run at around 3100MHz all-core.
While it is certainly possible that this is a 64-core chip, Amazon's offering of 96 vCPU metal instances argues against that. On the other hand, the 96 vCPU configuration's 192GB wouldn't immediately match up with the memory channel count of the Rome chip unless the two lesser chip quadrants also each had one memory controller disabled. Either that, or there are simply two further CCDs that can't be allocated – which would make sense for the virtualised instances, but would be weird for the metal instance offering.
The new C5a Rome-based instances are available now in eight sizes in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Singapore) regions.
- Amazon's Arm-based Graviton2 Against AMD and Intel: Comparing Cloud Compute
- The AMD Ryzen Threadripper 3960X and 3970X Review: 24 and 32 Cores on 7nm
- AMD’s New 280W 64-Core Rome CPU: The EPYC 7H12
- AMD Rome Second Generation EPYC Review: 2x 64-core Benchmarked
- AMD Zen 2 Microarchitecture Analysis: Ryzen 3000 and EPYC Rome
schujj07 - Friday, June 5, 2020
AWS has been doing a disservice to the EPYC CPUs the entire time. More often than not, the AMD instances follow the same RAM allotment that you would find with the Intel CPUs, despite the AMD chips having 8 RAM channels vs Intel's 6.
awesomeusername - Friday, June 5, 2020
Andrei, can you share which tool produced the core-to-core latency results? There's the open-source ajakubek/core-latency, which can be used to gather the data and then plot it via some sort of Python + Matplotlib setup – but the solution in the screenshot above already does that.
Can you share some details?
Andrei Frumusanu - Friday, June 5, 2020
It's a custom tool I wrote. It's a generic atomic compare-and-set ping-pong on a value shared between two threads on a single cache line. The table is just an Excel gradient of the CSV data.
MrCommunistGen - Friday, June 5, 2020
I love these graphs! They are super insightful and provide "a little something extra" that I don't see in other tech publications.
NUMA, chiplets, and other recent changes to core-to-core latency and interconnectivity are an important part of a CPU's performance profile, and help paint a deeper picture of why some workloads scale better than others on different platforms.
Tomatotech - Friday, June 5, 2020
That 96 x 96 thread latency chart is a thing of beauty. It seems only a few years ago that dual-core CPUs were the new thing. To go from a 2x2 chart to a 96x96 chart ... just wow.
p1esk - Friday, June 5, 2020
192GB of RAM? I expected to see at least 512GB on the largest instances.
zipz0p - Friday, June 5, 2020
Maybe we'll see that on the R-instances, which are RAM-optimized (I hope so!).
Rudde - Monday, June 8, 2020
They already offer 768GB in their largest r5-series instances. We'll see if they'll release a 1024GB AMD instance before expanding their Intel offerings to the TB range.
DanNeely - Friday, June 5, 2020
I'm guessing the 48 cores instead of 64 was to give more consistent performance. With the similar 64-core 280W model dropping down to 2.6GHz, the performance of any one customer's VM would vary a significant amount depending on how hard the rest of the chip is being used by other customers.
Topping out at 192GB isn't a surprise though with only 48 cores enabled; offering 256GB would result in 2.67GB/vCPU, versus the clean xGB-per-vCPU ratios they offer for everything (almost everything?) else. A maxed-out 96 vCPU metal variant with 256GB wouldn't be fungible with the hardware hosting the smaller VMs.
If they eventually offer a high-memory server based on this CPU, it'd probably have all the RAM channels populated, and to a higher capacity than this offering; but it would likely only come in core counts where the RAM divides out evenly, assuming it's offered at anything below the entire server at all.
p1esk - Friday, June 5, 2020
https://aws.amazon.com/ec2/instance-types/r5/