Intel this week announced that its processors, compute accelerators, and Optane DC persistent memory modules will power Aurora, the first supercomputer in the US projected to deliver a performance of one exaFLOP. The system is expected to be delivered in about two years, and its design goes well beyond the machine's initial Xeon Phi-based specification from 2014.

The US Department of Energy, Intel, and Cray have signed a contract under which the two companies and the DOE's Argonne National Laboratory will develop and build the Aurora supercomputer, a machine capable of a quintillion floating point operations per second. The deal is valued at more than $500 million, and the system is expected to be delivered sometime in 2021.

The Aurora machine will be based on Intel's Xeon Scalable processors, the company's upcoming datacenter compute accelerators built on the Xe architecture, and next-generation Optane DC persistent memory. The supercomputer will rely on Cray's 'Shasta' architecture featuring the company's Slingshot interconnect, which was announced at Supercomputing back in November. The system will be programmed using Intel's oneAPI and will also use the Shasta software stack tailored for Intel hardware.

Around two years ago, the DOE started its Exascale Computing Project to spur development of hardware, software, and applications for exaFLOP-class supercomputers. The organization awarded $258 million in research contracts to six technology companies: AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA. As it turns out, Intel's approach was considered the most efficient one for the country's first exascale supercomputer.

It is noteworthy that back in 2014, ANL's Aurora supercomputer was supposed to be based on Intel's Xeon Phi processors codenamed Knights Hill, produced using the company's 10 nm process technology. The plan changed in 2017, when Intel canned Knights Hill in favor of a more advanced architecture (not least because its Xeon processors were approaching a Xeon Phi-like implementation). Apparently, Intel and its partners are now confident enough in the new chips to proceed with the project.

The Aurora supercomputer will be able to handle both AI and traditional HPC workloads. At present, Argonne National Laboratory says that, among other things, the machine will be used for cancer research, cosmological simulations, climate modeling, drug response prediction, and the exploration of new materials.

“There is tremendous scientific benefit to our nation that comes from collaborations like this one with the Department of Energy, Argonne National Laboratory, industry partners Intel and Cray and our close association with the University of Chicago,” said Argonne National Laboratory Director, Paul Kearns. “Argonne’s Aurora system is built for next-generation artificial intelligence and will accelerate scientific discovery by combining high-performance computing and artificial intelligence to address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials and further understanding the universe — and those are just the beginning.”

Sources: Intel, Intel, Argonne National Laboratory


  • Yojimbo - Friday, March 22, 2019 - link

    I would wager that Cray will make NVIDIA GPUs available in their commercial Shasta systems. Perlmutter, for example, is a Shasta-based supercomputer, to be delivered in 2020, that includes NVIDIA GPU compute nodes. Cray seemed to enter into a close partnership with Intel under the Xeon Phi program and it bit them in the ass. Since then, they seem to have diversified their strategy a bit.
  • Yojimbo - Friday, March 22, 2019 - link

    No, it rules them out. It's not that the DOE doesn't want IBM/NVIDIA machines at all; it's that they don't want to rely exclusively on any one architecture. If Summit, Sierra, and Aurora were all IBM/NVIDIA systems, they would be purchasing only one architecture under the program. And even though A21 is delayed to a later generation, Intel's strategy is currently in flux, which would really limit their purchasing options for Crossroads and NERSC-9. Ending up with NVIDIA accelerators in every supercomputer delivered between 2018 and 2021 would go squarely against that philosophy.
  • TeXWiller - Friday, March 22, 2019 - link

    Oh, I didn't mean to imply that Aurora was supposed to be a Power system. Intel/Cray simply missed the first delivery stage due to various challenges and upgraded the plan for a later-stage delivery.
  • mode_13h - Friday, March 22, 2019 - link

    When they tout exaFLOPS, are we certain they're talking about fp64? Or could they be fudging things and really talking about fp16 or some specialized deep learning-flavored datatype?

    Just run the numbers and you'll see what I mean. NVIDIA's V100 manages about 7 TFLOPS of fp64. Let's say Intel's biggest Xe part manages about 10 (it could be higher, but probably less than 20; I'm only talking ballparks here). So, you'd need 100k of those to reach an exaFLOP, which is far larger than any system fielded to date. Can it be done for $500 M? Hmmm... At $5k per GPU, my guess is it'd be a stretch (some of that amount has to go to CPUs, RAM, storage, Optane DIMMs, power, racks, networking, etc.).

    I think it more likely they're talking about fp16 performance.
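The comment's arithmetic can be sanity-checked in a few lines. Note that the per-accelerator throughput and price are the commenter's assumptions, not published figures for any Xe product:

```python
# Back-of-envelope check of the exaFLOP ballpark from the comment above.
# The throughput and price figures are assumptions, not official specs.
EXAFLOP = 1e18            # 1 exaFLOP = 10^18 floating point operations/second
per_gpu_fp64 = 10e12      # assumed ~10 TFLOPS fp64 per accelerator (hypothetical)
gpu_price = 5_000         # assumed $5k per accelerator (hypothetical)

gpus_needed = EXAFLOP / per_gpu_fp64
accelerator_cost = gpus_needed * gpu_price

print(f"accelerators needed: {gpus_needed:,.0f}")                 # 100,000
print(f"accelerator spend alone: ${accelerator_cost / 1e6:,.0f}M")  # $500M
```

Under these assumptions the accelerators alone would consume the entire reported contract value, which is the commenter's point: either the fp64 number per chip is much higher than 10 TFLOPS, or the exaFLOP figure refers to a lower-precision datatype.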
  • HStewart - Tuesday, March 26, 2019 - link

    A couple of things to keep in mind: there is a performance leak about Gen11 graphics, which is the integrated replacement for the roughly 1 TFLOP GPU inside notebooks, while Xe is the Gen12 discrete graphics.

    https://hothardware.com/news/intel-gen-11-gpu-benc...

    My guess is that for Gen12 graphics there will be multiple tiers of GPU, from integrated graphics for notebooks to consumer cards, higher-end gaming cards, and professional cards.

    Also keep in mind that these Cray machines are not just GPUs; they also have Covey Lake-based Xeons in the picture. So those CPUs also bring AVX-512, or possibly something even better.
