As part of this evening’s AMD Capsaicin event (more on that later), Raja Koduri, AMD’s Chief Architect and SVP of the Radeon Technologies Group, has announced a new Radeon Pro card unlike anything else. Dubbed the Radeon Pro Solid State Graphics (SSG), this card includes M.2 slots for adding NAND SSDs, with the goal of vastly increasing the amount of local storage available to the video card.

Details are a bit thin and I’ll update this later this evening, but in short the card utilizes a Fiji GPU and includes 2 PCIe 3.0 M.2 slots for adding flash drives to the card. These slots are then attached to the GPU (it’s unclear if there’s a PCIe switch involved or if it’s wired directly), which the GPU can then use as an additional tier of storage. I’m told that the card can fit at least 1TB of NAND – likely limited by M.2 MLC SSD capacities – which massively increases the amount of local storage available on the card.

As AMD explains it, the purpose of going this route is to offer another solution to the workset size limitations of current professional graphics cards. Even AMD’s largest card currently tops out at 32GB, and while this is a fair amount, there are workloads that can use more. This is particularly the case for workloads with massive datasets (oil & gas), or as AMD demonstrated, scrubbing through an 8K video file.

Current cards can spill over to system memory, and while the PCIe bus is fast, it’s still much slower than local memory, and it is subject to the latency of the relatively long trip and of waiting on the CPU to service requests. Local NAND storage, by comparison, offers much faster round trips, though on paper the bandwidth isn’t as good, so I’m curious to see how it compares on real-world datasets that spill over to system memory. Meanwhile actual memory management/usage/tiering is handled by a combination of the drivers and developer software, so developers will need to code specifically for it as things stand.

For the moment, AMD is treating the Radeon Pro SSG as a beta product, and will be selling developer kits for it directly, with full availability set for 2017. For now developers need to apply for a kit from AMD, and I’m told the first kits are available immediately. Interested developers will need to have saved up their pennies though: a dev kit will set you back $9,999.

Update:

Now that AMD’s presentation is over, we have a bit more information on the Radeon Pro SSG and how it works.

In terms of hardware, the Fiji based card is outfitted with a PCIe bridge chip – the same PEX8747 bridge chip used on the Radeon Pro Duo, I’m told – with the bridge connecting the two PCIe x4 M.2 slots to the GPU, and allowing the SSDs and the GPU to share the PCIe system connection. Architecturally the prototype card is essentially a PCIe SSD adapter and a video card on a single board, with no special connectivity in use beyond what the PCIe bridge chip provides.

The SSDs themselves are a pair of 512GB Samsung 950 Pros, which are about the fastest thing available on the market today. These SSDs are operating in RAID-0 (striped) mode to provide the maximum amount of bandwidth. Meanwhile it turns out that due to how the card is configured, the OS actually sees the SSD RAID-0 array as well, at least for the prototype design.
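To give a sense of how striping raises throughput, here is a minimal Python sketch of RAID-0 address mapping: logical reads alternate between the drives in fixed-size stripe units, which is why two SSDs can roughly double sequential read bandwidth. The stripe size here is an assumption for illustration; AMD hasn't disclosed the array's actual configuration.

```python
# RAID-0 (striped) mapping sketch. The stripe size is assumed, not an
# AMD-disclosed figure; the drive count matches the two 950 Pros.
STRIPE_SIZE = 128 * 1024  # 128 KiB stripe unit (assumption)
NUM_DRIVES = 2            # two 512GB Samsung 950 Pros

def raid0_locate(logical_offset):
    """Map a logical byte offset to (drive_index, offset_on_that_drive)."""
    stripe_number = logical_offset // STRIPE_SIZE
    within_stripe = logical_offset % STRIPE_SIZE
    drive = stripe_number % NUM_DRIVES
    # Each drive holds every NUM_DRIVES-th stripe, packed contiguously.
    drive_stripe = stripe_number // NUM_DRIVES
    return drive, drive_stripe * STRIPE_SIZE + within_stripe

# Consecutive stripes alternate between the two SSDs, so a long
# sequential read keeps both drives busy at once:
print(raid0_locate(0))                # (0, 0)
print(raid0_locate(STRIPE_SIZE))      # (1, 0)
print(raid0_locate(2 * STRIPE_SIZE))  # (0, 131072)
```

Because both drives service alternating stripes of a sequential stream, the array's read bandwidth approaches the sum of the two drives' individual rates.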

To use the SSDs, applications need to be programmed using AMD’s APIs to recognize the existence of the local storage and that it is “special,” being on the same board as the GPU itself. Ultimately the trick for application developers is directly streaming resources from the SSDs, treating them as a level of cache between the GPU’s DRAM and system storage. The use of NAND in this manner does not fit into the traditional memory hierarchy very well, as while the SSDs are fast, on paper accessing system memory is faster still. But it should be faster than accessing system storage, even if it’s PCIe SSD storage elsewhere in the system. Similarly, don’t expect to see frame buffers spilling over to NAND any time soon. This is about getting large, mostly static resources closer to the GPU for more efficient resource streaming.
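AMD has not published the API, so purely as an illustration of the tiering concept, here is a hypothetical Python sketch of a resource streamer that treats on-card NAND as a cache level between GPU DRAM and system storage. Every name and structure here is invented; only the idea (an LRU-managed DRAM budget, with misses served from whichever tier holds the asset) comes from the article.

```python
from collections import OrderedDict

# Hypothetical sketch of SSG-style tiering; not AMD's actual API.
class ResourceStreamer:
    def __init__(self, dram_capacity, ssd_resident):
        self.dram_capacity = dram_capacity      # GPU DRAM budget in bytes
        self.dram = OrderedDict()               # resource name -> size, LRU order
        self.used = 0
        self.ssd_resident = set(ssd_resident)   # assets staged on the card's NAND

    def fetch(self, name, size):
        """Return which tier served the read, promoting the asset into DRAM."""
        if name in self.dram:
            self.dram.move_to_end(name)         # refresh LRU position
            return "dram"
        # Evict least-recently-used resources until the new one fits.
        while self.used + size > self.dram_capacity and self.dram:
            _, evicted_size = self.dram.popitem(last=False)
            self.used -= evicted_size
        self.dram[name] = size
        self.used += size
        # Large, mostly static assets staged on the SSG's SSDs avoid the
        # trip across the system PCIe bus to host storage.
        return "ssg_nand" if name in self.ssd_resident else "system_storage"
```

For example, a video editor might stage its 8K source footage on the SSG's NAND once, then let per-frame reads hit either DRAM or the local array rather than crossing the system bus.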

To showcase the potential benefits of this solution, AMD had an 8K video scrubbing demonstration going, comparing performance between using a source file on the SSG’s local SSDs, and using a source file on the system SSD (also a 950 Pro).

The performance differential was actually more than I expected; reading a file from the SSG SSD array was over 4GB/sec, while reading that same file from the system SSD averaged under 900MB/sec, which is lower than what we know the 950 Pro can do in sequential reads. After putting some thought into it, I think AMD has hit upon the fact that most M.2 slots on motherboards are routed through the system chipset rather than being directly attached to the CPU. This not only adds another hop of latency, but it means crossing the relatively narrow DMI 3.0 (~PCIe 3.0 x4) link that is shared with everything else attached to the chipset.
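The arithmetic behind that bottleneck is straightforward. A quick sketch using the standard PCIe 3.0 figures (these are spec numbers, not AMD-provided measurements):

```python
# Back-of-the-envelope PCIe 3.0 link ceilings. PCIe 3.0 runs at
# 8 GT/s per lane with 128b/130b line encoding.
def pcie3_bandwidth_gbps(lanes):
    """Theoretical PCIe 3.0 payload bandwidth in GB/s for a lane count."""
    gt_per_s = 8e9        # 8 GT/s per lane
    encoding = 128 / 130  # 128b/130b encoding overhead
    return lanes * gt_per_s * encoding / 8 / 1e9  # bits -> bytes -> GB

# DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link, and it is
# shared by every device hanging off the chipset - so a chipset-attached
# M.2 slot contends for a ~3.94 GB/s ceiling, while the SSG's on-card
# RAID-0 array showed over 4 GB/sec without touching that link.
print(round(pcie3_bandwidth_gbps(4), 2))   # DMI 3.0 / x4 ceiling
print(round(pcie3_bandwidth_gbps(16), 2))  # a full x16 slot
```

With everything else on the chipset sharing that ~3.94 GB/sec, sustaining even a single 950 Pro's rated sequential reads through DMI is a squeeze, which fits the sub-900MB/sec average seen in the demo.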

By and large, though, this is all still at the proof-of-concept stage. The prototype, though impressive in its own right in some ways, is really just a means to get developers thinking about the idea and writing their applications to be aware of the local storage. And this includes not just what content to put on the SSG's SSDs, but also how to best exploit the non-volatile nature of its storage, and how to avoid unnecessary thrashing of the SSDs and burning valuable program/erase cycles. The SSG serves an interesting niche, albeit a limited one: scenarios where you have a large dataset, you are somewhat sensitive to latency and want to stay off of the PCIe bus, but you don't need more than 4-5GB/sec of read bandwidth. So it'll be worth keeping an eye on this to see what developers can do with it.

In any case, while AMD is selling dev kits now, expect some significant changes by the time we see the retail hardware in 2017. Given the timeframe I expect we’ll be looking at much more powerful Vega cards, where overall GPU performance will be much greater, and the difference in performance between memory/storage tiers even more pronounced.

Source: AMD


  • Communism - Monday, July 25, 2016 - link

    If they are going to price these things sky-high, why wouldn't people just get GP100 with NVLink and just use system ram?

    All in all, this seems incredibly pointless.
  • testbug00 - Tuesday, July 26, 2016 - link

    why would AMD price them insanely high? AMD isn't NVidia or Intel. Even if acting like Intel or Nvidia would help them sometimes.
  • just4U - Tuesday, July 26, 2016 - link

    and anyway... one would think that such things would be more relevant to Nvidia's GP102.. not yet out as far as I know.
  • testbug00 - Tuesday, July 26, 2016 - link

    that depends on what is required to make this work. It could be that Nvidia could design their GP102 around this and have it out when AMD has it out.
  • Michael Bay - Tuesday, July 26, 2016 - link

    AMD dreams to be exactly like nV and intel, it just fails.
  • Samus - Tuesday, July 26, 2016 - link

    Their professional graphics division has turned an annual net profit since the acquisition of ATI.

    That says volumes when considering almost no other AMD division has done so consistently. Innovations and niche products like this, Mantle, and so on, keep them afloat.
  • eachus - Sunday, July 31, 2016 - link

    The "insanely high" pricing for beta prototypes is because these are "bleeding edge" prototypes. I spent years of my life on the bleeding edge, both as a customer and as a producer. If you have an application where this can save you thousands of dollars a month? AMD will hear from you, and provide the needed support to get the prototypes working. Then, next year, perhaps with a Vega 10 GPU instead, they will produce a commercial product.

    I know of several seismic processing groups who would kill for this product, if it can cut their processing time on their biggest (and deepest) visualizations by a week or so.

    Oh, by the way. Note that Intel may by next year have a 3D XPoint memory card to put in instead of a Flash RAM card.
  • just4U - Tuesday, July 26, 2016 - link

    I am not really sure what to make of your comment.. Trolling? hmm.. Both AMD and indeed Nvidia sell professional graphic solutions that can be quite expensive due to the feature set and drivers involved. Think upwards of 7X the cost of a GP100. Aside from that this is a developers kit..
  • Demiurge - Tuesday, July 26, 2016 - link

    NVLink does not work that way!!!! ("Windmills do not work that way!!!!")

    1) Paths to the CPU/main memory often get saturated by the processing being much faster than the point-to-point bus they are connected to. The idea here is to avoid the external bus and keep data local (close to the processing element). The larger the working dataset, the more evident this becomes.

    2) GDDR5X or HBM2 memory is not cheap; MLC flash is by comparison. This is similar to the concept behind using the paging file (or virtual memory) on an HDD to extend system memory for programs running on PCs. I don't know how much a GPU with 128GB to 1TB of GDDR5X or HBM2 memory would cost, but I can guarantee ~$10K for a development kit and another $10K for the SSDs through the 1-2 year lifetime is a bargain. That's really what the hidden proposition is here... value and scarcity of a comparable solution.
  • Communism - Tuesday, July 26, 2016 - link

    Their SSD solution is 5 GB/s.

    PCIe 3.0 x16 is 16 GB/s per direction.

    NVLink is 40 GB/s per direction.

    This "solution" is completely and utterly pointless unless it costs a negligible amount over what a Polaris card alone costs.

    We're being told it costs 10K for the card alone, sans the SSDs.
