Alongside announcing their new Radeon RX 6000M notebook GPUs, AMD this evening is also using Computex to formally unveil their FidelityFX Super Resolution technology. The game upscaling technology has garnered a lot of interest since AMD first announced last year that they were working on it, and today AMD is finally lifting the lid, at least for a brief moment. Overall, today is more of a teaser of what’s to come on June 22nd, when AMD plans to reveal more information, but for the moment this is the biggest information release on the technology since AMD’s initial announcement.

As a quick recap, AMD announced FidelityFX Super Resolution last year as part of the Radeon RX 6000 series launch. The in-development technology was being designed as AMD’s answer to NVIDIA’s increasingly popular Deep Learning Super Sampling (DLSS) technology, an advanced image upscaling technique NVIDIA introduced to allow their GPUs to render games at a lower resolution (and thus higher performance) without a severe hit to image quality. Research into DLSS and similar smart upscaling techniques has become increasingly intense, as upscaling offers a tantalizing way to improve game performance via what’s computationally a relatively cheap post-processing effect.

While NVIDIA seems to have finally hit their stride with DLSS 2.0, the downside for everyone who is not NVIDIA is that it’s an NVIDIA-only technology. Which, for AMD, means it’s yet another value-add software feature that NVIDIA can use to outmaneuver them. And while it’s not strictly necessary for AMD to match NVIDIA on a one-for-one software feature basis, clearly DLSS is the start of a bigger shift in the game rendering landscape, so it’s an area where AMD is going to try to catch up, both to nullify an NVIDIA advantage and to give PC game developers another tool in their arsenal for better performance.

And thus FidelityFX Super Resolution was born. While today is more a teaser than a detailed technical breakdown, AMD is confirming a few major facts about their take on smart game upscaling.

First and foremost, FSR, as AMD likes to call it, will be another one of AMD’s GPUOpen technologies, meaning that it will be published open source and free for developers to use. And not only will developers be able to use it on AMD GPUs, but they will be able to use it on NVIDIA GPUs as well.

AMD is not going into the specific technical underpinnings of the execution model here – I’m assuming this is being implemented as a shader – but they are confirming that it doesn’t require any kind of tensor or other deep learning hardware. As a result, the technology can be used not only on recent AMD Radeon RX 6000 series cards, but also the RX 5000 series, RX 500 series, and Vega series. Meanwhile, though it won’t be officially supported to the same extent on NVIDIA cards, according to AMD, FSR will work (on day one) on NVIDIA cards going back as far as the Pascal-based GTX 10 series (which pre-dates DLSS support). In fact, about the only modern graphics hardware not supported at this point are the current-generation game consoles; AMD may get there one day, but for now they’re focusing on the PC side of things.

At this point AMD is not disclosing which games will support the technology, but the messaging right now is that developers will need to take some kind of an active role in implementing the tech. Which is to say that it’s not sounding like it can simply be applied in a fully post-processing fashion on existing games à la AMD’s contrast adaptive sharpening tech.

Following its June 22nd launch, AMD will be posting FSR to GPUOpen. Overall the company is stating that over 10 “game studios and engines” will implement FSR in 2021, with more details to come on the 22nd. Expect to see Godfall among these games, as AMD is using it as their example game for today’s announcement.

Moving on, AMD is also revealing that FSR will have four quality modes. Similar to DLSS, I expect that these modes are all based on the upscaling factor used, and that the smaller the upscaling factor (i.e., the closer to native resolution you are), the higher the quality mode. Formally, these modes are Ultra Quality, Quality, Balanced, and Performance mode.

For today’s Computex presentation, AMD is publishing performance numbers from Godfall, running on a Radeon RX 6800 XT. That card gets 49 fps when running at 4K with the “epic” image quality preset and ray tracing. This rises to 78 fps with FSR in ultra quality mode, 99 fps in quality mode, 124 fps in balanced mode, and 150 fps in performance mode. The exact benefit will depend on the card and the game used, of course, but overall AMD is touting the tech’s performance mode as improving game performance by over 2x versus native 4K rendering.
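As a quick sanity check on that 2x claim, the published Godfall figures can be turned into per-mode speedup ratios with some simple arithmetic (this is just math on the numbers above, not additional AMD data):

```python
# Speedup of each FSR quality mode over native 4K rendering,
# using AMD's published Godfall figures (Radeon RX 6800 XT,
# "epic" preset with ray tracing enabled).
native_fps = 49
fsr_modes = {
    "Ultra Quality": 78,
    "Quality": 99,
    "Balanced": 124,
    "Performance": 150,
}

for mode, fps in fsr_modes.items():
    # e.g. Performance: 150 / 49 ≈ 3.06x native
    print(f"{mode}: {fps / native_fps:.2f}x native")
```

Performance mode works out to roughly 3.06x native, comfortably above AMD’s “over 2x” claim, while quality mode lands almost exactly at 2x.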

(ed: this slide does not appear to have FSR applied; it’s just a fancy background for the performance data)

However the million dollar question – and the question that AMD won’t really be answering until the 22nd – is what the resulting image quality of FSR will be like. Like other upscaling techniques, FSR will live or die by how clean an image it produces. Upscaling techniques aim for “good enough” results here, so FSR doesn’t need to match native quality; however, it does need to produce a reasonably sharp image without serious spatial or temporal artifacts.

And, to drop into op-ed mode, this is where AMD has me a bit worried. In our pre-briefing with AMD, the company did confirm that FSR is going to be a purely spatial upscaling technology; it will operate on a frame-by-frame basis, without taking into account motion data (motion vectors) from the game itself.
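To illustrate what “purely spatial” means in practice, here is a minimal toy sketch – my own illustration, not AMD’s actual (undisclosed) filter – of a spatial upscaler. The key property is that the only input is the current low-resolution frame; there is no frame history and no motion data:

```python
import numpy as np

def spatial_upscale(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Toy spatial upscaler: nearest-neighbor enlarge, then a light
    sharpening pass. The specific filter is a placeholder; what matters
    is that the ONLY input is the current frame (no history, no motion)."""
    # Nearest-neighbor upscale by an integer factor.
    up = frame.repeat(scale, axis=0).repeat(scale, axis=1)
    # Unsharp mask: boost the difference from a 3x3 box blur.
    pad = np.pad(up, 1, mode="edge")
    blur = sum(pad[i:i + up.shape[0], j:j + up.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(up + 0.5 * (up - blur), 0.0, 1.0)

frame = np.random.rand(360, 640)   # one 640x360 luma frame, values in [0, 1]
out = spatial_upscale(frame)
print(out.shape)  # (720, 1280)
```

Because each output frame depends only on its own input frame, two consecutive frames of a moving object are upscaled completely independently – which is exactly where frame-to-frame shimmering can creep in.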

For GPU junkies, many of you will recognize this as a similar strategy to how NVIDIA designed DLSS 1.0, which was all about spatial upscaling by using pre-trained, game-specific neural network models. DLSS 1.0 was ultimately a failure – it couldn’t consistently produce acceptable results and temporal artifacting was all too common. It wasn’t until NVIDIA introduced DLSS 2.0, a significantly expanded version of the technology that integrated motion vector data (essentially creating Temporal AA on steroids), that they finally got DLSS as we know it in working order.

Given NVIDIA’s experience with spatial-only upscaling, I’m concerned that AMD is going to repeat NVIDIA’s early mistakes. Spatial is a lot easier to do on the backend – and requires a lot less work from developers – but the lack of motion vector data presents some challenges. In particular, motion vectors are the traditional solution to countering temporal artifacting in TAA and DLSS 2.0, ensuring that there are no frame-by-frame oddities or other rendering errors from moving objects. Which is not to say that spatial-only upscaling can’t work, only that delivering image quality competitive with DLSS without it would be a big first for AMD.
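For contrast, the core trick of a temporal approach can be sketched in a few lines. This is a heavily simplified illustration of TAA-style history accumulation – my own toy version, not NVIDIA’s DLSS 2.0 implementation – showing why motion vectors matter: each pixel’s history sample is fetched from where that pixel was last frame, so moving objects still get a valid history:

```python
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """Toy TAA-style accumulation: look up where each pixel was in the
    previous frame (via per-pixel motion vectors) and exponentially
    blend that reprojected history with the current frame."""
    h, w = current.shape
    ys, xs = np.indices((h, w))
    # motion[..., 0] / motion[..., 1] are per-pixel x/y offsets, in pixels.
    px = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    py = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    reprojected = history[py, px]
    # Mostly history, refreshed a little by the new frame each time.
    return alpha * current + (1 - alpha) * reprojected

# On a static scene (zero motion), accumulation averages away noise
# over successive frames while converging on the underlying signal.
signal = np.ones((4, 4))
noisy = signal + 0.2 * np.random.randn(4, 4)
result = temporal_accumulate(noisy, signal, np.zeros((4, 4, 2)))
```

Without the `motion` lookup, the blend would fetch history from the wrong screen position for anything that moved, producing exactly the ghosting and smearing that plagued early temporal techniques; without history at all – the spatial-only case – there is nothing to stabilize the image between frames.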

Unfortunately, AMD isn’t doing themselves any favors in this regard with today’s presentation. Within their slide deck there is a single split image with FSR seemingly enabled, which they use for a GTX 1060 performance comparison. I’ve gone ahead and extracted the raw image from the slide deck given to the press and uploaded it here, to try to preserve as much image quality as possible. (Be sure to click on it to see the full-resolution shot)

Taking a jab at NVIDIA by comparing the GTX 1060 running at 1440p native versus FSR in quality mode, the demonstration slide shows that performance is significantly improved, bringing the GTX 1060 from 27 fps to 38 fps. Unfortunately the image quality hit is quite noticeable. The building and bridge are blurrier than in the native resolution example, and the tree in the background – which is composed of many fine details – easily gives up the fact that it’s running at a lower resolution.

With that said, as I haven’t seen the technology in person in motion yet, I’m not in a position to claim how good all of this looks in action. But at least for static screenshots, it does raise an eyebrow. Though I will give AMD credit here for publishing a (seemingly honest) screenshot like this, rather than cherry picking something that oversells FSR.

In any case, there’s a lot more technical information on FSR that we’ve yet to see, and which AMD will be presenting on June 22nd. And, given the open source nature of FSR, AMD has little incentive or ability to hold back on technical information for too long. So the questions raised today with their brief teaser will make for good discussion fodder for FSR’s actual launch.

Finally, ahead of the FSR launch AMD is soliciting feedback on what games users would like to see supporting the technology. AMD is presumably going to use this to help guide their developer relations priorities, so if you're interested in supplying feedback, be sure to stop by AMD's FSR survey site.

Overall, it goes without saying that a lot of people are eager to see what AMD can do with game upscaling technology, both to level the playing field with NVIDIA and to bring it to a wider array of hardware and games. It’s an ambitious project from AMD, and it will undoubtedly be interesting to see how everything falls into place when AMD launches the tech on June 22nd. So be sure to stay tuned for that!

Comments

  • Cooe - Tuesday, June 1, 2021 - link

    The FSR demos running on AMD cards upscaling to 4K looked way, WAAAAAAAY better than the Nvidia GTX 1060 demo. Methinks it either performs worse on Nvidia hardware outside sheer raw performance OR it doesn't play nice with lower resolutions. My guess is it's mostly the former with a bit of the latter.
  • lightningz71 - Tuesday, June 1, 2021 - link

    I'm more interested in what this can do for APUs. APUs are heavily hamstrung by memory performance and limited silicon area for the gpu logic. If this can work, and work well, on the U series APUs, allowing them to render in full detail at 720p, where they seem to do just fine, and then properly upsample to 1080p while realizing playable frame rates with solid quality, this is a big win for mobile.
  • brucethemoose - Tuesday, June 1, 2021 - link

    This same problem exists in image/video upscaling land.

    Take a network built for images, run it on every frame of a video, and it looks great... until you watch it in motion and see the flickering. This isn't necessarily a problem with "traditional" upscalers and processors, as they're more deterministic.

    Also, Pascal was the first to support FP16, IIRC. Maybe that's the minimum required for support? And I think AMD FP16 support goes pretty far back.
  • mode_13h - Tuesday, June 1, 2021 - link

    > Pascal was the first to support FP16

    Only on the P100. However, what the gaming GPUs had was 4x int8 dot product. I think Turing has both, but then it has tensor cores which are a huge leap beyond simple "packed-math" (that's what AMD calls it).

    AMD added packed fp16 in Vega, and then packed int8 in Radeon VII (but with a very limited set of operations). RDNA really fleshed out their contingent of packed-math operations. Still, it's nothing like Nvidia's Tensor cores.

    AMD has apparently challenged Tensor cores with their own "Matrix Cores", in their CDNA-based MI100 (Arcturus), but that has no graphics units. So, it'll be interesting to see if they bring Matrix Cores to RDNA3.
  • evolucion8 - Wednesday, June 2, 2021 - link

    Not really, AMD was first with GCN 1.2 aka Tonga aka Radeon R9 285 and 380X, it supported FP16 back in 2014.
  • mode_13h - Saturday, June 5, 2021 - link

    > Not really, AMD was first with GCN 1.2 aka Tonga aka Radeon R9 285 and 380X

    Only as a sort of technicality, but not in any way relevant to the discussion at hand. In GCN 1.2, 16-bit operations were added in what was described as a power-saving move, occupying only half of a shader/SIMD lane.

    It wasn't until Vega that AMD decided to pack two fp16 values in a single lane, which they termed "Rapid Packed Math".

    Intel was actually first to do this, in their Gen8 HD graphics GPUs. Those shipped in Broadwell CPUs, back in 2015.
  • mode_13h - Saturday, June 5, 2021 - link

    If you really want to talk about the inclusion of fp16 in GPUs, for its own sake, it happened long before. This is dated 2002:

    And I assume its use in GPUs is what motivated its inclusion in IEEE 754-2008.
  • Ozzypuff - Tuesday, June 1, 2021 - link

    Ok sounds awesome but will my athlon 3000G run Genshin Impact better?
  • mode_13h - Tuesday, June 1, 2021 - link

    Wow, it appears the game they're using already has some sort of TAA. Zoom in on the lamp post, in the "off" side, and look at the high-frequency aliasing artifacts around it.

    I guess it could also be a compression artifact, depending on how this frame was captured.
  • maroon1 - Wednesday, June 2, 2021 - link

    "this slide does not appear to have FSR applied; it’s just a fancy background for the performance data"

    Haha. FSR is garbage
