NVIDIA's Computex Announcements & The Test

Alongside the launch of the GTX 980 Ti, NVIDIA is also taking advantage of Computex to make a couple of other major technology announcements. Given the scope of these announcements we're covering them in separate articles, but we'll quickly go over the high points here as they pertain to the GTX 980 Ti.

G-Sync Variable Overdrive & Windowed Mode G-Sync

NVIDIA is announcing a slew of G-Sync products/technologies today, the most important of which is Mobile G-Sync for laptops. However, as part of that launch NVIDIA is also finally confirming that all G-Sync products, including existing desktop G-Sync products, support G-Sync variable overdrive. As the name implies, this is the ability to vary the amount of overdrive applied to a pixel based on a best-effort guess of when the next frame will arrive. This allows NVIDIA to continue using pixel overdrive on G-Sync monitors to improve pixel response times and reduce ghosting, at a slight cost to color accuracy in motion when the frame time predictions are off.
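
NVIDIA hasn't detailed the actual prediction logic, so the following is a purely illustrative sketch: the moving-average predictor, the class name, and the strength mapping are our own assumptions, not NVIDIA's implementation. The idea is simply to pick an overdrive strength from an estimate of the next frame interval before that frame actually arrives.

```cpp
#include <algorithm>
#include <cstdio>
#include <deque>
#include <numeric>

// Hypothetical sketch: scale pixel overdrive based on a guess of the next
// frame interval. The real G-Sync module does this in hardware with an
// unpublished predictor; this only illustrates the concept.
class OverdrivePredictor {
public:
    // Record how long the last frame took to arrive (in milliseconds).
    void OnFrameArrived(double frameIntervalMs) {
        history.push_back(frameIntervalMs);
        if (history.size() > kHistoryLen)
            history.pop_front();
    }

    // Guess the next frame interval as a simple moving average of recent ones.
    double PredictNextIntervalMs() const {
        if (history.empty())
            return 16.7; // assume ~60Hz pacing until we have data
        return std::accumulate(history.begin(), history.end(), 0.0) / history.size();
    }

    // Map the predicted interval to an overdrive strength: short intervals
    // (fast pacing) need more aggressive overdrive than long ones.
    double OverdriveStrength() const {
        double strength = 16.7 / PredictNextIntervalMs(); // 1.0 at 60Hz-equivalent pacing
        return std::clamp(strength, 0.25, 1.0);           // never overshoot too hard
    }

private:
    static constexpr size_t kHistoryLen = 8;
    std::deque<double> history;
};

int main() {
    OverdrivePredictor predictor;
    for (double interval : {16.7, 18.2, 22.0, 30.5, 25.0}) {
        predictor.OnFrameArrived(interval);
        std::printf("predicted %.1f ms -> overdrive %.2f\n",
                    predictor.PredictNextIntervalMs(), predictor.OverdriveStrength());
    }
}
```

If the next frame then arrives earlier or later than predicted, the overdrive applied in the meantime was tuned for the wrong interval, which is where the small in-motion color errors mentioned above come from.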

Variable overdrive has been part of G-Sync since the start; however, until now NVIDIA had never confirmed its existence, presumably keeping quiet about it for trade secret purposes. Now that displays supporting AMD's FreeSync implementation of DisplayPort Adaptive-Sync are out, NVIDIA is further clarifying how G-Sync works.

Meanwhile, freshly rolled out in NVIDIA's latest drivers is support for Windowed Mode G-Sync. Until now, running a game in windowed mode could cause stutters and tearing, because in windowed mode the image being output is composited by the Desktop Window Manager (DWM) in Windows. Even though a game might be outputting 200 frames per second, DWM will only refresh the image on its own timings; an application's off-screen buffer can be updated many times before DWM updates the actual image on the display.
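
To put toy numbers on that mismatch (an illustrative simulation with made-up rates, not how DWM is actually implemented): a game presenting 200 frames per second into its off-screen buffer while the compositor refreshes at a fixed 60Hz means most rendered frames never reach the screen, and the ones that do were completed at uneven points in the game's timeline.

```cpp
#include <cstdio>

// Toy simulation (illustrative only): a game presents frames into an
// off-screen buffer at 200fps while the compositor samples that buffer at a
// fixed 60Hz. Only the newest completed frame at each compositor refresh
// reaches the display; everything in between is dropped.
int main() {
    const double gameFrameMs = 1000.0 / 200.0;   // 5ms per game frame
    const double compositorMs = 1000.0 / 60.0;   // ~16.7ms per DWM refresh

    int lastShownFrame = 0;
    const int ticks = 12;
    for (int tick = 1; tick <= ticks; ++tick) {
        double now = tick * compositorMs;
        int newestFrame = static_cast<int>(now / gameFrameMs); // frames completed so far
        std::printf("DWM refresh %2d at %6.1f ms shows game frame %2d (%d dropped)\n",
                    tick, now, newestFrame, newestFrame - lastShownFrame - 1);
        lastShownFrame = newestFrame;
    }
    std::printf("%d of %d rendered frames ever reached the screen\n", ticks, lastShownFrame);
}
```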

NVIDIA is now changing this through their display driver: when Windowed G-Sync is enabled, whichever window is currently active is the one that determines the refresh rate. That means if you have a game open, G-Sync can be leveraged to reduce screen tearing and stuttering, but if you then click on your email application, the refresh rate will switch back to whatever rate that application is using. Since this is not always going to be a perfect solution - without a fixed refresh rate, it's impossible to make every application perfectly line up with every other application - Windowed G-Sync can be enabled or disabled on a per-application basis, or simply turned on or off globally.

GameWorks VR & Multi-Res Shading

Also being announced at Computex is a combination of new functionality and an overall rebranding for NVIDIA's suite of VR technologies. First introduced as VR Direct alongside the GeForce GTX 980 last September, NVIDIA's VR technologies are being brought under the GameWorks umbrella of developer tools. The collection of technologies will now be called GameWorks VR, joining the already significant set of GameWorks tools and libraries.

On the feature front, the newly minted GameWorks VR will be getting a new feature dubbed Multi-Resolution Shading, or Multi-Res Shading for short. With multi-res shading, NVIDIA is looking to leverage the Maxwell 2 architecture’s Multi-Projection Acceleration in order to increase rendering efficiency and ultimately the overall performance of their GPUs in VR situations.

By reducing the resolution of rendered frames at the edges, where there is already the most optical distortion/compression and where the human eye is less sensitive, NVIDIA says that multi-res shading can deliver a 1.3x to 2x increase in pixel shader performance without noticeably compromising image quality. Like many of the other technologies in the GameWorks VR toolkit this is an implementation of a suggested VR practice; however, NVIDIA believes they have a significant technological advantage in implementing it thanks to multi-projection acceleration. With MPA bringing down the rendering cost of this feature, NVIDIA's hardware can better capture the performance benefits of this rendering approach, essentially making it an even more efficient method of VR rendering.
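
NVIDIA hasn't published the exact viewport layout or scale factors they use, but the arithmetic behind the claimed savings is easy to sketch. In the rough, hypothetical example below, the 3x3-style split, the 60% full-resolution centre region, the 0.5x edge scale, and the per-eye render target size are all our assumptions rather than NVIDIA's numbers; rendering the outer viewports at reduced resolution cuts the number of pixels that actually need to be shaded.

```cpp
#include <cstdio>

// Illustrative sketch only: estimate the pixel shading work saved by a
// multi-res style viewport split. The layout, centre region size, and the
// 0.5x edge scale are assumptions for the arithmetic, not NVIDIA's values.
int main() {
    const double width = 1512.0, height = 1680.0; // assumed per-eye render target size
    const double centreFraction = 0.6;            // central 60% of each axis kept at full resolution (assumed)
    const double edgeScale = 0.5;                 // outer viewports rendered at half resolution per axis (assumed)

    const double fullPixels = width * height;

    // Centre viewport: shaded at full resolution.
    const double centrePixels = (width * centreFraction) * (height * centreFraction);

    // The remaining edge viewports are shaded at edgeScale in each axis,
    // i.e. edgeScale^2 of their original pixel count.
    const double edgePixels = (fullPixels - centrePixels) * edgeScale * edgeScale;

    const double shadedPixels = centrePixels + edgePixels;
    std::printf("full resolution: %.0f pixels shaded\n", fullPixels);
    std::printf("multi-res:       %.0f pixels shaded\n", shadedPixels);
    std::printf("pixel shading speedup: %.2fx\n", fullPixels / shadedPixels);
}
```

With these particular assumptions the pixel shading work falls to roughly half, a ~1.9x speedup on the pixel shader side, which sits within the 1.3x to 2x range NVIDIA quotes; the real figure depends on how large the full-resolution centre is kept and how aggressively the edges are scaled.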

Getting Behind DirectX Feature Level 12_1

Finally, though not an outright announcement per se, from a marketing perspective we should expect to see NVIDIA further promote their current technological lead in rendering features. The Maxwell 2 architecture is currently the only architecture to support DirectX feature level 12_1, and with DirectX 12 games due a bit later this year, NVIDIA sees that as an advantage to press.

For promotional purposes NVIDIA has put together a chart listing the different feature levels for DirectX 12, and to their credit it is a simple but elegant layout of the current feature level situation. The bulk of the advanced DirectX 12 features we saw Microsoft present at the GTX 980 launch are part of feature level 12_1, while the rest, along with other functionality not fully exploited under DirectX 11, are part of the 12_0 feature level. The one exception is volume tiled resources, which is not part of either feature level and is instead part of a separate feature list for tiled resources that can be implemented at either feature level.
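
For developers who want to see where a particular GPU lands on that chart, Direct3D 12 exposes the relevant capabilities through ID3D12Device::CheckFeatureSupport. The following is a minimal sketch (error handling largely trimmed) that queries the highest supported feature level along with the individual 12_1 capabilities and the separate tiled resources tier.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter at the 11_0 baseline, then ask
    // what the hardware actually supports.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::printf("No Direct3D 12 capable device found\n");
        return 1;
    }

    // Highest feature level supported, out of the levels we ask about.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));
    std::printf("Max feature level: 0x%x\n", levels.MaxSupportedFeatureLevel);

    // The capabilities behind feature level 12_1, plus the tiled resources
    // tier (tier 3 adds volume tiled resources), which sits outside the
    // feature levels themselves.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));
    std::printf("Conservative rasterization tier: %d\n", options.ConservativeRasterizationTier);
    std::printf("Rasterizer ordered views:        %s\n", options.ROVsSupported ? "yes" : "no");
    std::printf("Tiled resources tier:            %d\n", options.TiledResourcesTier);
}
```

On a Maxwell 2 card such as the GTX 980 Ti this should come back as feature level 12_1, conservative rasterization tier 1, ROV support, and tiled resources tier 3 (volume tiled resources).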

The Test

The press drivers for the launch of the GTX 980 Ti are release 352.90, which, other than formally adding support for the new card, are identical to the existing 352.86 drivers.

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 295X2
AMD Radeon R9 290X
AMD Radeon HD 7970
NVIDIA GeForce GTX Titan X
NVIDIA GeForce GTX 980 Ti
NVIDIA GeForce GTX 980
NVIDIA GeForce GTX 780 Ti
NVIDIA GeForce GTX 780
NVIDIA GeForce GTX 680
NVIDIA GeForce GTX 580
Video Drivers: NVIDIA Release 352.90 Beta
AMD Catalyst 15.5 Beta
OS: Windows 8.1 Pro

290 Comments

  • Kosiostin - Monday, June 1, 2015

    I beg to differ. 4K at monitor viewing distance is not overkill, it's actually quite pleasantly sharp. Phones, tablets and laptops are already pushing for 2K+ displays, which is phenomenally sharp and out of the league of normal FHD monitors. Gaming at 4K isn't quite here yet, but when it comes it will blow our minds, I am sure.
  • Oxford Guy - Monday, June 1, 2015

    People who care so much for immersion should be using 1440 with HDTV screen sizes, not sitting way up close with small monitors.

    Too bad HDTVs have so much input lag, though.
  • Kutark - Monday, June 1, 2015

    Basically at a 5' viewing distance, you would have to have a 40" monitor before 4k would start to become noticeable.

    Even at 30" monitor you would have to be sitting roughly 3.5' or closer to your monitor to be able to begin to tell the difference.

    We also have to keep in mind we're talking about severely diminishing returns. 1440p is about perfect for normal seating distances with a computer on a 27" monitor. At 30" some arguments can be made for 4K, but it's a minor one. It's not like we're going from 480p to 1080p or something; 1440p is still very good at "normal" computer seating distances.
  • mapesdhs - Wednesday, June 3, 2015

    Human vision varies as to who can discern what at a particular distance. There are no fixed cutoffs for this. Personally, when wandering around a TV store back in January (without knowing what type of screen I was looking at), for visual clarity the only displays that looked properly impressive turned out to be 4Ks. However, they're still a bit too pricey atm for a good one, with the cheaper models employing too many compromises such as reduced chroma sampling to bring down the pricing, or much lower refresh rates, etc. (notice how stores use lots of static imagery to advertise their cheaper 4K TVs?)

    Btw, here's a wonderful irony for you: recent research, mentioned in New Scientist, suggests that long exposure by gamers to high-refresh displays makes them more able to tell the difference between standard displays and high-refresh models, ie. simply using a 144Hz monitor can make one less tolerant of standard 60Hz displays in the long term. :D It's like a self-reinforcing quality tolerance level. Quite funny IMO. No surprise to me though, years working in VR & suchlike resulted in my being able to tell the difference in refresh rates much more than I was able to beforehand.

    Anyway, I'm leaving 4K until cheaper models are better quality, etc. In the meantime I bought a decent (but not high-end) 48" Samsung which works pretty well. Certainly looks good for Elite Dangerous running off a 980, and Crysis looks awesome.
  • Laststop311 - Monday, June 1, 2015

    Why would most people be using DVI? DVI is big and clunky and just sucks. Everyone that gets new stuff nowadays uses DisplayPort; it has the easiest-to-use plug.
  • Crest - Sunday, May 31, 2015

    Thank you for including the GTX580. I'm still living and working on a pair of 580's and it's nice to know where they stand in these new releases.
  • TocaHack - Monday, June 1, 2015

    I upgraded from SLI'd 580s to a 980 at the start of April. Now I'm wishing I'd waited for the Ti! It wasn't meant to launch this soon! :-/
  • mapesdhs - Wednesday, June 3, 2015

    Indeed, one of the few sites to include 580 numbers, though it's a shame it's missing in some of the graphs (people forget there are lots of 3GB 580s around now, I bought ten last month).

    If it's of any use, I've done a lot of 580 SLI vs. 980 (SLI) testing, PM for a link to the results. I tested with 832MHz 3GB 580s; the reference 783MHz 3GB models I was already using I sold for a nice profit to a movie company (excellent cards for CUDA, two of them beat a Titan), reducing the initial 980 upgrade to a mere +150.

    Overall, a 980 easily beats 580 SLI, and often comes very close to 3-way 580 SLI. The heavier the load, the bigger the difference, eg. for Firestrike Ultra, one 980 was between 50% and 80% faster than two 3GB 580s. I also tested 2/3-way 980 SLI, so if you'd like the numbers, just PM me or Google "SGI Ian" to find my site, contact page and Yahoo email adr.

    I've been looking for a newer test. I gather GTA V has a built-in benchmark, so finally I may have found something suitable, need to look into that.

    Only one complaint about the review though, why no CUDA test??? I'd really like to know how the range of NV cards stacks up now, and whether AE yet supports MW CUDA V2. I've tested 980s with Arion and Blender, it came close to two 580s, but not quite. Would be great to see how the 980 Ti compares to the 980 for this. Still plenty of people using CUDA with pro apps, especially AE.

    Ian.
  • mapesdhs - Wednesday, June 3, 2015

    Btw Crest, which model 580s are you using? I do have some 1.5GB 580s as well, but I've not really done much yet to expose where VRAM issues kick in, though it does show up in Unigine pretty well at 1440p.

    For reference, I do most testing with a 5GHz 2700K and a 4.8GHz 3930K, though I've also tested three 980s on a P55 with an i7 870 (currently the fastest P55 system on 3DMark for various tests).
  • Mikemk - Sunday, May 31, 2015

    Since it has 2 SMM's disabled, does it have the memory issue of the 970? (Haven't read full article yet, sorry if answered in article)
