Following up on last week’s launch of NVIDIA’s new budget video card, the GeForce GTX 1650, today we’re taking a look at our first card, courtesy of Zotac. Coming in at $149, the newest member of the GeForce family brings up the rear of the GeForce product stack, offering NVIDIA’s latest architecture in a low-power, 1080p-with-compromises gaming video card with a lower price to match.

As the third member of the GeForce GTX 16 series, the GTX 1650 directly follows in the footsteps of its GTX 1660 predecessors. The underlying TU117 GPU is a newer, smaller chip built specifically for these sorts of low-end cards, designed around the same leaner and meaner philosophy as TU116 before it. This means it eschews the dedicated ray tracing (RT) cores and the AI-focused tensor cores in favor of being a smaller, easier-to-produce chip that retains the all-important core Turing architecture.

The net result of this process, the GeForce GTX 1650, is a somewhat unassuming card if we’re going by the numbers, but an important one for NVIDIA’s product stack. Though its performance is pedestrian by high-end PC gaming standards, the card fills out NVIDIA’s lineup by offering a modern Turing-powered card under $200. Meanwhile for the low-power video card market, the GTX 1650 is an important shot in the arm, offering the first performance boost for this hard-capped market in over two years. The end result is that the GTX 1650 will serve many masters, and as we’ll see, it serves some better than others.

NVIDIA GeForce Specification Comparison

                         GTX 1650          GTX 1660          GTX 1050 Ti      GTX 1050
CUDA Cores               896               1408              768              640
ROPs                     32                48                32               32
Core Clock               1485MHz           1530MHz           1290MHz          1354MHz
Boost Clock              1665MHz           1785MHz           1392MHz          1455MHz
Memory Clock             8Gbps GDDR5       8Gbps GDDR5       7Gbps GDDR5      7Gbps GDDR5
Memory Bus Width         128-bit           192-bit           128-bit          128-bit
VRAM                     4GB               6GB               4GB              2GB
Single Precision Perf.   3 TFLOPS          5 TFLOPS          2.1 TFLOPS       1.9 TFLOPS
TDP                      75W               120W              75W              75W
GPU                      TU117 (200 mm²)   TU116 (284 mm²)   GP107 (132 mm²)  GP107 (132 mm²)
Transistor Count         4.7B              6.6B              3.3B             3.3B
Architecture             Turing            Turing            Pascal           Pascal
Manufacturing Process    TSMC 12nm "FFN"   TSMC 12nm "FFN"   Samsung 14nm     Samsung 14nm
Launch Date              4/23/2019         3/14/2019         10/25/2016       10/25/2016
Launch Price             $149              $219              $139             $109

Right off the bat, it’s interesting to note that the GTX 1650 is not using a fully-enabled TU117 GPU. Relative to the full chip, the version going into the GTX 1650 has had a TPC fused off, which costs the chip 2 SMs/128 CUDA cores. This makes the GTX 1650 a very rare case where NVIDIA doesn’t lead with their best foot forward – the company is essentially sandbagging – a point I’ll loop back around to in a bit.
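
For those keeping score, the cut-down math is straightforward; here’s a quick sketch, with the caveat that the full chip’s 16 SM/1024 CUDA core configuration and Turing’s usual 64-cores-per-SM, 2-SMs-per-TPC layout are our working assumptions here:

```python
# Turing building blocks: 64 FP32 CUDA cores per SM, 2 SMs per TPC.
# Assumption: a fully-enabled TU117 is 16 SMs (1024 CUDA cores).
CORES_PER_SM = 64
SMS_PER_TPC = 2

full_tu117_sms = 16
gtx_1650_sms = full_tu117_sms - SMS_PER_TPC   # one TPC fused off
print(gtx_1650_sms * CORES_PER_SM)            # -> 896 enabled CUDA cores
```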

Within NVIDIA’s historical product stack, it’s somewhat difficult to place the GTX 1650. Officially it’s the successor to the GTX 1050, which itself was a similar cut-down card. However, the GTX 1050 launched at $109, whereas the GTX 1650 launches at $149, a hefty 37% generation-over-generation price increase. Consequently, you could be excused for thinking the GTX 1650 feels a lot more like the GTX 1050 Ti’s successor, as its $149 price tag is very comparable to the GTX 1050 Ti’s $139 launch price. Either way, generation-over-generation, Turing cards have been more expensive than the Pascal cards they replaced, and the low prices of these budget cards only amplify the difference.

Diving into the numbers, then, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is not all that big of a step up from the GeForce GTX 1050 series, but Turing’s architectural changes and improved graphics efficiency mean that the little card should pack a bit more of a punch than the raw specs suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, however, with the reference-clocked GTX 1650 boosting to just 1665MHz.
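
As a quick sanity check on those figures, the paper math is simple: each CUDA core can retire one FMA (two FLOPs) per clock. A minimal sketch, using only the spec table’s numbers:

```python
# Paper FP32 throughput: CUDA cores x 2 FLOPs (one FMA) per clock x boost
# clock. These are peak numbers from the spec table, not measured performance.
def sp_tflops(cuda_cores: int, boost_clock_mhz: int) -> float:
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(f"GTX 1650: {sp_tflops(896, 1665):.2f} TFLOPS")    # ~2.98
print(f"GTX 1050: {sp_tflops(640, 1455):.2f} TFLOPS")    # ~1.86
print(f"GTX 1660: {sp_tflops(1408, 1785):.2f} TFLOPS")   # ~5.03
```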

Rounding out the package are 32 ROPs, which come as part of the card’s 4 ROP/L2/memory clusters. This means the card is fed by a 128-bit memory bus, which NVIDIA has paired with GDDR5 memory clocked at 8Gbps. Conveniently enough, this gives the card 128GB/sec of memory bandwidth, about 14% more than the last-generation GTX 1050 series cards got. Thankfully, while NVIDIA hasn’t done much to boost memory capacities on the other Turing cards, the same is not true for the GTX 1650: the minimum here is now 4GB, instead of the very constrained 2GB found on the GTX 1050. 4GB isn’t particularly spacious in 2019, but the card shouldn’t be nearly so starved for memory as its predecessor was.
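
For reference, the bandwidth math works the same way it always has: per-pin data rate times bus width. A minimal sketch using the table’s figures:

```python
# Peak memory bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8
# bits-per-byte. Inputs from the spec table above.
def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

bw_1650 = mem_bandwidth_gb_s(8, 128)   # 128 GB/s
bw_1050 = mem_bandwidth_gb_s(7, 128)   # 112 GB/s
print(f"+{(bw_1650 / bw_1050 - 1) * 100:.0f}% over the GTX 1050")  # +14%
```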

Overall, on paper the GTX 1650 is set to deliver around 60% of the performance of the next card up in NVIDIA’s product stack, the GTX 1660. And in practice, what we'll find is a little better than that, with the new card offering around 65% of a GTX 1660's performance.

Meanwhile, let’s talk about power consumption. With a (reference) TDP of 75W, the smallest member of the Turing family is also the lowest-power. 75W cards have been a staple of the low-end video card market – in NVIDIA’s case, this covers most xx50 cards – as a 75W TDP means an additional PCIe power connector isn’t necessary, and the card can be powered solely off of the PCIe bus.
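
As a refresher on where that 75W ceiling comes from: the PCIe CEM specification budgets roughly 75W for a x16 graphics slot across its 12V and 3.3V rails. The rail limits in the sketch below are our reading of the spec:

```python
# Where the 75W figure comes from: the PCIe CEM spec's slot power budget
# for a x16 graphics card. Rail limits below are our reading of the spec.
rail_12v_w = 12.0 * 5.5   # 12V rail, up to 5.5A -> 66.0 W
rail_3v3_w = 3.3 * 3.0    # 3.3V rail, up to 3A  ->  9.9 W
print(f"Slot-only budget: ~{rail_12v_w + rail_3v3_w:.0f} W")  # spec'd as 75W total
```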

Overall these cards satisfy a few niche roles that add up to a larger market. The most straightforward of these roles is the need for a video card for basic systems where a PCIe power cable isn’t available, as well as low-power systems where a more power-hungry card isn’t appropriate. For enthusiasts, the focus tends to turn specifically towards HTPC systems, as these sorts of low-power cards are a good physical fit for those compact systems, while also offering the latest video decoding features.

It should be noted however that while the reference TDP for the GTX 1650 is 75W, board partners have been free to design their own cards with higher TDPs. As a result, many of the partner cards on the market are running faster and hotter than NVIDIA’s reference specs in order to maximize their cards’ performance, with TDPs closer to 90W. So anyone specifically looking for a 75W card to take advantage of its low power requirements will want to pay close attention to card specifications to make sure it’s actually a 75W card, like the Zotac card we’re reviewing today.

Product Positioning & The Competition

Shifting gears to business matters, let’s talk about product positioning and hardware availability.

The GeForce GTX 1650 is a hard launch for NVIDIA, and typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of. In NVIDIA parlance this is a "pure virtual" launch, meaning that NVIDIA’s board partners have been doing their own thing with their respective product lines. These include a range of coolers and form factors, as well as the aforementioned factory overclocked cards that require an external PCIe power connector in order to meet the cards' greater energy needs.

Overall, the GTX 1650 launch has been a relatively low-key affair for NVIDIA. The Turing architecture/feature set has been covered to excess at this point, and the low-end market doesn't attract the same kind of enthusiast attention as the high-end market does, so NVIDIA has been acting accordingly. On our end we're less than thrilled with NVIDIA's decision to prevent reviewers from testing the new card until after it launched, but we're finally here with a card and results in hand.

In terms of product positioning, NVIDIA is primarily pitching the GTX 1650 as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts, the same upgrade cadence we’ve seen throughout the rest of the GeForce Turing family. As we'll see in our benchmark results, the GTX 1650 offers a significant performance improvement over the GTX 950, while the uplift over the price-comparable GTX 1050 Ti is similar to other Turing cards, at around 30%. Meanwhile, one particular advantage it has over past-generation cards is its 4GB of VRAM: the GTX 1650 doesn't struggle nearly as much in more recent games as the 2GB GTX 950 and GTX 1050 do.

Broadly speaking, the GTX xx50 series cards are meant to be 1080p-with-compromises cards, and the GTX 1650 follows this trend. The GTX 1650 can run some games at 1080p at maximum image quality – including some relatively recent games – but in more demanding games it becomes a tradeoff between image quality and 60fps framerates, something the GTX 1660 doesn't really experience.

Unusually for NVIDIA this year, the company is also sweetening the pot a bit by extending its ongoing Fortnite bundle to cover the GTX 1650. The bundle itself isn’t much to write home about – some game currency and skins for a game that’s free to begin with – but it’s an unexpected move, since NVIDIA wasn’t offering this bundle on the other GTX 16 series cards when they launched.

Finally, let’s take a look at the competition. AMD of course is riding out the tail-end of the Polaris-based Radeon RX 500 series, so this is what the GTX 1650 will be up against. AMD’s most comparable card in terms of total power consumption is their Radeon RX 560, a card that is simply outclassed by the far more efficient GTX 1650. The GTX 1050 series already overshot the RX 560 here, so the GTX 1650 largely serves to pile on NVIDIA’s efficiency lead, leaving AMD out of the running for 75W cards.

But this doesn’t mean AMD should be counted out altogether. Instead of the RX 560, AMD has set up the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting battle. The RX 570 is still a very capable card, especially against the lower performance of the GTX 1650, and its 8GB of VRAM is further icing on the cake. However, I’m not entirely convinced that AMD and its partners can hold 8GB card prices to $149 or less over the long run, in which case the competition may end up shifting towards the 4GB RX 570 instead.

In any case, AMD’s position is that while they can’t match the GTX 1650 on features or power efficiency – and bear in mind that the RX 570 is rated to draw almost twice as much power here – they can match it on pricing and beat it on performance. As long as AMD holds the line on pricing, this is a favorable matchup for AMD on a pure price/performance basis for current-generation games. The RX 570 is a last-generation midrange card, and the Turing architecture alone can’t help the low-end GTX 1650 completely make up that performance difference.
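
To put that efficiency gap in dollar terms, here’s a back-of-the-envelope sketch; the wattage delta, gaming hours, and electricity rate are all illustrative assumptions rather than measured figures:

```python
# Back-of-the-envelope electricity cost of the efficiency gap. Every
# input here is an illustrative assumption, not a measured figure.
power_delta_w = 150 - 75    # rated board power: RX 570 vs GTX 1650
hours_per_week = 20         # assumed gaming hours at full load
usd_per_kwh = 0.13          # rough 2019 US average residential rate

kwh_per_year = power_delta_w / 1000 * hours_per_week * 52
print(f"~${kwh_per_year * usd_per_kwh:.0f}/year extra for the RX 570")  # ~$10
```

Even under fairly heavy use, then, the gap is worth on the order of ten dollars a year in electricity, which is part of why raw price/performance tends to dominate buying decisions at this end of the market.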

On a final note, AMD is offering a bundle of their own as part of their 50th anniversary celebration. For the RX 570, the company and its participating board partners are offering copies of both The Division 2 (Gold Edition) and World War Z, giving AMD a much stronger bundle than NVIDIA’s. So between card performance and game bundles, it's clear that AMD is trying very hard to counter the new GTX 1650.

Q2 2019 GPU Pricing Comparison

AMD                    Price    NVIDIA
                       $349     GeForce RTX 2060
Radeon RX Vega 56      $279     GeForce GTX 1660 Ti
Radeon RX 590          $219     GeForce GTX 1660
Radeon RX 580 (8GB)    $189     GeForce GTX 1060 3GB (1152 cores)
Radeon RX 570          $149     GeForce GTX 1650
Comments

  • philehidiot - Friday, May 3, 2019 - link

    Over here, it's quite routine for people to consider the efficiency cost of using AC in a car and whether it's more sensible to open the window... If you had a choice between a GTX1080 and Vega64, which perform nearly the same, and assume they cost nearly the same, then you'd take into account that one requires a small nuclear reactor to run whilst the other is probably more energy-sipping than your current card. Also, some of us are on this thing called a budget. A $50 saving is a week's food shopping.
  • JoeyJoJo123 - Friday, May 3, 2019 - link

    Except your comment is exactly in line with what I said:
    "Lower power for the same performance at a similar enough price can be a tie-breaker between two competing options, but that's not the case here for the 1650"

    I'm not saying power use of the GPU is irrelevant, I'm saying performance/price is ultimately more important. The RX 570 GPU is not only significantly cheaper, but it outperforms the GTX 1650 in most scenarios. Yes, the RX 570 does so by consuming more power, but it'd take 2 or so years of power bills (at least according to avg American power bill per month) to split an even cost with the GTX 1650, and even at that mark where the cost of ownership is equivalent, the RX 570 still has provided 2 years of consistently better performance, and will continue to offer better performance.

    Absolutely, a GTX1080 is a smarter buy compared to the Vega64 given the power consumption, but that's because power consumption was the tie-breaker. The comparison wouldn't be as ideal for the GTX1080 if it cost 30% more than the Vega64, offered similar performance, but came with the long-term promise of ~eventually~ paying for the upfront difference in cost with a reduction in power cost.

    Again, the sheer majority of users on the market are looking for the best performance/price, and the GTX1650 has priced itself out of the market it should be competing in.
  • Oxford Guy - Saturday, May 4, 2019 - link

    "it'd take 2 or so years of power bills (at least according to avg American power bill per month) to split an even cost with the GTX 1650, and even at that mark where the cost of ownership is equivalent, the RX 570 still has provided 2 years of consistently better performance, and will continue to offer better performance."

    This.

    Plus, if people are so worried about power consumption maybe they should get some solar panels.
  • Yojimbo - Sunday, May 5, 2019 - link

    Why in the world would you get solar panels? That would only increase the cost even more!
  • Karmena - Tuesday, May 7, 2019 - link

    So, you multiplied it once, why not multiply that value again and make it $100?
  • Gigaplex - Sunday, May 5, 2019 - link

    Kids living with their parents generally don't care about the power bill.
  • gglaw - Sunday, May 5, 2019 - link

    Wrong on so many levels. If you find the highest-cost electricity city in the US, plug in the most die-hard gamer who plays only new games on max settings that run the GPU at 100% load at all times, and assume he plays more hours than most people work, you might get close to those numbers. The sad kid who fits the above scenario games hard enough that he would never choose such a bad card, one that is significantly slower than last gen's budget performers (RX 570 and GTX 1060 3GB). Kids in this scenario would not be calculating the nickels and dimes they're saving here and there – they'd be getting the best card in their NOW budget without subtracting the quarter or so they might get back a week. You're trying to create a scenario that just doesn't exist. Super energy-conscious people logging every penny of juice they spend don't game dozens of hours a week, and would be nit-picky enough that they would probably find settings to save that extra 2 cents a week, so they wouldn't even be running their GPU at 100% load.
  • PeachNCream - Friday, May 3, 2019 - link

    Total cost of ownership is a significant factor in any buying decision. Not only should one consider the electrical costs of a GPU, but also indirect expenses such as increased air conditioning needs (or reduced heating costs offset by the card's heat output), along with the cost to upgrade at a later date based on the potential for dissatisfaction with future performance. Failing to consider those and other factors ignores important recurring expenses.
  • Geranium - Saturday, May 4, 2019 - link

    Then people need to buy the Ryzen 7 2700X rather than the i9-9900K. The 9900K uses more power and runs hotter, so it needs a more powerful cooler, and a more powerful cooler draws more current compared to a 2700X.
  • nevcairiel - Saturday, May 4, 2019 - link

    Not everyone puts as much value on cost as others. When discussing a budget product, it absolutely makes sense to consider cost, since you possibly wouldn't buy such a GPU if money were no object.

    But if someone buys a high-end CPU, the interests shift drastically, and as such, your logic makes no sense anymore. Plenty of people buy the fastest not because it's cheap, but because it's the absolute fastest.
