The newest member of the NVIDIA Turing family arrived today: the 75 W GTX 1650 with 4 GB of GDDR5, a base clock of 1485 MHz, a boost clock of 1665 MHz, and a $150 price tag. The Guru of 3D reviewed the GTX 1650 Gaming OC Edition, showing its performance to be about where you would expect, a bit slower than the GTX 1660. As for overclocking ability, they were able to push this particular card's boost clock to ~2 GHz, as well as getting the memory up to an impressive 9.55 GHz.
It is nice to see an affordable Turing GPU on the market, but NVIDIA's pricing may not have been aggressive enough, as the Radeon RX 570 8GB model sells at roughly the same price.
"Join us as we review the new GeForce GTX 1650 Gaming OC edition. The single fan product is energy friendly as well, there's no need for even a power connector as it feeds from the 75 Watt PCIe power slot. But will it perform well enough for that 149 USD price tag?"
Here are some more Graphics Card articles from around the web:
- MSI GeForce GTX 1650 Gaming X 4 GB @ TechPowerUp
- Gigabyte GeForce GTX 1660 Ti Gaming OC @ The Guru of 3D
- BFV, Metro Exodus & SotTR Ray Tracing Performance with all GeForce cards @ BabelTechReviews
- The Ray Tracing Slideshow: DXR on Nvidia Pascal Tested @ TechSpot
The new 720p king.
any higher performing card would be a “king” at 720p, so your statement makes no sense – i.e., the 2080 Ti would be THE king of 720p, since it dominates at all levels.
With that said, what this card is really king of is OEM upgrades, where the only change made to said OEM machine is simply dropping this new card in and done, provided that the card used doesn’t require an extra 6-pin power connector.
King of market segmentation(1), as certain Turing feature sets that are included on the higher GTX/Turing segments are not included on the 1650!
“NVIDIA GeForce GTX 1650 has a significantly watered down multimedia feature-set compared to the other GeForce GTX 16-series GPUs. The card was launched this Tuesday (23 April) without any meaningful technical documentation for reviewers, which caused many, including us, to assume that NVIDIA carried over the “Turing” NVENC encoder, giving you a feature-rich HTPC or streaming card at $150. Apparently that is not the case. According to full specifications put out by NVIDIA on its website product-page that went up hours after product launch, the GTX 1650 (and the TU117 silicon) features a multimedia engine that’s been carried over from the older “Volta” architecture.” (1)
So that’s more of a “finally found out” rather than a “finally arrived,” since reviewers weren’t sampled with proper documentation in advance. So much for that 75 Watt low-power HTPC usage model, unless one goes up to the next-higher-priced 16-series segment.
(1)
“NVIDIA GTX 1650 Lacks Turing NVENC Encoder, Packs Volta’s Multimedia Engine”
https://www.techpowerup.com/254861/nvidia-gtx-1650-lacks-turing-nvenc-encoder-packs-voltas-multimedia-engine
It seems the numerology was right! L O L
https://www.pcper.com/news/General-Tech/GPU-prices-just-too-damn-high-NVIDIA-might-have-something-you#comments
You were pretty close on that indeed.
Check out the reviews. A 570 beats it in everything and is cheaper.
The RX 570 is the best electric radiator consuming 168 W against 71 W for the GTX 1650.
https://www.guru3d.com/articles-pages/zotac-geforce-gtx-1650-gaming-4gb-review,6.html
And no, the RX 570 is not even cheaper than the GTX 1650.
https://www.amazon.com/GIGABYTE-GeForce-Graphics-128-Bit-Gv-N1650IXOC-4GD/dp/B07QD3C4GX/
https://www.amazon.com/Gigabyte-AORUS-Radeon-Graphic-GV-RX570AORUS-4GD/dp/B06Y43ZKFF/
It seems AMD need to sell ATI to Intel. L O L
I found two in about a 10-second search that are less than $140.
Plus here’s an 8GB model for $149… Nice try.
https://www.amazon.com/XFX-Radeon-1286MHz-Graphics-RX-570P8DFD6/dp/B077VX31FZ/ref=mp_s_a_1_1_sspa?crid=3FIHILUSG9ETA&keywords=rx+570+4gb&qid=1556122047&s=gateway&sprefix=rx+570+&sr=8-1-spons&psc=1
Does AMD provide the nuclear plant with your RX 570? :o)
Anyway, even if the RX 570 is cheaper, that would be a good opportunity to lower the price of the GTX 1650 for the average Joe who doesn’t even know who the f** AMD is. L O L
RX 570: (8GB VRAM, costing as little as $139.99 on Newegg)
Shading Units: 2048
TMUs: 128
ROPs: 32
Compute Units: 32
L1 Cache: 16 KB (per CU)
L2 Cache: 2 MB
GTX 1650: (4GB VRAM, retailing above $150 on average, and $170 for the 6-pin higher-power variant that GamersNexus tested)
Shading Units: 896
TMUs: 56
ROPs: 32
SM Count: 14
L1 Cache: 64 KB (per SM)
L2 Cache: 1024 KB
Let’s do a performance/watt analysis and a price/shader-core metric on the RX 570 vs. the GTX 1650 (Gimp Edition)! Stop that Greenwashing, chipman!
It’s an extra $2.50 a month playing 5 hrs a day… plus you won’t have to upgrade your card for like a year.
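That monthly figure roughly checks out; a back-of-the-envelope sketch, assuming the 168 W vs 71 W gaming draws cited earlier in the thread and a hypothetical $0.17/kWh electricity rate:

```python
# Back-of-the-envelope monthly cost of the RX 570's extra power draw.
# Assumptions: 168 W vs 71 W gaming draw (figures quoted earlier in
# the thread), 5 hours/day, 30 days/month, hypothetical $0.17/kWh.
extra_watts = 168 - 71                        # ~97 W difference under load
kwh_per_month = extra_watts * 5 * 30 / 1000   # 14.55 kWh
cost = kwh_per_month * 0.17                   # ~$2.47/month
print(f"{kwh_per_month:.2f} kWh -> ${cost:.2f}/month")
```

At that rate the quoted $2.50/month is in the right ballpark; with cheaper electricity it shrinks proportionally.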
Upgrading for what? Navi @1.21 JigoWatts? :o)
OK Chipman[child]! Could you just calm down and tell the jury where AMD Touched You!
Oh, Intel’s got more dog-and-pony shows going on, and it’s all smoke and mirrors mixed in with loads of grift! Pay no attention to that sub-zero chiller behind the curtain, folks, but that’s 400 watts of last year’s fail slapped into one big BGA-based module of custom madness from the folks at Intel marketing. Oh, the marketing slides to mask the Intel slide into 2nd place!
“You may have noticed SemiAccurate glossed over Intel’s new 9200 series of Cascade Lake CPUs. We did this for a reason, they aren’t real products, just a PR stunt meant to stave off embarrassment at the hands of AMD’s Rome.
Yes we said it, Cascade Lake-AP is a PR stunt and an expensive one at that. There is no point in spending the money to engineer, validate, design systems for, and bring this turkey to market other than for Intel to claim they aren’t being beaten like a drum by AMD’s Rome. The problem is simple, AMD’s Epyc based on Rome will have 64 cores vs Intel’s best Xeon 8280 at 28 cores. This isn’t to say the 8280 is a bad part, we really don’t think it is, just that Rome is in a different league. AMD out-engineered Intel, period, and will have a 50%+ lead in per-socket performance in a few weeks.” (1)
(1)
“A long look at the Intel Cascade Lake 9200 line
Is there a point other than PR?”
https://semiaccurate.com/2019/04/23/a-long-look-at-the-intel-cascade-lake-9200-line/
K, while we all talk about wattage in terms of power supply and cooling/air flow, at the end of the day the power difference you mention is less than a standard incandescent light bulb. If it makes a difference in your electric bill, then you are being charged WAY too much for electricity in your district. And if that heat affects the ambient temperature in your home, your home is way too cold.
My point is that a less-than-100 W difference is NOTHING when you look back just 4 or 5 years, when AMD was pulling 250-350 watts to do what Nvidia did on 100. Nvidia had the foresight (or just got lucky) to see that efficiency was the way the market was heading, instead of the “as much power as you can get into it for the best performance” approach that had been the way of things since the ’90s. AMD is catching up, and while for the most part Nvidia is still the best choice, that is becoming less true every day.
A lot of us old-timers who have been watching this fight for decades know that if Nvidia were more aggressive with its prices, actively undercutting AMD with “loss leaders,” then AMD could be crushed. As it now stands, people still go with “AMD, for when you can’t afford the other guys.”
(and I say this all as a lifelong team red)
There’s no denying that the RX 570 does consume more power, but it is also a more powerful card.
With that said, you are wrong on so many levels, so let’s break this down.
The 2080 Ti is, by your logic, an electric radiator because it can consume well over 300 W… so yeah.
And yes, the RX 570 is cheaper. YOU just cherry-picked results that favor your agenda.
Here are the search pages for the RX 570 from Amazon and Newegg:
https://www.amazon.com/s?k=570&i=computers&rh=n%3A284822&s=price-asc-rank&qid=1556122562&ref=sr_st_price-asc-rank
https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20601296379&IsNodeId=1&bop=And&Order=PRICE&PageSize=36
The lowest-priced NEW RX 570 on Amazon starts at $129; the same goes for Newegg.
And for the 1650:
https://www.amazon.com/s?k=1650&i=computers&rh=n%3A284822&s=price-asc-rank&qid=1556122937&ref=sr_st_price-asc-rank
https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709 601332298&IsNodeId=1&bop=And&Order=PRICE&PageSize=36
The lowest-priced 1650 starts out at $149…
OOOOFFF
THE ONLY saving grace for the 1650 is the versions that don’t have 6-pin power connectors and run powered only through the PCIe slot. These will sell well because the OEMs are going to eat them up.
ATI is no longer, and AMD needs that former ATI GPU IP for its professional GPU compute/AI market, so it ain’t going anywhere. Radeon Pro WX and Radeon Instinct make AMD some real markups and higher margins! So you repeatedly saying the same thing over and over and still being Dead Wrong every time is just the dictionary definition of insanity, chipman!
And you had better read the TechPowerUp article that was linked in an above post, because Gimpvidia is at it again with more feature segmentation and gimping as usual from Team Green!
The RX 570 has more shaders and performs better, so that’s going to require some performance/watt metrics. The RX 570 was tested by GamersNexus against a higher-wattage/higher-priced 1650 variant (with 6-pin power connector), and the RX 570 still mostly cleaned up against the 1650 in the benchmarks. The 8 GB RX 570 variant costs as little as $139.99 on Newegg, compared to the GTX 1650 4GB that’s retailing above $150 on average.
And Team Green has been doing its usual Gimpvidia with the encoding IP on that 1650 bottom-feeder SKU.
I’d also like to see more RX 570 undervolting results at that lower pricing. And just look at the number of shader cores on the RX 570 compared to the GTX 1650; there is where the power goes, so let’s look at the watts-per-shader-core metric and price/performance metrics also.
RX 570:
Shading Units: 2048
TMUs: 128
ROPs: 32
Compute Units: 32
L1 Cache: 16 KB (per CU)
L2 Cache: 2 MB
GTX 1650:
Shading Units: 896
TMUs: 56
ROPs: 32
SM Count: 14
L1 Cache: 64 KB (per SM)
L2 Cache: 1024 KB
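Taking the spec lists above together with the 168 W vs 71 W gaming draws quoted earlier, the watts-per-shader and price-per-shader metrics the comment asks for can be sketched as follows (prices are the thread's quoted figures, not current ones):

```python
# Watts per shader core and dollars per shader core, using the shader
# counts listed above and the gaming power draws quoted earlier in the
# thread. Prices are the thread's quoted figures (assumptions).
cards = {
    "RX 570":   {"shaders": 2048, "watts": 168, "price": 139.99},
    "GTX 1650": {"shaders": 896,  "watts": 71,  "price": 150.00},
}
for name, c in cards.items():
    mw_per_shader = c["watts"] / c["shaders"] * 1000   # milliwatts
    usd_per_shader = c["price"] / c["shaders"]
    print(f"{name}: {mw_per_shader:.1f} mW/shader, "
          f"${usd_per_shader:.3f}/shader")
```

Interestingly, per shader the two cards draw nearly the same power (~82 vs ~79 mW); the RX 570 simply carries more than twice as many shaders, which is where both the performance gap and the wattage gap come from.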
Mr chipman’s Greenwashing is not working out so well now!
“I’d also like to see more RX 570 undervolting results at that lower pricing. And just look at the number of shader cores on the RX 570 compared to the GTX 1650; there is where the power goes, so let’s look at the watts-per-shader-core metric and price/performance metrics also.”
Undervolting is a hack for geeks buying poorly finished products.
FYI, the sh!t storm of cores doesn’t make them run faster!
The true metric for the quality/price ratio should be:
performance/(price*consumption).
According to the Metro Exodus worst-case scenario at 2560×1440, for the same price it gives:
RX 570: 27/(150×168) ≈ 1.07/1000
GTX 1650: 17/(150×73) ≈ 1.55/1000
Actually the GTX 1650 has the best quality/price ratio!
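The arithmetic above does come out as stated; a quick sketch reproducing it (the FPS, price, and wattage figures are the ones quoted in the comment, not independently verified):

```python
# Reproduce the performance/(price * consumption) ratio above, using
# the quoted Metro Exodus 2560x1440 FPS, price, and wattage figures.
def quality_per_price(fps, price_usd, watts):
    return fps / (price_usd * watts)

rx570   = quality_per_price(27, 150, 168) * 1000   # per-1000 scaling
gtx1650 = quality_per_price(17, 150, 73) * 1000
print(f"RX 570:   {rx570:.2f}/1000")    # 1.07/1000
print(f"GTX 1650: {gtx1650:.2f}/1000")  # 1.55/1000
```

Note how heavily this metric weights power draw: doubling a card’s consumption halves its “quality” even at identical FPS and price, which is the crux of the disagreement in the replies.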
L O L
And yet, the RX 570 costs less, is more powerful, and has reviews to back up that the gameplay experience is objectively and noticeably better with the 570.
You are really trying to reach with that idiotic ‘quality/price’ ratio nonsense you came up with, trying so hard that it appears you are doing a reach-around.
It costs less only if you’re a kid who hasn’t left his mom’s basement and doesn’t pay any electricity bill… 😉
^^^^ Troll Alert ^^^^^
something, something, bitcoin
Not just bitcoin, but more shader cores on the RX 570, so maybe more DXR ray tracing performance (via DXR’s alternative code path that uses shader cores instead of RT cores on GPUs that have shader cores, and they all do). Also, I do not mean ray tracing for just games, but ray tracing for graphics workloads where FPS is not a factor.
Even for Blender’s current OpenCL-based Cycles rendering, the more shader cores the better to accelerate the ray calculations done on the shader cores. And Blender will work that ray tracing acceleration on as many GPUs as can be plugged into a PC’s available PCIe slots.
So the RX 570’s shader core counts are much higher and the per-GPU cost is much lower. Two RX 570s give 2048 × 2 = 4096 total shader cores for around $260, while the RX 480/590 SKUs have 2304 shader cores each but cost more.
I’d say that the RX 470/570 SKUs are still the best cost-per-shader-core value at that Newegg pricing, so if I were building a low-cost rendering system, the RX 470/570 variants would give more shader cores per dollar currently.
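The two-card arithmetic works out as described; a small sketch using the thread’s quoted prices (~$130 per RX 570, $150 for the GTX 1650), which are assumptions tied to the listings linked above:

```python
# Aggregate shader cores per dollar for a hypothetical dual-RX 570
# rendering box versus a single GTX 1650 (thread-quoted prices).
def shaders_per_dollar(shaders, price_usd):
    return shaders / price_usd

dual_rx570 = shaders_per_dollar(2048 * 2, 130 * 2)   # two ~$130 cards
gtx1650    = shaders_per_dollar(896, 150)
print(f"2x RX 570: {dual_rx570:.1f} shaders/$")   # ~15.8
print(f"GTX 1650:  {gtx1650:.1f} shaders/$")      # ~6.0
```

For an OpenCL Cycles box where render throughput scales roughly with shader count, that is about a 2.6× cores-per-dollar advantage, before accounting for power draw and the second PCIe slot.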
Also, if more DX12/DXR games could make use of more than one GPU for gaming and DXR ray tracing on the shader cores, that’s something to look into, for maybe even having the game use one GPU for ray tracing acceleration while the other GPU works the remainder of the graphics workload. DX12’s explicit multi-GPU adapter IP may have a new killer application in using any dual-or-more-GPU configuration for ray tracing acceleration in games.
Nvidia’s GTX/Turing 1650/1660 variants are still lower on total shader core counts, so they’re not really as affordable on a cost-per-shader-core basis as AMD’s offerings. I can’t wait to see what price reductions Navi will bring to the Vega-generation GPU offerings, but Polaris pricing will fall even further once Navi takes over AMD’s mainstream market segment offerings, which are mostly Polaris-based currently.
All of AMD’s Polaris *70 variants have always been the price/performance sweet spot for loads of applications that can make use of a GPU’s shader cores! So the more shader cores, the more rays calculated, hashes, etc.
TBH, I stopped reading your wall of text at the word “gimpvidia”. I gave it a chance, but if your argument just comes down to childish insults, is it really worth the time?
You also think that 3.5 is equal to 4 and you think that you are always correct while everyone else is wrong.
Gimpvidia and the 1650: so stripped to the bone and overpriced. The RX 570 compared to the GTX 1650:
RX 570:
Shading Units: 2048
TMUs: 128
ROPs: 32
Compute Units: 32
L1 Cache: 16 KB (per CU)
L2 Cache: 2 MB
GTX 1650:
Shading Units: 896
TMUs: 56
ROPs: 32
SM Count: 14
L1 Cache: 64 KB (per SM)
L2 Cache: 1024 KB
That’s some serious Gimping and DePimping from DePimp Daddy JHH that’s so overpriced to milk the othertomperson’s wallet some more! And GN rates this SKU as DOA, just like that Brain DOA othertomperson and his butthurt from bending over for more as JHH works those hips! That TU117 is really binned down to a PU117 that really reeks of that Gimpvidia Gimp to really DePimp with those Shaders and TMU counts.
Mega developments on high-end parts? Meh. Something happens in the low-mid range? LET’S FUCKING BITCH ABOUT THIS.
amdmb.com fo the win YO!
LOL, can always count on you to keep it real.