I don't know why people insist on encoding screenshots of form-based windows in JPEG. You have very little color variation outside of text, which is typically thin and high-contrast against its surroundings. JPEG's discrete cosine transform will cause rippling artifacts in the background, which should be solid color, and will almost certainly produce a larger file anyway. Please, everyone, at least check how big a PNG will be before encoding it as JPEG. (In case you notice that I encoded it in JPEG too, that's because re-compressing JPEG artifacts makes PNG's file size blow up, forcing me to actually use JPEG.)
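If you want to sanity-check that advice yourself, here is a minimal sketch (assuming the Pillow library is installed; the file name is just a placeholder) that saves the same screenshot both ways and compares the resulting sizes:

```python
import os
from PIL import Image  # Pillow; assumed to be installed

src = "screenshot.png"  # hypothetical input file; substitute your own capture

img = Image.open(src).convert("RGB")   # drop any alpha channel so JPEG can encode it
img.save("test.png", format="PNG")
img.save("test.jpg", format="JPEG", quality=85)

png_size = os.path.getsize("test.png")
jpg_size = os.path.getsize("test.jpg")
print(f"PNG: {png_size:,} bytes vs JPEG: {jpg_size:,} bytes")
print("PNG is smaller" if png_size < jpg_size else "JPEG is smaller")
```

For a typical form-heavy window with flat backgrounds, the PNG usually comes out both smaller and artifact-free.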
It also makes it a bit more difficult to tell whether a screenshot has been manipulated, because the compression artifacts make everything look suspect. Regardless, BenchLife claims to have a leaked GPU-Z result for the GeForce GTX 1050. They claim that it will use the GP107 die at 75W, although the screenshot shows neither of these. If true, this means that it will not be a further cut-down version of GP106, as seen in the two GTX 1060 parts, which would explain a little bit why NVIDIA wanted both of those to remain under the 1060 branding. (Although why they didn't call the 6GB version the 1060 Ti is beyond me.)
What the screenshot does suggest, though, is that it will have 4GB of GDDR5 memory on a 128-bit bus. It will have 768 shaders, the same as the GTX 950, although clocked about 15% higher (boost vs boost) and rated 15W lower, bringing it back within the range of PCIe bus power (75W). That doesn't mean it will lack a six-pin external power connector, but it could go without one, like the 750 Ti did.
This would give it about 2.1 TeraFLOPs of performance, which is on par with the GeForce GTX 660 from a few generations ago, as well as the RX 460, which also has a 75W TDP.
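For the curious, here is the rough math behind that figure (a back-of-the-envelope sketch; the GTX 950's 1188 MHz reference boost clock is the assumed baseline, so the exact number depends on final clocks):

```python
# Rough single-precision estimate: shaders x 2 ops per clock (FMA) x boost clock
gtx950_boost_hz = 1188e6                    # GTX 950 reference boost clock
rumored_boost_hz = gtx950_boost_hz * 1.15   # ~15% higher, per the rumor
shaders = 768

tflops = shaders * 2 * rumored_boost_hz / 1e12
print(f"~{tflops:.2f} TFLOPS")              # prints ~2.10 TFLOPS
```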
I would think the GTX 1050 would come with performance about 5-10% faster than a GTX 960. With the current state of the video card landscape, anything less is a complete waste.
BenchLife is usually legit, but one thing bugs me with this screenshot…why is all the text aligned along the top edge of each field box?
disregard…was just me…or an illusion. lol
It is really odd to produce a dedicated chip for this card when I'm sure they've got plenty of "not fully grown" chips from higher up the line that could fit the specs, so what do they do with those if not dumping them into 1050s? (I know you're going to tell me they go into low-end 1060s, but still….)
Yeah. You're right, they're going into 3GB 1060s, but I agree. While this actually is common, it does seem like there are a lot of designs floating around.
Nvidia is good at producing a more segmented product line, milking an extra dollar out of every little incremental increase in performance, and pricing its top-end consumer SKUs even higher! This is why one company having too much market share is a bad thing. Intel does the same thing with its line of many SoC/CPU SKUs, with Intel's top consumer SKUs priced way above the average price of its other consumer offerings.
Yeah, of course they're really good at segmenting, but so far GPUs' higher transistor counts compared to CPUs have made things a bit more complicated. So my question is: has the cost of making a new low-end GPU dropped so low that binning is just not necessary anymore in this segment?
Nvidia has enough sales volume to justify more new tapeouts than AMD, what with Nvidia having a hold over more laptop OEMs. Just look at the number of gaming laptops with the desktop versions of the GPUs stuffed in, with Nvidia having more design wins ($$$ to buy into OEM laptop SKUs) than AMD, who just now has maybe one design win for a desktop RX 470 shoehorned into a gaming laptop. Nvidia's hold over that majority market of consumer GPUs allows Nvidia to strategically segment its product lines in an effort to get die sizes down and more dies per wafer, making for more profits. AMD has to stick with Polaris 10 (RX 480/470) and Polaris 11 (RX 460); the RX 470 is a binned part, while the RX 460 is designed that way from a different tapeout.
AMD maybe could use a new Polaris 10X tapeout with more than Polaris 10's complement of total shader/other resources for some more competition with the GTX 1070 line of Pascal SKUs before Vega arrives. Nvidia already has its mobile variants, while AMD has so few laptop design wins that AMD is better off trying to get more of those desktop Polaris SKUs inside of more gaming laptops and using its RX 460 for its mobile variants. I'd like there to be more laptop variants from AMD derived from the Polaris 10 design with a wider GPU bus to more GDDR5 memory, but really, that one RX 470 desktop SKU inside a laptop SKU is a great idea for more sales, and Nvidia gets more sales that way. I really hope that there will be some high-end gaming laptops with at least 6/8 Zen cores and an RX 470 desktop SKU for the really high-end gaming laptop market.
I think that AMD's future Zen/Polaris APUs-on-an-interposer designs may be better for the majority of the mainstream laptop market, even if the APU only comes with one 4GB HBM2 stack, as that would offer enough bandwidth with HBM2's 1024-bit-wide connection directly to the Zen/Polaris die (or dies), and the rest of the RAM supplied by a single channel of DDR4 DIMM DRAM with at least 12 more GB of capacity.
So a much fatter integrated Polaris GPU could be properly fed by the HBM2, which would act like a faster L4 cache in front of the slower DIMM-based DDR4 memory. The 4GB of HBM2 would mostly feed the integrated Polaris GPU's needs, and some of the Zen cores' needs, with the rest of the slower memory mapped into the HBM2, maybe via a 24/32-way cache table from the slower DDR4 memory into the HBM2. One thing to note is that the DDR4 memory would have its own channel, with the HBM2's separate channel being on the interposer and separately accessible. So the HBM2 stack's 1024-bit connection, divided into smaller channels, would only need to utilize one or two of those smaller HBM2 channels to service any transfer requests between the HBM2 cache and DDR4 RAM. The Zen cores would not really need that much of the HBM2 memory to operate, with the rest given over to the GPU's needs while gaming or using graphics software.
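To put rough numbers on that bandwidth argument (a back-of-the-envelope sketch assuming a typical 2 Gbps-per-pin HBM2 stack and single-channel DDR4-2400; actual parts would vary):

```python
# Peak bandwidth = bus width (in bytes) x transfer rate per pin
hbm2_stack_gbs   = (1024 / 8) * 2.0e9 / 1e9   # one 1024-bit HBM2 stack at 2 Gbps/pin
ddr4_channel_gbs = (64 / 8) * 2.4e9 / 1e9     # one 64-bit channel of DDR4-2400

print(f"HBM2 stack:           ~{hbm2_stack_gbs:.0f} GB/s")    # ~256 GB/s
print(f"DDR4-2400, 1 channel: ~{ddr4_channel_gbs:.1f} GB/s")  # ~19.2 GB/s
```

Even a single stack would give the integrated GPU roughly an order of magnitude more bandwidth than the DDR4 channel behind it, which is what makes the cache-in-front arrangement plausible.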
Only discrete cards counted and not mobile chipsets. I don’t think Nvidia ever segmented as much as AMD did with the 200 series of video cards. 19 different models. Who was being greedy and milking their consumers?
https://en.m.wikipedia.org/wiki/AMD_Radeon_Rx_200_series
600 series had 13 models, 700 series had 16 models and 900 series had 6 models. Tsk, tsk. You sounded cool bashing on Nvidia but were a little short on facts. Looks like after the 700 series Nvidia got a lot less segmented. So far with Pascal this seems to be the case as well.
https://en.m.wikipedia.org/wiki/GeForce_700_series
https://en.m.wikipedia.org/wiki/GeForce_600_series
https://en.m.wikipedia.org/wiki/GeForce_900_series
I have to wonder at just how selectively you chose to count those models, based on your links there.
Are you excluding mobile chipsets? Because excluding mobile chipsets, I’m counting 26 models in the 600 series, which is way more than the Rx-200 series’ 19. Are you counting the 4 different GT630 models as one? What about the 5 different GT640 models? The two different 620s? Are you counting the 660 and 660 OEM as the same one?
What about the 19 different models of the 700 series? They had three different 730s, two 740s and two 760s.
Even the 900 series has 8 models. Unless you’re not counting the OEM models – in which case, your counts on everything else are off as well – including AMD’s 200 series (which, incidentally, is either 21 models or 16 models, depending on whether you’re counting the OEMs.)
How exactly are you counting them?
In any case, yes, after the 700 series, Nvidia got a lot less segmented.
And AMD got a lot less segmented after the 300 series.
And at this point in time, Nvidia is still more segmented than AMD.
Just heard 1040 is coming down the line
We won't have to wait long to find out about it, as the release date is rumored to be October, and of course it is also rumored to be competition for the RX 470. Maybe a cut-down model of this will compete with the RX 460. Maybe Nvidia will just let AMD have their scraps.