AMD beat NVIDIA to the punch with its 7000-series “Southern Islands” graphics cards, and if the rumors hold true, the company may well accomplish the same feat with its next-generation architecture. Codenamed Sea Islands, the architecture behind AMD’s 8800-series is (allegedly) set to debut around the January 2013 time frame. Featuring DirectX 11, GPGPU and power-efficiency improvements, 3.4 billion transistors on a 28nm process, and a rumored sub-$300 price, will the 8850 and 8870 win over enthusiasts?
AMD launched its Southern Islands graphics cards with the Graphics Core Next (GCN) architecture and Pitcairn GPU in March of this year. Since then, NVIDIA has moved into the market with the 660 and 660 Ti, and budget gamers have plenty of options. However, yet another budget gaming GPU from AMD will arrive in just a few months if certain sources' leaks prove correct. The 8850 and 8870 graphics cards are rumored to launch in January 2013 for under $300 and offer up some significant performance and efficiency improvements. Both the 8850 and 8870 GPUs are based on the Oland variant of AMD’s Sea Islands architecture. As a point of reference, AMD’s 7850 and 7870 use the Pitcairn version of AMD’s Southern Islands architecture – thus Sea Islands is the overarching architecture and Oland is an actual chip based on it.
Sea Islands is essentially an improved and tweaked Graphics Core Next design. It will continue to utilize TSMC's 28 nm process, but will require less power than the 7000-series while being much faster. While the specifications for the top-end 8900-series are still up in the air, Videocardz claims sources in the know have supplied the following numbers for the mid-range 8850 and 8870 Oland cards.
Videocardz put together a table comparing AMD's current and future GPU series.
The GPU die size has reportedly increased to 270mm^2 versus the 7850/7870’s 212mm^2 die. This increase is the result of AMD packing in an additional 600 million transistors for a total of 3.4 billion. 3D Center further breaks down the GPU, stating that the 8870 will feature 1792 shader units, 112 texture mapping units (TMUs), 32 ROPs, and a 256-bit memory interface. The 8850 graphics card will scale the Oland GPU down a bit further, featuring only 1536 shader units and 96 TMUs, but keeping the 32 ROPs and 256-bit interface.
For comparison, here’s a handy table comparing the 8850/8870 to the current-generation 7850/7870 (which we recently reviewed).
| | Radeon HD 7850 | Radeon HD 8850 | Radeon HD 7870 | Radeon HD 8870 |
| --- | --- | --- | --- | --- |
| Die Size | 212mm^2 | 270mm^2 | 212mm^2 | 270mm^2 |
| Shader Count | 1024 | 1536 | 1280 | 1792 |
| TMUs | 64 | 96 | 80 | 112 |
| ROPs | 32 | 32 | 32 | 32 |
| Memory Interface | 256-bit | 256-bit | 256-bit | 256-bit |
| Memory Bandwidth | 153.6 GB/s | 192 GB/s | 153.6 GB/s | 192 GB/s |
So while the memory bus and the number of ROP units are staying the same, the larger die gets you more shaders and texture units along with a boost to overall memory bandwidth – sounds like an okay compromise to me!
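For anyone curious where those bandwidth figures come from, the math is just bus width (in bytes) multiplied by the effective memory data rate. The memory clocks in the quick sketch below are my own inference rather than part of the leak: 1500 MHz GDDR5 (6 Gbps effective) is the rate that produces exactly 192 GB/s on a 256-bit bus, which also lines up with what the GTX 680 already ships with.

```python
# Back-of-the-envelope GDDR5 bandwidth math for the rumored cards. The memory
# clocks passed in below are assumptions (GDDR5 transfers 4 bits per pin per
# clock), chosen because they line up with the quoted bandwidth figures.
def gddr5_bandwidth_gb_s(bus_width_bits, memory_clock_mhz):
    effective_gt_s = memory_clock_mhz * 4 / 1000   # effective transfer rate, GT/s
    return (bus_width_bits / 8) * effective_gt_s   # bytes per transfer * GT/s = GB/s

print(gddr5_bandwidth_gb_s(256, 1200))  # ~153.6 GB/s, matches the 7850/7870
print(gddr5_bandwidth_gb_s(256, 1500))  # ~192.0 GB/s, matches the rumored 8850/8870
```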
AMD has managed to increase clock speeds and GPGPU performance with Oland/Sea Islands as well. On the clockspeed front, the 8850 has base and boost GPU clockspeeds of 925 MHz and 975 MHz respectively, while the 8870 has base/boost clocks of 1050 MHz/1100 MHz. That is a nice improvement over the 7850’s 860 MHz and the 7870’s 1000 MHz clockspeeds. AMD is also bringing its PowerTune with Boost functionality to the Oland-based graphics cards, which is a welcome addition.

The theoretical computational power of the graphics chips has been increased as well: roughly 70% going from the 7850 to the 8850 and roughly 55% going from the 7870 to the 8870, for both single and double precision. Single precision performance rises to 2.99 TFLOPS on the 8850 (versus 1.76 TFLOPS on the 7850) and 3.94 TFLOPS on the 8870 (versus 2.56 TFLOPS on the 7870). The single precision numbers are what matter for gaming and the GPU-accelerated applications consumers typically run. They are not representative of high performance computing (HPC) workloads where precision is important (think simulations and high-end mathematics), and that is where the double precision numbers come in. The 8800 series gets a nice boost there as well, topping out at 187.2 GFLOPS for the 8850 and 246 GFLOPS for the 8870, compared to the 7850’s 110 GFLOPS and the 7870’s 160 GFLOPS.
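And the compute figures fall straight out of the shader counts and clocks: each GCN shader can issue one fused multiply-add (two FLOPs) per clock, and the leaked double precision numbers work out to essentially 1/16 of single precision. A minimal sketch, assuming boost clocks and that 1/16 DP ratio:

```python
# Theoretical throughput from the rumored specs: 2 FLOPs (one FMA) per shader per
# clock; the 1/16 double precision rate is inferred from the leaked numbers themselves.
def gcn_throughput(shaders, clock_mhz, dp_ratio=1 / 16):
    sp_tflops = 2 * shaders * clock_mhz / 1e6   # single precision, TFLOPS
    return sp_tflops, sp_tflops * dp_ratio      # (SP TFLOPS, DP TFLOPS)

for name, shaders, boost_mhz in [("HD 8850", 1536, 975), ("HD 8870", 1792, 1100)]:
    sp, dp = gcn_throughput(shaders, boost_mhz)
    print(f"{name}: {sp:.3f} TFLOPS SP, {dp * 1000:.1f} GFLOPS DP")
# HD 8850: 2.995 TFLOPS SP, 187.2 GFLOPS DP
# HD 8870: 3.942 TFLOPS SP, 246.4 GFLOPS DP
```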
The sources also disclosed that while the 8850 would have the same TDP (thermal design power) rating as the 7850, the higher-end 8870 would actually see a decreased 160W TDP versus the previous generation’s 175W. Unfortunately, no specific power draw numbers were mentioned beyond the cards being more power efficient, so it remains to be seen just how much (if at all) less power the GPUs will need. The sources put the 8870 at the same performance level as the NVIDIA GeForce GTX 680, which would make this an amazing mid-range card if true, especially considering the rumored prices of $279 for the 8870 and $199 for the 8850. Granted, those prices are likely much lower than what we will actually see if AMD does indeed launch the cards in January, since the company will not have competition from NVIDIA’s 700 series right away.
In some respects, the rumored specifications seem almost too good to be true, but I’m going to remain hopeful. I’m looking forward not only to the mid-range Oland GPU but also to the unveiling of AMD’s top-end 8900 series (which should be amazing, based on the 8800-series rumors).
What do you think of the rumored 8850 and 8870 graphics cards from AMD? Will they be enough to tempt even NVIDIA fans?
I just need to decide if I want to buy a 7950 now and crossfire it later when it’s much cheaper, or wait through the next few grueling months to pick up an 8870 or 8950.
Totally, I tend to upgrade every other, or every third (depending how much life I can squeeze out of it), generation, and the 8950 looks like a good candidate to replace my 6950 🙂
6950 2gb card…best bang for the buck I’ve ever seen when unlocked to 6970.
Yup, that's the one I have, a pseudo 6970 :). It's an XFX 6950 2GB reference version. I have it with unlocked shaders but running at 6950 clockspeeds. It can be clocked higher, but gets a bit too warm for comfort (doesn't help the AC in the hot summer heat).
I was actually running mine fully unlocked and at stock 6970 speeds no problem (MSI reference version). At one point I even OC’d it for a friendly PCMark competition with a buddy of mine and had it OC’d about as high as real 6970s OC to, which was a huge surprise. I figured it was stupid to leave it OC’d so high though, because all I was playing was MW2 at the time. Not exactly a graphics hog of a game.
hehe, yeah the 6950s had plenty of overclocking headroom so long as you didn't get a dud with a particular card. I managed to get mine to 1005MHz core, and I could probably go a bit higher now that I'm using water cooling, but I'd mostly just be wasting power :). Once games come out that it can't handle at the settings I'm used to, I'll probably crank it up to wring enough life out of it to get me to an 8970 upgrade :D.
1005MHz @ 1250mV 🙂
https://docs.google.com/spreadsheet/ccc?key=0AslQjNW7LjDpdHBlMkpoMkxDZGNidWhxeEJ6Vnd1WWc&authkey=CNeSnssI
Same here, I’ve stuck with my 2×5870 setup till now, and it is finally starting to have trouble keeping up decent, fluid framerates in new games.
I was first thinking of getting a 78xx or 79xx series card, but after hearing the 8xxx series was coming rather soon, waiting wasn’t a hard decision.
Even if the 8xxx series isn’t a large improvement over the 7xxx series, it should still mean that the 7xxx series will drop even further in price, allowing for a cheap replacement of both my cards. But looking at the projected price point for the new generation, it wouldn’t seem to be that much of a bigger investment to get two 8xxx series cards to mop the floor with everything to come for a few years again.
The 7950 will still be very competitive throughout the entire 8000 series, so it’s still a good choice. I still run a 6970; they are all high-end DX11 cards and are good. The only reason I am contemplating jumping over to 2x 7850s is for the slight performance boost of probably 10-15% vs my single 6970, at less power consumption than my overclocked 6970, with less ambient heat and reduced noise (e.g. my 6970 works overtime on its own and throws off a lot of heat and sound, but is blazing fast, considering its age). I can run any game maxed. There’s another part of me telling me to just pick up another 6970, but that would require a PSU upgrade, since they are power hungry. I would then give my wife my 6970 and let it run quiet and cool-like in her system.
But even today, you can’t go wrong with a 6xxx-8xxx AMD card, just like you can’t go wrong with a 4xx-6xx NVIDIA card. They are all DX11. Personally, if you fit into any of these categories and are running a single card, at least consider purchasing a 2nd, 3rd or 4th card (CrossFire/SLI), and just hold out until DX12.
I think AMD will find that by cashing in on people’s goodwill with their “fuck you” prices on the 7000 series cards, they have really only screwed themselves in the future. AMD graphics is not a premium brand, and they’re about to find out the dollar store doesn’t get to become Target simply by raising prices because they suddenly want to be “upscale”.
Crash and burn AMD, crash and burn so someone can buy out the whole thing (large patent-holding companies never die, they just get bought) and fire all the management, starting with Rory “you’ve got to be kidding” Read.
To the previous comment: you are a fucking retard. Dare you to beg of an answer.
The butthurt is strong with this one…
The “high” AMD prices have been gone for a long time…
WOW, so the butthurt trolls are out again; it happens every time AMD are about to release or stomp on nvidia. I feel this time, nvidia are really going to feel the pain. Nvidia have only so many engineers, working on both the Tegra and GPU etc, whereas AMD have separate engineers for CPU and GPU and they both work together on the APU. We will see nvidia falling behind AMD every gen; we are already seeing this happen. People need to start selling their nvidia shares ASAP and buy AMD shares, which will soon rocket up.
lol. I’m not sure if you’re being serious or joking around.
Shut up, stupid miscarriage, and stop thinking, because it hurts humanity.
I like the idea of a 20% to 25% price reduction on the same tier of cards for the next generation. If performance mirrors the increases in clock speeds and whatnot, it can be a new golden-boy card like the GTX 460 ti in that $150 to $250 price point.
GL for 1500MHz memory to achieve that BW.
Indeed, I was curious about that as well, hence the 'too good to be true' worry of mine 🙁
Pretty much all of the NVIDIA GTX 680 cards feature 1500 MHz memory (6 GHz effective). The stuff is pretty common now. What isn't common is a memory controller that can actually run the chips at those speeds. NVIDIA spent a LOT of time getting their controller right and able to run at those speeds. AMD just has some catching up to do, but the memory chips are there.
Ah, you're right, the GTX 680 is a 256-bit bus; I was thinking it was larger than that. In that case, this should be doable if AMD follows NVIDIA's lead 🙂
If the price point is fair, I will gladly get one of these cards and throw my GTX 570 away.
hehe, feel free to toss it my way 😉 lol!
And as some of you may remember, Nvidia’s GTX 680 is not actually their real flagship; that’s GK110, which they have only put into their Tesla line so far. The GTX 680 was supposed to be their mid-range, but it was such a leap in performance that they didn’t need to release the bigger/badder GPU and could make more money on the cheaper one.
Rumors rumors rumors 🙂
Hmm true, GK110 does look really beastly from what I've seen specs-wise :). I'm assuming that will be the 700 series?
Who knows… I mean, I don’t know anyone that works for Nvidia (cough cough). But I wouldn’t be surprised if they already redeveloped the GK110 into something better for the 700 series… I mean, they finished that chip months ago and only released it as Tesla.
hehe 🙂
Yeah, it would be really weird for NV to spend all that money developing GK110 and then only use it for Tesla. Granted, they probably make more money with Tesla per card than gaming cards (just my guess) but if they could also use it for the basis of a 700 series gaming card lineup, I would think that would be the smart thing to do :).
yup. I take it GK110 chips with some defective cores will most likely end up being GeForce or Quadro parts.
Based on the spec, a fully enabled GK110 will have 15 SMX, and IMO nvidia can make several models based on the chip alone. I already think that the next nvidia flagship might not use a fully enabled GK110.
fuck you
It seems like the 8850 will be 1GB, which at $199 would match the current price of the 7850 2GB. Perhaps there won’t be a 2GB 8850 until a refresh in the early or late summer? I can wait that long, and it will be worth it. My monitor is 1920×1080, so I would benefit from the extra memory… The 1GB version would be a great card for a 1680×1050 or 1280×720 monitor.
I do have a friend who works as a microstructure engineer, not for Nvidia, but for a contracting company that works with Nvidia… she has leaked a little to me, knowing I am a 3D modeling and gaming fan, and the architecture jump from the 600 to 700 series cards isn’t going to be as dramatic as some of you may think.
Nvidia sees the budget battle is heating up and is planning to launch models accordingly.