Benchlife.info, via WCCFTech, believes that AMD's Radeon R9 300-series GPUs will launch in late June. Specifically, the R9 380, the R7 370, and the R7 360 will arrive on the 18th of June. These are listed as OEM parts which, as we have mentioned on the podcast, Ryan speculates could mean that the flagship Fiji XT might go by a different name. Benchlife.info seems to think that it will be called the R9 390(X), though, and that it will be released on the 24th of June.
WCCFTech is a bit more timid, calling it simply “Fiji XT”.
In relation to industry events, this has the OEM lineup launching on the last day of E3 and Fiji XT launching in the middle of the following week. This feels a little weird, especially because AMD's E3 event with PC Gamer is on the 16th. While it makes sense for AMD to announce the launch a few days before it happens, that doesn't make sense for OEM parts unless they were going to announce a line of pre-built PCs. The most likely candidate to launch gaming PCs is Valve, and they're one of the few companies that are absent from AMD's event.
And this is where I run out of ideas. Launching a line of OEM parts at E3 is weird unless it was to open the flood gates for OEMs to make their own announcements. Unless Valve is scheduled to make an announcement earlier in the day, or a surprise appearance at the event, that seems unlikely. Something seems up, though.
I hope we get a different 300 series and not the OEM one. A 7850 as the 370? 😮 Such a bad joke.
With the Nvidia-friendly press preparing all the guns to fire at AMD’s face, it’s logical to keep Fiji away from the rest of the line. Unfortunately, AMD will probably do what we already know from the OEM 300 series: introduce many rebrands. They do have empty pockets. So negative press about those rebrands will be logical, fair, and all over the place.
That will give the green press the opportunity to focus readers’ attention on the rest of the line and not on Fiji’s performance. Fiji will be fast and great, with HBM and stuff and whatever, hopefully, but I bet from now that much ink will be used to explain why this series is a disappointment and how Fiji is too little and only for very few.
LOL JohnGR, again, doing AMD’s dirty work for them I see, apologizing as any good apologist/fanboy would for what you expect to come.
So, would the “Nvidia-friendly press” have anything to fire at AMD’s face if AMD didn’t provide all the ammunition to begin with in this expected almost-full line-up of Rebadgeon products? Just curious what your take on this is. 🙂
We saw the press covering up Nvidia’s dirty work with the GTX 970, accepting all the excuses from their marketing department.
The press uses two completely different standards when facing AMD’s or Nvidia’s products. That was proven, no matter how much you want people to forget it.
You especially had only good words for Nvidia’s 970 fiasco. You were all over the place defending them. Not just being apologetic, but also advertising the 970.
People like you laugh because your beloved company can do all the dirty tricks and get away with it.
Covering up? LOL, they did detailed testing and dug up the background information; how do you think it got enough attention for Nvidia to issue a statement on it, going all the way up to the CEO?
And I was everywhere making excuses for them? Please show me one link lol! I clearly stated things like this need to be checked and corrected and any company should be held accountable for the specs they publish. Obviously Nvidia should have been more careful and I am sure they will be in the future. I also said anyone who honestly feels wronged or cheated should absolutely pursue refund and be granted it by the retailer or Nvidia directly.
But I also said the paper spec change did not change the immense value and performance the 970 offered. Indeed, there is not a single thing that has changed from those initial reviews for the end-user, it is still a fantastic card and value at that $300-$330 range. Clearly the market agrees as well, as Nvidia has still made a killing in those quarters since then.
Now, how do you feel about AMD covering up and lying about all things FreeSync. What happened to 9-240Hz? lol. Oh right, you were clearly busy lying, covering up, and apologizing for AMD there. And yes, I can quite easily provide links to prove it. 😉
If I lie to you, believe me, when you find out about that, I will start explaining as much as possible, hoping not to get sued by you. It is called damage control. As for the CEO: “IT WAS A GOOD DESIGN.” LOL
And thank you for proving once again that you will defend and advertise Nvidia even in cases where you usually attack if it is another company. There are links everywhere. Now you just hide behind that “show me links”. If it weren’t true, you would provide them yourself.
By the way: show me a GTX 970 with 4GB of unified RAM, 2MB of cache, 64 ROPs, and a true 256-bit data bus, then let’s compare the results of that card with a retail 970, and I will show you where the problem is. JMO
Oh geez, here’s you stupidly posting nonsense again. I have not defended Nvidia’s handling of the 970 a single time. I’ve clearly stated anyone who feels wronged by the misstated specs should pursue and be granted a refund. Nvidia should absolutely make sure to be more careful with their specifications in the future. To anyone with sense, this is in no way defending or making excuses for Nvidia.
Now I am simply pointing out the obvious. None of the restated specs had ANY adverse or negative change to the tremendous value and performance the 970 provided at launch and continues to provide to its end-users, today.
This is VERY different from all the BS regarding FreeSync that you’ve repeatedly made excuses for while defending AMD. AMD made a number of misleading statements that turned out to be untrue and that adversely impacted their users, or those who had plans to use their tech, and yet you are still here defending them. Interesting.
-They lied about FreeSync being essentially free, firmware flashable on panels already on the market.
-They lied about FreeSync not requiring any additional hardware.
-They lied about FreeSync being better supported, when even many of their own GPUs can’t support FreeSync.
-They lied when they said FreeSync would handle VRR better, with 9-240Hz ranges.
And what do we see on the market? People being burned by these misleading statements and realizing FreeSync isn’t better than G-Sync, and that there are still a number of unresolved issues like ghosting, overdrive being disabled, limited VRR windows, VRR disabling at low FPS, etc.
Any update on those simple driver fixes btw? As someone who has spent so much time defending AMD and insisting these are easy problems to fix, I would think you would be actively communicating with AMD reps and pushing for a fix, given you have put your name to these apologist remarks defending AMD.
Just look at your post. Another advertisement for the 970.
Every time you want to criticize Nvidia, you advertise their products. And then you attack AMD.
For something that is 100% Nvidia’s mistake, you cover up for Nvidia, advertise their product, and attack AMD.
LOL you are funny.
I see no links from you, because there are none.
LMAO again, how is it an advertisement when it is just stating the obvious to idiots like you?
Net sum of Nvidia’s 970 spec misstatement and correction to the end-user: nothing.
Net sum of AMD’s dishonest account of FreeSync and subsequent sub-par implementation on the market: a lot of market confusion and FreeSync in disarray.
See the difference? There is nothing to make excuses for regarding Nvidia, as they have owned their mistake and the net sum of that mistake is basically meaningless for the end-user.
Meanwhile, FreeSync is junk and AMD fanboys like you are blaming ANYONE *EXCEPT* for AMD. So who loses? Fanboys like you, because you end up with a shoddy broken product that may never get fixed.
Once again, advertising Nvidia using two different standards.
Once again, too stupid/fanboy to see the difference:
Me: Nvidia is 100% at fault for the 970 memory spec error and should take full responsibility for their actions by refunding customers if they ask for it.
Reality: 970 is still the best GPU in that price range providing the same great value and performance to anyone who buys one.
You: AMD isn’t at fault for FreeSync monitors, the vendors need to fix their monitors, it’s not FreeSync’s fault that VRR is broken and panels ghost or break Overdrive.
Reality: FreeSync is still broken and in disarray, with no guarantee any of these issues are fixed to the detriment of AMD fanboys like you.
See the difference? Oh right, too fanboy. LMAO.
I am really hoping next year is a return to form for AMD on their CPU side. Zen has piqued my interest. I always want to go AMD, but I just can’t when Intel has the better processor and Nvidia has the better drivers. But if I like what I see in Zen, and their new graphics for 2016 are awesome, I will definitely do an AMD build next.
Why do you say nvidia has better drivers? That is simply not the case.
Hi Roy, that’s pretty cool that you are interacting with the community. I just saw someone retweet that you were working hard to get AMD’s performance in Project CARS up to par with the competition. You don’t think this is an example of Nvidia having better drivers, especially on day one?
It’s an example of how Nvidia gimps previous-generation chips in order to sell new ones.
How can a 960 beat a Titan or a 780?
Oh yeah, don’t ask that question, because it’s inconvenient for Nvidia, with forums complaining about previous-gen performance on new drivers. Old-gen users have to be careful installing new drivers.
Wow, it’s hard to find this level of brilliance on the internets. Maybe Project CARS is programmed to take full advantage of newer architectures like Maxwell?
I guess you similarly wondered how an 8800GT was able to beat a 7800GTX? There are going to be fringe cases of new midrange beating old high-end. That’s why we upgrade, and that’s why it’s seldom a good idea to buy an outgoing flagship. Chances are, the newer chip gets it done as well or better with less heat and less hardware behind it.
But let’s ignore the more important question: why is that same 960 beating AMD’s flagship 290X? 🙂
Oh right, AMD drivers just not up to par.
Stupid response like always from Chizow
Stupid response as usual from anonymous AMD fanboy #2259
Actually, he has a valid point. You pointed out that the reason Project CARS is doing better on Maxwell than Kepler is that the game is written for that specific architecture. Therefore, how can it run comparably on any other architecture, be it Nvidia or AMD, if the game is the deciding factor for bad performance?
No, I said newer architectures, like Maxwell. That could also mean GCN. Kepler is last-gen, and there were significant design changes compared to Maxwell, so if devs are coding for the latest arch, then obviously it will perform better than old hardware; this is normal and expected.
AMD’s problems however are clearly driver related, because Project Cars gets an instant uplift of 20-30% in Win10 and on older driver builds which the Devs have made mention of. Hardware is the same, arch is the same, AMD being inconsistent in their driver development and delivery, the same.
Also, this game is extremely CPU dependent, so AMD’s poor multi-threaded DX driver performance is undoubtedly rearing its ugly head, again. If you don’t believe me, please look at any of the 3DMark DX12 API draw call tests and compare single-threaded vs. multi-threaded driver performance. You will see that not only are AMD’s results nearly identical for single- and multi-threaded, but their multi-threaded figure is lower than Nvidia’s.
full @ AT: http://www.anandtech.com/show/9112/exploring-dx12-3dmark-api-overhead-feature-test/3
Your argument is only true if the underlying hardware for each generation for both AMD and Nvidia is the same. This is patently not true, however. AMD hardware and Nvidia hardware do similar things, but use different ways of doing them.
They start in the same spot, take different paths, then wind up in similar, though not quite the same, place. The external interfaces make them look the same, but they aren’t internally.
The most obvious way to see this is in the way each counts compute units on the chip. Take any set of cards from Nvidia and AMD with similar benchmark performance, and notice the drastic difference in the number of “Cuda cores” or “Stream Processors.” That isn’t just marketing, that is in direct relation to how the internals of the chips are structured.
When a game dev tunes their game for only one generation of a chipset, they are putting the work of making the game run well on the driver developers: in this case AMD for all of their cards, and Nvidia for their older cards if they want to support them.
Yes, drivers do play a part in all of this; they present what looks to be the exact same interface to the game, but when the game is tuned for the “fast path” of one card without consideration for the others, it is the game’s fault, not the vendor’s. Unless you want to argue that the vendors have been supporting buggy games for more than a decade; that part is on the vendors.
No, not at all. I am comparing GCN to both Maxwell and Kepler, because it is clearly underperforming relative to Maxwell in this situation compared to other games. It is pretty well established at this point Kepler is lagging behind Maxwell, but again, this can be easily attributed to hardware arch changes.
If devs are making more use of certain API functions that run certain shader routines, the benefits are going to be more obvious on hardware that better supports those calculations. This is all a part of ASIC design: you may have certain forward-looking functionality that you support but that runs slower due to less dedicated hardware, and as that tech becomes more prevalent, you dedicate more hardware to facilitating those calculations in newer designs. A good example is the evolution of tessellation performance on DX11 hardware. It was very expensive and inefficient at the beginning of DX11, but as time went on and devs implemented it more in games, both IHVs dedicated more resources to improving tessellation performance.
The DX12 comparison is not Apples to Apples. No DX12 benchmark can give you a reliable comparison to DX11 drivers. Period. They are two very different things.
You can compare DX12 to a similar thing, such as Mantle or Vulkan, but DX11 is an inherently single-threaded API with the appearance of being multi-threaded. The DX11 driver model lies. That goes for AMD and Nvidia.
DX12 is built from the ground up as a multithreaded API, with all the multi-core knowledge that has been gained in the last few years since multi-core processors became the norm. You can maybe compare the performance on the same card, say a Titan on DX11 vs. a Titan on DX12, and see what multithreading gets you. But across cards in the same family it is pointless, and across vendors it is meaningless.
Now, AMD has already shown they can do a multithreaded API, with both Mantle and the Xbox One and PS4. What, did you think Microsoft wrote that driver? Or Sony? When AMD had all the info about the hardware, and drivers to base the code on? Yeah, right.
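To make the submission-model difference concrete, here is a rough sketch in plain Python threads rather than real D3D calls. All names (`CommandList`, `record_draw`) are illustrative stand-ins, not actual API functions: the DX12-style path gives each thread its own command list with no shared lock, while the DX11-style path funnels every call through one serialized context, which is the bottleneck the comment describes.

```python
import threading

class CommandList:
    """Toy stand-in for a GPU command list: just records draw IDs."""
    def __init__(self):
        self.draws = []

    def record_draw(self, draw_id):
        self.draws.append(draw_id)

def record_dx12_style(num_threads, draws_per_thread):
    """Each thread records into its own command list - no shared lock."""
    lists = [CommandList() for _ in range(num_threads)]

    def worker(cl, base):
        for i in range(draws_per_thread):
            cl.record_draw(base + i)

    threads = [threading.Thread(target=worker, args=(lists[t], t * draws_per_thread))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # "Submission": concatenate the per-thread lists in order.
    return [d for cl in lists for d in cl.draws]

def record_dx11_style(num_threads, draws_per_thread):
    """All threads contend for one immediate context behind a single lock."""
    context = CommandList()
    lock = threading.Lock()

    def worker(base):
        for i in range(draws_per_thread):
            with lock:  # the serialization point the DX11 driver model hides
                context.record_draw(base + i)

    threads = [threading.Thread(target=worker, args=(t * draws_per_thread,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return context.draws

if __name__ == "__main__":
    a = record_dx12_style(4, 100)
    b = record_dx11_style(4, 100)
    print(len(a), len(b))  # both record 400 draws; only the contention differs
```

Both paths produce the same work; the difference is that the DX11-style path spends its time waiting on the lock, which is why a fast multi-threaded driver matters so much under DX11 and so little under DX12.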
Again, please read the test and how it is set-up. The DX11 drivers are run and tested in DX11, so yes it is relevant if they are just taking a snapshot of how many draw calls each driver could produce in those situations. I mean, you do realize these were the same reasons that AMD, DICE and everyone else espousing the benefits of low level API, have been screaming from the rooftops forever right?
It’s no secret that AMD’s DX11 multi-threaded driver has been lacking in performance, and this API test clearly quantifies that fact. It does make you wonder what would’ve happened if AMD had just focused their resources on improving their DX driver instead of spending time and money on Mantle, though.
It’s my personal experience and opinion on the matter. I have had a much better experience with Nvidia drivers than I have had with AMD in the past. My friend has an 8350 and a 7970 GHz Edition and it’s a great system; for reference, I have a 3770K and a GTX 780, so they are pretty on par, with my processor being a little better. But Nvidia has consistently given better drivers, and we compare all the time. Like I said, it’s the experience I have had, but if I can, I do prefer AMD.
My experience with drivers is pretty much that Nvidia seems to be better at quickly putting out performance drivers for the higher-profile AAA games.
But other than that, as a person who has been alternating brands with each new card I have gotten over the last four years, both companies are on par with drivers. I’m talking no issues or weird stuff happening on either side when I have downloaded a new driver.
Are you talking to yourself?
“My friend has an 8350 and a 7970 GHz edition and its a great system and for reference I have a 3770k and a GTX 780 so they are pretty on par but with my processor being a littler better…”
If you think that, you should go back to consoles. An XBOX and PS are on par. An 8350 and 7970 on par with a 3770 and a 780? You’re trippin on heroin.
Benchmark leaks just in too
Pretty old and probably fake benchmarks that someone reposted today. I hoped for something new, but those are NOT new. Sorry.
Those are old leaks; the problem with leaks is that there is NO proof to back up whether they are legit or bogus BS. The one graphic is “19 games performance”, which lumps all the games into one bar and says NOTHING. There is no listing of the settings or games used, so you can’t say the test was even done fairly with the same settings, or unbiasedly.
Considering that the 360/370/380 are direct rebrands of the 260/270/285 and the 390(X) is the new (presumably) much faster card, that would leave a massive gap between the 380 and the 390 with no Hawaii rebrand in sight.
I can imagine two scenarios. Either they leave the 3xx series OEM only and release new cards as 4xx, or they simply drop the performance level cards entirely.
That is strange. I was wondering the same thing. Two things are possible, I think: 1, the 390X is not as fast as we think (hopefully wrong), or 2, some other Hawaii rebrand fits in the middle somewhere.
The cards aren’t even out; we don’t know if they’re rebrands, and we don’t know the speeds or specs of the cards, so why are people jumping the gun? Also, why are people complaining that the new cards might be like the R9 285? That card was a good card. It’s not new for companies to release a preview version of their newer cards in their old lineup, like the GTX 750 was a preview of the GTX 900 series cards. I personally think the newer AMD cards are going to be like the R9 285, cut down at the lower end, and maybe with a bit of a clock boost in the new R9 280. Oh well, who knows.
Huh? I am talking about the high end version. It appears to be quite a bit faster than the new 380/70/60 series. I am saying that it worries me because there is nothing really in between. I have a 290 right now and the 380/70/60 would be a stupid sidegrade/downgrade. I am hoping the “390x” is close to twice as fast as my 290 and I am worried it might not be. I am excited nonetheless.
Hmmm, maybe, but I don’t think so. I think we are going to get around a 10% performance increase and maybe some power reduction.
For the Top of the line AMD GPU? NO WAY is that even close to correct.
If rumored specs are correct, it is 4096 shaders vs. 2816 for Hawaii, so shouldn’t this be more like 45% (clock speed dependent) before other improvements? We may get a higher clocked, power optimized version of Hawaii though.
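For reference, the arithmetic behind that ~45% figure, using the rumored (not confirmed) shader counts from the comment:

```python
# Rumored shader counts - not confirmed specs.
fiji_shaders = 4096
hawaii_shaders = 2816

# Relative increase in shader count, before any clock-speed or
# architectural-efficiency differences are accounted for.
increase_pct = (fiji_shaders / hawaii_shaders - 1) * 100
print(round(increase_pct, 1))  # -> 45.5
```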
I want to see a 6-core Zen laptop SKU, as Intel does not appear to be focusing on getting the core count up on mainstream quad-core i7 laptop SKUs. With all that the new DX12 and Vulkan graphics APIs are promising with respect to multicore abilities, having more than 4 cores/8 processor threads in gaming laptops would more than likely add to gaming performance more so than in the past. If AMD can further its integration of graphics with the CPU cores for its Zen-based APUs, then when they are available in the gaming laptop form factor, AMD could have a popular SKU, be it 4-core, or even better a 6-core gaming laptop SKU. AMD’s integrated graphics, if it could be delegated by the user to handle game physics while the discrete GPU(s) did the graphics, would be a definite asset for gaming, as would using integrated/discrete GPU combinations for gaming or other graphics tasks.
AMD needs to be more forthcoming with technical information on just what the difference is between AMD’s products that are certified HSA 1.0 compliant and its current products that are not. So AMD needs to provide information about how much of the HSA 1.0 specification is implemented in hardware, and how much is implemented in software.
AMD is not the only member of the HSA foundation that is working towards HSA 1.0 compliance, so different SOC manufacturers may have different implementations that achieve the HSA 1.0 compliance.
I would imagine that at some point for AMD’s Zen or later APUs, they will have integrated the CPU/GPU even further, to the point that the CPU’s floating point instructions could be dispatched directly to the integrated GPU in hardware, without the software call or driver API methods that currently sit between CPUs and their integrated GPUs. Unified memory addressing is a start, as are other methods, but at some point there may be functionality on the integrated GPU that can be dispatched directly from the CPU core’s decoders/scheduler to the GPU’s floating point cores: not a pointer pass via a unified memory address space, but some kind of direct instruction dispatch from the CPU to the integrated GPU’s floating point resources. AMD appears to be slowly merging the CPU with the GPU in their APU line of offerings, and how far away they are from completely doing this is what has me interested.
That rumored server APU with 16 full Zen cores and a tightly integrated high-end Greenland GPU accelerator, both sharing HBM memory, if true, will most certainly open up the possibility of a high-end desktop gaming APU derived from it, or even a gaming APU that comes directly from a binned version of the server SKU itself. Even better would be having such an APU, with high-powered graphics and HBM, on a PCI card, giving the user the ability to add more CPU processing alongside the high-powered GPU cores. If the server/HPC market buys such a high-powered APU to accelerate its workloads, that will give AMD plenty of revenue for R&D, and that research, mostly funded through server/HPC purchases by business and government, would still benefit gamers as the technology is made available to the gaming market. So even gaming would benefit from AMD having a very successful server/HPC APU, as the R&D costs would be shared by a larger market than gaming alone.
Nvidia gets a lot of its R&D budget supported through its HPC/server accelerator sales; they just do not pass the savings on to their consumers as much as AMD does. So if AMD gets its high-powered HPC/server APUs popular in the server market, I’d gladly take a binned variant with 8 or 10 full Zen cores, HBM memory, and a big fat Greenland GPU.
Most server applications do not require much of any GPU unless it is a streaming game server or something. HBM may be very useful as an L4 cache though. It isn’t as low latency as on die SRAM, but it is better than going all the way to system memory.
Server and HPC have very different requirements these days, so using server/HPC doesn’t make much sense, IMO. HPC and “workstation” can be similar, but is application dependent. A development workstation used mostly for compiling code would be happy with server hardware (many integer threads, little to no GPU use), but a graphics workstation obviously needs GPU power.
I am wondering if it is possible to get rid of the vector units in the CPU (MMX, SSE, AVX) also as these seem like a waste of space if you have an on die GPU. If you have code that could take advantage of something like a 512-bit AVX unit, then it would almost certainly run better on a GPU. It may be difficult to get latency down for using the GPU as a replacement for the vector units though, even with HSA.
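A toy illustration of the trade-off being discussed: a 512-bit AVX unit processes 16 float32 lanes per instruction, while a GPU runs thousands of such lanes concurrently. Work already expressed as an elementwise array operation, like the SAXPY kernel below, is exactly the kind of code that ports to either target, which is the argument for letting an on-die GPU subsume the CPU's wide vector units. This is plain Python with illustrative names, not a performance benchmark or a real intrinsics API.

```python
AVX512_F32_LANES = 512 // 32   # 16 float32 lanes per 512-bit register

def saxpy(a, x, y):
    """Elementwise a*x + y: one 'instruction' per group of lanes on a
    vector unit, or one work-item per element on a GPU."""
    return [a * xi + yi for xi, yi in zip(x, y)]

# One register's worth of data: a single AVX-512 FMA could cover this;
# a GPU would treat it as 16 of its many thousand concurrent work-items.
x = list(range(AVX512_F32_LANES))
y = [1.0] * AVX512_F32_LANES
print(saxpy(2.0, x, y))
```

The point is that the kernel itself is target-agnostic; whether it lands on SIMD lanes or shader cores is a decision the runtime (or, under HSA, the dispatcher) could make, which is why duplicating the hardware looks wasteful from this perspective.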
“I am wondering if it is possible to get rid of the vector units in the CPU (MMX, SSE, AVX) also as these seem like a waste of space if you have an on die GPU.”
… …. … lol…. lmao…. ROFLMAO. If you only knew how important those are in past and present software. Get rid of them he says…. HA HA HA!
The idea would be that you would run vector code on a tightly coupled GPU, so not “get rid of”; more like don’t put two copies of the same hardware in. These vector instructions are important for past and present software, but not necessarily future software. Intel is still working on AVX-512, but this is supposed to come out on their Xeon Phi line first, which is closer to a GPU than a CPU.
Is there any demand for AVX-512? Any code that can really take advantage of 512-bit vector units will probably run a lot faster on a GPU right now. GPUs have some interesting advantages compared to CPUs, and most of the disadvantages of running on a GPU are soon to disappear with HBM and HSA. There wouldn’t necessarily be any reason to actually remove vector extensions from the ISA, though.
x87 co-processors were once separate chips. They eventually moved onto the same die as the CPU. The x87 FP instructions haven’t been removed, but they probably are not executed on anything resembling an x87 FP unit either, if you actually set up a compiler to use them. Once the CPU and GPU merge, why wouldn’t vector extensions run on the same hardware as GPU shader programs? Eventually, you would expect the vector extensions not to be used much, just like x87 instructions are not used much. You could also argue the other way: that GPU shader cores will be converted to be (essentially) the same as CPU vector units. This seems to be the way Intel is moving with their Xeon Phi line. That drops some of the advantages of running on a GPU, though, and reminds me of IA-64 (the Itanium processors), which was a disaster.
First, the GPU and the CPU are two separate chips. There is latency when they talk to each other. For something like that to work, the CPU would have to translate the instruction, recompile it into a format the GPU could use, then send it off to the GPU, and hope the GPU isn’t in the middle of doing something like, I don’t know, a heavy-duty game that would make use of those exact same instructions as well as the GPU.
This doesn’t give you any speed gains, in fact you will lose speed very quickly.
The CPU and the GPU didn’t and aren’t merging. Don’t believe any marketing claiming otherwise. CPUs are good at general purpose instructions. GPUs are good at heavy duty number crunching. They are good at those things because they are designed to do their individual duties well. If you try to merge them and don’t redesign the way x86 works you will lose performance on both ends.
Now, something like the Mill, that is designed to give you kinda the best of both worlds, could come close, but that doesn’t run x86 code, and can’t be based on x86 and still be the Mill.
HSA is only a way of addressing memory and making it seem like one big pool of memory to programs. AMD has released their Open Source implementation of HSA drivers for Linux/Mesa quite some time ago. The only reason the earlier hardware is not certified as HSA 1.0 compliant is a last minute change by the HSA foundation requiring a hardware spec that was not in the previous generation. AMD had released that earlier gen with the expectation that it did in fact meet the specs, as indeed it did at the time.
CPUs and GPUs are not necessarily separate chips. Let me put it this way: would you want to convert a GPU over to using a fixed ISA, something like Intel’s x86 plus AVX-512 with 512-bit vector units? I would say no. I don’t think there are going to be many consumer applications where the latency of 512-bit vector units is an issue. Fixing such an ISA limits the implementation, since you have to either support it directly in hardware, fake it in hardware, or just break compatibility.
Throughput-optimized code is quite suited to run-time compilation or intermediate instruction formats. The amount of code is very small compared to the amount of data, and latency is nowhere near as important as it is with branch-heavy integer code. This allows the use of higher-level code, which the run-time system can optimize for whatever the implementation happens to be.
CPU cores, even “fat cores”, are tiny compared to a GPU on current process nodes. With the introduction of HBM, the memory bottleneck that limited integrated graphics processors is removed. AMD could technically put some CPU cores on die, put it on an interposer with HBM, and sell it as a socketed CPU/GPU. There would be very little off-package interconnect; in fact it would probably be less than on current CPUs, unless they included a memory controller for expandability.
I suspect applications will start using more GPU code, and the vector units present on the CPU will mostly get used for scalar operations, since x87 is terrible. It is a bit of a waste to use large vector units for scalar operations. The integer core and supporting caches should be more optimized for latency, but because we need to support these vector extensions, they have to be skewed towards bandwidth instead. This has led to all kinds of complexity that would probably have been unnecessary if they had just added proper scalar FP instructions rather than massive vector units.
Someone is letting their dreams run wild.
I don’t really understand how this kind of speculation is fun, but I do know people who get a kick out of this.
AMD being so tight-lipped is a first for me. Either they have announcements out of the blue or there are staggered rumor releases that point to an imminent product release.
I would rather have them provide sufficient stock of high-performance parts at the time of announcement than get my grubby hands on one months after the release. Again, I would wait and hold off to see whether the reviews point to a hot part (as with the 290X: a poor reference cooler design, but a good chip nonetheless) or not.
I wish AMD would just adopt extra-wide but shorter PCBs (extending an inch or more past the bracket) to fit wider, thinner, and denser heat exchangers accompanied by large, slow-spinning fans that keep both the temps and the noise down. It is a win-win for everyone who does not wish to fork out the extra dough for a custom third-party design or wait for a water block.
Something seems to have changed at AMD for it to stay silent for so long. AMD needs a good product that sells well if it wants to stay alive and relevant in the market, and in the minds of the user base. I also wish for the drivers to be ready at launch (for once).
My 2 cents.
The reason AMD is being so tight-lipped about everything is that the last few times they made announcements about GPUs, Nvidia reared up a week or two later and laid the smackdown on them with a better, more efficient GPU. AMD is likely trying to keep that from happening this round.
When did that happen? And don’t be vague. Also, if that were true, Nvidia would have no reason to wait for the benchmarks; they would just release better cards/drivers around the launch anyway.
Looking at the Wikipedia page “List of AMD graphics processing units”, there are entries for 8xxx GPUs in between the 7xxx series and the Rx 2xx series. They are OEM-only cards. I don’t really remember the “Radeon HD 8xxx” series in the retail market. What was the story behind these cards? We seem to be in a similar situation with the OEM-only Rx 3xx series branding.
I am wondering if there is going to be a larger brand-name change. We had various “Radeon”-branded cards before the switch to “Radeon HD” branding, and then the switch to “Radeon Rx 2xx” branding. I was thinking perhaps a “4K” branding of some kind, since TV and display makers will be pushing 4K hard in an attempt to get people to upgrade, and it is also a driving force for PC graphics cards.
These were, for the most part, mobile-part rebrands, done at the manufacturers’ instigation in between product refresh cycles, before AMD released its own new and rebranded parts. These same products were rebranded again into the R7 and R8 mobile parts.
They rebranded to this scheme just two years ago. It would be strange, though not impossible, for them to do so again. “4K” doesn’t seem like a great brand when the previous generation was the R9.
I could see them calling it the R9 4K series. I don’t want to see that, but I could see them doing it.
Most branding in the tech industry is terrible. It generally obscures what you are getting rather than clarifying it. I don’t like the R9, R7, and R5 branding; it is mostly meaningless. I could definitely see AMD jumping on the 4K bandwagon. I was thinking more along the lines of adding 4K at the end rather than in the middle or at the beginning. The “K” designation implies 4K, which TV and display makers will be pushing. It would make their model numbers sound like Intel’s “K”-series CPUs, though, which could be good or bad. The high-end GPUs with HBM are obviously targeting 4K displays or multi-display setups. You don’t need that kind of power to game at 1080p.
Announcing rebrands at E3 is about the dumbest thing AMD could do. And if AMD has an entirely new lineup of chips coming, why are they still keeping their mouths shut? Nvidia is talking up Pascal, and Intel announced 14nm Broadwell parts, while AMD sits on its hands. I think Fiji is just not good enough for AMD to brag about. If it is, AMD still has the worst marketing ever.
It is very possible that the reason for this OEM rebrand series is to discreetly get a lot of 200-series stock out the door before the new series hits the shelves. I’d think this stock is the primary reason the new series hasn’t been released yet: it just took time to get the various OEMs to place their orders and have the parts shipped, and since June is close, it made sense to wait until then to announce the new lineup.