The Expected Unexpected
Raja has been a staple at ATI and AMD for much of the past 16 years, but now moves on to Intel
Last night we received word that Raja had resigned from AMD during the sabbatical he began after the Vega launch. The initial statement was that Raja would return to his position at AMD in the December/January timeframe, but there was some doubt as to whether he would in fact come back: in the tech world, “sabbaticals” often lead the individual to take stock of their situation and move on to what they consider greener pastures.
Raja has dropped by the PCPer offices in the past.
Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there. This behavior is quite common in Silicon Valley, and Raja is no stranger to it. Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI. While there he worked on a variety of programs, including the original Radeon, the industry-changing Radeon 9700 series, and finally the strong HD 4000 series of parts. During this time ATI was acquired by AMD and he became one of the top graphics gurus at that company. In 2009 he quit AMD and moved on to Apple, where he served as Director of Graphics Architecture, though little is known about what he actually did there. During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology. Apple could have been working on developing its own architecture at this point, which recently showed up in the latest iPhone products.
In 2013 Raja rejoined AMD as a corporate VP of Visual Computing, and in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. There Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen its primary moneymakers start taking on water. AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading. On the discrete side AMD compared favorably to NVIDIA with the HD 7000 and later R9 200 series of cards, but after NVIDIA released its Maxwell-based chips, AMD had a hard time keeping up. The general consensus is that RTG saw its headcount reduced by the company-wide cuts as well as a decrease in R&D funds.
AMD was in trouble, and to turn things around the company chose for RTG to tread water with its GPU offerings while strengthening the core CPU business. This is not to say that RTG did not explore new and different avenues; the Radeon Fury was one of those, but it came a little too late. Work on the next generation of parts featuring the Polaris architecture was well underway, but the design goals seemed fairly mundane compared to what NVIDIA was pushing with its Pascal architecture. AMD has sold its Polaris-based parts very well thanks to the crypto-mining boom, but in the core gaming market they have not stacked up well against NVIDIA.
The Vega Stack.
The Vega launch was supposed to propel AMD into the GPU stratosphere. It was a much more aggressive part in terms of design, and this was expected to give it a real boost in performance and efficiency. It supports the latest High Bandwidth Memory 2 specification, allowing 8 to 16 GB of memory running at nearly 500 GB/sec of throughput at relatively low power levels. It also integrates the Infinity Fabric that AMD developed to provide high speed data movement as well as fine-grained power control and gating. Sadly, the final product did not quite meet expectations. This is not to say that Vega is not competitive, because in pure performance it is. It just requires more power and runs a little hotter to achieve the same performance as comparable NVIDIA parts at those price ranges.
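That memory figure is easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python, assuming Vega 64's shipping configuration of two HBM2 stacks for a 2048-bit bus at a 945 MHz memory clock with double-data-rate signaling:

    # HBM2 bandwidth estimate for Vega 64 (assumed config: 2048-bit bus,
    # 945 MHz memory clock, double data rate).
    bus_width_bits = 2048
    memory_clock_hz = 945e6
    transfers_per_clock = 2  # DDR signaling: two transfers per clock

    bytes_per_second = (bus_width_bits / 8) * memory_clock_hz * transfers_per_clock
    print(f"{bytes_per_second / 1e9:.1f} GB/s")  # 483.8 GB/s, i.e. nearly 500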
AMD really had little choice in how they addressed RTG. The company was desperate to claw back CPU market share, as that was still its primary business. Getting Zen to market in a timely manner was of the utmost importance, and I believe funds and manpower were prioritized in that direction while leaving RTG in a somewhat depleted position. RTG also had to support the Raven Ridge products by successfully integrating the Vega GPU component into the APU. Getting this product to work with the new Infinity Fabric and the brand new Zen cores was likely no simple task. (Supporting semi-custom designs for Xbox and PlayStation also drained resources.)
Image courtesy of Intel Corp.
Raja was able to oversee these projects successfully; they were mostly on time and within rough specifications. I just do not think that Raja had a lot of help in keeping the boat afloat with the focus on the CPU portion of the company. There have been no real breaks for Raja since he assumed the lead of RTG, and vacations were likely few and far between considering the position RTG was in. Burnout had to be a big factor in Raja’s decision to leave the company.
Speculation about where Raja would go was rife after the resignation. Some thought he would go back to Apple, others considered a spot at Tesla alongside Jim Keller (also formerly of AMD), and then there was the rumor that he had in fact been hired by Intel. Today we found out that Intel is the correct answer.
Great article Josh, looking forward to the 30 minute monologue about it on the podcast today
A year ago, the idea of AMD CPU + Intel GPU would have sounded like the worst possible combination one could have, but now it’s quite exciting. How quickly things change in this industry.
It’s also telling (and worrisome for AMD’s graphics division) that someone looking for a company with more graphics focus/R&D allocation would move from AMD to Intel, of all places. Hopefully Zen makes AMD enough money to keep the graphics R&D alive and eventually competitive again.
Are there any concerns about non-compete clauses for Raja, moving so soon into a position where he is a direct competitor to his former employer?
I was wondering about that non-compete also. . .
I’m not sure where Raja had been based, but non-compete clauses are not legal in California.
Intel truly lost their marbles, and I will bet you Raja will be just as unremarkable as he has been at all his previous workplaces; he's a product of conjecture and not skill.
Why? Just look at Apple's graphics in its A-series SOCs: that's Raja's tweaking of a licensed Imagination Technologies PowerVR reference design. And it uses even less power than Nvidia's Maxwell and later mobile-first GPU designs. Ever wonder why Nvidia's Tegra/Denver-core SOCs did not get many, if any, smartphone design wins?
You gamers need to stop blaming Raja for doing his job at AMD and creating a Vega 10 base die design, based on the Vega GPU micro-arch, within the limited budget that RTG was given to complete that design. Vega 10 is a compute/pro-market-first design, and that was all RTG had the funding from AMD to complete.
Raja is not the one in charge of the purse strings; that has always been in Lisa's and the AMD board's hands. The GPU AI/compute markets are why AMD focused on the Vega 10 design first and not on anything gaming-only, and Intel will be doing the same with Raja's very experienced hands at the engineering helm for GPU/AI/compute accelerators, before Nvidia can eat Intel's lunch with Volta in the HPC/server markets.
AMD did not have the funds to focus on a mobile-first gaming GPU design like Nvidia did, as AMD has much less laptop/mobile gaming GPU market share to justify the extra expense, what with AMD's Epyc CPUs needing a little extra GPU help for the professional HPC markets that Lisa at AMD said were the company's main focus. AMD has always been a CPU company first, and AMD's stock was never higher than when its Opteron processors had a sizeable share of the server/HPC market! AMD's stock price hovered around the mid-90-dollar range then, and that was before AMD purchased ATI.
I do not care how Intel's marketing is using Raja's move to garner Intel some mind share with the gaming public, as Raja is there to save Intel's bacon from Nvidia's compute/AI Volta SKUs in the HPC/server markets first and foremost.
Raja is a very competent GPU engineer as Jim Keller is with CPUs. It's not Raja's fault that most of the gaming public is not very well versed in the realities of the corporate world or the IP/legal implications concerning GPUs and the patent systems around the world that are there for a reason. Raja did the job he was tasked to do at AMD with the funding he was provided, and that job was not focused on gaming, just as Intel/Raja will not at first be focused on gaming. There is a real threat to Intel from more than just AMD's Epyc and IBM's/OpenPower's Power9, and that threat is Nvidia's Volta. Intel's bread and butter comes from the server/HPC markets, and Nvidia's Volta is a threat there, as will be the Vega 20 based designs from AMD.
AMD is a switch hitter in the professional compute/GPU markets, with CPUs (Zen-based) and GPU compute/AI accelerators, and that enterprise/AI market is going to make the consumer gaming market look subatomic in size. Intel has to get its switch hitter on ASAP!
“Raja is a very competent GPU engineer as Jim Keller is with CPUs”
Again? Putting Raja under the umbrella of others’ completely unrelated competence/duty?
I see a pattern here.
Almost like a career pattern.
Really, Keller and Koduri are engineers and not so bad at their respective trades and areas of expertise.
Raja's problems at AMD were not engineering-based; they were higher management/funding-based, and both Raja and Jim have made the rounds more than a few times when the offers were presented to them. AMD got its Vega 10 compute die and the Vega GPU micro-arch certified and on the market, so Raja's work is done. Navi is just going to be a modular-die version of Vega/GCN with improvements. But Navi is more about those modular dies and some interposer-based IF glue that will tie the scalable Navi GPU chiplets to each other and to Zen dies and HBM2/3 for a fully modular/scalable system, to create GPUs and APUs made up of many dies on an interposer MCM. Navi GPU chiplets that are smaller and easier to fabricate, with greater die-per-wafer yields, are what Navi is really about, just like Zen and those modular Zeppelin dies and that Ryzen-to-Threadripper-to-Epyc product stack, all made from one base Zeppelin modular/scalable die design.
As far as gaming and that FPS metric go, it's the ROP count and clock speed that make for those market-winning gaming GPU SKUs. What gamer is really going to notice, at 60+ FPS, any lowered image quality from a lower shader count on a GPU with an excess of shader resources? GCN is good enough, and any GCN base die design with the proper amount of ROPs can fling the FPS out there. It's just that any excess of shaders not needed for gaming at 4K generates extra heat on gaming-focused GPU workloads, more under DX11 than DX12, but still, Vega is a shader-heavy design.
Vega 56 is a very nicely binned Vega 10 variant with its lower shader count while retaining the same ROP count as the Vega 64 (64 ROPs) and also the GTX 1080 (64 ROPs). It's just Vega 56's clock speeds that keep it from performing like that GTX 1080! And Vega 56's texture fill rate is closer to the GTX 1080 Ti's, as the Vega 10 TMU counts are much higher than Nvidia's. But it's the ROPs that feed the frames for the FPS, so all AMD has to do is tweak that Vega base die layout to have fewer shaders and more ROPs, and Vega can match any Pascal variant in gaming and power usage. TMUs come in handy for non-gaming graphics arts rendering workloads where FPS is meaningless; ditto for the extra shaders and ray-tracing interaction calculations that are useful for the sorts of graphics arts workloads that professionals need.
Raja, like Keller with Zen, has the Vega micro-arch design work finished, and others can tweak the shader-to-ROP ratios on future Vega micro-arch based GPUs for gaming-only usage.
Jim Keller also breathes oxygen like Raja… meaning?
Keller's career is a successful one, and he really excels at what he does and in the way he approaches it.
And as far as I know, he was not responsible for a division of a company and the good outcome of its products in the market, or at least the supervision of the QC of said products, not to mention the reading of the PCIe 3 spec.
Also, Keller doesn't have temporary “burnouts” on much harder tasks.
But I get it. Raja needed a rest, so he went to the rest-house.
It’s understandable.
Yes, he is sitting at his new digs in that resort with the blue ocean of cash after a well-earned rest. Executive burnout has been a thing since before this (1) Harvard Business Review article was published in the July–August 1996 issue of HBR, and it's actually a reprint:
“This article was originally published in May–June 1981,” (1)
So there is some history in the business world with that sort of thing! Do read up; this problem is an old one with roots that go back much further.
[Meanwhile back at the deserted island resort.]
Thurston to his wife Lovey: I just can keep SVPs from dropping like flies but I’ll be damned if I’ll stoop as low as to hire a Yale man!
Thurston to his stuffed bear Teddy: Maybe I’ll have Sperry wire you up to a UNIVAC AI and get you a key to the executive washroom.
[Someplace in Space.]
Doctor Smith to William: Oh the pain! The pain! The terrible pain of project management, and not a moment to rest!
(1)
“When Executives Burn Out”
https://hbr.org/1996/07/when-executives-burn-out
Edit: can keep
To: can not keep
Under Koduri, AMD RTG has
Under Koduri, AMD RTG has NEVER been able to surpass NVidia in performance.
If Koduri were an NFL or college coach, he would have been fired. He cannot win the big game.
Good riddance.
nothing Raja did was by his own design
at Apple their work was based on Imagination's PowerVR
at AMD they did nothing but milk ATi's GCN arch
"Raja is a very competent GPU engineer" based on what? Raja is a meh engineer going by his whole career
—
you compare Raja to Keller?!?! o.O
Keller saved AMD's bacon not once, but twice! What did Raja do? He wasn't even capable of saving his own job
Excuse me? Raja worked at ATI when they made their early super-competitive cards after NVidia claimed to have “invented” the GPU. Later, he created the TeraScale microarchitecture, which smashed the competition with a far smaller die size than what it was competing against. He also started the work on the GCN microarch before going to Apple. When he returned in 2013, he started work on new renditions of GCN, such as Fiji, Polaris and Vega. He might not be doing all the work, as that takes manpower, but he's the one choosing how they work, and what they work on. That's called leadership and vision, something VERY necessary for developing good products with such a small amount of manpower and economic resources. He has a deep understanding of how GPUs work, and that is highly valuable.
Vega is not a desktop gaming GPU. That's just not what it is; it's a compute beast, and actually quite efficient at that. When you cut it down enough, you see the gaming efficiency catching up as well. Just look at Vega 10 in their new APUs. It's super-efficient.
You seem entirely ignorant of what his achievements are, and what they mean.
They're competing against NVidia, a company that got most of the sales even when their products were vastly inferior (GTX 200 series vs HD 4000 series), and thus a company with a WAY higher budget for research and development. Do you think these products come out of thin air? You've gotta pump money, effort and smarts into the innovation machine to get a good GPU out of it. AMD has had mostly the latter two, and not so much the first one, to work with. And they still compete. That alone shows that their effort and smarts are mostly superior to their competitors'.
Vega 10 is NOT the name of a GPU micro-arch; Vega is the name of the micro-arch! Vega 10 is a base die design that is used in the Radeon Pro WX 9100, the Radeon Instinct MI25 professional GPU, and the Radeon Vega Frontier Edition.
The Vega 10 base die is also used for the Vega 64 consumer gaming card, and the Vega 10 base die design is binned down for the Vega 56, which uses fewer shaders than the Vega 64.
The problem with the Vega 10 base die design is that it's really for compute/AI workloads first and foremost, and it is a bit shader-heavy relative to its ratio of shaders to ROPs.
ROPs are what produce the FPS, and Nvidia's GTX 1080 has the exact same number of ROPs (64) as the Vega 64 (64 ROPs) and the Vega 56 (64 ROPs). Vega 64 is a very shader-heavy design, while Vega 56, with its lower number of shaders relative to its 64 ROPs, performs almost as well as Vega 64. The only reason that Vega 56 cannot match the GTX 1080 is that Nvidia's GTX 1080 has higher clocks than the Vega 56. The GTX 1080's texture fill rate is lower than the Vega 56's, and all of the Vega 10 based GPUs have much higher texture fill rates than the GTX 1080 and even the GTX 1080 Ti!
The only thing that makes the GTX 1080 Ti perform so much better in that FPS metric is its 88 ROPs, which can really fill the frame buffer fast compared to the GTX 1080 or the Vega 64 and Vega 56. And the GTX 1080 Ti clocks higher, as do all the Pascal GPU variants.
If AMD can get Vega 56's shader count lowered just a little more without getting rid of any of its ROPs (64), get Vega 56 redone on GF's 12nm node, and get its clocks higher, then Vega 56 may just match the GTX 1080 (64 ROPs) in performance. The GTX 1080 FE's texture fill rate is 277.3 GTexel/s and the RX Vega 56's is 329.5 GTexel/s; the GTX 1080 Ti's texture fill rate is 354.4 GTexel/s and the RX Vega 64's is 395.8 GTexel/s.
The GTX 1080 Ti's pixel fill rate is 139.2 GPixel/s, while the RX Vega 64's is 98.94 GPixel/s; the GTX 1080 FE's pixel fill rate is 110.9 GPixel/s, and the RX Vega 56's is 94.14 GPixel/s. So it's not hard to see that RX Vega 56 is clocked lower and still has a pixel fill rate (94.14 GPixel/s) that's not too far behind the GTX 1080's 110.9 GPixel/s.
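All of those figures are just unit counts multiplied by clock speed: pixel fill = ROPs × clock, texture fill = TMUs × clock. Here is a quick Python sketch that reproduces the numbers above, assuming the commonly published ROP/TMU counts and boost clocks for each card:

    # Fill rate = units * clock (assumed published ROP/TMU counts, boost GHz).
    cards = {
        "GTX 1080 FE": (64, 160, 1.733),  # ROPs, TMUs, boost clock (GHz)
        "GTX 1080 Ti": (88, 224, 1.582),
        "RX Vega 64":  (64, 256, 1.546),
        "RX Vega 56":  (64, 224, 1.471),
    }
    for name, (rops, tmus, ghz) in cards.items():
        print(f"{name}: {rops * ghz:6.1f} GPixel/s, {tmus * ghz:6.1f} GTexel/s")
    # GTX 1080 FE:  110.9 GPixel/s,  277.3 GTexel/s
    # GTX 1080 Ti:  139.2 GPixel/s,  354.4 GTexel/s
    # RX Vega 64:    98.9 GPixel/s,  395.8 GTexel/s
    # RX Vega 56:    94.1 GPixel/s,  329.5 GTexel/s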
Nvidia uses higher clocks on its GPUs, and that GTX 1080 Ti ROP count (88) is why it can really fling the frames out there and win the FPS race. Nvidia's power usage metrics are better because Nvidia trims the shader counts down on its gaming-focused SKUs. And Nvidia has more targeted base Pascal die designs, GP100 (compute), GP102 (compute/graphics), GP104 (consumer gaming) and GP106 (consumer gaming), while currently AMD only has Vega 10, a single die design that's great for compute and games OK, but uses the juice.
In DX11 gaming that does not make efficient use of AMD's extra compute, Vega is not so good on some titles, but in DX12 that extra compute can sometimes pay off handsomely for Vega.
AMD needs a ROP-heavy base die design that can be trimmed of excess shaders without reducing ROP numbers, and Vega 11 for the mainstream market may not be so bad if it has the proper ratio of shader cores to ROPs. AMD needs to be careful about trimming the shader counts too much, because DX12 and Vulkan can make better use of those extra shaders. AMD really needs to invest in a mobile-first GPU design for the laptop market.
Even a common chat bot can make better engineering decisions than Raja.
Not if management says focus on the enterprise/HPC/AI markets, like AMD's management told Raja to do with Vega and the Vega 10 base die design, which is a rather compute-heavy die variant like GP100 is! And that's what Intel will have Raja doing also for the enterprise/HPC/AI markets, and Intel can source its gaming graphics dies from AMD's previous-generation GCN parts at a more affordable price, instead of having Raja distracted by a consumer gaming market where the margins just do not justify the effort for Intel.
Raja's task at Intel is to compete with GV100, and Intel can purchase some semi-custom Polaris-based die variants to attach to Intel's EMIB nano-motherboard sort of MCM IP.
Both Intel and AMD are rather more CPU companies, with their respective x86 products producing the real revenues currently, but that GPU-driven compute/AI market is becoming relatively more important for both AMD and Intel lately, what with JHH's GV100 getting more market share for Nvidia than GP100, which got its own share of that enterprise/HPC/AI market's business and high-margin revenues.
Raja was management at AMD. His mistakes were, from what I've heard, his poor vision and failure to redirect AMD graphics to take advantage of the rise of GPUs in AI/machine learning. I'm not sure Intel is a good move for Raja. They have the resources, but they don't have a culture that will let Koduri succeed. I predict he will be frustrated and will leave within 2 years. I hope Lisa and AMD will restructure their graphics division so that they can attract new talent while retaining the people they have. They can't afford any more defections.
Not TOP management, and Lisa and AMD's BOD hold control over the purse strings, not RTG; RTG was only semi-independent. AMD's money maker has always been its x86-based CPU business, and AMD is pretty much like Intel, only Intel lacks a discrete GPU product currently.
AMD's greatest stock price (mid-$90 range) and market cap valuation came at a time before AMD acquired ATI, and that market cap was built mostly on AMD's Opteron server business! So Epyc will represent most of AMD's focus: Epyc and AMD's future workstation-grade APUs on a full silicon interposer for the workstation market, where AMD will combine Zen, Vega and HBM2 for a professional-graphics-focused WX-branded APU. Look for the introduction of the active silicon interposer from AMD, where the interposer is actually etched with active circuitry in addition to traces. AMD will probably be looking to move the Infinity Fabric and its control circuitry onto the interposer's silicon and free up space on the dies for more processing circuitry.
Silicon interposers will get some active processing power in addition to the traces, as the silicon interposer is made of the very same silicon as the processor dies, so that's a natural evolution of the process. AMD will be more focused on the higher-margin enterprise/HPC/AI/workstation markets, where the higher margins can justify the higher R&D funding necessary for interposer-based APUs.
Intel hired Raja to compete with Nvidia's GV100 Volta SKUs and their Tensor cores for the fast-growing AI/compute accelerator market, which represents the greatest new growth potential, a high-margin new market in addition to the traditional HPC/server high-margin business. Intel will simply source any gaming GPU dies from AMD and not worry about too much distraction in that low-margin business, as AMD's GPUs will help keep Intel's CPUs inside laptops/mini desktops for a few more years.
The very reason that AMD lacked the funds to give to RTG in the first place is that AMD still has not gotten its Epyc server/HPC/workstation market share back to the Opteron levels of the past, when AMD's stock valuation was in the mid-90-dollar range and AMD's market cap was much higher than it is currently. Those Epyc revenues will begin the process, as will AMD's workstation-grade silicon interposer based APUs for the professional market.
None of this makes sense. Why integrate a Radeon GPU now if you are making your own push back into graphics? Was this just a stopgap measure, as it will easily be 2+ years before they can bring anything to market internally?
maybe just using AMD as a stopgap till they get their own GPU up to par?
AMD's and Nvidia's unified shader patents must be running out, and so are the related Imagination Technologies graphics patents! So both Intel and Apple can now spin their own respective in-house GPU designs without the need for licensing. Now if those x86 32/64-bit ISA patents would just run out, Apple and Nvidia could spin up their own x86-based SOCs.
Let's force the patents on any x86 ISA IP to expire also and bring even more competition into the computer marketplace! Apple's got more money than GOD; let some of that be used to design an x86-based SKU with Apple's in-house GPU design!
The x86 ISA is so ubiquitous it's damn near a tacit public utility, so let's declare the x86 ISA licensable under FRAMD terms for the good of fair markets!
Edit: FRAMD
To: FRAND
Isn’t Fram a filter brand!
Eh, no loss for AMD. I am sure he did a lot of good work and is a hard worker, but he let a few things happen that soured my feelings toward him.
It was AMD's fault for not financing Raja's RTG properly. See where that got you, AMD. But Vega 10 is a damn good compute design, with the Vega 10 die runts available to be binned down for consumer SKUs. AMD, you should have allowed Raja to bake more ROPs into the Vega 10 shader-to-ROP ratio mix. That would have made the Vega 10 based Radeon Vega 56/64 SKUs more competitive with Nvidia.
Vega 56, with its current ROP count matching the GTX 1080's, would have done nicely if its shader-to-ROP ratio had been a little less shader-heavy and a little more ROP-heavy to fling those FPS metrics out there vs Nvidia's SKUs. Fewer shaders would have put Vega 56 at power usage parity with Nvidia's GTX 1080/1070 designs!
A little ROP-to-shader ratio adjustment and lower overall shader counts are all AMD needs to compete with GP104. And any new Vega micro-arch based die spins need a few more ROPs if GF's 14nm process cannot get the clock speeds as high as Nvidia's GP104 SKUs.
More ROPs to fling the frames at the highest FPS metrics to feed that ePeen ego boost that drives the sales figures for gaming GPUs among the unwashed masses! GF's 12nm refresh of its 14nm process node (licensed from Samsung) looks to be good also, if GF can pull that off!
This article contradicts what Ryan said at his other job. Ryan said he confirmed AMD was just selling chips to Intel, nothing more. This article paints the picture that they are collaborating on design and form factor.
Form factor is probably more to do with the laptop's thin-and-light design and the limited thermal budget. So that's more about AMD and Intel getting the power usage and heat generated to the lowest possible values, and not any actual GPU IP sharing on the Intel EMIB/MCM SKUs.
Thermal engineers from both Intel and AMD are necessary, working together to get that down into the laptop form factor design envelope. Ultrabooks and Apple's SKUs are also about saving every micrometer in the Z dimension, and that's a boatload of engineering that has little to do with GPU IP sharing between AMD and Intel.
Typical AMD: hire the best and hamstring him by giving him no power to effect the changes that need to happen.
We will see what happens with Navi and beyond. I predict AMD never reaches parity (at the top level) with Nvidia again. (I also predict that Zen has come as close as it ever will to matching Intel’s desktop offerings; they simply don’t have the vision or ability to hold on to the talent needed to stay competitive.)
The best? Absent stratospheric mining demand, the last 12 months would have been abysmal for RTG. Poor marketing, communication, delays, uncompetitive products, reinforcing the hot-and-loud perception with the crappy Polaris cooler and Vega power consumption, the RX 480 breaking PCIe spec, another round of rebrands 9 months after launch. If that's his track record as the best, I dunno what mediocre looks like.
He has been very successful in IP trading.
Let’s see:
S3 (huge success) -> ATI (another) -> AMD (value added) -> Apple (looking better on LinkedIn) -> AMD (my “creation” is not out yet, but I am!) -> Intel (gotta be cautious now…)
I find it funny when some say that he hadn't had enough $ from a struggling AMD, when he decided to go with a new technology (HBM) three times more expensive than GDDR5 on gaming cards, where final profit margin is everything for the retailers; hence the MSRP rising as well.
Another angle on this: by acquiring “burned out” assets during this joint project with AMD, Intel is holding the IP card close to its chest, just in case… or the 7nm Navi.
I really hope AMD won't have to drain their hard-earned $ on legal cases in the near future, since that looks like the usual method used against AMD's innovation and disruption delivery.
Too “burned out” to wait and see his creation/responsibility come out, but not too burned out for another jump to the lap of the competitors.
He is indeed a very accessible person.
A lot of the AMD, Nvidia, and Imagination Technologies GPU IP/patents may just be running out, and one need only look at Apple telling Imagination Technologies to hit the road as an indicator that IP/patent expiration is happening with the GPU IP on file at the USPTO. Intel dropping its licensing arrangement with Nvidia, hiring Raja, and designing its own GPUs alludes to that also.
I'd love to see the x86 ISA patents expire, as that would be great for the PC/laptop market and, to a lesser degree, the server/HPC markets. There is nothing stopping Intel, AMD, or Nvidia from licensing Power8/9 from OpenPower and working up some of those designs with their respective GPU IP.
What do you think XPoint is for? And why is it so important for Intel to keep it proprietary? For cache acceleration? New SSDs?… Nice rabbit holes those are.
Imagine no pipelines, no serialization, no copies, no mallocs, and what that can do for GPU/parallel compute.
All proprietary.
They just need some “help” from AMD right now.
Intel still can't be trusted, and Nvidia will live happily ever after with its merger with Toys'R'Us. Finally.
The end.
XPoint is not going to stop Apple from creating its own line of in-house graphics products for Apple's PCs/laptops if those unified shader patents have begun to expire. Apple has more cash ($256.8 billion) on hand than the combined market caps of AMD ($10.73 billion) and Intel ($215.96 billion), and Apple is getting up into the range of a trillion-dollar market cap ($898.65 billion).
And for your information, Elmer, Micron is the co-creator of XPoint along with Intel, and Micron's QuantX products are supposed to arrive before the end of 2017. Micron also will be licensing its version of XPoint to other drive makers, so that's that for XPoint IP under Intel's total control!
“Micron also will be licensing its version of XPoint to other drive makers so that’s that for XPoint IP under Intel’s total control!”
Now, this one I didn’t know. The rest is trivia.
I’ll dig into this.
“The company is licensing its 3D Xpoint technology to other storage makers. Micron’s QuantX will also be available the form of DDR-style memory, the company has said.” (PCWorld)
This is a game-changer.
“funny when some say that he hadn’t had enough $ from a struggling AMD”
The “some” are backseat drivers gossiping; likely none of them has the experience of managing chips from design to production. The food looks good, so it tastes good : )
“AMD (my “creation” is not out yet, but I am!)”
Two different heights of HBM stacks, current draw over the standards, and buzzing noises twice; an architect draws buildings but structure and construction people build them.
“Two different heights of HBM stacks, current draw over the standards, and buzzing noises twice; an architect draws buildings but structure and construction people build them.”
Yes. Those low-caste people who get overwhelmingly paid while doing a terrible job that thankfully, in the end, has no one to oversee it and properly test it, corrupting such a vanguard piece of art.
Damn those guys who, like everyone else, don't even notice when a pure and gentle underpaid visionary genius is getting stressed by the horrible noise of the untouchable office maid's vacuum cleaner.
Oh the injustice! Oooooh the pain!… I need 1.1 million more.
Okokok… I’ll keep it factual:
‘Catch-up-vanguard piece of art’
There.
… or, if you prefer:
‘competitive in a two player game’
Intel probably doesn’t have anywhere near the process tech lead that they once had. All of the companies with fabs are spending lots of money for very small advancements. Intel will have a very limited ability to pull ahead in performance. Even without AMD in the compute lead for multithreaded applications, the ARM competition will be approaching the same wall Intel already hit. AMD64 might end up in a similar position to the old RISC architectures that died from AMD64 coming up from the low end.
We should have had CPUs with a lot more cores a long time ago, but Intel has been holding back. We get near the Ryzen mobile release and all of a sudden we get updated 4-core/8-thread parts in the 15-watt category from Intel. That was obviously a pre-emptive release. We also get 6 cores in the mainstream desktop market, not 8 cores, and they had to break compatibility with the existing platform due to power consumption. I think they had to accelerate some plans. Intel is going to give consumers the absolute minimum possible. I would not recommend Intel over AMD CPUs at all at this point. More cores win in the long run. I am dealing with this at work now, where they went for slightly higher clocks and fewer cores when they spec'ed the machine, but we had to go more parallel for performance. We wouldn't have a problem if we had a few extra cores. A few hundred MHz of extra clock speed is not anywhere near as useful.
The situation with Nvidia is even worse than with Intel, in my opinion. It is in Nvidia's best interest to hold the whole industry back, since better support for DX12 helps AMD. Nvidia's DX11 driver is also probably a worst-case scenario for AMD's Ryzen architecture: it looks like it will not scale well to many-core CPUs, since it has to share a lot of data between threads at small granularity. While Intel's latest CPUs do have an improved cache architecture, they are still not as low-latency as a little 4-core chip, and they seem to pay a big power penalty for the latency that they can offer. With DX12 or Vulkan, the multithreaded scaling is massively improved: relatively independent threads can submit work to the GPU without combining it into a single thread, as the toy sketch below illustrates. Nvidia's DX11 driver burns a lot of CPU power, and there is a reason why low-core-count CPUs still do the best. They are probably about maxed out, though.
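To make that scaling point concrete, here is a toy model of the two submission patterns in plain Python; it is not real driver code, just an illustration of where the synchronization happens. In the DX11-style path every draw call crosses a shared lock into one queue, while in the DX12/Vulkan-style path each thread records its own command list and only the final submit, one cheap operation per list, is serialized.

    import queue
    import threading

    THREADS, DRAWS = 4, 1000

    def run(worker):
        threads = [threading.Thread(target=worker, args=(t,)) for t in range(THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def dx11_style():
        # Every draw call from every thread funnels through one shared,
        # locked queue that a single driver thread has to drain.
        shared = queue.Queue()
        def worker(tid):
            for i in range(DRAWS):
                shared.put((tid, i))  # one lock acquisition per draw call
        run(worker)
        return shared.qsize()

    def dx12_style():
        # Each thread records into its own command list with no shared
        # state; only the final submit (one per list) is serialized.
        lists = [[] for _ in range(THREADS)]
        def worker(tid):
            for i in range(DRAWS):
                lists[tid].append((tid, i))  # no cross-thread synchronization
        run(worker)
        gpu_queue = []
        for cl in lists:
            gpu_queue.extend(cl)  # THREADS submits, not THREADS*DRAWS lock hits
        return len(gpu_queue)

    print(dx11_style(), dx12_style())  # both deliver 4000 draws to the GPU queue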
So you are saying the opposite of what is actually going on. Nvidia and Intel have been holding the industry back for years. That isn't "vision" or innovation. We should have had more than 4-core CPUs in the mainstream market years ago; even fat x86 cores are tiny on 20 and 14 nm. From my perspective, most of the forward progress has come from AMD innovation. Nvidia and Intel stick with the status quo because it has been very profitable for them. AMD integrated the memory controller on the CPU long before Intel did, and AMD sold a lot of Opteron processors because they were better than the Intel competition. We still have an old dual-socket Opteron system from about 2009 running a server where I work. Intel eventually followed suit and integrated the memory controller and point-to-point interconnects. AMD had to create its own graphics API (Mantle) to push the industry forward. Left to Nvidia and Intel, we would have been using 4-core CPUs and DX11 for probably several more years. DX11 is a major handicap. If you look at what Microsoft and AMD can do with 8 low-power cores in an Xbox One X, then you get some idea of how inefficient current architectures are. AMD's GPU architectures have always been much more forward-looking; Nvidia builds its GPUs with about a 6-month to 1-year window. AMD's older parts are probably still performing very well. The 390s actually had massive compute power, and in fact their performance will probably continue to increase as the software catches up. The same cannot be said for Nvidia parts, but you are fine with buying a new one every year, right?
This doesn't even get into Zen. I have been repeatedly amazed at what AMD has accomplished with the Zen design. It scales all the way from a $100 Ryzen 3 up to a 32-core single-socket processor. Intel wants to stick with its expensive giant monolithic dies and its high profit margins. I believe that is the reason Intel has stayed out of the GPU market: if you look at what a high-end CPU sells for vs. a GPU of similar size, the high-end CPU has much higher profit margins. By trying to protect those high profit margins, they have let Nvidia build a massive presence in the HPC market. A CPU with some AVX units is still no competition for a GPU; the GPU is massively more parallel. Eventually, they are going to have to adopt an architecture similar to AMD's Epyc design for their CPUs, just like they did after Opteron, because the giant, super-expensive monolithic die CPUs are just not going to be able to compete. It looks like they will be developing GPU compute now, but they are way behind. I have to work with Nvidia GPUs, but AMD's GPUs seem to have much better HPC features in many respects, so I am hoping that people realize that having one dominant company is not good for business. I don't see Intel challenging Nvidia seriously in GPU-based HPC for a few years, maybe. AMD already has great HPC products; they just don't have the mind share yet. Hopefully Epyc will put them on the radar for the people making decisions.
TL;DR: Hey, you kind of got that backwards there. It is the small companies that have vision and innovation (because they have to), and the dominant companies just hold everything back to maximize profits (because they can).
Nope. Intel never stated this.
“Koduri will expand Intel’s leading position in integrated graphics for the PC market with high-end discrete graphics solutions for a broad range of computing segments”
He will be in charge of the next Kaby Lake-G, making sure Intel continues to use a leading dGPU.
Nowhere did Intel state they would design a custom dGPU for PCs.
What they did state is that they will accelerate their effort in compute/AI
There is no money in making a dGPU for Kaby Lake-G, or for the gaming market, when they face Nvidia. Intel would get decimated.
Intel wants a piece of Nvidia's 120 BILLION market cap, and that is all AI and compute.
“In this position, Koduri will expand Intel’s leading position in integrated graphics for the PC market with high-end discrete graphics solutions for a broad range of computing segments.”
http://app.plan.intel.com/e/es?s=334284386&e=1991393&elqTrackId=1095b24e4d5a4c36a50a51a8350a8b33&elq=ba4e70bfe7e848b98d687e0fc8430374&elqaid=31946&elqat=1
High-end discrete graphics = workstations.
Nowhere did Intel imply or state they were venturing into the GPU gaming market.
High-end laptop graphics is EMIB/MCM with AMD's GPU dies currently for Intel, and Intel's/Raja's most pressing need, the reason Intel hired Raja, is to get a GPU-based server/HPC compute/AI product to compete with Nvidia's Volta.
Intel can always buy AMD dies for its EMIB/MCM laptop or even desktop consumer/gaming focused products. Raja's job from the get-go is to whip up a Volta Enterprise compute/AI competitor to Nvidia's Volta and AMD's Vega 20 offerings.
Power9, licensed by Google from OpenPower and paired with Nvidia's Volta, has been giving Intel sleepless nights, along with AMD's Epyc workstation/server offerings and AMD's MI25s and WX 9100s.
AMD still has a workstation-grade APU on an interposer in the pipeline, and that interposer will be an active interposer hosting all the traces and the active Infinity Fabric circuitry. It will see AMD's Zen core dies a little more tightly coupled to the big Vega GPU die and HBM2 via that fully cache-coherent Infinity Fabric, Zen cores to Vega GPU die. This APU-on-an-active-interposer design will allow AMD more room on the various CPU/GPU processor dies for compute, with the space saved by moving the inter-die connection fabric and its active circuitry onto the interposer's silicon substrate. Silicon interposers are really no different from silicon dies, so maybe there will be some other things on the active interposer, like buffer memory and decoder logic.
This PDF (1), a research paper from AMD and academia funded by US Government Exascale Initiative research grant money, should give you a good idea of where AMD is heading for the workstation/server market in addition to the exascale market. It's just a matter of connecting the dots.
(1)
“Design and Analysis of an APU for Exascale Computing”
http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf
Edit: Volta Enterprise
to: Volta like Enterprise
AMD is the world leader in the Mantle API, so Nvidia is DOOOOM'ed!
Mantle is now open sourced and renamed Vulkan. And if you look at some of the function names, function declarations, and code in Mantle/Vulkan and compare them to DX12, Microsoft just copy/pasted the exact code from the Mantle/Vulkan source with no extra refactoring, variable/function name changes, or namespace decorations, errors and all.
Copypasta! copypasta! copypasta!
Nvidia will have to be allowed equal access to the Intel EMIB/MCM nano-motherboard thingy if too many third-party laptop OEMs start using Intel's EMIB/MCM nano-motherboards in their laptop SKUs. No Intel/AMD x86 CPU/Radeon GPU collusion/trust monkey business allowed.
I didn't think anyone was gullible enough to believe that myth; I guess I was wrong.
If you had even the slightest idea of the level of cooperation and lead times involved in developing a new API, you'd understand how stupid it is to say Microsoft copied Mantle. But don't take my word for it, as Robert Hallock has gone on record saying those claims are BS.
http://www.redgamingtech.com/directx-12-amd-comment-on-new-api/
Robert Hallock is going to cover for AMD's semi-custom client, Microsoft, because that's what his job as global technical marketing lead for AMD's Radeon Technologies Group requires him to do.
I'll trust Charlie over at S/A a little more than any AMD marketing talking head!
And Charlie states:
“Just as SemiAccurate predicted months ago, Microsoft has adopted AMD’s Mantle but are now calling it DX12. The way they did it manages to pull stupidity from the jaws of defeat by breaking compatibility in the name of lock-in” (1)
(1)
“Microsoft adopts Mantle but calls it DX12
GDC 2014: Completely different because it is not called Mantle, just ask MS PR”
https://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/
I remember this one.
Some thought it was a first move toward MS buying AMD. Like myself.
But no. It was just another normal MS ripoff.
You know it's called SemiAccurate for a reason, right?
Saying Microsoft copied Mantle shows a very poor level of understanding and demonstrates you don't know what you're talking about; maybe you should start by teaching yourself how an API like Mantle or DirectX is developed and the man-hours involved in such projects.
If there's any copying going on, it wouldn't have been Microsoft copying Mantle; it would have been the other way around. You don't just come up with a brand new API like Mantle overnight without anyone knowing about it years in advance.
Obviously you're free to believe whatever you like, but it does make you look more than a little stupid when you parrot other people just because their opinion happens to reinforce yours. It's called confirmation bias.
https://hothardware.com/news/new-reports-claim-microsofts-directx-rips-off-mantle-wont-help-xbox-one–but-is-it-true-
Yes, keep posting those friendly press articles, but Microsoft is a big AMD semi-custom client, and AMD's Mantle is now named Vulkan, a Khronos-managed open standards project. Microsoft got the source to Mantle as an AMD semi-custom client (console APUs), and Microsoft knew that AMD was going to open source Mantle. AMD handed Mantle to the Khronos industry standards group, which renamed it Vulkan and got that project off and running.
Microsoft knew that it had to take Mantle (now called Vulkan) and make DX12 under Microsoft's OS vendor lock-in model, where users wanting the latest graphics APIs on Windows are forced to get the latest Windows OS or do without the latest DX## API! Microsoft's OS vendor lock-in is in the DX## graphics API for gamers. Nvidia uses CUDA the same way, for GPU vendor lock-in, instead of OpenCL!
And Charlie over at SemiAccurate has the journalistic credentials and a take-no-ad-dollars approach, compared to some enthusiast websites that have to toe the line for ad dollars and free review samples, and that's one big conflict of interest right there.
Charlie has his ways, but he is not tied to any conflicts of interest that limit what he can say. So Charlie can and does speak his mind, and that's fine by me and fine for many folks who want an unvarnished and conflict-of-interest-free source of information.
A lot of the online press is in the business of selling mind share for a fee, and some are restricted by free-review-sample NDA-type restrictions even after the products are officially released! And ditto for the ad revenues obtained from the very companies whose products these review sites are supposed to be impartially reviewing. Free review samples can come with permanent review restrictions, and that's the nature of the beast that is the online review business.
You do know Charlie Demerjian is a council member of the Gerson Lehrman Group, whose clients include corporations, hedge funds, private equity firms, professional service firms, and non-profit organizations, right?
The guy has a history of trolling and sensationalism.
How about you start thinking for yourself instead of parroting what other people tell you? How about you address the points I raised, like how long you think an API like Mantle or DirectX takes to develop and the man-hours involved in such projects?
The ‘close-to-the-metal’ part would take long. But that was done.
The rest is just mainly prototyping and inheritance, exposing some public methods and properties.
Tada.
Next question?
addendum:
I forgot the “man power” part. Sorry. Here goes:
In the case of MS, hundreds of idolatrizing non-binary SJWs.
It's clear you don't have a clue what you're talking about; you're just spouting anti-whatever rhetoric. Like I said, how about you engage in some independent thinking?
How about applying Occam's Razor to your rather comical theory: is it more likely that company A, with 22 years of experience with an API, their own device that depends on the aforementioned API, and a $10 billion R&D spend, copied and pasted something, or that company B, with zero previous experience of an API, no devices that depend on a specific API, and only a $230 million R&D spend, allowed company A to steal from them?
Do you really think AMD are so stupid that they wouldn’t bother suing Microsoft?
Now your handicap is clear: thinking that thinks. You can now heal.
~ Thank you for sharing with us. ~
You're obviously not a programmer. Stop asking questions you know nothing about in the hope that the other party also doesn't.
Please stick to that "you get what you paid for" mentality.
But be careful! Look at what that mentality made Intel do with Raja.
“… is it more likely that…”
So, you're 'supposing' while asserting others' cluelessness.
Well, that's an effort. But where in this forum does it say it is a 'suppositorium'?
Not at all, I'm asking what you think, but by the looks of things you don't possess the skills needed for independent thinking, and that's before we even get onto your rather deranged demeanor.
—
Your quotes;
“… shows a very poor level of understanding and demonstrates you don’t know what you’re talking about…”
“… but it does make you look more than a little stupid when…”
“… You do know Charlie Demerjian is a council member of the Gerson Lehrman Group, whose clients include corporations, hedge funds, private equity firms, professional service firms, and non-profit organizations…” (put x-files theme here)
“… It’s clear you don’t have a clue what you’re talking about, you’re just spouting anti-whatever rhetoric…”
“… don’t possess the skills needed for independent thinking, and that’s before we even get onto your rather deranged demeanor…”
“… how about you address the points I raised, like how long you think an API like Mantle or DirectX takes to develop and the man-hours involved in such projects…”
—
I did address them, under the 'WhoElse' nick. You got owned by the answer, and now you're trying to let it slip away.
Then your intent here got exposed by 'notmuch' revealing how dishonest you are in your assertions, and now you come with the “deranged” insinuation.
Now, in this forum, we all also know what ‘projection through frustration’ is. You being the example of it.
Once again:
~ Thank you for sharing. ~
You didn't address anything; you're hardly even managing to string a coherent sentence together.
And since when was a conversation about “getting owned” (how old are you, anyway)? A conversation is, and should be, about the exchange of information and opinions; it's not a vehicle for prepubescent teens who feel the need to display their machismo to the world.
“The ‘close-to-the-metal’ part would take long. But that was done.
The rest is just mainly prototyping and inheritance, exposing some public methods and properties.”
This was the addressment.
Since it's obvious you don't have the slightest idea of what's being addressed there, at least you could have assumed that it had something to do with your question. You know, the one you were expecting no one to respond to? Remember?
But that would only have been possible if your pedantic IQ value were above average room-temperature values.
You’re just full of it, hence the insinuations.
“You didn’t address anything, you’re hardly even managing to string a coherent sentence together.”
Actually, what you need are pedagogical drawings. Request them next time. Somewhere else.
“… it’s not a vehicle for prepubescent teens who feel the need to display their machismo to the world.”
I beg your pardon? Is this other insinuation a response to the “idolatrizing SJWs” reference I made?
Are you assuming all 'idolatrizing SJWs' are women? Looks like it.
Sorry for the lower-left keyboard mess-up. Unlike your brain, it was just a temporary left-hand dormancy.
Yes, and those clients want unvarnished reviews and assessments of any technology made by any companies they may be investing in. That precludes any websites that may have free-review-sample and ad-revenue conflicts of interest, and makes Charlie's S/A website even more of a good choice, as most of Charlie's consulting for those investors involves getting at the real information about said products. Charlie's income depends on his ability to provide his clients with the realistic view, not the marketing wonks' view, of any technology company's product potential.
Investors like retirement funds and other big institutional investors have to perform their due diligence, or fund managers can lose their jobs or even find themselves in legal trouble in front of the SEC and other federal and state regulators.
So Charlie posts acerbically about your godhead, and you are thus offended, so you must smite old Charlie for not following your marketing-driven agenda!
Without AMD in the picture, Nvidia can decide where high-end gaming and enthusiasts will go in a hypothetical future where it is the only maker of high-end GPUs, holding all the proprietary software and most of the patents for making a high-end GPU. And Nvidia is in the ARM camp, not x86. Intel needs high-end GPUs.
The thing is that Raja is just Raja, not Raja and 10,000 patents, so his task at Intel will not be easy. Of course, if Apple can create a GPU without Imagination's patents, then maybe Intel can also do so. They both have money.
Fun stuff.
Keller was at AMD. Keller goes to Tesla. Rumors talk about an AMD chip for Tesla.
Raja was at AMD. AMD and Intel collaborate. Raja goes to Intel.
I wonder if AMD is building its future in markets where it is NOT present, if it is building future collaborations with companies that are even competitors today but share a common enemy, by sending its best to work at those other companies.
Such a move can only benefit AMD, as let's face it, they don't have the clout, in terms of both cash and reputation, to drive widespread adoption of much of the technology that benefits their GPUs.
I would imagine that with Raja moving to Intel, any future Intel GPU tech is going to incorporate and drive adoption of things like OpenCL, Mantle, FreeSync, heterogeneous computing and such. This can only be a good thing for a cash-strapped AMD, as they can ride on Intel's coattails. It's obviously a gamble, but AMD needs to pick its fights if it wants to see its plans come to fruition.
If you guys think Intel won't take advantage of Raja's knowledge of GPUs as they pertain to PC gaming, you checked your brain at the door. The press release itself says "…for a broad range of computing segments," and you really don't think gaming would be one of those segments?? Intel tried to squeeze its way into GPU territory before but ultimately failed, due to a lack of absolute talent in my opinion. Now that Intel has Raja on board, we'll eventually see a CPU from Intel with an iGPU that's actually worth gaming on.
Just don't stress him too much…
Last time I checked, the Intel quote is:
“Koduri will expand Intel’s leading position in integrated graphics for the PC market with high-end discrete graphics solutions for a broad range of computing segments.”
He's going to expand Intel's leadership “WITH” high-end discrete graphics solutions for a broad range of “COMPUTING SEGMENTS”.
Last time I checked, the gaming market is low/mid/high. The only high-end computing segments are workstations and servers, where the big money is.
Intel can see that CPU roadmaps are slowing down in incremental improvements. This will allow competition to catch up. Time to diversify into complementary markets.
This surreptitious web bug tracker:
‘http://pixel.quantserve.com/pixel/p-67zIajlzwUyLI.gif’
… is ruining ‘pcper.com’ HTTPS compliance: “Connection is not secure” in Firefox.
In case ‘pixel.quantserve.com’ doesn’t use TLS, you guys could wrap that external link with HTML comment tags.
How about it?
“… you guys could wrap that external link with *html remark tags*”
btw,
Can I use it in all pages of my 3,916 sites? Just for R&D?
If Raja were an NFL or college football coach, he would have been fired last season. Raja cannot win the big game.
AMD RTG has not been able to steal a march on nVidia since Raja took over the helm.
Good riddance.
Bring back Jim Keller!!