Phoronix have been hard at work testing out AMD's new server chip, specifically the 2.2/2.7/3.2GHz EPYC 7601 with 32 physical cores. The frequency specification now has a third member, the top frequency all 32 cores can hit simultaneously; for this processor that is 2.7GHz. Benchmarking server processors is somewhat different from testing consumer CPUs: gaming performance matters less than performance in specific productivity applications. Phoronix tested EPYC in both NUMA and non-NUMA configurations against several Xeon models, and the performance delta is quite impressive, sometimes leaving even a system with dual Xeon Gold 6138s in the dust. They also followed up with a look at how EPYC compares to Opteron, AMD's last server offerings. The evolution is something to behold.
"By now you have likely seen our initial AMD EPYC 7601 Linux benchmarks. If you haven't, check them out, EPYC does really deliver on being competitive with current Intel hardware in the highly threaded space. If you have been curious to see some power numbers on EPYC, here they are from the Tyan Transport SX TN70A-B8026 2U server. Making things more interesting are some comparison benchmarks showing how the AMD EPYC performance compares to AMD Opteron processors from about ten years ago."
Here are some more Processor articles from around the web:
- Core i7 vs. Ryzen 5 with Vega 64 & GTX 1080 @ TechSpot
- AMD Threadripper 1950X Linux Benchmarks @ Phoronix
- The Top 5 Best CPUs of All Time @ [H]ard|OCP
We still have a roughly ten-year-old Opteron 2.2 GHz dual-socket system with 32 GB of memory serving as an NFS server where I work. It still seems to work fine; we haven't had any issues with it.
Who would’ve thought at the beginning of this year that AMD would beat Intel in multithreaded workloads for servers all while being significantly cheaper?
Infinity Fabric seems to be genius: small dies, glue them together… it’s beautiful.
The AMD platform also seems to be ~30% more power efficient.
So it’s cheaper to buy and cheaper to run.
Server farms have thermal capacity and power capacity limits.
Epyc allows them to host more clients and offer services for cheaper.
So I agree, who saw it coming? I bet even AMD is surprised 🙂
Well, if that Infinity Fabric is all that AMD says it is, Raja’s statements about it included, then I’d expect those dual Vega cards will be made to look like one GPU to the software.
So I’m looking forward to seeing the Vega Nano, and maybe even a dual-GPU card based on some Vega variant, with each GPU core wired up via that Infinity Fabric and each core using less power than even Vega, while offering plenty of gaming performance relative to the GTX 1080 Ti or Titan Xp. Two Vega 56s on one PCIe card would have 128 ROPs vs. the Titan Xp’s 96 ROPs or the GTX 1080 Ti’s 88 ROPs. So if that Infinity Fabric can do for dual Vega 56s on one PCIe card what it does for those Zeppelin dies on Epyc/Threadripper, then AMD will have some scalable options for its dual-GPU single-card variants, which are almost sure to appear before Navi gets here and takes that to the next level.
I don’t think that will happen with Vega. There have been some rumors of an HPC APU that uses a small Vega die with HBM on a separate interposer, placed on an MCM with two Ryzen dies. That leaves four links available to connect the CPUs and the GPU. It might make an excellent blade-type device plugged into a backplane for HPC. I expect Infinity Fabric on the GPU is mostly for compute applications, not gaming, at least for this generation.
Rendering is parallel enough that you don’t really need very high-bandwidth communication between dual GPUs. The software support has been the limiting factor. Some DX12 features will make it easier. Just using asynchronous compute a lot could help, since that can be run on either GPU easily. AMD released the Radeon Pro Duo (dual Fury GPUs) card mostly to help drive software support for multiple GPUs. If they could easily make two GPUs look like one monolithic GPU, then such work would be unnecessary. I don’t know if even Navi will be made to look like a monolithic GPU. I wouldn’t be surprised to see Navi set up like Epyc, though. It could use small GPUs with HBM memory of some type, mounted on an MCM. PCI Express 4.0 should deliver about 31.51 GB/s for an x16 link by then. It will be interesting to see what they can do with that much bandwidth for both CPUs and GPUs.
I suspect that if you want to make a multi-die GPU look like a monolithic GPU, you would need to design the whole thing from the start to go on one silicon interposer. The die probably would not be usable independently. AMD has not done that with Vega. It is designed to be a discrete GPU. Such an interposer based device may be possible for Navi, but the MCM type implementation would probably be a lot cheaper. It would also allow individual component GPUs to be sold in multiple form factors, just like the zeppelin die used for Ryzen and Epyc.
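The PCI Express 4.0 bandwidth figure quoted above can be reproduced with a little arithmetic. A minimal sketch, assuming the published PCIe parameters of 16 GT/s per lane for gen 4 (8 GT/s for gen 3) and 128b/130b line encoding:

```python
def pcie_bandwidth_gbps(gt_per_s, lanes, payload_bits=128, line_bits=130):
    """Usable per-direction bandwidth in GB/s for a PCIe link.

    gt_per_s: transfer rate per lane in GT/s (16 for PCIe 4.0, 8 for 3.0).
    128b/130b encoding means 128 payload bits per 130 line bits.
    """
    return gt_per_s * (payload_bits / line_bits) / 8 * lanes

print(round(pcie_bandwidth_gbps(16, 16), 2))  # PCIe 4.0 x16 -> 31.51
print(round(pcie_bandwidth_gbps(8, 16), 2))   # PCIe 3.0 x16 -> 15.75
```

That is per direction; a PCIe link is full duplex, so the aggregate figure is double.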
No, the Infinity Fabric (IF) is more than just a simple connection fabric: in addition to the data/cache-coherency fabric, there is also a control fabric that runs on its own bus as part of the IF. So the entire Infinity Fabric can make any two processor dies that implement the IF IP modular and scalable, whether big processor dies or little ones.
And you do not need an MCM just to connect two GPU dies with the Infinity Fabric: even on Epyc systems, the two socketed CPUs can be linked across the motherboard’s PCB over the Infinity Fabric and share cache coherency all the way up among all the CCX units on both Zeppelin-based MCMs. So this is not like using the PCIe protocol plus CrossFire in the drivers to wire up two Vega dies on one PCIe card; if the dual-die connection uses the Infinity Fabric IP, the pair can look to the software like one single, larger GPU.
Navi will take the Infinity Fabric and use it the same way Epyc uses it with Zeppelin dies; it’s just that Navi will use more, smaller modular GPU dies, while Vega makes use of the larger Vega 10 dies (which support the Infinity Fabric).
As Charlie over at S|A says, the Infinity Fabric will underpin everything AMD does going forward. That includes two Vega 10 dies that support the Infinity Fabric or, at a later time, more than two of the smaller modular Navi GPU dies currently being designed for the Navi SKUs in late 2018-2019.
The Infinity Fabric IP is also more capable than PCIe: PCIe is only a bus protocol, while the Infinity Fabric has more features for dual-die AMD processor configurations (CPU, GPU, FPGA, other). Look at Nvidia’s NVLink IP; the Infinity Fabric currently goes even beyond that level for wiring up any AMD processor dies designed to support it. So two Vega dies on one PCIe card using the Infinity Fabric protocol is not the same as two Vega dies on the same card using the PCIe protocol.
“I suspect that if you want to make a multi-die GPU look like a monolithic GPU, you would need to design the whole thing from the start to go on one silicon interposer. The die probably would not be usable independently. AMD has not done that with Vega. It is designed to be a discrete GPU. Such an interposer based device may be possible for Navi, but the MCM type implementation would probably be a lot cheaper. It would also allow individual component GPUs to be sold in multiple form factors, just like the zeppelin die used for Ryzen and Epyc.”
And that ability is already designed into the Infinity Fabric IP for any processor AMD makes that supports it; that’s why AMD calls it the Infinity Fabric! Vega 10 supports the Infinity Fabric just as the Navi microarchitecture will; it’s just that the Navi GPU dies will be smaller and more affordable to fab, with higher yields. So Vega already has the Infinity Fabric IP that Navi will be using to wire up processor dies. With Vega, AMD only had the funds to design one larger die for compute/AI and gaming, while Navi will consist of smaller GPU dies and carry even more microarchitecture tweaks. Both Vega and Navi will use the Infinity Fabric IP; that’s why AMD invested so much effort in creating the Infinity Fabric in the first place, to make all of AMD’s processor products modular and scalable!
I agree, but I think your dual-socket Epyc example is flawed. It’s more like the dual-socket link is a form of, or integrates, an MCM, among other things. The sockets may link over the bus, but it’s fabric, not PCIe.
In simple terms, there seems a slim chance two discrete Fabric devices (like Vega cards) could communicate over the PCIe bus except in the PCIe protocol.
It’s not impossible (PCIe devices like NVMe drives connect directly to the fabric, for example), and it would be great to see improved interconnects between fabric devices over PCIe, but I am skeptical.
Like you, I don’t just think it possible, I think it near inevitable, and like you say, AMD can completely relax about Vega matching Nvidia: all they have to do is double up on GPUs on a card or MCM and frugally pool shared resources to easily kick silicon in the 1080 Ti’s eyes.
It just fits AMD’s big-picture MO too perfectly: modular, and scalable using fabric.
Here is a huge clue in the latest drivers that we are right:
https://www.anandtech.com/show/11864/amd-releases-radeon-software-crimson-relive-edition-1792
“However more interestingly – and perhaps more telling – there is no mention of CrossFire terminology in the press release or driver notes. Rather, the technology is always referred to as “multi-GPU”. While the exact mGPU limitations of Vega weren’t detailed, AMD appears to specify that only dual RX Vega56 or dual RX Vega64 configurations are officially supported, where in the past different configurations of the same GPU were officially compatible in CrossFire.”
The Epyc 7401P (24 cores/48 threads at $1075) in a single socket, paired with the GIGABYTE MZ31-AR0 Extended ATX server motherboard (Socket SP3, $609), looks to be an interesting deal, and that motherboard price will come down as other board makers start offering competing products.
That Gigabyte single-socket Epyc/SP3 board is currently priced the same as the dual-socket Epyc/SP3 motherboards, but the dual-socket Epyc CPU SKUs are still a little more costly than any of AMD’s single-socket Epyc “P” variants, and the P variants offer even better per-core pricing than the consumer Threadripper parts. Even at the current $609 for the first single-socket Gigabyte SP3 board on Newegg, the 128 PCIe lanes offered on any Epyc platform still come out costing less per lane than on a $349 X399 motherboard. So hopefully Gigabyte will get some Epyc/SP3 competition from the other motherboard makers soon enough to force it to lower that initial $609 price.
The Epyc platform motherboards also offer 8 memory channels compared to TR/X399’s 4, and that Gigabyte Epyc/SP3 SKU adds dual 10Gb Ethernet and fully certified, warrantied ECC memory support, as do all the Epyc P and non-P CPU SKUs. The single-socket Epyc “P” parts and SP3 boards represent a reverse segmentation compared to Intel’s Xeon lineup: on a feature-for-feature, core-for-core basis, AMD’s single-socket offerings are actually the better deal even compared to AMD’s own consumer Threadripper/X399 offerings, which is most definitely not true of Intel’s Xeon offerings.
So those looking for more affordable workstation options from AMD can instead go full Epyc and get better features per dollar than any consumer AMD part for workstation workloads, which is something Intel cannot match. And the Epyc-branded parts get the three-year warranties and extended product support that come with real professional SKUs, ditto for the Epyc/SP3 motherboards, their BIOS/firmware, and ECC support from AMD’s Epyc motherboard partners.
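A rough sketch of the per-core and per-lane arithmetic behind that comparison, using the Epyc prices quoted in the comment; the Threadripper side (1950X at its $999 launch price, 16 cores, 64 PCIe lanes, on a $349 X399 board) is my assumed comparison build:

```python
def platform_metrics(cpu_price, board_price, cores, pcie_lanes):
    """Return (total platform cost, $/core, $/PCIe lane)."""
    total = cpu_price + board_price
    return total, total / cores, total / pcie_lanes

# Figures as quoted above; the Threadripper build is an assumption.
epyc = platform_metrics(1075, 609, 24, 128)  # Epyc 7401P + Gigabyte MZ31-AR0
tr = platform_metrics(999, 349, 16, 64)      # Threadripper 1950X + X399 board

print(f"Epyc:         ${epyc[0]} total, ${epyc[1]:.2f}/core, ${epyc[2]:.2f}/lane")
print(f"Threadripper: ${tr[0]} total, ${tr[1]:.2f}/core, ${tr[2]:.2f}/lane")
# Epyc:         $1684 total, $70.17/core, $13.16/lane
# Threadripper: $1348 total, $84.25/core, $21.06/lane
```

Under those assumptions the single-socket Epyc platform does come out cheaper both per core and per PCIe lane, which supports the commenter’s point.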
Gee, thanks for that. It saves me a lot of legwork confirming my general thoughts.
It’s a doozy of a notion.
For so little more than a $1000 TR (which already seems too cheap to a former Intel HEDT user), you can go to the 24-core Epyc. It’s really just a dearer but better mobo. If anything, you save money on RAM by using smaller sticks.
More generally, this is a recurring pattern in AMD’s transmogrification (when a pupa becomes a butterfly :)).
Just when you are about to concede they are beaten in some category (like Vega, or propaganda), Fabric’s magic comes to the rescue with an unanswerable extra processor module to decisively retake the crown.
If Intel gets a Ryzen beater, AMD sells them a TR at that price point. Similarly from TR to Epyc. To beat the 1080 Ti, AMD does a dual GPU, not in the kludgy CrossFire/SLI form of yore, but one that teams up as well as the Zeppelin dies in an Epyc.