EPYC makes its move into the data center
We detail today’s launch of the AMD EPYC 7000 series of CPUs, which targets the highly profitable data center customer.
Because we traditionally focus and feed on the excitement and build-up surrounding consumer products, the AMD Ryzen 7 and Ryzen 5 launches were huge for us and our community. Finally seeing competition to Intel’s hold on the consumer market was welcome and necessary to move the industry forward, and we are already seeing some of the results of that with this week’s Core i9 release and pricing. AMD is, and deserves to be, proud of these accomplishments. But from a business standpoint, the impact of Ryzen on the bottom line will likely pale in comparison to how EPYC could fundamentally change the financial stability of AMD.
AMD EPYC is the server processor that takes aim at the Intel Xeon and its dominant status in the data center market. The enterprise field is a high-margin, high-profit area, and while AMD once had significant share in this space with Opteron, that share has essentially dropped to zero over the last 6+ years. AMD hopes to use the same tactic in the data center as it did on the consumer side to shock and awe the industry into taking notice: providing impressive new performance levels while undercutting the competition on pricing.
Introducing the AMD EPYC 7000 Series
Targeting the single- and 2-socket systems that make up ~95% of the data center and enterprise market, AMD EPYC is smartly not trying to punch above its weight class. This offers an enormous opportunity for AMD to take market share from Intel with minimal risk.
Many of the specifications here have been slowly shared by AMD over time, including at the recent financial analyst day, but seeing it placed on a single slide like this puts everything in perspective. In a single socket design, servers will be able to integrate 32 cores with 64 threads, 8x DDR4 memory channels with up to 2TB of memory capacity per CPU, 128 PCI Express 3.0 lanes for connectivity, and more.
Worth noting on this slide, and originally announced at the financial analyst day as well, is AMD’s intent to maintain socket compatibility for the next two generations. Both Rome and Milan, based on 7nm technology, will be drop-in upgrades for customers buying into EPYC platforms today. That kind of commitment from AMD is crucial to regaining the trust of a market that needs those reassurances.
Here is the lineup as AMD is providing it to us today. The model numbers in the 7000 series use the second and third characters as a performance indicator (755x will be faster than 750x, for example) and the fourth character to indicate the generation of EPYC (here, the 1 indicates first gen). AMD has created four different core count divisions along with a few TDP options to cover all types of potential customers. Though this table might seem a bit intimidating, it is drastically leaner than the Intel Xeon product line that exists today, or that will exist in the future. AMD is offering immediate availability of the top five CPUs in this stack, with the bottom four due before the end of July.
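To make the naming scheme concrete, here is a minimal sketch of how a 7000-series model number decodes. The parse_epyc_model helper and its field names are illustrative, not an AMD tool; the “P” suffix check reflects the single-socket SKUs discussed further down.

```python
# Minimal sketch of how an EPYC 7000-series model number decodes, following the
# naming scheme described above. parse_epyc_model is an illustrative helper, not
# an AMD tool; the "P" suffix check covers the single-socket SKUs covered later.

def parse_epyc_model(model: str) -> dict:
    """Decode a model string such as '7601' or '7551P'."""
    assert model[0] == "7", "7000-series parts start with 7"
    return {
        "series": model[0],                   # '7' -> EPYC 7000 series
        "performance_indicator": model[1:3],  # higher is faster (e.g. 55 > 50)
        "generation": model[3],               # '1' -> first-generation EPYC
        "single_socket_only": model.endswith("P"),
    }

print(parse_epyc_model("7601"))
# {'series': '7', 'performance_indicator': '60', 'generation': '1', 'single_socket_only': False}
```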
EPYC will take the form of 32-core, 24-core, 16-core, and even 8-core options, with base clock speeds ranging from 2.1 GHz to 2.4 GHz and Turbo clock rates of up to 3.2 GHz. That peak Turbo clock is on the 32-core part (!!), though the “all cores loaded” turbo will be around 2.7 GHz. TDPs will be either 180 watts or fall into the 155/170 watt range, depending on the speed settings of the DDR4 memory. The baseline 8-core part, mainly targeted at systems that need storage and IO connectivity, will run at 120 watts. (If you are wondering why that is higher than consumer parts that run at much higher frequencies, remember that the added PCIe and DDR4 channels increase the power draw considerably.) Also, because the EPYC 7000 series of CPUs integrates south bridge functionality such as USB 3.0 and SATA, system builders do not need to add the power of a chipset to their TDP calculations as they do on Intel’s Xeon line.
The entire EPYC 7000 series product line will offer 8 channels of DDR4 memory, with speeds hitting 2666 MHz. That is double the channel count of the current Xeon family and 33% more than the 6 channels expected on the Xeon Scalable products coming later this year, providing substantially higher memory bandwidth and up to 2TB of memory capacity per socket. Every EPYC 7000 CPU will feature 128 lanes of PCI Express, allowing data center and workstation builders to utilize substantial resources outside of the processor, including graphics cards, networking, storage controllers, and more.
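As a rough sanity check on those channel-count claims, here is a back-of-the-envelope peak-bandwidth comparison. The DDR4-2400 speed assumed for the current 4-channel Xeon is an assumption on our part, not something stated by AMD.

```python
# Back-of-the-envelope peak DDR4 bandwidth per socket (64-bit channels, 8 bytes per transfer).
def peak_bandwidth_gbs(channels: int, megatransfers: int, bus_bytes: int = 8) -> float:
    return channels * megatransfers * 1e6 * bus_bytes / 1e9

epyc    = peak_bandwidth_gbs(8, 2666)  # ~170.6 GB/s
xeon_e5 = peak_bandwidth_gbs(4, 2400)  # ~76.8 GB/s (DDR4-2400 assumed for the current Xeon)
xeon_sp = peak_bandwidth_gbs(6, 2666)  # ~128.0 GB/s

print(f"EPYC vs. current 4-channel Xeon:  {epyc / xeon_e5:.2f}x")  # >2x once memory speed is factored in
print(f"EPYC vs. 6-channel Xeon Scalable: {epyc / xeon_sp:.2f}x")  # 8/6 channels -> ~1.33x
```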
AMD claims to have significant performance advantages in a 2P (two-socket) configuration compared to the comparably priced Xeon product line. At the flagship level, comparing the >$4000 EPYC 7601 to the Intel Xeon E5-2699 v4, AMD expects to have 47% faster SPECint performance. Keep in mind that the E5-2699 v4 is a 22-core/44-thread processor, so the performance advantage AMD holds is largely a result of having 45% more cores at the same price point. Clock speeds of the Xeon part are comparable, with a 2.2 GHz base and 3.6 GHz Turbo clock.
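Putting that core-count caveat into numbers, a quick calculation suggests the two parts land at roughly the same throughput per core; the figures below simply restate AMD’s claim and are not measured results.

```python
# 47% higher SPECint with ~45% more cores works out to near parity per core.
epyc_cores, xeon_cores = 32, 22
relative_perf = 1.47                              # AMD's claimed SPECint advantage

core_ratio = epyc_cores / xeon_cores              # ~1.45x more cores
per_core_ratio = relative_perf / core_ratio       # ~1.01x -> roughly equal per core

print(f"Core count advantage: {core_ratio:.2f}x")
print(f"Per-core throughput:  {per_core_ratio:.2f}x")
```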
As you move down the price/competitive matchups on the slide above, the advantages for AMD grow, going as high as 70% for the ~$800 products, again using SPECint as the basis for the comparison. Though we are all intelligent enough to understand that one benchmark does not make the case for an entire platform, AMD seems eager to get more benchmarks and workflow comparisons in front of the media to prove its case for the value EPYC will provide. And even though we know we will see higher core count processors in the upcoming Skylake refresh of the Xeon product family, all indications are that pricing from Intel remains stable (or is going up), so AMD should maintain its performance advantages with price as the target metric.
Though the single socket platform for servers is a significantly smaller market, AMD has a few interesting SKUs built specifically for it.
These solutions are useful for storage systems or GPU-centric builds that don’t need a significant amount of CPU compute power. Customers will still get access to 8 channels of DDR4 memory and 128 lanes of PCI Express, something that comparable Intel single-socket solutions will not be able to match.
AMD is running the comparison of a single EPYC processor against a pair of Intel Xeon processors, with the obvious performance advantage going to AMD in SPECint once again.
The software ecosystem is of critical import for the early success of AMD EPYC, and though there is clearly some additional work and partnerships to come, AMD feels confident that the collection of software partners it has on-board for the launch window is solid.
All the key operating system and hypervisor companies appear to be working with AMD to prepare the software infrastructure: Microsoft, Red Hat, Ubuntu, VMware, Citrix, and others are on board, and development tools like Visual Studio and GCC have libraries and compilers in place to give AMD the steady footing it needs to push upwards with EPYC.
In a similar way, AMD has been working with the hardware ecosystem to prepare for today’s release, bringing many of the industry’s top-level competitors into the fold.
HPE, Dell EMC, Tyan, Supermicro, and even Lenovo are building platforms for EPYC.
Wow. I sort of wish I was still doing datacenter work.
32 cores on 1S. 48 cores next year… maybe 64 cores in 2020.
I know that EPYC is strictly a server platform (SoC), but it seems that AMD are back, and if TR really delivers the goods then I’m on course for my first AMD build since the dinosaurs died out, which is quite a long time. LOL
Go AMD, force Intel to innovate for a change instead of slowing down progress, monopolizing, and milking the market.
I am a bit disappointed; I was expecting higher clocks for the 8-core parts, something to compete with the Xeon E5-1680 v4, for example. Not everyone needs 16 or 32 cores in the datacenter. The Xeon E5-1660 v4 and E5-1680 v4 are very popular with our customers, and AMD does not deliver anything comparable in its EPYC lineup. AMD seems to target the E5-26xx and upper Xeon products.
Really, a Ryzen processor with ECC enabled would be suitable competition for a Xeon E5-1680 v4. They are both 8-core/16-thread, but Ryzen has dual-channel memory instead of quad-channel. There would be some market for dual-die parts (like ThreadRipper), but I don’t know if there has been any info about those from AMD. I don’t know why they wouldn’t sell some of them as workstation parts also, although it is unclear how they would be branded. AMD may want to create a wide separation between consumer-level parts and EPYC.
Supermicro are planning a “DP Tower” workstation:
https://mma.prnewswire.com/media/525719/Super_Micro_Computer_AMD.jpg?w=1600
https://www.supermicro.nl/Aplus/motherboard/EPYC7000/H11DSi.cfm
Hi Ryan,
because of the uniformity of I/O features (DRAM, PCIe, socket),
do you think it is possible for AMD to make EPYC “software upgradable” with a simple BIOS flash?
I mean, with the security processor embedded, you could conceivably let the core count be “software defined” instead of using blown fuses…
Thanks and all the best
Carlo
wow, I had the same thought minutes ago: pay per core, and activate more for a fee as an upgrade path.
Better than just zapping OK cores.
Yes, this is my #1 concern… AMD disabling 75% of the cores on the entry-level $480 four-die EPYC seems like AMD is DESTROYING its dies.
They pay GlobalFoundries full price, and ruin the die to sell it at a deep discount… makes no sense.
I don’t know the prices, but I wouldn’t be surprised if the actual cost to produce a chip (after you’ve already paid the R&D and fab upgrade costs) is WAY below what they sell the chips for. If that is the case, it makes sense to try to sell as many chips at the highest prices possible. Sometimes that means you can only sell them for $200 a pop as a lower-end processor; sometimes that means you get to sell them for $2000 a pop as a higher-end processor. Turning off certain cores is just a way to differentiate the prices more than anything (from a business point of view).
All companies salvage dies with parts that are not fully functional. Intel used to make only 2 or 3 different Xeon dies, and all the others were salvaged with cores disabled. I would doubt that AMD is really getting 80% of dies fully functional. There can be defects in cores, in caches, and in the PCIe/interprocessor links. AMD’s architecture should allow them to sell just about everything they make. Dies with defects in the Infinity Fabric can be sold in the consumer market as Ryzen processors; Ryzen uses a tiny number of HSIO links compared to a fully functional die, and a die with some defective links may still be usable for a ThreadRipper part.
They obviously can sell parts with the full range of core counts. The parts that they would sell as an 8-core EPYC processor almost certainly have multiple defective cores, but fully functional interprocessor links. I am surprised that they are even going to be selling an 8-core EPYC processor since Ryzen is already 8 cores. The only reason to buy an 8-core EPYC processor would be if you need massive memory bandwidth or massive IO, but not much compute performance. There are some applications that might perform very well on it, though: it will have a huge amount of cache per core and a huge amount of memory bandwidth, but thread-to-thread communication will have to be coarse-grained to perform well. Workloads where the main system is just sending data to the GPUs could be a candidate also. It would be very wasteful to sell a die with multiple bad cores but fully functional HSIO links as a low-end Ryzen part when it could be a low-end EPYC part.
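To make that salvage argument concrete, here is a rough sketch of the binning logic this comment describes. It is purely the commenter’s speculation expressed as code, not AMD’s actual binning rules; the Die fields and thresholds are hypothetical.

```python
# The salvage/binning strategy sketched in the comment above, as a decision function.
# This is speculation expressed as code, not AMD's actual binning rules; the Die
# fields and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Die:
    good_cores: int        # working cores out of 8
    fabric_links_ok: bool  # interprocessor / HSIO links fully functional?

def bin_die(die: Die) -> str:
    if die.fabric_links_ok:
        # Usable in a multi-die EPYC package even with several dead cores,
        # as long as at least 2 cores per die can be enabled.
        return "EPYC (core count set by good cores)" if die.good_cores >= 2 else "scrap"
    # Defective links: still sellable as a single-die consumer part.
    return "Ryzen / ThreadRipper candidate" if die.good_cores >= 4 else "low-end Ryzen or scrap"

print(bin_die(Die(good_cores=8, fabric_links_ok=True)))   # top EPYC bin
print(bin_die(Die(good_cores=3, fabric_links_ok=True)))   # low-core-count EPYC
print(bin_die(Die(good_cores=6, fabric_links_ok=False)))  # consumer part
```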
I doubt they are getting an 80% yield too – the original article stated that the RUMOR is an 80% yield of dies with at least 6 fully functional cores.
There’s a way to ‘cheat’ (work smart) by using decades-old technology, sub-field stitching, where similar pieces are joined to make a larger die than could normally be produced.
This is (to the limited extent that it is used) prevalent in image sensors, where the available equipment may be small but a full-frame or much larger sensor is required.
Stitching image: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2572992/bin/nihms29502f9.jpg
So you could visually examine the finished wafer, see which dies looked good, stitch adjoining ones together, cut them apart, wire them up with Infinity Fabric, and then put four together to make one EPYC CPU. When that didn’t work, you could burn a fuse and have Ryzen Pro; failing that, Ryzen or recyclables.
That would possibly give you an 80% yield of ‘something’ that “works” at some frequency; probably not an 80% yield of dies with 6 perfect EPYC cores.
A 50% yield would almost double the cost (not quite double, because you can SEE errors before too much expensive additional work is done), so the yield ought to be well over 50% or it’s going to be expensive.
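For what it’s worth, the cost-versus-yield relationship in that last point can be sketched with a one-line model; the wafer cost and die count below are invented purely for illustration.

```python
# One-line cost model behind the yield argument above. The wafer cost and die
# count are invented; only the 1/yield relationship comes from the comment.

def cost_per_usable_die(wafer_cost: float, dies_per_wafer: int, usable_fraction: float) -> float:
    # "Usable" counts fully good dies plus salvageable ones sold with cores disabled.
    return wafer_cost / (dies_per_wafer * usable_fraction)

print(cost_per_usable_die(6000.0, 200, 1.0))   # $30 per die at a (hypothetical) perfect yield
print(cost_per_usable_die(6000.0, 200, 0.5))   # $60 per die at 50% usable -> roughly double
```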
“AMD is disabling cores in a symmetrical pattern, so a 32-core part will have four dies with all 8 cores enabled on each. A 24-core processor will have 1 core of each CCX, and thus 2 cores per die, disabled. A 16-core part will have 2 cores of each CCX disabled, 4 per die. And the 8-core part will have 3 of the 4 cores per CCX disabled.”
Hmm, if AMD are getting 80% yields on their cores, it sounds like they will be trashing a lot of OK cores.
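For reference, the symmetrical pattern quoted above works out as follows, assuming the standard EPYC layout of 4 dies per package, 2 CCXs per die, and 4 cores per CCX; anything beyond the quoted description is an assumption.

```python
# The symmetrical disabling pattern quoted above, as arithmetic: 4 dies per
# package, 2 CCXs per die, 4 cores per CCX, with the same number of cores
# disabled in every CCX.

DIES, CCX_PER_DIE, CORES_PER_CCX = 4, 2, 4

for disabled_per_ccx in range(4):
    enabled = DIES * CCX_PER_DIE * (CORES_PER_CCX - disabled_per_ccx)
    print(f"{disabled_per_ccx} core(s) disabled per CCX -> {enabled}-core EPYC part")
# 0 -> 32-core, 1 -> 24-core, 2 -> 16-core, 3 -> 8-core
```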
If you look at the die shot of Zen, the 8 cores don’t even make up 50% of the die.
Yet every EPYC part has the very sensitive uncore fully functional.
Seems like AMD is paying GlobalFoundries full price, then going on to destroy the die so they can sell it at a deep discount.
It is possible to have a die with multiple defective cores but fully functional interprocessor links. Without selling such dies as low-core-count EPYC processors, they would have to be sold as very low-end Ryzen parts due to the low number of functional cores.
Seems like AMD will be trashing a lot of OK cores and going on to destroy the die!