EPYC makes its move into the data center
We detail today’s launch of the AMD EPYC 7000 series of CPUs to target the highly profitable data center customer.
Because we traditionally focus on, and feed off, the excitement and build-up surrounding consumer products, the AMD Ryzen 7 and Ryzen 5 launches were huge for us and our community. Finally seeing competition to Intel’s hold on the consumer market was welcome and necessary to move the industry forward, and we are already seeing some of the results of that with this week’s Core i9 release and pricing. AMD is, and deserves to be, proud of these accomplishments. But from a business standpoint, the impact of Ryzen on the bottom line will likely pale in comparison to how EPYC could fundamentally change the financial stability of AMD.
AMD EPYC is the server processor that takes aim at the Intel Xeon and its dominant status in the data center market. The enterprise field is a high-margin, high-profit area, and while AMD once had significant share in this space with Opteron, that has essentially dropped to zero over the last 6+ years. AMD hopes to use the same tactic in the data center as it did on the consumer side to shock and awe the industry into taking notice; AMD is providing impressive new performance levels while undercutting the competition on pricing.
Introducing the AMD EPYC 7000 Series
Targeting the single and 2-socket systems that make up ~95% of the market for data centers and enterprise, AMD EPYC is smartly not trying to punch above its weight class. This offers an enormous opportunity for AMD to take market share from Intel with minimal risk.
Many of the specifications here have been slowly shared by AMD over time, including at the recent financial analyst day, but seeing it placed on a single slide like this puts everything in perspective. In a single socket design, servers will be able to integrate 32 cores with 64 threads, 8x DDR4 memory channels with up to 2TB of memory capacity per CPU, 128 PCI Express 3.0 lanes for connectivity, and more.
Also worth noting on this slide, and originally announced at the financial analyst day as well, is AMD’s intent to maintain socket compatibility for the next two generations. Both Rome and Milan, based on 7nm technology, will be drop-in upgrades for customers buying into EPYC platforms today. That kind of commitment from AMD is crucial to regain the trust of a market that needs those reassurances.
Here is the lineup as AMD is providing it to us today. The model numbers in the 7000 series use the second and third characters as a performance indicator (a 755x will be faster than a 750x, for example) and the fourth character to indicate the generation of EPYC (here, the 1 indicates first gen). AMD has created four different core count divisions along with a few TDP options to provide choices for all types of potential customers. Though this table might seem a bit intimidating, it is drastically simpler than the Intel Xeon product line that exists today, or that will exist in the future. AMD is offering immediate availability of the top five CPUs in this stack, with the bottom four due before the end of July.
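To make the naming scheme above concrete, here is a short Python sketch that decodes a 7000-series model number. The helper name is my own for illustration, not anything AMD publishes:

```python
def decode_epyc_model(model: str) -> dict:
    """Decode a first-gen EPYC 7000-series model number such as "7601".

    Digit layout (per AMD's scheme): [series][perf][perf][generation],
    where a higher middle pair means a faster part within the series.
    """
    assert len(model) == 4 and model.isdigit(), "expected a 4-digit model number"
    return {
        "series": int(model[0]),         # 7 = EPYC 7000 series
        "performance": int(model[1:3]),  # higher = faster (60 in 7601 beats 55 in 7551)
        "generation": int(model[3]),     # 1 = first-generation EPYC
    }

print(decode_epyc_model("7601"))  # {'series': 7, 'performance': 60, 'generation': 1}
```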
EPYC will take the form of 32-core, 24-core, 16-core, and even 8-core options, with base clock speeds ranging from 2.1 GHz to 2.4 GHz and Turbo clock rates of up to 3.2 GHz. And that peak Turbo clock is on the 32-core part (!!), though the “all core loaded” turbo will be around 2.7 GHz. TDPs will be either 180 watts or fall into the 155/170 watt segment, depending on the speed settings of DDR4 memory. The baseline 8-core part, mainly targeted at systems that need storage and IO connectivity, will run at 120 watts. (If you are wondering why that is higher than consumer parts that run at much higher frequencies, remember that the added PCIe and DDR4 channels increase the power draw considerably.) Also, because the EPYC 7000 series of CPUs includes south bridge functionality (USB 3.0, SATA, and more), system builders’ TDP calculations do not need to add in the power of a chipset as they do on Intel’s Xeon line.
The entire EPYC 7000 series product line will offer 8 channels of DDR4 memory, with speeds hitting 2666 MHz. That is double the four channels of the current Xeon family and 33% more than the six channels expected on the Xeon Scalable products coming later this year, thus providing substantially higher memory bandwidth and up to 4TB of memory capacity in a 2-socket system. Every EPYC 7000 CPU will feature 128 lanes of PCI Express, allowing data center and workstation builders to utilize substantial resources outside of the processor including graphics cards, networking, storage controllers, and more.
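Back-of-the-envelope arithmetic shows why the channel count matters; a minimal sketch using the channel and speed figures quoted above (the bandwidth figure is theoretical peak, not a measured result):

```python
CHANNELS = 8           # DDR4 memory channels per EPYC socket
MT_PER_SEC = 2666e6    # DDR4-2666: 2666 mega-transfers per second
BYTES_PER_XFER = 8     # each 64-bit channel moves 8 bytes per transfer

peak_gb_s = CHANNELS * MT_PER_SEC * BYTES_PER_XFER / 1e9
print(f"Peak memory bandwidth per socket: {peak_gb_s:.1f} GB/s")
# Prints ~170.6 GB/s; a quad-channel part at the same DDR4 speed peaks at half that.
```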
AMD claims to have significant performance advantages in a 2P (two socket) configuration compared to the comparably priced Xeon product line. At the flagship level, comparing the >$4000 EPYC 7601 to the Intel Xeon E5-2699 v4, AMD expects to have 47% faster SPECint performance. Keep in mind that the E5-2699 v4 is a 22-core/44-thread processor, so the performance advantage AMD holds is largely a result of having 45% more cores at the same price point. Clock speeds of the Xeon part are comparable, with a 2.2 GHz base and 3.6 GHz Turbo clock.
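The relationship between the core-count gap and the claimed SPECint gap is easy to sanity-check with the figures quoted above:

```python
epyc_cores, xeon_cores = 32, 22  # EPYC 7601 vs. Xeon E5-2699 v4

core_advantage = epyc_cores / xeon_cores - 1
print(f"Core count advantage: {core_advantage:.0%}")  # prints "Core count advantage: 45%"
# AMD's claimed 47% SPECint advantage therefore tracks the extra cores
# almost one-for-one, rather than indicating much higher per-core throughput.
```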
As you move down the price stack on the slide above, the advantages to AMD increase, going as high as 70% for the ~$800 parts, again using SPECint as the basis for this comparison. Though I assume we are all intelligent enough to understand that one benchmark does not make the case for an entire platform, AMD seems eager to get more benchmarks and workload comparisons in front of the media to prove its case for the value EPYC will provide. And even though we know we will see higher core count processors in the upcoming Skylake refresh of the Xeon product family, all indications are that pricing from Intel remains stable (or is going up), so AMD should maintain its performance advantages using price as the target metric.
Though the single socket platform for servers is a significantly smaller market, AMD has a few interesting SKUs built specifically for it.
These solutions are useful for storage systems or GPU-centric builds that don’t need a significant amount of CPU compute power. Customers will still get access to 8 channels of DDR4 memory and 128 lanes of PCI Express, something that comparable Intel single-socket solutions will not be able to match.
AMD is running the comparison of a single EPYC processor against a pair of Intel Xeon processors, with the obvious performance advantage going to AMD in SPECint once again.
The software ecosystem is critically important to the early success of AMD EPYC, and though there is clearly additional work to do and more partnerships to come, AMD feels confident that the collection of software partners it has on board for the launch window is solid.
All the key operating system and hypervisor companies appear to be working with AMD to prepare the software infrastructure: Microsoft, Red Hat, Ubuntu, VMware, Citrix, and more. Development tools like Visual Studio and GCC have libraries and compilers in place to help give AMD the steady footing it needs to push upwards with EPYC.
In a similar way, AMD has been working with the hardware ecosystem to prepare for today’s release, bringing many of the industry’s top-level competitors into the fold.
HPE, Dell EMC, Tyan, Supermicro, and even Lenovo are building platforms for EPYC.