Intel’s Data-Centric Innovation Day 2019 Overview: 56-Core CPUs, 100GbE, and Optane DIMMs, Oh My

Intel’s biggest launch ever covers everything from the cloud to the edge

Intel today made a number of coordinated product and strategy announcements that continue the company’s ongoing “data-centric transformation.” Building on recent events such as last August’s Data-Centric Innovation Summit, but with roots stretching back years, today’s announcements further solidify Intel’s new strategy: a shift away from the “PC-centric” model that drove hundreds of billions of dollars in revenue for decades but is now in decline, and toward the rapidly growing, ever-changing “data-centric” world of cloud computing, machine learning, artificial intelligence, automated vehicles, Internet-connected devices, and the seemingly unending flood of data that all of these areas generate.

Rather than abandon its PC roots in this transition, Intel’s plan is to leverage its existing technologies and market share advantages in order to attack the data-centric needs of its customers from all angles. Intel sees a huge market opportunity when considering the range of requirements “from edge to cloud and back:” that is, addressing the needs of everything from IoT devices, to wireless and cellular networking, to networked storage, to powerful data center and cloud servers, and all of the processing, analysis, and security that goes with it.

Intel’s goal, at least as I interpret it, is to be a ‘one stop shop’ for businesses and organizations of all sizes that are transitioning alongside Intel to data-centric business models and workloads. Sure, Intel will be happy to continue selling you Xeon-based servers and workstations, but it can also address your networking needs with new 100Gbps Ethernet solutions, speed up your storage-limited workloads with Optane SSDs, increase performance and reduce costs for memory-dependent workloads by supplementing DRAM with Optane, and address specialized workloads with highly optimized Xeon SKUs and FPGAs. In short, Intel no longer wants to be just the company that makes your processor or server; it wants to be the platform that handles your needs end to end. Or, as the company’s recent slogan states: “move faster, store more, process everything.”

Product Details

Intel’s announcements cover both hardware and software/firmware. We’ll touch on the latter but focus primarily on the former before turning our attention to why Intel is making these moves. There is a lot going on with this product launch, so we’ll try to hit the key points and provide supplemental coverage as additional information becomes available.

That said, let’s jump into some of the product announcements.

2nd-Generation Xeon Scalable Processors

Intel did all the hard work back in 2017 with the rebranding and repositioning of its Xeon processors under the “Scalable” brand, leaving little opportunity for any transformative changes in this second generation. The new parts are based on Cascade Lake, which remains a 14nm microarchitecture. Cascade Lake includes hardware mitigations for Spectre and Meltdown, along with some new features that we will address shortly, but in terms of raw compute it’s not a significant leap over Skylake. Indeed, while briefing the press on the new lineup, Intel focused on the performance benefits for enterprises on a four-to-seven-year upgrade cycle.

However, in keeping with Intel’s coordinated data-centric strategy, there are some changes that will be key for certain workloads. First, at the top end, there’s a new Platinum 9200-series lineup offering up to 56 cores/112 threads at base and turbo clocks of 2.6GHz and 3.8GHz, respectively. Oh, it also has a TDP of 400W. Pricing, surprisingly, was not disclosed.

The announcement of the 56-core SKU was a bit of a surprise considering Intel stated last fall that Cascade Lake would top out at 48 cores. But, regardless, you won’t be buying a Xeon Platinum 9282 directly, at least not officially. Intel has worked with its industry partners to design a new server chassis specifically for the highest end Cascade Lake Xeons, complete with custom compute modules featuring optional liquid cooling.

Customers with the need and budget for these sure-to-be-shockingly-expensive configurations will buy complete servers from the usual enterprise vendors.

Looking at other changes to the 2nd-Generation Xeon Scalable lineup, Intel’s Speed Select Technology, available on select SKUs, lets users optimize their processors for specific workloads (e.g., fewer cores at higher frequencies, or more cores at lower frequencies), ensuring that certain workloads or power requirements are prioritized as intended. Meanwhile, DL Boost improvements promise big gains for certain AI workloads (up to 30X, Intel claims), although even then not to the point of being price competitive with GPU solutions. Intel’s counter to the price-performance argument for AI processing was that if you already have a Xeon Scalable processor, you don’t necessarily need to go out and purchase NVIDIA Teslas if you only have small or infrequent AI-based workloads.

This new second-generation of Xeon Scalable Processors also offers support for Optane DC Persistent Memory, which we’ll discuss further below.

With two generations of Xeon Scalable processors now on the market and numerous subdivisions within them, it’s getting a bit complicated keeping track of the naming conventions. Fortunately, Intel offers this handy guide for identifying parts based on generation, performance tier, and capabilities.

Check out the full list of new Xeon Scalable Processors:

Finally, as mentioned earlier, Intel has also introduced a number of Xeon SKUs that are “optimized” for specific industries or workloads based on factors such as core count versus frequency, amount of cache or memory support, and operating conditions. Examples include SKUs optimized for value-oriented VM density, networking, and single-core performance.

Xeon D-1600 Series

Intel’s D-series Xeons are intended for situations that require solid performance in a compact, energy-efficient configuration. Examples range from low-end servers to higher-end NAS devices. An upgrade over the D-1500 series, the new D-1600 lineup offers a 1.2 to 1.5X increase in base frequency, with configurations of 2 to 8 cores and TDPs of 27 to 65W.

The processors also support hardware virtualization, Intel QuickAssist, up to 128GB of DDR4 memory, and up to four 10Gbps Ethernet ports.

Optane DC Persistent Memory

Intel has been talking about Optane DC Persistent Memory for quite some time now. The company’s pitch is that while Optane/3D XPoint is a great option for “traditional” solid state storage, its true potential was always limited by the storage interface, even fast PCIe links. Optane DC Persistent Memory solves this limitation by switching to a DIMM form factor, which gives it a huge improvement in latency and “bridges the gap” between current PCIe-based flash storage and DRAM.

The point here is that DRAM is expensive, and capacities may be limited. So why not use Optane to supplement your servers’ DRAM? Intel acknowledges that Optane DC is certainly slower than DRAM, but for many workloads the amount of memory is more important to overall performance than the absolute speed (at least, up to a limit). Certain workloads may also have other bottlenecks such that any difference in speed between data stored in DRAM and Optane DC won’t be noticed (or the cost savings may be deemed to be worth the performance hit).

One interesting facet of this is that, unlike DRAM, Optane is not volatile. To prevent compatibility issues, Intel gives users the choice of running their Optane DC Persistent Memory in either Memory or App Direct modes.

In Memory Mode, the system effectively emulates traditional DRAM: it caches as much as it can in actual DRAM and clears the Optane DC modules on every power cycle to mimic volatility. The benefit is that no software changes are required to use this mode; from the software’s perspective, it’s all just DRAM. The downside is that latency varies: on a DRAM cache miss, performance is limited entirely by Optane DC.

In App Direct Mode, your software must be updated to be “Persistent Memory Aware.” That allows the system to effectively put Optane DC in its place by creating a new storage tier between DRAM and your traditional storage. Your applications know that this tier is non-volatile and slower than DRAM, so they can keep data in the optimal location between DRAM and Optane without wasting performance on management tasks like paging.
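To make the App Direct idea concrete, here’s a minimal sketch of persistence-aware storage. Real App Direct software uses Intel’s PMDK libraries (e.g., libpmem) against a DAX-mounted filesystem; in this illustration an ordinary memory-mapped file stands in for the persistent-memory tier, and all file names are hypothetical.

```python
import mmap
import os
import struct

# Hypothetical stand-in for an Optane DC App Direct region.
PMEM_PATH = "pmem_tier.bin"
SIZE = 4096

def open_pmem(path, size):
    # Create/extend the backing file, then map it into the address space.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)
    buf = mmap.mmap(fd, size)
    os.close(fd)
    return buf

buf = open_pmem(PMEM_PATH, SIZE)

# Writes are plain loads/stores into the mapping, not read()/write() syscalls.
buf[0:8] = struct.pack("<Q", 42)

# On real persistent memory you would flush CPU caches (pmem_persist in
# libpmem); with a file-backed mapping, msync via flush() plays that role.
buf.flush()

# Unlike data held in DRAM, the value survives re-opening the mapping.
buf2 = open_pmem(PMEM_PATH, SIZE)
value = struct.unpack("<Q", buf2[0:8])[0]
print(value)  # 42
os.remove(PMEM_PATH)
```

The key point the sketch captures is that the application addresses the persistent tier with ordinary memory operations and decides for itself when data must be made durable, rather than paying for a block-storage I/O path.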

Again, this won’t be suitable for every workload. But if your workloads can accommodate Optane DC Persistent Memory, the cost savings can easily rise into the tens of thousands of dollars per system. You spend less on expensive DRAM, add in some slightly less expensive Optane DC modules, and you either have the same amount of effective memory for less, or more effective memory for the same cost.
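A quick back-of-envelope calculation shows how the math works out. All prices below are hypothetical illustrations, not Intel’s actual list prices, and real savings depend heavily on capacities and street pricing.

```python
# Hypothetical $/GB figures chosen only to illustrate the comparison.
DRAM_PER_GB = 10.0     # assumed price for server DDR4
OPTANE_PER_GB = 5.0    # assumed price for Optane DC Persistent Memory

def config_cost(dram_gb, optane_gb):
    """Total memory cost for a mixed DRAM + Optane configuration."""
    return dram_gb * DRAM_PER_GB + optane_gb * OPTANE_PER_GB

# Pure-DRAM configuration: 1.5TB of memory.
all_dram = config_cost(1536, 0)

# Mixed configuration with the same effective capacity:
# 384GB DRAM acting as cache plus 1,152GB of Optane (Memory Mode style).
mixed = config_cost(384, 1152)

print(all_dram - mixed)  # savings per system at these assumed prices
```

At these assumed prices the mixed configuration saves several thousand dollars per system at the same effective capacity, which is the trade-off Intel is pitching.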

Optane SSDs

In addition to Optane DIMMs, Intel detailed two new Optane SSD options. The first is a dual port version of the enterprise-grade Optane DC SSD, allowing for both improved performance and resiliency for critical applications.

More unusual was the Intel SSD D5-P4326, a long, ruler-shaped SSD using the E1.L form factor. Built with QLC flash, it’s a cost-optimized option, and its slim form factor means you can squeeze lots of them (up to 1PB) into a 1U chassis.

The reason for Intel’s embrace of QLC flash for the enterprise mirrors that of companies producing QLC drives for the consumer and small business market: mass market adoption. QLC-based SSDs lag their MLC and TLC competitors in terms of both burst and sustained performance, but in most cases they still easily beat traditional spinning media. Making flash cheaper and denser allows enterprises to move more and more data to “warm” storage: not as fast as your high-end Optane or 96-layer TLC drive, but much more accessible than mechanical drives when you need it.

Intel Ethernet 800 Series

With all the processing and local data management, Intel didn’t forget about actually moving data between locations. The company announced its latest Ethernet chipset, the Ethernet 800 series. Codenamed “Columbiaville,” it supports speeds of up to 100Gbps.

Beyond the base bandwidth increase, Intel is introducing Application Device Queues (ADQ), a feature that can queue and steer application-specific packets for better performance consistency, reduced latency, and in some cases greater throughput.

ADQ goes beyond familiar network QoS: it can link supported applications directly to dedicated device queues, reducing latency while maintaining throughput.
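The core idea is easy to picture in software, even though the real steering happens in NIC hardware. The sketch below is purely conceptual: the application names, packet format, and matching logic are all hypothetical, standing in for the header-based filters a real ADQ-capable NIC would apply.

```python
from collections import deque

# Conceptual illustration of ADQ-style steering: traffic for a registered
# application lands on that application's dedicated queue instead of a
# single shared queue. All names here are hypothetical.
queues = {"redis": deque(), "nginx": deque(), "default": deque()}

def steer(packet):
    # A real NIC matches on packet headers in hardware; here we just
    # read an application tag from a dict.
    app = packet.get("app")
    target = app if app in queues else "default"
    queues[target].append(packet)

for pkt in [{"app": "redis", "data": b"GET key"},
            {"app": "nginx", "data": b"HTTP/1.1"},
            {"app": "ftp",   "data": b"LIST"}]:
    steer(pkt)

print(len(queues["redis"]), len(queues["nginx"]), len(queues["default"]))
```

Because each registered application drains its own queue, latency-sensitive traffic isn’t stuck waiting behind unrelated packets, which is where the consistency and latency benefits come from.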

Agilex FPGAs

The final product to touch on is the Agilex FPGA, a chiplet-based, 10nm part utilizing 3D system-in-package (SiP) technology that can be highly customized for specific workloads or interoperability. Options include DDR5 and HBM memory, Optane DC Persistent Memory, multi-gig networking, PCIe 5.0, and cache coherency with Xeon Scalable processors.

The Agilex series will be available in three variants targeting different performance levels and capabilities.


We’re still waiting on complete pricing information as of the date of publication, but Intel is at least officially announcing availability for most of the new products.

Available Today:

  • 2nd-Generation Intel Xeon Scalable Processors (with the exception of the Platinum 9200 Series)
  • Xeon D-1600
  • Optane DC Persistent Memory
  • D5-P4326 SSD

Available Q3 2019:

  • Intel Ethernet 800 Series (mass production; samples shipping now)

Available H2 2019:

  • Agilex FPGAs
  • Xeon Platinum 9200 Series (some systems to ship H1 with ramp up for H2)

Announced, with availability not yet specified:

  • Optane DC SSD D4800X

What’s Driving Intel’s Transformation?

Now that we’ve sampled what Intel launched, the next step is to ask the same question we’ve asked at each juncture in Intel’s now multi-year transformation: why?

There are a multitude of factors at play when attempting to analyze the reasons for Intel’s shift. But three factors stand out: the declining PC market, the new and urgent needs of the “data-centric” technology sector, and, perhaps most importantly, the arguably surprising emergence of a legitimate competitor in the enterprise.

The first factor is clear and obvious. The stall and later decline of a market so long dominated by a single company is, of course, bad news for that company. So, regardless of any other justification, Intel’s push to expand its total addressable market is necessary to counter an existential threat.

The second factor is the one that Intel officially acknowledges and promotes: the data problem. From traditional data such as enterprise databases to the constant logging and transfer of information via telecommunication networks, cloud computing, artificial intelligence and analytics, and IoT devices, the global technology industry is creating a surge in data generation that Intel argues can’t be accommodated by current processing, storage, and networking infrastructures.

For example, a report late last year from Seagate and IDC claimed that about 33 zettabytes (33 billion terabytes) of data was created in 2018. That number is expected to grow to 175 zettabytes per year by 2025. The view of Intel and other companies looking at this issue is that existing solutions simply can’t move, store, and process this much data. Intel stated in its press release for this event that only about “two percent of the world’s data has been analyzed.”

The argument is that this growing gap between the amount of data generated and the amount we can process with existing hardware and software deprives companies and consumers of any benefits that data may provide. Therefore, by offering a coordinated suite of products and services supposedly designed for this very challenge, Intel can help its customers begin to regain control of the situation and start to make up ground.

The final and most interesting factor, particularly for many in our audience, is AMD and the remarkable progress the company has made over the past two years thanks to the Zen architecture. With Ryzen on the desktop and EPYC in the data center, for the first time in years AMD is putting up a legitimate challenge to Intel at almost all performance tiers, especially when considering the price–performance ratio.

The somewhat good news for Intel is that AMD’s market share gains have failed to keep pace with its performance gains…at least thus far. AMD saw an increase in market share last year for all traditional categories (server, desktop, laptop), but while the company’s server share more than doubled, it’s still sitting in the low single digits.

Now AMD is readying the next generation of EPYC parts, codenamed “Rome.” AMD’s teaser late last year and a string of leaks since then all point to the likelihood that, in certain workloads, Rome will outperform its Xeon Scalable counterparts at notably lower cost and include additional features such as PCIe 4.0. These relative surges in performance for AMD are hitting just as Intel is hoping to overcome the years of process technology challenges that stalled the company’s CPU development at 14nm.

Intel won’t lose every performance tier to AMD, of course. Certain workloads still perform better with Intel’s architecture and, in an interesting role reversal from the Opteron days, EPYC servers are limited to two sockets while Xeon Golds can go up to four sockets and Xeon Platinums up to eight. But Intel faces the very real threat of losing head-to-head comparisons in terms of pure performance, let alone price or availability. Why not then, simply, change the game?

Intel still commands an overwhelming share of the server market, in excess of 96 percent. This is an incredibly lucrative market, and Intel isn’t wrong about the challenges this market faces involving the data growth and management problem. The mission for Intel, as those initial Rome benchmarks and pricing figures start to show up, is to convince its current (and potential) customers that losing by a few percentage points over here is easily offset by, for example, the cost savings of Optane DC Persistent Memory over there. Intel’s acquisition and development of storage, networking, and, increasingly, graphics, gives the company a broad menu of solutions that it can present to customers tempted to jump to EPYC. At least, again, in theory.

As this article is posted, I’m in San Francisco attending Intel’s official launch. The company briefed us last month on the technical and strategic side of things, but one thing missing was the customer’s perspective.

Intel’s strategy relies significantly on how well it has read both the needs of its customers as well as the degree to which those customers value price and performance in each product category. I’ve already heard Intel’s side; my goal this week is to hear from customers — albeit customers carefully selected by Intel — about more real-world examples of what happens when Intel’s planning is put into practice.

The remainder of 2019 should be a bit wild in terms of server market share and perception, as AMD fires its next salvo while Intel continues to promote its new vision. The slow, calculated nature of enterprise-level hardware upgrades means that Intel’s market share dominance won’t end soon, but if both companies can secure sufficient vendor partners (AMD was especially hampered last year by limited availability of EPYC for months) and consistently meet demand, we could see a major escalation in this new server war.