You might have caught our reference to this on the podcast: XPoint is amazingly fast, but the marketing clams were an order of magnitude or two off of the real performance levels. Al took some very nice pictures at FMS and covered what Micron had to say about their new QuantX drives. The Register also dropped by and offers a tidbit on pricing: roughly four to five times as much as current flash, or about half the cost of an equivalent amount of RAM. They also compare the stated endurance of 25 complete drive writes per day to existing flash, which offers between 10 and 17 depending on the technology used.
The question they ask at the end is one many data centre managers will also be asking: is the actual speed boost worth the cost of upgrading, or will other, less expensive alternatives be more economical?
"XPoint will substantially undershoot the 1,000-times-faster and 1,000-times-longer-lived-than-flash claims made by Intel when it was first announced – with just a 10-times speed boost and 2.5-times longer endurance in reality."
Here is some more Tech News from around the web:
- Thieves can wirelessly unlock up to 100 million Volkswagens, each at the press of a button @ The Register
- McAfee outs malware dev firm with scores of Download.com installs @ The Register
- Creator of Chatbot that Beat 160K Parking Fines Now Tackling Homelessness @ Slashdot
- New Air-Gap Jumper Covertly Transmits Data in Hard-Drive Sounds @ Slashdot
- Galaxy Note 7 to get Android 7.0 Nougat in 'two to three months' @ The Inquirer
My Comment at http://www.theregister.co.uk yesterday:
Let’s start with a very simple and basic block diagram:
CPU ---> chipset ---> storage subsystem (i.e. 3D XPoint).
Try to visualize the CPU as a radio frequency transmitter:
4 cores x 64-bits per register @ 4 GHz is a lot of binary data
On the right is 3D XPoint.
As their measurements show,
Micron achieved “900” w/ PCIe 3.0 x4 lanes; and,
Micron achieved “1800” w/ PCIe 3.0 x8 lanes.
Read: almost perfect scaling.
And, the flat lines speak volumes:
in both cases, the storage subsystem
saturated the PCIe 3.0 bus.
Now, extrapolate to PCIe 3.0 x16 lanes:
wanna bet “3600”? My money says, “YES!”
Now, extrapolate to PCIe 4.0 x16 lanes:
my money says ~ “7200” — flat line
(maybe not perfect scaling,
but you get the idea 🙂)
Conclusion: 3D XPoint is FAAAST, and
Micron’s measurements show that
the chipset is now the bottleneck —
all cynicism aside.
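For anyone who wants to check that extrapolation, here is a quick sanity-check script (my own back-of-the-envelope figures, since Micron did not state units for the "900" and "1800" numbers); it only computes the raw PCIe link ceilings from the published transfer rates and 128b/130b encoding:

```python
# Quick sanity check of the scaling argument: theoretical one-direction
# PCIe link bandwidth from the published transfer rates and 128b/130b encoding.

def pcie_gb_per_s(gen, lanes):
    """Raw link ceiling in GB/s for a PCIe gen 3/4 link with `lanes` lanes."""
    gt_per_s = {3: 8.0, 4: 16.0}[gen]         # transfer rate per lane (GT/s)
    efficiency = 128 / 130                    # 128b/130b encoding overhead
    return lanes * gt_per_s * efficiency / 8  # bits -> bytes

for gen, lanes in [(3, 4), (3, 8), (3, 16), (4, 16)]:
    print(f"PCIe {gen}.0 x{lanes:<2}: {pcie_gb_per_s(gen, lanes):5.2f} GB/s")

# PCIe 3.0 x4 :  3.94 GB/s
# PCIe 3.0 x8 :  7.88 GB/s   <- doubling the lanes doubles the ceiling
# PCIe 3.0 x16: 15.75 GB/s
# PCIe 4.0 x16: 31.51 GB/s   <- PCIe 4.0 doubles the transfer rate again
```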
It’s the XPoint durability figures that are of more concern for any XPoint/DRAM DIMM, XPoint/DRAM HBM, or other NVM/DRAM hybrid usage where the integrated NVM cannot be replaced if it goes bad!
They seem to be going ahead with hybrid DIMMs based on flash. I could see these wearing out quickly, especially if they are used improperly. I think it would probably be better to make separate modules rather than combine both on a single device. I doubt that we will see any combined HBM and non-volatile devices. They occupy different areas of the memory hierarchy.
Yes for DIMMs, as they can be replaced, and DIMMs have more space for larger, more over-provisioned NVM dies, but too bad for XPoint dies on HBM2 stacks alongside the HBM2 DRAM dies. Maybe they can get XPoint’s durability figures higher and get at least 7+ years out of the XPoint. I’d also like to see more results from working XPoint SKUs and not just engineering samples, so maybe there will be better XPoint durability metrics once the products are shipping.
mmm… “marketing clams”
Marketing, the “profession” that traces its roots to the snake oil salesman and the very first fib ever told!
and vulture capitalists!
Damn marketing clams! They really know how to market.
Worth fixing for you 😉
I think the 1000x-faster claim was about write latency, which I believe really is in the realm of 1000x faster at the media level. Of course, there is a lot more to the performance of an entire product than that one statistic.
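As a rough illustration of how that can be true at the media level and still only show up as ~10x at the drive level (these latency numbers are just assumed ballpark figures, not measurements):

```python
# Illustration only: assumed ballpark latencies, not measured values, to show
# how a ~1000x media-level write advantage can shrink to ~10x at the drive
# level once controller and NVMe/PCIe protocol overhead are added.

nand_program_us   = 200.0   # assumed NAND page-program latency (microseconds)
xpoint_media_us   = 0.2     # assumed XPoint media write latency (microseconds)
stack_overhead_us = 20.0    # assumed controller + protocol overhead (microseconds)

media_speedup = nand_program_us / xpoint_media_us
drive_speedup = (nand_program_us + stack_overhead_us) / (xpoint_media_us + stack_overhead_us)

print(f"media-level speedup: {media_speedup:.0f}x")   # ~1000x
print(f"drive-level speedup: {drive_speedup:.1f}x")   # ~10.9x
```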
Speaking of marketing …
…I’m going to stick my neck WAAAAY OOUUUT here
and make the following suggestions to Intel,
knowing that they probably won’t be reading this.
If I were Intel, with all of their mighty and
sophisticated manufacturing capabilities,
I would:
(A) ramp up production of modular Optane chips
which can be easily installed in 2.5″ and 3.5″ form factors
e.g. U.2 connections, as well as other form factors,
possibly SATA and SAS connections as well;
(B) “secretly” implement 2 key options enabled
via Option ROMs, jumpers, or other methods:
(i) pre-set transmission clocks e.g. 6G, 8G, 12G and 16G:
you KNOW that prosumers will want to try 16G ASAP!
(ii) 128b/130b jumbo frames already recognized
by the PCIe 3.0 standard (see the sketch below);
(C) price the 2.5″ version very aggressively,
in order to enlarge the installed base rapidly;
this approach recovers R&D with a large sales volume
and relatively small profit margin;
(D) THEN, not so “secretly”, exploit User Forums
and Tech Support groups to LEAK the methods
for enabling the faster clock speeds -and-
the jumbo frames — all of this UNofficially
(of course);
(E) step (D) above should excite the overclockers
around the world;
(F) even if this approach is a “loss leader”
financially speaking, the volume of user experiences
and sheer amount of prosumer experimentation
will give Intel great “word-of-mouth” publicity;
(G) and, the feedback from those prosumers
will help guide Intel’s future decisions
concerning future Optane and chipset developments;
(H) OEM a modern compatible NVMe RAID controller that
also supports jumbo frames and pre-set clock speeds
e.g. 6G, 8G, 12G and 16G; and, do the same with
both SATA and SAS RAID controllers, e.g. like
Highpoint’s latest;
(I) whenever users complain about the narrow lanes
of the DMI 3.0 link, refer them to (H) above;
(J) encourage prosumer experimentation with
NVMe RAID controller installs in the first x16 slot
on all modern motherboards e.g. work with mobo
manufacturers to enhance UEFI/BIOS subsystems
to make this happen smoothly;
(K) advocate JEDEC-style “settings” for
all future 2.5″ and 3.5″ NVMe SSDs;
(L) ensure that engineers continue to honor
the principles of Plug-and-Play as much as possible.
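To put some rough numbers behind (B)(i) and (B)(ii), here is a small illustration (entirely my own, nothing Intel has announced) of effective per-lane throughput at those clocks under 8b/10b versus 128b/130b encoding:

```python
# My own illustration: effective per-lane throughput = line rate x encoding
# efficiency, for the clocks named in (B)(i) under the two encodings from (B)(ii).

ENCODINGS = {"8b/10b": 8 / 10, "128b/130b": 128 / 130}
CLOCKS_G  = [6.0, 8.0, 12.0, 16.0]          # the 6G/8G/12G/16G pre-set clocks

for name, efficiency in ENCODINGS.items():
    for clock in CLOCKS_G:
        mb_per_s = clock * efficiency * 1000 / 8   # Gb/s -> MB/s per lane
        print(f"{clock:4.0f}G @ {name:9}: {mb_per_s:7.1f} MB/s per lane")

# e.g. 6G @ 8b/10b    ~  600 MB/s per lane (SATA III territory), while
#      16G @ 128b/130b ~ 1969 MB/s per lane (PCIe 4.0-class signalling).
```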
My 2 cents 🙂
MRFS
I usually don’t jump into conspiracies, but:
Are we sure the 25DWPD is the actual, true DWPD for the technology, even at gen 1? Or is the real DWPD actually much higher than this, but Micron (and by extension, Intel) implemented a predetermined number so the controller artificially limits the endurance of the drives?
Even if Optane and QuantX cost 5 times as much as flash, they would kill flash-based SSDs in the enterprise, especially in datacenters (even Intel’s and Micron’s own SSD market), if they had “only” 100 times the endurance of flash (10x less than the original claim). Enterprises care about TCO, not list price, and if Optane and QuantX had 100 times the endurance of a P3700 at 5 times the cost, they would have a much lower TCO than the P3700. The obvious consequence would be that no enterprise would ever buy their flash-based drives again, so could Micron and Intel be artificially limiting QuantX and Optane?
OR maybe the 25DWPD is just limited to the short initial run of gen1 devices when the technology hits the market, and then they quickly launch gen2 with 100DWPD. Oooor, a third option: maybe the 25DWPD is just for a P3500-equivalent device, with 50DWPD for the P3600 equivalent and 100DWPD for the P3700-equivalent device?
I think the actual endurance of Optane and QuantX is MUCH higher than 25DWPD, maybe not 1000DWPD but at least over 100DWPD.
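A rough TCO-per-write illustration of that point (all of the prices and specs below are assumed for the sake of argument, not real P3700 or QuantX numbers):

```python
# Assumed numbers for illustration only (not real P3700 or QuantX pricing/specs):
# dollars per TB of rated write endurance, flash vs the hypothetical "100x" case.

capacity_tb    = 1.6        # assumed drive capacity
warranty_years = 5          # assumed warranty period

flash_price  = 3000.0       # assumed flash NVMe drive price (USD)
flash_dwpd   = 17           # upper end of the flash DWPD range in the article

xpoint_price = flash_price * 5      # "five times as much as current flash"
xpoint_dwpd  = flash_dwpd * 100     # the hypothetical 100x-endurance case

def usd_per_rated_tb(price, dwpd):
    lifetime_tb = dwpd * capacity_tb * 365 * warranty_years
    return price / lifetime_tb

flash_cost  = usd_per_rated_tb(flash_price, flash_dwpd)
xpoint_cost = usd_per_rated_tb(xpoint_price, xpoint_dwpd)
print(f"flash : ${flash_cost:.4f} per TB of rated writes")
print(f"xpoint: ${xpoint_cost:.4f} per TB of rated writes")
print(f"ratio : {flash_cost / xpoint_cost:.0f}x lower cost per write")
# 5x the price with 100x the endurance works out to 20x lower cost per TB written.
```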
It seems like planned obsolescence is becoming a bigger problem now that the rate of progress has begun to slow down. It used to be that you’d gladly upgrade most of your components after 1-2 years for performance reasons, but now it feels more like 2-4. I’m sure this keeps manufacturers up at night.
http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead
Notice in the famous techreport ssd endurance experiment that the Intel drives were the first to go, and that was on their own terms. I’m surprised I didn’t hear much of an outcry after this came to light. I guess they didn’t realize that in order to actually implement planned obsolescence, you needed to get buy-in from your competitors.
> I think the actual endurance of Optane and QuantX is MUCH higher than 25DWPD
Agreed!
Let’s work backwards from a PCIe 3.0 chipset:
let’s assume an x8 lane add-in card (NOT a DIMM)
and let’s assume a 1TB storage capacity
for this calculation …
Then:
x8 lanes @ 8G / 8.125 bits per byte (128b/130b overhead) = 7.88 GB/second
1,024 GB capacity / 7.88 GBps = 130 seconds (~2 minutes)
25 DWPD x 2 minutes per DWPD = 50 minutes (???)
We don’t have any solid numbers from which
to calculate controller overhead, so the 50 minutes above
assumes zero controller overhead.
(The “flat lines” do speak volumes, but
we still don’t have empirical numbers
for controller overhead, and the controller
they used may be an FPGA.)
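For what it’s worth, here is the same arithmetic as a small script, assuming a hypothetical 1TB x8 add-in card and zero controller overhead:

```python
# The same back-of-the-envelope math as above: a hypothetical 1 TB PCIe 3.0 x8
# add-in card, zero controller overhead assumed.

lanes         = 8
gt_per_s      = 8.0                    # PCIe 3.0 transfer rate per lane
bits_per_byte = 8 * 130 / 128          # = 8.125, the 128b/130b line overhead
capacity_gb   = 1024
dwpd          = 25

link_gb_per_s     = lanes * gt_per_s / bits_per_byte     # ~7.88 GB/s
secs_per_write    = capacity_gb / link_gb_per_s          # ~130 s per full drive write
minutes_for_quota = dwpd * secs_per_write / 60           # ~54 minutes for 25 DWPD

print(f"link bandwidth  : {link_gb_per_s:.2f} GB/s")
print(f"one drive write : {secs_per_write:.0f} seconds")
print(f"25 DWPD quota   : {minutes_for_quota:.0f} minutes at full speed")
# i.e. at bus speed the drive would burn through its entire daily write rating
# in under an hour.
```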
I suspect, at this point in time, that
a DWPD claim for Optane is almost
entirely meaningless, almost as meaningless
as claiming some “DWPD” for DDR4 DRAM.