At a huge press event like Flash Memory Summit, being in the right place at the right time (and with the right camera) matters greatly. I'll just let a picture say a thousand words for me here:
…now this picture has been corrected for extreme parallax and was taken in far-from-ideal conditions, but you get the point. Samsung's keynote is coming up later today, and I have a hunch this will be a big part of what they present. We did know 64-Layer was coming, as it was mentioned in Samsung's last earnings announcement, but confirmation is nice.
*edit* now that the press conference has taken place, here are a few relevant slides:
With 48-Layer V-NAND announced last year (and still rolling out), it's good to see Samsung pushing hard into higher-capacity dies. 64-Layer enables 512Gbit (64GB) per die, and a 100MB/s maximum throughput per die means even lower-capacity SSDs should offer impressive sequentials.
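Quick back-of-the-envelope math on what those per-die figures imply (the capacity and throughput are from the announcement; the ideal parallel scaling is my assumption, as real drives hit controller and interface limits first):

```python
# 64-Layer V-NAND figures, per the announcement:
DIE_CAPACITY_GB = 64         # 512 Gbit = 64 GB per die
DIE_THROUGHPUT_MBPS = 100    # quoted maximum throughput per die

for ssd_gb in (256, 512, 1024, 2048):
    dies = ssd_gb // DIE_CAPACITY_GB
    # Best case: every die streams in parallel (before controller/interface limits).
    print(f"{ssd_gb:>4} GB SSD -> {dies:>2} dies -> up to {dies * DIE_THROUGHPUT_MBPS} MB/s")
```

Even a 256GB model would only have four dies to interleave across, yet could still push 400 MB/s sequentials in the ideal case.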
Samsung 48-Layer V-NAND. Pic courtesy of TechInsights.
64-Layer is Samsung's 4th generation of V-NAND. We've seen 48-Layer and 32-Layer, but few know that 24-Layer was also a thing (it appeared mainly in limited enterprise parts).
We will know more shortly, but for now, dream of even higher capacity SSDs 🙂
*edit* and this just happened:
*additional edit* – here's a better picture taken after the keynote:
The 32TB model in their 2.5" form factor displaces last year's 16TB model. The drive itself is essentially identical, but the flash packages now contain 64-layer dies, doubling the available capacity of the device.
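The capacity math is straightforward (a sketch; the 16-die stack height and package count below are my assumptions to make the figures line up, not confirmed by Samsung):

```python
# Same 2.5" drive, double the capacity: only the per-die density changes.
DIE_GB_OLD, DIE_GB_NEW = 32, 64   # 256 Gbit (48-layer) vs 512 Gbit (64-layer) dies
DIES_PER_PACKAGE = 16             # assumed stack height, unchanged between generations
PACKAGES = 32                     # assumed package count on the 2.5" PCB

old_tb = PACKAGES * DIES_PER_PACKAGE * DIE_GB_OLD / 1024
new_tb = PACKAGES * DIES_PER_PACKAGE * DIE_GB_NEW / 1024
print(f"last year: {old_tb:.0f} TB -> this year: {new_tb:.0f} TB")  # 16 TB -> 32 TB
```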
So does this mean even more lower-capacity models will be phased out, to avoid the performance loss of high capacity spread across fewer dies? Or will they just use lower-capacity dies?
The lower capacity models would be phased out eventually anyway. Hopefully this will move larger capacity drives down in price. They may want to put multiple interfaces on each die if the write performance is insufficient. I thought there was already some design that implemented something similar to that, but I don’t remember what it was. It would be easy to make a single die look like multiple die to the controller. They also could solve this with TSV interconnect. The stack could easily present multiple interfaces to the controller.
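Roughly like this toy sketch of carving one physical die into multiple logical dies, each with its own command queue (all names here are made up for illustration, not any real controller API):

```python
from collections import deque

class LogicalDie:
    """One slice of a physical die, presented to the controller as its own die."""
    def __init__(self, base_block, num_blocks):
        self.base_block = base_block
        self.num_blocks = num_blocks
        self.queue = deque()       # independent command queue per logical interface

    def submit_write(self, block, data):
        assert 0 <= block < self.num_blocks
        self.queue.append((self.base_block + block, data))  # remap to physical block

def split_die(total_blocks, num_interfaces):
    # Carve one physical die's block range into independent logical dies.
    per = total_blocks // num_interfaces
    return [LogicalDie(i * per, per) for i in range(num_interfaces)]

dies = split_die(total_blocks=4096, num_interfaces=4)
dies[0].submit_write(10, b"...")   # controller now has 4 "dies" to keep busy in parallel
```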
Didn't Seagate just announce a 60TB SSD? Isn't 60 > 32?
yup, clearly Samsung didn’t get the memo.
Samsung's is in 2.5" form factor. Seagate is 3.5".
Each marketing sign clearly makes the claim that it is the highest capacity SSD in the world, in a lovely large bold font. Samsung looks like a fool for putting up that sign.
they did. read their paper precisely 🙂
Samsung’s – world’s largest 32TB SSD …
guessing no other Samsungs have a bigger one than 32TB …
lol
it doesn't matter much, because it's the best in the business.
That's a tall order at the IHOP, but I hear that going beyond 64 layers is going to be accomplished by stacking 2 or more milt-layer chips, since the alignment issues with the layers make things more difficult as the number of layers gets higher.
Edit: milt-layer
To: multi-layer
LibreOffice still does not have the “Multi” prefix added to its English dictionary.
Stacking dies is already a thing – it's how the flash dies are packaged.
Are most of them using TSV now or are they still offset with edge connected wires?
Toshiba is TSV, but I believe Samsung is still edge.
There is probably an optimum number of layers. Going up too many layers will increase die cost due to added process steps and more defects. Similar situation with x/y die size; larger die increases the number of defective die. We have had stacking of flash die for a long time, but they were using offset die with edge connections. They are starting to use TSV, but the bandwidth per die isn’t really high enough to require it yet.
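A toy Poisson yield model shows why there would be a sweet spot (the per-layer defect rate and cost figures below are completely made up, just to illustrate the shape of the curve):

```python
import math

DEFECTS_PER_LAYER = 0.005   # assumed average defects added per layer, per die

def die_yield(layers):
    # Poisson zero-defect probability: more layers, more process steps, lower yield.
    return math.exp(-DEFECTS_PER_LAYER * layers)

def cost_per_good_bit(layers, base_cost=1.0, cost_per_layer=0.02):
    wafer_cost = base_cost + cost_per_layer * layers   # each layer adds process steps
    return wafer_cost / (layers * die_yield(layers))   # bits scale with layer count

for n in (32, 48, 64, 96, 128):
    print(f"{n:>3} layers: yield {die_yield(n):.1%}, relative cost/bit {cost_per_good_bit(n):.4f}")
```

With these made-up numbers, cost per good bit bottoms out somewhere around 64-96 layers and then rises again.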
The high density combined with the improved speed and write endurance is really great. They just need to work on the price!
I wonder how the cost of building new fabs compares to the cost of all the extra processing steps for multi-layer flash. If they can keep existing fabs running for many more years, is that enough to keep driving down the price?
They went back to an older, larger process with the initial V-NAND. Not sure what they are using now.
Yes, larger in the X and Y but more layers in the Z is the way to go, with plenty of atoms to retain all those states over longer periods of time. They should try to get beyond 64 layers, or stack more multi-layer dies using TSVs to save as much space as possible.
Now get some hybrid 2TB drives with at least 32/64 GB of XPoint cache and the right amount of DRAM to buffer the XPoint. I'd really like the controller to manage the XPoint and NAND like a tiered storage system: keep the most active data on the XPoint (the paging/swap space and essential OS/application files) and the rest on the 2TB of NAND. Let the much more durable, faster-than-NAND XPoint take the wear and tear by keeping the active blocks on the XPoint as much as possible.
If they could set up the XPoint cache as multi-way set associative with the associated blocks of NAND, keeping as many of the active blocks of storage staged on the XPoint as possible, it would be easy to offer a 5+ year warranty on any similar SSD.
There is going to be an optimum Z height. More layers mean a more expensive die and more chances to create defects, since each layer adds extra processing steps. Stacking multiple die with TSV is not free either; you still need area for the TSV through-channels. I don't know how economical going above 64 layers per die will be.
For using it as a cache, this is far enough out in the memory hierarchy that associativity is not really a concern. You would generally use large size blocks and allow fully associative placement. This is how virtual memory systems work. Block size is generally a 4K page, I believe, and a page can generally be placed anywhere in memory that isn’t reserved.
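Conceptually, something like this sketch: a small, fully associative XPoint tier holding 4K blocks, with LRU demotion to NAND (tier size and names are hypothetical; a real FTL would also handle wear leveling, power-loss protection, and so on):

```python
from collections import OrderedDict

class TieredStore:
    """Toy tiering: hot 4K blocks live on XPoint, cold ones demote to NAND."""
    def __init__(self, xpoint_blocks):
        self.xpoint = OrderedDict()    # block -> data, in LRU order (hot tier)
        self.nand = {}                 # block -> data (cold tier)
        self.capacity = xpoint_blocks

    def write(self, block, data):
        self.xpoint[block] = data
        self.xpoint.move_to_end(block)            # writes land on XPoint, absorbing wear
        while len(self.xpoint) > self.capacity:
            cold, cold_data = self.xpoint.popitem(last=False)
            self.nand[cold] = cold_data           # demote least-recently-used block

    def read(self, block):
        if block in self.xpoint:
            self.xpoint.move_to_end(block)        # keep active blocks hot
            return self.xpoint[block]
        data = self.nand.pop(block)               # promote on access
        self.write(block, data)
        return data
```

Note the placement is fully associative: any block can sit anywhere in the XPoint tier, which is why set associativity isn't a concern at this level of the hierarchy.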
Are these ‘uber drives’ (32/60TB) likely to have very high power consumption due to the total # of transistors involved?
Is there a point where the operating power of an SSD is substantially higher than an HDD's? (Do NAND cells need to be ‘kept warm’ for access times to be fast?)
The simple answer is: not really. Power consumption really comes from raw throughput (it takes x power to program a cell). That 60TB Seagate SSD was <20W because it can only program cells at ~1.5 GB/s, limited by the interface. Sure, there is per-die standby power, but it is minimal; that same 60TB SSD's standby power was ~4W.
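Putting rough numbers on it, using those quoted ballparks:

```python
# Active power scales with program throughput, not with total capacity.
active_w, standby_w = 20.0, 4.0   # quoted figures for the 60TB Seagate SSD
throughput_gbps = 1.5             # GB/s, interface-limited program rate

program_w = active_w - standby_w
print(f"~{program_w / throughput_gbps:.1f} J per GB programmed")  # ~10.7 J/GB
# Doubling capacity mostly adds standby dies; power tracks the interface speed.
```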
Thank you!
Most of the power is consumed by the controller and the DRAM cache. The flash die have a relatively small number of active circuits, so power consumption is quite low. This is what allows them to stack the flash with the controller on top. I think even Samsung's PCIe drives (a higher transfer rate takes more power) only consume about 5 to 7 watts max.
Meant as reply to power consumption question.