Intel and Micron are jointly announcing new 3D NAND technology that promises to radically increase solid-state storage capacity. The companies have indicated that moving to this technology will allow capacity to keep increasing at a pace consistent with Moore’s Law.
The way Intel and Micron are approaching 3D NAND is very different from the existing 3D technologies from Samsung and, now, Toshiba. Their implementation of floating-gate technology and “unique design choices” has produced startling per-die densities: 256 Gb with MLC, and a whopping 384 Gb with TLC. Basing this new 3D NAND on floating-gate technology means developing with a well-known entity, and benefits from the knowledge base Intel and Micron have built working with floating-gate planar NAND over their long partnership.
What does this mean for consumers? This new 3D NAND enables greater than 10TB of capacity on a standard 2.5” SSD, and over 3.5TB on M.2 form-factor drives. These capacities are possible with the industry’s highest-density 3D NAND, as the >3.5TB M.2 capacity can be achieved with just five packages, each a stack of 16 of the 384 Gb TLC dies.
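To sanity-check that math, here is a back-of-the-envelope calculation based only on the figures above (not on any Intel/Micron spec sheet):

```python
# Quick check of the quoted M.2 capacity: 5 packages x 16 dies x 384 Gb TLC.
GBIT_PER_DIE = 384      # announced TLC die density
DIES_PER_PACKAGE = 16   # stacked dies per package
PACKAGES = 5            # packages on the M.2 drive

total_gbit = GBIT_PER_DIE * DIES_PER_PACKAGE * PACKAGES
total_gb = total_gbit / 8
print(f"{total_gb:.0f} GB (~{total_gb / 1000:.2f} TB)")
# -> 3840 GB (~3.84 TB), consistent with the >3.5TB figure above
```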
A 3D NAND cross section from Allyn's Samsung 850 Pro review
While such high density might suggest a reliance on ever-shrinking process technology (and the loss of durability that comes with it), Intel is likely using a larger process for this NAND. Though the company would not comment on this, Intel could be using something roughly equivalent to 50nm flash for this new 3D NAND. In the past, die shrinks were used to increase capacity per die (and yields), such as IMFT's move to 20nm back in 2011, but with the ability to achieve greater capacity vertically using 3D cell technology, a smaller process is no longer necessary for greater density. Additionally, working with a larger process would allow for better endurance; 50nm MLC, for example, was rated on the order of 10,000 program/erase cycles. Samsung similarly moved to a larger process with its initial 3D NAND, stepping back from its existing 20nm technology to roughly 30nm for 3D production.
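To put that endurance figure in perspective, a rough sketch of lifetime write capacity follows. The 1,000-cycle figure for small-node TLC is an illustrative assumption, not a quoted spec, and real drives are also affected by write amplification and overprovisioning:

```python
# Rough lifetime-writes estimate: capacity x rated P/E cycles.
# Ignores write amplification and overprovisioning for simplicity.
def lifetime_writes_tb(capacity_tb: float, pe_cycles: int) -> float:
    return capacity_tb * pe_cycles

# 10TB drive on ~50nm-class MLC (the ~10,000-cycle figure above)
print(lifetime_writes_tb(10, 10_000))  # 100000.0 TB of total host writes
# Same drive on a hypothetical 1,000-cycle small-node TLC
print(lifetime_writes_tb(10, 1_000))   # 10000.0 TB of total host writes
```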
This announcement is also interesting considering Toshiba has just entered this space as well, having announced 48-layer, 128 Gb density 3D NAND. Like Samsung, Toshiba is moving away from floating gates to its own charge-trap implementation, which it calls BiCS (Bit Cost Scaling). With this Intel/Micron announcement, however, the emphasis is on a 3x increase in capacity using the venerable floating-gate technology from planar NAND, which gives Intel/Micron an attractive position in the market – depending on price/performance, of course. And while these very large capacity drives seem destined to be expensive at first, the cost structure is likely to be similar to that of current NAND. All of this remains to be seen, but this is indeed promising news for the future of flash storage, as it can now scale up to (and beyond) spinning media capacity – unless 3D tech is implemented in hard drive production, that is.
So when will Intel and Micron’s new technology enter the consumer market? It could be later this year, as Intel and Micron have already begun sampling the new NAND to manufacturers. Manufacturing has started in Singapore, and ground has been broken at the IMFT fab in Utah to support production here in the United States.
Was there any additional architecture information other than “floating-gate technology” and “unique design choices”?
No. They were not providing any specific details on architecture or lithography.
I’ll take MLC over TLC any day on any process node. Most of the issues with SSDs are with TLC and the difficulty of handling more storage states and data retention, as opposed to MLC and SLC. If this stacked memory can get 1TB SSDs down to a reasonable price point, even one close to what a hard drive costs per GB, then SSDs will become standard in laptops. I would like to see more hybrid drives come with equal amounts of SSD cache and HDD storage, maybe 500GB of SSD paired with a 500GB hard drive portion as a long-term storage pool for the SSD. Certainly with stacked memory, hybrid drives should at least come with 128 or 256 GB of SSD cache paired with a 750GB hard drive. I could even see motherboards coming with special high-speed PCI flash sockets, with enough stacked-memory SSD space to host the OS and paging files, with most of the user data stored on an optional SSD/hard drive.
“most of the issues with SSDs are with TLC”
Across a few-thousand-drive install base with a mix of Intel, SandForce, Indilinx, and Micron controllers, I have yet to see a failure in NAND, or even failures during or following gradual wearout (SMART logs of reassigned sectors). The failures I have seen have been catastrophic controller failures. Either the drive ends up totally dead and completely unresponsive, or one day it decides it has a total capacity of a handful of KB, or, in one case, it lost all branding and reported itself as a SANDFORCE drive with 0KB capacity and a serial number of 0 (that was an odd one).
It’d be nice to be able to send them back to the manufacturer for testing and definitive failure analysis, but drives that arrive on-site cannot leave except in shredded form, so all diagnosis is from my own testing rather than a proper board-level test rig; take these results as anecdotal rather than data.
Not really talking about modes of failure, as all SSDs and hard drives have failure modes (MTBF)! With TLC, the trouble is more about keeping/retaining more memory states than MLC or SLC, with TLC requiring more error correction by the SSD’s controller. This excessive error correction on TLC, as opposed to MLC and SLC, comes along with slower read and write speeds and long-term data retention issues in the TLC memory cells, which got worse as the planar cell size was reduced on newer planar process nodes. 3D stacking on a larger process node can utilize MLC and still provide more memory per unit area by stacking layers, so the density problem can be solved without depending as much on TLC’s ability to pack more data into a single cell, at the cost of performance, long-term memory degradation, and the excessive error correction it takes to read stale TLC data. Hopefully this stacking can continue and become less dependent on even MLC for SSD storage density gains, and SLC memory will become more common in consumer-grade SSDs. SLC is the fastest and most error-free, as there are only the binary states of on or off to worry about, so SLC will have the fastest read and write times and the longest data retention, along with much simpler error correction algorithms.
Hopefully the doubling and tripling of the number of stacked layers will lead to more affordable SLC options, with each individual layer made thicker in the z axis to allow for smaller dimensions in x and y while still retaining enough atoms to store the state long term and requiring the least amount of error correction.
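To make the state-count argument concrete, here is a minimal sketch using a normalized voltage window (real windows and margins vary by process and design):

```python
# Why more bits per cell means tighter margins: a cell storing n bits
# must distinguish 2**n charge states inside the same voltage window.
VOLTAGE_WINDOW = 1.0  # normalized; real windows vary by design

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    states = 2 ** bits
    margin = VOLTAGE_WINDOW / (states - 1)  # gap between adjacent levels
    print(f"{name}: {states} states, relative margin {margin:.2f}")
# SLC: 2 states, relative margin 1.00
# MLC: 4 states, relative margin 0.33
# TLC: 8 states, relative margin 0.14
```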
I’ve never encountered an actual NAND failure either (MLC or TLC); it’s always the controller. Unlike spinny disks, though, SSDs are much easier to recover data from. If it still powers up, just image the entire “dead” SSD to a new one – even if it reports that it has no data on it – and that has worked for me every time.
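A minimal sketch of that image-the-whole-drive approach, assuming Linux raw block devices; the device paths are hypothetical placeholders, and in practice a dedicated tool such as GNU ddrescue is the safer choice:

```python
# Raw-image a failing-but-responsive SSD onto a replacement drive.
# Run as root; triple-check SRC vs DST before trying this for real.
SRC = "/dev/sdX"          # hypothetical: the "dead" SSD
DST = "/dev/sdY"          # hypothetical: equal-or-larger replacement
CHUNK = 4 * 1024 * 1024   # 4 MiB per read

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        try:
            block = src.read(CHUNK)
        except OSError:
            # Skip an unreadable region rather than aborting the image;
            # this leaves a hole at the same offset on the destination.
            src.seek(CHUNK, 1)
            dst.seek(CHUNK, 1)
            continue
        if not block:
            break
        dst.write(block)
```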
Did you not read about the Samsung 840 EVO issues, the amount of error correction on stale/old data, and how the performance of the drive was degraded? Your anecdotal evidence and continued talk of failure is not what is being discussed! Again with the failure mode analysis – did you even take the time to read the two posts you have replied to?
“It’s always the controller” – no, it’s not the controller’s fault that TLC cells on too small a planar process node do not have sufficient state-retaining capacity to store data long term (un-refreshed), which forces the SSD’s controller to perform excessive error correction, to the point that it cannot deliver the data at normal SSD speeds! So it’s not about total failure of the TLC; it’s about TLC’s inherent susceptibility to losing data retention, and the inherent SSD read and other performance degradation issues.
You are not going to notice any SSD cell failures unless you go specifically looking for the failed blocks, as overprovisioning in the SSD will replace any failed blocks with spare blocks from the overprovisioning pool (a rough sketch of that math follows this comment).
“Spinny disks”! The discussion is about the susceptibility of TLC to excessive error correction/data retention issues, not about spinning rust – and again with the complete failure mode analysis, when the issue is TLC and performance degradation, not total failure of the drive!
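For reference, a rough sketch of the overprovisioning math (illustrative numbers, not any specific drive):

```python
# Spare area = raw NAND on board minus advertised capacity.
raw_gib = 256        # raw NAND, binary GiB
advertised_gb = 240  # marketed capacity, decimal GB

raw_gb = raw_gib * 2**30 / 1e9     # ~274.9 GB of raw flash
spare_gb = raw_gb - advertised_gb  # blocks held back for wear/remapping
print(f"spare: {spare_gb:.1f} GB ({spare_gb / raw_gb:.1%} of raw)")
# -> spare: 34.9 GB (12.7% of raw)
```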
nom,nom,nom …
http://i.imgur.com/dJj4xJQ.jpg
Well, another nail in the coffin for spinning rust. Now we just have to wait on price parity.
lol.