Last year we saw Micron toy with the idea of dynamically flipping flash memory dies between SLC and MLC modes. On paper it sounded like a great idea: get the speed of SLC flash while the SSD is up to 50% full, then start shifting dies over to MLC mode to gain the higher capacity. The ability to flip dies between modes only arrived shortly before the M600 SSDs were introduced. Note that this is different from other types of mixed-mode flash, such as that on the Samsung 'EVO' models, which have a small SLC segment present on each TLC die. That static partitioning kept those solutions more consistent in performance than the M600 was when we first evaluated it.
What if we borrowed the idea of keeping the flash mode static, but stuck with the faster mode? Transcend has announced it will be doing just that in the coming year. These will be SSDs equipped with MLC flash, but that flash will be configured to operate in SLC mode full time. This enables roughly 4x write speeds and far higher endurance (~30,000 P/E cycles, compared to the ~5-10k cycle figures of the same flash operating in MLC mode). The performance and endurance boost comes at a cost, as these SSDs will consume twice the flash memory of an equivalent-capacity MLC model. We predict this type of substitution for standard SLC flash will be a continuing trend, since SLC flash production volume is insignificant compared to MLC. This trick gets you most of the way to SLC performance and endurance for (in the current market) a lower cost/GB than a straight SLC SSD.
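For a rough sense of that trade-off, here is a back-of-the-envelope sketch; the 512 GB raw capacity and the P/E cycle counts are illustrative assumptions based on the ballpark figures above, not Transcend's published specs.

```python
# Back-of-the-envelope for MLC-run-as-SLC ("SuperMLC") trade-offs.
# All figures are illustrative assumptions, not vendor specifications.

RAW_MLC_GB = 512             # hypothetical raw MLC capacity of the drive
MLC_PE_CYCLES = 5_000        # assumed endurance in MLC mode
SLC_MODE_PE_CYCLES = 30_000  # assumed endurance when run in SLC mode

# Each cell stores 1 bit instead of 2, so usable capacity halves.
slc_mode_gb = RAW_MLC_GB // 2

# Naive total-bytes-written comparison (ignores write amplification).
mlc_tbw = RAW_MLC_GB * MLC_PE_CYCLES / 1000
slc_mode_tbw = slc_mode_gb * SLC_MODE_PE_CYCLES / 1000

print(f"MLC mode: {RAW_MLC_GB} GB usable, ~{mlc_tbw:.0f} TB written")
print(f"SLC mode: {slc_mode_gb} GB usable, ~{slc_mode_tbw:.0f} TB written")
```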
Upcoming Transcend models that will include SuperMLC technology:
- SSD510K – 2.5”
- MSA510 – mSATA
- HSD510 – half slim
- MTS460 & MTS860 – M.2
Well, it sure takes advantage of the pricing of MLC (2-bit) NAND flash, since the MLC supply is much larger than the SLC supply produced for the market. Treating the MLC memory as SLC in the controller's firmware will allow for more R/W speed at the cost of capacity. It sounds like a great idea: keeping track of 2 cell states rather than 4, and only having to manage two states' thresholds, should lead to better data retention, better wear resistance, and less state management/error correction for the SSD's controller.
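As a minimal sketch of why two states are easier to manage than four, here is an idealized comparison of the read thresholds involved; the voltage window and threshold placement are assumptions for illustration, not any real controller's behavior.

```python
# Illustrative only: reading a cell as SLC takes one threshold comparison,
# while MLC takes three, and each extra threshold shrinks the voltage
# margin available for charge drift before a read error occurs.

VMAX = 3.0  # assumed usable voltage window, in volts

def read_slc(v):
    """One comparison: below the midpoint reads as 1 (erased), above as 0."""
    return 1 if v < VMAX / 2 else 0

def read_mlc(v):
    """Three comparisons splitting the same window into four states (0..3)."""
    thresholds = (VMAX * 0.25, VMAX * 0.5, VMAX * 0.75)
    return sum(v >= t for t in thresholds)

print("SLC read of 0.4 V:", read_slc(0.4))      # -> 1
print("MLC read of 0.4 V:", read_mlc(0.4))      # -> state 0
print("SLC margin per state:", VMAX / 2, "V")   # 1.5 V
print("MLC margin per state:", VMAX / 4, "V")   # 0.75 V
```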
I wonder about TLC-based devices that can no longer use some blocks because those cells have degraded beyond the usable threshold for TLC operation. Could the controller be programmed to shift the degraded TLC cells into MLC (2-bit), or even SLC, mode and still get some use out of them, at the cost of some capacity?
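No shipping controller does this as far as I know, but as a thought experiment such a policy could look something like the sketch below; the mode names, ECC budgets, and the 90% trip point are all invented for illustration.

```python
# Hypothetical wear-management policy: when a block can no longer hold TLC
# voltage levels reliably, demote it to MLC, then SLC, then retire it,
# trading capacity for continued use. All names and numbers are invented.

MODES = ["TLC", "MLC", "SLC", "RETIRED"]
ECC_BUDGET = {"TLC": 40, "MLC": 60, "SLC": 80}  # correctable bits per page (assumed)

def maybe_demote(block):
    """Drop a block to the next simpler mode once its raw bit errors get
    too close to the ECC limit for its current mode."""
    mode = block["mode"]
    if mode != "RETIRED" and block["raw_errors"] > 0.9 * ECC_BUDGET[mode]:
        block["mode"] = MODES[MODES.index(mode) + 1]
    return block

block = {"id": 17, "mode": "TLC", "raw_errors": 38}
print(maybe_demote(block))  # 38 > 36, so the block is demoted to MLC
```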
I wonder too. Nice idea, switching TLC -> MLC -> SLC based on degradation; that would be interesting, the drive racing toward its death!
This is just what the market needs to help ramp up NAND production en masse and drive prices down.
This would bring the prices of SLC devices down, but it probably will not affect the price of flash memory in general, since SLC is a smaller, high-priced market. If it works well, it might kill off the last bit of SLC production, since SLC would no longer be needed if higher-volume (cheaper) MLC can be used instead. Flash prices should continue to fall anyway, but I wouldn't expect this new use for MLC to make that much of a difference.
For all we know, that SLC cell storage probably could have had MLC capabilities all along; it's just that the controllers were never programmed to use it that way, so having one cell per bit of storage was costly for manufacturers when NAND flash first came to market. But the manufacturing cost per cell has come down as the technology has matured, with per-die capacities doubling every year to 18 months since the flash market started. The manufacturers have been milking SLC for extra profit because of its speed and durability relative to MLC and TLC, rather than pricing it on the cost of producing an individual NAND flash cell.
The current level of competition among the makers of flash memory chips, along with the introduction of Xpoint memory, will drag down the overall profitability of the low-volume enterprise SLC NAND SSD market, and the NAND flash makers will have to think more about volume production and sales to maintain revenue levels and some form of profit.
The overall PC/laptop market is still in decline, heading toward replacement-level purchases of new devices rather than the explosive growth of the boom years. The smartphone market is probably looking at Xpoint as a lower-cost alternative to more DRAM, given Xpoint's ability to retain its state even without power. There will probably be phones with DRAM/Xpoint hybrid memory to compete with in the future.
TLC flash will not be needed as much with MLC and SLC becoming more prevalent. Using MLC flash in SLC mode means the cost pressure on SLC flash will be even more pronounced once Xpoint makes it to market.
I doubt that they had MLC capabilities all along. It is quite simple to have a cell drive a sense amp and distinguish between two values. It is much more complicated to have a sense amp resolve a far more precise voltage (4 or 8 levels) and drive 2 or 3 bit lines based on it. It is also much more complicated to write a charge into the cell that hits the exact voltage you are aiming for.
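A toy program-and-verify loop makes that point concrete: placing a charge at one coarse SLC level takes fewer, larger pulses than placing it at one of several tightly spaced MLC levels. The target voltages and step sizes below are invented for illustration, loosely in the spirit of incremental step pulse programming rather than any specific part's algorithm.

```python
# Toy program-and-verify loop: one coarse SLC target needs few, large
# pulses, while several tightly spaced MLC targets each need many fine
# pulses. All voltages and step sizes are invented.

def program_cell(target_v, step):
    """Pulse charge into the cell until it reaches target_v; a smaller
    step places the level more precisely but costs more pulses."""
    v, pulses = 0.0, 0
    while v < target_v:
        v += step
        pulses += 1
    return pulses

print("SLC pulses:", program_cell(1.5, step=0.3))   # coarse steps are fine
print("MLC pulses:", [program_cell(t, step=0.05) for t in (0.8, 1.6, 2.4)])
```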
Intel's X-point could take a big chunk of the flash market if it actually delivers. I use a small 120 GB flash drive for the OS and swap; X-point could easily replace this, and it could be much more durable and much faster. That would turn flash into the budget option for small drives and push the focus for flash toward higher-volume mass storage, though it is still a long way from competing with hard drives on price. It could still be very useful in a data center, since they often need to keep large amounts of data accessible very quickly, and it is probably more power efficient to keep such data on flash than on a disk RAID array.
They should be able to increase density with further 3D stacking, but this will have limits. There is a certain cost per wafer which drives the 2D space constraints, but with a larger number of layers, each layer increases the wafer processing cost and raises the chance of defects. This will place limits on the number of layers that can be stacked economically. Samsung will be moving to 48 layers soon, I believe, which is supposed to get us up to a terabyte on a tiny M.2 stick, but it will not be cheap.
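A toy cost model shows why the layer count can't grow forever: in the sketch below (every constant in it is made up), each added layer raises wafer processing cost and compounds defect risk, so cost per bit eventually bottoms out and climbs again.

```python
# Toy model of 3D NAND stacking economics. Every constant is assumed:
# extra layers add processing cost and a small chance of a killer defect,
# so cost per bit stops falling at some layer count.

BASE_WAFER_COST = 1500.0   # wafer cost before stacking (arbitrary units)
COST_PER_LAYER = 60.0      # added processing cost per layer (assumed)
YIELD_PER_LAYER = 0.997    # per-layer survival probability (assumed)

def relative_cost_per_bit(layers):
    wafer_cost = BASE_WAFER_COST + COST_PER_LAYER * layers
    die_yield = YIELD_PER_LAYER ** layers
    good_bits = layers * die_yield        # bits scale with surviving layers
    return wafer_cost / good_bits

for n in (24, 48, 96, 128, 256):
    print(f"{n:3d} layers -> relative cost/bit {relative_cost_per_bit(n):6.1f}")
```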
It would be nice if they could get solid state storage cheaper than hard drives, but I don't see it happening soon. I don't think the average consumer has much need for terabytes of flash storage anyway; an X-point drive plus hard drives for mass storage may be sufficient. This may drive flash into more of a niche market rather than toward higher volume. The number of people who need terabytes of flash storage is small, so the market would shift to larger drives, but the number of drives sold may go down. Also, we are moving more into cloud storage, where individuals do not have to buy their own devices and storage management is left to enterprise-level buyers.
Look what a confusion. A new, fully empty SSD has a capacity of 100. Then I store 50 and the capacity increases to 400.
There will be a lot of calls to customer service.
How does this actually work in the hardware? I do not know how much hardware is on the flash die and how much is on the controller. I assume the sense amps that read the charge out of the cell must be on the flash die, so for SLC, the sense amp would just drive one bit line, while for multi-level cells it would need to drive 2 or 3 bit lines. Therefore, to use an MLC die as SLC, you would not want to use a single bit line, since that would mean distinguishing two very close voltage levels (00->01 or 000->001). Does it just read the multi-bit value and interpret anything in the upper range as a 1 and anything in the lower range as a 0? I assume this would make programming much quicker, since any voltage in the upper range would read as a 1. I guess it may also be better to disallow the values in the middle to avoid drifting over the boundary.
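One plausible answer, sketched below purely as a guess at how the mapping could work: program only the two extreme MLC states, skip the middle ones, and read against a single mid-window threshold, which leaves a wide guard band against charge drift. The voltage assignments are illustrative, not from any datasheet.

```python
# Hypothetical mapping for running an MLC cell in SLC mode: use only the
# lowest and highest of the four MLC states, never the two middle ones,
# and read with a single threshold in the middle of the window. The unused
# middle states become a wide guard band against charge drift.

MLC_STATE_VOLTAGE = {0b11: 0.0, 0b10: 1.0, 0b00: 2.0, 0b01: 3.0}  # erased..highest
READ_THRESHOLD = 1.5   # single comparison, mid-window

def write_slc_bit(bit):
    """Use only the two extreme MLC states for the two SLC values."""
    return MLC_STATE_VOLTAGE[0b11] if bit == 1 else MLC_STATE_VOLTAGE[0b01]

def read_slc_bit(cell_voltage):
    """One comparison; the stored charge can drift a long way before flipping."""
    return 1 if cell_voltage < READ_THRESHOLD else 0

v = write_slc_bit(1)
print(read_slc_bit(v))        # -> 1
print(read_slc_bit(v + 1.2))  # still 1 despite substantial drift
```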