Results – M600
The sequence, one last time (a scripted sketch of this loop follows the list):
- Partition full drive capacity and format NTFS.
- Perform a truncated round of benchmarks to simulate an OS install.
- Fill to 30% in 10% increments (with wait periods each 10%).
- Perform abbreviated random workload on the first 10% to simulate OS file writes over time (registry, log files, etc).
- Fill to 50% in 10% increments (with waits after each 10%).
- Perform truncated round of benchmarks to evaluate performance.
- Evaluate sequential read speed of first 50% (fragmentation check).
- Fill to 80% in 10% increments (with waits).
- Perform truncated round of benchmarks to evaluate performance.
- Fill to near capacity (97%) (wait at 90%).
- Perform truncated round of benchmarks to evaluate performance.
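To make the methodology easier to follow (and reproduce), here is a minimal sketch of how that loop could be scripted. Every helper in it is a hypothetical placeholder standing in for whatever tooling you would use to write the files and run the benchmarks; this is not the actual harness used for these results:

```python
import time

# Minimal sketch of the fill/wait/benchmark loop. All helpers below are
# hypothetical print stubs, not the actual harness used for this review.
STEP = 0.10          # fill in 10%-of-capacity increments
IDLE_SECONDS = 600   # anywhere from 10 minutes to several hours was used

def write_increment(fraction):
    print(f"  sequentially writing {fraction:.0%} of capacity")

def random_workload_first_10pct():
    print("  abbreviated random writes over the first 10% (simulated OS use)")

def run_benchmarks(label):
    print(f"truncated benchmark round: {label}")

def check_sequential_read_first_half():
    print("sequential read of first 50% (fragmentation check)")

def fill_to(target, fill):
    """Write increments, idling after each, until the target fill is reached."""
    while fill < target - 1e-9:
        step = min(STEP, target - fill)  # the final step to 97% is partial
        write_increment(step)
        fill += step
        time.sleep(IDLE_SECONDS)         # let background work happen
    return fill

run_benchmarks("empty (simulated OS install)")
fill = fill_to(0.30, 0.0)
random_workload_first_10pct()
fill = fill_to(0.50, fill)
run_benchmarks("50% full")
check_sequential_read_first_half()
fill = fill_to(0.80, fill)
run_benchmarks("80% full")
fill = fill_to(0.97, fill)
run_benchmarks("near capacity (97%)")
```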
First I'll present the progression of filling the M600s with data throughout the sequence. Remember, these were given anywhere from 10 minutes to several hours of idle time between write increments:
M600 128GB:
M600 256GB:
M600 1TB:
There's no nice way to put this – the write speeds were wildly inconsistent here. I've spent the past week trying to get my head around what it takes for writes to the drive to behave more like what Micron explained in the briefing, but it just does not consistently happen. At capacities beyond 50%, I tried varying the waits between 10% increments, but was only left more baffled as to what was taking place. During the same 10% increment, I would see one sample fall to MLC write speeds almost instantly, while another wrote at SLC speeds for the entire 10%. Then, after leaving the samples overnight, I would see an inverted result while writing the next 10%.
My best guess was that the M600s initiate a round of 'die flips' only once specific allocation thresholds have been reached, and if you've written to just under one of those thresholds, you might catch the drive with little to no SLC area remaining. Just stepping through the 128GB sample results, the amount of SLC-speed writing you get on a given 10% write is seemingly random.
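To illustrate that guess, here is a toy model of the hypothesis. Every number in it (speeds, thresholds, pool sizes) is invented, and the logic is a caricature of what the firmware *might* be doing; Micron has confirmed none of this:

```python
# Toy model of the 'die flip' guess above. All figures are invented for
# illustration; this is not Micron's actual firmware logic.
SLC_MBPS, MLC_MBPS = 450, 150
FLIP_THRESHOLDS = (0.25, 0.50, 0.75)  # hypothetical allocation thresholds
POOL_LOST_PER_FLIP = 0.10             # SLC capacity converted per flip round

def simulate(idle_restores, steps=10, step=0.10):
    fill, slc_pool, slc_free = 0.0, 0.30, 0.30
    flipped = set()
    for i in range(steps):
        slc_part = min(step, slc_free)        # writes land in SLC first...
        mlc_part = step - slc_part            # ...overflow goes straight to MLC
        slc_free -= slc_part
        fill += step
        for t in FLIP_THRESHOLDS:             # crossing a threshold flips dies,
            if fill >= t and t not in flipped:
                flipped.add(t)                # permanently shrinking the pool
                slc_pool = max(0.0, slc_pool - POOL_LOST_PER_FLIP)
                slc_free = min(slc_free, slc_pool)
        avg = step / (slc_part / SLC_MBPS + mlc_part / MLC_MBPS)
        print(f"write {i + 1} ({fill:.0%} full): avg {avg:.0f} MB/s")
        if idle_restores:                     # idle migration empties the SLC
            slc_free = slc_pool               # area before the next increment

print("with idle between increments:")
simulate(idle_restores=True)
print("back-to-back increments:")
simulate(idle_restores=False)
```

In this caricature, whether a given 10% write lands at SLC or MLC speed depends entirely on where the previous writes stopped relative to the thresholds and on how much idle time preceded it, which is at least consistent with the randomness observed here.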
The other important data point to note is the 'worst case' speed seen while writing the last 10%. Taking the M600s to 95% with a full 10% write catches them in the act of shuffling data between dies, which demonstrates a significant penalty to write speeds. In case the math doesn't seem to add up after reading this paragraph: the test files were actually sized just under 10% of capacity each, such that the total of ten of them comes to just over 95%. I stuck with calling them 10% increments for ease of writing and explanation.
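For the curious, the file sizing works out like this (the capacity is just an example):

```python
# Illustrative arithmetic for the test file sizing (256GB is just an example).
capacity_gb = 256
per_file_fraction = 0.0953                     # each file just under 10%
per_file_gb = capacity_gb * per_file_fraction  # ~24.4 GB per file

print(f"each file: {per_file_gb:.1f} GB ({per_file_fraction:.1%})")
print(f"ten files: {10 * per_file_fraction:.1%} of capacity")  # just over 95%
```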
Additionally, the 1TB model shows what appear to be three distinct speed grades, yet the press briefing states that for 2.5" SATA models, only the 128GB and 256GB capacities support DWA. That leaves us puzzled as to why there is a drop in continuous write speed after 50% full (with that drop appearing at differing places – just like on the two smaller models). A smaller step decrease would stand to reason, as there are plenty of dies at the 1TB capacity to keep speeds relatively high even when writing in MLC mode.
ATTO:
Now let's evaluate for inconsistencies as the M600s are filled. I'll demonstrate with ATTO passes of the 256GB model:
M600 256GB empty:
M600 256GB 50%:
M600 256GB 80%:
M600 256GB 95%:
These results tell us a few things. First, there is inconsistency as the SSD is filled and write speed drops from SLC speeds to MLC speeds. Second, and perhaps more importantly, writes appear to 'roll through' the flash area, meaning that random writes within a static test file (ATTO's file is only 256MB in size) might occur at either MLC or SLC speeds. This is seen as inconsistency *within* the ATTO passes. Further backing up the apparent randomness of SLC speed availability, we actually saw *better* performance at 80% than we did at 50%.
Fragmented file reads:
Given that the M600 is based on a similar controller architecture to the M550 and MX100, we expected to see a similar read speed drop on an in-place fragmented file occupying the first 10% of the SSD (which, at the 50% fill point here, is the first 20% of the data written):
M600 128GB:
M600 256GB:
M600 1TB:
Yup, there it is, as expected. It should be noted that performing an in-place sequential rewrite of that same 10% area restored read speeds to full. This was verified on all of the Micron samples here, and it confirms that randomly written files will read back more slowly.
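For anyone wanting to try this check on their own drive, here is a rough sketch of the idea, assuming a test file that was previously written with random I/O. The path and chunk size are hypothetical, it ignores OS caching (real testing needs unbuffered I/O at the device level and a cold cache), and it is not the exact tooling used for these results:

```python
import time

# Rough sketch of the fragmentation check described above. The path and
# chunk size are hypothetical; this is not the exact tooling used here.
PATH = "E:/testfile.bin"   # a file previously written with random I/O
CHUNK = 1 << 20            # 1 MiB per read/write

def timed_sequential_read(path):
    """Read the whole file front to back and return throughput in MB/s."""
    start, total = time.perf_counter(), 0
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    return total / (time.perf_counter() - start) / 1e6

print(f"before rewrite: {timed_sequential_read(PATH):.0f} MB/s")

# In-place sequential rewrite: read each chunk and write it straight back,
# front to back, so the same file span is laid down sequentially. This is
# what restored full read speeds on the samples tested above.
with open(PATH, "r+b", buffering=0) as f:
    pos = 0
    while True:
        f.seek(pos)
        data = f.read(CHUNK)
        if not data:
            break
        f.seek(pos)
        f.write(data)
        pos += len(data)

print(f"after rewrite:  {timed_sequential_read(PATH):.0f} MB/s")
```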
If there are more reviews like this where people are not able to get their heads around the Micron controller concept, they should simply release the successor to the MX100 line with their low-cost standard controller (upgraded, of course). This would become the go-to SSD for millions. A consistent 256GB SSD for $80 sounds much better than the new dinky M600 at any capacity.
The M600 looks like a lemon to me at the moment.
There's the rub. Testing in this manner revealed that the MX100 has issues as well – just different ones. See the bottom of page 4 for details and explanation.
Makes one wonder if the Marvell controller’s quirk is exclusive to the 88SS9189. I know SanDisk uses previous revisions of the controller in their SSDs.
Different companies and different firmware, though. Not likely.
I’m an SSD neophyte; my primary usage: Photoshop, Lightroom, audio recording (minimal video).
I’m going to replace my 1TB boot HD with a ~1/2TB SSD (480, 500, or 512GB). I’m leaning toward the Crucial M550 over the MX100 (only about $20 more); some say the M550 “is built for heavier use”. (?) I was looking at the Samsung, but not after TWiT’s “Padre SJ” and this review discussed slowdown issues.
Do the M550s have any slowdown issues? Or is this only the M600, due to its different/new controller?
Allyn M. talked about the M550 on July 25, 2014. (no “review”)
Q: Are the potential specs of the M600 series worth waiting for it to come out, or should I just pull the trigger on the M550 and stop waiting?
Thanks,
Dokk
Allyn,
A SanDisk Ultra II run through the same tests would be a great addition,
as the third variation of this tech…
The SanDisk Ultra II drive uses the Marvell 88SS9187 instead of the 88SS9189 controller, and uses different firmware. So in my opinion it’s probably doubtful. It’s gonna take some months to also test whether or not SanDisk figured a way around the leaky TLC problem.
My info tells me SanDisk is using:
the 9190-4ch for 120 and 240GB drives,
and the 9189 for larger drives…
But it’s the tech I would like to see compared.
Sammy has a static cache,
Micron is using dynamic,
SanDisk is using on-chip copy…
Hmmm, on closer inspection it does seem that SanDisk likes to vary which Marvell controller is used on a per-drive or even per-capacity basis.
For example, the SanDisk X300s drive uses the 9189 controller.