Results – M600

The sequence, one last time:

  • Partition full drive capacity and format NTFS.
  • Perform a truncated round of benchmarks to simulate an OS install.
  • Fill to 30% in 10% increments (with wait periods after each 10%).
  • Perform abbreviated random workload on the first 10% to simulate OS file writes over time (registry, log files, etc).
  • Fill to 50% in 10% increments (with waits after each 10%).
  • Perform truncated round of benchmarks to evaluate performance.
  • Evaluate sequential read speed of first 50% (fragmentation check).
  • Fill to 80% in 10% increments (with waits).
  • Perform truncated round of benchmarks to evaluate performance.
  • Fill to near capacity (97%), with a wait at 90%.
  • Perform truncated round of benchmarks to evaluate performance.
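
For readers wanting to replicate the general shape of this sequence, here's a minimal sketch of how the fill/wait loop could be scripted. Everything in it is an assumption for illustration – the target path, capacity figure, file sizing, and idle time are hypothetical, and this is not the actual harness used for these results:

```python
import os
import time

# Hypothetical parameters for illustration -- not the actual test harness.
TARGET = "E:\\fill"               # mount point of the drive under test
CAPACITY = 256 * 10**9            # advertised capacity in bytes
STEP = int(CAPACITY * 0.095)      # files sized just under 10% each
CHUNK = 16 * 2**20                # 16 MiB sequential write chunks

def fill_increment(index: int) -> None:
    """Write one ~10% test file sequentially with incompressible data."""
    buf = os.urandom(CHUNK)
    with open(os.path.join(TARGET, f"fill_{index:02d}.bin"), "wb") as f:
        written = 0
        while written < STEP:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())      # make sure the data actually hits the drive

for i in range(10):
    fill_increment(i)
    time.sleep(600)               # idle window (10 minutes to hours here)
    # ...truncated benchmark passes would run at the points listed above
```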

First I'll present the progression of filling the M600s with data throughout the sequence. Remember, these were given anywhere from 10 minutes to several hours of idle time between write increments:

M600 128GB:

M600 256GB:

M600 1TB:

There's no nice way to put this – the write speeds were wildly inconsistent here. I've spent the past week trying to get my head around what it takes for writes to these drives to behave the way Micron explained in the briefing, but it just does not consistently happen. At capacities beyond 50%, I tried varying the waits between 10% increments, but was only left more baffled as to what was taking place. During the same 10% increment, I would see one sample fall to MLC write speeds almost instantly, while another wrote at SLC speeds for the entire 10%. Then, after leaving the samples overnight, I would see an inverted result while writing the next 10%.
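
One way to see where a given 10% write falls out of SLC mode is to log throughput per chunk rather than an overall average. A rough sketch of that idea follows – the chunk size is an arbitrary choice, and this isn't the tooling behind the charts above:

```python
import os
import time

CHUNK = 64 * 2**20  # 64 MiB per timed write

def timed_fill(path: str, total_bytes: int) -> None:
    """Write a file in chunks, logging MB/s for each chunk. A mid-file
    drop from SLC-mode to MLC-mode write speed shows up as a step in
    the logged figures."""
    buf = os.urandom(CHUNK)
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # time the flush too, not just the buffer copy
            dt = time.perf_counter() - t0
            print(f"{written // 2**20:>7} MiB  {CHUNK / dt / 1e6:7.1f} MB/s")
            written += CHUNK
```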

My best guess was that the M600s initiate a round of 'die flips' only once specific allocation thresholds have been reached, and if you've written to just under one of those thresholds, you might catch the drive with little to no SLC area remaining. Just stepping through the 128GB sample results, the amount of SLC-speed writing you get on a given 10% write is seemingly random.
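
To illustrate why that guess would produce seemingly random results, here's a toy model of threshold-triggered die flips. Every number in it is invented – the die count, capacities, and flip logic are assumptions, not Micron's actual firmware – but it shows how SLC-speed headroom can land near zero just under a flip point and then reappear just past it:

```python
# Toy model: dies start in SLC mode (half capacity) and flip to MLC mode
# one at a time, only when the stored data no longer fits otherwise.
# All numbers are invented for illustration -- not Micron's actual firmware.
DIES = 8
DIE_GB = 16                      # hypothetical MLC capacity per die
CAP = DIES * DIE_GB              # a 128GB-class drive

def free_slc_gb(stored_gb: float) -> float:
    """SLC-speed headroom left after storing stored_gb of data."""
    # Flip the minimum number of dies to MLC mode needed to hold the data.
    for flipped in range(DIES + 1):
        if flipped * DIE_GB + (DIES - flipped) * DIE_GB / 2 >= stored_gb:
            break
    slc_capacity = (DIES - flipped) * DIE_GB / 2
    slc_used = max(0.0, stored_gb - flipped * DIE_GB)  # data parked in SLC dies
    return max(0.0, slc_capacity - slc_used)

for pct in range(0, 100, 5):
    gb = CAP * pct / 100
    print(f"{pct:3d}% full -> ~{free_slc_gb(gb):4.1f} GB of SLC-speed headroom")
```

The headroom in this toy model is non-monotonic – it bottoms out just before each flip and briefly recovers after – which lines up with catching 'little to no SLC area remaining' on some increments and plenty on others.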

The other important data point to note is the 'worst case' speed seen while writing the last 10%. Taking the M600s to 95% with a full 10% write catches them in the act of shuffling data between dies, which exacts a significant penalty on write speeds. In case the math doesn't seem to add up after reading this paragraph: the test files were actually sized just under 10% each, such that ten of them total just over 95%. I stuck with the 10% increments for ease of writing and explanation.

Additionally, the 1TB model shows what appear to be three distinct speed grades, yet the press briefing states that for 2.5" SATA models, only the 128GB and 256GB support DWA. That leaves us puzzled as to why there is a drop in continuous write speeds after the 50% full mark (with that drop appearing at differing places – just like on the two smaller models). The step decrease is reasonably smaller here, as the 1TB capacity has plenty of dies to keep speeds relatively high even when writing in MLC mode.

ATTO:

Now let's evaluate for inconsistencies as the M600s are filled. I'll demonstrate with ATTO passes of the 256GB model:

M600 256GB empty:

M600 256GB 50%:

M600 256GB 80%:

M600 256GB 95%:

These results tell us a few things. First, there is inconsistency as the SSD is filled and write speed drops from SLC speeds to MLC speeds. Second, and perhaps more importantly, writes appear to 'roll through' the flash area, meaning that random writes within a static test file (ATTO's file is only 256MB in size) might occur at either MLC or SLC speeds. This shows up as inconsistency *within* the ATTO passes. Further backing up the apparent randomness of SLC speed availability, we actually saw *better* performance at 80% full than we did at 50%.
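
That within-pass inconsistency is easy to probe outside of ATTO as well. Here's a rough sketch of the idea – repeatedly overwriting the same small file and timing each pass; the sizes and pass count are arbitrary assumptions, not a stand-in for ATTO itself:

```python
import os
import time

def rewrite_speed(path: str, size: int = 256 * 2**20, passes: int = 5) -> None:
    """Overwrite the same file region repeatedly, reporting MB/s per pass.
    If writes 'roll through' SLC- and MLC-mode flash, per-pass speeds
    will vary widely even though the logical file never changes."""
    buf = os.urandom(8 * 2**20)
    for n in range(passes):
        t0 = time.perf_counter()
        # 'r+b' after the first pass overwrites in place (same LBAs).
        mode = "r+b" if os.path.exists(path) else "wb"
        with open(path, mode) as f:
            written = 0
            while written < size:
                f.write(buf)
                written += len(buf)
            f.flush()
            os.fsync(f.fileno())
        print(f"pass {n}: {size / (time.perf_counter() - t0) / 1e6:6.1f} MB/s")
```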

Fragmented file reads:

Given that the M600 is based on a similar controller architecture to the M550 and MX100, we expected to see a similar read speed drop on an in-place fragmented file occupying the first 10% of the SSD (the first 20% of the 50% copy here):

M600 128GB:

M600 256GB:

M600 1TB:

Yup, there it is, as expected. It should be noted that performing an in-place sequential write to that same 10% area restored read speeds to full. This was verified on all Micron-controlled samples here, and it confirms that randomly written files will read back more slowly.
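
For anyone wanting to reproduce this effect, the recipe is simple: randomly overwrite a region in place, time a sequential read of it, then sequentially rewrite it in place and time the read again. A minimal sketch follows – the block counts and sizes are arbitrary, and a rigorous run would bypass the OS cache (e.g. direct/unbuffered I/O), which this sketch omits for brevity:

```python
import os
import random
import time

BLOCK = 4096

def seq_read_mbps(path: str) -> float:
    """Sequentially read the whole file and return MB/s (cache caveat above)."""
    t0, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(2**20):
            total += len(chunk)
    return total / (time.perf_counter() - t0) / 1e6

def fragment_in_place(path: str, writes: int = 100_000) -> None:
    """Randomly overwrite 4K blocks in place: the LBAs stay put, but the
    drive's flash mapping for them becomes scattered."""
    blocks = os.path.getsize(path) // BLOCK
    with open(path, "r+b") as f:
        for _ in range(writes):
            f.seek(random.randrange(blocks) * BLOCK)
            f.write(os.urandom(BLOCK))
        f.flush()
        os.fsync(f.fileno())

def rewrite_in_place(path: str) -> None:
    """Sequentially rewrite the file in place, which restored full read
    speeds in the testing above."""
    remaining = os.path.getsize(path)
    with open(path, "r+b") as f:
        while remaining > 0:
            n = min(2**20, remaining)
            f.write(os.urandom(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())

# Usage sketch: assumes test.bin was already written to the drive under test.
fragment_in_place("test.bin")
print(f"fragmented read: {seq_read_mbps('test.bin'):.1f} MB/s")
rewrite_in_place("test.bin")
print(f"rewritten read:  {seq_read_mbps('test.bin'):.1f} MB/s")
```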
