Performance Comparisons – Client QD Weighted
These results attempt to simplify things by focusing on what really matters – the Queue Depths that folks actually see when using these products. A dimension is eliminated from the previous charts by applying a weighted average to those results. The weights were derived from trace recordings of moderate to heavy workloads, which still ended up running closer to QD=1-2 even on a slower SATA SSD. The intent here is to distill the results into something for those wanting 'just the facts' to grab and go when making their purchasing decisions.
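The weighting described above can be sketched roughly as follows. Note that the queue-depth results and weights here are made-up placeholders for illustration, not the review's actual trace-derived data:

```python
# Hypothetical sketch of a QD-weighted average.
# The IOPS figures and weights below are illustrative placeholders only;
# the review derived its real weights from workload trace recordings.
qd_results = {1: 12000, 2: 18000, 4: 30000, 8: 45000}   # IOPS at each queue depth
weights    = {1: 0.50, 2: 0.30, 4: 0.15, 8: 0.05}       # heavily favoring QD=1-2

weighted_iops = sum(qd_results[qd] * weights[qd] for qd in qd_results)
print(weighted_iops)  # a single figure dominated by the low QDs seen in real use
```

Collapsing the per-QD chart into one number this way is what removes a dimension from the earlier charts.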
Don't be alarmed by the low figures. Remember, these are low queue depths – the place where these SSDs actually operate when in use by those not just running benchmarks all day!
Samsung has (rightly) been focused on low QD performance for some time now. They even went as far as to proudly place QD=1 specifications on their product page. It has paid off for them here, as weighting the results towards the more commonly used Queue Depths shows superior real-world performance.
Perhaps fanciful, but I agree it could be a killer app?
“Conclusion: we have now reached a new era in which mass storage is capable of performing at close to the same sequential performance as volatile DDR3 DRAM. Four such M.2 SSDs in RAID-0 mode == ~8TB (before formatting).”
My take on it would be a less ambitious 2-drive RAID 0 of 512GB 960 SSDs. Best performing and cheaper.
PCIe Gen 3.0 allows roughly 1GB/s per lane in each direction, so ~2GB/s per lane theoretical max.
In other words, ~8GB/s across the dual 4-lane M.2 ports on motherboards.
In theory that's sufficient to max out two 960s in RAID 0, but the 3500MB/s sequential reads (writes are 2100MB/s) are of course unidirectional.
So in theory a RAID 0 pair of 960s sharing four lanes yields ~4000MB/s sustained, read or write.
I'm pretty sure we will see 8 lanes available to M.2 motherboard sockets (even with bargain AMD Ryzen motherboards and CPUs, which have 32 lanes, BTW), allowing roughly 7000/4200MB/s read/write in theory, without fancy controllers.
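The lane arithmetic above can be laid out explicitly. This is a rough sketch: it assumes the round ~1000MB/s-per-lane-per-direction figure used in the comment (real PCIe 3.0 delivers closer to ~985MB/s per lane after encoding overhead) and the 3500/2100MB/s sequential specs quoted for the 960:

```python
# Rough PCIe 3.0 RAID-0 ceiling math, per the comment above.
# Assumes ~1000 MB/s per lane per direction (actual usable is ~985 MB/s).
PER_LANE = 1000                          # MB/s, one direction
DRIVE_READ, DRIVE_WRITE = 3500, 2100     # 960 sequential specs, MB/s

def raid0_throughput(drives, lanes_total):
    """Unidirectional ceiling: the lesser of summed drive speed and link bandwidth."""
    link_cap = lanes_total * PER_LANE
    return (min(drives * DRIVE_READ, link_cap),
            min(drives * DRIVE_WRITE, link_cap))

print(raid0_throughput(2, 4))   # (4000, 4000) -> two drives link-limited on x4
print(raid0_throughput(2, 8))   # (7000, 4200) -> x8 lifts the cap; writes drive-limited
```

Note how on four shared lanes both directions hit the 4000MB/s link ceiling, while eight lanes expose the drives' own limits instead.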
I don't know the numbers for RAM bandwidth. A lot better, I'm sure. Not sure that's a deal breaker for my argument.
The point is, 7000/4200MB/s are numbers in a league of their own compared to anything before, even in the server world. It's a new paradigm for coders.
OK, using it for virtual memory isn't as fast as real memory, but it's big. I don't know enough about architecture, etc., but a TB of "RAM" may open many possibilities for completely new approaches to old coding problems.
The killer benefit of SSDs was fast random access. It transformed our PCs.
~150MB/s sequential was livable; access times were the killer on HDD performance.
As many have said re the 960, more of the same will be barely noticed by many.
Give a gamer 1TB of passable virtual memory, and apps which use it, and that could be revolutionary.
It bears repeating, BTW, that IOPS has shown even more stellar gains in the 960, and I imagine that's important for virtual memory. As we hear, many consider this the main reason to spend the extra for the 960 over the 950.
PS: upon reflection, a poor man's RAID 0 on 4 lanes is still attractive for swap/page files, even with little read-speed gain. Write speed almost doubles, from ~2100MB/s for a single drive to the ~4000MB/s four-lane link ceiling.