Introduction, Specifications and Packaging
2TB of USB 3.1 storage for those on the go
Introduction
Around this same time last year, Samsung launched their Portable SSD T1. This was a nifty little external SSD with very good performance and capabilities. Despite its advantages and the cool factor of a thin, light 1TB SSD that was barely noticeable in your pocket, there was some feedback from consumers that warranted a few tweaks to the design. There was also the need for a new line, as Samsung was switching their V-NAND from 32 to 48 layers, enabling a higher capacity tier for this portable SSD. All of these changes were wrapped up into the new Samsung Portable SSD T3:
Specifications
Most of these specs are identical to the previous T1, with some notable exceptions. Consumer feedback prompted a newer, heavier metal housing, as the T1 (coming in at only 26 grams) was almost too light. With that new housing came a slight increase in dimensions. We will do some side-by-side comparisons later in the review.
Packaging
The packaging is a bit nicer and larger than that of the T1, with a window to see the T3. The new packaging is also better suited for hanging on pegs for display and sale. Here are a few more pics comparing the external packaging to a T1.
Still volatile, cells still leak. That is all.
Magnetic domains still shift, and heads still crash (especially in portable drives). Does that mean all HDDs are volatile as well? V-NAND has proven itself to be far more reliable (even in TLC form) than most other storage mediums. Further, flash that has been cycled only a few times (a use case most T3s would see) has extremely good retention and virtually zero cell leakage, as there have not been enough cycles to cause significant breakdown.
Samsung got a bad rap for their issue where the 840 EVO series was not properly correcting for cell voltage drift (something that *all* flash memory does), but just as hard drives can change amplification of the head output, flash can change its read thresholds to compensate. One type of SSD initially not properly implementing that mechanism does not mean all flash is 'leaky and volatile'.
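To make that read-threshold point concrete, here is a toy numeric sketch (and definitely not Samsung's actual firmware logic; every voltage and the drift amount are made-up values) showing how re-centering the read reference between drifted cell-voltage distributions recovers bits that a fixed threshold would misread:

```python
# Toy model of read-threshold recalibration for cell voltage drift.
# Illustrative only: made-up voltages, not any vendor's real algorithm.
import random

random.seed(0)

# Two nominal cell states (SLC-like for simplicity): erased ~0.5 V, programmed ~2.5 V
cells = [0 if random.random() < 0.5 else 1 for _ in range(10000)]
voltages = [random.gauss(0.5 if bit == 0 else 2.5, 0.15) for bit in cells]

# Time on a shelf: stored charge leaks, so programmed cells drift downward
drift = 0.8
aged = [v - drift if bit == 1 else v for v, bit in zip(voltages, cells)]

def read(vs, threshold):
    return [1 if v > threshold else 0 for v in vs]

fixed_threshold = 1.5                 # original midpoint between the two states
recal_threshold = 1.5 - drift / 2     # reference shifted to track the drift

errors_fixed = sum(a != b for a, b in zip(read(aged, fixed_threshold), cells))
errors_recal = sum(a != b for a, b in zip(read(aged, recal_threshold), cells))

print(f"raw bit errors with the original threshold:     {errors_fixed}")
print(f"raw bit errors with the recalibrated threshold: {errors_recal}")
```

In this toy model the fixed threshold misreads a few hundred of the drifted cells, while the recalibrated one reads them all back correctly, which is the same compensation idea described above.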
I would say characterizing the response to the 840 problems as a bad rap isn’t accurate. It suggests that the response to the problem is somehow undeserved, but it was a huge misstep by Samsung, one they have never been able to truly fix in the affected models. More importantly, it doesn’t appear that any other SSD manufacturer has shipped a model to consumers which suffers from this problem.
At this point Samsung needs to earn back trust by shipping problem-free drives. It could be worse; IBM botched one drive and had to abandon the hard drive market.
When Samsung finally fixed the issue correctly, read speeds were brought back up to full speed *without the need for an optimization pass*, meaning yes, it was fixed. Due to the previous issue, the 840 SSDs also came out of it with a means to optimize performance further than any mechanism employed by competing SSDs, so the stopgap for the fault was converted into an advantage. It's also fairly clear that it has been fixed, given the sharp decline in activity in the forum threads discussing the issue.
As far as Samsung having to earn back trust, they remain the top seller of SSDs by a fairly significant margin, so you might be in the minority there.
Dude, that’s a pretty weak “argument” on “HDDs not being a 100% reliable solution, too”. Unless you’re DELIBERATELY putting your HDD near a strong magnet or electromagnetic current… sigh, why do I even bother? We both know how it all works in reality. As for “head failures”… maaan, I ain’t touching that Western Digital TRASH with a six-foot pole (at least not since that “Green Caviar parking” fiasco of theirs). Nowadays I mainly use only old Hitachis, enterprise-level Constellation Seagates and He8 HGSTs as my external backups and libraries. WDs can suck my ass.

And as for SSDs, I’ve said it before many times and I will never get tired of repeating it: the mere fact that SSD technology itself is not yet reliable enough to be 100% non-volatile makes ALL SSDs a very inappropriate choice when it comes down to long-term external storage of information. Yes, you can store one on a shelf or in a closet for a couple of months just fine, and all the data would probably be retained in its perfect form when you power it on after that period of time, but that is still NOWHERE even remotely near any HDD. I’m the kind of person who tends to write new data to my external storage (or update/revise it) just once every 8~24 months, the vast majority of the time just letting those drives sit on a shelf and “collect dust” for a while, and I just know that even the most “reliable” and “top-notch quality” SSD out there is still NOT reliable enough to earn that amount of trust in its long-term storage capabilities, at least at this particular point in time; at least in my eyes it’s not. That was my point. SSDs ARE great IF you’re using them often, no doubt about that, but as long-term external solutions that you might power off and put on a shelf for half a year or longer, simply NOT YET.
You're claiming that SSDs wouldn't last half a year on a shelf, but if they don't, they are technically defective, even if they were at the very end of their flash write endurance / wear cycle count. Even the 840 EVOs retained data just fine (without errors) for longer than 6 months, but they read the data more slowly due to a bug that prevented proper adjustment of read thresholds. Point is that they *still read the data* after 6 months of sitting on a shelf, and those are SSDs that had actual issues that needed correcting.
The JEDEC standard that all client SSDs are engineered to meet as a *minimum* mandates that flash at its *end of life* retain data for a minimum of *one year* in a powered-down state. If that SSD expended its full write cycle count at elevated temperatures (less effective wear on the flash), that figure climbs as high as 7 years. Now if you took brand new flash and only cycled its cells a few times, it would easily store data for longer than it would take for a stored hard drive to begin experiencing mechanical failure (bearing oil evaporation, etc.). I've seen quotes as high as 20-year retention for flash cycled once, and that was even with the flash kept at 55°C (higher storage temperatures are bad for long-term retention, as leakage of stored electrons is accelerated). Cycle good quality flash memory just a few times and then store it at room temperature and that data will very likely outlive you. In fact, when used just for archival purposes in this way, flash memory retention / reliability looks very similar to the results seen in this HDD archival storage study.
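For a rough sense of how strongly storage temperature factors into those retention figures, here is a back-of-the-envelope sketch using an Arrhenius-style acceleration model. The 1.1 eV activation energy and the 52-weeks-at-30°C baseline are assumptions chosen only to illustrate the trend, not official JEDEC numbers:

```python
# Back-of-the-envelope Arrhenius-style scaling of retention vs storage temperature.
# EA and the baseline are illustrative assumptions, not published JEDEC figures.
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
EA  = 1.1        # assumed activation energy for charge loss, eV

def acceleration(t_ref_c, t_c):
    """Arrhenius acceleration of charge loss at t_c relative to t_ref_c (degrees C)."""
    t_ref, t = t_ref_c + 273.15, t_c + 273.15
    return math.exp((EA / K_B) * (1.0 / t_ref - 1.0 / t))

# Assumed baseline: ~52 weeks of retention for fully worn flash stored at 30 C
baseline_weeks, baseline_temp_c = 52, 30

for temp_c in (25, 30, 40, 55):
    weeks = baseline_weeks / acceleration(baseline_temp_c, temp_c)
    print(f"storage at {temp_c:>2} C -> ~{weeks:6.1f} weeks retention (same wear state)")
```

Under those assumptions, 55°C storage cuts retention to a few weeks while room temperature pushes it well past the one-year baseline, which is the same direction of effect described above.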
You should be aware that enterprise storage devices are typically not engineered for archival / general client usage (unless specifically labeled as archive / cold storage units). He8s also haven't been proven to be more or less reliable than any other HDD tech.
I get that you've got personal experience and preferences here, but when it comes to HDD vs SSD reliability and data retention endurance, you seem to be basing your opinions solely on your own cognitive bias and not on actually running the numbers. As for my own personal data points, the X25-M I pounded on 7 years ago has sat in a box for the past 5 years and had no issues reading its data when I imaged it a couple of months ago.
That is WAAAY overblown out of realistic proportion. It MIGHT get as far as “one year” in the absolutely perfect conditions and environment of a test lab (that is what JEDEC’s “specs” are based on, BTW FYI), but it pretty obviously doesn’t take into consideration everyday real-life usage by an average person in a typical environment and/or conditions. I’m NOT saying that “one year” is a lie, I’m just saying it’s NOT realistic, being an approximation calculated from testing performed in a lab with a perfect environment and conditions, while the real world really doesn’t work the same way (quite the opposite, actually). SSDs are quite a bit more vulnerable than you might think (when it comes down to long-term data storage in an external enclosure and a completely unpowered state). I’d give it up to 8 or 10 months depending on the cell type, but the so-called “one year” is quite unrealistic if the average user’s environment and conditions are taken into consideration.
Actually, the JEDEC spec takes environmental issues into consideration, and does so very well, even to the point where it documents the conditions that could result in data retention as low as 1 week. Does that ever happen? Well, you would have to expend the SSD's full cycle count at very low temperatures (amplifying cell wear), and then store it at very high temperatures (amplifying cell leakage).
You talk as if 'test conditions' are in a nice cushy room-temperature lab. SSD companies test by accelerating wear by writing to small portions of the flash (at high or low temperatures) and then cook them in an oven for months. Intel even bombards theirs with a particle accelerator. It's safe to say that most test conditions applied to SSDs are of the worst possible case accelerated sort, not some cushy office building.
Seriously though, do you think the entire spec that billion-dollar companies are testing their products around is so grossly inaccurate compared to your opinion on this particular matter? Virtually every single OS install on every single installed SSD has vital OS data and kernel files stored in areas that might not be rewritten for months or years. Based on your figures, these systems would crash during boot only 8-10 months after that OS install. With SSDs being in the wild for nearly a decade now, don't you think this would be a widespread / well-known issue? Thumb drives and SD cards also use flash. Surely *someone* has had a thumb drive / SD card in a drawer for a few years and tried to read their data back (successfully). This claim you are making would have catastrophic ramifications for flash memory. Since the above epidemic is clearly not happening in the real world, I can only conclude that your repeated claims are grossly incorrect.
I don't understand why you are trying to disprove someone who reviews these things for a living and has hundreds of data points absolutely contradicting your claims. As a matter of fact, I can't come up with a single data point that corroborates your own beliefs on this matter. If I could, I would, as it would make for a hell of a story.
http://www.techpowerup.com/220432/high-end-slc-ssds-no-more-reliable-than-mlc-ssds-google-study.html
BOOM, MOFOS!
I've read that study. Its results are based on component failures, not your FUD theory of all SSDs losing their data within 6 months. Further, the paper is analyzing enterprise applications / parts, which are engineered to lower retention standards, as the data is active and not archival. Finally, here is a quote from the paper you cited:
That counters your previous claims that HDDs are more reliable than flash. The paper also discusses higher UBER for flash compared to HDD, but remember, their sample set is based on enterprise-rated flash (again, with lower retention ratings and significantly higher wear than consumer flash). They also note chip failures, which typically come from high operating temps (I've killed a Fusion-io SSD by running it hot – same type of failure, too – BGA lifted due to thermal stress).
Here's a guy who had stagnant data on an EVO for a year. It was reading slower due to the bug, but was later fixed by updating to the revision D firmware. Main point: data was still fully readable 1 year later. Also, that thread was huge and driven by the firmware bug, and yet nobody in that thread reported unreadable data. That thread is a place where lots of folks gripe about EVOs and SSDs in general. If their SSDs were just losing data after 6 months, that thread would be a lot more active than it is now.
Enterprise-class SSDs may have just a 3-month data retention requirement, but the UBER requirement is usually stricter than for consumer-class SSDs (fewer allowed uncorrectable bit errors per bits read).
What temperatures would you recommend for an SSD?
Mine often tend to idle around 40-50°C, except maybe for my MX200, which is pretty consistently hotter than the others and is right now idling at 57°C, with the second hottest idling at 47°C.
Remember that the reported temp is that of the controller. Typically the flash runs cooler except during sustained writes. Warmer is better when the flash is active and cooler is better when it is offline. If you want to get a feel for what temperature corresponds to what level of retention, the JEDEC slide (original source offline) is mirrored in the PCWorld article entitled "Debunked: Your SSD won't lose data if left unplugged after all".
Saw that slide when everyone was afraid that their SSD might not retain data for more than a week.
Despite the fact that practically no consumer wears out their SSD (anymore) and the temperatures in that scenario being pretty unrealistic.
And I’m aware that data retention is improved if it is warm during use and cold when offline.
My concern was more with what you said about high operating temperatures killing chips.
Since my SSDs are in these IcyDock 4 x 2.5-inch enclosures without any airflow or any room to breathe, well, I can imagine it can get pretty hot.
Still, they mostly get to idle even if I do some long sustained writes now and then.
I use the same type of enclosure (with the fan off) in my home PC. Still more than sufficient passive cooling for typical SSDs, especially in consumer (mostly idle) use.
A big part of why I like SSDs is that they’re silent.
Using a fan to cool them would just ruin that so it’s good to know they will be okay.
It’s called the T3 right?
Headline says T2.
Apparently I was too happy about the 2(TB). Fixed!
I was hoping to find your fancy new figures with percentiles here, Allyn. Are those not possible with USB, or just not important?
I can test USB devices with the current system, but it must be done on an in-place file and carries with it additional latency added by the OS file system translation. Without other USB3 devices to compare the results to, the data would not have been very useful standing alone, and comparing the T3 to something like its bare internal mSATA SSD would only serve to show deficiencies present in all USB-bridged devices. I plan to highlight this sort of thing in the future, just not in this particular review. For now I've got to keep working through the backlog!
To satisfy your curiosity though, the T3 would make the same sort of vertical line produced by other Samsung EVO products, only a bit further to the right due to the file system and USB latency. USB doesn't really make things any less consistent, but each IO takes a (~fixed) additional amount of time to complete.
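To illustrate that last point, here is a quick simulated sketch (the latency figures are invented for illustration, not measured T3 data): adding a roughly fixed per-IO bridge / file system cost shifts the whole percentile curve to the right but leaves its width, i.e. the consistency, unchanged.

```python
# Why a (roughly) fixed per-IO USB bridge / file system overhead shifts a latency
# percentile curve right without making it any less consistent. Invented numbers.
import random
import statistics

random.seed(1)

# Hypothetical native SSD 4K read latencies, in microseconds
native = [max(20.0, random.gauss(60.0, 5.0)) for _ in range(100_000)]

# The same device behind a USB bridge: assume each IO pays a ~fixed extra cost
bridge_overhead_us = 100.0
bridged = [lat + bridge_overhead_us for lat in native]

for label, data in (("native", native), ("USB-bridged", bridged)):
    qs = statistics.quantiles(data, n=100)
    p50, p99 = qs[49], qs[98]
    print(f"{label:>12}: p50={p50:7.1f} us  p99={p99:7.1f} us  p99-p50={p99 - p50:5.1f} us")
```

The p99-p50 spread stays the same for both runs, so the "vertical line" just lands further to the right, as described above.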
thx for the response, and good luck with your backlog!
I much prefer the profile of the T3. The rounded rectangle looks nicer to me than the ellipsoid of the T1. It is still priced as a premium product though T_T
That is a beautiful device. TB class USB sticks are not that far away.
How often must this be plugged in to maintain the data?
I can’t find any reliable information. Even if it was “only” a few years, I think it would be important, as many people don’t know it’s volatile and may put important data like pictures and videos on it for long-term storage.
I wonder if an INTERNAL BATTERY would help?
Intel’s 3D XPoint, once cheap, will be nice (or something similar that’s non-volatile).
Modern MLC-based SSDs usually have an average data retention endurance of ~6 months… but that’s MLC we’re talking about, not the recent TLC garbage everyone’s been putting in their low-tier and cheapest offerings. SLC, on the other hand, has a data retention span of at least 10 months, if I remember correctly (but SLC’s waaay more expensive, due to… reasons). Either way, that’s NOWHERE even remotely near even the crappiest of HDDs, sheer data retention longevity-wise, so SSDs are still a very big NO-NO when it comes down to long-term storage of data in a completely unpowered device. And that won’t change anytime soon (not until 3D XPoint or quartz crystals come into actual play, that’s for sure). For this “problem” to be completely mitigated, we need a 100% non-volatile external device that is also VERY affordable (like $0.15 per GB, or even cheaper), and obviously even NVMe can’t provide us with such a solution, at least not in the nearest time frame. That’s why SSDs really can’t be considered a feasible long-term external solution right now, and won’t be for quite a few years to come. Just buy a couple of 4TB or 8TB HDDs; that would be a much, MUCH better route to go.
Citation needed. Badly. Also, you're flat wrong. The closest you can possibly get to this is that enterprise SSDs are EOL-rated to retain data for 3 months, but this spec is specific to the enterprise rating and is meant for SSDs that are hammered 24/7 for their entire useful lifetime, *not* the client rating of 1 year minimum at flash EOL (full wear). Please stop spreading SSD FUD. More counterpoints to this in my previous reply.
This sort of thing I’m always glad to see. I’m used to dealing with sites that have poor or no connectivity and require massive data transfers, so faster, more capacity and smaller form factors work well for me.
Storage is another issue, but correct me if I’m wrong: don’t more layers improve data retention? I’m also a little surprised it has only a 3-year warranty, since the Pros had 10 years with 32 layers. Is that because of handling issues?
At any rate, I think Allyn’s right. Data retention is almost a non-issue in most cases for this, and V-NAND helps with that-
http://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention
http://www.samsung.com/global/business/semiconductor/minisite/SSD/global/html/ssd850pro/3dvnand.html
Practically speaking, how many of you are going to use these drives for data storage and stick them in your closet for the grandkids anyhow? This is a glorified and pricey USB key that one had better have mobile uses for. AFAIC, it’s a data transfer device, not a storage one.
Samsung backed off to the more common 5 years for their Pro stuff and 3 years for their 'lower grade' (750) EVO stuff. 3 years is also reasonable given that this is an external device that sees more potential physical abuse. The rating is likely connected not to the flash itself but to the wearable physical items (connector, etc.).
V-NAND helps with endurance (claimed double by Samsung) as a side effect of the layers. 3D layers can have larger cell volume to store the charge. There is also more surface area for leakage, but the volume increase wins out. Micron recently announced 30,000-cycle endurance for their upcoming 3D NAND, but remember that a 1-year retention rating is based on all 30,000 of those cycles being expended. Things are very different when you've only written to the flash a few times.
Nice. I wasn’t aware of some of that.
And yeah, you’re very likely not going to be using this as a primary or server drive and getting anywhere close to expending those cycles.
FWIW- you did a gj with this.
Well, Samsung claimed an endurance of 30000 write cycles for their 3D-NAND too.
I think that the more conservative estimates from Micron that you can see over at Hexus will be a lot closer to what consumers will get.
Link here: http://hexus.net/tech/news/storage/90521-intels-ssd-range-benefit-greater-capacity-speed/
I have a 120GB Samsung 840 EVO that has had almost 16TB... that's right... almost 16TB written to it according to Samsung Magician, and I'm still getting over 500MB/s read and over 400MB/s write according to ATTO. All you FUD spreaders need to STFU and GTFO... sick of it.
Your read speeds would have to degrade in a matter of minutes for ATTO to report that there is something wrong with them, since ATTO reads files that it has just recently written.
That is why, if you want to test for read speed slowdowns, you have to use a benchmark that performs reads without writing to the SSD first, such as HDTune or HDTach.
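For the curious, a minimal read-only throughput check along those lines can also be scripted. This is just a sketch: the file path is hypothetical, and the target file needs to be out of the OS page cache (fresh boot or dropped caches) for the result to reflect the SSD rather than RAM.

```python
# Minimal sketch of a read-only throughput check on an existing (stale) file,
# in the spirit of HDTune/HDTach-style read tests: no writes are issued first.
import sys
import time

def read_throughput(path, chunk_size=4 * 1024 * 1024):
    """Sequentially read an existing file and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # Hypothetical target: any large file that has been sitting untouched on the drive
    target = sys.argv[1] if len(sys.argv) > 1 else "old_backup_from_2014.img"
    print(f"sequential read of {target}: {read_throughput(target):.1f} MB/s")
```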
Driver needed for Mac? Check out the warning for the previous model at
Mac Owners Should Hold Off on New Samsung T1 Flash SSD
http://www.macobserver.com/tmo/article/mac-owners-should-hold-off-on-new-samsung-t1-flash-ssd
Can it be used to boot Mac and work from it all day long?
RAID 0 inside, as in SanDisk’s 1.92TB Extreme 900 Portable SSD? That is the best way to lose data (2x failure probability or more). One disk (or the controller) fails, and all is lost.
Samsung has support for Mac on the T3, and it's using similar software to what the T1 used, so everything should be up to speed at this point.
Has there been any information on whether Samsung is going to release the 2TB mSATA drive for the OEM or consumer markets? Or whether one could "salvage" the mSATA drive from the T3 and use it normally, without any adverse effects or reduced performance compared to an 840 EVO or 850 EVO mSATA?