Haven’t had any of my SSDs conk out on me (my oldest is about five years old), but I wouldn’t really put a Unity project on an SSD given the amount of data that gets moved around.
I don’t agree with this assessment. I believe the real culprit is that you were running an early SSD. One major problem with SSDs back when they were first starting to gain traction in the consumer space was that they often came with low-quality controllers that were known to fail far sooner than they should have.
OCZ, for example, was selling a 128GB SSD that had a return rate of more than 50%. I don’t know how Intel fares in the enterprise sector, but I know how they are right now in the consumer space, and to be blunt, there is a reason their consumer drives are among the cheapest available. They’re not terribly impressive drives.
I haven’t but then I’m constantly needing more space. My first drive was an ADATA 120GB from half a decade ago which was replaced by a Samsung 850 EVO 512GB one to two years later and I’m currently on a Samsung 860 EVO 1TB that will soon be replaced by a pair of ADATA 2TB NVMe drives.
And that’s not factoring in the performance differences, which would have driven me to newer drives even if the capacity had been adequate. An enterprise drive from 2009 would have had a few thousand IOPS, whereas a modern budget drive will have hundreds of thousands. That’s practically the same gain as going from an HDD to an SSD.
Had one recently, from 2013. But I think it was somehow physically damaged because the drive performance didn’t deteriorate, it just suddenly refused to power on.
At a previous job I did a lot of testing on Intel SSDs from that era for inclusion in our own products. One issue they had was a firmware bug across a large number of their SSD models: if the drive lost power while performing an operation (just writes, I believe, but it has been a while), it would sometimes kill the drive. The typical symptom was that from then on the SSD reported it was only 8 MB in size, with all data lost and no way to get it back. Intel took years to resolve this issue in new products, and it would sometimes pop back up in later products or certain firmware revisions after it was resolved in earlier ones.
(Old work stories…)
We discovered the issue because we would sometimes power cycle machines during tests and started seeing Intel SSD failures. Once the failure rate got a bit high, we looked at our test logs, figured out at what point we thought the drives were failing, and came up with a test where we used power controls on individual drive bays to power cycle SSDs while we were writing to them. With that, we could reliably reproduce the failure across all the Intel SSD models we were testing and a large number of drives.
Of course we talked to our Intel engineering contacts about the issue (no idea if they were already aware of the issue, but my guess is they probably were) but they didn’t have a way to return the drives to normal. We didn’t care about saving data, we just wanted a way to automatically repair a drive which we detected had failed in this way. Then we could just let the RAID rebuild the drive. But nope.
Burning through corporate money destroying SSDs was a whole lot of fun, especially since the tech was really new and expensive at the time. We also did a bunch of write-wear testing where we would write non-stop at max throughput to 30 SSDs all in the same 3U box for a week or two. Watching $12,000+ worth of SSDs all flip into useless read-only mode, just to figure out how many writes we could really do, was quite something.
The drive just disappeared. That simple :). I have disconnected the drive and am waiting for the replacement to show up in the mail before I do anything else.
Thank you for sharing your experiences everyone. It is much appreciated.
No, but I’ve noticed my projects thrash (figuratively, not literally) the drive a lot, which is why I use my SSD for application specific stuff and games. If I get a 2 TB SSD any time soon I’ll probably dedicate it to Unity projects.
Now for the funny thing: I got my new SSD, then plugged the dead drive into a different SATA channel at 3 Gbps instead of 6 Gbps, and it worked again… The Intel SSD tool reports the drive to be at 100% and it only has a lowly 6 TB written. My system drive has 31 TB written.
Right now I am copying all of my data to my new drive. And I am just happy that all of my latest game creations are not lost.
I will do a full diagnostic of the drive and see what the Intel toolbox can tell me.
So, 132 GB and nearly 1 million files read from the drive later, it is still running…
However, the Media Wearout Indicator is at 0… All the error-correction spare space is used up, so basically I will lose data the next time a flash memory cell wears out. The drive is effectively DEAD, at just 6 TB written for a 160 GB drive, after 557 days or 13371 runtime hours. That seems a bit low. And the weird thing is that the Intel Solid State Toolbox reports the drive as 100% healthy and ready for use… Aaah, LOL. Not going to trust this drive with data anymore.
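For anyone wanting to check this on their own drives without a vendor toolbox, the same information is visible in the SMART attribute table that tools like `smartctl -A` (from smartmontools) print. Below is a minimal sketch of parsing that table in Python; the sample text and the assumption that the wear attribute is named `Media_Wearout_Indicator` (attribute 233 on many Intel drives) are illustrative only, since attribute names and IDs vary by vendor.

```python
import re

def parse_smart_attributes(smartctl_output):
    """Parse the attribute table from `smartctl -A` text output into
    a dict of {attribute_name: normalized_value}."""
    attrs = {}
    for line in smartctl_output.splitlines():
        # Attribute rows start with a numeric attribute ID, followed by
        # the name, a flags field, and the normalized value.
        m = re.match(r"\s*(\d+)\s+(\S+)\s+\S+\s+(\d+)", line)
        if m:
            attrs[m.group(2)] = int(m.group(3))
    return attrs

# Illustrative sample resembling smartctl output for a worn-out drive.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       13371
233 Media_Wearout_Indicator 0x0032   000   000   000    Old_age   Always       -       0
"""

attrs = parse_smart_attributes(SAMPLE)
# Normalized wear values count down toward the threshold as cells wear out.
if attrs.get("Media_Wearout_Indicator", 100) <= 1:
    print("WARNING: flash wear reserve exhausted, replace this drive")
```

Run against real output (`smartctl -A /dev/sda`), this kind of check can flag a worn drive even when a vendor tool still reports it as healthy.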
It is normal for SSDs to fail before reaching their rated write endurance. I still love SSDs, and overall they fail far less often than hard drives.
That’s a good call and all but, seriously, don’t trust any single device with your data. Use version control with a remote host (Azure DevOps and GitHub are popular hosts with free plans) or have physically separate, automated backups.
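If setting up proper version control feels like too much for a given project, even a dumb automated snapshot to a second drive beats nothing. Here is a minimal sketch of that idea in Python; the paths in the comment are hypothetical placeholders, not part of any real setup.

```python
import shutil
import time
from pathlib import Path

def snapshot_backup(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`,
    so older snapshots are never overwritten by newer ones."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, dest)
    return dest

# Hypothetical paths: point these at your project folder and a
# physically separate drive, then run it on a schedule.
# snapshot_backup(Path("C:/Projects/MyGame"), Path("D:/Backups"))
```

Scheduling this with Task Scheduler or cron gives you the "physically separate, automated" part; version control with a remote host is still the better option when you can use it.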