Hello everyone!
My current machine gets graphical glitches now and then, so I suspect it could fail completely any day. That's why I'm thinking about buying a new one.
To get to the point: I struggle with the decision concerning SSDs/HDDs. The new Macbooks only come with SSDs and I am really concerned about their reliability.
After doing some research, the topic seems highly controversial. Some claim SSDs could last about 1000 years if you only use them once in a while (which is clearly not the case here; I'd use it hard). Others claim they will most likely fail within a few years (about 5-10).
Either way, even if I end up with an SSD failure after 5 years of usage, I’d still be happy with that.
Most people will advise making backups, which I do anyway, so that shouldn't be the biggest problem.
My main concern is data corruption.
While I haven't found much information on the topic, some suggest that current SSDs have about a 3% corruption rate.
I can't afford to lose 3% of my files. What if one of them is a script with thousands of lines of code that I worked on for weeks?
A good backup solution won’t help me with corrupted files, will it? Because when backing up, the corrupted files are not going to be miraculously cured, are they?
Maybe I don’t really understand what “file corruption” actually is? Or why would anyone use SSDs if there is such a high chance of data loss?
Unfortunately, I can’t find the article that I read anymore. But if I stumble upon it, I will link to it.
Here is an article about 2015 Macbook Pros (including the model I’m interested in) having SSD problems, though:
So now the question: Do you use SSDs for development?
If so, what are your experiences?
Have you encountered data loss / file corruption?
Are my concerns justified?
Or do I lack Austin Powers’s attitude?
@elmar1028 But do you know for sure? Did you use some software to check your files? (I don’t know if that actually exists) Because most of the time, you’re not going to access each and every file that is on your system.
For example (this was on an HDD, but still): A few years ago, I found some audio files on an old machine of mine that I couldn't play back anymore. I guess that's an example of file corruption? I didn't expect files to "go bad" at some point. It was horrible.
And the main problem: I can't seem to do anything about it.
Oh well, maybe I’m just too much affected by that scenario and it’s not even that common. Maybe I just had very bad luck.
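Software like that is easy to improvise, by the way: keep a checksum manifest of your files and re-verify it now and then. A minimal Python sketch (the folder walking and manifest handling here are just an illustration, not an existing tool):

```python
import hashlib
import os

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """Record a checksum for every file under root."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            manifest[path] = sha256_of(path)
    return manifest

def verify(manifest):
    """Return the files whose current checksum no longer matches
    (or which have disappeared entirely)."""
    return [path for path, digest in manifest.items()
            if not os.path.exists(path) or sha256_of(path) != digest]
```

Build the manifest right after a backup you trust, store it somewhere safe (`json.dump` works), and any file `verify()` flags later is one to restore from backup.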
Two years on my main system, and it's one of the potentially unreliable Samsungs. Yes, I keep checking it for problems and can't find any. Slow failures are more worrisome than quick failures, though.
The laptop has been going strong for a year now with a PCIe SSD (Mid-2014 MBP).
Occasional series of drives of any type may have problems, but it’s usually a warranty issue. If it’s something huge, like that line of failing Seagate drives a while back, the manufacturer will usually recall them. Treat EVERYTHING as if it’s about to fail.
SSD reliability has improved vastly since the first generation. I’m not sure which one we’re up to by now, but the first one needed to be powered on every now and then to maintain data (monthly or something like that). Samsung’s latest 850 EVO line (the one with all that 3D magic) is supposedly able to reliably keep data for years without power, and the lifetime of the system is usually shorter than the life expectancy of these drives. Intel also have something along those lines in durability with the latest series, I think.
tl;dr: I would still not trust SSDs for archival storage (which nobody recommends anyway), but for day to day use it’s the only thing you’ll ever want. Unless you’re a hardcore masochist.
Any physical media can have bad sectors/corruption. That’s why we have backups. With all the cloud based incremental backup services out there available for pennies a day you would have to be crazy not to use one as a developer.
@orb
So what would you advise? Using SSD for booting and applications only? And store project data externally on HDDs?
Even if the data lasts, I really don’t want it to get corrupted. Corrupted data lasting for ages has no use either.
But you didn’t run into any corruption. What do you use for checking your files? Maybe I will have to consider some software to check my files like paranoid, so that I can restore from backup ASAP, if anything bad happens?
This is very good advice in any case. I was even thinking about backing up everything twice.
But how would a backup help against data corruption? Alright, if I can access individual files on incremental backups, this may work. On the other hand, I'm not a fan of clouds (as online services). I've read about Time Machine backups being used for restoring individual files, too, though.
SSD? I have a 1TB Samsung as my main drive that I abuse relentlessly. These things don't suddenly quit, but they'll degrade. To you it won't show up as loss of files so much as loss of free space. Plus, backups are automated. Why would I miss out on the performance?
Yes, the loss of free space, too. I read that when it comes to SSD reliability, the bigger the better, because the chance of unusable areas is lower? Then again, Apple charges 600€ for the 1TB SSD upgrade (from 500GB), which is insane.
@Lightning-Zordon
Alright, it seems people actually have good experiences with SSD as well!
@Shushustorm : I use SSDs for everything on the work computers. Backups go to external (USB and networked) drives on automatic, regular backups plus repository check-ins on a server. The only times I’ve had file corruption were because of faulty downloads or HDDs.
If you want to keep tabs on the health of your drives, just look for a utility that reports SMART status. Modern drives all have SMART, which does self-checking and health/temperature monitoring. The operating systems just don’t usually do much with the information, other than reporting catastrophic failure. It’s usually not going to show you the warning signs, because the data reported by each manufacturer differs, and can be hard to interpret. It’s a whole new skill set.
Also run the operating system’s own disk check every now and then. Non-fatal forms of disk corruption can happen to the best of drives, because it’s software-controlled.
NOTE: The 12" MacBook (the one with the lonely port) doesn’t report SMART status. Avoid. Every other computer with SATA, mSATA or PCIe drives does, as far as I know.
@hippocoder : Yeah, fortunately SSDs come with an extra storage pool to allocate from when sectors go bad. And unlike HDDs, there’s no rolling snowball effect once a hard error appears, so it could be just one little teensy block that dies and the rest is fine for years (decades with current tech, allegedly). Or you could be screwed and have received the worst SSD in Scotland.
You can access individual files. If I want to restore a file to the state it was in 10 days ago, I open up my backup software's file browser, select a time when I know the file was good, and click restore. Done.
You don't have to use a cloud service, but you need backups. I went through a long phase (let's say 20 years) where I backed up everything to a local NAS. Then local NAS + cloud. Now I just do cloud, while still using things like GitHub for source code, so it's backed up in 3 locations.
A local NAS is a bit of a pain, as you will get much more frequent disk failures on it, so it requires ongoing maintenance to keep its integrity. And if you get robbed or have a flood, you risk losing everything.
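That "restore the file as it was 10 days ago" workflow can even be improvised without dedicated backup software: copy files into timestamped snapshot folders, then pick the newest snapshot before a cutoff. A rough Python sketch (the `backups/` layout is a made-up convention, not any particular tool's):

```python
import os
import shutil
import time

# Hypothetical layout: each backup run copies a file into a folder named
# after the run's Unix timestamp, e.g. backups/1700000000/script.py
BACKUP_ROOT = "backups"

def snapshot(path):
    """Copy path into a new timestamped snapshot folder."""
    folder = os.path.join(BACKUP_ROOT, str(int(time.time())))
    os.makedirs(folder, exist_ok=True)
    shutil.copy2(path, folder)

def restore(filename, before_timestamp, dest):
    """Restore the newest snapshot of filename taken at or before a cutoff."""
    candidates = sorted(
        int(t) for t in os.listdir(BACKUP_ROOT)
        if int(t) <= before_timestamp
        and os.path.exists(os.path.join(BACKUP_ROOT, t, filename))
    )
    if not candidates:
        raise FileNotFoundError(f"no snapshot of {filename} before {before_timestamp}")
    shutil.copy2(os.path.join(BACKUP_ROOT, str(candidates[-1]), filename), dest)
```

Real incremental backup tools do the same thing more efficiently (deduplication, block-level deltas), but the principle of picking a known-good point in time is identical.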
The stress tests done basically agree that normal everyday use (like for gamedev) is not enough to merit concern over data corruption. Eventually you will begin to lose some free space, but again, under normal circumstances it's not a problem.
A faulty drive causing corruption? Yeah, that could happen, but it's just as possible on an HDD, so there's no point in going 20x slower for the same risk. Just use good backup practices and get on with your life. I've never had an SSD fail or lose a significant amount of space.
There’s a bit of spare space which it’ll take from if sectors go bad. A 500GB drive may actually be 512GB, with 12GB worth of sectors for backup in case of hard errors.
They’re not SATA drives anymore though. 1200 megabytes per second PCIe beasts at the moment. Still pricey, but at least a bit above the average!
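The spare-pool arithmetic from that example is simple enough to spell out (numbers are illustrative; real overprovisioning varies by model):

```python
# Illustrative numbers from the example above: a drive sold as "500 GB"
# built from 512 GB of raw flash keeps the difference as a spare pool
# for remapping blocks that develop hard errors.
raw_gb = 512         # raw flash actually on the board
advertised_gb = 500  # capacity exposed to the operating system

spare_gb = raw_gb - advertised_gb
spare_percent = 100 * spare_gb / raw_gb
print(f"spare pool: {spare_gb} GB ({spare_percent:.2f}% of raw flash)")
```

So a dead block costs you spare capacity first, invisible to the OS, and only starts eating into visible free space once that pool is exhausted.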
Yes, I do see your point here. I just don’t really trust clouds. (Not in a way that it wouldn’t backup safely, but in a way that my data is stored securely from any other access.)
I will have to take another look at it. This sure sounds reasonable.
Maybe I understand the whole technology incorrectly, but don't SSDs always erase and rewrite whole blocks on each write? Which would mean that if your RAM has some failures, all the data you rewrite runs the risk of getting corrupted, right?
Also, I don’t know if I understand the concept of “writing processes” correctly, but: It is bad if you save, for example, a script all the time, isn’t it? Because no matter what I’m working on, I’m saving files very regularly. Probably about every 20-30 seconds. Maybe that’s a bad habit for using SSDs?
So I guess I would be fine with using the 500GB version? I don’t really need 1TB in terms of raw storage that I actually have data on.
There's nothing wrong with that. I've used SSDs on every computer I've had for the last 6 years, both at work and at home, as well as for backup drives, and extensively abused them. I'm actually more concerned about the huge external HDDs I have for backups than the SSDs.
Quite the opposite with TRIM. It optimises writes in such a way that if a file across 50 sectors is deleted, it simply marks them as now available in the index, rather than writing zeroes to them.
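A toy model of what TRIM does, just to make the "mark as available instead of zeroing" idea concrete (this is obviously not how real firmware works):

```python
# Toy model of a TRIM-style delete: returning a file's blocks to the free
# index without writing anything to the blocks themselves.

class ToySSD:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks   # simulated flash contents
        self.free = set(range(num_blocks))  # allocation index
        self.files = {}                     # filename -> list of block numbers

    def write(self, name, data_chunks):
        used = []
        for chunk in data_chunks:
            block = self.free.pop()         # grab any free block
            self.blocks[block] = chunk
            used.append(block)
        self.files[name] = used

    def delete(self, name):
        # TRIM-style delete: the blocks go back into the free set, but the
        # stale data stays in place until a block is actually reused.
        for block in self.files.pop(name):
            self.free.add(block)
```

After `delete()`, the old contents are still physically present in `blocks`; only the index changed, which is why the operation is essentially free.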
Nah, you'll be fine. You'd also be shocked if you saw just how many gigabytes of logs any modern OS writes in a single month! The lifetime of SSDs is measured in petabytes, and the big endurance test somebody did a while back didn't even manage to break the EVO drives they had by the end of a multi-month constant read-write cycle.
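Back-of-the-envelope, that save-every-30-seconds habit is harmless. Assuming a generously sized 100 KB script and a conservative 1 PB endurance figure (both made-up round numbers for illustration):

```python
# Rough, illustrative arithmetic: how much does compulsive saving actually write?
file_size_bytes = 100 * 1024     # a generous 100 KB script
saves_per_hour = 3600 // 25      # saving roughly every 25 seconds
hours_per_day = 8

daily_writes = file_size_bytes * saves_per_hour * hours_per_day
endurance_bytes = 1 * 10**15     # 1 PB, a conservative endurance figure

years_to_wear_out = endurance_bytes / daily_writes / 365
print(f"{daily_writes / 10**6:.0f} MB/day -> ~{years_to_wear_out:.0f} years at this rate")
```

Even with write amplification factored in, the drive's controller will be obsolete long before the flash wears out from saving text files.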
HDDs are the ones which fail big after long-time operation.
If you know you can live with 500GB, get that. Where I live it’s the best price per gigabyte (and within reason as a personal expense, not just a work expense).
I understand, just realize maintaining an equivalent backup service to what you get in the cloud for $5/mo or even free costs quite a bit over time, and still doesn’t deal with offsite backups. My latest NAS doing local backups was a Synology and I probably had a bad drive on it every 6-12 months or so which ended up being a lot of money.
Interesting. That sounds pretty reliable! But why are you concerned about the external HDDs? Do HDDs have some risks that I'm not aware of? I know they are sensitive to shock, but I guess you wouldn't throw your HDDs around?
I see. That’s great! So is TRIM some default setting or would I have to watch out for something and activate a certain setting?
Yea, I’m pretty sure I will not fill those 500GB any time soon. Just in case, I could always upgrade, couldn’t I?
That seems pretty bad, though. I am using Western Digital drives for backups and in about 4 years I had to replace one of 4.
Maybe I'm just overly concerned about it, but it seems like the HDDs I have are getting progressively slower. I haven't done any formal tests, but my SSDs of the same age and usage don't seem to have as much speed degradation. For me, it's more about speed and transfer rates in real time, since I move huge files around constantly (project backups, HDR photography, timelapse RAW photos… gigs upon gigs upon gigs of stuff), so I notice speed changes immediately. I couldn't imagine trying to work with large files on an HDD anymore; it would just be abysmal.
Alright, but isn't that because HDDs get slower the fuller they are, and SSDs don't have that problem because there's no physical seeking to find the files?