SSD death and unity - old MLC drive just quit

Discussion in 'General Discussion' started by Mjello, Dec 2, 2019.

  1. Mjello

    Mjello

    Joined:
    Mar 26, 2018
    Posts:
    35
    So SSDs do not last forever. My old 160 GB Intel MLC SSD just died. The drive is old though, from 2009.

    It served its last three years as my Unity work area. I think Unity is pretty tough on drives.

    Details about the drive
    https://ark.intel.com/content/www/u...m-series-160gb-2-5in-sata-3gb-s-34nm-mlc.html

    Anyone else experiencing SSD death? How old was your drive, and how much data was written to it in its lifetime?

    The drive lasted 11 years before quitting, which I think is pretty good.

    I will update this thread with drive reads and writes once I start the data recovery process, if that is at all possible.
     
  2. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,083
    Haven't had any of my SSDs conk out on me (my oldest is about five years old), but I wouldn't really put a Unity project on an SSD given the amount of data that gets moved around.
     
    Mjello likes this.
  3. Joe-Censored

    Joe-Censored

    Joined:
    Mar 26, 2013
    Posts:
    11,847
    What do you mean it "just died"? Intel SSDs flip to read-only mode when they have reached their maximum number of write cycles.
     
    Mjello likes this.
  4. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    I don't agree with this assessment. I believe the real culprit is that you were running an early SSD. One major problem with SSDs, back when they were first starting to gain traction in the consumer space, is that they often came with low-quality controllers that were known to fail far sooner than they should have.

    OCZ, for example, was selling a 128GB SSD that had a return rate of more than 50%. I don't know how Intel is in the business sectors, but I know how they are right now in the consumer space, and to be blunt, there is a reason their consumer drives are among the cheapest available. They're not terribly impressive drives.

    I haven't, but then I'm constantly needing more space. My first drive was an ADATA 120GB from half a decade ago, which was replaced by a Samsung 850 EVO 512GB one to two years later, and I'm currently on a Samsung 860 EVO 1TB that will soon be replaced by a pair of ADATA 2TB NVMe drives.

    And that's not factoring in the performance differences, which would have driven me to newer drives if the capacity had been adequate. An enterprise drive from 2009 would have had a few thousand IOPS, whereas a modern budget drive will have hundreds of thousands of IOPS. That's practically the same gain as going from an HDD to an SSD.
     
    Last edited: Dec 3, 2019
    Mjello likes this.
  5. ikazrima

    ikazrima

    Joined:
    Feb 11, 2014
    Posts:
    320
    Had one recently, from 2013. But I think it was somehow physically damaged, because the drive's performance didn't deteriorate; it just suddenly refused to power on.
     
    Mjello likes this.
  6. Joe-Censored

    Joe-Censored

    Joined:
    Mar 26, 2013
    Posts:
    11,847
    At a previous job I did a lot of testing on Intel SSDs from that era for inclusion in our own products. One issue they had was a firmware bug across a large number of their SSD models where, if the drive lost power while performing an operation (just writes I believe, but it has been a while), it would sometimes kill the drive. The typical symptom was that the SSD from then on reported it was only 8 MB in size, with all data lost and no way to get it back. Intel took years to resolve this issue in new products, and it would sometimes pop back up in later products or certain firmware revisions after it was resolved in earlier ones.

    (Old work stories....)

    We discovered the issue because we would sometimes power cycle machines during tests and started seeing Intel SSD failures. After the failure rate got a bit high, we looked at our test logs, figured out at what point we thought the drives were failing, and came up with a test where we would use power controls on individual drive bays to power cycle SSDs while we were writing to them. With that we could reliably reproduce the failure across all the Intel SSD models we were testing and a large number of drives.

    Of course we talked to our Intel engineering contacts about the issue (no idea if they were already aware of it, but my guess is they probably were), but they didn't have a way to return the drives to normal. We didn't care about saving data; we just wanted a way to automatically repair a drive which we detected had failed in this way. Then we could just let the RAID rebuild the drive. But nope.

    Burning through corporate money destroying SSDs was a whole lot of fun, especially since the tech was really new and expensive at the time. We also did a bunch of write-wear testing where we would write non-stop at max throughput to 30 SSDs all in the same 3U box for a week or two, which was fun, especially watching that $12,000+ in SSDs all flip to useless read-only mode just to figure out how many writes we could really do.
     
    Last edited: Dec 3, 2019
    Kiwasi, Mjello and angrypenguin like this.
  7. Mjello

    Mjello

    Joined:
    Mar 26, 2018
    Posts:
    35
    The drive just disappeared. That simple :). I have disconnected the drive and am waiting for the replacement to show up in the mail before I do anything else.

    Thank you for sharing your experiences everyone. It is much appreciated.
     
    Joe-Censored likes this.
  8. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
  9. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    bug or corporate feature?
     
    Antypodish likes this.
  10. Kondor0

    Kondor0

    Joined:
    Feb 20, 2010
    Posts:
    601
    Crap, I have 3 Unity projects on my SSD. Am I dead?
     
    Joe-Censored likes this.
    All of my projects (not just Unity, but other C# and Java projects too) are on SSDs. Frequently building. No SSD death here. I'm probably doing it wrong.
     
    Joe-Censored likes this.
  12. sxa

    sxa

    Joined:
    Aug 8, 2014
    Posts:
    741
    Only if the words 'backup backup backup and offsite backup' mean nothing to you.
     
  13. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,083
    No, but I've noticed my projects thrash (figuratively, not literally) the drive a lot, which is why I use my SSD for application-specific stuff and games. If I get a 2 TB SSD any time soon I'll probably dedicate it to Unity projects.
     
  14. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    I always use SSD drives to store Unity projects. I take frequent backups of everything important to me regardless of where it is stored.

    At this point, I never use hard drives to store Unity projects, because hard drives are very slow.
     
    angrypenguin likes this.
  15. Mjello

    Mjello

    Joined:
    Mar 26, 2018
    Posts:
    35
    No reason not to use SSDs. They are fast and reliable. I have had many dead HDDs. This is my first SSD giving me trouble, and it is 11 years old.
     
    Ryiah likes this.
  16. Mjello

    Mjello

    Joined:
    Mar 26, 2018
    Posts:
    35
    Now to the funny thing. I got my new SSD. Then I plugged the dead drive into a different SATA port running at 3 Gbps instead of 6 Gbps and it worked again... The Intel SSD tool reports the drive to be at 100% and it only has a lowly 6 TB written. My system drive has 31 TB written.

    Right now I am copying all of my data to my new drive. And I am just happy that all of my latest game creations are not lost. :D

    I will do a full diagnostic of the drive and see what the Intel toolbox can tell me.
     
    Ryiah likes this.
  17. Mjello

    Mjello

    Joined:
    Mar 26, 2018
    Posts:
    35
    So, 132 GB and nearly 1 million files read from the drive later, and it is still running...

    However, the media wearout indicator is at 0... All the error-correction space is used up... So basically I will get data loss the next time a flash memory cell wears out. So the drive is DEAD, at just 6 TB written for a 160 GB drive, and 557 days or 13,371 runtime hours. That seems a bit low. And the weird thing is, the Intel Solid State Toolbox reports the drive to be 100% healthy and ready for use :confused:... Aaah LOL. Not going to trust this drive with data anymore.
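
    For anyone who wants to pull those same numbers without a vendor tool, here is a minimal sketch using Python plus smartmontools. It assumes smartctl is installed, the script has sufficient privileges, and the drive is /dev/sda; the attribute names shown are the Intel-era ones and vary by vendor.

    ```python
    # Rough sketch: print the SMART attributes discussed above.
    # Assumes smartmontools is installed and /dev/sda is the SSD in question.
    import subprocess

    ATTRS_OF_INTEREST = (
        "Media_Wearout_Indicator",  # Intel: starts at 100, hits 0 when endurance is gone
        "Power_On_Hours",
        "Host_Writes_32MiB",        # Intel-era name; other vendors expose Total_LBAs_Written
    )

    # No check=True here: smartctl's exit status encodes health flags, not just errors.
    report = subprocess.run(
        ["smartctl", "-A", "/dev/sda"],
        capture_output=True, text=True,
    ).stdout

    for line in report.splitlines():
        if any(attr in line for attr in ATTRS_OF_INTEREST):
            print(line)
    ```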
     
    Ryiah likes this.
  18. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    It is normal for SSDs to fail before using up all of their writes. I still love SSDs, and overall they fail far less often than hard drives.
     
  19. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    That's a good call and all, but seriously, don't trust any single device with your data. Use version control with a remote host (Azure DevOps and GitHub are popular hosts with free plans) or have physically separate, automated backups.
     
    Mjello and Ryiah like this.
  20. Deleted User

    Deleted User

    Guest

    Well, we all back up our projects, don't we? :p
     
  21. Mjello

    Mjello

    Joined:
    Mar 26, 2018
    Posts:
    35
    I do use backups :). And as you say, always do that with something you care about. It was just the latest stuff that would have been lost. I only do this stuff as a hobby, so whenever I have done a weekend of playing in Unity I make a backup of the most important stuff... But seriously, my source library of all sorts of stuff I have found on the web over the last three years... I never realised how much I would have missed it... Fortunately this was a very easy recovery.

    I remember spending all day long trying to get just a few GB out of a failing HDD going tick tick tick every 30 seconds and reading a few MB at a time. Thank you, all you hard-working scientists and engineers, for coming up with flash memory.

    And thanks to all of you for listening and replying to my little crisis :D.
     
  22. Deleted User

    Deleted User

    Guest

    Mjello likes this.
  23. ikazrima

    ikazrima

    Joined:
    Feb 11, 2014
    Posts:
    320
    Using GitHub definitely doesn't fit into the "trusting blindly" category; it's widely adopted in the industry.

    How does version control work with Mega? Do you sync by folder like Google Drive / OneDrive?
     
    angrypenguin likes this.
  24. Deleted User

    Deleted User

    Guest

    There is no such thing as version control; you upload what you want to keep safe, overwrite existing content, or give the new content another name. Mega is just for storing, not for working in teams, which is fine by me since I work alone. :)
     
  25. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    And none of us are recommending that you should trust them blindly.

    That said you shouldn't blindly trust these either. Mega, for example, had a breach last year where thousands of users had their login credentials stolen.

    https://www.zdnet.com/article/thousands-of-mega-logins-dumped-online-exposing-user-files/

    Where are you keeping them? If the answer is "in the same location as the machine" then that's not a backup.

    Just because we can't achieve 100%, though, doesn't mean we shouldn't be trying for 100%. Using an external drive and then keeping that external drive in the same location as your machine is nowhere near 100%. Using Mega and nothing else for your remote backup is nowhere near 100%.

    Using a version control service with a script running on a VPS that backs up the data to a completely different cloud service is a far superior approach. But if you truly care about getting close to 100%, you would have a version control service with a mirror that backs up data to multiple cloud services.

    While you can't achieve 100%, you can definitely achieve 99.99%.
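
    As a rough illustration of that idea (not a recommendation of any particular service), here is a sketch of the kind of script that could run on a VPS: it keeps a bare mirror of a hosted repository up to date and drops a dated archive into a folder that a separate sync tool (rclone, for instance) ships off to a second cloud provider. The URL and paths are placeholders.

    ```python
    # Sketch of a VPS-side backup job: mirror a hosted Git repo and archive it.
    # REPO_URL and BACKUP_DIR are hypothetical; git must be installed.
    import subprocess
    import tarfile
    from datetime import date
    from pathlib import Path

    REPO_URL = "git@example.com:me/my-unity-project.git"  # placeholder remote
    BACKUP_DIR = Path("/srv/backups")                      # placeholder folder synced elsewhere
    MIRROR = BACKUP_DIR / "my-unity-project.git"

    BACKUP_DIR.mkdir(parents=True, exist_ok=True)

    if MIRROR.exists():
        # Refresh the existing bare mirror with everything currently on the remote.
        subprocess.run(["git", "--git-dir", str(MIRROR), "remote", "update", "--prune"], check=True)
    else:
        # First run: create a full bare mirror of the hosted repository.
        subprocess.run(["git", "clone", "--mirror", REPO_URL, str(MIRROR)], check=True)

    # Snapshot the mirror into a dated archive for the second storage location.
    archive = BACKUP_DIR / f"my-unity-project-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(MIRROR, arcname=MIRROR.name)
    print(f"Wrote {archive}")
    ```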

    Anyone who believes version control exists for teams only doesn't understand version control. It's not just about being able to distribute files to other developers. It's about being able to see every change you've ever made, as well as restore the project to any of those earlier states. It's invaluable for discovering bugs and regressions.
     
    neoshaman and angrypenguin like this.
  26. Deleted User

    Deleted User

    Guest

    I see that you consider me stupid. End of conversation.
     
  27. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    I asked a question because you didn't provide sufficient information. Additionally, you're not the only person reading that response. A public response has to be written, to some degree, with the public in mind and not just the person tagged or quoted.
     
    angrypenguin likes this.
  28. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,754
    While I follow your previous points, I wouldn't agree on that one.
    Having a copy of files on any additional drive / storage is already a backup. Location is secondary in such a case. It reduces the chances of data loss when one of the drives fails.
     
  29. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    Version control is definitely useful when working in a team, but that's not its only purpose.

    This is a good habit to be in, and most people don't do it, so well done there.

    I still suggest taking a look at GitHub or something like it. For me, when I've done something on a project I right-click the folder it is in and click "Commit". I type in a short description of what I've done, then click another button labelled "Commit" then one labelled "Push". My Git client packages up all of my changes since last time and sends them off to the server.

    The whole process usually takes a few seconds, so I often do it multiple times per day. Once for "Fixed bug X." Again for "Added animation to Y." "Implemented Z feature." and so on.

    Because you're writing those little comments with each change set, one of the cool things is that you now get a complete history of your project on the server. It lists out all of those comments, plus all of the changes to every file committed along that comment. That doesn't sound very useful at first, but once you're used to it being there it has a bunch of benefits.

    Most relevant here, though, is that sending your changes to a remote machine becomes a trivially tiny task, so you're likely to do it more often, and lose less in case something goes wrong.
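
    For anyone curious, that whole flow is also trivial to script. Here is a minimal sketch in Python (not any particular Git client), assuming git is on PATH, the current directory is already a repository, and a remote named origin is configured:

    ```python
    # quick_push.py - stage, commit and push everything in one go.
    # A sketch only; git must be installed and a remote "origin" configured.
    import subprocess
    import sys

    message = sys.argv[1] if len(sys.argv) > 1 else "Work in progress"

    subprocess.run(["git", "add", "--all"], check=True)            # stage every change
    subprocess.run(["git", "commit", "-m", message], check=True)   # fails if there is nothing to commit
    subprocess.run(["git", "push", "origin", "HEAD"], check=True)  # send it to the server
    ```

    Running something like python quick_push.py "Fixed bug X" then does the same job as the right-click workflow above.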

    This is the kind of thing that a backup system is better suited for. It probably doesn't change as often, and there's probably little value in tracking the history of changes over time (which has overheads).
     
    Ryiah likes this.
  30. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,083
    Why is it that these threads always draw people out of the woodwork who have no idea how to do proper backups or what version control is?
     
    xVergilx and Ryiah like this.
  31. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    My definition of a backup is the same as the one on Wikipedia ("a copy of data taken and stored elsewhere so that it may be used to restore the original after a data loss"), and thus, to me, if it isn't intended to prevent data loss, it isn't a backup.

    https://en.wikipedia.org/wiki/Backup

    That doesn't mean I don't keep copies of files and projects on external devices within easy reach. It's just that I do it solely for the convenience and not with the expectation that it will save me if the original is lost.
     
    Last edited: Dec 16, 2019
  32. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,754
    Perhaps I misinterpreted your statement. Rereading it, I think you meant something like storing on the same drive, for example, rather than externally, while initially I just took "same location" to mean the same home/office :)
     
  33. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    I meant in the same physical location, because depending on where you live in the world a break-in, fire, natural disaster, etc. is a real possibility. Having a good backup solution in the event that it occurs is just as important as having good insurance.
     
    Last edited: Dec 16, 2019
  34. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    "Elsewhere" is lovely and ambiguous.

    You're talking about low-quality backups as if they aren't backups at all. I tend to agree that's the practical outcome*, but technically it's still a backup. It's worth acknowledging that there's a quality spectrum when it comes to backing things up. There are good backups and there are bad backups.

    Importantly, what makes a high quality backup will be different depending on your circumstances. In most cases you want physically separate copies of data in at least two physically separate locations. However, in some cases that makes your security worse depending on what your risks and/or threats are. I used to do work for clients where we specifically had to keep all of their data in our physical control, so for those projects our backups and version control were strictly on-site. That's because they worked with confidential information, and a part of their risk mitigation was to minimise the number of different locations the data was stored at, and in that case that was a perfectly reasonable approach.

    * Example: "Oh no, I lost my laptop, and all my backups were also on my laptop!" Practically just as bad is "Oh no, I lost my laptop, and my backup drive is with it in the laptop bag!"
     
    xVergilx and Antypodish like this.
  35. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,296
    SSDs tend to have a specific amount of write cycles. Once that amount has run out, the drive is just a brick. Some can still be read but not written to. I guess this is exactly the OP's case.

    Also, to be honest, VCS is fine and dandy, unless you run out of space.
    Currently I'm using GitLab, which has a 10 GB limit per project, and I'm already at 6.5 GB.
    Kinda scary for a Unity project, because they tend to grow really fast (due to the lack of actual LFS).

    And GitLab doesn't provide any reasonable alternative or "indie-friendly" payment plan for these cases.
    This thread got me wondering what the heck I am supposed to back up to if I run out of space.

    Does anyone know of any private repo hosts that offer more space than 10 GB?
    A backup hard drive doesn't sound unreasonable (I think). Or a RAID setup. (No laptop drives or bags :p)
     
  36. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    angrypenguin and xVergilx like this.
  37. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    This is what I use, and their repo size used to be practically unlimited. That's no longer the case, with a "recommended" size of 10 GB, but they mention that repos can go over that in some circumstances. They also provide other tools to help with storage of larger projects, mentioned in that post.

    In my experience it's a) art assets and b) computed data such as light bakes that cause them to grow really fast.

    I do "spring cleaning" every so often in large projects. When a large milestone is hit I make a new repo and only move the stuff in that's still in use. It's basically just an easy way to drop any old binary files that are no longer in use but hang around in the repo (since things don't really get deleted). Last time I did this one of my projects went from ~5gb down to ~1.5gb.

    That said, I really should look into Git-LFS or similar, because that might entirely eliminate the need for this.

    One other thing to keep in mind is that it's really useful to have your team members understand file compression, particularly when it comes to texture assets and the tools which generate them. I've had a few cases where a 64 MB TGA could be replaced with a <50 KB PNG with zero difference in the game (since PNG is lossless, and Unity converts it to a different format anyway).
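
    As a rough sketch of that swap (the folder path is a placeholder, and it assumes the Pillow library is installed), converting the TGAs in a folder to lossless PNGs looks like this:

    ```python
    # Convert uncompressed TGA textures to lossless PNGs to shrink the repo.
    # Assumes Pillow is installed; Assets/Textures is a hypothetical folder.
    from pathlib import Path
    from PIL import Image

    for tga_path in Path("Assets/Textures").glob("*.tga"):
        png_path = tga_path.with_suffix(".png")
        Image.open(tga_path).save(png_path, optimize=True)
        print(f"{tga_path.name}: {tga_path.stat().st_size} B -> {png_path.stat().st_size} B")
    ```

    You would still delete the original TGAs afterwards and let the editor reimport the PNGs, of course.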

    Also consider what assets don't need to be stored in the project repo at all. I'm currently doing work with voiceover integration, and the rest of the team really doesn't need all of the voice sets when they do a pull.
     
    Ryiah likes this.
  38. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,951
    Thanks for the heads-up! I'm not surprised they finally introduced a limit, but it sucks that they chose a limit similar to that of every other repo host out there.
     
    angrypenguin likes this.
  39. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    It doesn't really surprise me. I somewhat expect that the back ends of GitHub and Azure's repos will be combined or merged or something at some point. It makes sense to have different front ends for each, since they integrate with different stuff and target different audiences, but I wonder if they'd benefit from merging the hosting part?

    I've little experience with such things, so I could be way off.
     
  40. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,296
    This is identical to running BFG on the repo itself. GitLab supports this feature, but alas, it may not be sufficient.

    At some point in time the sheer amount of content will be way more than 10 GB.
    This project of mine is 2 years old, so it has been kept reasonably below the limit.

    Sucks that DevOps has a limit while claiming the opposite on their main page.

    I guess an alternative would be to abuse the repo system on GitLab,
    splitting folders into separate repos until they run out of space.

    (Hacky, but may work?)
     
  41. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    It's always been ambiguous as to just what is "unlimited" on that front page. Note that they say projects can be bigger in some cases, and they suggest other approaches. You could always get in contact to discuss your own case?

    Alternatively, you could rent a VPS and put your own Git host on it. That's what I was planning to do before I found VSTS.
     
  42. pointcache

    pointcache

    Joined:
    Sep 22, 2012
    Posts:
    577
    You can keep your repository on your HDD and push to it from the SSD,
    on top of the other options.
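
    For example (a sketch only, with placeholder paths and git assumed to be on PATH), a bare repository on the HDD can be registered as an ordinary remote of the working copy on the SSD:

    ```python
    # Keep an extra copy of the repo's history on a second (HDD) drive.
    # Paths are placeholders; run the two "one-time" steps only once.
    import subprocess

    HDD_REPO = "D:/Backups/my-unity-project.git"   # hypothetical bare repo on the HDD
    WORKING_COPY = "C:/Projects/my-unity-project"  # hypothetical working copy on the SSD

    # One-time: create the bare repository on the HDD.
    subprocess.run(["git", "init", "--bare", HDD_REPO], check=True)

    # One-time: register it as a remote of the working copy.
    subprocess.run(["git", "-C", WORKING_COPY, "remote", "add", "hdd", HDD_REPO], check=True)

    # Whenever you want a fresh local copy of your history:
    subprocess.run(["git", "-C", WORKING_COPY, "push", "hdd", "--all"], check=True)
    ```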
     
    xVergilx likes this.