At what point do you consider replacing a drive?
When I worked at a data center, I noticed drives would die around 50k hours. Some last a lot longer, but when you're testing hundreds of drives you start to see the patterns. So when my drives get to 50k I replace them preemptively to avoid data loss. I might still use them in a redundant backup or something like that.
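If you want to check where your own drives sit against that 50k mark, the power-on hours live in the SMART data (e.g. the output of `smartctl -A /dev/sdX` from smartmontools). A minimal sketch of parsing that output and applying the threshold; the sample attribute row and the 50,000-hour cutoff are illustrative, not a recommendation for every drive model:

```python
REPLACE_AT_HOURS = 50_000  # rule of thumb from the comment above; tune to your fleet

def power_on_hours(smartctl_output: str) -> int:
    """Extract the Power_On_Hours raw value from `smartctl -A` text output."""
    for line in smartctl_output.splitlines():
        if "Power_On_Hours" in line:
            # The raw value is the last column of the attribute row.
            return int(line.split()[-1])
    raise ValueError("Power_On_Hours attribute not found")

def should_replace(smartctl_output: str) -> bool:
    """True once the drive has crossed the preemptive-replacement threshold."""
    return power_on_hours(smartctl_output) >= REPLACE_AT_HOURS

# Sample attribute row in the format smartctl prints (hypothetical drive):
sample = "  9 Power_On_Hours   0x0032   035   035   000   Old_age   Always   -   52341"
print(power_on_hours(sample))   # 52341
print(should_replace(sample))   # True
```

In practice you'd feed this the live output of `smartctl -A` per drive from a cron job and alert on any `True`.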
When they fail or when the capacity becomes a hindrance. Other than that, if you follow your 3-2-1 backup rule, you shouldn't lose data.
Replacing after 50,000 hours in an enterprise data center setting makes sense. At home, it's not much of an issue for me to have a day of downtime replicating data back across drives; it'll just cost me my time. In an enterprise setting it will also cost you money, possibly enough or more to justify retiring drives at 50,000 hours. Then again, if you have a RAID setup with spare drives, you can just keep running while the array rebuilds itself, only replacing a drive when it goes bad, or starts acting up and preparing to go bad.
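For the "acting up, preparing to go bad" case, the usual early-warning signs in SMART data are reallocated and pending sectors. A minimal sketch of that check, assuming you've already parsed the SMART attribute table into a name-to-raw-value mapping (attribute names follow common smartctl output; sample values are made up):

```python
# SMART attributes whose nonzero raw values commonly precede drive failure.
WARNING_ATTRS = (
    "Reallocated_Sector_Ct",    # sectors remapped after read/write errors
    "Current_Pending_Sector",   # sectors waiting to be remapped
    "Offline_Uncorrectable",    # sectors that failed offline surface scans
)

def failing_soon(attrs: dict) -> bool:
    """attrs maps SMART attribute name -> raw value; any nonzero warning attr
    is a common cue to line up a replacement before the drive dies outright."""
    return any(attrs.get(name, 0) > 0 for name in WARNING_ATTRS)

healthy = {"Reallocated_Sector_Ct": 0, "Current_Pending_Sector": 0}
suspect = {"Reallocated_Sector_Ct": 8, "Current_Pending_Sector": 2}
print(failing_soon(healthy))  # False
print(failing_soon(suspect))  # True
```

This pairs naturally with RAID hot spares: the warning just tells you which drive to pull at the next convenient window rather than waiting for the array to degrade.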
It all honestly depends on your IT department's budget, competence, and staffing. It's not wrong to replace drives after 50,000 hours, but it could be wasteful. There are, after all, people like myself who buy those drives and run them for years without incident.
Vibrational failure is more of a thing in large SAS-backplane enterprise JBOD rack-mount deployments. Small workstation/NAS deployments with three to five drives, mounted with rubber grommets and such, shouldn't see many failures caused by vibration. However, a large bay full of drives spinning up and down and hitting harmonics can absolutely tear itself apart over time.