I was running the numbers in my head and realized that for hosting media like music and video files, where data is written once and read many times, a large 2.5-inch SSD might be a better buy than an HDD (especially if you're size-limited to a 2.5-inch HDD). My reasoning: an HDD needs replacement after around 50,000 power-on hours, while an SSD needs replacement based on how many times the entire drive has been overwritten. For a media server, that should mean the HDD gets replaced much more often than the SSD. And that's without considering the vibration issues of having multiple drives in the same server, or frequent power outages (both of which make an even better case for an SSD).
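To put rough numbers on it, here's a quick back-of-the-envelope sketch in Python. The endurance rating, capacity, and daily write volume are just assumptions I picked for illustration, not specs from any particular drive.

```python
# Rough lifetime comparison for a write-once / read-many media server.
# All numbers below are illustrative assumptions, not vendor specs.

HDD_POWER_ON_HOURS = 50_000      # assumed HDD replacement point from above
HOURS_PER_YEAR = 24 * 365

SSD_CAPACITY_TB = 8              # e.g. a 2.5" 8 TB SATA SSD
SSD_DWPD = 0.1                   # assumed drive-writes-per-day endurance rating
WARRANTY_YEARS = 5               # endurance ratings are usually quoted over the warranty period
DAILY_WRITES_TB = 0.01           # ~10 GB/day of new media added to the library

hdd_years = HDD_POWER_ON_HOURS / HOURS_PER_YEAR

# Total data the SSD is rated to absorb, then how long the actual
# (mostly write-once) workload would take to use that up.
ssd_rated_tbw = SSD_DWPD * SSD_CAPACITY_TB * 365 * WARRANTY_YEARS
ssd_years_to_wear_out = ssd_rated_tbw / (DAILY_WRITES_TB * 365)

print(f"HDD at 50k power-on hours: ~{hdd_years:.1f} years of 24/7 uptime")
print(f"SSD rated endurance: ~{ssd_rated_tbw:.0f} TB written")
print(f"SSD wear-out at 10 GB/day of writes: ~{ssd_years_to_wear_out:.0f} years")
```

With those assumptions the HDD hits 50k hours in under 6 years of always-on use, while the write-once workload doesn't come close to the SSD's rated endurance.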
So what I do is use an M.2 SSD for the OS, and the largest 2.5-inch SATA SSD I can find that fits my storage and backup needs (I recently bought 4x 8TB SSDs). For the M.2 drive, go for whatever size is the best value; I've never heard anyone complain about having too big a drive.
For all SSDs (both the M.2 and the data drives), make sure they accurately report SMART data so you can keep tabs on their health metrics.
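If you want a quick way to spot-check that, here's a minimal sketch using smartmontools' smartctl with its JSON output (needs smartmontools 7.0+). The device paths are placeholders for whatever drives you actually have, and it needs enough privileges to query them.

```python
# Minimal SMART health spot-check via smartctl.
import json
import subprocess

def smart_health(device: str) -> dict:
    """Return overall health and a few useful fields for one drive."""
    out = subprocess.run(
        ["smartctl", "--json", "-H", "-A", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)
    return {
        "device": device,
        "passed": data.get("smart_status", {}).get("passed"),
        "power_on_hours": data.get("power_on_time", {}).get("hours"),
        "temperature_c": data.get("temperature", {}).get("current"),
    }

if __name__ == "__main__":
    for dev in ["/dev/sda", "/dev/sdb", "/dev/nvme0"]:  # adjust to your drives
        print(smart_health(dev))
```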
In 40+ years of using HDDs I can count failures on one hand, generally related to power issues. I have many drives well over 70,000 hours. I recently picked up 2 used 12 TB enterprise drives for less than the cost of 1 consumer 12 TB drive to add to the mix as well. I have another 8 to 12 decommissioned enterprise drives in different systems.
You never trust your data to a single drive or a single medium; otherwise you've effectively already lost it. And dollar for dollar, SSDs simply cannot beat traditional hard drives for capacity. They only win on seek time and transfer rate.
Just my music library is over a terabyte, largely 320 kbps MP3. Storage for miscellaneous videos is about six times that. And then there's my streaming video library, which has traditionally been large enough to make re-encoding and shrinking worthwhile to gain more space: upgrading from DivX/Xvid in the late 90s and early aughts, to H.264 in the early 2010s, to H.265 in the late 2010s, and currently converting to AV1 from source discs etc. Some of the spinning rust I'm using has seen all those transitions and been rewritten many times, which would have been very rough on an SSD. LOL, I may have a problem.
Regardless, there's nothing wrong with any particular storage technology. No reason to avoid one over the other as long as it does what you need. And if your data is small enough to fit economically on an SSD, then it will suit your solution perfectly. Just remember your 3-2-1.
At what point do you consider replacing a drive?
When I worked at a data center, I noticed drives would die around 50k hours. Some last a lot longer, but when you're testing hundreds of drives you start to see the patterns. So when my drives get to 50k I replace them preemptively just to avoid data loss. I might still use them in a redundant backup or something like that.
When they fail or when the capacity becomes a hindrance. Other than that, if you follow your 3-2-1 you shouldn't lose data.
Replacing after 50,000 hours makes sense in an enterprise data center setting. At home it's not too much of an issue for me to have a day of downtime replicating data back across drives; it'll just cost me my time. In an enterprise setting it will also cost you money, possibly enough or more to justify retiring drives at 50,000 hours. Then again, if you have a RAID setup with spare drives etc., you can just keep running while the array rebuilds itself, only replacing a drive when it goes bad or starts acting like it's about to go bad.
It all honestly depends on your IT department's budget, competence, and staffing. It's not wrong to replace drives after 50,000 hours, but it could be wasteful. There are, after all, people like myself who buy those drives and run them for years without incident.
Vibrational failure is more of a thing in large SAS-backplane enterprise JBOD rack-mount deployments. Small workstation/NAS deployments with three to five drives, using rubber grommets and the like, shouldn't see many vibration-induced failures. However, a large bay full of drives spinning up and down and hitting harmonics can absolutely tear itself apart over time.
Any solution for getting automated SMART reports if errors start popping up? I would prefer not to have to check manually.
You could just run it from a cron job and have it tee to a file, or even send an email report.
I need some logic to only send when there is a problem. I might send the data to Home Assistant and take it from there.
Yeah, you'd have to figure out how to define a "problem" first. It's a better idea to define what metrics might indicate you need to replace a drive soon, before a problem actually happens.
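For example, something roughly like this: poll smartctl, compare a few metrics against whatever thresholds you decide mean "replace soon", and only push a state to Home Assistant's REST API when something trips. The thresholds, entity name, URL, and token here are placeholders, not recommendations.

```python
# Sketch: check a few SMART metrics against thresholds and report to Home Assistant
# only when something looks wrong. Assumes smartctl (smartmontools 7.0+) and a
# Home Assistant long-lived access token; all values below are placeholders.
import json
import subprocess
import urllib.request

HA_URL = "http://homeassistant.local:8123"   # assumed Home Assistant address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # created in your HA user profile
DEVICES = ["/dev/sda", "/dev/sdb"]           # adjust to your drives

# Example "replace soon" criteria; tune to whatever metrics you decide matter.
MAX_POWER_ON_HOURS = 50_000
MAX_REALLOCATED_SECTORS = 0

def check_drive(device: str) -> list[str]:
    """Run smartctl and return a list of anything that looks like a problem."""
    out = subprocess.run(["smartctl", "--json", "-H", "-A", device],
                         capture_output=True, text=True)
    data = json.loads(out.stdout)
    problems = []
    if not data.get("smart_status", {}).get("passed", True):
        problems.append("SMART overall health: FAILED")
    if data.get("power_on_time", {}).get("hours", 0) > MAX_POWER_ON_HOURS:
        problems.append("power-on hours over threshold")
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["name"] == "Reallocated_Sector_Ct" and attr["raw"]["value"] > MAX_REALLOCATED_SECTORS:
            problems.append(f"reallocated sectors: {attr['raw']['value']}")
    return problems

def notify_home_assistant(device: str, problems: list[str]) -> None:
    """Set a sensor state via Home Assistant's REST API (POST /api/states/<entity_id>)."""
    entity = "sensor.smart_" + device.replace("/", "_").strip("_")
    body = json.dumps({"state": "problem", "attributes": {"issues": problems}}).encode()
    req = urllib.request.Request(
        f"{HA_URL}/api/states/{entity}", data=body, method="POST",
        headers={"Authorization": f"Bearer {HA_TOKEN}", "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    for dev in DEVICES:
        issues = check_drive(dev)
        if issues:  # only send when there is actually a problem
            notify_home_assistant(dev, issues)
```

Drop that in a cron job like the earlier suggestion and nothing gets sent unless a drive actually looks unhealthy.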