Honestly, 10-20 GB for the OS, 200-500 MB for UEFI, everything else separate. A cheap low-capacity SATA SSD is usually fine for the OS. Bulk storage is still cheapest on hard disk.
M.2 is great and all if you can afford it. But unless your network is over 2.5 Gbps or you are simultaneously streaming large videos to multiple clients, regular SATA drives will be able to keep up fine.
Thank you for the reply to my first Lemmy post. I guess this works 😁
No worries. And honestly, if you haven’t already committed to a particular mini PC or absolutely need the form factor, I’d seriously suggest looking on eBay for some old e-waste.
I personally run an old Dell business system with a 4th Gen i7 and 16 GB of RAM. It cost about 100 dollars when I got it. I run a Minecraft server, a Luanti server, Jellyfin for movie and TV streaming, Icecast/Liquidsoap/LibreTime to stream my own private automated Internet radio, and NFS/Samba for NAS. And I still have RAM, CPU, and bandwidth free on a 1 Gbps network.
The only thing a newer system will net you is possibly a bit more power efficiency, which, depending on electricity costs where you are, might make a new system attractive.
Lol, but getting into homelabbing, new or old, is still a gateway drug. One of my favorite BSD/Linux things, at least for hardwired clients, is just having my home directory on the NAS. I have a… few systems, and being able to have my downloads, documents, etc. all right there, and to wipe and reinstall a workstation without worrying about my data if I want to distro hop, is great. The only downside, which pops up rarely, is file locking. Other than that my files and app settings follow me to all of them.
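If you want to try that setup, here’s a minimal sketch of the client side in /etc/fstab, assuming a hypothetical NAS hostname, export path, and username (swap in your own, and whatever NFS version your server speaks):

```
# /etc/fstab on the workstation -- home directory lives on the NAS
# "nas.lan" and "/export/home/alice" are placeholders for your own server and export
nas.lan:/export/home/alice  /home/alice  nfs  defaults,_netdev,noatime  0  0
```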
European prices suck. I got an m920q with an i5-9600T, 8 GB of RAM, and a 256 GB SSD off eBay. Embarrassed to say how much, but I managed to negotiate a 15€ discount on the postage costs. Expecting it on Thursday. I’m not sure if I will trust the SSD or just buy a new one. This is supposed to replace my Synology, my Pi 4, and my i5-3570K, each of which is hosting a huge number of Docker containers.
9th Gen isn’t bad, though I’m guessing you probably still paid 200 to 300 for it. The drawback of that i5 is no hyperthreading, which definitely benefits a server system, but it will still function nicely. And you would need at least 6th Gen to run NVMe.
The 8 GB actually should be fine. I run KDE Plasma on an ARM-based Chromebook tablet with only 4 GB and still have RAM to spare, so you can have a graphical desktop and still do plenty of serving. Just make sure to check out the system, either with the OEM or the motherboard maker, find out what RAM it supports, and keep an eye on eBay. With all the new systems shipping with DDR5, we should see a lot of used DDR4 coming up for sale at good prices. In the near future you can probably quadruple that RAM for 50 to 100.
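If you want to see exactly what’s installed before hunting for matching sticks, a quick sketch (dmidecode is the usual tool on Linux; the exact output depends on the board’s firmware):

```
# List installed memory modules with their type (DDR3/DDR4), size, and speed
sudo dmidecode --type memory | grep -iE 'size|type:|speed'

# Quick sanity check of total and available RAM
free -h
```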
The storage technology you choose at this point should not be a huge factor. SATA SSD or NVMe SSD, you aren’t liable to notice a crazy difference, and either will be way faster than an HDD. Generally, create the base partitions your distro suggests; if you’re just starting out there’s nothing wrong with going with that. That’s usually a 500 MB to 1 GB boot/UEFI partition, then a few tens of gigabytes for the operating system on a separate partition on the same device. If you have any remaining space, that would be a good spot for a partition for home directories, which is typically where you will store all your media. Or you can use a whole physically separate device, another NVMe or SATA SSD or even a hard disk, as the home directory or media storage. You can map them in the fstab file later fairly easily, or with KDE Partition Manager or GParted.
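For the separate data drive, the fstab side of that is a one-liner. A minimal sketch, assuming an ext4 filesystem and a made-up UUID (use the real one that `lsblk -f` or `blkid` reports for your drive, and whatever mount point you prefer):

```
# /etc/fstab -- mount a separate drive for home directories / media
# The UUID below is a placeholder; replace it with your drive's actual UUID
UUID=1234abcd-5678-ef90-1234-abcdef123456  /home  ext4  defaults,noatime  0  2
```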
Literally what the other guy who replied to this comment says: buy an old server off eBay, swap out the drives, and be set for life.
That’s my objective. But I’m just not sure how to organize the OS and the rest of the data, or whether I should just use an SSD, or just an NVMe drive…
Not really certain what you mean by that. Assuming you’re using a Linux distribution, you can map multiple drives into that installation. Really it depends on what you want to do and how you do it; there are many, many options. If you are not familiar with a Unix-based environment, I would advise that you start there and figure out later what distro you want to use and how you want to use it.
Got some time, so I’ll elaborate: which type of device has the best balance of price, speed, and reliability? I would use that for the OS. Would the other type of device be worth it for the rest of the data, or should I go all in on the same technology? I’ve always preferred to keep these two on separate “hard drives”.
I was running the numbers in my head and realized that for hosting media like music and video files, where data is written once and read a lot, a large 2.5-inch SSD might be a better buy than an HDD (especially if you’re limited to a 2.5-inch HDD). My reasoning is that an HDD needs replacement after around 50,000 power-on hours, while an SSD needs replacement based on how often the entire drive is overwritten. For a media server, that should mean the HDD gets replaced much more often than the SSD. And that’s without considering the vibration-related issues of having multiple drives in the same server, or frequent power outages (both of which make a better case for an SSD).
So what I do is use an M.2 SSD for the OS, and the largest 2.5-inch SATA SSDs I can find that fit my storage and backup plan (I recently bought 4x 8 TB SSDs). For the M.2 drive, get whatever size is the best value; I’ve never heard anyone complain about having too big a drive.
For all SSDs (the M.2 one and the data drives), make sure they accurately report SMART data so you can keep tabs on their health metrics.
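A minimal sketch of how to check that with smartmontools (the device names here are examples; yours may differ):

```
# SATA SSD: overall health verdict plus the full attribute table
sudo smartctl -H /dev/sda
sudo smartctl -a /dev/sda

# NVMe drives expose their own health log, but smartctl reads it the same way
sudo smartctl -a /dev/nvme0
```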
In 40+ years of using HDDs I can count my failures on one hand, generally related to power issues. I have many drives well over 70,000 hours. I recently picked up 2 used 12 TB enterprise drives for less than the cost of 1 consumer 12 TB drive to add to the mix as well. I have another 8 to 12 decommissioned enterprise drives in different systems.
You never trust your data to a single drive or a single medium; otherwise you’ve effectively already lost it. And dollar for dollar, SSDs simply cannot beat traditional hard drives for capacity, only on seek time and transfer rate.
Just my music library is over a terabyte of largely 320 kbps MP3s. Storage for miscellaneous videos is about six times that. And my streaming video library has traditionally been large enough to make re-encoding and shrinking worthwhile to fit more: from DivX/Xvid in the late 90s and early aughts, to H.264 in the early 2010s, to H.265 in the late 2010s, and currently converting to AV1 from source discs, etc. Some of the spinning rust I am using has seen all those transitions and been rewritten many times, which would have been very rough on an SSD. LOL, I may have a problem.
Regardless, there’s nothing wrong with any particular storage technology, and no reason to avoid one over the other as long as it does what you need. If your data is small enough to fit economically on an SSD, then SSDs will suit your solution perfectly. Just remember your three, two, one: three copies, two different media, one off-site.
At what point do you consider replacing a drive?
When I worked at a data center, I noticed drives would die around 50k hours. Some last a lot longer, but when you’re testing hundreds of drives you start to see the patterns. So when my drives get to 50k hours I replace them preemptively, just to avoid data loss. I might still keep them around as a redundant backup or something like that.
When they fail, or when the capacity becomes a hindrance. Other than that, if you follow your 3-2-1, you shouldn’t lose data.
Replacing after 50,000 hours makes sense in an enterprise data center setting. At home it’s not too much of an issue for me to have a day of downtime replicating data back across drives; it’ll just cost me my time. In an enterprise setting it will also cost you money, possibly even enough or more to justify retiring drives at 50,000 hours. Then again, if you have RAID set up with spare drives, you can just keep on running while the array rebuilds itself, only replacing a drive when it goes bad or starts acting up on its way to going bad.
It all honestly depends upon your IT department’s budget, competence, and staffing. It’s not wrong to replace drives after 50,000 hours, but it could be wasteful. There are, after all, people like myself who buy those drives and run them for years without incident.
Vibrational-mode failure is more of a thing in large SAS-backplane enterprise JBOD rack-mount deployments. Small workstation/NAS deployments with three to five drives, using rubber grommets and the like, shouldn’t see many vibration-induced failures. However, a large bay full of drives spinning up and down and hitting harmonics can absolutely tear itself apart over time.
Any solution for getting automated SMART reports if errors start popping up? I would prefer not to have to check manually.
You could just run it in a cron job and have it tee to a file, or even send an email report.
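A rough sketch of that cron approach, assuming smartmontools is installed and `/dev/sda` is one of your data drives (the schedule, device name, and log path are placeholders):

```
# /etc/cron.d/smart-report -- nightly SMART dump appended to a log file, run as root
0 3 * * * root /usr/sbin/smartctl -a /dev/sda | tee -a /var/log/smart-sda.log
```

Since the output also goes to stdout, cron will mail it to the job’s owner if a local MTA is configured, which covers the email side.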
I need some logic to only send when there is a problem. I might send the data to Home Assistant and take it from there.
Yeah, you’d have to figure out how to define a “problem” first. It’s a better idea to define which metrics might indicate you need to replace a drive soon, before a problem actually happens.
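For what it’s worth, a minimal sketch of that kind of gate, using smartctl’s overall health verdict as the “problem” definition. The drive list and mail address are placeholders, and it assumes a working local `mail` command; you’d extend the test with whatever attributes you decide actually matter (reallocated sectors, media wearout, etc.):

```
#!/bin/sh
# Run as root (e.g. from cron). Only alerts when a drive stops reporting PASSED.
for dev in /dev/sda /dev/sdb /dev/nvme0; do
    if ! smartctl -H "$dev" | grep -q "PASSED"; then
        # Attach the full SMART report for the failing drive
        smartctl -a "$dev" | mail -s "SMART warning on $dev" admin@example.com
    fi
done
```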