TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad practice?
- I have a Linux PC that acts as my homelab server, and a Synology NAS.
- The server is fast but only has a 100 GB SSD.
- The NAS is slow(er) but has oodles of storage.
- Both devices are wired to their own little gigabit switch, using priority ports.
Of course it’s slower to run off HDDs than off an SSD, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate CPU and storage this way? Isn’t that pretty much what a data center does, too?
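To make it concrete, the volumes are set up along these lines (whether via Docker’s NFS volume options or a plain host-level NFS mount; the NAS address, export path, and the Jellyfin container are just placeholders):

```
# NFS-backed named volume via Docker's built-in NFS support
# (NAS address and export path are placeholders)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.20,nfsvers=4.1,rw \
  --opt device=:/volume1/docker/jellyfin-config \
  jellyfin-config

# Containers then mount the volume like any other
docker run -d --name jellyfin \
  -v jellyfin-config:/config \
  jellyfin/jellyfin
```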
What no one else has touched on is that the protocol used for network drives interferes with databases. Protocols like SMB lock files during reads and writes so that other clients on the network can’t corrupt the file by interacting with it at the same time.
It is bad practice to put Docker’s files on a NAS because it’s slower, and because the protocol used can and will lead to issues with Docker.
That’s not to say that no files can be remote: Jellyfin’s media library, for example, obviously supports connecting to network drives. But the Docker volumes and other config files need to be on the local machine.
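A sketch of the split I mean, assuming a Jellyfin-style setup (the local path, NAS address, and share name are placeholders): config and databases on a local bind mount, and only the bulky media coming from the NAS:

```
# Media is an NFS-backed volume (fine: big sequential reads, no database files)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.20,nfsvers=4.1,ro \
  --opt device=:/volume1/media \
  media

# Config and its SQLite database stay on the local disk via a bind mount
docker run -d --name jellyfin \
  -v /srv/jellyfin/config:/config \
  -v media:/media:ro \
  jellyfin/jellyfin
```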
Data centers get around this by using network mapped disks instead of network mapped filesystems.
My advice is to buy a new SSD and clone the existing one over. They’re dirt cheap and you’re going to save yourself a lot of headache.
They use a SAN, not a NAS. The database and VM architecture do not fundamentally change the behavior of the disks, and there isn’t much more complicated going on beyond that.
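On a homelab scale, the equivalent would be something like an iSCSI LUN exported by the NAS: the server logs into it, formats it, and from then on Docker just sees a local filesystem with local locking semantics. Roughly, using the open-iscsi tools (the portal address, IQN, device node, and mount point are placeholders):

```
# Discover and log in to the iSCSI target exported by the NAS
iscsiadm -m discovery -t sendtargets -p 192.168.1.20
iscsiadm -m node -T iqn.2000-01.com.synology:nas.docker-lun -p 192.168.1.20 --login

# The LUN appears as an ordinary block device; format it once (this wipes it)
# and mount it wherever the Docker data should live
mkfs.ext4 /dev/sdb
mount /dev/sdb /var/lib/docker
```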
In the context of self-hosting, it’s probably worth pointing out that SQLite specifically mentions NFS on its “How To Corrupt An SQLite Database File” page.
SQLite is used in many popular services people run at home, often as the only or default option, because it does not require an external database service to work.
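If you’re not sure whether a given container keeps an SQLite database inside the directories you’re mapping over NFS, you can simply look (the path below is a placeholder for wherever your volumes or bind mounts actually live):

```
# List SQLite files inside the mapped config directories
find /srv/docker -type f \( -name '*.db' -o -name '*.sqlite*' \) -exec file {} \; | grep -i sqlite
```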