cross-posted from: https://lemmy.daqfx.com/post/24701
I’m hosting my own Lemmy instance and trying to figure out how to tune PostgreSQL to reduce disk IO at the expense of memory.
I accept the increased risk this introduces, but I need to find parameters that will let a server with a ton of RAM and reliable power run without constantly sitting at 20% iowait.
Current settings:
```
# DB Version: 15
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 32 GB
# CPUs num: 8
# Data Storage: hdd
max_connections = 200
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 4
effective_io_concurrency = 2
work_mem = 10485kB
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_parallel_maintenance_workers = 4
fsync = off
synchronous_commit = off
wal_writer_delay = 800
wal_buffers = 64MB
```
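To confirm the running server actually picked up these values (rather than its defaults), you can query `pg_settings` from inside the container. This is a sketch; the service name `postgres` and user `lemmy` are assumptions — match them to your docker-compose.yaml:

```shell
# Show what the running server is actually using for a few key settings.
# Service name "postgres" and user "lemmy" are assumptions.
docker compose exec postgres psql -U lemmy -c \
  "SELECT name, setting, unit FROM pg_settings
   WHERE name IN ('shared_buffers', 'work_mem', 'fsync', 'synchronous_commit');"
```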
Most of the load comes from the LCS script seeding content, not actual users.
Solution: My issue turned out to be really banal: Lemmy’s PostgreSQL container was pointing at the default location for the config file (/var/lib/postgresql/data/postgresql.conf) rather than the location where I had actually mounted my custom config file (/etc/postgresql.conf). Everything is working as expected now that I’ve updated the docker-compose.yaml file to point PostgreSQL at the correct config file. Thanks @[email protected] for pointing me in the right direction!
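For anyone hitting the same thing: the fix is to pass the config path on the postgres command line in docker-compose.yaml, then verify which file the server actually loaded. A sketch, where the service name, volume path, and user are assumptions from a typical setup:

```shell
# In docker-compose.yaml, make the container start postgres with the mounted
# config (service/volume names here are assumptions, adjust to your setup):
#
#   postgres:
#     image: postgres:15
#     volumes:
#       - ./customPostgresql.conf:/etc/postgresql.conf
#     command: postgres -c config_file=/etc/postgresql.conf
#
# Then confirm which file the running server actually loaded:
docker compose exec postgres psql -U lemmy -c "SHOW config_file;"
```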
I wouldn’t take anything I say as a recommendation. I’m learning, too, and was hoping to start a conversation (or get corrected).
I should’ve referenced the actual docs. Google directed me to some 3rd party bullshit.
So, it’s more about concurrent client writes… I guess?
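As far as I can tell, work_mem is allocated per sort/hash operation per connection, so the worst case scales with concurrent clients. Back-of-the-envelope for the settings above (a sketch, not a measurement):

```shell
# Worst case if every connection runs one work_mem-sized sort at once:
# work_mem (kB) * max_connections, shown in MiB. Values from the config above.
work_mem_kb=10485
max_connections=200
echo "$(( work_mem_kb * max_connections / 1024 )) MiB"
```

That ~2 GiB would be on top of shared_buffers, and parallel workers can each claim their own work_mem, so the real worst case is higher.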