• RobotZap10000@feddit.nl · 1 month ago

    Try 60GB of system logs after 15 minutes of use. My old laptop’s wifi card worked just fine, but it spammed the error log with some corrected PCIe error. Adding pci=noaer to the grub config fixed it.
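    For anyone hitting the same AER spam, a sketch of the change (the exact kernel command line below is just an example; adjust for your distro):

```shell
# /etc/default/grub -- append pci=noaer to the kernel command line
# to stop logging of (already corrected) PCIe AER errors:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"

# Afterwards regenerate the grub config, e.g.:
#   sudo update-grub                               # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora/openSUSE
```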

  • hushable@lemmy.world · 1 month ago

    Once I had a mission-critical service crash because the disk got full. It turned out there was a typo in the logrotate config, and as a result the logs were not being rotated at all.

    edit: I should add that I used the commands shared in this post to free up space and bring the service back up
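    For the curious, the kind of stanza that bit me looked roughly like this (service name and sizes are made up; one mistyped directive was enough to disable rotation):

```
# /etc/logrotate.d/myservice  (hypothetical example)
/var/log/myservice/*.log {
    daily
    rotate 7            # keep a week of archives
    maxsize 100M        # also rotate early if a file grows past 100 MB
    compress
    missingok
    notifempty
}
```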

  • wildbus8979@sh.itjust.works · 1 month ago

    Fucking blows my mind that journald broke what is essentially the default behavior of every distro’s logrotate setup, and no one bats an eye.

    • Regalia@lemmy.blahaj.zone · 1 month ago

      I’m not sure if you’re joking or not, but the behavior of journald is fairly dynamic and can be configured to an obnoxious degree, including compression and sealing.

      By default, the size limit is 4GB:

      SystemMaxUse= and RuntimeMaxUse= control how much disk space the journal may use up at most. SystemKeepFree= and RuntimeKeepFree= control how much disk space systemd-journald shall leave free for other uses. systemd-journald will respect both limits and use the smaller of the two values.

      The first pair defaults to 10% and the second to 15% of the size of the respective file system, but each value is capped to 4G.
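      If you want a hard cap lower than that, a sketch (the values are arbitrary examples):

```
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M
SystemKeepFree=1G
```

      After editing, restart with sudo systemctl restart systemd-journald. For a one-off cleanup of journals already on disk, journalctl --vacuum-size=500M or journalctl --vacuum-time=2weeks work without touching the config.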

    • tentacles9999@lemmynsfw.com · 1 month ago

      Still boggles my mind that whether systemd is terrible is still a debate. Like, of all things, wouldn’t plain text logs make sense?

  • muhyb@programming.dev · 1 month ago

    This once happened to me on my pi-hole. It’s an old netbook with 250 GB HDD. Pi-hole stopped working and I checked the netbook. There was a 242 GB log file. :)
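    When that happens, a quick way to find the culprit (the truncate target below is just an example path):

```shell
# List the ten biggest items under /var/log, largest first
du -ah /var/log 2>/dev/null | sort -rh | head -n 10

# Truncate a runaway log in place; unlike rm, this keeps the file
# handle the daemon already has open pointing at something valid.
# : > /var/log/pihole/pihole.log   # example path, uncomment to use
```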

  • zoey@lemmy.blahaj.zone · 1 month ago

    Recently had the Jellyfin log directory take up 200GB; checked the forums and saw someone with the same problem, but with 1TB instead.

    • Agent641@lemmy.world · 1 month ago

      2024-03-28 16:37:12:017 - Everything’s fine

      2024-03-28 16:37:12:016 - Everything’s fine

      2024-03-28 16:37:12:015 - Everything’s fine

  • Scribbd@feddit.nl · 1 month ago

    I recently discovered that the company I work for has an S3 bucket with several TB of network flow logs. It contains all network activity of the past 8 years.

    Not because we needed it. No, the lifecycle policy just wasn’t configured correctly.
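    For reference, an expiry rule of the sort we should have had (90 days is an arbitrary example; an empty prefix applies the rule to the whole bucket):

```
{
  "Rules": [{
    "ID": "expire-flow-logs",
    "Status": "Enabled",
    "Filter": { "Prefix": "" },
    "Expiration": { "Days": 90 }
  }]
}
```

    Saved as lifecycle.json, it can be applied with aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://lifecycle.json.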

  • FiniteBanjo@lemmy.today · 1 month ago

    Windows isn’t great by any means, but I do like the way the Event Viewer layout is sorted to my tastes.

    • MonkeMischief@lemmy.today · 1 month ago

      True that. Sure, I need to keep my non-professional home sysadmin skills sharp and enjoy getting good at these things, but I wouldn’t mind a better GUI journal reader / configurator thing. KDE has a halfway decent log viewer.

      It might also go a long way towards helping the less sysadmin-for-fun-inclined types troubleshoot.

      Maybe there is one and I just haven’t checked. XD

  • alien@lemm.ee · 1 month ago

    I couldn’t tell for a solid minute whether the title was telling me to clear the journal or not.