All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to the blue screen of death. As someone not responsible for fixing it, I find it all very exciting.

Apparently caused by a bad CrowdStrike update.

Edit: now being told we (who almost all generally work from home) need to come into the office Monday, as they can only apply the fix in person. We’ll see if that changes over the weekend…

    • Revan343@lemmy.ca · 5 months ago

      explain to the project manager with crayons why you shouldn’t do this

      Can’t; the project manager ate all the crayons

    • candybrie@lemmy.world · 5 months ago

      Why is it bad to do on a Friday? Based on your last paragraph, I would have thought Friday is probably the best weekday to do it.

      • Lightor@lemmy.world · 5 months ago (edited)

        Most companies, mine included, try to roll out updates in the middle or at the start of the week. That way, if there are issues, the full team is available to address them.
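
        A minimal sketch of how that kind of policy could be enforced with a pre-deploy gate (purely illustrative: the script name, the allowed days, and the Python wrapper are assumptions, not anything from a real pipeline):

            # deploy_gate.py (hypothetical): refuse to start a rollout late in the
            # week, so if the update misbehaves the full team is on hand next day.
            import datetime
            import sys

            ALLOWED_WEEKDAYS = {0, 1, 2}  # Monday=0, Tuesday=1, Wednesday=2 (illustrative policy)


            def deploy_window_open(now=None):
                """Return True if 'now' falls on a weekday where rollouts are allowed."""
                now = now or datetime.datetime.now()
                return now.weekday() in ALLOWED_WEEKDAYS


            if __name__ == "__main__":
                if not deploy_window_open():
                    print("Deploy blocked: outside the Monday-Wednesday rollout window.")
                    sys.exit(1)
                print("Deploy window open; proceeding with rollout.")

        A gate like this would sit in front of whatever actually pushes the update, so skipping it has to be a deliberate, visible choice rather than a default.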

      • catloaf@lemm.ee · 5 months ago

        I’m not sure what you’d expect to be able to do in a safe mode with no disk access.

    • corsicanguppy@lemmy.ca · 5 months ago

      rolling out an update to production that clearly had no testing

      Or someone selected “env2” instead of “env1” (#cattleNotPets names) and tested in prod by mistake; a confirmation guard like the sketch at the end of this comment is aimed at exactly that slip.

      Look, it’s a gaffe and someone’s fired. But it doesn’t mean fuck-ups are endemic.
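
      A minimal sketch of that confirmation guard (hypothetical: the environment names just echo the “env1”/“env2” example above, and the Python wrapper is an assumption, not any real deploy tool):

          # confirm_env.py (hypothetical): make the operator retype the environment
          # name before a production deploy, to catch "env2 instead of env1" slips.
          import sys

          PRODUCTION_ENVS = {"env1"}  # assumption: env1 is production, env2 is a test env


          def confirm_environment(target):
              """Abort unless a production target name is explicitly retyped."""
              if target not in PRODUCTION_ENVS:
                  return  # non-production targets need no extra ceremony
              typed = input(f"'{target}' is PRODUCTION. Retype the name to continue: ")
              if typed.strip() != target:
                  print("Environment name mismatch; aborting.")
                  sys.exit(1)


          if __name__ == "__main__":
              confirm_environment(sys.argv[1] if len(sys.argv) > 1 else "env2")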