The US Department of Defense has used machine learning algorithms to identify targets in more than 85 air strikes in Iraq and Syria this year.

The Pentagon has been doing this sort of thing since at least 2017, when it launched Project Maven, which sought suppliers capable of developing object-recognition software for footage captured by drones. Google pulled out of the project after its own employees revolted against using AI for warfare, but other tech firms have been happy to help out.

  • BombOmOm@lemmy.world · 8 months ago

    For context, we have had machines that autonomously decide when to kill for a while now: mines.

    It is good to see the machines getting an upgrade so they are more selective about their targets.

    • AbouBenAdhem@lemmy.world · 8 months ago

      The more selective we convince ourselves our weapons are, the more willing we are to use them in conflicts where civilians are put at risk—our use of weapons is constrained by the level of collateral damage we’re willing to take responsibility for, and by distancing ourselves from that responsibility, AI allows us to escalate conflicts until civilians are at even greater risk. It’s the Jevons paradox, with human life instead of gasoline.

      • pearsaltchocolatebar@discuss.online · 8 months ago

        It depends on how well trained your foundation model (FM) is, really. AI/ML is already better than humans at tasks like cancer diagnosis, so there’s really no reason to think that using it in this instance would create more risk to civilians than a human operator would.

        Most people’s experience with AI is ChatGPT or similar, but ChatGPT really isn’t a very good LLM. Plus, an LLM is only as good as your prompt engineering.

        All that being said, there should always be a human double-checking the targets in order to catch hallucinations.
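
        As a rough illustration of what that double-check could look like (purely a hypothetical sketch in Python, with made-up names, not anything described in the article), nothing a model proposes gets acted on unless it clears a confidence floor and a human reviewer explicitly confirms it:

        ```python
        from dataclasses import dataclass

        @dataclass
        class Candidate:
            label: str         # what the model thinks it is looking at
            confidence: float  # the model's own score, 0.0 to 1.0

        def human_confirms(candidate: Candidate) -> bool:
            # Stand-in for a real analyst review; here it just asks on stdin.
            answer = input(f"Confirm '{candidate.label}' ({candidate.confidence:.2f})? [y/N] ")
            return answer.strip().lower() == "y"

        def review(candidates: list[Candidate], floor: float = 0.9) -> list[Candidate]:
            """Keep only candidates that clear the confidence floor AND a human check."""
            approved = []
            for c in candidates:
                if c.confidence < floor:
                    continue  # too uncertain: discarded before it ever reaches a human
                if human_confirms(c):
                    approved.append(c)
            return approved
        ```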

        • AbouBenAdhem@lemmy.world · 8 months ago

          The issue behind the Jevons effect isn’t that the technology in question doesn’t work as advertised—it’s that, by reducing the negative consequences associated with a decision, people become increasingly willing to make that decision until the aggregate negative consequences more than cancel out the effect of the “improvement”.
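
          A toy calculation (numbers entirely made up, just to show the shape of the argument): suppose a "smarter" weapon halves the expected civilian harm per strike, but the lower perceived cost makes planners three times as willing to authorize a strike. Aggregate harm rises even though each individual strike got "better":

          ```python
          # Hypothetical numbers, purely to illustrate the Jevons-style argument.
          harm_per_strike_old = 1.0   # baseline expected civilian harm per strike (arbitrary units)
          strikes_old = 100           # strikes authorized at that perceived cost

          harm_per_strike_new = 0.5   # "improved" targeting halves per-strike harm
          strikes_new = 300           # but the lower perceived cost triples authorized strikes

          total_old = harm_per_strike_old * strikes_old   # 100.0 units of aggregate harm
          total_new = harm_per_strike_new * strikes_new   # 150.0 units: worse overall

          print(total_old, total_new)
          ```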

          • pearsaltchocolatebar@discuss.online · 8 months ago

            There’s really no reason to think this technology will fall victim to the Jevons paradox. These strikes are already happening remotely, and if AI/ML can better discern targets from civilians, there’s absolutely no reason to think civilian casualties will increase because of it.

            That’s like saying using AI/ML to screen for cancer will result in more people dying from cancer.

            You’re trying to apply an economic theory about the consumption of finite resources to a completely unrelated field.