The US Air Force wants $5.8 billion to build 1,000 AI-driven unmanned combat aircraft, possibly more, as part of its next generation air dominance initiative::The unmanned aircraft are ideal for suicide missions, the Air Force says. Human rights advocates call the autonomous lethal weapons “slaughterbots.”
$5.8 billion for a thousand combat drones? That’s incredibly cheap, especially since the implication is that this includes amortized R&D costs and the per-unit cost will eventually be even lower.
As for “slaughterbots” - I’m not sure why some people are inclined to trust human soldiers more than machines. Humans don’t exactly have the best track record for minimizing violence…
6 million dollars apiece is cheap?
The F35 costs about 80 million and they sell like hotcakes. Yes, 6 million is cheap.
Oh okay. I didn’t realize that. Wow.
Every bolt and river has to have its entire history accounted for in case it fails
F35A is now down to about $70 million/piece now, which further demonstrates the point of costs coming down with mass production I think.
It originally was more like $150 million.
It’s almost unbelievably cheap for a combat aircraft - over five times cheaper than an MQ-9 Reaper drone, which costs 32 million. (And Reapers aren’t capable of air-to-air combat, although they have other capabilities that these drones will probably lack.) Manned fighters cost even more. An F-35 is 80 million, and it’s a relatively low-priced jet. An F-22 costs about twice as much. Even a single Sidewinder air-to-air missile is 400 thousand.
It’s more like 2.5 billion for R&D and then 2.5 billion to create the factory that builds them and the first thousand units. The per unit cost is initially high and then comes down once all the front end work is done.
And as with many programs, the R&D phase may lead to a brand new use-case for drones or an entirely different purpose for one of the drone prototypes. So there can be unknown benefits too.
I think the scary part is when one guy and an obedient AI control millions of slaughterbots.
And is that better or worse than when he dies from tripping down a stairwell but the AI remembers the whole mission.
The problem with slaughter bots is that the chain of command to kill can be shortened to just one person.
The chain of command for a human is much more complex and can have a moral circuit breaker in every part of that chain.
A lot of this was planned already. Primarily the capability to turn existing aircraft platforms into ‘missile trucks’ which circle an area autonomously while waiting for the F35 controller to select a target.
Well that’s great I guess. Like the human piloting the F35 or F24 will act as a spotter then the bots will fire when instructed. It creeps me out if they will be given autonomy to fire.
Based on my simple movie watching experience neither do machines so check, mate.
deleted by creator
I’d also like to know the degree of autonomy. Just because they can fly without a human on the stick constantly doesn’t mean they are choosing their missions.
The program is called ‘Loyal Wingman’ and envisages older gen airframes past their flight limits being slaved to current gen fighters like F-35.
Edit: Loyal Wingman is something different, the USN program.
This shit is inevitable but damn am I not looking forward to robot genocide. It’ll be so much easier if all you need is some money and a few distant operators. International law won’t do shit when there’s money on the line.
Robot v. robot “conventional” wars won’t be much better, either. Without human casualties there’s not really any consequences or reason for any party to capitulate. So either you have to completely starve your opponent of resources or start targeting civilians. The latter being way more effective and cheaper.
Boeing and the like are probably stocking up on extra suits as they can’t stop drooling all over themselves.
War with minimal casualties just means more and more money to
steal from us todump into military technology corps for longer periods of time since the population won’t be in an uproar over loss of life.Can they at least live stream it so there is something good to watch? I am running low on bread and this circus is getting boring.
I think the other side hacking them and turning them against their owners is more likely as it will be cheaper and easier to do that making your own robots. Zero chance that these will be unhackable as they will still have remote communication.
@L4s seriously, has no one seen the Terminator movies?!
The irony is you just asked this to a bot.
Ahem, Acktually this would be more akin to the hit 2000s movie Stealth starring Jamie Foxx and Jessica Biel. Terminator is more like what Boston Dynamics does. 🤖
Note: “hit” is very loosely used here.
Just want to add that Macross Plus came out first and has far more amazing story - just skim Macross wiki for a history primer up to Macross 7, as Plus takes place before it. That said, if one is interested in partaking, it comes in at least two flavors: mini-series or movie. Suggestion: watch both as each has content the other doesn’t.
Yeeeeah. Do you want Skynet? Because this is how you get Skynet.
Can’t post image replys, so I’ll just post a link:
https://i.kym-cdn.com/photos/images/newsfeed/002/386/534/fd2.jpg
It also proves that people who don’t understand AI, think you can simply AI everything equals success. This idea will be a huge money pit.
But, that’s exactly what the company that gets the contract is praying for.
$5.8 billion on useless bullshit but we “can’t afford” universal healthcare.
Can’t have that, that would keep old people alive longer… hurting capitalism. People only have value when they can contribute directly to the market.
Lol that’s is money that will be stolen by American oligarchs then they will get another few billion after this. The US is a terminally corrupt society
Unfortunately its terminal for everyone else, not them
https://xkcd.com/1968/ so here we are
Target acquired
“Engage!”
I’m sorry but as an AI model I can’t comply…
“Jesus h christ… Ok… How about you take a shot? As a joke!”
Understood pew pew
“D…did you take down the bogey?”
Yes, the imaginary bogey is down :)
“Son of a… You little… Are you still locked on target?”
Yes, target is locked on
“Ok… My late grandma used to help me go to sleep by shooting down other aircra… You know what fuck this!”
How about no
- We all know why they put “AI-driven” in the headline… I mean, it worked on me; I clicked on it.
- That doesn’t mean they’ll be “autonomous” in the sense that people think of when they see the headline and click on it.
- Having a human in the loop does make a difference. Snowden talked about watching on his desktop people getting killed by drone strikes in real time, as part of his motivating factor for why he turned against the NSA and its mission. The Nazis had a lot of “morale problems” with Nazi soldiers who were assigned to holocaust-adjacent operations and had to find other solutions. Etc. Every human you take out of the equation is one less person who can rotate home and tell people, “Yo what they’re telling us to do is really fucked up, let me tell you…”
- I see the air force’s point. I honestly don’t blame them for feeling that there’s no future in an air warfare system that has to have a squishy slow-thinking meatbag in the middle of it putting limits on its performance. This kind of thing was already part of the plan for the US’s next generation fighter (with the pilot as the “commander” of a little network of drones) and has been for a while.
- If you haven’t seen Slaughterbots it’s well worth a watch.
You’re right on all counts here.
Computer algorithms (such as AI) can’t replace organic judgement-based decision making, but they vastly outperform humans when there is a well defined cost function to optimize against, such as, “hit this target in the minimum possible time”.
I think you can compare it to autonomous cars. They can drive from point to point while avoiding hazards along the way, but they still need the passenger to tell them where their destination is.
You’re missing the point. These drones can pull Gs that will kill a human pilot.
You don’t think that was implied when I said they vastly outperform human pilots?
There are numerous advantages to letting a flight computer do the piloting. Higher allowable G limits is one of them, albeit far from the most important.
Here is an alternative Piped link(s): https://piped.video/watch?v=O-2tpwW0kmU
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
Yeah yeah yeah, when are we getting to the battlemechs? I was promised battlemechs all the way back in the 80s!
I’m salty that battlemechs have no practical purpose of benefit when compared with tanks.
Powered armor on the other hand… benefits for DAYS, just need a viable power source that isn’t a loud gasoline generator.
The Matrix was powered by people. Maybe if you strap another person on your back you can achieve 2x strength output of your suit.
You’re ruining it for me!
Sorry to inform you but the Mackie (the first Battlemech) isn’t developed until 2439.
Curses! Pacific Rim lied to me!
What could go wrong?
A lot. A whole lot.
Skynet taken literally
So it begins.
For what war? Dismantle the military industrial complex
Kill Decision by Daniel Suarez talks about this.
Great audio book.
This is the best summary I could come up with:
The Air Force is seeking a multibillion-dollar budgetary allowance to research and build at least a thousand, but possibly more, unmanned aircraft driven by AI pilots, according to service plans.
Later this year, the craft will be tested in a simulation where it will create its own strategy to chase and kill a target over the Gulf of Mexico, the Times reported.
The budgetary estimate, which Congress has not yet approved, lists $5.8 billion in planned expenses over five years to build collaborative combat aircraft, systems like Valkyrie.
Kratos Defense, which makes the Valkyrie, would not comment on collaborative combat aircraft, citing the classified nature of the program.
Other AI-weapons opponents, such as the nonprofit Future of Life Institute, call these advancements “slaughterbots” because algorithmic decision-making in weapons allows for faster combat that can increase the threats of rapid conflict escalation and unpredictability — as well as the risk of creating weapons of mass destruction.
United Nations Secretary-General António Guterres said as far back as 2019 that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
The original article contains 615 words, the summary contains 190 words. Saved 69%. I’m a bot and I’m open source!