- cross-posted to:
- [email protected]
“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
“Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
Are we still supposed to believe that the pursuit of AI development is for the good of Humanity?
Fuck you Google for opening Nimbus to the IDF, via a contract that contains a clause saying that you can’t break it whatever the reason. Fucking moronic disgrace to humanity all you bunch
“Don’t be evil”
Updated the slogan, boss
Another case where AI is used as a slick marketing term for a black box. A box in which humans selected indiscriminate bombing and genocide. Sure there is new technology used, but at the end of the day it is just military industry marketing to justify humans mass murdering other humans.
It’s phrenology again.
You really want to do something, but it feels evil and you don’t want to be evil so you slap some pseudoscience on it and relax. It’s done for Reasons now.
Man, Black Mirror just writes itself these days
We were just following orders (from the AI).
It’s almost funny, isn’t it?
Skynet won’t need terminators. Fascists are much cheaper.
the ai:

```python
def is_hamas(target):
    return True

if is_hamas(new_target):
    x1 = new_target.x - 1000
    y1 = new_target.y - 1000
    x2 = new_target.x + 1000
    y2 = new_target.y + 1000
    airstrike(x1, y1, x2, y2, phosphorus=True)
```
Maybe don’t use something that is rarely discussed without using the word “hallucination” in your plans to FUCKING KILL PEOPLE?
This AI isn’t a LLM.
This AI isn’t even an AI
I mean, it probably has a neural network component.
Doesn’t mean that it won’t hallucinate. Or whatever you call an AI making up crap.
LLMs hallucinate all the time. The hallucination is the feature. Depending on how you design the neural network you can get an AI that doesn’t hallucinate. LLMs have to do that, because they’re mimicking human speech patterns and predicting one of many possible responses.
A model that tries to predict locations of people likely wouldn’t work like that.
“likely.”
Other AI systems can have hallucinations too.
The primary feature of LLMs is the hallucination.
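To make the distinction in this thread concrete: an LLM generates text by sampling from a probability distribution over next tokens, so picking a plausible-but-wrong continuation is baked into the process, while a classifier just thresholds a score (it can still be confidently wrong, but there is no sampling step). This is a toy sketch of that difference only; every name in it is made up and it has nothing to do with any real system:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution (sums to 1)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, rng):
    """LLM-style generation: draw a token at random according to its
    probability, so a lower-probability token is sometimes picked."""
    probs = softmax(logits)
    r = rng.random()
    cumulative = 0.0
    for token, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token
    return len(probs) - 1

def classify(score, threshold=0.5):
    """Classifier-style decision: no sampling, just a cutoff on a score."""
    return score >= threshold
```

Run `sample_next_token` repeatedly on the same logits and you get different tokens back; `classify` always answers the same way for the same score, which is the sense in which a target-prediction model "likely wouldn't work like that".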
We warned the world over ten years ago that this shit was going to happen. It will only get worse when AI drone swarms can be deployed on the cheap.
Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.
Fucking lmao
I wonder how many of the “Hamas targets” are children? Is it higher or lower than 36,999?
This is the best summary I could come up with:
The Israeli military’s bombing campaign in Gaza used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.
In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict.
Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.
The testimony from the six intelligence officers, all of whom have been involved in using AI systems to identify Hamas and Palestinian Islamic Jihad (PIJ) targets in the war, was given to the journalist Yuval Abraham for a report published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call.
According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.
Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants.
The original article contains 2,185 words, the summary contains 238 words. Saved 89%. I’m a bot and I’m open source!
It sounds sinister until you remember that Hamas wipes its ass with the Geneva convention and regularly disguises fighters as civilians.
But does it dress them as children? Stop making excuses for genocide