What Israel’s use of AI in the killing of Palestinians may imply for the future of repression.
Some time ago I was watching a documentary in which military attackers discussed the area they were about to strike. They excitedly described it as “a target-rich environment”, by which they meant there was plenty to destroy. It struck me that, just as “to a hammer, everything looks like a nail”, those in charge of bombers, drones and artillery are eager to find targets.
The expression came back to mind when I recently read an article about Israel’s destruction of Palestinians, their homes, infrastructure and livelihoods. In the article, published in +972 Magazine, unnamed Israeli sources discussed the methods used to generate targets for attack. It turns out that Israel has been using artificial intelligence, coupled with a variety of software tools, to generate a practically limitless supply of targets.
I know some commentators have remarked that, for practical purposes, the Israeli attackers have not really needed to generate targets, since anything and anybody in the zones under attack appears to be a target. That is a fair point, but the use of AI and software in this context still strikes me as important. It marks a turning point, an evolutionary step in this kind of “warfare” that will probably prove highly relevant to people who may one day find themselves in the same deadly predicament as the Palestinians today.
Essentially, the Israeli military has used its tight control of the small areas in which it has confined the Palestinians to build dossiers on people of interest among the populations under its control. Knowing the identities and characteristics of Palestinians who have had some role in Hamas or other organisations with a declared hostility to Israel, the military feeds this data to an AI program. Having learnt a range of signature characteristics of “hostile” Palestinians, the AI is then set to screen the data of the rest of the Palestinian population, assigning each person a score that correlates with the likelihood of their being hostile to Israel or, more pertinently, associated with Hamas. A high score marks a person as probably connected to Hamas; a low score, as probably not.
Targets for assassination
After a period of human scrutiny of the machine’s performance, the Israelis became confident that the AI was about 90 per cent accurate, so further scrutiny was dropped and the machine was left to identify targets for assassination.
Having arrived at that point, with tens of thousands of targets now available, the next step was to use software to track those individuals. Mobile phones, for example, were used to follow people and to signal the military when targets arrived at their homes. The idea, it was reported, was to bomb the target individuals at home, killing them and their families all at once. It may be argued that killing entire families was not a key part of the goal, but it was certainly envisaged and in no way seen as undesirable. Sources claimed that, depending on the perceived importance of the target, the acceptable number of collateral deaths might range from 15 to 100 or more. The system is of course prone to error, but as the Israeli sources in the article noted, a certain margin of error was considered acceptable in the circumstances.
According to the +972 article:
These programs track thousands of individuals simultaneously, identify when they are at home, and send an automatic alert to the targeting officer, who then marks the house for bombing.
“You put hundreds [of targets] into the system and wait to see who you can kill,” said one source with knowledge of the system. “It’s called broad hunting: you copy-paste from the lists that the target system produces.”
Another piece of software collates signals from mobile phones and maps population densities in particular areas, information equally useful whether an attacker wants to minimise or maximise casualties from bombing or strafing.
Since Israel is a major producer and seller of military and quasi-military hardware and software, it might be speculated that the use of AI and other software in the mass destruction of Palestinians is seen by some as “field testing” that can be used to boost sales in other markets. Arms dealers routinely promote their products as proven in the field, so it stands to reason that Israel’s busy salesmen would do the same with their AI-software hybrids. It is well established that Israeli spyware and malware are keenly sought by repressive regimes and intelligence organisations around the world; better-known examples include the Pegasus spyware, developed by Israel’s NSO Group, and the Stuxnet worm, widely attributed to a joint US-Israeli operation.
Given what whistleblower Edward Snowden revealed about the extent to which “democratic” governments in the so-called free world use every tool they can to spy on their citizens and harvest their data for intelligence and propaganda purposes, the Israeli use of AI to weaponise exactly this sort of data is every bit as ominous as the breathtaking ruthlessness of the campaign of destruction itself, and as the tacit support it receives from “democratic” western governments.
While some observers of Israel’s campaign of destruction watch with horror, no doubt others are more interested in the possibilities it implies for future campaigns of their own. As governments and arms dealers move towards ever more dehumanised police and armed forces, it is not hard to imagine the dystopian consequences of a marriage of autonomous killing hardware with AI, programmed by those who control the means of violence.
Perhaps in Palestine we can see our own dark future.