    Cutting-Edge Warfare: Israeli Military Utilizes Advanced AI System for Precision Strikes in Gaza Conflict, Report Reveals

    The Israeli military denies many of the claims in these reports.

    The Israeli military used a new artificial intelligence (AI) system to generate lists of tens of thousands of human targets for potential airstrikes in Gaza, according to a report published last week. The report comes from the nonprofit outlet 972 Magazine, which is run by Israeli and Palestinian journalists.

    The report cites interviews with six unnamed sources in Israeli intelligence. The sources claim the system, known as Lavender, was used with other AI systems to target and assassinate suspected militants – many in their own homes – causing large numbers of civilian casualties.

    According to another report in the Guardian, based on the same sources as the 972 report, one intelligence officer said the system “made it easier” to carry out large numbers of strikes, because “the machine did it coldly”.

    As militaries around the world race to use AI, these reports show us what it may look like: machine-speed warfare with limited accuracy and little human oversight, with a high cost for civilians.

    This is not Israel’s first use of military AI. In 2021, the Jerusalem Post quoted an intelligence official as saying Israel had just won its first “AI war” – an earlier conflict with Hamas – using a number of machine learning systems to sift through data and produce targets.

    In the same year a book called The Human–Machine Team, which outlined a vision of AI-powered warfare, was published under a pseudonym by an author recently revealed to be the head of a key Israeli clandestine intelligence unit.

    The recent 972 report also claims another system, called Where’s Daddy?, monitors targets identified by Lavender and alerts the military when they return home, often to their families.

    Several countries are turning to algorithms in search of a military edge. The US military’s Project Maven supplies AI targeting that has been used in the Middle East and Ukraine. China too is rushing to develop AI systems to analyse data, select targets, and aid in decision-making.

    Proponents of military AI argue it will enable faster decision-making, greater accuracy and reduced casualties in warfare.

    The Israeli Defence Force’s response to the most recent report says “analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law”.

    As for accuracy, the latest 972 report claims Lavender automates the process of identification and cross-checking to ensure a potential target is a senior Hamas military figure. According to the report, however, the targeting criteria were loosened to include lower-ranking personnel and weaker standards of evidence, and Lavender made errors in “approximately 10 per cent of cases”.

    As military use of AI becomes more common, ethical, moral and legal concerns have largely been an afterthought. There are so far no clear, universally accepted or legally binding rules about military AI.

    The United Nations has been discussing “lethal autonomous weapons systems” for more than ten years. These are devices that can make targeting and firing decisions without human input, sometimes known as “killer robots”. Last year saw some progress, with the UN General Assembly voting in favour of its first resolution on such systems.

    Overall, international rules over the use of military AI are struggling to keep pace with the fervour of states and arms companies for high-tech, AI-enabled warfare.

    Some Israeli startups that make AI-enabled products are reportedly making a selling point of their use in Gaza. Yet reporting on the use of AI systems in Gaza suggests how far AI falls short of the dream of precision warfare, instead creating serious humanitarian harms.

    The willingness to accept AI suggestions with barely any human scrutiny also widens the scope of potential targets, inflicting greater harm.

    The reports on Lavender and Habsora – an earlier Israeli AI targeting system also known as “the Gospel” – show us what current military AI is already capable of doing. The risks of military AI may increase even further in future.

    Chinese military analyst Chen Hanghui has envisioned a future “battlefield singularity”, for example, in which machines make decisions and take actions at a pace too fast for a human to follow. In this scenario, we are left as little more than spectators or casualties.

    A study published earlier this year sounded another warning note. US researchers carried out an experiment in which large language models such as GPT-4 played the role of nations in a wargaming exercise. The models almost inevitably became trapped in arms races and escalated conflict in unpredictable ways, including using nuclear weapons.
