Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns

In late 2023, Israel aimed to eliminate Ibrahim Biari, a senior Hamas commander linked to the Oct. 7 atrocities. However, Israeli intelligence struggled to locate Mr. Biari, who was believed to be concealed within Gaza’s intricate tunnel network.

Consequently, Israeli operatives utilized a cutting-edge military technology empowered by artificial intelligence, as revealed by three Israeli and American officials familiar with the situation. Although this technology had been developed a decade earlier, it had yet to see combat. The urgency to locate Mr. Biari spurred engineers in Israel’s Unit 8200—its equivalent of the National Security Agency—to integrate A.I. into the system, the sources stated.

Shortly after, Israel intercepted Mr. Biari’s calls and ran them through the A.I. audio tool, which provided an approximate location for where he was speaking. That information led to airstrikes in the area on Oct. 31, 2023, which killed Mr. Biari but also more than 125 civilians, according to Airwars, a London-based conflict monitor.

The audio tool exemplifies how Israel has leveraged the conflict in Gaza to rapidly prototype and implement A.I.-enhanced military technologies, according to insights from nine defense officials from both the U.S. and Israel, who requested anonymity due to the confidential nature of their work.

Over the past 18 months, Israel has also integrated A.I. with facial recognition capabilities to identify partially obscured or injured individuals, utilized A.I. for compiling potential airstrike targets, and developed an Arabic-language A.I. model to power a chatbot that analyzes text messages, social media, and other Arabic data, according to two informed sources.

Many of these initiatives have emerged from collaborations between active soldiers and reservists in tech firms like Google, Microsoft, and Meta, as detailed by three individuals knowledgeable about these technologies. Unit 8200 established what became known as “The Studio,” an innovation center designed to pair experts with A.I. projects, the sources indicated.

However, despite the rapid A.I. advancements, the deployment of these technologies has sometimes resulted in erroneous identifications and arrests as well as civilian casualties, according to Israeli and American officials. Some have expressed concern over the ethical implications of A.I. applications that could lead to heightened surveillance and civilian deaths.

No other nation has been as proactive as Israel in experimenting with A.I. tools in active warfare, as noted by European and American defense officials, offering a glimpse into how such technologies might be utilized in future conflicts—and how they may fail.

“The pressing need to address the crisis accelerated innovation, much of it powered by A.I.,” stated Hadas Lorber, head of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and former senior director at the Israeli National Security Council. “This resulted in groundbreaking battlefield technologies that provided critical advantages in combat.”

However, the technologies raise urgent ethical concerns, Ms. Lorber cautioned, emphasizing the necessity for checks and balances, with final decisions resting in human hands.

A spokeswoman for the Israeli military said she could not comment on specific technologies because of their “confidential nature.” She said Israel “is committed to the lawful and responsible utilization of data technology tools,” and added that the military is investigating the strike that killed Mr. Biari and cannot comment further until the inquiry is complete.

Meta and Microsoft opted not to comment, whereas Google noted, “We have employees who serve in reserve duty across various countries. The work those employees perform as reservists is independent of Google.”

Previously, Israel used conflicts in Gaza and Lebanon to test and refine technological tools for its military, including drones, phone hacking mechanisms, and the Iron Dome defense system, designed to intercept short-range rockets.

Following Hamas’s cross-border assaults into Israel on October 7, 2023, which resulted in over 1,200 deaths and 250 abductions, A.I. technologies were swiftly authorized for use, according to four Israeli sources. This initiated a partnership between Unit 8200 and reservists in “The Studio” to rapidly develop new A.I. capabilities.

Avi Hasson, CEO of Startup Nation Central, an Israeli nonprofit that links investors with businesses, stated that reservists from Meta, Google, and Microsoft played a pivotal role in advancing innovations in drones and data integration.

“Reservists contributed expertise and access to critical technologies absent in the military,” he noted.

The military subsequently employed A.I. to enhance its drone capabilities. Aviv Shapira, founder and CEO of XTEND, a software and drone company partnering with the Israeli military, explained that A.I.-driven algorithms were utilized to design drones that could identify and track targets from afar.

“Historically, homing abilities depended on focusing on an image of the target,” he remarked. “Now, A.I. can identify and follow the object itself—be it a moving vehicle or a person—with lethal accuracy.”

Mr. Shapira noted that his primary clients, the Israeli military and the U.S. Department of Defense, acknowledge A.I.’s ethical considerations in warfare and actively discuss responsible technological applications.

One tool developed by “The Studio” was an Arabic-language A.I. model known as a large language model, according to three Israeli officers familiar with the initiative. (The model’s existence was previously reported by +972 Magazine, an Israeli-Palestinian news outlet.)

Developers initially faced challenges creating such a model due to a lack of Arabic-language data for training. When data was accessible, it typically represented standard written Arabic, which is more formal than the myriad dialects spoken.

The Israeli military, however, encountered no such issues; the nation had decades’ worth of intercepted messages, transcribed calls, and social media posts in various spoken Arabic dialects. Consequently, Israeli officers constructed the large language model in the initial months of the conflict and developed a chatbot to process queries in Arabic. They integrated the tool with multimedia databases, permitting analysts to conduct intricate searches across images and videos, as reported by four Israeli officials.

When Israel assassinated the Hezbollah leader Hassan Nasrallah in September 2024, the chatbot analyzed reactions from the Arabic-speaking world, according to three Israeli officers. The technology differentiated among various dialects in Lebanon to assess public sentiment, aiding Israel in evaluating potential public pressure for a counterstrike.

At times, the chatbot struggled to comprehend modern slang or transliterated terms from English to Arabic, according to two officers. That necessitated reviews and corrections from Israeli intelligence personnel skilled in various dialects, as one officer noted.

The chatbot also occasionally provided inaccurate results—returning images of pipes instead of firearms, for example, stated two Israeli intelligence officers. Nonetheless, the A.I. tool significantly expedited research and analytical processes, they added.

At temporary checkpoints established between northern and southern Gaza following the October 7 attacks, Israel began outfitting cameras with capabilities to scan and transmit high-resolution images of Palestinians to an A.I.-enhanced facial recognition system.

This system also encountered difficulties recognizing individuals whose faces were obscured, leading to arrests and interrogations of Palestinians mistakenly flagged by the facial recognition technology, according to two Israeli intelligence officers.

Israel also harnessed A.I. to sift through extensive data collected on Hamas members. Before the conflict, Israel developed a machine-learning algorithm—code-named “Lavender”—designed to swiftly categorize information to identify lower-tier militants. It was trained on a database of confirmed Hamas affiliates, aiming to predict additional members of the group. Despite the system’s limitations, it was utilized at the onset of the Gaza conflict to assist in selecting attack targets.

Among the prioritized objectives was the identification and neutralization of Hamas’s senior leadership, with Mr. Biari being a critical figure due to his central role in orchestrating the October 7 attacks.

Israeli military intelligence quickly intercepted Mr. Biari’s communications with other Hamas operatives but struggled to pinpoint his exact location. They subsequently employed the A.I.-enhanced audio tool, which analyzed a range of sounds, including sonic bombs and airstrikes.

After the tool estimated the location of Mr. Biari’s calls, Israeli military officials were warned that the area, which included several apartment buildings, was densely populated. Striking multiple structures would be necessary to ensure Mr. Biari was killed, they said, and the operation was approved.

Since then, Israeli intelligence has also relied on the audio tool, in conjunction with maps and images of Gaza’s extensive tunnel system, to locate hostages. Over time, the tool has been fine-tuned to more accurately pinpoint individuals, stated two Israeli officers.