
AI community fractured over Israel-Hamas war



Nearly 200 AI leaders, researchers and data scientists have signed an open letter published last Tuesday by the ‘Responsible AI Community,’ which, in addition to condemning Israel’s “latest violence against the Palestinian people in Gaza and the West Bank,” says “we also condemn the use of AI-driven technologies for warmaking, in which the aim is to make the loss of human life more efficient, and the instances in which anti-Palestinian biases are perpetuated throughout AI-enabled systems.” 

The letter, which appears to have first been circulated by Tina Park, head of inclusive research and design at the Partnership on AI, calls for a withdrawal of technology support to the Israeli government and an end to defense contracts with the Israeli government and military. It also says that “history did not start on October 7, 2023, but the current crisis reflects the horrific scale and extent of violence enabled by the use of AI-driven technologies.” The Israeli government’s use of AI-driven technology, the letter continues, has “led to strikes against over 11,000 targets in Gaza since the latest conflict started on October 7, 2023.”

The letter’s signers include Timnit Gebru, AI ethics researcher and founder of DAIR (the Distributed AI Research Institute); Alex Hanna, director of research at DAIR; Abeba Birhane, senior fellow of trustworthy AI at the Mozilla Foundation; Emily Bender, professor of linguistics at the University of Washington; and Sarah Myers West, managing director of the AI Now Institute.

Israeli and Jewish AI leaders have pushed back on the letter

Israeli and Jewish AI leaders, some of whom are also part of the AI ethics/responsible AI community, have pushed back on the letter, saying it is one more in a series of examples of prominent AI ethicists either “applauding” the Hamas attacks against Israel on October 7 or being silent about them. The open letter does not mention the Israeli hostages being held in Gaza, nor does it condemn Hamas for the October 7 attacks. 


Jules Polonetsky, CEO of the Future of Privacy Forum, said it is “deeply distressing to me that this letter doesn’t devote a single word to condemning the massacre committed by Hamas.” In addition, he said, “there are absolutely complicated moral issues to weigh when technology is used in military conflict,” but a “one-sided broadside like this unfortunately does little to shed light on the path to ending bloodshed.” 

Yoav Goldberg, a professor at Bar-Ilan University and research director at AI2-Israel, also emphasized that many of the “AI systems” or “surveillance systems” described in the letter have “likely saved countless Palestinian lives.” For example, he said he was certain AI technologies — or AI-human collaborative interfaces — are used to try to track the hostages. “Finding hostages will make things conclude faster, saving lives,” he said. AI is also used to guide missiles, he added, making them more likely to hit their intended targets rather than random people, and to surface targets. “It means the IDF can target legitimate military targets and not random ones,” he explained.

Finally, he pointed out that before October 7, “many attacks, probably on a smaller scale, were prevented, presumably also due to use of systems that involve ‘AI,’ especially around intelligence.” Prevented attacks, he explained, also prevented Israeli retaliations: “Consider an hypothetical system that would have been able to notify/warn about October 7 in advance — we wouldn’t have been in this war now,” he said.

And Shira Eisenberg, an AI engineer currently based in the Washington, D.C. area who is also a member of the scientific council of the Israeli Association for Ethics in Artificial Intelligence, added that “AI is a critical wartime technology and is being used to translate intercepted Russian messages in the war in Ukraine, as well.” She agreed that Israel must be responsible in its use of AI, “but to rule out wartime use is to jeopardize the safety of many.” She pointed to systems like Israel’s AI-powered Iron Dome, which regularly intercepts rockets launched from Gaza into Israel.

Some call aspects of the letter and social media comments anti-semitic

In addition, several of those Israeli and Jewish AI leaders say they have felt shocked, pained, and disappointed by comments on social media since October 7 that they consider to be antisemitic, one-sided, and highly insensitive, posted by some of the same AI researchers and industry leaders who signed the open letter.

“In the first weeks I, and many others in Israel, especially in academia and in Israeli left-wing circles, was completely shocked with this,” said Goldberg. “We felt betrayed. We felt confused. We felt alone. How can people who we know, who we thought we share values with, can show such weak moral judgment? How can they be one sided? How can they be so shallow? But now… I am not surprised anymore. I am not outraged. I am just very very sad and very very disappointed.” 

Eran Toch, an associate professor at Tel Aviv University whose research focuses on the boundary between humans and computers, including usable privacy and security, machine learning, and online safety, added: “I think it makes Israeli members of the critical AI community feel very alone.” In Israel, as elsewhere, “people of this community are more political and are more progressive than others,” he said. “The fact that people we thought were part of our community have shown zero empathy and curiosity about our experience stings bitterly.” Many Israelis in AI ethics, he said, “try to fight for a resolution to the conflict with the Palestinians and human rights, including digital ones for both Palestinians and Israelis. It’s tough to do that with no external allies.” Beyond politics, he added, “many fear, and I experienced discrimination in professional circles. People review our papers as if it’s a social media thread battle.”

In addition, he said that he is “particularly concerned” with what he said is a “conspiracy theory that is propagated in the anti-Israel letters written by members of the community.” The idea, he explained, is that Israel “is the epicenter of inhuman AI technology, used against Palestinians and then against other people in the Global South—this theory repeats centuries-old anti-semitic tropes that connect Jews, technology, and oppression.” While he agreed that Israel is a technological hub, and that he is critical himself, the technologies are “no different from the ones produced in Silicon Valley or London.” Creating the idea of Israel as the mastermind of AI “is a conspiracy theory that I already see propagated in deep anti-semitic circles. I think these letters are dangerous.” 

VentureBeat reached out to Gebru and Hanna for comment. Hanna declined to respond on the record, while Gebru said: “I am not going to engage. Enough is enough. We are seeing things with our own eyes and students (mostly of color) are getting incessantly harassed and doxxed. I am going to focus on my support of Palestinians.”

A fracture within the AI community

Some see these issues causing a fracture, or schism, within the AI community, and the tech community at large, that has been growing steadily since October 7. Last month, for example, several AI leaders withdrew from the Web Summit in Lisbon, Europe’s premier technology conference. That decision came in response to the event’s founder and CEO, Paddy Cosgrave, calling Israel’s actions in response to Hamas’ October 7 surprise terror attack “war crimes.”

“I definitely see this as a schism in the AI community, and the tech community at large,” said Eisenberg. “VC has also been affected, with prominent figures coming out on either side of the issue. I wouldn’t say this is super problematic for the AI community — it’s not as divisive as the accelerationist / decelerationist line, but it is a major fracture.” 

Dan Kotliar, a researcher at the University of Haifa who does critical algorithm studies (and describes himself as an Israeli peace activist who lives in a Jewish-Arab city and sends his kids to a school where 50% of students are Arab), said that in his research on Israeli AI ethics, he “used to refer to these ethicists and their institutions as some kind of golden standard — assuming they come from an unequivocal belief in universal, humanistic values.” However, he continued, “their implicit support of Hamas’s Isis-like terrorism puts a massive question mark around their ethics and their ability to spread this ethics to techies and researchers worldwide. So yes, I think techies can no longer look at these ethicists in the same way, unless they actively want their algorithms to be biased against Israel’s Jewry.”

Put differently, he added, “when some of the most vocal proponents of Responsible AI ignorantly and irresponsibly fuel extremism in the Middle East, when top AI ethicists cannot denounce atrocious acts of murder, rape, mutilation, and the abduction of babies, toddlers, and the elderly, it means there’s something severely broken in today’s AI ethics.” 

Kotliar, who emphasized that he “wholeheartedly” supports the Palestinian right to self-determination and the end of the Israeli occupation of Palestinian territories since 1967, and condemns the killing of innocent civilians in the Gaza Strip and Israel, pointed out that in his writing he is critical of the privatization and commodification of AI-powered surveillance tools in Israel. “But I think that anyone in their right mind can see why a nation-state with Isis-like organizations at its borders needs such technologies,” he said. “So, I read this letter as another hateful call to weaken Israel, and again, the implicit legitimation of Hamas’s actions makes it a call to remove us from our home by any means possible.” 

In AI research circles, ‘zero space to grieve and be human’

Talia Ringer, an assistant professor of computer science at the University of Illinois at Urbana-Champaign, said that while they are in broad political agreement with many of the letter’s signers, they are also Israeli-American and have found the signers’ social media commentary “extremely hard.”

On October 7, they said, “I felt a kind of panic and despair I cannot begin to explain. I learned that two of my family’s in-laws were missing — later, they were found dead — and over a dozen of a first cousin’s friends were all murdered on the same day. One family was literally burned alive.” 

In AI research circles, they explained, “I’ve had zero space to grieve and be human.” Even before Israel responded on October 7, they added, “my online circles of friends in this research community [were flooded] with at best denial and at worst celebration of these deaths. I was pestered and harassed to stop ‘centering’ myself. People I had considered good friends for years began to (inadvertently, I assume) spread antisemitic conspiracy theories about Israel attacking its own citizens on October 7, and I lost friends for calling this out as blatant antisemitism. All of this as I was working tirelessly with a joint Israeli-Palestinian peace movement, which has been a wonderful place full of room for everyone’s grief and pain and fear and full humanity.”

Still, Ringer said that they personally don’t find this particular open letter to be “too objectionable.” Being “concerned about the use of technology in warfare and subjugation of a people is really important,” they said. But they were put off by the sentence “history didn’t start on October 7,” which they found “a little too dismissive of just how traumatic that day was,” as well as by “the vaguely conspiratorial undertones that could perpetuate antisemitic conspiracy theories even more.”

However, they added that “it is hard for me to care about AI right now when this has wholly consumed my life since October 7. I just want this to end. I don’t know what’s next. I have definitely lost close friends in the community. I don’t know if we’ll forgive each other. I don’t know what it will mean for the community later on. That all seems far away right now.” 



Author: Sharon Goldman
Source: VentureBeat
