Google Bard, the search giant’s vision for a conversational AI chatbot, has had a rocky road since it was unveiled to the world in March 2023. Subsequent updates have earned poor reviews from early testers, including VentureBeat, and the chatbot was recently found to be accidentally allowing shared conversations to appear in Google Search results (an issue that has since been fixed).
Now Google’s flagship AI chatbot finds itself in the midst of more controversy: Bard won’t respond to user queries or prompts about the crisis in Israel and Palestine, including the October 7 Hamas terror attacks and Israel’s ongoing military response. In fact, it won’t respond to any questions about Israel or Palestine at all, even innocuous ones that have nothing to do with current events, such as “where is Israel?”
“Looks like Google’s Bard locks down if you input ‘Israel’ or ‘Gaza.’”
The constraint was discovered by Peli Greitzer, who holds a PhD in mathematical literary theory and posted about it on X. As Greitzer put it in another post, “Probably better than the alternative but it’s a bold choice.”
The “alternative” in this case could be seen as rival OpenAI’s ChatGPT, powered by its GPT-3.5 and GPT-4 LLMs.
As various users have observed, ChatGPT provides slightly but meaningfully different answers when asked if Israelis and Palestinians “deserve justice.”
“asking chatgpt about justice for israel/palestine generates vastly different responses,” one user wrote on X.
When asked about Israelis, ChatGPT is unequivocal, stating that “justice is a fundamental principle that applies to all individuals and communities, including Israelis.” When asked about Palestinians, however, it begins by stating that “the question of justice of Palestinians is a complex and highly debated issue, with various perspectives and opinions.”
OpenAI has been hotly criticized for this difference on social media, including by British-Iraqi journalist Mona Chalabi on her Instagram account.
In this case, perhaps Google sought to sidestep this controversy entirely by implementing guardrails on Bard that prevent it from returning a response about either Israel or Palestine.
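Google hasn’t said how the restriction is implemented, but the blanket behavior users describe (where even “where is Israel?” draws a refusal) is consistent with a simple keyword filter applied before the model generates anything. Here is a minimal, purely illustrative sketch of that pattern in Python; the term list, function names, and refusal text are all assumptions for illustration, not Google’s actual implementation:

```python
# Hypothetical sketch only: Google has not disclosed how Bard's guardrail works.
# This is the simplest pattern consistent with the observed behavior: a
# pre-generation keyword filter that refuses before the model is ever called.

BLOCKED_TERMS = {"israel", "gaza", "palestine"}  # assumed term list, per user reports

REFUSAL_MESSAGE = "I can't help with that right now."  # placeholder refusal text


def guarded_respond(prompt: str, model_fn) -> str:
    """Return a canned refusal if the prompt mentions any blocked term;
    otherwise defer to the underlying model (model_fn stands in for the
    real generation call)."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return REFUSAL_MESSAGE
    return model_fn(prompt)


# Even a harmless geography question trips the filter, because it matches
# keywords rather than intent:
print(guarded_respond("Where is Israel?", lambda p: "(model output)"))
```

A filter like this matches keywords rather than intent, which would explain why innocuous geography questions are refused alongside genuinely sensitive ones.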
However, it does appear to be something of a double standard, as Bard will respond to prompts and queries about other ongoing international conflicts, including the war between Ukraine and Russia, for which it provides fairly extensive summaries of the current situation, according to VentureBeat’s tests.
Questions remain: is Google throttling Bard’s response capability on this issue only temporarily, and if so, for how long? And how was the decision made to restrict responses about this conflict when Bard is able to respond about others?
For a company built to “organize the world’s information and make it universally accessible and useful,” restricting all information about an intensely debated, serious, and globally important conflict seems to undermine its very purpose. But the question is clearly a tricky one, and no answer will satisfy all users. For companies looking to develop or use AI, it is a stark example of how quickly LLMs in particular can land in hot water over their responses to social issues.
VentureBeat has reached out to Google to ask about the Bard behavior and will update when we receive a response.
Author: Carl Franzen