
Google Bard fails to deliver on its promise — even after latest updates



Google revamped its AI chatbot Bard last week in a major overhaul that gives users access to it from some of its most popular products, including Gmail, Docs, Drive, Maps, YouTube and more. The update theoretically gives Bard an advantage over ChatGPT, the market leader backed jointly by OpenAI and Microsoft: Google's search engine and other apps have far more reach than even Microsoft's popular Office apps.

The introduction of Bard Extensions is, in theory, a stroke of brilliance. Imagine your AI assistant not just reciting facts from a model with hundreds of billions of parameters, competitive with what ChatGPT offers, but also pulling live, personalized data from your Google services. The idea of Bard rifling through my Gmail or Google Drive to provide context-specific responses sounds like something pulled from the pages of a William Gibson novel. But here's where we hit a snag.

In the week since I reported the announcement, I've had a chance to play around with the new offering. Unfortunately, in practice, I find Bard to be a disappointment on many levels. It fails to deliver on its core promise of integrating well with Google apps, and often produces inaccurate or nonsensical responses. It also lacks the creativity and versatility of OpenAI's GPT-4 (it also has no personality or sense of humor, though some users might not mind that). Bard falls badly short of expectations.

The crux of the problem lies in Bard's underlying model, PaLM 2, which powers the new capabilities. Like all language models, PaLM 2 is a product of its training data: in essence, it can only generate responses based on the content it has been fed. According to a CNBC report, PaLM 2 has about 340 billion parameters. By comparison, GPT-4 is rumored to have roughly 1.8 trillion parameters. A larger model can encode more of what it saw during training, which may help GPT-4 generate more relevant and interesting text.


Falling short of expectations

I stress-tested Bard’s new capabilities by trying dozens of prompts that were similar to the ones advertised by Google in last week’s launch. For example, I asked Bard to pull up the key points from a document in Docs and create an email summary. Bard responded by saying “I do not have enough information” and refused to pull up any documents from my Google Drive. It later poorly summarized another document and drafted an unusable email for me.

Another example: I asked Bard to find me the best deals on flights from San Francisco to Los Angeles on Google Flights. The chatbot responded by drafting an email explaining how to search manually for airfare on Google Flights.

Google Bard produced several odd and erroneous responses in our initial test of Bard Extensions. (Image Credit: Screenshot / VentureBeat)

Bard's performance was equally dismal when I tried to use it for creative tasks, such as writing a song or a screenplay. Bard either ignored my input or produced bland, boring content that lacked any originality or flair. Bard also offers no way to adjust its creativity level, unlike GPT-4, whose API exposes a temperature setting that lets the user control how adventurous or conservative the output is.
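For readers unfamiliar with that "dial": it corresponds to the temperature parameter in OpenAI's chat completions API, where values near 0 produce conservative, repeatable output and higher values (up to 2.0) produce more adventurous text. Below is a minimal sketch of what such a request payload looks like; the model name and prompts are illustrative assumptions, and no request is actually sent:

```python
import json


def build_chat_request(prompt: str, temperature: float) -> dict:
    """Assemble a chat-completion request payload.

    temperature controls randomness: values near 0 yield conservative,
    repeatable output; higher values (up to 2.0) yield more varied,
    adventurous text.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    return {
        "model": "gpt-4",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


# A conservative request, suited to factual summarization...
conservative = build_chat_request("Summarize this document.", 0.2)
# ...versus an adventurous one, suited to creative writing.
creative = build_chat_request("Write a song about San Francisco.", 1.2)
print(json.dumps(conservative, indent=2))
```

Bard, at the time of this writing, exposes no equivalent knob to end users, which is part of why its creative output feels so flat.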

Bard's one redeeming quality is a built-in option that lets users double-check its answers via Google Search. By clicking the "Google It" button after a prompt, users can see how Bard's response compares to Google Search results; Bard then highlights the parts of its output that could be false or misleading. This is handy for catching hallucinations and errors, but it also exposes just how unreliable and untrustworthy Bard is.

Why does this matter? Because Google is one of the leading companies in the world of technology and innovation, and it has a huge influence on how people access and use information. Google’s products and services are used by billions of people every day, and they shape how we communicate, learn, work, and play. If Google wants to stay ahead of the competition and maintain its reputation as a leader in AI, it needs to do better than Bard.

Bard is not just a chatbot; it is a reflection of Google's vision and values. It is supposed to be an assistant that helps users with various tasks and enhances their productivity and creativity. But Bard fails on all these counts: far from helpful, it is generally frustrating to use.

Bard 2.0 is here, but it stinks. So far, at least. Maybe Google's upcoming model, Gemini, will be the fix the company is looking for. But until then, I recommend relying on GPT-4 for the bulk of your work tasks. GPT-4 may not be perfect either, but it is far superior to Bard in functionality, reliability, creativity and personality.



Author: Michael Nuñez
Source: VentureBeat
Reviewed By: Editorial Team
