
Generative AI is a toddler — don’t let it run your business



Cutting-edge technology and young kids may initially seem completely unrelated, but some AI systems and toddlers have more in common than you might think. Just like curious toddlers who poke into everything, AI learns through data-driven exploration of huge amounts of information. Letting a toddler run wild invites disaster, and generative AI models, likewise, aren’t ready to be left unattended.

Without human intervention, gen AI doesn’t know how to say, “I don’t know.” The underlying language model keeps producing answers to every inquiry with astounding confidence. The problem with that approach? The answers could be inaccurate or biased.

You’d never expect unequivocal truth from a proud, bold toddler, and it’s important to remain similarly wary of gen AI’s responses. Many people already are — Forbes research found that more than 75% of consumers worry about AI providing misinformation.

Luckily, we don’t have to leave AI to its own devices. Let’s look at gen AI’s growing pains — and how to ensure the appropriate amount of human involvement.


The problems with unsupervised AI

But really, what’s the big fuss over letting AI do its thing? To illustrate the potential pitfalls of unsupervised AI, let’s start with an anecdote. In college, I was in a late-stage interview for an internship with an investment company. The head of the company was leading the discussion with me, and his questions quickly surpassed my depth of knowledge.

Despite this fact, I continued to answer confidently, and hey, I thought I sounded pretty smart! When the interview ended, however, he let me in on a “secret”: He knew I was rambling nonsense, and my continued delivery of that nonsense made me the most dangerous type of employee they could hire — an intelligent person reluctant to say “I don’t know.”

Gen AI is that exact type of dangerous employee. It will confidently deliver wrong answers, fooling people into accepting its falsehoods, because saying “I don’t know” isn’t part of its programming. These falsehoods, “hallucinations” in industry-speak, can cause trouble when they’re delivered as fact and no one checks the accuracy of the AI’s output.

Beyond generating categorically wrong responses, AI output also has the potential to outright steal someone else’s property. Because it’s trained on vast amounts of data, AI could generate an answer closely replicating someone else’s work, potentially committing plagiarism or copyright infringement.

Another issue? The data AI sources for answers carries the unconscious (and conscious) biases of the humans who produced it. These biases are difficult to avoid and can lead gen AI to output content that’s unintentionally prejudiced or unfair to certain groups because it perpetuates stereotypes.

For example, AI might make offensive, discriminatory race-based assumptions because the data it’s pulling from contains information biased against a specific group. But since it’s just a tool, we can’t hold AI responsible for its answers. Those who deploy it, however, can be.

Remember our toddlers? They’re still learning how to behave in our shared world. Who’s responsible for guiding them? The adults in their lives. Humans are the adults responsible for verifying our “growing” AI’s output and making corrections as needed.

What the right way looks like

Responsible use of gen AI is possible. Since AI’s behavior reflects its training data, it doesn’t have a conception of correct vs. incorrect; it only knows “more similar” and “less similar.” Although it’s a transformative, exciting technology, there is still much work to be done to get it to behave consistently, correctly and predictably so that your organization can extract the maximum value from it and keep hallucinations at bay. To help with that work, I’ve outlined three steps enterprises can take to properly utilize their most dangerous employee.
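To make “more similar” versus “less similar” concrete, here’s a minimal sketch in Python. The prompt, candidate answers and scores are all invented for illustration: the model converts raw scores into probabilities and picks the most likely continuation, with no notion of which one is true.

```python
import math

# Invented scores a language model might assign to continuations of the
# prompt "The capital of Australia is". The model only knows which answer
# is *more similar* to its training data, not which one is true.
logits = {
    "Sydney": 4.1,         # common in training text, so scored highest
    "Canberra": 3.7,       # the correct answer, but seen less often
    "Melbourne": 2.9,
    "I don't know": -5.0,  # rarely a continuation in training data
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)
print(f"Model's answer: {best} ({probs[best]:.0%})")
# Prints "Model's answer: Sydney (51%)" -- fluent, confident and wrong.
```

Note the last line of the table: “I don’t know” is a valid sentence, but it almost never appears as a training-data continuation, so the model almost never says it.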

1. Teamwork makes the dream work

Generative AI has many applications in a business setting. It can help solve plenty of problems, but it won’t always be able to provide compelling solutions independently. With the right suite of technologies, however, its benefits can bloom while its weaknesses are mitigated.

For example, if you’re implementing a gen AI tool for customer service purposes, ensure that the source knowledge base has clean data. To maintain that data hygiene, invest in a tool that sanitizes your data and keeps the information the AI pulls from accurate and up-to-date. Once you’ve got good data, you can fine-tune your tool to provide the best responses. It takes a village of technologies to create a great customer experience; gen AI is only one member of that village. Organizations choosing to tackle tough problems with generative AI alone do so at their own risk.
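Here’s a toy sketch of that village in miniature. The knowledge-base entries, the word-overlap scoring and the threshold are all hypothetical stand-ins (a real system would use embeddings and a retrieval service), but the shape is the point: answers are grounded in curated data, and the pipeline can say “I don’t know” when nothing clean matches.

```python
import re
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    last_reviewed: str  # data hygiene: when a human last verified this entry

# Tiny curated knowledge base standing in for your sanitized source data.
KNOWLEDGE_BASE = [
    Document("Refunds are issued within 5 business days.", "2024-01-10"),
    Document("Support hours are 9am to 6pm ET, Monday through Friday.", "2024-01-10"),
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def relevance(question, doc):
    """Toy relevance: word overlap. A real system would use embeddings."""
    q = tokens(question)
    return len(q & tokens(doc.text)) / max(len(q), 1)

def answer(question, threshold=0.3):
    best = max(KNOWLEDGE_BASE, key=lambda d: relevance(question, d))
    if relevance(question, best) < threshold:
        # Unlike an unsupervised model, this pipeline CAN say "I don't know".
        return "I don't know -- let me connect you with a human agent."
    # In production, best.text would be passed to the gen AI model as
    # grounding context rather than returned verbatim.
    return best.text

print(answer("When are your support hours?"))  # grounded answer
print(answer("Do you sell gift cards?"))       # falls back to "I don't know"
```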

2. All in a day’s work: Give AI the right job

AI excels at many tasks, but it has limitations. Let’s revisit our customer service example. Generative AI sometimes struggles with procedural conversations that require steps to be completed in a certain order. An intent-based model would likely produce better results because gen AI’s answers and task fulfillment are inconsistent in this “job.”
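To sketch what “intent-based” means in practice, consider this hypothetical password-reset flow. The step order is encoded explicitly, so it can’t be skipped or shuffled the way a free-form generative reply might; real platforms express the same idea as dialogue graphs or flow definitions.

```python
# Minimal intent-based flow: the allowed step order is explicit, so the
# conversation cannot skip or reorder steps. (Hypothetical password-reset
# flow, not any specific product's syntax.)
RESET_STEPS = ["verify_identity", "send_code", "confirm_code", "set_new_password"]

class ProceduralFlow:
    def __init__(self, steps):
        self.steps = steps
        self.position = 0

    def advance(self, completed_step):
        expected = self.steps[self.position]
        if completed_step != expected:
            # Deterministic guardrail that free-form generation lacks.
            return f"Please complete '{expected}' first."
        self.position += 1
        if self.position == len(self.steps):
            return "Done: your password has been reset."
        return f"Next step: {self.steps[self.position]}."

flow = ProceduralFlow(RESET_STEPS)
print(flow.advance("send_code"))        # rejected: identity not verified yet
print(flow.advance("verify_identity"))  # accepted; next step is send_code
```

The design choice is the point: the guardrail lives in code, not in the model’s goodwill.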

But asking AI to do something it’s good at — such as synthesizing information from a customer call or outputting a conversation summary — yields much better results. You can ask the AI specific questions about these conversations and glean insights from the answers.
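A minimal sketch of that kind of job, with a made-up transcript and a placeholder client (no particular provider’s API is assumed):

```python
# Made-up transcript; in practice this comes from your call platform.
transcript = """Agent: Thanks for calling. How can I help?
Customer: My invoice shows two charges for March.
Agent: I see the duplicate. I've refunded the second charge."""

# Constrain the task: summarize, then answer one specific question.
prompt = (
    "Summarize this customer call in two sentences, "
    "then state whether the issue was resolved.\n\n" + transcript
)

def call_llm(prompt):
    """Placeholder for your provider's client; no real API is assumed."""
    raise NotImplementedError("wire in your gen AI provider's SDK here")

# print(call_llm(prompt))  # uncomment once a real client is connected
```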

3. Keep AI from going off the rails by training it appropriately

Approach your AI strategy like you do talent development — it’s an unproven employee requiring training. By leveraging your organization’s unique data set, you ensure your gen AI tool responds in a way specific to your organization.

For example, use your organization’s wealth of customer data to train your AI, which leads to personalized customer experiences — and happier, more satisfied customers. By adjusting your strategy and perfecting your training data, you can turn your most unpredictable employee into a dependable ally.
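As one illustration of what that training data might look like, many fine-tuning services accept prompt/response pairs in the JSON Lines format. The schema and examples below are hypothetical, not any one vendor’s exact format.

```python
import json

# Curated, human-reviewed customer interactions (hypothetical examples).
examples = [
    {"prompt": "Customer: My order arrived damaged. What can I do?",
     "response": "I'm sorry about that. I can arrange a replacement "
                 "or a full refund -- which would you prefer?"},
    {"prompt": "Customer: How do I change my delivery address?",
     "response": "You can update it under Account > Addresses any time "
                 "before the order ships."},
]

# One JSON object per line: the JSONL format many fine-tuning
# services accept as training input.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The curation step matters as much as the format: every example a human has reviewed is one less chance for the model to learn the wrong answer.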

Why now?

The AI industry has exploded in recent years. Estimated to have generated almost $89 billion in 2022, the industry shows no signs of slowing its meteoric rise. In fact, experts predict that the valuation of the AI market will reach $407 billion by 2027.

Although the popularity and use of these sophisticated tools continues to increase, the U.S. still lacks federal regulations governing their use. Without legislative guidance, it’s up to every individual employing a generative AI tool to ensure its ethical and responsible use. Business leaders must supervise their AI so they can quickly intervene if responses start veering into catastrophic untruth territory.

Before this technology advances further and becomes fully entrenched in operations, forward-thinking organizations will implement policies on ethical AI usage to establish the highest standards possible and position themselves ahead of the curve of future legislation.

Even though we can’t leave AI alone, we can still responsibly capitalize on its benefits by using the right tools with the technology, giving it the right job and training it appropriately. The toddler stage of childhood, like this era of gen AI, can be rife with difficulties, but every challenge presents an opportunity to improve and achieve sustained success.

Yan Zhang is COO of PolyAI.



Author: Yan Zhang, PolyAI
Source: Venturebeat

