
3 generative AI misunderstandings resolved for enterprise success



Polarization is the way of things today. From politics to coffee, we are all on one side or another. In the tech space, you are either cheering the arrival of AI for the masses or grumbling like a curmudgeon about its inapplicability.

Just six months ago, many of us hadn’t really even heard of generative AI. Today, we have ChatGPT, Bing AI and a flood of startups; it makes the crypto wave look like a ripple in the pond. So, are we to surrender our jobs to the algorithms, or is there a bit more nuance to the story?

Microsoft and OpenAI together really made the news with conversational chat tools based on transformer neural network technology and trained on massive and varied data from the web. These tools went off the rails quite quickly, with often surreal and sometimes disturbing responses.

Understanding a daunting environment

It’s not terribly surprising if you understand the underlying technology, though it was a risk that wrong-footed Google and other major tech companies that professed to be AI frontrunners.


It also set up an ecosystem of “thin wrapper” AI companies that simply used Microsoft’s APIs to quickly build products exploiting most folks’ lack of knowledge in the space, and it kicked off an arms race to acquire the foundational language models that underpin the API ecosystem; see Amazon partnering with HuggingFace.
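To make “thin wrapper” concrete, here is a minimal sketch of what such a product can amount to under the hood: user text forwarded to someone else’s hosted model behind a branded function. The endpoint, payload shape and response field below are hypothetical placeholders, not any specific vendor’s API.

```python
# Illustrative sketch of a "thin wrapper": the product is essentially a branded
# pass-through to someone else's hosted model. The endpoint, payload and
# response fields are hypothetical placeholders, not a real vendor contract.
import os

import requests

API_URL = "https://api.example-llm-vendor.com/v1/generate"  # hypothetical endpoint


def branded_assistant(prompt: str) -> str:
    """Forward the user's prompt to the hosted model and return its text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['VENDOR_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 256},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response schema


if __name__ == "__main__":
    print(branded_assistant("Summarize our Q2 sales notes."))
```

Everything differentiating about such a product lives in the UI and the branding; the model, and therefore the core capability, belongs to someone else.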

Most enterprises understand, at least at a general level, that AI can be a huge benefit for their companies. However, because the landscape is moving fast, it’s important to sort out the real requirements for success so you don’t end up with overly optimistic end goals, vendor lock-in or vendor disappearance, and so you do end up with sustainable deployments that benefit the company for the long run.

Understanding the environment can be a bit daunting, and with the polarization, a series of myths has arisen. Let’s look at these misunderstandings as a few broad claims, followed by the facts behind each headline story.

Myth one: Bigger models are always better

Truth: The success of these tools is almost entirely dependent on the data that the algorithm was trained on. Ignore the talk about model parameter size. If you want to apply these tools to enterprise problems like code, legal, medical or anything in between, make sure you know the training data deeply.

For example, a model trained with more code but fewer overall parameters may well be better suited to training an AI tool for writing code than one trained on literary data but with more parameters. The better you understand what went into the model, the more confident you’ll be in the resulting suggestions. Almost all verticals that make use of these tools will fine-tune the foundational models on a well-understood and quality-controlled data set applicable to their focused solution space.
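As a rough illustration of that last point, the sketch below fine-tunes a small foundational checkpoint on a curated, domain-specific corpus using Hugging Face Transformers. The base model name and data file are placeholders; the point is that the quality-controlled dataset, not raw parameter count, is what makes the result fit for the vertical.

```python
# Minimal fine-tuning sketch with Hugging Face Transformers. The checkpoint
# name and dataset path are placeholders; swap in the foundational model and
# the curated, quality-controlled corpus for your own vertical.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # placeholder foundational checkpoint
DATA_FILES = {"train": "curated_code_corpus.jsonl"}  # placeholder curated data

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Load the domain-specific corpus (one JSON object per line with a "text" field).
raw = load_dataset("json", data_files=DATA_FILES)


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


tokenized = raw.map(tokenize, batched=True, remove_columns=raw["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-domain-model",
        per_device_train_batch_size=4,
        num_train_epochs=1,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The mechanics are the easy part; knowing what is in the corpus, and keeping it quality-controlled, is where the enterprise value comes from.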

Myth two: I can hitch my horse to any of these AI companies because it’s all the same model underneath

Truth: The companies that will survive this hurricane of AI will be the ones that can use any foundational model, can fine-tune that model on a customer’s data, and have the support and depth of knowledge to help work through a variety of deployment methodologies.

Thin wrappers around a public API can be dressed up well with UI/UX, but ultimately the legal, security and longevity concerns around those companies need to be well understood. Another aspect of this truth is the consolidation of the arena.

Like most other tech trends, there is a danger of capture here, where a big player loss-leads with a closed model API and the apparent ecosystem of solution providers is really just a collection of thin UX wrappers over the same backend. Building a fine-tuned, industry-specific solution requires a company with in-house machine learning (ML) talent, control over its own infrastructure costs and the ability to deploy this technology in a way that meets enterprise security and compliance needs.

Myth three: AI is going to replace lower-skilled or more junior technical staff

This final myth has aspects of the first two, but it points to something larger and more strategy-focused: Some people have claimed that AI will be a replacement for human beings.

This isn’t true, but not for the reason one might think. With time and advancements in computing technology, generative AI might well become “technically” capable of replacing human beings, but it won’t, for the far simpler reason of human psychology. There will always be a need for humans to work together to build something wonderful.

This means managing and growing people. Emotions, nuance, humor and repair are all needed to build something greater than each of us could on our own. Slotting in an AI as a replacement teammate will not build confidence or camaraderie, and ultimately that affects company culture. Too heavy a reliance on AI, whether in practice or in concept, is a recipe for a difficult company success story. Let’s be cautious and celebrate companies that thoughtfully enhance employee productivity rather than seek to replace it.

Busting the early myths of promised AI

There is considerably more nuance to the discussion, but weighing the early myths of promised AI rapture against a few realistic truths will go a long way toward separating fiction from fact in this fast-moving space.

The key is to pay attention to the model’s training data. Understand your vendor’s ML expertise and its willingness to fine-tune a solution on your enterprise’s data. And lastly, apply AI in a way that benefits employees, not just as a way to save costs.

Just these few concepts alone will go a long way in helping enterprises choose a solution that can be both impactful and long-lasting.

Dror Weiss is founder and CEO of Tabnine.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!



Author: Dror Weiss, Tabnine
Source: Venturebeat

