
What OpenAI’s latest announcements mean for enterprise businesses



At its inaugural developer conference in San Francisco on Monday, OpenAI made several major announcements, including the introduction of GPT-4 Turbo, customizable versions of ChatGPT with GPT Builder, and the new Assistants API, which empowers programmers to swiftly build tailored “assistants” into their applications. 

But what do these new offerings mean for enterprise businesses that have spent the past year figuring out how to take advantage of generative AI? VentureBeat asked a variety of enterprise leaders about the impact on their enterprise GenAI efforts.

Democratizing generative AI for the enterprise

Sheldon Monteiro, chief product officer at global digital transformation consultancy Publicis Sapient, told VentureBeat that with GPTs and more APIs, OpenAI has made it far more accessible for everyday people to create assistants that perform specific roles, a task that previously required technical expertise.

This was possible previously for large enterprises with developer resources, Monteiro explained. But what OpenAI has done is “democratize that for enterprises with fewer resources so any business person can make a specialized agent and share it,” he said. 
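
For teams that do have developers, the programmatic route to the same idea is the new Assistants API mentioned in the announcements. The sketch below shows what defining and running a role-specific assistant could look like with OpenAI's Python SDK; the assistant's name, its instructions, and the polling loop are illustrative assumptions, not details from OpenAI's announcement.

```python
# Minimal sketch: a role-specific "assistant" via the Assistants API (beta).
# Assumes the openai Python SDK v1.x and an OPENAI_API_KEY environment variable.
# The assistant name and instructions below are hypothetical.
import time

from openai import OpenAI

client = OpenAI()

# Define a specialized agent once; it can then be reused across conversations.
assistant = client.beta.assistants.create(
    name="Contract Review Helper",  # illustrative role
    instructions="You review supplier contracts and flag unusual payment terms.",
    model="gpt-4-1106-preview",     # GPT-4 Turbo preview model
    tools=[{"type": "code_interpreter"}],
)

# Each user conversation lives in its own thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the payment terms in this contract: ...",
)

# Kick off a run and poll until the assistant has responded.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message in the thread holds the assistant's reply.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```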


Alex Beckman, founder and CEO at ON Platform, added that the announcements will “significantly enhance the enterprise’s engagement with generative AI” because they not only make the API more powerful and user-friendly, but also allow for refined control over both the data fed into the AI and the information it produces.

“This results in more coherent and contextually relevant content, suitable for a broader spectrum of applications and use cases, and leverages recent world knowledge as of April 2023,” he said. 

Still, while the announcements are great for usability and performance, they rely on the same underlying GPT-4 foundation model, he added. “OpenAI’s user interfaces are also still lagging behind, which could hinder the learning curve and adoption for enterprises,” he said.

OpenAI’s GPT agents offer productivity gains

Bob Brauer, founder and CEO of Interzoid, a data usability consultancy and generative AI-powered data quality solutions provider, said OpenAI’s new GPTs, which the company describes as “custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills,” can reference specific knowledge sources, such as a company handbook or technical field guides, to inform their responses, and can be deployed for use company-wide. This means the vast repositories of knowledge that companies have amassed over the years can now be tapped through AI chatbots and shared across an organization.

“The potential productivity gains are incalculable,” said Brauer. “For instance, a human resources department could convert an entire 200-page handbook into a chatbot format, accessible to all employees, thus saving significant time spent on inquiries for both the department as well as every employee, especially new hires, getting them up to speed rapidly.” 
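
As a hedged illustration of Brauer’s handbook scenario, here is how an assistant grounded in an uploaded document could be set up with the retrieval tool the Assistants API shipped with at launch. The file name, assistant name, and instructions are hypothetical.

```python
# Sketch of the handbook example: an assistant grounded in an uploaded document,
# using the launch-era retrieval tool of the Assistants API (beta).
# The file name, assistant name, and instructions are hypothetical.
from openai import OpenAI

client = OpenAI()

# Upload the handbook so the assistant can search it when answering.
handbook = client.files.create(
    file=open("employee_handbook.pdf", "rb"),
    purpose="assistants",
)

hr_bot = client.beta.assistants.create(
    name="HR Handbook Assistant",
    instructions=(
        "Answer employee questions using only the uploaded handbook. "
        "Point to the relevant section when you can."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # document retrieval over uploaded files
    file_ids=[handbook.id],
)

# The assistant ID can then sit behind an internal chat UI for all employees.
print(hr_bot.id)
```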

Longer context window of GPT-4 Turbo could be a game-changer 

The longer, 128K-token context window of GPT-4 Turbo is “exciting,” added Monteiro. The equivalent of roughly 300 pages of text means GPT will offer improved context understanding, enhanced document summarization, more cohesive long-form narratives, more coherent multi-part conversations, and improved fine-tuning, he said.

“For example, we often use GPT for analyzing legacy code,” he explained. “Old code, for example COBOL, is not modular and many of these old programs are longer than the previous context window would allow. The new longer context window enables us to use GPT to understand the entire program without having a developer try to break it up in advance.” 
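
A rough sketch of the kind of call Monteiro describes: passing an entire legacy program to GPT-4 Turbo in a single request rather than chunking it first. The file path and prompt are illustrative, and very large programs would still need a token count checked against the 128K limit.

```python
# Sketch of the legacy-code use case: send a whole COBOL program to GPT-4 Turbo
# in one shot instead of splitting it to fit a smaller context window.
# The file path and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

with open("legacy/payroll.cbl", "r", encoding="utf-8") as f:
    cobol_source = f.read()  # long programs may now fit within the 128K-token window

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview with the 128K context window
    messages=[
        {
            "role": "system",
            "content": "You are a COBOL modernization analyst. Explain program "
                       "structure, data flows, and business rules in plain English.",
        },
        {"role": "user", "content": f"Analyze this program:\n\n{cobol_source}"},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```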

Piyush Tripathi, a lead engineer at Square, said the launch of GPT-4 Turbo, with its knowledge of world events through April 2023, gives businesses “superior” understanding capabilities.

For example, while leading communication platform development at Square, Tripathi said he contributed to a mission-critical project: making sense of customer concerns and queries from a user base of nearly 23 million small and medium-sized businesses.

“The sheer volume of the task seemed daunting,” he said, pointing out that the company used AI to tackle it, but the technology at the time couldn’t handle the high volume of data.

“So, we had to supplement our tech with some old-fashioned manual work, picking out summaries from each case for further use,” he said. “If we’d had today’s OpenAI GPT-4 Turbo back then, it would have been a game changer. Thanks to its larger context window, it could handle larger chunks of conversation at once. This would have made our summarizing work much easier, freeing us from a good chunk of manual work.” 
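
A brief sketch of what Tripathi describes, under the assumption that the summarization runs through the Chat Completions API: many customer messages batched into one request now that the context window is large enough. The ticket text and prompt are illustrative.

```python
# Sketch of batching many customer messages into one summarization request,
# enabled by the larger context window. The ticket data and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

tickets = [
    "Card reader keeps disconnecting during checkout...",
    "Invoice emails are landing in customers' spam folders...",
    # ...many more in production, batched to stay under the 128K-token limit
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "Summarize the recurring themes in these support tickets."},
        {"role": "user", "content": "\n\n---\n\n".join(tickets)},
    ],
)
print(response.choices[0].message.content)
```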

Do OpenAI’s announcements address the biggest challenges of GenAI?

Not everyone applauded the full scope of OpenAI’s Dev Day announcements as game-changing for the enterprise. For example, Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab, said that while there are some upsides, none of the announcements address the central challenge: developing and operationalizing production-grade GenAI applications. Among the upsides, GPTs make it easier and cheaper than ever to create generative AI proof-of-concept applications, thanks to the optimized GPT-4 Turbo and the new pricing structure, and the Copyright Shield should help allay fears that have kept experiments from getting started.

“Companies complain that the OpenAI models and APIs do not meet their needs for data security, control, scalability, reliability, latency, or even performance,” he explained. “These announcements do little, if anything, to significantly address these concerns. They make it even easier to get started – something which was never a meaningful problem with OpenAI’s offerings – without addressing the downstream challenges that are crucial for delivering value.”  

As companies progress in their generative AI journeys, he added, they are switching to open-source models and other proprietary offerings that provide greater control, and “these announcements will do little to stop this trend from accelerating.” 

Carlsson even maintained that many companies are “setting themselves up for failure” with generative AI. 

“They believe the narrative that they can outsource the development and operationalization of their GenAI capabilities to third parties, while they focus on design and application development,” he explained. “Unfortunately, the opposite is true. GenAI applications require just as much, if not more, in-house expertise and capabilities than traditional AI and ML-based applications.”  

Organizations need to experiment quickly and safely to drive real impact

Jon Hackett, VP technology at Huge, pointed out that OpenAI is very new in the eyes of enterprise organizations “whose entire livelihood is predicated on managing risk and costs.” 

The risks involved with generative AI are still unclear to them, especially when the provider isn’t a tried-and-true solution, he explained, while OpenAI’s pricing remains high depending on the scale and the ways companies integrate with it. “They are often too costly given the perceived value they drive for an organization,” he said.

With those challenges in mind, the new Assistants API and GPTs are, he said, a “smart way to help companies experiment quickly and at low or no cost before making a deeper investment in custom generative AI experiences.”

In many ways, he said, this is similar to the offering Google launched with its Vertex AI Gen AI App Builder tool, and looks very similar to the AI Studio offering Meta announced at Meta Connect 2023.

“Any movement on better pricing and rate limits will help drive adoption and provide better ROI, and anything that allows teams to move from concept to prototypes for user testing will help push adoption,” he said. 

But moving forward, organizations need to experiment quickly and safely with generative AI to drive real impact on their internal productivity and the experiences they deliver to consumers, he added.

“This is not a space where they can sit back and wait for the dust to settle,” he said. “They either need to develop an edge and competency in AI, or their competitors will surpass them – fast. If companies are finding that they need assistance learning and cultivating this space within their organizations, they should seek the right mix of partners to help guide them along the path.” 



Author: Sharon Goldman
Source: VentureBeat
Reviewed By: Editorial Team
