AI & RoboticsNews

How Microsoft can become the biggest winner of generative AI




Since the release of ChatGPT in November, there has been much speculation about the possible killer application of advanced large language models (LLMs). A while back, there were reports that Microsoft would integrate ChatGPT into its Bing search engine to get ahead of Google. There have also been many discussions about something like ChatGPT replacing search altogether.

While I’m not sold on either of those ideas, I think that we’re just beginning to explore the huge business potential of LLMs and other generative artificial intelligence technologies.

And Microsoft has the chance to become the big winner of this new wave of innovation that is about to be unleashed. Azure OpenAI Service, now generally available, can be Microsoft’s winning card in the race to dominate the fast-growing market for generative AI.

Azure OpenAI Service vs. OpenAI API

Azure OpenAI Service launched in November 2021 but was initially available only through a limited, sales-assisted process. Now, anyone can apply for access, provided their use case conforms to Microsoft’s responsible AI principles.

Currently, Azure OpenAI Service supports base and fine-tuned GPT-3 models, base and fine-tuned Codex series, and LLM embeddings. Microsoft also added DALL-E 2 to OpenAI Service in October, though it is still not available as part of the public product. According to the Microsoft blog, the company will soon add support for ChatGPT.

Azure OpenAI Service essentially mirrors the OpenAI API, though it has several advantages. For customers already on Microsoft’s cloud, getting access to OpenAI’s technology through Azure will be much easier. And since many companies already use Microsoft’s machine learning and DevOps products, managing their GPT-3 and Codex instances on the Azure platform will be straightforward.
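To make the parallel concrete, here is a minimal Python sketch of how the same completion request is addressed on each platform. The endpoint, deployment and version strings are placeholders, not real values; the practical difference is that Azure routes requests to your own resource and to deployments you create, while the OpenAI API names models directly:

```python
def completion_config(provider: str, prompt: str) -> dict:
    """Sketch of the request shape for a text completion on each platform.
    Endpoint, deployment and api-version values are hypothetical placeholders."""
    if provider == "azure":
        # Azure OpenAI: calls go to your own Azure resource, and "engine"
        # names a deployment you created, not the underlying model.
        return {
            "api_base": "https://YOUR-RESOURCE.openai.azure.com/",
            "api_version": "2022-12-01",
            "engine": "my-gpt3-deployment",
            "prompt": prompt,
        }
    # OpenAI API: calls go to api.openai.com and address the model by name.
    return {
        "api_base": "https://api.openai.com/v1",
        "engine": "text-davinci-003",
        "prompt": prompt,
    }
```

Because the request shapes are so close, code written against one service can usually be pointed at the other by changing only the addressing fields, which lowers the cost of migrating between them.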

Azure also offers enterprise-level security features that are required in many industries. And it supports features such as choosing the geographical region of the cloud instance and adding content filters to prevent misuse.

Interestingly, the prices of Azure OpenAI Service are more competitive than OpenAI API. In OpenAI API, the prices of fine-tuned GPT-3 models are higher than base models. In Azure, both base and fine-tuned models have the same pricing. Azure also allows customers to pay for fine-tuned models using a per-hour payment model instead of the usual token-based pricing, which is more convenient for applications with high-volume model usage.
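The difference between the two billing schemes comes down to simple arithmetic. The prices in this sketch are purely illustrative and are not Microsoft’s or OpenAI’s actual rates; the point is that for sustained high-volume traffic, a flat hourly rate can undercut per-token billing by a wide margin:

```python
def monthly_cost_tokens(tokens_per_month: int, price_per_1k: float) -> float:
    """Token-based billing: pay for every 1,000 tokens processed."""
    return tokens_per_month / 1000 * price_per_1k

def monthly_cost_hourly(hours_per_month: float, price_per_hour: float) -> float:
    """Hourly billing for a hosted fine-tuned model: pay for uptime,
    regardless of how much traffic the model serves."""
    return hours_per_month * price_per_hour

# Hypothetical rates: $0.02 per 1K tokens vs. $3.00 per hour.
high_volume_app = monthly_cost_tokens(2_000_000_000, 0.02)  # 2B tokens/month
always_on_model = monthly_cost_hourly(730, 3.00)            # ~1 month of uptime
```

Under these made-up numbers, the token-billed workload costs $40,000 a month while the always-on hourly model costs $2,190, which is why predictable per-hour pricing appeals to applications with heavy, steady usage.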

Microsoft and OpenAI both profit from the expanding market for Azure OpenAI Service and the OpenAI API. The OpenAI API is powered by Microsoft’s cloud, which means that as its customer base grows, so does OpenAI’s Azure bill. Microsoft, in turn, has a licensing deal with OpenAI. The details of the deal have not been made public (aside from the fact that Microsoft has exclusive licensing rights to OpenAI’s technology), but as usage of Azure OpenAI Service grows, Microsoft’s licensing fees will increase.

However, in the long run, I expect Azure to eat into OpenAI’s business as the market for generative AI grows and matures. Azure is much more flexible than OpenAI API and it also offers a host of other services that are critical to large-scale software and machine learning development.

The OpenAI API will still remain a hub for exploration and innovation, but the high-paying customers that want to build scalable products will slowly migrate to Azure. This will make OpenAI increasingly dependent on Microsoft as a source of revenue for its models.

The robustness, flexibility, and convenience of Azure will also enable it to compete against the emerging open-source and commercial alternatives. Microsoft’s AI-optimized, scalable hardware infrastructure allows it to deliver generative models at competitive prices. At the same time, the complexity and upfront costs of setting up hardware for generative models will keep hosted services like Azure OpenAI the preferred option for many firms that lack the in-house talent to stand up open-source models.

The market for RLHF

Before ChatGPT, the prominent way to train LLMs and other generative models was unsupervised or self-supervised learning. The model is given a very large corpus of text, software code, images or other data and left on its own to learn the relevant patterns. During training, the model masks parts of the data and tries to predict them. It then reveals the masked sections, compares its predictions with the ground truth, and adjusts its internal parameters to improve future predictions. By repeating this process over and over, the LLM learns a statistical representation of the training corpus and can use it to generate plausible sequences of text, computer instructions, image pixels and so on.
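The mask-predict-compare loop can be illustrated with a deliberately toy sketch, in which a simple bigram frequency table stands in for the model and a count update stands in for a gradient step. None of this resembles a real LLM; it only mirrors the shape of the training loop described above:

```python
import random

def self_supervised_step(tokens: list, model_counts: dict) -> bool:
    """One toy training step: hide a token, predict it from the preceding
    context, reveal the ground truth, then update the model's statistics
    (a stand-in for a gradient update). Returns whether the guess was right."""
    i = random.randrange(1, len(tokens))
    context, target = tokens[i - 1], tokens[i]  # tokens[i] is "masked"
    # "Predict" the masked token as the most frequent follower of the context.
    followers = model_counts.get(context, {})
    prediction = max(followers, key=followers.get) if followers else None
    # Reveal the ground truth and correct the model toward it.
    model_counts.setdefault(context, {}).setdefault(target, 0)
    model_counts[context][target] += 1
    return prediction == target
```

Repeating this step over a large corpus gradually makes the counter's predictions match the statistics of the data, which is, in caricature, what self-supervised pretraining does at vastly larger scale.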

ChatGPT showed the power of adding human control to the training process. ChatGPT was trained using reinforcement learning from human feedback (RLHF). Instead of pure unsupervised learning, the engineers at OpenAI used human annotators to guide the model at different stages of the training process. The team first fine-tuned a pretrained model using a set of prompts and responses written by human experts. Next, they created a “reward model” that ranked the language model’s output. The reward model was trained on output quality scores provided by human reviewers. Finally, they used the reward model to further train the model and align its output with human preferences. The impressive results of ChatGPT show how far LLMs can be pushed with human assistance.
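The reward-model stage of that pipeline can be caricatured in a few lines. This is a simplified, hypothetical sketch: real systems update the language model's parameters with reinforcement learning algorithms such as PPO, whereas here a plain preference table stands in for the policy and any scoring function stands in for the trained reward model:

```python
def rlhf_round(candidates: list, reward_model, policy_prefs: dict,
               lr: float = 0.1) -> str:
    """One simplified RLHF update: score candidate responses with the
    reward model (itself trained on human rankings), then shift the
    policy's preferences toward the highest-scoring candidate."""
    ranked = sorted(candidates, key=reward_model, reverse=True)
    best = ranked[0]
    for c in candidates:
        # Reinforce the winner, penalize the rest.
        policy_prefs[c] = policy_prefs.get(c, 0.0) + (lr if c == best else -lr)
    return best
```

Iterating this loop steers the policy toward outputs the reward model, and by proxy the human reviewers, rate highly, which is the alignment effect the article attributes to ChatGPT's training.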

With the success of ChatGPT, the market for RLHF-trained LLMs is likely to grow. Companies will want to use the technique to fine-tune LLMs like ChatGPT to follow application-specific instructions. But the pipeline for RLHF requires complicated development and management tools, including data preparation and annotation, reward model development, model and data versioning, regular retraining, model monitoring and control, and much more.

Fortunately for Microsoft, its Azure platform is well prepared to meet such requirements through its MLOps and data warehousing tools. This, along with its scalable cloud infrastructure and software development tools, will give Microsoft an edge in this more specialized niche of generative models.

Microsoft missed the boat on smartphones and mobile platforms. But its early investment in OpenAI, an AI lab that at the time didn’t have a profitable business model, has given it the chance to grab a big share of the market for the next wave of disruptive innovation.



Author: Ben Dickson
Source: Venturebeat
