As more and more enterprises look to power their internal workflows with generative AI, OpenAI is working to make implementation easier for them. Case in point: the Sam Altman-led company's latest move is built-in support for fine-tuning its GPT-3.5 Turbo large language model (LLM).
The development lets enterprises bring their own proprietary data to train the model and run it at scale. This kind of customization should make GPT-3.5 Turbo, which has been pre-trained on public data up to September 2021, better at handling business-specific use cases, and at creating differentiated experiences for each user or organization that deploys it.
GPT-3.5 Turbo is one of the models directly available to consumers for free through ChatGPT, but it can also be used independently of that product through paid application programming interface (API) calls, which companies can then integrate into their own products and services.
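That API access boils down to a single HTTPS call. Below is a minimal sketch using only Python's standard library; the endpoint and payload shape follow OpenAI's publicly documented chat completions API, but the `build_payload` and `ask` helper names are our own, and a real key in `OPENAI_API_KEY` is needed to actually send a request.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble a single-turn chat completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    """POST the prompt and return the assistant's reply (makes a network call)."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]
```

The same payload shape works for a fine-tuned model: only the `model` string changes.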
OpenAI says that early tests have shown that a custom-tuned GPT-3.5 Turbo can match or even outperform the flagship GPT-4 in certain narrow tasks. It plans to open the latter for fine-tuning this fall.
What to expect from fine-tuning GPT-3.5 Turbo?
As OpenAI writes in a blog post, fine-tuning pre-trained GPT-3.5 Turbo on company data will give enterprise developers certain benefits, including better instruction-following from the model.
For instance, the model could be customized to respond in German whenever it is prompted in that language. It could also be tuned to format responses in a particular way, such as completing given code snippets, or to answer in a tone that matches a brand’s voice.
Beyond this, OpenAI claims that customization could help businesses shorten their prompts and speed up API calls while reducing costs at the same time. In early tests, developers were able to reduce their prompt size by up to 90% by fine-tuning instructions into the model itself.
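To see how baking instructions into the model shortens prompts, compare a long system-style prompt with the stripped-down version a fine-tuned model would need. Both prompts here are invented for illustration:

```python
# Before fine-tuning: tone and policy instructions ride along on every request.
BASE_MODEL_PROMPT = (
    "You are a support assistant for Acme Corp. Always answer in a friendly, "
    "concise tone, cite the relevant help-center article, never promise "
    "refunds, and sign off as 'Acme Support'.\n\n"
    "Customer question: Where is my order #1234?"
)

# After fine-tuning on examples that demonstrate the tone and policy,
# each request can carry just the task-specific text.
FINE_TUNED_PROMPT = "Customer question: Where is my order #1234?"

savings = 1 - len(FINE_TUNED_PROMPT) / len(BASE_MODEL_PROMPT)
print(f"Prompt shrinks by {savings:.0%}")
```

The measurement here is characters rather than tokens, but the proportions are comparable: the recurring instructions, not the task text, dominate the prompt.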
The company launched GPT-3.5 Turbo earlier this year and claims it is its most capable and cost-effective model in the GPT-3.5 family, optimized for chat using the Chat completions API as well as for traditional completions tasks. It notes that the fine-tuned version of this model can handle 4,000 tokens at a time — twice what earlier GPT-3 models available for fine-tuning could interpret.
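A rough way to sanity-check a request against that 4,000-token window is the common rule of thumb of roughly four characters per English token. This is only an estimate (real budgeting should use a proper tokenizer such as OpenAI's tiktoken), and the helper names below are illustrative:

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_window(prompt: str, reply_budget: int = 500, window: int = 4000) -> bool:
    """Check that prompt plus an expected reply fits the context window."""
    return estimate_tokens(prompt) + reply_budget <= window
```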
How to fine-tune with OpenAI
According to OpenAI’s blog, fine-tuning involves three main steps: preparing the data, uploading the files and creating a fine-tuning job. Once fine-tuning is finished, the model is available for production use with the same shared rate limits as the underlying model.
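The three steps can be sketched as follows. The per-line chat format matches OpenAI's published fine-tuning docs, while the file name and example rows are invented:

```python
import json

# Step 1: prepare the data — one chat-formatted training example per JSONL line.
examples = [
    {"messages": [
        {"role": "system", "content": "You are Acme's support bot."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "You can track it at acme.example/track."},
    ]},
]
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Steps 2 and 3 go through the API (2023-era endpoints), roughly:
#   POST /v1/files            with purpose="fine-tune" and train.jsonl
#   POST /v1/fine_tuning/jobs with model="gpt-3.5-turbo" and the uploaded file id
# When the job completes, it returns a model name (something like
# "ft:gpt-3.5-turbo:my-org::abc123") to pass to chat completions as usual.
```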
“It is very important to us that the deployment of fine-tuning is safe. To preserve the default model’s safety features through the fine-tuning process, fine-tuning training data is passed through our Moderation API and a GPT-4 powered moderation system to detect unsafe training data that conflict with our safety standards,” OpenAI notes in the blog post.
The company also emphasized that the data sent in and out of the fine-tuning APIs and systems is owned by the user and is not used for training any model (from OpenAI or any other enterprise) besides the customer’s own.
As for pricing, OpenAI is charging $0.008 per 1,000 tokens for training GPT-3.5 Turbo, $0.012 per 1,000 tokens for input usage and $0.016 per 1,000 tokens for output.
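Some back-of-the-envelope math with the training rate quoted above. This assumes billing scales with the number of training epochs, as OpenAI's fine-tuning docs describe, and the token counts are invented:

```python
TRAINING_COST_PER_1K_TOKENS = 0.008  # USD, GPT-3.5 Turbo training rate

def training_cost(tokens_in_file: int, n_epochs: int = 3) -> float:
    """Total training cost: billed tokens = tokens in file * epochs."""
    billed_tokens = tokens_in_file * n_epochs
    return billed_tokens / 1000 * TRAINING_COST_PER_1K_TOKENS

# A 100,000-token training file run for 3 epochs bills 300,000 tokens:
print(f"${training_cost(100_000):.2f}")  # → $2.40
```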
Fine-tuning for GPT-4 and more coming soon
Moving ahead, OpenAI plans to open up GPT-4, its flagship generative model that can also understand image inputs, for fine-tuning, with a targeted timeline of later this fall.
Further, to streamline the process, the company will launch a fine-tuning user interface. This will give developers easier access to information about ongoing fine-tuning jobs, completed model snapshots and other customization details. As of now, however, there is no word on exactly when this UI will debut.
OpenAI’s move to build in more enterprise-friendly tools for one of its signature LLMs makes sense but also puts it into direct competition with the growing ecosystem of startups and established players that offer their own third-party LLM fine-tuning solutions, among them Armilla AI and Apache Spark.
Author: Shubham Sharma
Source: Venturebeat
Reviewed By: Editorial Team