AI & Robotics News

Iterate introduces AppCoder LLM, allowing enterprises to build AI apps with natural language

At a time when figuring out how to use AI to drive business gains is the “Holy Grail” of almost every enterprise, vendors are racing to introduce new and lucrative tools to make it easier for their customers to build high-performing AI/ML-powered applications.

The focus has largely been on low-code development, but Iterate is taking steps to get rid of the coding layer entirely. The California-headquartered company, known for building and deploying AI and emerging technologies to private, edge or cloud environments, today announced the launch of AppCoder LLM – a fine-tuned model that can instantly generate working and updated code for production-ready AI applications using natural language prompts.

Integrated into Iterate’s Interplay application development platform, AppCoder LLM works with text prompts, just like any other generative AI copilot, and the company says it outperforms existing AI-driven coding solutions, including WizardCoder. This gives developer teams quick access to accurate code for their AI solutions, whether for object detection or document processing.

“This innovative model can generate functional code for projects, significantly accelerating the development cycle. We encourage developer teams to explore Interplay-AppCoder LLM and the powerful experience of building out code automatically with our model,” Brian Sathianathan, CTO of Iterate.ai, said in a statement.

At its core, Iterate Interplay is a fully containerized drag-and-drop platform that connects AI engines, enterprise data sources and third-party service nodes to form the flow required for a production-ready application.

Developer teams can open each node in this interface to add custom code, which is exactly where AppCoder comes in: it lets users generate that code simply by giving instructions in natural language.

“Interplay-AppCoder can handle computer vision libraries such as YOLOv8 for building advanced object detection applications. We also have the ability to generate code for LangChain and Google libraries, which are among the most commonly used libraries (for chatbots and other capabilities),” Sathianathan told VentureBeat.

A fast-food drive-thru operator, for instance, could connect a video data source and simply ask Interplay-AppCoder to write a car-identification application using the YOLOv8 model from the Ultralytics library. The LLM produces the desired code for the application right away.
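Iterate has not published the code its model emits, but a sketch of the kind of application such a prompt might yield looks like the following. It assumes the Ultralytics `YOLO` API; the `filter_cars` helper, the `yolov8n.pt` weights choice and the confidence threshold are illustrative, not taken from AppCoder's actual output.

```python
# Sketch of a YOLOv8-based car detector of the kind such a prompt
# might generate. Requires: pip install ultralytics

def filter_cars(detections, threshold=0.5):
    """Keep only 'car' detections at or above a confidence threshold.

    `detections` is a list of (class_name, confidence) tuples, as might
    be extracted from a YOLOv8 results object.
    """
    return [(name, conf) for name, conf in detections
            if name == "car" and conf >= threshold]

def detect_cars(frame_path):
    """Run YOLOv8 on a single frame and return filtered car detections."""
    from ultralytics import YOLO  # lazy import; needs the package installed
    model = YOLO("yolov8n.pt")    # small pretrained Ultralytics model
    results = model(frame_path)[0]
    names = results.names         # class-id -> class-name mapping
    detections = [(names[int(box.cls)], float(box.conf))
                  for box in results.boxes]
    return filter_cars(detections)
```

In practice the generated node would call something like `detect_cars("drive_thru_frame.jpg")` for each frame arriving from the video source.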

Sathianathan noted that his team, while testing this capability, built a core, production-ready detection app in just under five minutes. This kind of acceleration in app development can cut costs and boost team productivity, freeing developers to focus on strategic initiatives critical to business growth.

In addition to being fast, AppCoder LLM also produces better outputs than Meta’s Code Llama and than WizardCoder, which itself outperforms Code Llama.

Specifically, in an ICE Benchmark evaluation that ran the 15B versions of the AppCoder and WizardCoder models against the same LangChain and YOLOv8 libraries, the Iterate model scored 300% higher on functional correctness (2.4/4.0 versus 0.6/4.0) and 61% higher on usefulness (2.9/4.0 versus 1.8/4.0).
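The percentage figures follow directly from the raw scores; a quick back-of-the-envelope check (the helper function here is ours, not part of the benchmark):

```python
# Quick arithmetic check of the reported ICE Benchmark improvements.
def pct_higher(ours, theirs):
    """Percentage by which `ours` exceeds `theirs`."""
    return (ours - theirs) / theirs * 100

# Functional correctness: AppCoder 2.4/4.0 vs WizardCoder 0.6/4.0
# Usefulness:             AppCoder 2.9/4.0 vs WizardCoder 1.8/4.0
print(round(pct_higher(2.4, 0.6)))  # 300
print(round(pct_higher(2.9, 1.8)))  # 61
```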

The functional correctness score measures how well the model’s output passes unit tests, given the question and reference code; the usefulness score indicates that the output is clear, logically ordered and human-readable while covering all the functionality in the problem statement, as judged against the reference code.

“Response time when generating the code on an A100 GPU was typically 6-8 seconds for Interplay-AppCoder. The training was done in a conversational question>answer>question>context method,” Sathianathan added.
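Iterate has not published its training data schema, but a record in that conversational question > answer > question > context style might plausibly take the following shape. Every field name and value here is an illustrative assumption, not Iterate’s actual format.

```python
# Hypothetical fine-tuning record in a conversational
# question > answer > question > context style; the field names are
# illustrative assumptions, not Iterate's published schema.
sample_record = {
    "question": "Write a YOLOv8 car detector for a video stream.",
    "answer": "from ultralytics import YOLO\nmodel = YOLO('yolov8n.pt')",
    "follow_up_question": "Restrict the detections to cars only.",
    "context": "Ultralytics YOLOv8 API; hand-written reference code.",
}
print(list(sample_record))
```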

He noted that these results came after meticulous fine-tuning of CodeLlama-7B and -34B and WizardCoder-15B and -34B on a hand-coded dataset covering LangChain, YOLOv8, Vertex AI and many other modern generative AI libraries in daily use.

While AppCoder is now available to test and use, Iterate says this is just the start of its work aimed at simplifying the development of AI/ML apps for enterprises.

The company is currently building 15 private LLMs for large enterprises and is also focused on bringing the models to CPU and edge deployments, to drive scalability.

“Iterate will continue to provide a platform and expanding toolset for managing AI engines, emerging language models, and large data sets, all tuned for rapid development and deployment (of apps) on CPU and edge architectures. New models and data heaps are coming out all the time, and our low-code architecture allows for quick adaptation and integration with these emerging models. The space is rapidly expanding—and also democratizing—and we will continue to push innovative new management and configuration tools into the platform,” the CTO said.

Over the past two years, Iterate has nearly doubled its revenue. The company has Fortune 100 customers covering sectors such as banking, insurance, documentation services, entertainment, luxury goods, automotive services and retail.


Author: Shubham Sharma
Source: VentureBeat
Reviewed By: Editorial Team
