
GitHub Copilot expands market for AI code generation with new business plan


GitHub Copilot, a programming tool that uses artificial intelligence (AI) to make code suggestions, is releasing a new business plan enabling large companies with hundreds of developers to use its model at scale.

First previewed in 2021, Copilot uses OpenAI’s Codex large language model (LLM) to turn textual descriptions into source code. It can perform a range of tasks, from auto-completing a line of code to writing full blocks of code. A 2022 study by GitHub found that Copilot made developers considerably more productive and helped keep them in the flow while coding.

The new plan will enable GitHub and its owner Microsoft to expand Copilot at scale and solidify their position in automated programming, which could become one of the most lucrative markets for generative AI.

Better code suggestions

One of the important parts of the LLM life cycle is gathering user feedback and updating models. Since Copilot’s official launch, GitHub has used feedback from millions of developers to improve its model, increasing the quality of code suggestions and reducing latency. According to GitHub’s latest figures, Copilot now writes an average of 46% of the code for developers who use it, up from 27% in June 2022.

“With more accurate and responsive code suggestions, we’re seeing a higher acceptance rate [for code suggestions],” Shuyin Zhao, GitHub senior director of product management, told VentureBeat. “This means that developers using GitHub Copilot are staying in the flow and coding faster than before — and as a result — [are] more productive and happy.”

Context around code

GitHub has also added a few new tricks to improve the Copilot experience. One of them is a new paradigm called “Fill-in-the-Middle” (FIM), which gives Copilot more context to improve code suggestions.

Previously, Copilot used only the code before the user’s current cursor location as the input prompt for the LLM. With FIM, Copilot uses both the code that comes before and after the cursor. So, for example, if a developer is inserting a block of code in the middle of a file, Copilot has context about what comes not just before but also after the code it generates.

“Instead of only considering the prefix of the code, it also leverages the suffix of it and leaves a gap in the middle for Copilot to fill,” said Zhao. “This way, Copilot has more context about your intended code and how it should align with the rest of your program. We’ve seen FIM consistently produce higher quality code suggestions.”

At the same time, GitHub has developed various strategies to make sure FIM does not increase the latency of the model, said Zhao.
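GitHub has not published Copilot’s exact prompt format, but the FIM idea can be sketched in a few lines: the editor sends both the code before the cursor (the prefix) and the code after it (the suffix), and the model is asked to generate whatever bridges the two. The sketch below is a minimal illustration of that idea only; the sentinel tokens and helper function are invented for the example and are not Copilot’s actual interface.

```python
# Illustrative sketch of a fill-in-the-middle (FIM) prompt, not GitHub's
# actual implementation. The sentinel tokens and helper name are assumptions;
# real FIM-trained models define their own special tokens.

PREFIX_TOKEN = "<fim_prefix>"   # marks the code before the cursor (assumed name)
SUFFIX_TOKEN = "<fim_suffix>"   # marks the code after the cursor (assumed name)
MIDDLE_TOKEN = "<fim_middle>"   # asks the model to generate the gap (assumed name)


def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Combine the code before and after the cursor into a single prompt.

    A FIM-capable model completes the text after MIDDLE_TOKEN so that it
    joins the prefix to the suffix coherently.
    """
    return f"{PREFIX_TOKEN}{prefix}{SUFFIX_TOKEN}{suffix}{MIDDLE_TOKEN}"


# Example: the cursor sits between a function signature (prefix) and a return
# statement that already exists further down the file (suffix).
prefix = "def normalize(values):\n    "
suffix = "\n    return [v / total for v in values]\n"

prompt = build_fim_prompt(prefix, suffix)
# Seeing both sides of the gap, the model is far more likely to propose
# something like "total = sum(values)" than a prefix-only model would be.
```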

Multi-model approach

LLMs are often presented as end-to-end systems that can perform multiple tasks without any external help. But in practice, an LLM needs to be complemented with other tools and features to improve its robustness.

The latest Copilot update uses multiple models to address different challenges of generating source code. A lightweight client-side model provides context about the user’s behavior and preferences, such as whether they accepted the last suggestion. This information complements context provided by the source code and helps reduce unwanted suggestions. The client-side LLM is currently only available on VS Code, but GitHub plans to roll it out across other popular extensions. 
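GitHub has not described how the client-side model is wired up, but the general idea of gating suggestions on local behavior signals can be illustrated with a toy filter. The signals, weights, and threshold below are assumptions made for this sketch, not GitHub’s actual logic.

```python
# Illustrative sketch of a lightweight client-side filter, not GitHub's
# implementation. The idea: local signals about the user's recent behavior
# help decide whether a server-generated suggestion is surfaced at all.

from dataclasses import dataclass


@dataclass
class EditorContext:
    last_suggestion_accepted: bool   # did the user accept the previous suggestion?
    recent_rejections: int           # consecutive dismissals in this session
    cursor_in_comment: bool          # the user is typing prose rather than code


def should_show_suggestion(ctx: EditorContext, model_confidence: float) -> bool:
    """Heuristic gate combining local behavior signals with model confidence."""
    # Back off when the user has been dismissing suggestions repeatedly.
    penalty = 0.1 * ctx.recent_rejections
    # Reward a recent acceptance; the user is likely in a receptive flow.
    bonus = 0.1 if ctx.last_suggestion_accepted else 0.0
    # Comments often call for prose, not code, so require higher confidence there.
    threshold = 0.6 if ctx.cursor_in_comment else 0.4
    return model_confidence + bonus - penalty >= threshold


# Example: a moderately confident suggestion is shown to an engaged user...
print(should_show_suggestion(EditorContext(True, 0, False), 0.5))   # True
# ...but suppressed after several rejections in a row.
print(should_show_suggestion(EditorContext(False, 4, False), 0.5))  # False
```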

Another LLM vets the code generated by Copilot for security holes. Generating insecure code has been one of the main concerns regarding code generators such as Copilot and Codex. This second AI system approximates the behavior of static analysis tools and detects basic vulnerabilities such as SQL injection, path injection, and the insertion of sensitive information into code.

Security integrations

Traditional static application security testing (SAST) tools are meant to review the entire application code at the compile and build stages without time constraints. In contrast, the AI code evaluator is meant to review small blocks of code and provide near-real-time feedback to prevent insecure suggestions from being surfaced to developers. 

“When accompanied with adequate hardware and a robust inference platform and service, we can accomplish fast vulnerability detection on incomplete fragments of code,” said Zhao. “With our system in place, the unsafe examples are no longer shown to users, and are replaced by suggestions without detected vulnerabilities when/if available.”
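GitHub has not detailed how this security model works internally. As a rough, rule-based stand-in, the toy screener below hints at the kinds of patterns such a filter targets on small code fragments; the regex rules are assumptions for illustration and are not the LLM-based approach the article describes.

```python
# Illustrative sketch of fragment-level vulnerability screening, not GitHub's
# model. GitHub describes an LLM that approximates static analysis; the regex
# rules below are stand-in assumptions showing the *kinds* of patterns such a
# filter targets (SQL injection, hardcoded secrets).

import re

# (rule name, pattern that suggests a risky construct in a small code fragment)
RISK_PATTERNS = [
    ("possible SQL injection",
     re.compile(r"execute\(\s*[\"'].*(%s|\+|\{).*[\"']", re.IGNORECASE)),
    ("possible hardcoded secret",
     re.compile(r"(password|api_key|secret)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE)),
]


def flag_suggestion(fragment: str) -> list[str]:
    """Return the names of rules triggered by a candidate code suggestion."""
    return [name for name, pattern in RISK_PATTERNS if pattern.search(fragment)]


# A suggestion that concatenates user input into a query would be flagged and,
# in the workflow described above, withheld or replaced with a safer alternative.
risky = 'cursor.execute("SELECT * FROM users WHERE name = \'" + user_input + "\'")'
print(flag_suggestion(risky))  # ['possible SQL injection']
```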

This is a work in progress, GitHub says, and it will continue to improve the security model as developers report vulnerable code suggestions generated by Copilot.

Enterprise features

The new release of Copilot moves beyond individual developers and enables enterprises to onboard many developers within a single plan. The business plan supports corporate VPN access and centralized seat management, and lets companies use Copilot without storing their code on GitHub (although they still need a GitHub account to purchase the plan). Developers can integrate Copilot with their preferred editor, including Neovim, JetBrains IDEs, and Visual Studio.

At $19 per month per seat, the business plan costs nearly double the price of the individual plan. But given that Copilot can, according to GitHub, speed up coding by as much as 55%, the added cost can still deliver substantial benefits for enterprises.

The business plan will enable GitHub to try new growth channels and sales models for large companies with hundreds or thousands of developers. It will also provide the company with new feedback to upgrade the LLM for software projects with large teams of developers. 

“Whether you’re part of a startup or Fortune 500 enterprise, a developer or student, we believe AI will reach every aspect of the developer experience, and we want to enable developers wherever they are, in their preferred environment and workflow,” said Zhao.

Author: Ben Dickson
Source: VentureBeat
