
Consulting giant McKinsey unveils its own generative AI tool for employees: Lilli




McKinsey & Company, the nearly century-old firm that is one of the largest consulting agencies in the world, made headlines earlier this year with its rapid embrace of generative AI tools, saying in June that nearly half of its 30,000 employees were using the technology.

Now, the company is debuting a gen AI tool of its own: Lilli, a new chat application for employees designed by McKinsey’s “ClienTech” team under CTO Jacky Wright. The tool serves up information, insights, data, plans, and even recommends the most applicable internal experts for consulting projects, all based on more than 100,000 documents and interview transcripts.

“If you could ask the totality of McKinsey’s knowledge a question, and [an AI] could answer back, what would that do for the company? That’s exactly what Lilli is,” McKinsey senior partner Erik Roth, who led the product’s development, said in a video interview with VentureBeat.

Named after Lillian Dombrowski, the first woman McKinsey hired for a professional services role back in 1945, Lilli has been in beta since June 2023 and will be rolling out across McKinsey this fall.


Roth and his collaborators at McKinsey told VentureBeat that Lilli has been in beta use by approximately 7,000 employees as a “minimum viable product” (MVP), and that it has already cut the time spent on research and planning work from weeks to hours, and in some cases from hours to minutes.

“In just the last two weeks, Lilli has answered 50,000 questions,” said Roth. “Sixty-six percent of users are returning to it multiple times per week.”

How McKinsey’s Lilli AI works

Roth provided VentureBeat with an exclusive demo of Lilli, showing the interface and several examples of the responses it generates.

The interface will look familiar to anyone who has used public-facing, text-based gen AI tools such as OpenAI’s ChatGPT or Anthropic’s Claude 2. Lilli has a text entry box at the bottom of its primary window where the user types questions, searches and prompts, and it generates its responses above in a chronological chat that shows each of the user’s prompts followed by Lilli’s replies.

However, there are several features that immediately stand out for their additional utility: Lilli also has an expandable left-hand sidebar of saved prompts, which the user can copy, paste and modify to their liking. Roth said that categories for these prompts were coming to the platform soon, as well.

Gen AI chat and client capabilities functions

The interface includes two tabs that a user can toggle between: “GenAI Chat,” which sources answers from a more generalized large language model (LLM) backend, and “Client Capabilities,” which draws its responses from McKinsey’s corpus of more than 100,000 documents, transcripts and presentations.

“We intentionally created both experiences to learn about and compare what we have internally with what is publicly available,” Roth told VentureBeat in an email.
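To make the two-tab design concrete, here is a minimal, hypothetical sketch in Python of how a prompt might be routed to either a general-purpose LLM backend or a corpus-grounded path. McKinsey has not published Lilli’s implementation; the class and function names (GeneralChatBackend, CorpusBackend, route_prompt) and the toy keyword retrieval are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not McKinsey's actual implementation.
# It shows the general shape of a "two tab" design: one path that calls a
# general-purpose LLM, and one that first retrieves passages from an
# internal corpus and grounds the answer in them.

from dataclasses import dataclass


@dataclass
class Passage:
    doc_id: str
    page: int
    text: str


class GeneralChatBackend:
    """Stands in for a hosted general-purpose LLM endpoint."""

    def answer(self, prompt: str) -> str:
        # A real system would call the model provider's API here.
        return f"[general-LLM answer to: {prompt!r}]"


class CorpusBackend:
    """Stands in for retrieval over an internal document corpus."""

    def __init__(self, passages: list[Passage]):
        self.passages = passages

    def retrieve(self, prompt: str, k: int = 3) -> list[Passage]:
        # Toy relevance score: count words shared between prompt and passage.
        words = set(prompt.lower().split())
        scored = sorted(
            self.passages,
            key=lambda p: len(words & set(p.text.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def answer(self, prompt: str) -> str:
        hits = self.retrieve(prompt)
        context = "\n".join(p.text for p in hits)
        # A real system would feed `context` plus `prompt` to an LLM.
        return f"[corpus-grounded answer built from {len(hits)} passages]\n{context}"


def route_prompt(prompt: str, tab: str, general: GeneralChatBackend,
                 corpus: CorpusBackend) -> str:
    """Route the user's prompt based on which tab they are using."""
    if tab == "GenAI Chat":
        return general.answer(prompt)
    if tab == "Client Capabilities":
        return corpus.answer(prompt)
    raise ValueError(f"unknown tab: {tab}")
```

In a production system the corpus path would typically use embedding-based retrieval and pass the retrieved passages to an LLM as context, but the routing decision between the two experiences can stay this simple.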

Another differentiator is sourcing: while many LLMs don’t cite or link to the sources from which they draw their responses (Microsoft’s OpenAI-powered Bing Chat being a notable exception), Lilli provides a separate “Sources” section below every response, with links and even page numbers pointing to the specific pages the model drew on.

“We go full attribution,” said Roth. “Clients I’ve spoken with get very excited about that.”
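One way to provide that kind of full attribution is to carry document metadata alongside every retrieved passage and render it beneath the generated answer. The sketch below is purely illustrative and assumes invented names (SourcedPassage, build_sources_section, answer_with_attribution); it is not Lilli’s actual code.

```python
# Hypothetical sketch of "full attribution": every answer carries a Sources
# section listing the documents and page numbers the response drew on.

from dataclasses import dataclass


@dataclass
class SourcedPassage:
    title: str
    url: str
    page: int
    text: str


def build_sources_section(passages: list[SourcedPassage]) -> str:
    """Render a Sources block like the one shown under each response."""
    lines = ["Sources:"]
    for i, p in enumerate(passages, start=1):
        lines.append(f"  [{i}] {p.title}, p. {p.page} -- {p.url}")
    return "\n".join(lines)


def answer_with_attribution(generated_text: str,
                            passages: list[SourcedPassage]) -> str:
    """Attach the passages used for retrieval beneath the model's answer."""
    return generated_text + "\n\n" + build_sources_section(passages)


if __name__ == "__main__":
    hits = [
        SourcedPassage("Clean energy outlook memo",
                       "https://example.internal/doc1", 12, "..."),
    ]
    print(answer_with_attribution("Demand for grid-scale storage is rising.", hits))
```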

What McKinsey’s Lilli can be used for

With so much information available to it, what kinds of tasks is McKinsey’s new Lilli AI best suited to complete?

Roth said he envisioned McKinsey consultants using Lilli at nearly every step of their work with a client, from gathering initial research on the client’s sector, competitors and comparable firms, to drafting plans for how the client could implement specific projects.

VentureBeat’s demo of Lilli showed off such versatility: Lilli was able to provide a list of internal McKinsey experts qualified to speak about a large e-commerce retailer, as well as an outlook for clean energy in the U.S. over the next decade, and a plan for building a new energy plant over the course of 10 weeks.

Throughout it all, the AI cited its sources clearly at the bottom.

While its responses were sometimes a few seconds slower than those of leading commercial LLMs, Roth said McKinsey was continually working to improve speed, and that it prioritized quality of information over rapidity.

Furthermore, Roth said the company is experimenting with a feature for uploading client information and documentation for secure, private analysis on McKinsey servers, but noted that the feature is still being developed and will not be deployed until it is perfected.

“Lilli has the capacity to upload client data in a very safe and secure way,” Roth explained. “We can think about use cases in the future where we’ll combine our data with our clients’ data, or just use our clients’ data on the same platform for greater synthesis and exploration… Anything that we load into Lilli goes through an extensive compliance risk assessment, including our own data.”
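Roth’s description suggests a gate in front of the index: nothing is ingested until it has cleared a compliance risk assessment. A hypothetical shape for such a gate is sketched below; the specific checks, the UploadedDocument type and the ingest function are invented placeholders, since McKinsey has not detailed its actual review process.

```python
# Hypothetical sketch of an upload gate: nothing reaches the index until it
# has passed a compliance/risk review. The checks here are placeholders;
# McKinsey has not described what its real assessment involves.

from dataclasses import dataclass, field


@dataclass
class UploadedDocument:
    client: str
    name: str
    text: str
    approved: bool = False
    issues: list[str] = field(default_factory=list)


def compliance_review(doc: UploadedDocument) -> UploadedDocument:
    """Run placeholder checks and mark the document approved or not."""
    if not doc.client:
        doc.issues.append("missing client attribution")
    if "DO NOT UPLOAD" in doc.text.upper():
        doc.issues.append("explicit do-not-upload marking")
    doc.approved = not doc.issues
    return doc


def ingest(doc: UploadedDocument, index: list[UploadedDocument]) -> None:
    """Only index documents that cleared the review."""
    reviewed = compliance_review(doc)
    if reviewed.approved:
        index.append(reviewed)
    else:
        raise PermissionError(f"upload blocked: {reviewed.issues}")
```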

The technology under the hood

Lilli leverages currently available LLMs, including those developed by McKinsey partner Cohere as well as OpenAI on the Microsoft Azure platform, to inform its GenAI Chat and natural language processing (NLP) capabilities.

The application, however, was built by McKinsey and acts as a secure layer that goes between the user and the underlying data.

“We think of Lilli as its own stack,” said Roth. “So its own layer sits in between the corpus and the LLMs. It does have deep learning capabilities, it does have trainable modules, but it’s a combination of technologies that comes together to create the stack.”

Roth emphasized that McKinsey was “LLM agnostic” and was constantly exploring new LLMs and AI models to see which offered the most utility, including older versions that are still being maintained.
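An “LLM agnostic” layer of this kind is commonly built as a thin provider interface, so that the application sits between the corpus and whichever model backend is plugged in. The sketch below illustrates that pattern with stubbed providers; the names LLMProvider, CohereProvider, AzureOpenAIProvider and LilliStyleLayer are invented for illustration, and the stubs do not call any real vendor SDK.

```python
# Hypothetical sketch of an "LLM agnostic" layer: the application codes
# against one interface, and concrete providers can be swapped freely.
# The provider classes are stubs; a real deployment would call the vendor
# SDKs (e.g. Cohere, or OpenAI models hosted on Azure) inside complete().

from abc import ABC, abstractmethod


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class CohereProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[stubbed Cohere-style completion for: {prompt!r}]"


class AzureOpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[stubbed Azure OpenAI-style completion for: {prompt!r}]"


class LilliStyleLayer:
    """An application layer that sits between the corpus and the models."""

    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def ask(self, question: str, context: str) -> str:
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return self.provider.complete(prompt)


# Swapping providers requires no change to the rest of the application:
layer = LilliStyleLayer(AzureOpenAIProvider())
# layer = LilliStyleLayer(CohereProvider())
```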

While the company looks to expand usage to all of its employees, Roth said McKinsey is not ruling out white-labeling Lilli or turning it into an external-facing product for use by McKinsey clients, or even by other firms entirely.

“At the moment, all discussions are in play,” said Roth. “I personally believe that every organization needs a version of Lilli.”



Author: Carl Franzen
Source: VentureBeat
Reviewed By: Editorial Team

