
How LinkedIn released new ChatGPT-based AI tools in just 3 months




The sprint to develop LinkedIn’s recently released generative AI tools took only three months, Ya Xu, VP of engineering and head of data and artificial intelligence (AI), told VentureBeat in an interview.

The timeline, she said, was “unprecedented” for a large company like LinkedIn, given how many changes engineering and product teams implemented based on OpenAI’s latest GPT models, including ChatGPT and GPT-4, as well as some open-source models. The new tools include generative AI-powered collaborative articles, job descriptions and personalized writing suggestions for LinkedIn profiles.

For example, she explained, her teams took just one month to automatically generate job descriptions and serve them to live traffic. Cross-functional teams with shared goals and purposes are key, she added: “It’s not about working 20-hour days or leaving the office late”; it’s about “dropping other things and focusing on what’s important to get the job done.”

Since LinkedIn is owned by Microsoft, Xu said she does get a “front-row seat in seeing the future of this technology ahead of time.” So along with LinkedIn CEO Ryan Roslansky and other colleagues, Xu quickly moved last fall to envision how ChatGPT and other GPT models could create more economic opportunities for LinkedIn members and customers.



LinkedIn prioritized an engineering philosophy

Xu said that early on her team prioritized an engineering philosophy “rooted in exploration over building a mature final product.” Mature features and experiences would come with time, she explained, but exploration was encouraged from the start by putting generative AI technology in the hands of every engineer and product manager who was interested.

That exploration was boosted by the LinkedIn Gateway, which provides access to OpenAI models as well as open-source models from Hugging Face, and by LinkedIn’s Generative AI Playground, which lets engineers experiment with LinkedIn data using advanced generative AI models from OpenAI and other sources. The company also brought engineers together for LinkedIn’s largest-ever internal hackathon, featuring thousands of participants.
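LinkedIn hasn’t published the Gateway’s interface, but conceptually it is a single client abstraction over multiple model providers. A minimal sketch of that pattern in Python, with hypothetical names (GatewayClient, complete) and the OpenAI and Hugging Face client libraries as they existed in early 2023, might look like this:

```python
# Hypothetical sketch of a unified model gateway; NOT LinkedIn's actual
# Gateway API, which hasn't been published. The idea: product code calls
# one completion interface, and the gateway routes the request to OpenAI
# or to an open-source Hugging Face model.
from dataclasses import dataclass


@dataclass
class GatewayClient:
    provider: str  # "openai" or "huggingface" (illustrative routing key)

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        if self.provider == "openai":
            return self._openai_complete(prompt, max_tokens)
        return self._hf_complete(prompt, max_tokens)

    def _openai_complete(self, prompt: str, max_tokens: int) -> str:
        # Uses the openai SDK as it existed in early 2023 (pre-1.0);
        # assumes OPENAI_API_KEY is set in the environment.
        import openai

        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
        )
        return resp.choices[0].message.content

    def _hf_complete(self, prompt: str, max_tokens: int) -> str:
        # Open-source path: any Hugging Face text-generation model;
        # gpt2 is a small stand-in chosen purely for illustration.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")
        return generator(prompt, max_new_tokens=max_tokens)[0]["generated_text"]
```

The appeal of a layer like this is that teams experiment against one call site and can swap providers as quotas, cost or quality dictate.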

In addition, all LinkedIn employees needed to develop a better understanding of how large language models work, said Xu, including how to do prompt engineering and what problems and limitations the models have.

“We provided education at different levels, such as company-wide meetings, lunch-and-learn sessions and deeper education for those more heavily involved in AI development and R&D,” she said.

Being collaborative was also a big part of integrating and supporting generative AI. “Because of our collaborative culture, we encouraged different teams to share resources,” she said, so that teams could move quickly at a time when capacity constraints limited how many developers could access certain generative AI models. “We passed on learnings from team to team about quotas, access, prompting patterns and other best practices, so that they could better help one another,” she added.

Running fast — but together

Xu also emphasized that LinkedIn recognizes some areas of the generative AI process need to be handled centrally. While there is always tension between running fast and running together, she explained, the company tries to keep those checks and balances in place, especially when it comes to responsible AI. “Even though this may slow down the team a little bit, we need to be very thoughtful,” she said.

For example, the company runs AI-generated articles through an evaluation pipeline: teams iterate on human-reviewed outputs, adjusting their prompt engineering until the content scores at a level they are happy with. LinkedIn is very deliberate, Xu explained, about what kind of risk is acceptable and what is not. The company has a low tolerance for bad content but is willing to tolerate some gray-area content, relying on human contributors to flag it for removal.
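Xu didn’t detail the pipeline’s internals, but the iterate-until-the-score-is-acceptable loop she describes can be sketched abstractly. In the sketch below, the generate, score and refine callables and the 0.9 threshold are assumptions made for illustration, not LinkedIn’s actual components or quality bar:

```python
# Illustrative sketch of the iterate-on-prompts evaluation loop Xu
# describes. The callables and the 0.9 threshold are assumptions for
# this sketch, not LinkedIn's published components or quality bar.
from typing import Callable


def evaluate_and_iterate(
    prompt: str,
    generate: Callable[[str], str],            # LLM call: prompt -> draft article
    score: Callable[[str], float],             # human review: draft -> score in [0, 1]
    refine: Callable[[str, str, float], str],  # prompt-engineering step
    target: float = 0.9,
    max_rounds: int = 5,
) -> str:
    for _ in range(max_rounds):
        draft = generate(prompt)
        quality = score(draft)
        if quality >= target:
            return draft  # quality bar met; content can ship
        prompt = refine(prompt, draft, quality)  # adjust the prompt and retry
    raise RuntimeError("quality bar not met; hold the draft for further review")
```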

LinkedIn wants to block harmful or disruptive information and allow only content that is safe and informative, she added. For example, she pointed to Kevin Roose’s recent New York Times article that included a transcript of a chat with Microsoft’s Bing chatbot. LinkedIn would be worried if someone shared instructions on how to make a bomb, but a chat giving bad advice on how to complete a task — or, in Roose’s case, commenting on his marriage — is less of a concern.

“The technology cannot just be living in a lab; we’ve got to put it in front of people,” Xu said. “Then people can make the best use of it, in ways that we never would have anticipated in the lab. But we needed to make sure we have the right process.”



Author: Sharon Goldman
Source: VentureBeat
