
UserTesting expands platform with generative AI to scale human insights

UserTesting is kicking off its Human Insights Summit today with the launch of a new set of generative AI-powered capabilities for its platform.

The new features, which the company is calling UserTesting AI, are intended to help customers scale up their experience research efforts. The initial set of tools benefits from an integration with OpenAI, helping users more easily generate summaries and build reports from research data. They extend the AI capabilities that UserTesting has developed in-house in recent years to help organizations better understand user behavior and sentiment for products and services.

Back in April, UserTesting launched its machine learning (ML)-powered friction detection capability for behavioral analytics.

The goal with the new UserTesting AI tools is to go beyond what the company has already been doing and tap into the power of gen AI technologies like OpenAI’s ChatGPT.

“UserTesting AI is a set of capabilities that are designed to be easily understood by our customers as being AI powered, that help a research, design, marketing or product team, essentially achieve more throughput,” Andy MacMillan, UserTesting CEO, told VentureBeat in an exclusive interview.

How UserTesting is using generative AI alongside its existing machine learning

To date, most of the AI capabilities that UserTesting has provided to its users have fallen within the domain of ML.

MacMillan said that UserTesting has built its own ML models to take data from its platform, which enables teams to test how users interact with and experience a service or application. UserTesting records the user sessions and then uses its ML models to derive insights. The models have helped to identify things like sentiment, intent and where users get stuck in a workflow. 

With the new UserTesting AI tools, the company isn’t just sending raw data to the gen AI model to process. MacMillan emphasized that UserTesting is using the gen AI alongside its existing models.

“We’re taking a lot of those ML outputs, where we’ve extracted what a researcher would find interesting, the friction, the insights, the suggestions, in addition to the transcripts, and we’re providing that and we’re creating tasks, summaries and research report summaries using large language models (LLMs),” MacMillan said.
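In rough terms, the pipeline MacMillan describes, where ML-extracted highlights and transcripts are passed to an LLM to draft a summary, could look something like the sketch below. This is an illustrative assumption rather than UserTesting's actual code: the model name, prompt wording and data structures are placeholders, and only the general pattern of combining ML outputs with transcript excerpts in an OpenAI request reflects the description above.

```python
# Hypothetical sketch (not UserTesting's implementation): combine ML-extracted
# highlights with transcript excerpts and ask an OpenAI model for a summary.
# Model name, prompt and data shapes are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ml_highlights = [
    {"timestamp": "02:14", "signal": "friction", "note": "user hesitated on checkout button"},
    {"timestamp": "05:47", "signal": "sentiment", "note": "positive reaction to onboarding flow"},
]
transcript_excerpts = [
    "02:14 - 'I'm not sure which button actually places the order.'",
    "05:47 - 'Oh, that walkthrough was actually really helpful.'",
]

prompt = (
    "You are summarizing a usability test session. Using the machine-learning "
    "highlights and transcript excerpts below, write a short research summary "
    "and cite the timestamp of each finding.\n\n"
    f"Highlights: {ml_highlights}\n\nTranscript: {transcript_excerpts}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Grounding the prompt in pre-extracted ML highlights rather than raw session data keeps the request small and steers the model toward findings the platform has already surfaced, which matches the company's point that it isn't simply sending raw data to the gen AI model.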

Generative AI at UserTesting helps to avoid bias

To date, user experience researchers have largely had to write reports and summaries on their own based on the data and insights from a UserTesting operation.

Now, for example, when a team wants to test a new mobile app, the platform matches it with profiles of people who are ideal testers. Those users are recorded on video and audio as they try out the app prototype, the entire session is transcribed, and UserTesting's ML models identify interesting data points.

“We run all those data streams through our ML models that help extract interesting moments,” said MacMillan.

The UserTesting platform then presents a results page listing interesting data points and highlights from the session. With UserTesting AI, researchers now also get a full summary and report generated from the detailed findings. The AI-generated report and summaries include specific citations and references that help researchers dig into individual data points.
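The citation behavior described here can be pictured with a small, hypothetical example: each generated finding keeps a reference back to the session and timestamp it was drawn from, so a researcher can jump to the underlying clip. The field names and report format below are assumptions for illustration, not UserTesting's actual output.

```python
# Hypothetical illustration of citations in a generated report: every finding
# carries a pointer back to the session clip it came from. Field names and
# report layout are assumptions, not UserTesting's real data model.
from dataclasses import dataclass

@dataclass
class Finding:
    text: str        # AI-generated finding
    session_id: str  # which recorded session it came from
    timestamp: str   # where in the recording the evidence sits

def render_report(findings: list[Finding]) -> str:
    lines, refs = [], []
    for i, f in enumerate(findings, start=1):
        lines.append(f"{f.text} [{i}]")
        refs.append(f"[{i}] session {f.session_id} @ {f.timestamp}")
    return "\n".join(lines + ["", "References:"] + refs)

print(render_report([
    Finding("Users struggled to locate the checkout button.", "sess-042", "02:14"),
    Finding("The onboarding walkthrough was well received.", "sess-042", "05:47"),
]))
```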

While there is some concern that broader use of gen AI could introduce bias, MacMillan said that UserTesting AI could actually help to reduce it.

“We think UserTesting AI can help our customers be more efficient,” said MacMillan. “I think it also helps researchers to avoid missing something, and it can help avoid biases, so you as the person doing the research might have some biases and AI can help you maybe see things you might not see.”



Author: Sean Michael Kerner
Source: Venturebeat
Reviewed By: Editorial Team
