
ProBeat: Release your data sets to the AI research community and reap the benefits

This week we featured how Duolingo uses AI in every part of its app, including Stories, Smart Tips, podcasts, reports, and even notifications. The story is based on interviews with CEO Luis von Ahn and research director Burr Settles, who joined as the company’s first AI hire in 2013 (Duolingo was founded in 2012). While that story covers Duolingo’s use of AI specifically, which I think is relevant to any startup looking to invest in AI early, I wanted to publish the tail end of my interview with Burr for its even broader insights.

But first, some context from the top of our discussion. “We approach AI projects in three kinds of ways,” Settles explained. “AI to help facilitate building high quality content in our processes. AI to create more engaging and exciting experiences to keep people coming back. And then AI to kind of knowledge model and then personalize the experiences. So we’ve got projects going on in all three of those areas.”

The transcript below will make more sense if you read the Duolingo story first. One observation: How Settles describes Duolingo releasing its data sets reminds me of the early days of Mozilla building its browser in the open and how the ensuing open source revolution affected software development.

VentureBeat: What do you want to see Duolingo use AI for next — what’s the next big thing?

Settles: In broad strokes, those three things. The AI to improve the process, to help work together with the humans to develop good content. So, kind of like the interactive system I described for Smart Tips. You can imagine that also working for phonology — what sorts of sounds do people struggle with, not just what sorts of grammar do they struggle with. Or making the report prioritization project irrelevant because we now have machine translation systems where you can input the prompt and get all the possible correct translations, rather than just the single best translation, which is what most machine translation systems do.

The problems we work on are so unique, people in academia and other parts of industry aren’t working on these problems because they’re very specific to Duolingo. But we have so much data, and they’re so interesting, that in order to help further the research community and provide new kinds of interesting problems for the research community to work on, we’ve released data sets. Usually whenever we’ve published a paper we’ve released the data set along with it, but we’ve also hosted things called shared tasks. So we actually just in 2020 had a shared task on what I was just talking about, this translation thing.

Normally in machine translation, if you want to translate this English into Portuguese, you get one input and one output. But because what we care about is that learners could type in all kinds of different things that are correct, we formalized a task and provided data for the first time in the NLP community. Here’s an input. And then here’s a ton of different outputs that are all correct, and furthermore, they’re weighted by how likely they are — like how frequent they are, how fluent they are.
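To make that concrete, here is a minimal sketch of what a weighted multi-reference translation item could look like, along with a toy metric that credits a system for covering the accepted answers. The field names, example sentences, weights, and metric are illustrative assumptions, not Duolingo’s actual released schema or the shared task’s official scoring.

```python
# Illustrative only: a toy prompt with many accepted translations, each
# weighted by how likely/fluent it is. Not Duolingo's actual data format.
from dataclasses import dataclass


@dataclass
class TranslationItem:
    prompt: str                   # source sentence shown to the learner
    references: dict[str, float]  # accepted translations -> weight


item = TranslationItem(
    prompt="I eat bread.",
    references={
        "Eu como pão.": 0.62,           # most common phrasing
        "Como pão.": 0.30,              # subject dropped, still correct
        "Eu estou comendo pão.": 0.08,  # less frequent but accepted
    },
)


def weighted_recall(predictions: set[str], item: TranslationItem) -> float:
    """Toy metric: share of reference weight covered by a system's outputs."""
    return sum(w for ref, w in item.references.items() if ref in predictions)


print(weighted_recall({"Eu como pão.", "Como pão."}, item))  # ~0.92
```

The point of weighting the references is that a system producing only the rarest acceptable phrasing scores worse than one that also covers the translations learners actually type most often.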

I think there were a dozen or so different teams from across the world. There’s an overview paper that we wrote summarizing all the different approaches those teams took. This is a project we started working on internally, and we realized that with essentially one person working part time on it, it was too big of a problem. So we pushed pause on it and then released the data set — threw it out into the ether — and it got lots of people excited from the machine learning and paraphrase communities.

And they came up with some really cool ideas. So now we want to take those ideas and improve our internal processes around translation based on some of them — ideas we could have come up with ourselves if we’d had the bandwidth. But there was this win-win of letting other people throw a bunch of spaghetti at the wall, and then we all got to learn what stuck.

Hopefully 10 years from now, this data set will be a benchmark in machine learning papers because it’s a new problem that nobody’s been able to work on.

VentureBeat: So, back to my question. What is the number one thing that you would want to see Duolingo do with AI — is it just more research?

Settles: Well it’s that and, yeah. There’s so much potential, particularly for low resource languages. Like a lot of language-related AI is just focused on English. We want to teach Navajo as well as possible. We want to teach Irish. There’s not a whole lot of resources for those.

So, to do research that pushes the boundaries into the long tail of languages, to develop processes where AI collaborates with humans to create top-notch content, and then also to use all of the user behavior — how people interact with the app — to create a more engaging experience through both personalization and just creating new types of interfaces. I mean, you can’t carry out a conversation right now with Duolingo, but in a few years, maybe you can.

VentureBeat: That would be a completely different value proposition.

Settles: Yeah, what we’re trying to do is push the boundaries to get as close as possible to what a one-on-one language tutor experience would be like. Not to replace great teachers, but because everybody in the world, regardless of socioeconomic status, deserves, I believe, that kind of experience. The supply-demand curve is not set up to provide that. So AI is the best way to do that. That’s what we’re aiming to do.

VentureBeat: We’ve talked a lot about how Duolingo is using AI to improve the app. What about with regards to helping the business generate revenue, get more Duolingo premium subscriptions, and so on?

Settles: It’s kind of funny, we do have some projects going on there. But most of the effort, I’d say 90% of the AI projects we’ve got going on in the company are aimed at either teaching or assessing language better. And I think that’s because of this core value that if we nail those things, the rest of the business will follow. But if we don’t nail those things, in the long term it doesn’t matter how much revenue we make this year if we’re not teaching any better next year than we are this year.

VentureBeat: I agree with your priorities but since we didn’t talk about that 10% — how are you using AI to improve revenue, if anything?

Settles: Things that basically every other startup in the world does, like trying to predict whether or not users will churn on their subscriptions — if they’re going to renew or not when the time comes. Trying to optimize — similar to how we use machine learning to pick which notification to send — based on this tradeoff between notifications that seem to work well and ones you’ve seen recently, so that you don’t become desensitized to them. So we’re applying some of those same techniques to the subscription purchase flow. All that pretty boring stuff that you can go to almost every company’s blog and read about; we’re doing those things too.
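For readers curious what garden-variety churn prediction looks like, here is a rough sketch on made-up engagement features. The features, labels, and model choice are hypothetical assumptions for illustration, not Duolingo’s actual system.

```python
# Rough illustration of subscription-churn prediction on synthetic data.
# Features and labels are invented; this is not Duolingo's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up features per subscriber:
# [lessons_last_week, days_since_last_session, current_streak_length]
X = rng.normal(loc=[5, 3, 10], scale=[3, 2, 8], size=(500, 3))

# Toy ground truth: recently idle users with short streaks churn more often.
churn_prob = 1 / (1 + np.exp(-(0.4 * X[:, 1] - 0.15 * X[:, 2] - 0.1 * X[:, 0])))
y = rng.binomial(1, churn_prob)

model = LogisticRegression().fit(X, y)

# Score a hypothetical user: 2 lessons last week, 6 days idle, 3-day streak.
print(model.predict_proba([[2, 6, 3]])[0, 1])  # estimated churn probability
```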

VentureBeat: Thank you so much for taking the time.

Settles: My pleasure.

ProBeat is a column in which Emil rants about whatever crosses him that week.


Author: Emil Protalinski.
Source: VentureBeat

