L1ght raises $15 million for AI that protects children from online toxicity

L1ght, a fledgling AI startup that wants to help technology companies combat online toxicity, bullying, and abuse, has raised $15 million in a seed round of funding from Mangrove Capital Partners, Tribeca Venture Partners, and Western Technology Investment.

The company’s substantial seed funding comes as tech companies are struggling to contain offensive and harmful behavior on their platforms. It’s nearly impossible to monitor massive platforms manually, which is why automation and AI are playing increasing roles in the gatekeeping process — but they still can’t detect every piece of abusive content. Moreover, technology companies have other priorities to juggle — such as making more money and growing their user base. Against this backdrop, L1ght is hoping to carve out a niche by focusing on safeguarding children.

“We strongly believe that the burden of responsibility to keep children safe online is on the developer, not the user,” L1ght cofounder and CEO Zohar Levkovitz told VentureBeat. “It’s the social networks themselves that must create and integrate protection all the way from the design stage. We provide the tool for networks to do this.”

Toxicity

Founded in 2018 as AntiToxin, L1ght is essentially an API that online platforms can use to help identify and manage toxic content, whether on social networks, messaging apps, hosting providers, or gaming platforms, and across text, audio, video, and images.

The way its technology works depends on the platform and what the client is looking for. “If we are working with a messaging service, we can analyze communications to identify predatory behaviors, whether that be cyberbullying or shaming,” Levkovitz said. “If we are working with a hosting company, we can analyze millions of websites to identify negative types of content.”
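L1ght has not published its API, so the sketch below is purely illustrative: the endpoint, field names, categories, and response schema are all assumptions, not L1ght's actual interface. But integrating a moderation service of this kind into a messaging platform typically looks something like this:

```python
import requests

# Hypothetical endpoint and schema -- L1ght has not published its API,
# so every name below is an assumption for illustration only.
API_URL = "https://api.example-moderation.com/v1/analyze"
API_KEY = "YOUR_API_KEY"

def analyze_message(text: str, context: list[str]) -> dict:
    """Send one chat message, plus recent conversation history,
    to a toxicity-detection service and return its category scores."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "content": text,
            "context": context,  # prior messages, enabling context-aware scoring
            "categories": ["bullying", "shaming", "predatory_behavior"],
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"bullying": 0.91, "shaming": 0.12, ...}

scores = analyze_message(
    "nobody at school likes you",
    context=["hey", "why weren't you online yesterday?"],
)
if any(score > 0.8 for score in scores.values()):
    print("Escalate to a human moderator")
```

A hosting company would presumably submit batches of URLs rather than individual messages, which lines up with the per-website pricing described below.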

L1ght provided a mockup screenshot of a dashboard showing how the technology might look if a hosting provider used it to scan websites for toxic content.

Above: A mockup dashboard of how one of L1ght’s products might look when used by a hosting provider

L1ght also tailors its pricing strategy depending on the use case. For example, a social network might pay a per-user fee, while a hosting company might pay per website.

“We know that companies are willing to invest a lot in their cloud security, so why wouldn’t they invest just as much in the security of their users?” Levkovitz asked.

The idea for the company came when Ron Porat, L1ght’s chief technology officer and cofounder, discovered that his son had been approached by a predator while playing an online game. Porat partnered with Levkovitz, who also has children, to launch L1ght, and they hired a team of AI specialists, psychologists, and anthropologists to figure out ways to identify content that could harm children.

“As [I’m] both a father and entrepreneur, L1ght’s mission statement is very personal to me,” said Levkovitz. “My cofounder Ron Porat and I started L1ght because we saw the dangers our children were facing online and felt not enough has been done to address these problems. Our main priority has always been to keep kids safe, and this investment is further proof that our technology can accomplish that.”

Above: L1ght cofounder and CEO Zohar Levkovitz

With $15 million in the bank, the company said it plans to invest in R&D and scale its platform “to meet market demand” for a scalable way to detect toxicity and abuse online.

“After working with Zohar and his team for quite some time, we are confident L1ght is uniquely positioned to help solve this massive societal problem,” said Tribeca Venture Partners managing partner Chip Meakem. “Now the real work begins, which is to distribute L1ght’s proprietary algorithms to major online platforms and providers across the globe.”

L1ght is one of a number of startups looking to safeguard children online by automatically detecting questionable content. Swiss startup Privately also makes AI-infused software that developers and device makers can integrate to improve privacy and protect against threats such as cyberbullying. Its clients include U.K. broadcaster the BBC, which launched a Privately-powered keyboard app last year that warns users and serves up real-time advice whenever it detects something untoward.

Above: BBC Own It: Feeling suicidal?

Well-being

More broadly, L1ght and Privately tap into a trend of big tech companies investing significant resources in digital health and well-being tools. Last July, Instagram introduced a new automated feature that warns users before they post abusive or bullying comments beneath photos or videos, and this was later expanded to captions. Meanwhile, Alphabet offshoot Jigsaw is working with publishers on technology that enables users to filter out abusive comments, and it recently released a huge data set to the public.

While these various efforts may be partly designed to quell the growing tech backlash, they also serve a more immediate business purpose — people are more likely to use a service when they feel safe.

One imagines that big technology platforms, those with millions or billions of users, might develop their own technology in-house rather than use third-party APIs. But Levkovitz is adamant that L1ght's focus will appeal to companies of all sizes across industries.

“We applaud any efforts that major technology companies make to eradicate online toxicity,” he said. “At the same time, though, we know that these tech behemoths will never be able to devote 100% of their resources to tackling this problem because their goal is to grow fast at all costs. They have other priorities, and this issue requires full-time attention. Our mission is solely dedicated to protecting children from harmful online content. Our AI is more complex and sophisticated than the other technologies currently available. While other solutions search for keywords or flag offensive comments, we make sense of a conversation’s context.”
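As a rough illustration of the distinction Levkovitz draws (a toy sketch, not L1ght's actual technology), compare a blocklist filter, which scores messages in isolation, with a check that reads the whole conversation:

```python
# A toy contrast between keyword filtering and conversation-level analysis.
# This is an illustrative sketch, not L1ght's actual technology.

BLOCKLIST = {"idiot", "loser", "hate"}

def keyword_flag(message: str) -> bool:
    """Flag a single message if it contains a blocklisted word."""
    return any(word in message.lower().split() for word in BLOCKLIST)

def conversation_flag(messages: list[str]) -> bool:
    """Score the whole conversation rather than isolated messages.
    A production system would apply a trained classifier here; this
    toy version stands in with grooming-style cues spread across turns."""
    transcript = " ".join(messages).lower()
    cues = ("keep this between us", "don't tell your parents", "send me a photo")
    return any(cue in transcript for cue in cues)

chat = [
    "you're really good at this game",
    "how old are you?",
    "send me a photo",
    "keep this between us",
]

print(any(keyword_flag(m) for m in chat))  # False: no single message is "offensive"
print(conversation_flag(chat))             # True: the pattern across turns is the signal
```

The point of the contrast is that no individual message in the example contains a flaggable word, so a keyword filter passes the exchange, while a system that considers the conversation as a whole can surface the risk.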

L1ght is currently working with “several active customers” but isn’t at liberty to say more. “At this stage, none of our customers can be named publicly,” Levkovitz said.



Author: Paul Sawers.
Source: VentureBeat

