
How Privately’s AI could help developers and device makers safeguard children

Smartphones and social media have often been called our era's cigarettes: addictive, destructive, and corrosive to society. Children are particularly susceptible to the corruption permeating the digital world, whether they're soaking in misinformation or battling a barrage of bullying and abuse.

But while the internet can be toxic, for millions of young people globally the only world they've ever known is one in which they interact with fellow humans through MySpace, Facebook, Instagram, YouTube, Twitter, Snapchat, WhatsApp, and countless other social platforms. Parents who try to ban their offspring from using such services face an uphill battle from the outset, so the onus has increasingly fallen on technology companies to find new ways to make the internet a more pleasant place.

Most of the companies behind these platforms have sought to address the growing tech backlash by introducing new features that claim to fix at least some of the problems. Instagram, for example, now uses AI to warn users before they post offensive text — the idea being that they might think twice before sharing toxic comments. Elsewhere, Alphabet offshoot Jigsaw is working with publishers on technology that enables users to filter out abusive comments.

Deep-pocketed tech giants have the resources and AI expertise to at least attempt to address the problem, but smaller companies … not so much. This is where Swiss startup Privately is hoping to carve out a niche — by giving developers and device makers the tools to add well-being and safety features to their products to protect children in the digital age.

Private eye

Founded out of Lausanne, Switzerland in 2014, Privately makes software that integrates with any app and enables it to detect safety markers pertaining to children’s online communications and provide guidance around cyberbullying, privacy, and more.

A few years back, Privately launched a consumer-focused mobile app called Oyoty, which serves as a demonstration of its technology. In its original guise, Oyoty linked to users' social accounts (Facebook, Instagram, and Twitter) and served as an automated bot that analyzed posts for "problems" and then intervened. It might, for example, recommend that a child not share sensitive information, such as a phone number, or alert them if they're showing too much skin in a photo.

Above: Oyoty in action

Oyoty has evolved in the intervening years, and it's still being developed in a handful of European markets. Privately told VentureBeat that the company is currently in "advanced discussions" with a major device manufacturer to provide a customized version of Oyoty on its hardware in several countries.

To achieve true scale, however, Privately is pushing into the B2B sphere to make its underlying technology available to anyone and everyone. Its Online Wellbeing and Safety (OWAS) tech has been available since September, and the BBC was among the first third-party organizations to tap the technology when it launched the Own It mobile app and keyboard in the U.K.

The general idea behind Own It is that children install the app and make it the default keyboard on their device (presumably at their parents’ insistence). The kids then receive warnings, prompts, and real-time advice whenever the bot detects something untoward.

For example, if someone types the words “I’m feeling suicidal” into Google Search, a little message on the keyboard will pop up telling them to talk to someone who can help, while also giving them a free support number to call.

Above: BBC Own It: Feeling suicidal?
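
The behavior described above boils down to matching typed text against known phrases of concern and surfacing help in real time. Below is a minimal, hypothetical sketch of that idea; the phrase list, warning message, helpline number, and function names are illustrative assumptions, not Privately's or the BBC's actual implementation.

```python
# Hypothetical sketch of on-device distress-phrase detection, loosely modeled
# on the Own It behavior described above. The phrase list, message, and
# helpline number are placeholders, not the real implementation.
from typing import Optional

DISTRESS_PHRASES = {
    "i'm feeling suicidal",
    "i want to hurt myself",
    "i can't go on",
}

SUPPORT_MESSAGE = (
    "It sounds like you're going through a hard time. "
    "You can talk to someone who can help on 0800-XXX-XXXX."  # placeholder number
)


def check_for_distress(text: str) -> Optional[str]:
    """Return a supportive prompt if the typed text contains a distress phrase."""
    normalized = text.strip().lower()
    for phrase in DISTRESS_PHRASES:
        if phrase in normalized:
            return SUPPORT_MESSAGE
    return None


if __name__ == "__main__":
    print(check_for_distress("I'm feeling suicidal"))       # -> support message
    print(check_for_distress("What's the weather today?"))   # -> None
```

A production system would presumably rely on trained language models rather than a fixed phrase list, and everything would run on the device itself, in keeping with Privately's privacy-first approach.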

Elsewhere, the BBC’s Own It keyboard can detect when a user writes something mean and ask them to think again before posting.

Above: Companies can use Privately to integrate anti-abuse technology into their apps

Similarly, if someone tries to share personal information such as an email address or telephone number, the app will prompt the user to reconsider.

Above: Privately warns users about sharing too much information
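
Much of this kind of over-sharing detection can be illustrated with simple pattern matching before a message is sent. The sketch below is a hypothetical illustration of flagging email addresses and phone numbers in a draft message; the regular expressions and warning copy are assumptions, not Privately's actual rules.

```python
# Hypothetical sketch of flagging personal information (email, phone number)
# in a draft message before it is shared. Patterns and wording are illustrative
# only, not Privately's actual detection logic.
import re
from typing import Optional

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def pii_warning(text: str) -> Optional[str]:
    """Return a warning if the draft appears to contain an email or phone number."""
    if EMAIL_RE.search(text):
        return "This looks like an email address. Are you sure you want to share it?"
    if PHONE_RE.search(text):
        return "This looks like a phone number. Are you sure you want to share it?"
    return None


if __name__ == "__main__":
    print(pii_warning("Message me at jane.doe@example.com"))  # email warning
    print(pii_warning("Call me on +41 21 555 01 23"))         # phone warning
    print(pii_warning("See you tomorrow!"))                    # None
```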

The BBC Own It app helps demonstrate some of the ways Privately’s technology can be used by developers. And Privately is planning to broaden its horizons to include other forms of digital communication — including voice.

“Voice is a top priority, as it is slowly replacing text in many environments,” Privately CEO and cofounder Deepak Tewari told VentureBeat. “We will have at least some features around voice in 2020.”

The Oyoty app also provides a glimpse into how Privately's underlying AI can be leveraged for visual forms of communication, including the ability to detect "too much skin" in a photo.

Privately said it has trained its systems to "understand a number of modalities" in terms of what may or may not be an inappropriate photo. This takes into account a photo's context, such as whether it was taken indoors or outdoors and whether it shows an individual or a family, to "interpret whether an image might be provocative," Tewari said.

Above: Oyoty demonstrates how Privately’s technology can be used across myriad scenarios to prevent children from over-sharing information about themselves
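
To make the idea of context-aware image screening concrete, the sketch below combines a raw skin-exposure score with contextual signals such as indoor/outdoor and the number of people in the frame. The signals, weights, and threshold are all hypothetical; Privately has not published the details of its models.

```python
# Hypothetical sketch of combining a skin-exposure score with scene context
# to decide whether to nudge the user before a photo is shared. Thresholds,
# weights, and classifier outputs are assumptions, not Privately's models.
from dataclasses import dataclass


@dataclass
class PhotoSignals:
    skin_exposure: float   # 0.0-1.0, from a hypothetical skin-detection model
    is_outdoors: bool      # from a hypothetical scene classifier
    num_people: int        # from a hypothetical person detector


def should_warn(signals: PhotoSignals, threshold: float = 0.5) -> bool:
    """Combine the raw skin-exposure score with context: outdoor and group
    photos are treated more leniently than indoor solo shots."""
    score = signals.skin_exposure
    if signals.is_outdoors:
        score -= 0.2   # outdoor scenes (e.g. a beach or sports field) lower the concern
    if signals.num_people > 1:
        score -= 0.1   # family or group snaps lower the concern further
    return score >= threshold


if __name__ == "__main__":
    print(should_warn(PhotoSignals(skin_exposure=0.7, is_outdoors=False, num_people=1)))  # True
    print(should_warn(PhotoSignals(skin_exposure=0.7, is_outdoors=True, num_people=3)))   # False
```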

Privately is quick to stress that its technology is deployed on the device itself, which not only helps deliver speedy, real-time guidance to the user, but also boosts its privacy credentials, given that data isn’t transferred to a remote server. With more privacy-focused regulations, such as GDPR, coming to the fore, this could be a big selling point for app developers.

“The privacy-preserving AI element of the Own It application that was developed through this collaboration [with the BBC] is an industry first,” Tewari added.

Privately has so far been supported by a Swiss angel investor, in addition to a grant that helped fund its R&D base in Switzerland. The company is also gearing up to raise funding from U.K. investors in early 2020, and to that end it has opened an office in London.

In terms of its business model, Privately said it garners revenue chiefly through licensing its technology to companies, which can pick out the features they wish to use. The company can also customize its technology, for an additional fee, to meet specific client requirements.

It is still early days for Privately, but it's already working on various proofs of concept with companies in the gaming industry, as well as charities, antivirus companies, and telcos. Earlier this year, Privately was invited to become a member of the Fair Play Alliance, a working group of gaming companies dedicated to improving experiences for gamers. Tencent also invited Privately to participate in the development of new protection standards for minors.

“There has been very strong interest overall,” Tewari said.

Well-being

At its core, Privately is tapping into a growing “digital health” trend. Back in July, Instagram introduced a new automated feature that warns users before they post abusive or bullying comments beneath photos or videos, and this feature was recently expanded to captions.

But reducing toxicity is only part of the picture here — Privately is focusing more on the broader “well-being” factor at an individual level.

“While toxicity and hate speech is a big problem ailing the internet, we see our focus more broadly on the subject of digital well-being,” Tewari continued. “To that end, we will develop technologies that understand the digital environment as well as the user better and provide personalized assistance to users to have a net positive relationship with technology.”

This is an area the big mobile platform providers are investing heavily in, with both Google and Apple launching various tools designed to improve people’s relationship with their digital devices. And it will likely become an increasingly greater focus for tech companies across the spectrum. Privately is betting it can improve the quality of time spent online by providing an extra layer of protection for children, with AI filling in the gaps for parents who can’t monitor their offspring 24/7.


Author: Paul Sawers
Source: VentureBeat
