Presented by Samsung NEXT
Human-level artificial intelligence is still a dream. Even with all of the recent advancements in state-of-the-art AI, its ability to understand the world around us is only at the level of a one-year-old child. We don't yet know how to build a robot that matches a two-year-old's ability to empathize, or her ability to define new goals to help others.
Still, it’s mind-blowingly impressive how well our current AI technology scales. The software industry has entered a race to apply AI in every vertical possible. While AI technology adoption is accelerating, our ability to understand its potential impact isn’t keeping up.
To share some of the crazy cool advancements happening in the AI world, we recently launched a 12-part video series where some of the world’s leading AI researchers from the likes of Google, Uber, and Giphy shared some of the technology they’re working on. We called this effort “How AI is Changing the World”.
Through this video series, we hope to inspire the next wave of AI innovation. Here are our notes from the top presentations and what kind of impact these technologies could have in the next 10 years.
Your phone camera: the health monitor
Cocoon Health is a San Francisco-based startup that develops a computer vision-driven health monitor for infants. From a seemingly simple camera monitor, parents receive a video feed, a breathing rate, and alerts. Cocoon's CTO, Pavan Kumar, shared how their camera identified that an infant was sick a full 18 hours before the parents noticed his temperature rising. Many companies are betting that smartphones will play a greater role in our personal health. What's unique about Cocoon Health is that it's investing heavily in technology that would let people take vital signs and diagnostic measurements directly through a phone's camera.
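To give a feel for how a camera alone can read vital signs, here is a minimal sketch of one common approach; this is a toy example of the general idea, not Cocoon Health's actual pipeline. The average brightness of a chest or torso region rises and falls with each breath, and the dominant frequency of that signal gives the breathing rate.

```python
# A minimal sketch (assumed approach, not Cocoon Health's actual pipeline)
# of estimating breathing rate from video: the mean pixel intensity of a
# chest region oscillates with each breath, and the dominant frequency of
# that signal is the breathing rate.
import numpy as np

np.random.seed(0)
fps = 30                                  # camera frame rate
t = np.arange(0, 60, 1 / fps)             # one minute of "footage"

# Synthetic stand-in for the per-frame mean brightness of the chest region:
# a 0.4 Hz oscillation (24 breaths/min) plus camera noise.
signal = 0.5 * np.sin(2 * np.pi * 0.4 * t) + 0.1 * np.random.randn(t.size)

# FFT, keeping only plausible infant breathing frequencies (~0.3-1.5 Hz).
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs > 0.3) & (freqs < 1.5)
rate_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated breathing rate: {rate_hz * 60:.0f} breaths/min")
```

A production system would first need to locate the chest region and compensate for camera motion, which is where most of the hard computer vision work lives.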
Imagining the next step, there's nothing technically impossible about building an app to replace 911 emergency services, where our phone camera could stream vitals and a video feed to a doctor who would advise on an immediate course of action. Actually building such an emergency response app would require some of the infrastructure and techniques that Cocoon Health is working on today.
20x more efficient neural networks
“This is a new period in the world’s history — we build models and machines in AI that are more complicated than we can understand,” says Jason Yosinski, co-founder of Uber AI Labs. The lab conducts research in artificial intelligence to solve a variety of fundamental AI challenges across the whole of Uber. Jason has devoted a great deal of effort to deepening our understanding of what makes AI models tick.
During training, every set of examples fed into a neural network causes updates to its parameters (often called “weights”). Every single parameter of the network is updated with the goal of improving results. In his presentation, Jason shared a new technique to evaluate how many of these updates to the parameters were actually useful.
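The talk doesn't spell out the method in detail, so here is a minimal sketch of one way to approximate this kind of "update usefulness" measurement; it is my own first-order approximation, not Uber AI's actual code. The idea: use the gradient of the full training loss as the yardstick, apply an ordinary mini-batch SGD step, and count how many individual parameter movements pointed in a loss-reducing direction.

```python
# A toy, first-order approximation (not Uber AI's actual method) of counting
# how many per-parameter updates helped reduce the overall training loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(512, 10)
y = torch.randint(0, 2, (512,))

helpful, total = 0, 0
for step in range(200):
    # Gradient of the FULL training loss at the current point: this is the
    # yardstick for whether an update direction actually helps.
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    eval_grads = [p.grad.detach().clone() for p in model.parameters()]
    before = [p.detach().clone() for p in model.parameters()]

    # The update itself comes from a small mini-batch, as in normal SGD.
    idx = torch.randint(0, 512, (32,))
    opt.zero_grad()
    loss_fn(model(X[idx]), y[idx]).backward()
    opt.step()

    # First-order allocation: eval_grad * delta < 0 means this parameter's
    # movement, on its own, lowered the overall training loss.
    for g, b, p in zip(eval_grads, before, model.parameters()):
        contrib = g * (p.detach() - b)
        helpful += (contrib < 0).sum().item()
        total += contrib.numel()

print(f"fraction of per-parameter updates that helped: {helpful / total:.2f}")
```

Because each mini-batch gradient is a noisy estimate of the full loss gradient, a sizable fraction of individual parameter movements end up pointing the wrong way, which is the flavor of result described next.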
It turns out that in the example Jason analyzed (training a ResNet model), only 50.7% of the updates were useful; the rest were pointless or even detrimental. These training sessions can be time-consuming and expensive, so a 50% efficiency level is alarming. As an example, training an AI as powerful as Google DeepMind's StarCraft agent takes 16 TPUs (Google's specialized hardware for AI workloads) per agent, running for 14 days. Each TPU draws an estimated 200 watts, and at California electricity prices of roughly $0.10 per kWh, the estimated bill comes to $110,000 for each new version of the model.
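The arithmetic behind that bill can be checked in a few lines. The wattage, duration, and price are the figures quoted above; the scaling from a single 16-TPU agent to the full bill is my own assumption, since StarCraft-style training runs many agents in parallel.

```python
# Back-of-the-envelope electricity cost using the figures quoted above.
tpus = 16
watts_per_tpu = 200
days = 14
price_per_kwh = 0.10

kwh = tpus * watts_per_tpu / 1000 * days * 24
print(f"one 16-TPU agent for 14 days: {kwh:.0f} kWh -> ${kwh * price_per_kwh:.0f}")
# ~1,075 kWh, about $108 for a single agent. Reaching the ~$110,000 total
# quoted above implies on the order of 1,000 such agents trained in
# parallel -- plausible for league-style training, but an assumption here.
```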
The scale of energy used by AI training globally is enormous, and efforts from Jason and his colleagues may find a way to improve it dramatically. Thanks to Jason's results, we can reasonably suspect that only half of the parameter updates made during neural network training are useful. From another perspective, these AI StarCraft agents require about 200 years of gameplay experience in training, while professional human players reach a similar level after about 10 years. That gap suggests that, in the future, AI training could become 20 times more efficient.
Bringing back privacy
Even though many have given up entirely on ever regaining their online privacy, some governments are investing heavily in regulating what personally identifiable information companies store about us, as well as how they share it. D-ID is an identity-protection startup operating in this new domain. Gil Perry, CEO of D-ID, described how his company's products help organizations meet compliance requirements without having to destroy data, while also better protecting consumer privacy.
One product Gil shared can modify an image containing a face so that the face becomes unrecognizable to AI-based facial recognition systems while remaining identifiable to humans. Another tool D-ID designed replaces the faces of individuals in video footage while preserving their gaze, gender, and emotional state.
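D-ID hasn't published how its product works, but a minimal sketch of the general class of technique (an FGSM-style adversarial perturbation, assumed here purely for illustration) shows how a tiny, human-imperceptible pixel change can push an image's "identity" away from what a face-recognition network expects.

```python
# A toy sketch of the general idea (assumed technique, NOT D-ID's actual
# method): nudge pixels a tiny amount so a face-embedding network no longer
# matches the original identity, while humans barely notice the change.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "face recognizer": any network mapping images to embeddings.
embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

face = torch.rand(1, 3, 64, 64)            # original photo (toy data)
with torch.no_grad():
    target = embedder(face)                # identity the system would match

# Find the pixel direction that most decreases similarity to that identity.
perturbed = face.clone().requires_grad_(True)
sim = nn.functional.cosine_similarity(embedder(perturbed), target).mean()
sim.backward()

# Step against the similarity gradient, capping per-pixel change at epsilon
# so the edit stays invisible to a human viewer.
epsilon = 4 / 255
adversarial = (face - epsilon * perturbed.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_sim = nn.functional.cosine_similarity(embedder(adversarial), target)
print(f"similarity before: {sim.item():.3f}, after: {new_sim.item():.3f}")
print(f"max pixel change: {(adversarial - face).abs().max().item():.4f}")
```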
Imagine a scenario in which a self-driving car needs to decide whether a pedestrian is planning to cross the street or not, which can be gleaned from the person’s gaze direction. Companies that stockpile this kind of data often resort to blurring out faces, which can dramatically reduce the value of their original footage.
AI today unlocks the ability to completely change who we’re looking at in photos and videos. It’s a small leap of the imagination to think that in the future, we’ll choose who will star in the next blockbuster hit, based on our own personal preferences. It might even be our own faces seamlessly inserted into these movies.
Aside from the possibilities in the entertainment industry, these techniques can also be used for deception. Designing an AI to tell us what's true and what's false might lead to a dystopian future, even purely through bias rather than malice. But better grounding in facts, and a deeper understanding of the context around information, is perhaps the next great challenge for AI: not to tell us what to think, but to direct us to learn more so we can better judge for ourselves.
How AI is changing the world
We recorded this video series as part of a global event series in Boston, New York City, Tel Aviv, and San Francisco, where we got to hear from many more teams, big and small, building cutting-edge applications with AI. To name a few: we learned how Hopper is using AI to change air travel, how Giphy is using AI to enrich our conversations with contextually relevant memes, and how Google is building AI infrastructure. You can find the full list of speakers, companies, and videos on our blog.
Your ideas, our investment
It’s unclear how far we are from developing an AI we’d consider a good friend, one we can trust to make us coffee and discuss politics, or even one as helpful as a two-year-old opening a door for us. This series of events offered small glimpses into applied AI today, and hopefully some ideas for the future.
If you’re exploring ideas like these yourself, or are actively working on applied AI, please drop us a note at Samsung NEXT’s Q Fund. We invest in early-stage startups that take on today’s grand challenges in AI, and we’re actively looking to invest more.
Yuval Greenfield, Developer Relations at Samsung NEXT.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.