DeepCube’s software-based solution accelerates AI on existing hardware

DeepCube, a startup developing a platform that reduces the computational requirements of AI algorithms on existing hardware, today raised $7 million. A spokesperson told VentureBeat the funds will be put toward research, commercialization, and growth of the DeepCube team at its offices in Tel Aviv and New York.

Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that deep learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

DeepCube, which describes its solution as a “software-based inference accelerator,” was cofounded in 2017 by Yaron Eitan and Eli David, who previously founded AI cybersecurity company Deep Instinct. The two developed a platform that enables machine learning models to run efficiently on edge devices and servers. DeepCube’s product is designed to be deployed on any type of hardware, including processors, GPUs, and AI accelerators, and the company claims it leads to an average tenfold speed improvement, a “major” reduction in memory requirements, and a “substantial” reduction in power consumption.

According to David, DeepCube’s technology continuously restructures and “sparsifies” machine learning models during training to maintain accuracy and reduce the size of the models. Neural Magic takes a similar approach. Founded by MIT professor Nir Shavit, Neural Magic redesigns AI algorithms to run more efficiently on processors by leveraging the chips’ large available memory and powerful cores. Another DeepCube rival, DarwinAI, uses what it calls generative synthesis to ingest models and spit out highly optimized versions.
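For readers unfamiliar with the term, “sparsification” generally means pruning away low-importance weights during or after training so the resulting model is smaller and cheaper to run. The sketch below illustrates one generic approach, iterative magnitude pruning, in PyTorch. It is not DeepCube’s, Neural Magic’s, or DarwinAI’s actual method; the toy architecture, synthetic data, and sparsity schedule are all assumptions chosen purely for illustration.

```python
# Minimal, illustrative sketch of iterative magnitude pruning during training.
# NOT DeepCube's proprietary technique; the architecture, data, and sparsity
# schedule below are assumptions made for demonstration purposes only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and synthetic data standing in for a real task.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
inputs = torch.randn(256, 784)
targets = torch.randint(0, 10, (256,))

def prune_by_magnitude(layer: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights so roughly `sparsity` of them are zero."""
    with torch.no_grad():
        flat = layer.weight.abs().flatten()
        k = int(sparsity * flat.numel())
        if k == 0:
            return
        threshold = flat.kthvalue(k).values                    # k-th smallest magnitude
        mask = (layer.weight.abs() > threshold).to(layer.weight.dtype)
        layer.weight.mul_(mask)                                # keep only the larger weights

for epoch in range(10):
    # Ordinary training step on the (synthetic) batch.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Ramp sparsity up gradually, then keep training so the surviving
    # weights can recover any accuracy lost to pruning.
    target_sparsity = min(0.9, 0.1 * (epoch + 1))
    for layer in model:
        if isinstance(layer, nn.Linear):
            prune_by_magnitude(layer, target_sparsity)

    print(f"epoch {epoch}: loss={loss.item():.3f}, target sparsity={target_sparsity:.0%}")
```

In production systems the pruned positions are typically kept masked on later training steps, and the zeroed weights are actually skipped by a specialized runtime; that exploitation of sparsity at inference time is where most of the claimed speed and memory gains would come from.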

But David claims DeepCube’s platform is already being used by “major” semiconductor companies ahead of its general availability in Q1 2021, though he declined to provide names. The company’s go-to-market strategies include working with developers of AI hardware accelerators; licensing directly to enterprises; and working with vertical solution providers like makers of security cameras, drones, and potentially self-driving cars.

Over the last several years, deep learning has been the most significant driver of advances in AI. “But of the tremendous results we have seen in deep learning research, very few end in real-world deployment, due to extensive computational requirements and large model sizes,” David told VentureBeat via email. “At DeepCube, we are bridging that gap to bring these results to real-world deep learning deployments on edge devices. This vote of confidence from our investors allows us to deliver on the promise of providing deep learning to new markets, at scale, revolutionizing workflows for organizations and their end-users, and driving the next decade of AI advancement.”

Canadian VC Awz Ventures led DeepCube’s series A, with participation from Koch Disruptive Technologies and Nima Capital. The round brings the company’s total raised to date to $12 million; DeepCube expects to increase the size of its workforce from 20 to 25 by 2021.



Author: Kyle Wiggers
Source: VentureBeat
