AI & Robotics News

SmartCow’s new dev kit promises conversational AI, video apps

Enterprises needing to beef up development of conversational AI and video-based applications may want to know about SmartCow.ai, an AI engineering company specializing in video analytics and AIoT devices that use containers for deployments.

The artificial intelligence of things (AIoT) is a newly named IT category. It combines artificial intelligence (AI) with the internet of things (IoT) infrastructure to ostensibly achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics.

The six-year-old company this week introduced its new audiovisual development kit, Apollo. Built around Nvidia Jetson Xavier NX processors, the Apollo device enables developers to create applications with conversational AI capabilities, CEO and founder Ravi Kiran told VentureBeat from the company’s headquarters in St. Julian’s, Malta.

“Traditional development kits are geared toward beginner-level developers working with general-purpose use cases with AI vision widely used across applications,” Kiran said. “We recognize the breadth and depth of developers out there who want a dev kit that enables them to go deeper in their research and development, including the ability to implement conversational AI and NLP (natural language processing).

“Apollo is a specialized dev kit created to meet higher-level developers’ needs and give them a way to get straight to more conversational applications. The solutions are built using Nvidia SDKs, packaged in Docker containers, and deployed on Apollo. This allows developers and customers to experiment with various solutions without worrying about installing libraries, and so on,” Kiran said.
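Kiran’s point about containerized deployment can be illustrated with a minimal sketch using the Docker SDK for Python; the image name, device paths, and runtime flag below are illustrative assumptions rather than SmartCow’s actual packaging.

```python
# Minimal sketch: launching a containerized AI application on a Jetson-class
# device with the Docker SDK for Python. The image name and device paths are
# hypothetical placeholders, not SmartCow's actual containers.
import docker

client = docker.from_env()

container = client.containers.run(
    "example.registry.local/apollo-demo:latest",   # hypothetical image
    runtime="nvidia",                      # expose the GPU via the NVIDIA runtime
    devices=["/dev/video0:/dev/video0"],   # pass the camera through to the container
    volumes={"/data/models": {"bind": "/models", "mode": "ro"}},
    detach=True,
)
print(container.logs(tail=10).decode())
```

The appeal of this approach, as Kiran describes it, is that the libraries and SDK dependencies live inside the image, so nothing needs to be installed on the device itself.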

Apollo stands upright on a desk

SmartCow’s Apollo development kit stands upright on your desk and features onboard visual and audio sensors, including four microphones, two speaker terminals, two 3.5mm phone jacks, an 8MP IMX179 camera module, and an OLED display. In addition, Apollo features a 128GB NVMe SSD for storage and comes pre-packaged with the Nvidia DeepStream and RIVA Embedded SDK toolkits. The six NLP examples that showcase the kit’s capabilities include text-independent speaker recognition systems; speech-to-text and sentiment analysis; language translation and speaker diarization; and applications for abnormal sound and surveillance.

Speaker diarization is the process of partitioning an input audio stream into homogeneous segments according to the speaker identity.
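As a rough illustration of that idea (not the RIVA-based pipeline shipped with Apollo), diarization can be approximated by embedding fixed-length audio windows as speaker vectors and clustering them; the embed_window() function below is a hypothetical stand-in for a real speaker-embedding model.

```python
# Conceptual sketch of speaker diarization: split audio into windows, embed
# each window as a speaker vector, then cluster the vectors so that windows
# from the same speaker share a label.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def embed_window(samples: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would use a trained speaker-embedding model.
    return np.array([samples.mean(), samples.std()])

def diarize(audio: np.ndarray, sample_rate: int, window_s: float = 1.5,
            n_speakers: int = 2) -> list[tuple[float, int]]:
    hop = int(window_s * sample_rate)
    windows = [audio[i:i + hop] for i in range(0, len(audio) - hop + 1, hop)]
    embeddings = np.stack([embed_window(w) for w in windows])
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(embeddings)
    # Return (start_time_in_seconds, speaker_label) pairs for each window.
    return [(i * window_s, int(label)) for i, label in enumerate(labels)]
```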

Apollo development kits include two buttons: a default button with one-key recovery that eases the process of device recovery, and a programmable button that gives developers the flexibility to add their own applications, offering a more accessible path to development. Apollo is designed with a base frame that allows it to stand upright, making it easier to use, Kiran said.

The global NLP market is predicted to grow from $20.98 billion in 2021 to $127.26 billion in 2028, a compound annual growth rate (CAGR) of 29.4% over the forecast period. Kiran claims that, with six starter NLP examples and seamless, immediate speaker recognition, Apollo meets the increasing demand for development kits that simultaneously process both audio and video data using advanced AI models.
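For context, the quoted 29.4% figure is consistent with the standard CAGR formula applied over the seven-year forecast window, as this quick check shows.

```python
# Quick check of the quoted CAGR: $20.98B (2021) growing to $127.26B (2028).
start, end, years = 20.98, 127.26, 2028 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.1%}")  # prints: CAGR ≈ 29.4%
```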

Implementation of AI

To help technologists, data architects, and software developers learn more about how to put AI to work, VentureBeat asked the following questions of Ravi Kiran, CEO at SmartCow, who offered readers these details:

VentureBeat: What AI and ML tools are you using specifically? 

Kiran: DeepStream for vision and RIVA for audio; these are SDK toolkits from Nvidia, and SmartCow’s engineering teams have deep expertise with them.

VentureBeat: Are you using models and algorithms out of the box, for example from DataRobot or other sources?

Kiran: SmartCow has a huge collection of data, thus allowing us to train our own models. Our partner companies also provide these models.

VentureBeat: What cloud service are you using mainly? 

Kiran: AWS.

VentureBeat: Are you using a lot of the AI workflow tools that come with that cloud? 

Kiran: Weights and Biases and internal tooling that SmartCow developed. These tools constantly evolve.
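Kiran does not detail how SmartCow uses these tools, but a typical Weights & Biases experiment-tracking loop looks roughly like the sketch below; the project name, config values, and metrics are illustrative placeholders.

```python
# Minimal experiment-tracking sketch with Weights & Biases.
import random
import wandb

run = wandb.init(project="apollo-demo", config={"lr": 1e-3, "epochs": 3})

for epoch in range(run.config.epochs):
    loss = 1.0 / (epoch + 1) + random.random() * 0.05  # stand-in for real training
    wandb.log({"epoch": epoch, "loss": loss})

run.finish()
```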

VentureBeat: How much do you do yourselves? 

Kiran: The majority of the engineering is done in house.

VentureBeat: How are you labeling data for the ML and AI workflows? 

Kiran: With transfer learning techniques and similar approaches, less data is required to train a model for a task; we do use external services to label data when required.
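Kiran doesn’t specify a framework, but the data-efficiency argument behind transfer learning can be sketched in PyTorch: start from a pretrained backbone, freeze it, and train only a small task head on the limited labeled data. The class count and training details below are hypothetical.

```python
# Sketch of transfer learning with a pretrained backbone (PyTorch/torchvision).
# Freezing the backbone means only the small task head needs labeled examples.
import torch
import torchvision

num_classes = 5  # hypothetical downstream task

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False          # keep pretrained features fixed

model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # new task head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# A training loop over a (small) labeled dataset would go here, e.g.:
#   logits = model(images); loss = criterion(logits, labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```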

VentureBeat: Can you give us a ballpark estimate on how much data you are processing? 

Kiran: It depends on the task. Often we source data from public sources. There are also commercial companies that provide data. In some cases, we install sensors at the edge and build complex pipelines to retrieve the required data.

Author: Chris J. Preimesberger
Source: VentureBeat
