Synthesis AI, a synthetic data company, has released HumanAPI, a new tool for generating virtual humans from synthetic data. Virtual humans are photorealistic digital representations of people that move, talk, and behave like real humans in a virtual environment. They are meant to help developers build better AI applications.
The offering comes at a time of growing excitement about the idea of the metaverse. Facebook recently changed its name to Meta to focus on the metaverse, a place where humans and virtual humans can interact in new ways through technologies like AR and VR. Nvidia, meanwhile, made a barrage of metaverse-focused announcements at its GTC event today, including new avatar technology. The race is on.
Synthetic data helps address challenges around creating a robust set of labeled training data for developing AI apps. Supervised AI application development for recognizing objects in video starts with labeled data reflecting what the AI is being taught to recognize.
Typically, the labeling process starts with real humans manually labeling frames of video. Sometimes the labels are generic, like “sitting” or “standing,” which any human can apply correctly. In other cases, like differentiating between sports movements or yoga poses, more expensive experts are needed to distinguish subtle differences between poses. In this traditional approach, those experts would have to manually label every variation reflecting different body types, clothing, and camera angles.
Synthetic data generation can automate this process. It can automatically generate imagery of people performing the same pose as in the manually labeled data and copy the labels over to the new images, eliminating the need to label each novel variation by hand. The main benefit is that experts label only a few frames; the tool then creates variations reflecting different characteristics of people, such as skin color and body shape, that carry the appropriate labels with them.
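To make the idea concrete, here is a minimal, hypothetical sketch of that label-propagation step in Python. The class and parameter names are illustrative assumptions, not Synthesis AI's actual API; the point is simply that one expensive expert label fans out across many generated variations.

```python
# Hypothetical sketch of label propagation in synthetic data generation.
# Names and fields below are illustrative only, not Synthesis AI's API.
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class LabeledFrame:
    pose_label: str        # labeled once by an expert, e.g. "warrior_ii"
    skin_tone: str
    body_type: str
    camera_angle_deg: int


def generate_variations(expert_label: str) -> list[LabeledFrame]:
    """Expand one expert-labeled pose into many labeled variations.

    The expert label is applied once; every generated variation
    inherits it automatically, so no new manual labeling is needed.
    """
    skin_tones = ["light", "medium", "dark"]
    body_types = ["slim", "average", "heavy"]
    camera_angles = [0, 45, 90]
    return [
        LabeledFrame(expert_label, tone, body, angle)
        for tone, body, angle in product(skin_tones, body_types, camera_angles)
    ]


if __name__ == "__main__":
    frames = generate_variations("warrior_ii")
    print(f"1 expert label -> {len(frames)} labeled training frames")
    print(frames[0])
```

In a real pipeline, each of these parameter combinations would drive a renderer that produces a photorealistic image; the sketch keeps only the bookkeeping to show how labels travel with the variations.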
HumanAPI is designed to simplify application development
HumanAPI complements the company’s existing FaceAPI service for generating virtual faces. These offerings promise to streamline workflows and improve the quality of AI applications that analyze human appearance, emotions, activity, movement, gestures, ergonomics, and other characteristics. Specifically, HumanAPI automatically generates a range of virtual humans demonstrating common differences an app might encounter across people and circumstances, which improves the models trained on them.
Synthesis CEO Yashar Behzadi told VentureBeat, “Outside of common use cases in cinema, media, AR/VR, and gaming, virtual humans can be used to build and test computer vision AI systems.”
Synthetic humans could also improve physical product designs by simulating the user experience across a diverse range of people. For instance, Synthesis AI works with car manufacturers to create better driver safety systems. It is essential to ensure these systems are intuitive and robust for people of all shapes, sizes, and ages.
Generating vast numbers of diverse synthetic humans involves a combination of traditional approaches used in cinematic visual effects and newer generative AI models. HumanAPI can start with one golden set of labeled data and then apply transformations reflecting different lighting conditions, body types, skin colors, clothing, and other characteristics. Synthetic data can also address privacy concerns, since the generated imagery contains no personally identifiable information from real people.
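As a rough illustration of the golden-set workflow, the sketch below builds a job specification for such a pipeline. The endpoint-free schema, parameter names, and values are assumptions made for illustration and do not reflect HumanAPI's real interface.

```python
# Hypothetical job specification for expanding a "golden" labeled set.
# The schema and field names are assumptions, not HumanAPI's real API.
import json


def build_job_spec(golden_set_id: str) -> dict:
    """Describe the transformations to apply to a labeled golden set.

    Each combination of parameters would yield a new rendered image
    that carries over the golden set's labels unchanged.
    """
    return {
        "source_dataset": golden_set_id,
        "transformations": {
            "lighting": ["indoor", "outdoor", "low_light"],
            "body_type": ["slim", "average", "heavy"],
            "skin_tone": ["type_i", "type_iii", "type_vi"],  # Fitzpatrick scale
            "clothing": ["casual", "athletic", "formal"],
        },
        # Synthetic output contains no real person's data by construction.
        "contains_pii": False,
    }


if __name__ == "__main__":
    print(json.dumps(build_job_spec("golden-yoga-poses-v1"), indent=2))
```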
New use cases for synthetic data and virtual humans
Virtual humans could also improve intelligent AI assistants, virtual fitness coaches, and metaverse applications. Behzadi predicted, “The next generation of smart AI assistants will leverage cameras to drive more emotionally aware engagements.”
These systems will engage and converse more naturally by considering users’ attention and emotions. Virtual humans could simulate states such as boredom, engagement, distraction, and tiredness, leading to smarter assistants and more natural interactions.
Down the road, synthetic data tools reflecting people, objects, and environments could make it easier to sample and recombine real-world data for metaverse apps. Examples include watching a sports game from the perspective of your favorite athlete, interacting with photorealistic avatars of your friends, or experiencing physical products firsthand. All of these applications depend on computer vision AI that deeply understands humans, objects, 3D environments, and their interactions with one another.
“Creating these AI capabilities requires tremendous amounts of high-quality labeled 3D data that can only be generated using synthetic data technologies and virtual humans,” Behzadi said.
Author: George Lawton
Source: VentureBeat