Hollywood’s strike battle over AI and 3D scanning has been decades in the making

Hollywood has been largely shut down for more than 100 days now, after the union representing screenwriters, the Writers Guild of America (WGA), voted to go on strike on May 1. The writers were soon followed by the actors’ union, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), on July 13, marking the first time in 63 years that both major unions were on strike at the same time.

Both unions have objected to contract renewal proposals from the Alliance of Motion Picture and Television Producers (AMPTP). A key sticking point is the use of artificial intelligence (AI) and 3D scanning technology. The producers, and the major movie studios behind them, want a broad license to use the tech however they wish. The writers and actors want an agreement on specific rules for how, when and where it can be used.

While the two sides continue to duke it out through their negotiators, VentureBeat took a close look at the actual tech at issue, and discovered that there is an important distinction to be made if the dueling sides are to come to a mutually satisfactory agreement: 3D scanning is not the same as AI, and most vendors only offer one of the two technologies for filmmaking.

The tech vendors largely also believe actors and writers should be compensated for their work in whatever form it takes, and that the vendors’ business would suffer if actors were replaced with 3D doubles and writers with generated scripts.

But things are changing quickly. VentureBeat learned of plans by an AI vendor, Move.ai, to launch a new motion capture app next month that uses a single smartphone camera — a development that would radically reduce the cost and complexity of making 3D digital models move. Separately, a 3D scanning company, Digital Domain, shared its intent to use AI to create “fully digital human” avatars powered by AI chatbots.

3D scanning is not the same as AI, and only one is truly new to Hollywood

While some 3D scanning companies are pursuing AI solutions to help them create interactive 3D models of actors — known variously as digital humans, digital doubles, digital twins, or virtual doppelgängers — 3D scanning technology came to Hollywood long before AI was readily available or practical, and AI is not needed to scan actors.

However, if realistic 3D scans are to one day replace working actors — perhaps even in the near future — an additional, separate layer of AI will likely be needed to help the 3D models of actors move, emote and speak realistically. That AI layer largely does not exist yet. But companies are working on tech that would allow it.

Understanding exactly who some of the tech vendors behind these two separate and distinct technologies — 3D scanning and AI — are, and what they actually do, is imperative if the conflicting sides in Hollywood and the creative arts more generally are to forge a sustainable, mutually beneficial path forward.

Yet in Hollywood, you could be forgiven for thinking that both technologies — AI and 3D scanning — are one and the same.

Duncan Crabtree-Ireland, the chief negotiator for SAG-AFTRA, revealed that the studios proposed a plan in July to 3D-scan extras or background actors and use their digital likenesses indefinitely. This proposal was swiftly rejected by the union. “We came into this negotiation saying that AI has to be done in a way that respects actors, respects their human rights to their own bodies, voice, image and likeness,” Crabtree-Ireland told Deadline.

Meanwhile, there have been increasing reports of actors being subjected to 3D scanning on major movie and TV sets, causing unease within the industry. One account shared online read:

“The first week of the strike, a young actor (early 20s) told me she was a BG actor on a Marvel series and they sent her to ‘the truck’ – where they scanned her face and body 3 times. Owned her image in perpetuity across the Universe for $100. Existential, is right.”

The main conflict

Though 3D actor scanning has been around for years, Hollywood executives like those at Disney are reportedly excited about the addition of generative AI to it, and about AI’s overarching prospects for new, more cost-effective storytelling. But the increasing availability of the technology has also sparked major concerns from writers and actors as to how their livelihoods and crafts will be affected.

When it comes to Hollywood writers, the recent launch of a number of free, consumer-facing, text-to-text large language model (LLM) applications such as ChatGPT, Claude and LLaMA has made it much easier for people to generate screenplays and scripts on the fly.
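
As a rough illustration of how low that barrier now is, the sketch below asks a general-purpose LLM API to draft a screenplay scene. It is a minimal example under stated assumptions (the openai Python package, an API key in the environment, a placeholder model name and prompt), not a tool any studio or guild is known to use.

```python
# Minimal sketch: generating a screenplay scene with a consumer LLM API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a one-page screenplay scene in standard format: "
    "two strangers argue over the last seat on a late-night bus."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a screenwriter drafting spec scenes."},
        {"role": "user", "content": prompt},
    ],
    temperature=0.9,  # higher temperature for more varied dialogue
)

print(response.choices[0].message.content)
```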

Reid Hoffman, a backer of ChatGPT maker OpenAI, even wrote a whole book with ChatGPT and included sample screenplay pages.

Another app, Sudowrite, based on OpenAI’s GPT-3, can be used to write prose and screenplays, but was the target of criticism several months ago from authors who believed that it was trained on unpublished work from draft groups without their express consent. Sudowrite’s founder denied this.

Meanwhile, voice cloning AI apps like those offered by startup ElevenLabs and demoed by Meta are also raising the prospect that actors won’t even need to record voiceovers for animated performances, including those involving their digital doubles.
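
For a sense of what programmatic voice synthesis looks like, here is a hedged sketch that posts a line of dialogue to ElevenLabs' public text-to-speech REST endpoint. The endpoint, header and field names reflect the vendor's published API but may change, and the voice ID, model name and output file are placeholders; this is illustrative, not a description of any production pipeline.

```python
# Illustrative sketch: synthesizing speech from text with ElevenLabs' REST API.
# Endpoint and fields follow the vendor's public docs but may change; this is
# not how any studio pipeline is known to work.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]  # assumed env var
VOICE_ID = "your-voice-id-here"             # placeholder voice ID

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
payload = {
    "text": "We'll need that shuttle airborne before the storm hits.",
    "model_id": "eleven_monolingual_v1",    # placeholder model name
}
headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}

resp = requests.post(url, json=payload, headers=headers, timeout=60)
resp.raise_for_status()

# The API returns audio bytes (MP3 by default); save them to disk.
with open("line_read.mp3", "wb") as f:
    f.write(resp.content)
```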

Separately, though 3D body-scanning is now making headlines thanks to the actors’ strike, the technology behind it has actually been around for decades, introduced by some of cinema’s biggest champions and auteurs, including James Cameron, David Fincher, and the celebrated effects studio Industrial Light and Magic (ILM).

Now with the power of generative AI, those 3D scans that were once seen as extensions of a human actor’s performance on a set can be repurposed and theoretically used as the basis for new performances that don’t require the actor — or their consent — going forward. You could even get an AI chatbot like ChatGPT to write a script and have a digital actor perform it. But because of the inherent complexity of these technologies, they are all generally, and improperly, conflated into one, grouped under the moniker du jour, “AI.”

The long history of 3D scanning

“We’ve been at this for 28 years,” said Michael Raphael, CEO, president and founder of Direct Dimensions, in an exclusive video interview with VentureBeat.

Direct Dimensions is a Baltimore-based 3D scanning company that builds the scanning hardware behind some of the biggest blockbusters in recent years, including Marvel’s Avengers: Infinity War and Avengers: Endgame.

The firm’s first subject in Hollywood was actor Natalie Portman for her Oscar-winning turn in the 2010 psychosexual thriller Black Swan.

Raphael, an engineer by training, founded the company in 1995 after working in the aerospace industry, where he helped develop precision 3D scanning tools for measuring aircraft parts, including an articulating arm with optical encoders in the joints.

However, as the years passed and technology became more advanced, the company expanded its offerings to include other scanning hardware such as laser scanning with lidar (light detection and ranging sensors, like those found on some types of self-driving cars), as well as still photos taken by an array of common digital single-lens reflex (DSLR) cameras and stitched together to form a 3D model, a technique known as photogrammetry.
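
As a concrete illustration of photogrammetry, the sketch below drives COLMAP, a widely used open-source reconstruction tool, over a folder of overlapping photos: it matches features across the images, estimates where each camera stood, and triangulates a 3D model. This is a generic open-source pipeline with placeholder paths, not Direct Dimensions' proprietary setup.

```python
# Minimal photogrammetry sketch using COLMAP's automatic reconstructor.
# Generic open-source pipeline for illustration only; assumes COLMAP is
# installed and on PATH, and that ./photos holds overlapping DSLR images.
import subprocess
from pathlib import Path

workspace = Path("scan_workspace")  # placeholder output folder
images = Path("photos")             # placeholder folder of source photos
workspace.mkdir(exist_ok=True)

# COLMAP matches features across photos, solves camera poses, and
# triangulates a 3D reconstruction into the workspace folder.
subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", str(workspace),
        "--image_path", str(images),
    ],
    check=True,
)

print(f"Reconstruction written to {workspace.resolve()}")
```

Production rigs capture all the photos in a single instant with synchronized cameras, which is why vendors mount dozens or hundreds of DSLRs in arrays or, as described below, inside a truck.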

Today, Direct Dimensions works not only on movies, but on imaging industrial parts for aerospace, defense and manufacturing; buildings and architecture; artworks and artifacts; jewelry; and basically any object from the small to the very large. In fact, Hollywood has only ever made up a small portion of Direct Dimensions’ business; most of it is precision 3D scanning for other, less glamorous industries.

“We scan anything you can think of for basically engineering or manufacturing purposes,” Raphael told VentureBeat.

In order to scan small objects, Direct Dimensions created its own in-house hardware: an automated, microwave-sized scanner it calls the Part Automated Scanning System (PASS).

Importantly, Direct Dimensions does not make its own AI software nor does it plan to. It scans objects and turns them into 3D models using off-the-shelf software like Autodesk’s Revit.

The short list of 3D scanners

Raphael said Direct Dimensions was only one of about a “dozen” companies around the world offering similar services, and VentureBeat’s own research turned up several others.

One such 3D scanning company, Avatar Factory from Australia, is run by a family of four: husband and wife Mark and Kate Ruff, and their daughters Amy and Chloe.

The company was founded in 2015 and offers a “cyberscanning” process involving 172 cameras mounted around the interior of a truck. This allows it to provide mobile 3D scanning of actors on locations outside of studios — say, landscapes and exteriors. Like Direct Dimensions, the company also offers prop scanning.

Among the notable recent titles for which Avatar Factory has performed 3D scanning are Mortal Kombat, Elvis and Shantaram (the Apple TV series).

“The Avatar Factory create photo-realistic 3D digital doubles that are used for background replacement, as well as stunt work that is too dangerous to be performed by actual stunt doubles,” explained Chloe Ruff, Avatar Factory’s CEO, chief technology officer (CTO) and head of design, in an email to VentureBeat.

While Ruff said that Avatar Factory has used 3D scans of multiple extras and background actors to create digital crowd scenes, she also said the work would suffer without the variety those performers contribute.

“As so much of our work is for background replacement we see hundreds of extras and background actors come through our system on a typical shoot day,” Ruff wrote. “Having extras and background actors be on a film set is fundamental to our business operations and we couldn’t do what we do without them. It would be devastating to the industry and our business if all of those actors were to be replaced by AI, like some studios are suggesting.”

AI-assisted 3D scanning is in the works

Separately, rival 3D scanning company Digital Domain, co-founded in 1993 by James Cameron, legendary effects supervisor Stan Winston and former ILM general manager Scott Ross, declined to comment for this story on the controversy over scanning background actors.

However, a spokesperson sent VentureBeat a document outlining the company’s approach to creating “digital humans,” 3D models of actors derived from thorough, full-body scans that are “rigged” with points that allow motion. The document contains the following passage:

“In most cases, direct digital animation is used for body movements only, while facial animation almost always has a performance by a human actor as the underlying and driving component. This is especially true when the dialog is part of the performance.”

The Digital Domain document goes on to note the increasing role of AI in creating digital humans, saying, “We have been investigating the use of generative AI for the creation of digital assets. It’s still very early days with this technology, and use cases are still emerging.” The document also states:

“We feel the nuances of an actor’s performance in combination with our AI & Machine Learning tool sets is critical to achieving photo realistic results that can captivate an audience and cross the uncanny valley.

“That said, we are also working on what we call Autonomous Virtual Human technology. Here we create a fully digital human, either based on a real person or a synthetic identity, powered by generative AI components such as chatbots. The goal is to create a realistic virtual human the user can have a conversation or other interaction with. We believe that the primary application of this technology is outside of entertainment, in areas such as customer service, hospitality, healthcare, etc…”
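
The rigging Digital Domain mentions, attaching a scanned body to a skeleton of joints so it can be posed, can be illustrated with a toy forward-kinematics example. The three-joint arm below is invented purely for illustration and is a far cry from a production rig with its hundreds of joints, skinning weights and facial controls.

```python
# Toy forward-kinematics sketch: posing a "rigged" digital double.
# Joint names, offsets and rotations are invented for illustration.
import numpy as np

def rotation_z(degrees: float) -> np.ndarray:
    """4x4 homogeneous transform rotating about the Z axis."""
    r = np.radians(degrees)
    c, s = np.cos(r), np.sin(r)
    return np.array([
        [c, -s, 0, 0],
        [s,  c, 0, 0],
        [0,  0, 1, 0],
        [0,  0, 0, 1],
    ])

def translation(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous translation."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

# Each joint: (name, parent index, offset from parent, animated rotation).
joints = [
    ("shoulder", -1, translation(0.0, 1.5, 0.0), rotation_z(30)),
    ("elbow",     0, translation(0.4, 0.0, 0.0), rotation_z(45)),
    ("wrist",     1, translation(0.3, 0.0, 0.0), rotation_z(10)),
]

# Forward kinematics: a joint's world transform is its parent's world
# transform composed with its own offset and rotation.
world = []
for name, parent, offset, rotate in joints:
    local = offset @ rotate
    world.append(local if parent < 0 else world[parent] @ local)
    position = world[-1][:3, 3]
    print(f"{name:8s} world position: {np.round(position, 3)}")
```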

Industrial Light and Magic (ILM) was at the forefront

How did we get here? Visual effects and computer graphics scholars point to the 1989 sci-fi film The Abyss, directed by James Cameron of Titanic, Avatar, Aliens and Terminator 2 fame, as one of the first major movies to feature 3D scanning tech.

Actors Ed Harris and Mary Elizabeth Mastrantonio both had their facial expressions scanned by Industrial Light and Magic (ILM), the special effects company founded earlier by George Lucas to create the vivid spacefaring worlds and scenery of Star Wars, according to Redshark News. ILM used a device called the Cyberware Color 3-D Digitizer, Model 4020 RGB/PS-D, a “plane of light laser scanner” developed by a now-defunct California company for which the device was named. The U.S. Air Force later got ahold of one for military scanning and reconnaissance purposes, and described it as follows:

“This Cyberware scanning system is capable of digitizing approximately 250,000 points on the surface of the head, face, and shoulders in about 17 seconds. The level of resolution achieved is approximately 1 mm.”

For The Abyss, ILM scanned actors to create the “pseudopod,” a watery shapeshifting alien lifeform that mimicked them. This holds the distinction of being the first fully computer-generated character in a major live-action motion picture, according to Computer Graphics and Computer Animation: A Retrospective Overview, a book from Ohio State University chronicling the CGI industry’s rise, by Wayne E. Carlson.

Raphael also pointed to 2008’s The Curious Case of Benjamin Button, starring Brad Pitt as a man aging in reverse, complete with visual effects charting his transformation from an infant with an old man’s features into an ever-younger adult, as a turning point for 3D actor-scanning technology.

“Benjamin Button pioneered the science around these types of human body scanning,” Raphael said.

Pressing the ‘Benjamin Button’

When making Benjamin Button, director David Fincher wanted to create a realistic version of lead star Brad Pitt both old and young. While makeup and prosthetics would traditionally be used, the director thought this approach would not give the character the qualities he wanted.

He turned to Digital Domain, which in turn looked to computer effects work from Paul Debevec, a research adjunct professor at the University of Southern California’s (USC) Institute for Creative Technologies (ICT), who today also works as a chief researcher at Netflix’s Eyeline Studios.

According to Debevec’s recollection in a 2013 interview with the MPAA’s outlet The Credits, Fincher “had this hybrid idea, where they would do the computer graphics for most of the face except for the eyeballs and the area of skin around the eyes, and those would be filmed for real and they’d put it all together.”

In order to realize Fincher’s vision, Digital Domain turned to Debevec and asked him to design a “lighting reproduction” system whereby they could capture light and reflections in Pitt’s eyes, and superimpose the eyes onto a fully digital face.

Debevec designed such a system using LED panels arranged like a cube around the actor, and later, brought in a physical sculpture of Pitt’s head as a 70-year-old man and used the system to capture light bouncing off that.
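
The principle behind such a lighting-reproduction rig can be sketched in a few lines: photograph the subject (or a sculpture of it) under each light in the rig separately, and any new lighting environment becomes a weighted sum of those basis images, since light adds linearly. The snippet below illustrates that idea with invented file names and weights; it is not Debevec's or Digital Domain's actual pipeline.

```python
# Simplified sketch of image-based relighting: combine photos of a subject
# lit by individual LED panels into one image under a new lighting mix.
# File names and weights are invented for illustration.
import numpy as np
from PIL import Image

# One photo per light source in the rig (placeholder paths).
basis_paths = ["lit_by_panel_000.png", "lit_by_panel_001.png", "lit_by_panel_002.png"]

# How brightly each panel "shines" in the new, virtual environment.
weights = [0.7, 0.2, 0.5]

relit = None
for path, weight in zip(basis_paths, weights):
    img = np.asarray(Image.open(path), dtype=np.float32)
    relit = img * weight if relit is None else relit + img * weight

# Clamp to displayable range and save the relit composite.
Image.fromarray(np.clip(relit, 0, 255).astype(np.uint8)).save("relit.png")
```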

“Ever since I started seriously researching computer graphics, the whole idea of creating a photo-real digital human character in a movie, or in anything, was kind of this Holy Grail of computer graphics,” Debevec told The Credits.

The approach worked: The Curious Case of Benjamin Button went on to win the 2009 Academy Award for Best Achievement in Visual Effects. And the team got closer to Debevec’s “Holy Grail” by creating a fully CGI human face.

According to Mark Ruff of Avatar Factory, the fact that Benjamin Button achieved such a lifelike representation of Brad Pitt, yet Pitt continues to act in new films, helps explain why 3D scans will not be displacing human actors anytime soon.

“It was conceivable back then that Brad Pitt no longer needed to appear in future films,” Mark told VentureBeat. “His avatar could complete any future performance. Yet, we still see Brad Pitt acting. Even if Brad Pitt were scanned and did not perform himself ever again in a film, I am sure his agent would still acquire a premium for his identity.”

Say hello to digital humans

Today, many companies are pursuing the vision of creating lifelike 3D actors — whether they be doubles or fully digital creations.

As The Information reported recently, a number of startups — Hyperreal, Synthesia, Soul Machines and Metaphysic — have all raised millions on the promise they could create realistic 3D digital doubles of leading A-list stars in Hollywood and major sports.

This would allow stars to reap appearance fees without ever setting foot on set (while the agents took a cut). In fact, it could create a whole new revenue stream for stars, “renting” out their likenesses/digital twins while they pursue higher-quality, more interesting, but possibly lower-paying passion projects.

In July, VentureBeat reported that Synthesia actually hired real actors to create a database of 39,765 frames of dynamic human motion that its AI would train on. This AI will allow customers to create realistic videos from text, though the ideal use case is more for company training videos, promotions and commercials rather than full feature films.

“We’re not replacing actors,” the company’s CTO, Jon Starck, told VentureBeat. “We’re not replacing movie creation. We’re replacing text for communication. And we’re bringing synthetic video to the toolbox for businesses.”

At the same time, he said that an entire movie made out of synthetic data was likely in the future.

The industry is moving fast from the days when deepfake images of Tom Cruise plastered on TikTok creators’ faces (powered by the tech that went on to become Metaphysic) and Bruce Willis renting out his own deepfake were making headlines.

Now, just one or two years later, “many stars and agents are quietly taking meetings with AI companies to explore their options,” according to The Information’s sources.

AI-driven motion capture

Of course, creating a digital double is a lot easier said than done. And then, animating that double to move realistically is another ballgame entirely.

Motion capture — the technology that allows human movements to be reproduced in animation or computer graphics — has been around for more than 100 years, but the modern tools didn’t emerge until the 1980s.

And then, for the subsequent two decades, it mostly involved dressing actors in tight-fitting bodysuits dotted with ping-pong-ball-like markers, and using specialized cameras to map their movements onto a digital model or “skeleton” that could be turned into a different character or re-costumed with computer graphics.

But today, thanks to advances in AI and software, human motion can be captured with a set of smartphones alone, without the need for suits and markers. One such company taking the “markerless” smartphone route is U.K.-based Move.ai, founded in 2019 to capture athletes’ movements, which has since branched out into video games and film.
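
Move.ai has not published the details of its method, but the general idea of markerless capture, estimating a skeleton directly from ordinary video, can be sketched with an open-source pose estimator such as Google's MediaPipe. The snippet below is a generic stand-in with a placeholder video path, not Move.ai's technology.

```python
# Illustrative markerless motion capture sketch using MediaPipe Pose.
# Generic open-source stand-in, not Move.ai's pipeline.
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture("performance.mp4")  # placeholder smartphone video
frames = []  # per-frame list of 33 (x, y, z) body landmarks

with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV reads BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            frames.append(
                [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
            )

cap.release()
print(f"Captured {len(frames)} frames of skeletal data, "
      f"{len(frames[0]) if frames else 0} joints per frame.")
```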

“Creating 3D animation might seem like quite a niche market, but it’s actually a huge market, over $10 billion,” said Tino Millar, CEO and cofounder of Move.ai, in a video interview with VentureBeat.

Millar said that in the past, animating the motion of 3D characters was done largely “by hand.” Even those animators using longstanding software such as Blender or Cinema 4D have to spend many hours training and educating themselves on the tools in order to achieve the quality necessary for major films.

The other alternative, the marker and tight-fitted suit approach described above, is similarly time-intensive and requires an expensive studio setup and multiple infrared cameras.

“What we’ve come along and done is, using AI and a few other breakthroughs in understanding human motion in physics and statistics, is that we believe we can make it 100 to 1,000 times cheaper to do than with motion capture suits, while maintaining the quality, and making it much more accessible to people,” Millar said.

In March 2023, Move.ai launched a consumer-facing smartphone app that requires at least two (and up to six) iPhones running iOS 16 to be positioned around a person to capture their motion.

Since then, “it’s being used by top game companies around the world, top film and TV productions, [and] content creators at home creating video for YouTube and TikTok,” Millar said.

Move.ai also supports Android devices in an “experimental” mode, and Millar told VentureBeat the company plans to launch a single-smartphone camera version of its app next month, September 2023, which would further reduce the barrier to entry for aspiring filmmakers.

AI’s increasing availability to consumers stokes fears

So, to recap: 3D scanning and improved motion-capture tech have been in the works in Hollywood for years, but have lately become much more affordable and ubiquitous, while AI tech has only recently become publicly available to consumers and Hollywood.

“It’s one thing to have these [3D] assets, and they’ve had these assets for 10 years at least,” said Raphael of Direct Dimensions. “But the fact that you’re adding all this AI to it, where you can manipulate assets, and you can make crowd scenes, parade scenes, audiences, all without having to pay actors to do that — the legality of all this still needs to be worked out.”

This trickle-down effect of both technologies has come just as the actors and writers had to renegotiate their contracts with studios, and as the studios have embraced yet another new technology — streaming video.

All of which has concocted a stew of inflated hype, real advances, fear and fearmongering, and mutual misunderstandings that have boiled over into the standoff that has now gone on for more than 100 days.

“I can only speculate,” Millar of Move.ai said. “But AI is much more in popular culture. People are much more aware of it. There is AI in their devices now. In the past, people weren’t aware of it because it was only being used by high-end production companies. The high end will always have the bleeding edge, but a lot of this technology is filtering down to consumers.”

Author: Carl Franzen
Source: VentureBeat