
Meta’s new AI milestone: Emu Video and Emu Edit set to revolutionize text-to-video generation and image editing




Meta, the parent company of Facebook and Instagram, today unveiled two advancements in artificial intelligence (AI) content creation and editing: Emu Edit for instruction-based image editing and Emu Video for text-to-video generation.

The company developed both tools under its Expressive Media Universe (Emu) project, which was revealed back in September. Together, the new AI tools for content creation hint at more intuitive and creative features for Meta’s family of social apps like Facebook and Instagram.

Emu Edit: raising the bar for image editing

The first breakthrough, dubbed Emu Edit, is designed to give users fine-grained control over image editing. It takes an instruction-based approach to image manipulation: users type text commands describing the change they want, similar to the Generative Fill feature currently offered by Adobe Photoshop.

Credit: Research by AI at Meta

The tool can perform a variety of editing tasks, including local and global edits, adding or removing backgrounds, color and geometry transformations, object detection, and segmentation. Importantly, Emu Edit aims to confine modifications to the regions relevant to the edit request, ensuring that unrelated pixels remain untouched.


“The primary objective shouldn’t just be about producing a believable image,” Meta’s researchers underscored in their recent announcement. “Instead, the model should focus on precisely altering only the pixels relevant to the edit request.”
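That "touch only the relevant pixels" property can be illustrated with a simple masked blend. The sketch below is not Meta's implementation (Emu Edit learns this behavior end to end); it just shows, with NumPy, what it means for an edit to be confined to a mask while every other pixel stays byte-identical to the original:

```python
import numpy as np

def apply_masked_edit(image, edited, mask):
    """Blend an edited image into the original only where the edit mask
    is set, leaving all unrelated pixels untouched. Illustrative only,
    not Meta's method."""
    mask = mask.astype(bool)[..., None]  # add channel axis so the mask broadcasts over RGB
    return np.where(mask, edited, image)

# Toy 4x4 RGB image: the "edit" turns pixels white, but only the
# top-left 2x2 region is covered by the mask.
original = np.zeros((4, 4, 3), dtype=np.uint8)
edited = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 1

result = apply_masked_edit(original, edited, mask)
```

Only the masked 2x2 block changes; the remaining pixels match the original exactly, which is the behavior the researchers describe training the model to exhibit.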

Emu Edit was trained on a dataset of 10 million synthesized samples, which Meta says enables high-quality results in both instruction faithfulness and image quality. For instance, a user could ask for the text “Aloha!” to be added to an image of a baseball cap, and Emu Edit would do so without otherwise altering the cap. You can dive into the full research paper here.

Emu Video: text-to-video generation simplified

In addition to image editing, Meta’s AI team has also been working on enhancing video generation. The Emu Video tool, based on diffusion models, provides a simple method for text-to-video generation. It responds to various inputs, including text only, image only, or both.

The video generation process involves first creating an image conditioned on a text prompt, then generating a video conditioned on both that image and the text prompt. If you’re interested in the new Emu Video tool, you can try the live demo now. You can also read the full research paper here.
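That two-stage, factorized approach can be sketched in a few lines. The function names below are purely illustrative stand-ins, not Meta's API or model code; the point is the control flow Meta describes, where video generation is split into a text-to-image step followed by an (image, text)-to-video step:

```python
# Hypothetical sketch of Emu Video's factorized generation, with
# stand-in functions in place of the actual diffusion models.

def generate_image(prompt: str) -> str:
    """Stage 1 stand-in: a text-to-image diffusion model."""
    return f"keyframe<{prompt}>"

def generate_video(keyframe: str, prompt: str, num_frames: int = 16) -> list:
    """Stage 2 stand-in: a video model conditioned on both the
    generated keyframe image and the original text prompt."""
    return [f"frame{i}<{keyframe}|{prompt}>" for i in range(num_frames)]

def text_to_video(prompt: str, num_frames: int = 16) -> list:
    keyframe = generate_image(prompt)                   # text -> image
    return generate_video(keyframe, prompt, num_frames) # (image, text) -> video

frames = text_to_video("a dog surfing at sunset", num_frames=4)
```

Factorizing the problem this way lets the image stage carry the visual detail while the video stage focuses on motion, which is how Meta explains the design in its announcement.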

Credit: Research by AI at Meta

Far-reaching impact on content creation

These advancements are poised to transform how users interact with images and videos on social media platforms. For instance, users could create their own animated stickers and GIFs, or edit their photographs, without relying on complex tools like Photoshop. It’s important to note, however, that these tools are still in development, and there has been no official word on when they’ll be available on Facebook and Instagram.

For Meta, the Emu-powered tools represent growing momentum in generative AI, complementing earlier initiatives like its Make-A-Video text-to-video research and the Emu image-generation model that underpins both new tools. As the company continues to push boundaries in assistive AI, it aims to provide intuitive features that expand artistic possibilities for general users.

Rollouts build on Meta’s social strategy

The Emu Video and Emu Edit rollouts also build on Meta’s strategy to drive engagement across its family of apps. With in-platform editing and creation, Meta locks users deeper into its social ecosystem.

Even as the new tools promise more creativity, questions remain around AI ethics and content oversight. Like other generative models, Emu will require safeguards to prevent potential misuse, and Meta has indicated that such safeguards remain a priority amid rapid generative AI progress.

For now, Emu Video and Emu Edit remain in development, with no set timeline for public release. But Meta’s active generative AI research signals more transformative social media experiences on the horizon. As AI synthesis matures, users could one day produce professional-grade content as intuitively as sending a text.



Author: Michael Nuñez
Source: VentureBeat
Reviewed By: Editorial Team

