Adobe introduces structure reference for Firefly AI and GenStudio for brands

Adobe was one of the first and largest companies to jump on the generative AI bandwagon, releasing its own commercially safe Firefly AI image generation and editing model nearly a year ago, in March 2023.

Now, as it celebrates the model’s first anniversary and several months after the release of Firefly 2, the creative software juggernaut is introducing a new “GenStudio” application designed to help enterprise users and brands create generative AI assets for campaigns and publish them online or to their digital distribution channels.

It is further introducing a new feature that it hopes will give customers even more control — and therefore, more of a reason — to generate AI images.

Called “Structure reference,” the new feature lets users of the stand-alone Adobe Firefly text-to-image generator app upload an image that guides subsequent image generations, not in terms of style or content, but in the arrangement of the objects and characters within the frame.


The features were formally unveiled at Adobe Summit, the company’s annual conference taking place this week (March 25-28, 2024) at the Venetian Convention and Expo Center in Las Vegas.

How Adobe’s new GenStudio works

GenStudio is designed to serve as a central hub for brands, offering a comprehensive suite of tools for planning marketing, advertising, and promotional campaigns; creating and managing content; activating digital experiences across channels; and measuring performance.

Adobe’s vision with this new app — part of its Creative Cloud subscription suite of applications — is that it can simplify and streamline content generation.

It allows brands and enterprise users to track and view campaigns, manage briefs, and see tasks assigned to them, and it is integrated with Adobe Workfront, Adobe’s project management software.

Users can generate different variations of marketing assets for various distribution channels, ensuring the content is audience-centric and on-brand. GenStudio also alerts users if content deviates from brand standards, offering suggestions for alignment.

It further serves as a content hub, connecting to Adobe Experience Manager Assets and letting users search for assets and create personalized variations with Firefly in Adobe Express.

In addition to GenStudio, Adobe announced the Adobe Experience Platform AI Assistant, a conversational interface within Adobe’s enterprise software that aims to enhance productivity and spur innovation among teams.

This assistant is capable of answering technical questions, automating tasks, and generating new audiences and journeys.

Adobe’s commitment to integrating generative AI capabilities extends to specific applications as well, such as Adobe Experience Manager’s variant generation and Adobe Content Analytics. These innovations enable brands to instantly create personalized variations of marketing assets and align AI-generated content with performance goals.

These updates underscore Adobe’s role as a global leader in digital experience platforms and a trusted partner for enterprises worldwide. With a customer base that spans industries and includes 11,000 global customers, Adobe is hoping to push generative AI tools to a wide swath of users.

How the new Adobe Firefly structure reference feature works

A video demo showing Adobe Firefly’s new structure reference feature. Credit: Adobe

Alexandru Costin, Adobe’s vice president of generative AI and Sensei, and Jonathan Pimento, Firefly group product manager, briefed VentureBeat about “structure reference” in a video call yesterday.

As shown in the demo above, you can click a button, upload an image of a rock formation or butte in the desert, type in “a castle found deep in the forest with moss growing on the stone walls,” and Firefly will generate a castle in the same position, resembling the shape, size, and placement of the initial rock formation.

This is, at its core, what structure reference enables: taking one image and generating new ones that may be completely different stylistically, but whose internal elements are arranged and sized similarly to the first image.

In VentureBeat’s demo, the Adobe execs chose a living room. They then typed the text prompt “cathedral,” and the AI model generated a new image that looked like a living room in a cathedral, complete with stained glass windows and opulent couches.

This is a powerful feature for users looking to exert further control over the image generator’s outputs — moving beyond text or a “style reference,” both of which can still be used alongside it. (The “style reference” feature makes images generated by Firefly follow the color scheme and artistic style of the uploaded images; “structure reference,” by contrast, does not adopt the style, only the arrangement and sizing of objects.)

In addition, users can upload rough, plain, hand-drawn sketches of the concept in their mind’s eye — the Adobe team we met with showed us a colleague’s sketch of a tiny home on a crescent moon — and Firefly can use these as a “structure reference” as well, taking a user from concept art to a more fully realized, colorful and shaded illustration in seconds.
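Adobe has not published the API details behind structure reference, so as a purely hypothetical sketch of how such a request might be shaped, the example below pairs a text prompt with a structural guide image. The endpoint URL, field names, and strength parameter are illustrative assumptions, not Adobe’s actual interface.

```python
import base64
import requests

# Hypothetical sketch only -- not Adobe's actual Firefly API. The idea:
# send a text prompt plus a reference image that constrains composition
# (arrangement and sizing of objects) without transferring its style.
with open("rock_formation.jpg", "rb") as f:
    structure_image = base64.b64encode(f.read()).decode("ascii")

payload = {
    "prompt": "a castle found deep in the forest with moss growing on the stone walls",
    "structure_reference": {
        "image": structure_image,  # guides layout, not color or style
        "strength": 0.8,           # assumed knob for how strictly to follow the layout
    },
}

resp = requests.post(
    "https://api.example.com/v1/images/generate",  # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["images"][0]["url"])  # assumed response shape
```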

Who is Firefly structure reference for?

These capabilities may be especially helpful for artists and designers working at creative agencies or independently for clients, whether on marketing and advertising campaigns, film storyboards, or other enterprise creative jobs where consistency, repeatability, and precision are important.

A big issue with AI image generation models overall — from Midjourney to OpenAI’s DALL-E and, previously, Firefly — is that, by virtue of the way they work, they intrinsically generate wildly different imagery with each use, even when some of the same keywords are reused in the text prompts. Each generation typically starts from freshly sampled random noise, so results drift unless that randomness is pinned down.
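To make that variability concrete, here is a minimal illustration (ours, not Adobe’s) of the underlying mechanism: diffusion-style generators begin each run from randomly sampled noise, so two runs with an identical prompt diverge unless the random seed is fixed.

```python
import numpy as np

# Each generation starts from a fresh random latent, so outputs differ
# from run to run even with an identical prompt.
latent_a = np.random.default_rng().standard_normal((64, 64, 4))
latent_b = np.random.default_rng().standard_normal((64, 64, 4))
print(np.allclose(latent_a, latent_b))  # False: different starting noise

# Pinning the seed reproduces the same starting noise -- and, with a
# deterministic sampler, the same image.
fixed_a = np.random.default_rng(42).standard_normal((64, 64, 4))
fixed_b = np.random.default_rng(42).standard_normal((64, 64, 4))
print(np.allclose(fixed_a, fixed_b))    # True: identical starting noise
```

Reference features like Adobe’s are a complementary control: rather than pinning the randomness itself, they constrain the composition the model converges toward.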

Other AI image generators have tried to give human users more control over AI-generated outputs through various methods. For example, Midjourney recently added a “character reference” feature that seeks to reproduce a character consistently across multiple images, following its earlier “style reference” parameter.

Adobe’s “structure reference” is a new twist on solving this problem, and based on our early look, seems incredibly promising.

Some Firefly features are also baked into various Adobe Creative Cloud applications including Adobe Photoshop, Illustrator, Express, and Substance 3D — but unfortunately, the new “structure reference” feature is restricted to the stand-alone Firefly app, for now.

Yet Adobe’s statements that Firefly is commercially safe — and even its policy of offering users indemnification, or some amount of legal assistance if they are challenged or sued for using its products — make the AI model uniquely attractive to enterprises. Adobe notes that, unlike other AI models trained by scraping the web, including Midjourney and DALL-E, it only used imagery to which it already had a license: more than 400 million images from Adobe Stock.

Nonetheless, contributors to Adobe Stock previously expressed their disappointment and consternation to VentureBeat that their imagery and photographs were being used to train Firefly, which they view as competing with them.

VentureBeat uses AI image generators including all those mentioned in this piece to create article header imagery and other assets.


Author: Carl Franzen
Source: VentureBeat
