AI & RoboticsNews

Visual Electric launches to liberate AI art generation from chat interfaces

If you’ve tried out at least a few of the text-to-image AI art generation services that have launched in the past few years, such as Midjourney or OpenAI’s various versions of DALL-E, you have likely noticed some commonalities. Chief among them: all pretty much resemble a chat interface. There’s typically a space for the user to enter their text prompts and then the application responds with an image embedded in a message.

Screenshot of the author prompting DALL-E 3 using ChatGPT. Credit: VentureBeat
Screenshot of the author prompting Midjourney through the Discord interface. Credit: VentureBeat

While this interface works well for many users and application developers, some believe it is limiting and ultimately not what established artists and designers want when using AI on the job. Now San Francisco-based Visual Electric is here to offer a different approach — one the new startup, which emerges from stealth today after an undisclosed seed round last year from Sequoia, BoxGroup, and Designer Fund, believes is better adapted to visual creativity than texting back and forth with an AI model.

“There’s just so many workflow-specific optimizations that you need to make if you’re a graphic designer or a concept artist,” said Colin Dunn, founder and CEO of Visual Electric, in an exclusive interview with VentureBeat. “There’s a long tail of things that will make their life way easier and will make for a much better product.”

Dunn previously led product design and brand at the mobile website-building company Universe, and before that, served as head of design at Playspace, a Google acquisition.


Visual Electric is trying to be that “much better product” for AI art, visual design, and creativity for enterprise users, such as independent designers, in-house designers at major brands, and even “pro-sumers.”

The company is deliberately not launching its own underlying AI image generation machine learning (ML) model. Instead, it is built atop the open-source Stable Diffusion XL model, whose developer, Stability AI, is currently facing a copyright lawsuit from artists, alongside Midjourney and other AI art generator companies.

This is because Dunn and his two co-founders — Visual Electric chief product officer Adam Menges, formerly co-founder of Microsoft acquisition Lobe; and chief technology officer Zach Stiggelbout, also formerly of Lobe — believe that image generation AI models are in the process of being commoditized, and that it is the front-end user interface that will most differentiate companies and separate the successes from the failures.

“We just want to build the best product experience,” Dunn said. “We’re really model agnostic and we’re happy to swap out whatever model is going to give users the best results. Our product can easily accommodate multiple models or the next model that’s going to come out.”
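That model-agnostic stance maps naturally onto a thin backend abstraction in code. The sketch below is hypothetical — Visual Electric has not published its architecture — but it shows the general pattern for keeping image models swappable behind a stable interface:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class GenerationRequest:
    prompt: str
    width: int = 1024
    height: int = 1024
    num_images: int = 4  # Visual Electric generates images in sets of four


class ImageModel(Protocol):
    """Any backend (SDXL today, a successor tomorrow) satisfies this."""

    def generate(self, request: GenerationRequest) -> list[bytes]: ...


class StubModel:
    """Placeholder backend used here only to mark the swap point."""

    def generate(self, request: GenerationRequest) -> list[bytes]:
        return [b"png-bytes"] * request.num_images


def render(model: ImageModel, prompt: str) -> list[bytes]:
    # The UI layer never names a specific model, so swapping backends
    # ("the next model that's going to come out") is a one-line change.
    return model.generate(GenerationRequest(prompt=prompt))
```

With this shape, supporting multiple models simultaneously is just a matter of registering several `ImageModel` implementations.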

How Visual Electric differs from Midjourney, DALL-E 3 and other AI art apps

Visual Electric’s biggest departure from the image generators that have come before? It lets users generate imagery and drag it around an infinite virtual “canvas,” so they can compare images side by side rather than in the top-to-bottom “linear” layout of chat-based AI art apps, which forces users to scroll back up to see their prior generations. Users can keep generating new sets of four images at a time and arrange them anywhere on the canvas.

Screenshot showing Visual Electric’s “infinite canvas.” Credit: Visual Electric/VentureBeat

“Creativity is a nonlinear process,” Dunn said. “You want to explore; you want to go down different paths and then go back up to an idea you were looking at previously and take that in a new direction. Chat forces you into this very linear flow where it’s sort of like you have a starting point and an ending point. And that’s just not really how creativity works.”

There remains, of course, a space to enter text prompts, but this box has been moved to the top of the screen rather than the bottom, where many chat interfaces put it.

To help overcome the initial hurdle that some users face — not knowing exactly what to type in to prompt the AI to get it to produce the image they have in their mind’s eye — Visual Electric offers a drop-down field of autocomplete suggestions, similar to what a user finds when typing in a search on Google. These are all recommendations based on what Visual Electric has seen from early users and what results in the highest quality images. But a user is also free to deviate from these entirely and type in a custom prompt as well.

Screenshot showing Visual Electric’s prompt autocomplete suggestions. Credit: VentureBeat/Visual Electric
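The autocomplete behavior described above can be approximated with simple prefix matching over popular prompts, ranked by how well they have performed for earlier users. This is an illustrative sketch — the company’s actual suggestion and ranking logic is not public, and the scores here are invented:

```python
def suggest(partial: str, history: dict[str, int], limit: int = 5) -> list[str]:
    """Return up to `limit` stored prompts that start with `partial`,
    highest-scoring first. `history` maps prompt -> quality/usage score."""
    partial = partial.lower().strip()
    matches = [p for p in history if p.lower().startswith(partial)]
    return sorted(matches, key=lambda p: -history[p])[:limit]


# Hypothetical prompt history with quality scores.
history = {
    "a watercolor painting of a fox": 42,
    "a watercolor painting of mountains": 97,
    "a stained glass window of a fox": 12,
}
```

A user typing “a watercolor” would see the mountains prompt first, since it has produced the best-rated images — while remaining free to ignore the dropdown and type a custom prompt.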

In addition, Visual Electric’s web-based AI art generator offers a range of helpful tools for modifying the prompt and the style of the resulting images, including preset styles that mimic common techniques from the pre-AI digital and printed art worlds: “marker,” “classic animation,” “3D render,” “airbrush,” “risograph,” “stained glass,” and many others, with new styles being added regularly.

Instead of having to specify an image aspect ratio — 16×9 or 5×4 being two common examples — within the prompt text, users can select it from buttons on the dropdown or a convenient right-rail sidebar, putting Visual Electric in more direct competition with Adobe’s Firefly 2 AI art interface, which offers similar functionality.
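Behind such a button, an aspect-ratio choice typically gets translated into a concrete pixel resolution the model accepts. The helper below is an assumed implementation, not taken from Visual Electric: it targets roughly one megapixel (a comfortable operating point for Stable Diffusion XL) and rounds each dimension down to a multiple of 8, as diffusion models generally require:

```python
import math


def dims_for_ratio(ratio_w: int, ratio_h: int,
                   target_pixels: int = 1024 * 1024) -> tuple[int, int]:
    """Pick a (width, height) near `target_pixels` with the requested
    aspect ratio, each dimension rounded down to a multiple of 8."""
    scale = math.sqrt(target_pixels / (ratio_w * ratio_h))
    width = int(ratio_w * scale) // 8 * 8
    height = int(ratio_h * scale) // 8 * 8
    return width, height
```

For a 16×9 selection this yields a wide landscape frame close to the target pixel budget; a 1×1 selection yields the familiar 1024×1024 square.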

This sidebar also lets the user specify dominant colors, as well as elements to exclude from the resulting AI-generated image, both entered as text.

Further, the user can click a button to “remix” or regenerate images based on the initial prompt. They can also “touch up” selected portions of an image: areas highlighted with a resizable digital brush are regenerated by the AI in the same style, while the rest of the image is preserved. If you didn’t like your AI-generated subject’s hair, for example, you could “touch up” that area and tell the underlying Stable Diffusion XL model to redo only that part of the image.

Screenshot showing the “touch up” feature and digital brush on Visual Electric. Credit: VentureBeat/Visual Electric
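This “touch up” brush amounts to masked regeneration, better known as inpainting: only pixels under the brush mask are replaced by newly generated content, and everything outside it is kept. The compositing step at the heart of that flow can be sketched as follows — a simplified, flattened-grayscale illustration; in a real pipeline the mask is handed to the diffusion model itself rather than blended after the fact:

```python
def composite(original: list[float], regenerated: list[float],
              mask: list[float]) -> list[float]:
    """Blend two equally sized (flattened) images: where mask is 1.0 the
    regenerated pixel wins, where it is 0.0 the original is preserved.
    Soft brush edges (mask values between 0 and 1) blend smoothly."""
    return [m * r + (1.0 - m) * o
            for o, r, m in zip(original, regenerated, mask)]
```

A mask value of 0.5 at the brush’s feathered edge produces an even mix of old and new pixels, which is what keeps the retouched region from showing a hard seam.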

There’s also a built-in upscaler to enhance the resolution and detail of images.

“These are the tools that represent what we see as the AI-native workflow, and they’re in the order that you use them,” Dunn said.

Pricing, community and early success stories

While Visual Electric is launching publicly today, the company has been quietly alpha testing with a few dozen designers, who Dunn says have already provided valuable feedback for improving the product, along with promising results from real-world enterprise workplace use.

Dunn mentioned one client in particular — withholding the name for the sake of confidentiality — who had a small team of designers working to create menus and other visual collateral for more than 600 universities.

In the past, this team spent much of its time sorting through stock imagery, trying to find images that matched one another while fairly representing the items on a school’s dining hall menu, then manually editing that imagery to make it more accurate.

Now, with Visual Electric, they can generate whole new images from scratch that satisfy the menu requirements and edit portions of them without going into Adobe Photoshop or other rival tools.

“They’re now able to take what was a non-creative task and make it into something that is very creative, much more fulfilling, and they can do it in a tenth of the time,” Dunn claimed.

One more important differentiator Visual Electric offers is an “Inspiration” feed made up of AI-generated images created on the platform by other users. This feed, a grid of different-sized images that evokes Pinterest, lets the user hover over images to see their prompts. Users can also grab and “remix” any image on the public feed, importing it to their private canvas.

Screenshot of Visual Electric’s “Inspiration” feed. Credit: VentureBeat/Visual Electric

“This was an early decision that we made, which is we think that with generative AI there’s an opportunity to bring the network into the tool,” Dunn explained. “Right now, you have inspiration sites like Pinterest and designer-specific sites like Dribbble, and then you have the tools like Photoshop, Creative Suite and Figma. It’s always felt odd to me that these things are not unified in some way, because they’re so related to each other.”

Users of Visual Electric can choose whether to engage with this feed and contribute to it. For enterprises concerned about the privacy of their imagery and works-in-progress, Dunn assured VentureBeat that the company takes privacy and security seriously, though only the “Pro” plan offers privately stored images — everything else is public by default.

Visual Electric launches publicly in the U.S. today with three pricing tiers. A free plan offers 40 generations per day at slower speeds, with a license limited to personal use (so no selling the images or using them as marketing material). A standard plan, at $20 per month (or $16 per month paid annually up-front), adds community sharing, unlimited generations at 2x faster speeds, and a royalty-free commercial usage license. A pro plan, at $60 per month (or $48 per month paid annually up-front), includes everything in the other two plans plus even higher-resolution images and, critically, private generations.



Author: Carl Franzen
Source: VentureBeat
Reviewed By: Editorial Team
