Highlights:

  • Microsoft Corporation stated that its Bing search engine and Edge browser would be able to generate AI-powered images using the Bing Image Creator, which is driven by the DALL-E model.
  • Adobe Inc., a provider of creative software, has announced that it will expand its own capabilities with an AI art-generating product called Firefly.

With the debut of tools such as OpenAI LP’s DALL-E and Midjourney, which take text prompts and transform them into beautiful artwork, artificial intelligence art generators have become increasingly popular, and two large businesses have now joined the party.

Microsoft Corporation stated that its Bing search engine and Edge browser can generate AI-powered images using the Bing Image Creator, which is driven by the DALL-E model. Adobe Inc., a creative software provider, has announced that it will expand its capabilities with an AI art-generating product called Firefly.

Those with access to the Bing chat preview can use the new AI image generator immediately in “creative” mode. Users can have the interface make artwork for them by simply typing a description of the image they want, along with additional details such as a location, object, or action, and the underlying AI will generate an image based on what it was trained on.

Users can quickly and simply input anything their imaginations conjure up, such as “make a picture” or “draw an image,” and the AI will do the rest. This includes iterating on an initial image by refining individual elements, such as the background or other portions of the picture. Edge users can access the same functionality by clicking the Bing Image Creator icon in the sidebar or by starting a Bing chat in the browser.
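Bing Image Creator itself has no public API, but the DALL-E family that drives it is exposed through OpenAI’s image-generation endpoint, so the same prompt-to-image loop can be sketched directly. The snippet below is a minimal illustration, not Bing’s actual integration; the prompt stands in for the kind of description a user would type into Bing chat.

```python
import os
import requests

# OpenAI's public image-generation endpoint, which serves the DALL-E
# model family that also drives Bing Image Creator.
API_URL = "https://api.openai.com/v1/images/generations"

def generate_image(prompt: str, size: str = "1024x1024") -> str:
    """Send a text prompt and return the URL of one generated image."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"prompt": prompt, "n": 1, "size": size},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["data"][0]["url"]

# The same kind of request a Bing user would phrase as "draw an image of...".
print(generate_image("an astronaut surfing a wave at sunset, digital art"))
```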

Microsoft emphasized that the AI image generator has controls to prevent the creation of unsafe or harmful images and that it will block and alert users who attempt to use prompts to make such images. Pictures made by the AI also carry a watermark icon in the lower-left corner to show that the Bing Image Creator produced them, though this could likely be cropped off.

Yusuf Mehdi, Corporate Vice President and Consumer Chief Marketing Officer at Microsoft, said, “With these updates and more coming, our goal is to deliver more immersive experiences in Bing and Edge that make finding answers and exploring the web more interesting, useful and fun.”

This functionality is being rolled out immediately to those with access to the trial versions of the new Bing and Edge AI features and will be available to English-speaking users. Those who don’t yet have access to the new Bing and Edge capabilities can sign up for the waitlist, while those who already have access can test it right away.

Adobe Announces Generative AI Tools with Firefly

Adobe refers to Firefly as “a family of generative AI models for creative expression” that it is adding to its applications to let users harness the power of AI art generation. These applications will initially include two tools: one that allows users to generate images from text prompts and another that generates text effects.

The new AI art-generating tools will be integrated directly into Adobe’s existing portfolio of cloud products, including Creative Cloud, Document Cloud, Experience Cloud, and Adobe Express, enabling users to access these capabilities.

The current beta’s initial feature is a text-to-image tool like DALL-E or Midjourney, which lets users write a text prompt and generate a series of pictures based on the written description. The second, “text effects,” works like WordArt: users type text and then quickly restyle it with a prompt such as “covered in snow” or “looks like it’s made of cake,” and the tool applies the described style to the text on screen.

Adobe has also planned several future capabilities that will let customers use textual descriptions to change or add to what they’ve previously created in Adobe’s products. This is the promise of generative AI, which will expand the capabilities of Adobe’s already potent AI features built on Adobe Sensei.

For instance, users of the Photoshop and Illustrator graphical editors could take an image they are working on, select portions of their digital artwork, and have the AI adjust that part of the image based on context-aware prompts. A user could take a picture of a house on a beach, select the house, type “house built of seashells,” and have the AI generate variations of the house. Alternatively, they could select the water and have the AI add ships, or select the sky and conjure an alien battle.
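Adobe has not published an API for this select-and-edit workflow, but the underlying idea, masking a region and describing its replacement, can be illustrated with OpenAI’s public image-edit endpoint, where the “selection” is expressed as a mask whose transparent pixels mark the region to regenerate. The file names and prompt below are hypothetical.

```python
import os
import requests

# OpenAI's image-edit endpoint, used here as a stand-in for Firefly's
# select-and-edit workflow. Transparent pixels in the mask mark the
# region the model should regenerate.
EDIT_URL = "https://api.openai.com/v1/images/edits"

def edit_region(image_path: str, mask_path: str, prompt: str) -> str:
    """Regenerate the masked region of an image from a text prompt."""
    with open(image_path, "rb") as image, open(mask_path, "rb") as mask:
        response = requests.post(
            EDIT_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            files={"image": image, "mask": mask},
            data={"prompt": prompt, "n": 1, "size": "1024x1024"},
            timeout=120,
        )
    response.raise_for_status()
    return response.json()["data"][0]["url"]

# Hypothetical inputs: a beach photo and a mask that cuts out the house.
print(edit_region("beach_house.png", "house_mask.png",
                  "a house built of seashells on a sandy beach"))
```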

It may also be used to generate new images based on the color schemes and styles of existing material, making it easier to create similar graphics. The same technique could also be applied to video editing, where the AI can alter a scene’s atmosphere and weather.

David Wadhwani, President of the digital media business at Adobe, said, “Generative AI is the next evolution of AI-driven creativity and productivity, transforming the conversation between creator and computer into something more natural, intuitive, and powerful. With Firefly, Adobe will bring generative AI-powered ‘creative ingredients’ directly into customers’ workflows.”

To assuage the anxieties of creators who fear Adobe may be using their protected works to train the AI, Adobe stated that the initial model is trained on Adobe Stock images, openly licensed content, and photographs whose copyright has expired. The goal is to ensure that all generated images are safe for commercial use.

Adobe also stated that it is establishing a compensation model, akin to Adobe Stock’s, to pay creators whose artwork is used to train generative AI models such as those used in Firefly. In addition, it is pioneering a worldwide standard under which creators can attach a “Do Not Train” metadata tag to their artwork to instruct AI models not to use it.
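Adobe has not yet published the tag’s exact format, which is expected to build on its Content Authenticity Initiative metadata work. As an illustration only, the sketch below embeds a hypothetical do-not-train flag as a PNG text chunk with Pillow; the “DoNotTrain” key name is an assumption, not Adobe’s specification.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_do_not_train(src: str, dst: str) -> None:
    """Copy an image, embedding a hypothetical do-not-train marker."""
    metadata = PngInfo()
    metadata.add_text("DoNotTrain", "true")  # hypothetical key and value
    with Image.open(src) as img:
        img.save(dst, pnginfo=metadata)

def is_do_not_train(path: str) -> bool:
    """Check whether a PNG carries the hypothetical marker."""
    with Image.open(path) as img:
        return img.text.get("DoNotTrain") == "true"

tag_do_not_train("artwork.png", "artwork_tagged.png")
print(is_do_not_train("artwork_tagged.png"))  # True
```

A crawler honoring such a standard would check for the tag before adding an image to a training set and skip any file where the check returns true.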