On a par with Google: Facebook now has its own AI image generator

Meta, Facebook's parent company, has revealed its AI image generation engine, called Make-A-Scene.

Artificial intelligence image generators have become a digital trend that several technology companies are pursuing. Now, alongside the well-known DALL·E from OpenAI, Craiyon (formerly DALL·E mini), and Google's Imagen, Meta, Facebook's parent company, has joined this branch of technological art with its own version, which it calls Make-A-Scene.

As indicated in a post on its official blog, the company hopes to use this new tool on its path toward building immersive worlds in the metaverse, in addition to contributing to the creation of high-quality digital art.

By simply typing a word or phrase, the user starts a process in which the text passes through a transformer model and then through a neural network that analyzes it to build a contextual understanding of the relationships between the words. After capturing the essence of what the user describes, the artificial intelligence synthesizes an image using a set of generative adversarial networks (GANs).
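The pipeline described above can be sketched in miniature. The following is purely an illustrative toy, not Meta's actual model: the embedding table, the attention-style pooling, and the "generator" are all random stand-ins that only mimic the shape of a text → encoder → GAN-generator flow.

```python
import numpy as np

# Toy sketch of the described pipeline: text -> transformer-style
# encoding -> GAN-style generator. Every component here is a random
# stand-in for illustration, not Make-A-Scene itself.

rng = np.random.default_rng(0)

VOCAB = {"a": 0, "cat": 1, "on": 2, "the": 3, "beach": 4}
EMBED_DIM = 8

# Stand-in embedding table (a real model learns these weights).
embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))

def encode_prompt(prompt: str) -> np.ndarray:
    """Pool token embeddings into one context vector, mimicking the text encoder."""
    tokens = [VOCAB[w] for w in prompt.lower().split() if w in VOCAB]
    vecs = embeddings[tokens]
    # Attention-like step: weight each token by its similarity
    # to the mean of the sequence, then pool into a single vector.
    query = vecs.mean(axis=0)
    weights = np.exp(vecs @ query)
    weights /= weights.sum()
    return weights @ vecs

def generate_image(context: np.ndarray, size: int = 16) -> np.ndarray:
    """GAN-generator stand-in: latent noise plus context -> pixel grid."""
    latent = rng.normal(size=EMBED_DIM)
    mixer = rng.normal(size=(EMBED_DIM, size * size))
    pixels = np.tanh((latent + context) @ mixer)  # values in [-1, 1]
    return pixels.reshape(size, size)

image = generate_image(encode_prompt("a cat on the beach"))
print(image.shape)  # (16, 16)
```

The point of the sketch is the data flow: the prompt is reduced to a single context vector, and that vector conditions the noise the generator turns into pixels.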

A rapidly advancing technology

Thanks to ongoing efforts to train AI models on ever-larger sets of high-definition images paired with well-chosen textual descriptions, the most advanced generators can now create photorealistic images of almost anything they are asked to produce. However, the process varies depending on the AI in question.

Google's Imagen uses a diffusion model "that learns to convert a pattern of random dots into images, starting with low-resolution figures and gradually increasing the resolution." Google's Parti AI, on the other hand, "first converts a collection of images into a sequence of code entries, similar to the pieces of a puzzle. A given text is then translated into these code entries and a new image is created."
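The diffusion idea quoted above can be illustrated with a toy loop. Note the hedge: a real diffusion model learns its denoising step from data, whereas this sketch simply blends the noise toward a known target, purely to show the "random dots, gradually refined and upscaled" structure.

```python
import numpy as np

# Toy illustration of diffusion-style generation: start from random
# noise, repeatedly "denoise" it, and increase resolution in stages.
# A real model learns the denoising step; here it just blends toward
# a known target image, for intuition only.

rng = np.random.default_rng(42)

def upscale(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upscale."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def toy_diffusion(target: np.ndarray, low_res: int = 4, steps: int = 20) -> np.ndarray:
    img = rng.normal(size=(low_res, low_res))  # "pattern of random dots"
    size = low_res
    while size <= target.shape[0]:
        # Downsampled view of the target at the current resolution.
        stride = target.shape[0] // size
        small_target = target[::stride, ::stride]
        for _ in range(steps):                 # gradual "denoising"
            img = 0.9 * img + 0.1 * small_target
        if size == target.shape[0]:
            break
        img = upscale(img)                     # increase the resolution
        size *= 2
    return img

target = np.ones((16, 16))   # trivially simple "image": all ones
result = toy_diffusion(target)
print(result.shape)          # (16, 16)
```

After a few rounds of blending and upscaling, the noise converges toward the target, mirroring how diffusion models refine low-resolution noise into a final image.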

Meta’s contribution to AI imagers

As Mark Zuckerberg points out in the Make-A-Scene entry on the Meta blog, while the aforementioned systems can render almost anything, the user has no real control over specific aspects of the final image. "To harness the potential of AI to drive creative expression, people must be able to shape and control the content that a system generates," said the company's CEO.

So what Make-A-Scene does is incorporate user-created sketches into its system, producing a 2,048 × 2,048-pixel image. With this combination, users can describe what they want in the image and, in addition, control its overall composition.
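The division of labour between sketch and text can be shown with a hypothetical toy: the sketch is treated as a label map that fixes *where* regions go, while the text decides *what* fills each region. The palette and label mapping below are invented for illustration; Make-A-Scene uses a learned model, not a lookup table.

```python
import numpy as np

# Hypothetical sketch of the text-plus-drawing idea. The user's drawing
# is a region map (layout control); the text prompt assigns content to
# each region. The palette values are invented for this toy example.

PALETTE = {"sky": 0.9, "sea": 0.5, "sand": 0.2}  # toy "content" per label

def compose(sketch: np.ndarray, labels: dict[int, str]) -> np.ndarray:
    """Fill each sketch region with the value the text assigns to it."""
    out = np.zeros(sketch.shape, dtype=float)
    for region_id, label in labels.items():
        out[sketch == region_id] = PALETTE[label]
    return out

# User sketch: region 0 on top, region 1 in the middle, region 2 at the bottom.
sketch = np.zeros((6, 6), dtype=int)
sketch[2:4] = 1
sketch[4:] = 2

# A prompt like "sky over sea over sand" maps regions to labels.
image = compose(sketch, {0: "sky", 1: "sea", 2: "sand"})
print(image[0, 0], image[3, 0], image[5, 0])  # 0.9 0.5 0.2
```

Changing the prompt (say, swapping "sea" for "sand") would change what fills a region without touching the layout, which is exactly the kind of control the article describes.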

"Make-A-Scene demonstrates how people can use both text and simple drawings to convey their vision with greater specificity, using a variety of elements, shapes, arrangements, depth, compositions, and structures," says Mark Zuckerberg.

The Make-A-Scene tests were encouraging: human evaluators preferred this text-and-sketch system over a text-only one, judging its output better aligned with the text description 66% of the time and with the original sketch 99.54% of the time. For now, however, the company has not said when the tool will be made available to the public.