Google this week unveiled a new challenger to OpenAI's vaunted DALL-E 2 text-to-image generator, and took shots at its rival's efforts in the process. Both models convert text prompts into pictures, but Google's researchers claim their system provides "unprecedented photorealism and deep language understanding."

Human raters preferred Imagen over DALL-E 2 for both sample quality and image-text alignment. Credit: Saharia et al.

The cringingly-named Imagen system uses a large pre-trained language model as a text encoder. A cascade of diffusion models then turns the user's words into pictures. In tests, the Google team said Imagen "significantly outperformed" DALL-E 2. Imagen's developers have even invented a new method of measuring the supremacy of their creation. Dubbed DrawBench, the benchmark compares human judgments of the outputs of different text-to-image generators. Unsurprisingly, Google's metric gave strong scores to Google's system. "With DrawBench, extensive human evaluation shows that Imagen outperforms other recent methods by a significant margin," the researchers said in their paper.

DALL-E 2 can struggle to correctly assign colors to objects, especially in prompts with more than one object. Credit: Saharia et al.

The images and metrics certainly look impressive, but Google hasn't offered an opportunity to scrutinize the results. You can try some interactive demos at the Imagen website, but these only let you pick from a small selection of phrases to form a constrained sentence. Until the model and code get a public release, cynics will suspect that Google is cherry-picking the results.

Imagen was significantly better than DALL-E 2 on prompts with quoted text. Credit: Saharia et al.

Google's explanation for keeping the model private echoes one given by OpenAI: the system is too dangerous to release. The researchers warn that generative methods can spread misinformation, fuel harassment, and exacerbate marginalization. "Our preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes," said the researchers.

Imagen significantly outperformed DALL-E 2 in the positional, text, and descriptions categories. Credit: Saharia et al.

The team concludes that Imagen "is not suitable for public use at this time," but does offer hope of a future release. I await their update with caution. As someone who creates images for articles every day, the prospect of AI labs competing to offer better results is attractive. On the other hand, I don't want our robot overlords to replace artists with algorithms.
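
For readers curious how the pipeline described above fits together (a frozen language-model text encoder feeding a chain of diffusion models that generate at progressively higher resolutions), here is a minimal conceptual sketch. Every function and shape below is an illustrative stand-in, not Google's code: the real system uses a trained text encoder and trained diffusion networks, whereas these placeholders only preserve the data flow from prompt to embedding to 64x64 base image to upsampled output.

```python
import numpy as np

# Conceptual sketch of a cascaded text-to-image pipeline. The stage
# functions are placeholders that return random arrays of the right
# shape; they stand in for a frozen text encoder and three diffusion
# models (one base generator and two super-resolution stages).

RNG = np.random.default_rng(0)


def encode_text(prompt):
    """Stand-in for a frozen pre-trained language-model text encoder.
    Returns one embedding vector per token (here: random 768-dim floats)."""
    tokens = prompt.split()
    return RNG.normal(size=(len(tokens), 768))


def diffusion_sample(text_emb, size, low_res=None):
    """Stand-in for one diffusion model conditioned on the text embedding
    (and, for super-resolution stages, on the lower-resolution image).
    A real model would iteratively denoise; this returns noise of the
    target shape."""
    return RNG.uniform(0.0, 1.0, size=(size, size, 3))


def generate(prompt):
    """Cascade: a small base sample, then two upsampling stages,
    each conditioned on the same text embedding."""
    emb = encode_text(prompt)
    img_base = diffusion_sample(emb, 64)                      # base model
    img_mid = diffusion_sample(emb, 256, low_res=img_base)    # super-res 1
    img_full = diffusion_sample(emb, 1024, low_res=img_mid)   # super-res 2
    return img_full


if __name__ == "__main__":
    image = generate("a corgi playing a trumpet on a beach")
    print(image.shape)  # (1024, 1024, 3)
```

In the real system each sampling stage would run many denoising steps with a neural network conditioned on the text embeddings; the sketch only illustrates why the text encoder is computed once and reused by every stage of the cascade.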