Google's Gemini AI Image Generator: Addressing Bias and Accuracy Challenges


Google rebranded its artificial intelligence offering as Gemini and launched its image generation service earlier this month. The service was briefly suspended after producing historically inaccurate images, which Google called "embarrassing and wrong."

Gemini, an AI-powered image generator, faced criticism when users discovered it struggled to generate historically accurate images. Images posted on social media depicted racially diverse versions of Nazi-era German soldiers and the U.S. Founding Fathers. Users who requested an "image of a pope" received images of a female pope and a Black pope, and a request for a "historically accurate depiction of a medieval British king" returned images of a female ruler and racially diverse men.

Google leaders, including Demis Hassabis, CEO and co-founder of Google DeepMind, addressed the issues during a panel at the Mobile World Congress conference in Barcelona. They acknowledged that Gemini's emphasis on diversity and inclusivity was well-intended but applied too bluntly. As a result, Gemini's image generation was taken offline for fixes and is expected to return in the next couple of weeks.

The image generation feature had previously been added to Google's Bard chatbot before the rebranding to Gemini. Jack Krawczyk, the product lead for Gemini, explained in a now-deleted post on X that the team had overcorrected in tuning Gemini's model: in attempting to combat AI bias, it produced depictions of people of color even where they were historically inaccurate.

The Gemini incident capped a challenging week for the AI industry: OpenAI's ChatGPT generated nonsensical answers, and the far-right social media platform Gab launched Holocaust-denying AI chatbots modeled after Adolf Hitler and Osama bin Laden.