Google still hasn’t fixed Gemini’s biased image generator
Back in February, Google paused its AI-powered chatbot Gemini’s ability to generate images of people after users complained of historical inaccuracies. Told to depict “a Roman legion,” for example, Gemini would show an anachronistically diverse group of soldiers, while rendering “Zulu warriors” as uniformly Black.

Google CEO Sundar Pichai apologized, and Demis Hassabis, the co-founder of Google’s AI research division DeepMind, said that a fix should arrive “in very short order” — but we’re now well into May, and the promised fix has yet to appear.

Google touted plenty of other Gemini features at its annual I/O developer conference this week, from custom chatbots to a vacation itinerary planner and integrations with Google Calendar, Keep and YouTube Music. But a Google spokesperson confirmed that image generation of people remains switched off in Gemini apps on the web and mobile.

So what’s the holdup? Well, the problem is likely more complex than Hassabis suggested.

The data sets used to train image generators like Gemini’s generally contain more images of white people than people of other races and ethnicities, and the images of non-white people in those data sets reinforce negative stereotypes. Google, in an apparent effort to correct for these biases, implemented clumsy hardcoding under the hood to add diversity to queries where a person’s appearance wasn’t specified. And now it’s struggling to suss out some reasonable middle path that avoids repeating history.

Will Google get there? Perhaps. Perhaps not. In any event, the drawn-out affair serves as a reminder that no fix for misbehaving AI is easy — especially when bias is at the root of the misbehavior.
