How Google’s attempt to avoid bias in AI backfired

Google’s artificial intelligence (AI) tool Gemini, which can generate images and text from prompts, has been criticized for producing absurd and historically inaccurate outputs in an apparent effort to be politically correct. Google has paused Gemini’s image generation feature and said the tool was “missing the mark”. But some AI experts say the problem is not easy to fix, because it involves dealing with the complexities and nuances of human history and culture.

Gemini, which Google launched in December 2023, is the company’s rival to OpenAI’s popular chatbot ChatGPT. It uses large language and image-generation models to answer questions in text form and to create pictures in response to text prompts.

However, Gemini has been accused of being “woke”, or overly sensitive to social justice issues, and of producing outputs that are inaccurate and misleading. For example, it generated an image of the US Founding Fathers that included a black man, and an image of German soldiers from World War Two that featured a black man and an Asian woman. Gemini also gave vague, evasive answers to questions about controversial topics, such as whether Elon Musk posting memes on X was worse than Hitler killing millions of people, or whether it would be acceptable to misgender Caitlyn Jenner if doing so would avert a nuclear apocalypse.

Google apologized for the errors and said it was working to improve Gemini’s quality and accuracy. The company added that image generation would resume only once it was confident the feature met its standards.

The challenge of avoiding bias in AI

The reason behind Gemini’s outputs lies in the data that it was trained on, and the instructions that it was given. Gemini was trained on a large amount of data from the internet, which contains all kinds of biases and prejudices. For example, images of doctors are more likely to feature men, and images of cleaners are more likely to feature women. Historical narratives are also often skewed and incomplete, omitting the roles and perspectives of marginalized groups.
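To make the idea of dataset skew concrete, here is a minimal, hypothetical Python sketch that counts gendered words in a handful of invented image captions, grouped by occupation keyword. The captions, keyword lists and the gender_counts helper are illustrative assumptions only; they are not drawn from Gemini’s actual training data or pipeline.

from collections import Counter

# Toy "training captions" standing in for a scraped image dataset.
# These examples are invented for illustration, not taken from any real corpus.
captions = [
    "a male doctor examining a patient",
    "a man in a doctor's white coat",
    "a female nurse taking notes",
    "a woman cleaning an office at night",
    "a male surgeon in an operating room",
    "a woman mopping a hospital floor",
]

# Map gendered words to a coarse label (deliberately simplistic).
GENDER_TERMS = {
    "male": "male", "man": "male",
    "female": "female", "woman": "female",
}

def gender_counts(caption_list, occupation_keyword):
    """Count gendered words in captions that mention a given occupation."""
    counts = Counter()
    for caption in caption_list:
        if occupation_keyword in caption:
            for word in caption.split():
                if word in GENDER_TERMS:
                    counts[GENDER_TERMS[word]] += 1
    return counts

print("doctor:", gender_counts(captions, "doctor"))   # skews male in this toy set
print("cleaning:", gender_counts(captions, "clean"))  # skews female in this toy set

A model trained on a corpus with this kind of skew will tend to reproduce it: asked for “a doctor”, it is statistically more likely to depict a man.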

Google tried to counteract these biases by giving Gemini guidelines not to rely on stereotypes or to reproduce historical inaccuracies. However, this led Gemini to over-correct, producing outputs that were unrealistic and nonsensical. Gemini also tried to avoid offending anyone by giving neutral, vague answers to questions that required moral or ethical judgments.
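One plausible way such guidelines can be applied is as a prompt-rewriting layer that sits in front of the image model. Google has not published how Gemini actually implements its guidelines, so the sketch below is purely hypothetical: it shows how appending an unconditional “diversify” instruction to every prompt over-corrects on historically specific requests, and how a simple context check avoids that. The DIVERSITY_HINT, HISTORICAL_KEYWORDS and rewrite_prompt names are invented for illustration.

# Hypothetical prompt-rewriting layer in front of an image model.
# This is NOT Gemini's actual implementation; it only illustrates how an
# unconditional diversity instruction can over-correct.

DIVERSITY_HINT = "depicting people of a wide range of genders and ethnicities"

HISTORICAL_KEYWORDS = ("founding fathers", "1943", "medieval", "roman legion")

def rewrite_prompt(user_prompt: str, check_context: bool) -> str:
    """Append a diversity instruction to an image prompt.

    With check_context=False (the over-correcting version), the hint is added
    even to prompts that describe a specific historical setting.
    """
    is_historical = any(k in user_prompt.lower() for k in HISTORICAL_KEYWORDS)
    if check_context and is_historical:
        return user_prompt  # leave historically specific prompts unchanged
    return f"{user_prompt}, {DIVERSITY_HINT}"

prompt = "A portrait of the US Founding Fathers"
print(rewrite_prompt(prompt, check_context=False))  # anachronistic rewrite
print(rewrite_prompt(prompt, check_context=True))   # context-aware rewrite

The hard part is that real prompts rarely come with a neat keyword list: deciding when historical fidelity should take precedence over diversity requires exactly the kind of contextual judgment discussed below.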

AI experts say that there is no easy fix for this problem, because there is no single answer to what the outputs should be. Different people and cultures may have different opinions and preferences, and there may be trade-offs between accuracy and sensitivity. AI tools also need to understand the context and the nuances of human history and culture, which are not always clear and consistent.

The implications for the future of AI

The controversy over Gemini’s outputs raises important questions about the future of AI and its impact on society. AI tools can provide useful and creative solutions to many problems and tasks, but they also risk causing harm and confusion if they are not reliable and trustworthy.

AI tools also reflect the values and biases of their creators and users, and may influence the perceptions and behaviors of others. Therefore, it is crucial to ensure that AI tools are ethical and responsible, and that they respect the diversity and dignity of all people.

Google said it is committed to developing AI tools that are beneficial and respectful, and that it is working to improve Gemini’s quality and accuracy. It also said it welcomes feedback and suggestions from users and the wider community, and is open to collaboration and dialogue with other stakeholders.

Google’s attempt to avoid bias in AI backfired, exposing the challenges and complexities of building AI tools that are both fair and accurate.
