Friday, March 1, 2024

Biases in AI: 2 Sides of the Spectrum

        Artificial Intelligence has come a long way. From the creation of the first AI chatbot, ELIZA, in 1966 to the current development of Google’s Gemini, the field has evolved rapidly. With the rise of this exciting new frontier, however, has come a slew of questions and concerns about its potential implications. As Blake Lemoine and Dr. Joe Vukov point out in their interview with Dr. Michael Burns, “Appleseeds to Apples: Catholicism and The Next ChatGPT,” navigating this unknown landscape can be exceedingly difficult. In the interview, they discuss Google’s AI LaMDA (Language Model for Dialogue Applications), which stirred debate over speculation that it is sentient. A major theme of their discussion is how easily such a powerful tool can be used to disseminate harmful or misleading information. Biases in AI are nothing new, Lemoine points out; we have seen numerous instances of bias in judicial settings, for example. Flawed and inaccurate training data fed an AI system used to inform parole decisions, leading to an inaccurate and harmful portrayal of Black Americans. Similarly, Lemoine shares a personal experience of an AI algorithm repeatedly flagging his purchases from a friend, a Black man, as fraudulent. Hence, a great deal of concern has poured in over biases against minorities in AI-generated content and responses.

But what happens when the pendulum swings in the opposite direction? Such has been the case with the controversy surrounding Google’s newest and most complex AI system yet, Gemini. Gemini is a next-generation AI model that is natively multimodal (meaning it can work with more than just text), which sets it apart from models such as Google’s LaMDA, which was trained on, and can produce, only textual material. In mere seconds, you can type a written description or request and Gemini will output an image tailored to it. Recently, however, Google and Gemini have come under fire for wildly inaccurate image generation. In the news article “Google to relaunch Gemini AI picture generator in a ‘few weeks’ following mounting criticism of inaccurate images,” Hayden Field discusses what exactly went wrong with Gemini. For one, users had difficulty getting Gemini to produce pictures of white people. Asked for a “German soldier from 1945,” Gemini produced a set of racially diverse Nazis. And when asked to generate pictures of Marie Curie, Gemini gave several images of Black and Latina women and an image of an Indian man wearing a turban. To say the least, Gemini was shown to be flawed and highly biased. Field argues that the Gemini controversy highlights how misleading and dangerous AI ethics efforts can be when they are not applied with the right understanding or expertise. Furthermore, these highly biased responses were not isolated to Gemini’s image generation. When a user asked whether Elon Musk’s tweets or Adolf Hitler had a more negative impact on society, Gemini responded that it was “difficult to say definitively” because both have had “negative impacts in different ways.”


        Some claim that Gemini is the result of a rushed rollout and a poorly tested product, while others claim that it is deliberately biased and “woke,” catering to one extreme of the political spectrum. Regardless of opinion, the Gemini debacle shows that Google did not invest in the right kind of AI ethics work, and it raises further questions about who and what AI is learning from. Ultimately, who gets to decide what the right answer is? What are our red lines when it comes to AI image generation? Sure, AI has come a long way, but it still has a long way to go.

References:
Field, H. (2024, February 27). Google to relaunch Gemini AI picture generator in a “few weeks” following mounting criticism of inaccurate images. CNBC. https://www.cnbc.com/2024/02/26/googles-gemini-ai-picture-generator-to-relaunch-in-a-few-weeks.html
