Thursday, February 29, 2024

Bias in Artificial Intelligence and Why It's a Bigger Problem Than You Might Think

When you think about Artificial Intelligence, it's easy to assume that it is a fairly modern creation. What you may be shocked to find out is that AI is a relatively old invention. In fact, its birth most likely predates yours! AI was invented over seventy years ago, with scientists like Alan Turing and Arthur Samuel at the forefront of the innovation. Nowadays, Artificial Intelligence is practically everywhere. Virtual assistants like Google Assistant and Siri and generative systems like ChatGPT and Adobe Firefly are all considered AI. While AI can be a helpful tool and a powerful everyday resource, these systems aren't perfect. Artificial Intelligence still has a long way to go when it comes to things such as bias. AI bias refers to skewed results produced by an AI system that reflect real societal biases. This bias can be present in predictive policing tools, fraud detection software, and healthcare-related predictive algorithms. Since AI is so deeply involved in decision-making, it only makes sense to want it to be as unbiased as possible.

In an interview that we read this semester on AI bias and sentience, Joe Vukov and Blake Lemoine had a lengthy conversation about multiple AI-related topics, including AI bias. Lemoine is considered an expert in the field of AI bias and broke down this complex topic with a couple of examples. He talked about biases present in several different systems and settings and the vast impact they can have on our society. The example that stood out to me was about AI and the judicial system. Artificial Intelligence software can be used by judges to help decide whether or not an inmate should be granted parole. The software was designed to identify people who wouldn't commit any more crimes if they were released from prison. The only problem is that data on that doesn't exist: you can't directly observe crimes that were never committed. This particular AI was instead given training data on rates of recidivism, and that data reflects the fact that Black people are arrested more frequently than white people in America. This led the AI to the incorrect conclusion that Black people released from prison will commit more crimes than white people. This is a hard-hitting example that Lemoine gave to Vukov and Burns of how AI bias is affecting the trajectory of real people's lives. Biased or incorrect training data given to an AI algorithm can unfairly discriminate against an already discriminated-against group! The interview then goes on to discuss AI sentience and the Google AI LaMDA. At the end, ethical concerns about sentient AI and their possible rights are dissected and discussed.
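To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (not Lemoine's example or any real parole tool) of how training on a proxy label, rearrest, rather than the thing we actually care about, reoffending, bakes policing bias into a model. Every name and rate below is made up for illustration.

```python
# Hypothetical sketch: two groups reoffend at the SAME rate, but one group
# is arrested more often when it reoffends. A model trained on rearrest
# data "learns" that the more-policed group is riskier.
import random

random.seed(42)

TRUE_REOFFENSE_RATE = 0.30  # assumed identical for both groups
ARREST_RATE_IF_REOFFENDING = {"group_a": 0.9, "group_b": 0.5}  # biased policing

def make_training_data(n=10_000):
    """Each record is (group, rearrested). Rearrest, not reoffending,
    is what actually gets recorded -- that's the proxy label."""
    data = []
    for _ in range(n):
        group = random.choice(["group_a", "group_b"])
        reoffends = random.random() < TRUE_REOFFENSE_RATE
        rearrested = reoffends and random.random() < ARREST_RATE_IF_REOFFENDING[group]
        data.append((group, rearrested))
    return data

def learned_risk_by_group(data):
    """A bare-bones 'model' that just learns the average rearrest rate per group."""
    totals, hits = {}, {}
    for group, rearrested in data:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + rearrested
    return {g: hits[g] / totals[g] for g in totals}

risk = learned_risk_by_group(make_training_data())
print(risk)  # roughly {'group_a': 0.27, 'group_b': 0.15}
```

In this toy world both groups behave identically, yet the model scores group A as nearly twice as "risky" simply because its members are arrested more often. That is exactly the inversion Lemoine describes: the system learns arrest patterns and mistakes them for behavior.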

Lucía Vicente and Helena Matute of Deusto University published a paper on AI bias last year. Their hypothesis was that if subjects performed a diagnostic task with the help of a biased AI system, the subjects would reproduce that bias when later made to make decisions without the AI's help. Their results, across three experiments, provided evidence that humans do inherit biases exhibited by AI systems. All three experiments were similar, with the later ones tweaked slightly to build on the results of the ones before. To illustrate the task, I'll describe what happened during experiment one. Students were shown a skin sample made up of light yellow and dark pink squares and were asked to judge whether or not the sample showed signs of a made-up disease called Lindsay Syndrome. Participants were told that having more of one color than the other (more pink, or more yellow, depending on the rule they were given) indicated the presence of the syndrome. The non-AI groups were simply given an image of the sample and buttons labeled positive and negative. The AI groups were given the same image and buttons, plus an AI-generated suggestion for the diagnosis. The suggestion was not always correct; in the case of the 40/60 color-ratio stimuli, it was always incorrect. In the end, it was found that recommendations made by a biased AI system increased the number of errors participants made in this healthcare-related task. It was also found that when the participants in the AI groups went through the task without the AI, they demonstrated the same biases the AI system had displayed.
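Here's a short, hypothetical sketch of the kind of rule such a biased assistant follows: correct on clear-cut samples, systematically wrong on the 40/60 mixtures. The function names, the "majority pink = positive" rule, and the thresholds are my own assumptions for illustration, not the authors' actual code or stimuli.

```python
# Toy version of the biased "AI assistant": it reads the fraction of pink
# squares in a sample and suggests a diagnosis. (Hypothetical sketch, not
# the authors' code. Assumed rule: more pink than yellow = syndrome present.)

def true_diagnosis(pink_fraction: float) -> str:
    """Ground truth under the assumed rule: majority pink means positive."""
    return "positive" if pink_fraction > 0.5 else "negative"

def biased_ai_suggestion(pink_fraction: float) -> str:
    """Correct on easy samples, but always wrong on the 40/60 mixtures,
    mirroring the systematic error built into the study's assistant."""
    truth = true_diagnosis(pink_fraction)
    if pink_fraction in (0.4, 0.6):  # the 40/60 color-ratio stimuli
        return "negative" if truth == "positive" else "positive"
    return truth

for frac in (0.2, 0.4, 0.6, 0.8):
    print(f"pink={frac:.0%}  truth={true_diagnosis(frac)}  "
          f"AI says={biased_ai_suggestion(frac)}")
```

The study's striking finding sits downstream of a rule like this: after practicing alongside an assistant that always flips the call on 40/60 samples, participants kept making those same flipped calls even once the assistant was taken away.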

Both the interview and the study provide invaluable insight into how serious AI bias is. These biases can alter the entire trajectory of a person's life by denying them things such as parole or proper medical treatment. Within the medical field, biased diagnostic software is quite literally helping make life-or-death decisions about patients. It's abundantly clear that more research needs to be done on how AI bias affects human decision making, and on how we can work toward eliminating AI bias entirely.


References



What is the history of artificial intelligence (AI)? (n.d.). Tableau. https://www.tableau.com/data-insights/ai/history


Team, I. D. a. A., & Team, I. D. a. A. (2023, October 16). Shedding light on AI bias with real world examples. IBM Blog. https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/#

 

Appleseeds to Apples. (n.d.). Nexus Journal. https://www.hanknexusjournal.com/appleseedstoapples


Vicente, L. G., & Matute, H. (2023). Humans inherit artificial intelligence biases. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-42384-8






