Friday, March 1, 2024

Understanding and Eliminating Bias in the Realm of Artificial Intelligence

    When we hear the words "artificial intelligence," most of us probably think of ChatGPT or robots. Looking closer, however, we find that artificial intelligence is all around us: Siri, Google, Alexa. Everyday household tools that most of us don't think twice about rely on AI. Understanding what AI really means is a key step toward realizing how ingrained it is in our everyday lives. Artificial intelligence, or AI, is a branch of computer science in which machines are built and programmed to make decisions that attempt to replicate human decision making and intelligence. A common approach used in AI is machine learning, a technique in which a system analyzes large collections of data in order to make predictions and, over time, improves based on its previous experience.
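
    To make the idea of machine learning concrete, here is a minimal, purely illustrative sketch. It assumes Python with the scikit-learn library and one of its built-in toy datasets, none of which come from the sources discussed here: a model is fit to example data and then used to make predictions on inputs it has never seen.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a small example dataset and hold some of it back for testing.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" here means adjusting the model's parameters to fit the training data.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# The fitted model can now make predictions on data it has never seen,
# and in general it improves as it is given more (and better) training data.
print("Accuracy on unseen data:", model.score(X_test, y_test))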

    In an interview, Joe Vukov and Michael Burns sit down with Blake Lemoine, an artificial intelligence expert and former Google researcher, to discuss AI and AI bias. Lemoine explains that AI often relies on machine learning, and while there might not be any obvious bias in the data set the AI uses to make predictions and decisions, there is often some level of bias hidden in the data or in the patterns the AI picks out as it learns. Lemoine also tells Vukov and Burns how difficult it can be to determine what is causing the bias in the first place, let alone how to eradicate it, especially since our society itself is full of bias. He follows this up by pointing out that many companies that use AI do not want to admit their AI is, or can be, biased: admitting it opens them up to serious liability, and pinpointing the sources of the bias can be costly and time-consuming. It is therefore much easier for companies to ignore the possibility that their AI is biased than to confront and eliminate those biases.
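
    Lemoine's point that bias can hide in the patterns a model learns, even when nothing obviously sensitive appears in the data, can be illustrated with a small synthetic example. Everything below (the feature names, numbers, and scikit-learn usage) is a hypothetical assumption, not something from the interview: the protected attribute is never shown to the model, yet a correlated "proxy" feature lets the model reproduce a historical disparity anyway.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership (a protected attribute) -- never shown to the model.
group = rng.integers(0, 2, size=n)

# A proxy feature (think "neighborhood") that correlates with group membership.
neighborhood = group + rng.normal(0.0, 0.3, size=n)

# Historical outcomes that were themselves biased against group 1.
income = rng.normal(50, 10, size=n)
approved = ((income - 10 * group + rng.normal(0, 5, size=n)) > 45).astype(int)

# Train only on income and the proxy; the protected attribute is excluded.
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The learned pattern still reproduces the historical disparity.
pred = model.predict(X)
for g in (0, 1):
    print(f"Predicted approval rate for group {g}: {pred[group == g].mean():.2f}")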

    In a recent article, Veronika Shestakova examines bias in artificial intelligence and what can be done to limit it. After giving a general overview of AI and describing how a machine learning model develops and moves through its various 'life stages', Shestakova dives into the different types of bias that may be encountered: historical bias, representation bias, measurement bias, aggregation bias, evaluation bias, and deployment bias. These biases present themselves in different ways and through different means throughout a model's learning and life cycle. Shestakova notes that bias already exists in our world today, and therefore exists in the data that AI uses to learn. She goes on to describe methods and techniques for limiting AI bias, such as developing criteria or a test to determine whether an AI is possibly biased, and she emphasizes the importance of humans stepping back and analyzing the output and decisions the AI makes to ensure the results are not biased or skewed.
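
    One way to imagine the kind of bias test Shestakova recommends is a simple automated check that compares a model's positive-decision rates across groups and flags the results for human review when the gap is too large. The sketch below is only an assumption about what such a test might look like; the function name and the 80% rule-of-thumb threshold are illustrative, not taken from the article.

import numpy as np

def demographic_parity_check(predictions, groups, threshold=0.8):
    # Compute the rate of positive decisions for each group.
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    # Rule-of-thumb test: the lowest rate should be at least `threshold`
    # times the highest rate; otherwise flag the model as possibly biased.
    ratio = min(rates.values()) / max(rates.values())
    print("Positive-decision rates by group:", rates, "| ratio:", round(ratio, 2))
    return ratio >= threshold

# Example usage with made-up predictions and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
grps = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
if not demographic_parity_check(preds, grps):
    print("Possible bias detected -- route these results to a human reviewer.")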

    While bias is all around us and we are unlikely ever to eradicate it completely from society or from artificial intelligence, it is still extremely beneficial to examine AI and its decision making in order to lessen and, where possible, eliminate biases. It is especially important that humans take time to fully evaluate AI output when it is used to make high-risk decisions, such as the order in which patients receive immediate medical care and which patients can wait when a large number of patients arrive at an ER at once. While discussions about AI's potential to expand and control more of the world are common, at the current state of the science, AI and its decision-making capabilities are only as good as the scientists and engineers who build it and the data it uses to learn and develop. This makes the work researchers must do to mitigate bias in AI all the more important.
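
    As a final, purely hypothetical illustration of what evaluating high-risk AI output could look like in practice, the sketch below only accepts an AI triage suggestion automatically when the model is confident and the case is not critical, and sends everything else to a clinician. Every name, label, and threshold here is an assumption made for illustration, not a description of any real system.

def route_triage_decision(patient_id, ai_priority, ai_confidence, confidence_threshold=0.95):
    # High-risk or low-confidence suggestions always go to a human reviewer.
    if ai_priority == "critical" or ai_confidence < confidence_threshold:
        return f"Patient {patient_id}: send to a clinician for manual review"
    # Routine, high-confidence suggestions can be accepted, but should still be audited.
    return f"Patient {patient_id}: accept AI-suggested priority '{ai_priority}'"

print(route_triage_decision("A-101", "routine", 0.98))
print(route_triage_decision("A-102", "critical", 0.99))
print(route_triage_decision("A-103", "urgent", 0.70))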

    

References:

"Appleseeds to Apples." Nexus Journal, n.d., https://www.hanknexusjournal.com/appleseedstoapples.

Shestakova, Veronika. “Best Practices to Mitigate Bias and Discrimination in Artificial Intelligence.” Performance Improvement (International Society for Performance Improvement), vol. 60, no. 6, 2021, pp. 6–11, https://doi.org/10.1002/pfi.21987.
