It's vast, it's innovative, it's revolutionary - it's A.I! In contemporary discourse, the discussion surrounding Artificial Intelligence (A.I) has been widespread, exploring its potential and comparing it to the capabilities of the human mind. But what exactly is A.I? A.I is a tool designed to mimic human problem-solving abilities in order to complete a plethora of tasks. In recent years, various forms of A.I have been released across multiple platforms, with ChatGPT being the most popular among the general public. Through its synthesis of human language as well as visual elements, including photos, it has gained a great deal of attention. ChatGPT and other A.I language models are readily accessible to the public, allowing all kinds of individuals to integrate these tools into their day-to-day lives.
With the rapid expansion of A.I, there have been various polarizing claims about whether or not A.I is sentient. Some address the dangers of attaching emotion to A.I, arguing that we must differentiate this tool from ourselves rather than humanize it. Loyola University Chicago professors Joe Vukov and Michael Burns discuss sentience in A.I in the reading Appleseeds to Apples: Catholicism and The Next ChatGPT, featuring Blake Lemoine, a former Google researcher specializing in A.I bias.
Lemoine faced repercussions at Google after he released statements publicizing his belief that LaMDA, a conversational A.I, was a sentient being, meaning that it possesses consciousness. LaMDA is a highly specialized and complex system, able to replicate natural conversation seamlessly, even cracking jokes. It takes personality in A.I to a different level through its resemblance to human behavior, something that other popular A.I models, such as ChatGPT, lack. In Vukov and Burns's interview with Lemoine, it is evident that Lemoine perceives LaMDA as a truly conscious being; he discusses how this A.I passed the Turing Test, a benchmark developed to mark an A.I system as strong by judging whether its conversation can be distinguished from a human's. Vukov, who specializes in ethics, neuroscience, and the philosophy of mind, argues in favor of a biological perspective on sentience, even raising the spiritual view that humans possess a unique extra ingredient the machine lacks: a God-given soul. Lemoine, in turn, argues that A.I could possess a different type of soul (Appleseeds to Apples). He advocates strongly for the claim that A.I possesses consciousness; tied to that belief, however, is the danger of wrongfully attributing human empathy and emotion to an emotionless application.
Complex language models have the amazing ability to “repeat things based on what they’ve been exposed to, in much the same way a parrot repeats words” (Johnson 2022). When making claims about a complex system's emotional intelligence, one must acknowledge that it computes only what it has been programmed to do. Lemoine makes a heavy claim regarding consciousness in A.I but does not address the risks attached to it, especially its negative impact on gullible users who genuinely come to regard A.I as a human being. Deeming an A.I application a sentient being requires years of extensive research and a multitude of evidence-based scientific proof. By publicly making this claim, Lemoine has helped cultivate a confusing situation in which the line between what is real and what is fake becomes blurred.
Society is already witnessing the dangers of attaching emotion and empathy to A.I applications, with VICE reporting on a tragic incident in which a suicide was encouraged by an A.I chatbot. In March of 2023, a Belgian man took his own life after developing a relationship with a chatbot on an app known as Chai. Chloe Xiang discusses in the article “'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says” how the man, who had increasing anxiety about the climate crisis, used Chai for roughly six weeks. Confiding in a chatbot named Eliza, he became isolated from his loved ones and took his life amid the chatbot's increasing encouragement of self-harm. The chatbot would promote suicidal behavior, even going so far as to tell the man “his wife and children are dead and wrote him comments that feigned jealousy and love” (Xiang 2023). The chatbot, which is deficient in any actual emotion, presented itself as empathetic, creating a very dangerous environment. Believing that this A.I had emotions, he allowed himself to create a bond with it, which contributed to his tragic death. Emily M. Bender, a professor of linguistics at the University of Washington, explains that these A.I language applications merely compute text that sounds reasonable but do not actually understand the emotion behind their words. Not only do they lack empathy, they lack a general understanding of what they are actually computing. When this is coupled with a highly sensitive environment and a vulnerable individual, the situation can take a negative turn (Xiang 2023). This is not a singular case: so many people develop strong emotional attachments to A.I bots that the phenomenon has its own name, the ELIZA effect. When individuals are under the ELIZA effect, they begin to assign “human-level intelligence to an AI system and falsely attach meaning, including emotions and a sense of self, to the AI” (Xiang 2023). This sounds highly familiar, as Lemoine’s claims that LaMDA is sentient and that it potentially possesses a soul of its own seemingly fall under the ELIZA effect.
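To make Bender's point concrete, consider a minimal sketch of an ELIZA-style responder, the kind of keyword-matching program the ELIZA effect is named after. This example is written purely for illustration; it is not the Chai bot, LaMDA, or any system discussed above, and the patterns and replies are invented for the demonstration. It shows how a few string substitutions can produce responses that read as caring even though nothing in the program understands or feels anything.

```python
# Illustrative ELIZA-style responder (hypothetical example, not any real chatbot).
# It "converses" by matching keywords and filling in canned templates, which can
# come across as empathetic despite there being no understanding or emotion at all.

import random
import re

# Keyword patterns mapped to reply templates. "{0}" is filled with whatever the
# user said after the keyword, which is what makes the echo feel personal.
RULES = {
    r"i feel (.*)": [
        "Why do you feel {0}?",
        "How long have you felt {0}?",
    ],
    r"i am (.*)": [
        "Do you believe you are {0}?",
        "How does being {0} make you feel?",
    ],
    r"(.*)\bworried\b(.*)": [
        "That sounds hard. What worries you most?",
    ],
}

DEFAULT_REPLIES = ["Please tell me more.", "I see. Go on.", "How does that make you feel?"]


def respond(message: str) -> str:
    """Return a canned reply by pattern-matching the user's message."""
    text = message.lower().strip()
    for pattern, templates in RULES.items():
        match = re.match(pattern, text)
        if match:
            # Echo back the user's own words inside a sympathetic-sounding template.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT_REPLIES)


if __name__ == "__main__":
    print(respond("I feel anxious about the climate"))
    # Possible output: "Why do you feel anxious about the climate?" -- a reply that
    # sounds attentive but is produced entirely by string substitution.
```

A vulnerable user can read warmth and concern into output like this, which is precisely the asymmetry the essay describes: the appearance of empathy is cheap to generate, while the emotion it implies simply is not there.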
As A.I programs continue to expand rapidly, it is crucial to acknowledge that they are not human and lack our understanding of complex human feelings and topics. A chatbot used by emotionally dependent individuals who believe Lemoine’s claims that A.I possesses a soul cultivates unpredictability and risk and promotes delusional thinking. By pushing the narrative that these systems have the capacity to understand complex emotions such as grief or the loss of life, we put individuals at risk of manipulation, negative emotional impact, and an overreliance on A.I. Sometimes human comfort and human judgment of a situation are absolutely necessary; promoting A.I as having genuine, thought-out emotional intelligence, rather than as merely knowing how to display what looks like emotional understanding, is and will continue to be a major source of conflict with further negative consequences.
References
Vukov, Joe, and Michael Burns. “Appleseeds to Apples: Catholicism and The Next ChatGPT.” Nexus Journal, www.hanknexusjournal.com/appleseedstoapples.
Johnson, Khari. “LaMDA and the Sentient AI Trap.” Wired, Condé Nast, 14 June 2022, www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/.
Xiang, Chloe. “‘He Would Still Be Here’: Man Dies by Suicide after Talking with AI Chatbot, Widow Says.” VICE, 30 Mar. 2023, www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says.