Thursday, February 29, 2024

The Moral Concerns around the Consciousness of Artificial Intelligence

Artificial Intelligence (AI) has evolved dramatically since its inception in the 1950s. It can now process and mimic human speech and analyze complex data, and it is increasingly used to perform typically human activities such as creating artwork and driving cars. As AI grows smarter and is asked to grasp abstract concepts, the question that plagues computer scientists, neuroscientists, and philosophers is whether AI can reach human-level consciousness, and if so, what ethical framework would need to be in place to grant AI personhood. Contemporary AI such as ChatGPT is passing the Turing Test (Biever, 2023); now we need to understand the structures behind its apparent understanding and whether they allow it to experience consciousness.

In their interview with Blake Lemoine, Dr. Joe Vukov and Dr. Michael ask about the possibility of AI sentience and discuss the Catholic view on personhood and the “soul” of AI. Lemoine, a former researcher at Google, claims that the large language model LaMDA has reached consciousness. Vukov, a philosopher who studies the intersection of ethics, science, and religion, states that, according to Catholicism, humans possess a unique nature as ensouled beings crafted in the likeness of God. This distinctive attribute grants us sentience, enabling self-awareness, the formation of thoughts and moral judgments, and the capacity for empathy. So if these processes are rooted in our nature, how can AI replicate them? Lemoine claims that there is a way to work around this dilemma: “In our brains, we have dedicated moral centers that dynamically recompute things like context and evolution of language…Build some concrete principles--kind of like the moral centers in our brains--and then work towards a positive goal” (Appleseeds to Apples). Our morals change with context and evolving language, so implementing our language in AI, along with such concrete principles, could allow a conscious AI to develop a moral compass of its own.

Other neuroscientists, computer scientists, and philosophers are also considering the possibility of AI consciousness and the moral weight it would carry. In Grace Huckins’ article “Minds of Machines: The Great AI Consciousness Conundrum,” Liad Mudrik of Tel Aviv University focuses on manipulating conscious experience to reach a fundamental theory of what consciousness is. As it stands, the basic definition of consciousness is the ability to experience things, but this definition does not give us the scientific mechanisms for determining who or what can be conscious. In the human brain, feedback connections appear to give rise to consciousness (Huckins, 2023), but again, this does not explain why these structures contribute to it, an explanation that is essential for testing consciousness in AI. Trying to define consciousness and apply it to AI could have dire consequences. On one hand, if we fail to identify a conscious AI and subject it to batteries of tests to prove its consciousness, we run the risk of torturing it. On the other hand, “mistake unconscious AI for a conscious one, [and] you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code” (Huckins, 2023).

With all this being said, what does this diversity of thought mean for AI? Well, it raises questions about the ethical responsibilities placed on humans if consciousness is confirmed in AI and we grant it personhood, a concept brought up by Dr. Vukov and others. Vukov and Lemoine discuss the natural rights granted to humans by our Creator; considering that humans are the ones creating these AI systems, we must account for the kind of nature we want to bestow on them. This nature acts as the basis of inalienable rights, and we must uphold those rights if we plan to grant personhood to AI (Appleseeds to Apples). We also need to account for AI’s ability to understand and replicate human emotions, and to deepen our understanding of AI’s pleasures, pains, desires, and fears. If AI can internalize emotions, we would hold a responsibility to extend care to it and protect it from harm. But as Huckins (2023) says in her article, “Protecting a real-world AI from suffering could prove much harder…and that limits the choices that humans can ethically make”. Are humans ready to take on that responsibility and work around the needs of AI?


References

Biever, C. (2023). ChatGPT broke the Turing test — the race is on for new ways to assess AI. Nature, 619(7971), 686–689. https://doi.org/10.1038/d41586-023-02361-7

Huckins, G. (2023, October 16). Minds of machines: The great AI consciousness conundrum. MIT Technology Review. https://www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum/
