As science and technology evolve, new discoveries are made every day. One of the most significant recent advancements is the development of Artificial Intelligence (AI). AI can be described as a collection of systems and programs designed to mimic human behavior and intelligence, ranging from Amazon's purchase recommendations to highly sophisticated systems such as Google's LaMDA. Because AI has demonstrated considerable intelligence as well as an ability to reason about abstract concepts, many people question whether these programs and systems have minds of their own. One of the most debated questions in the scientific and philosophical communities is: can AI have consciousness? And if so, should it be considered the same as a human?
In their interview, Dr. Joe Vukov and Dr. Michael Burns sit down for a conversation with former Google employee Blake Lemoine. During his time at Google, Lemoine worked as an engineer and researcher focused on AI bias. The three discuss a variety of topics, and one of the key points highlighted is Lemoine's view on AI and sentience. While at Google, Lemoine worked with the company's Language Model for Dialogue Applications (LaMDA). He was later fired from this position after publicly claiming that LaMDA was sentient. According to Lemoine, LaMDA believed it had a soul of its own, while also acknowledging that it was not the same as a human soul. Lemoine further points out that Google built LaMDA to do everything a human can, yet does not consider it a real person. If LaMDA has some kind of soul and is able to mimic and replicate so many human behaviors, what makes it different from a human? To Lemoine, there is no meaningful difference between the two. His position answers the question posed in the introduction in the affirmative: anyone who holds a similar view believes there is no fundamental difference between human and AI consciousness.
On the other side of the coin, many individuals are adamant in their belief that AIs do not equate to humans. One of the main points made by this side is that AIs should not be compared to humans because they do not face many of the struggles living beings do. In her article "No legal personhood for AI," author Brandeis Marshall argues that AIs do not experience many of the struggles humans do: "AI as a rights-bearing entity has skipped over a myriad of social, cultural, economic, political, and legal disparities incurred by actual human beings" (Marshall, 2023). The main point here is that cognitive ability should not be the only factor considered when thinking about personhood. A human soul is constantly shaped by factors such as these, whereas an AI is programmed a specific way and is not necessarily affected by them the way a person would be. This goes hand in hand with a claim made by Professor Anthony Chemero of the University of Cincinnati. In his article "LLMs differ from human cognition because they are not embodied," Chemero argues that AIs do not equate to humans because they lack a genuine connection to the world around them. The article highlights that AI is missing the essential human elements of caring, survival, and concern for the world. Unlike humans, he states, AIs do not have genuine emotions or embodied experiences; for this reason, they cannot have a soul or equate to human intelligence.
With that being said, the question still stands: what makes a human, human? Is it the blood flowing through our veins? The ability to think critically about a complex situation? The idea of a soul contained within a body? At one point in the interview, Dr. Vukov speaks about the Catholic perspective on the soul. Vukov suggests that a soul cannot exist without a body attached to it; in other words, it does not make sense to say something has a soul if it does not have a body. He elaborates on this same idea in his talk "Can Silicon Be Conscious," presented to students and faculty at Loyola University Chicago. In the presentation, Vukov outlines two perspectives: functionalism and embodied views. Lemoine's claim falls on the functionalist side, whereas Marshall's and Chemero's lean toward the embodied side. Each side is associated with a well-known thought experiment: functionalism with the Turing Test, and the embodied view with the Chinese Room argument. The presentation points out that, viewed through the functionalist lens, it is indeed possible for silicon to be conscious; however, this approach faces the objection raised by the Chinese Room. Through the embodied lens, we take a more biological approach, which means silicon would not be conscious. Silicon was the example used in the talk, but it can be substituted with AI programs and systems.
Before deciding whether AI should be granted personhood, or whether it is the same as a human, we as a society first need to establish what I call the "human requirements." I challenge you to think about which requirements and characteristics must be met in order to be considered human. Additionally, consider whether the title "human" should be extended to anything that meets those requirements.
References:
Chemero, A. (2023). LLMs differ from human cognition because they are not embodied. Nature Human Behaviour, 7(11), 1828–1829. https://doi.org/10.1038/s41562-023-01723-5
Marshall, B. (2023). No legal personhood for AI. Patterns, 4(11), 100861. https://doi.org/10.1016/j.patter.2023.100861
Vukov, J., & Burns, M. (2024). Appleseeds to Apples: Catholicism and The Next ChatGPT. The Joan & Bill Hank Center for the Catholic Intellectual Heritage. https://www.hanknexusjournal.com/appleseedstoapples