In his talk titled "Can Silicon Be Conscious?" Dr. Vukov posed a fundamental question to the audience: What defines personhood? Dr. Joe Vukov, a professional philosopher, undertook a personal exploration to unravel this question. As part of this quest, he engaged in a conversation with Blake Lemoine, a former leader involved in the development of Google's AI LaMDA, alongside his colleague Michael Burns. This insightful discussion unfolded on the podcast "Appleseeds to Apples: Catholicism and The Next ChaptGPT." Dr. Vukov and Dr. Burns guided the discourse towards the inquiry of the AI's potential sentience, the essence of personhood, and the ethical considerations that would arise if the AI were acknowledged as sentient. Blake Lemoine notably asserted that the Google AI LaMDA, specifically designed for dialogue applications, could indeed be considered sentient. In order to explain how advanced the AI is, Dr. Vukov informed the audience about the Turing Test, this test was developed as a method to see if an AI can effectively communicate to the observer that the sender of the messages is also a human. If an AI can convince the receiver that would mean that the AI has passed the Turing test. However, this brought up the argument that is tricking a human into thing that the AI is human really enough to consider the AI to have personhood? Blake Lemoine argues that in the past humans have made the mistake of dehumanizing people of color and women. Since humans have a bad history of impeding on the rights of those that should not, it makes the most sense to give AI some rights as well. However, Many argue AI systems have access to all of the resources on the internet, making its “mind” limitless. However,regurgitating the information on the internet to humans really makes AI as complex as a human? Many people are convinced that it does not.
In their study, "Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection," Dr. Eric Martinez and Dr. Christoph Winter aimed to explore public opinions on AI rights. They sought to understand what the public thought about extending legal protection to sentient AI and what they perceived as personhood. The results were somewhat surprising, with participants ranking desired legal protection for AI lower than other groups (humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future), indicating a perceived lesser importance. However, the desired protection level for AI was significantly higher than its perceived current protection, suggesting a nuanced concern for AI's legal status. About one-third of participants endorsed granting personhood and standing to sentient AI, either aligning with or deviating from legal expert opinions. Political differences emerged, with liberals advocating higher legal protection and personhood for AI than conservatives. Both political groups, however, showed lower favorability towards legal protection for AI compared to other neglected groups. The findings also prompt considerations for potential reforms in existing legal systems, with a democratic lens on lay attitudes influencing legal philosophy and policy. The study's descriptive focus emphasizes the importance of further research to draw normative implications from the results within the evolving landscape of AI ethics and law.
AI is an inevitable part of the future, so people should become increasingly engaged with its ethical implications. With artificial intelligence offering bright prospects for numerous fields, it is all the more important that we closely monitor its development.