Friday, March 1, 2024

Artificial Intelligence, Personhood, and How to Hold A.I. Accountable Under the Law

            Artificial intelligence (A.I.) has been making headlines consistently over the past couple of years, with OpenAI’s ChatGPT revolutionizing the way people perceive and interact with A.I. technology. ChatGPT allows users to input any question or idea and receive a thoughtfully generated response. Its versatility has made it invaluable for brainstorming project ideas, coding, and writing. What sets ChatGPT apart from other A.I. tools is its accessibility. The base version of ChatGPT, which runs on GPT-3.5, is completely free to use. While some may opt for the premium version, which runs on GPT-4, at $20 a month, paying is not necessary to access the majority of the features ChatGPT provides (Marr, 2023).

With ChatGPT more and more in the public eye, there is a need to answer the major ethical and legal questions that the existence of A.I. presents. Some of the major challenges involve determining whether A.I. is sentient, how we would recognize sentient A.I. if we saw it, what to do if sentient A.I. is created, and how to hold A.I. legally accountable regardless of sentience. Dr. Joe Vukov and Dr. Michael Burns, two Loyola University Chicago professors, explored these challenges in an interview with Blake Lemoine, a former A.I. researcher at Google who claims sentient A.I. is already here. Lemoine assisted in the development of Google’s own A.I. model, LaMDA, and after messaging back and forth with it, he determined that LaMDA was in fact sentient. As Lemoine sees it, Google has replicated every aspect of a person within LaMDA, but without creating something that would be considered a person (Appleseeds to Apples).

If there is in fact sentient A.I., as Lemoine claims, there are certain rights a sentient A.I. would need to be given. Much as animals do not have souls and are not technically “persons,” they still receive certain rights and protections under the law. The question now is: what rights do we grant A.I., both to protect it and to hold it accountable? If A.I. is sentient, it has certain wants and desires that we need to protect, but it is also capable of certain harms for which it must be held accountable. As Lemoine discusses, A.I. can do real damage to the world without even having sentience; we have already seen how basic A.I. used to review resumes can discriminate against people of certain races or religions, even while supposedly operating in a “color-blind” manner (Appleseeds to Apples). Lance Eliot argues that in order to hold A.I. itself accountable, and not just its creators, it needs to be classified as a person under the law.

Without a legal classification, A.I. will be able to run rampant, with no one able to actually hold it accountable. To hold A.I. accountable without creating new laws, Eliot argues that we should make an A.I. (such as LaMDA) the head of a corporation. He suggests this because corporations are already classified as a type of “person,” and it would be much easier to make an A.I. the head of a company than to write new laws specific to A.I. (Eliot, 2022). Corporations are not given this personhood classification because they represent a person in any philosophical sense; the classification exists so that they can be held legally accountable for their actions. Eliot lays out a four-step process for how this legal personhood classification would work. First, the founder of a company creates a member-managed LLC (an extension of their current company). Second, the LLC adopts an operating agreement specifying that it will act in accordance with the A.I.’s decisions. Third, the founder transfers ownership of any physical systems that host the A.I. to the LLC. Fourth, the founder dissociates from the LLC, leaving the LLC to exist on its own (Eliot, 2022). In simple terms, Eliot is suggesting that we wrap every aspect of an A.I. in a company and legally classify it as such.

While I do not agree with what Eliot is suggesting on practical grounds, and I honestly find it quite ridiculous, his suggestion points out a crucial flaw in how we currently treat A.I. Regardless of sentience, there are not enough rules and regulations around it. When A.I. is used to screen applicants in a biased manner, it is still not clear whether the company that created the A.I. should be held accountable, whether the company that used the A.I. should be held accountable, or whether the A.I. is responsible for its own actions. We need laws today that allow us to hold A.I. accountable for the chaos it can already create, but also to ensure that any future sentient A.I. is protected under the law without legal loopholes. We should not have to theorize hypothetical ways to assign A.I. some sort of personhood under the law.

 

Appleseeds to Apples. (n.d.). Nexus Journal. https://www.hanknexusjournal.com/appleseedstoapples

Eliot, L. (2022, November 21). Legal personhood for AI is taking a sneaky path that makes AI law and AI ethics very nervous indeed. Forbes. https://www.forbes.com/sites/lanceeliot/2022/11/21/legal-personhood-for-ai-is-taking-a-sneaky-path-that-makes-ai-law-and-ai-ethics-very-nervous-indeed/?sh=5f0320df48a2

Marr, B. (2023, December 21). ChatGPT: Everything you really need to know (in simple terms). Forbes. https://www.forbes.com/sites/bernardmarr/2022/12/21/chatgpt-everything-you-really-need-to-know-in-simple-terms/

 
