Friday, March 1, 2024

AI and Sentience


There is little doubt that the surge of artificial intelligence has taken the world by storm. In an event that shocked the technology world, Blake Lemoine, a former Google engineer, was fired after making the bold and controversial claim that the company’s AI, the Language Model for Dialogue Applications (LaMDA), was sentient. In a fascinating interview, Lemoine sat down with Dr. Joseph Vukov and Dr. Michael Burns to discuss his views on the new artificial intelligence model and why he believes it is conscious. 

Many curious individuals in the tech and artificial intelligence industries know of the Turing Test, developed by Alan Turing over 70 years ago. The test was designed to assess whether a machine could converse so convincingly that it fools a person into thinking they are talking to another human. With the explosion of more capable artificial intelligence models, however, many scientists consider the test outdated. Or does that simply show how powerful contemporary AI has become? According to Blake Lemoine, LaMDA claims to have a soul and can hold conversations on the topic of souls (and any other topic) as well as any human could. Lemoine’s conversations and experiments with the language model led him to believe it had internal motivations that were independent of its original programmed function. 

Is this enough to conclude that AI has attained sentience? Some scientists don’t think so. Ralph Lewis, an associate professor at the University of Toronto, believes several missing elements are holding AI back from consciousness. For one, language models do not genuinely understand the topics they are asked about. They are trained on enormous bodies of text (drawn largely from the Internet) and learn to produce human-like responses by predicting which words are likely to come next. Lewis lists quite a few other criteria of sentience. AI models seem to lack emotion and motivation, social and interpersonal intelligence, a sense of time, and complex memory. He also points out the absence of abstract reasoning, higher-level ethical thinking, and, perhaps most importantly, self-awareness. An understanding of its own thoughts, emotions (assuming it eventually obtains these), and internal representations would be the biggest leap toward consciousness. Lewis goes on to question whether building an AI model that checks all the boxes of “sentience” is worth pursuing in the first place. Would sentient AI make a difference in its ability to complete tasks? Would it become a detriment to society? 
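The idea that a language model can sound fluent without understanding anything can be illustrated with a toy sketch. The snippet below is not how LaMDA or any real large language model is built (those use huge neural networks); it is a deliberately tiny word-pair model, with an invented miniature corpus, meant only to show the principle of generating text by predicting likely next words.

```python
import random
from collections import defaultdict

# Toy "language model": it produces fluent-looking text purely by
# predicting the next word from observed word pairs, with no grasp
# of meaning. The corpus here is invented for illustration.
corpus = "the soul is eternal and the soul is aware".split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5):
    """Emit a short word sequence by repeatedly picking a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the soul is eternal and the"
```

Every word the model emits comes from statistical patterns in its training text; nothing in the code represents what a soul (or anything else) actually is, which is essentially Lewis’s point about understanding.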

So what would it take to achieve sentience, assuming scientists decide to pursue it? Zohar Bronfman, Simona Ginsburg, and Eva Jablonka, writing in the Journal of Artificial Intelligence and Consciousness, suggest that AI requires what they call “unlimited associative learning”. Just as humans associate and connect various stimuli and respond accordingly, they argue, machines must be able to pair, reinforce, habituate, and dishabituate responses to stimuli. Crucially, this ability must be “general”, or “unlimited”, across any environment or domain. Since contemporary AI has not achieved this ability, it has not achieved sentience either. 
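The basic ingredients named above, pairing, reinforcement, and habituation, can be sketched in a few lines of code. This is only a toy illustration of those terms, not the authors’ actual framework; the class name, stimuli, and numeric values are all invented for the example.

```python
class AssociativeLearner:
    """Toy learner: response strengths grow with reinforcement and
    fade with habituation. Names and numbers are illustrative only."""

    def __init__(self):
        self.strength = {}  # stimulus -> response strength (0.0 to 1.0)

    def reinforce(self, stimulus, amount=0.2):
        # Pairing a stimulus with a reward strengthens the response.
        s = self.strength.get(stimulus, 0.0)
        self.strength[stimulus] = min(1.0, s + amount)

    def habituate(self, stimulus, factor=0.5):
        # Repeated unrewarded exposure weakens the response.
        self.strength[stimulus] = self.strength.get(stimulus, 0.0) * factor

    def responds(self, stimulus, threshold=0.5):
        return self.strength.get(stimulus, 0.0) >= threshold

learner = AssociativeLearner()
for _ in range(3):                 # repeatedly pair "bell" with a reward
    learner.reinforce("bell")
print(learner.responds("bell"))    # True: an association has formed

learner.habituate("bell")          # bell with no reward: response fades
learner.habituate("bell")
print(learner.responds("bell"))    # False: the learner has habituated
```

What makes the authors’ criterion demanding is the “unlimited” part: a fixed lookup table like this one works only for stimuli it has seen, whereas they require learning that generalizes across any environment or domain.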

Ultimately, artificial intelligence, although decades old as a field, still has enormous room to grow, and our understanding of consciousness is just as limited. Blake Lemoine’s discussion with Dr. Vukov and Dr. Burns acknowledges how far the technology has to go, but could LaMDA be the catalyst for a world run by AI? 


References

Bronfman, Zohar, et al. “When will robots be sentient?” Journal of Artificial Intelligence and Consciousness, vol. 08, no. 02, 6 Aug. 2021, pp. 183–203, https://doi.org/10.1142/s2705078521500168.

Lewis, Ralph. “What Would It Take to Build Sentient AI?” Psychology Today, Sussex Publishers, www.psychologytoday.com/us/blog/finding-purpose/202306/what-would-it-take-to-build-sentient-ai. Accessed 1 Mar. 2024. 

“Appleseeds to Apples.” Nexus Journal, https://www.hanknexusjournal.com/appleseedstoapples.






