With the recent development of advanced Artificial Intelligence (AI), one may begin to wonder how similar these systems are to human brains in how they operate. Joseph Vukov, a philosophy professor at Loyola University Chicago, was curious about this as well, and chose to interview Blake Lemoine, a former Google employee who worked with AI, on the topic. Lemoine is known for having worked with Google's LaMDA, one of the most advanced AI systems on the planet, and for being convinced that LaMDA is…a person. Lemoine arrived at this conclusion after countless conversations with LaMDA that mirrored those he might have with a human being.
Although LaMDA and other AI systems such as ChatGPT can engage in human-like conversation, to what extent are these machines built like human brains? Surprisingly little. According to the article "New AI Circuitry That Mimics Human Brains Makes Models Smarter" by Anna Mattson, AI and human thought both "run on electricity, but that's about where the physical similarities end" (Mattson). Whereas the human brain intermixes memory with processing, computers store information in parts of their hardware that are physically separate from the processor, making these systems far less integrated. As a result, AI can be both extremely costly and inefficient compared to the brain.
To address this issue, chemists and engineers at Northwestern University have begun to model AI hardware on the human brain, creating what are known as neuromorphic computing systems. Mark Hersam and his colleagues redesigned computer transistors to function more like neurons. Specifically, they built transistors from thin, two-dimensional materials layered on top of one another at different orientations, forming what are known as moiré superstructures. As a result, these transistors can learn from the data they process, establishing connections between different inputs while recognizing patterns and making associations. This is what human brains do in what is referred to as associative learning. These transistors thereby enable AI systems to "move beyond simple pattern recognition and into more brainlike decision-making" (Mattson).
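To get an intuition for associative learning, here is a toy software sketch of the idea (Hebb's rule: connections between inputs that fire together get stronger). This is only an illustrative analogy, not the actual device physics of the Northwestern transistors; the function names and the threshold value are invented for the example.

```python
def train(pairs, weights=None, rate=0.5):
    """Strengthen the link between inputs that co-occur (Hebbian update)."""
    weights = dict(weights or {})
    for a, b in pairs:
        key = frozenset((a, b))
        weights[key] = weights.get(key, 0.0) + rate
    return weights

def recall(weights, cue, threshold=1.0):
    """Return every input whose association with the cue exceeds the threshold."""
    return sorted(
        other
        for key, w in weights.items()
        if cue in key and w >= threshold
        for other in key - {cue}
    )

# Repeated co-activation builds the association, like a synapse strengthening:
# after three pairings, a single cue is enough to evoke its partner.
w = train([("light", "food")] * 3)
print(recall(w, "light"))  # → ['food']
```

The point of the analogy is that memory (the weights) and processing (the update and recall steps) act on the same connections, which is the kind of integration the brain has and conventional computer hardware lacks.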
Beyond this, these devices would drastically improve AI's energy efficiency. Thanks to the unique quantum properties of the transistors, no continuous power supply is needed for them to retain data. As a result, the new devices consume "20 times less energy than other kinds of synaptic devices" (Mattson). Such machines could revolutionize the AI industry, leading to far more efficient and intelligent systems.
Who knows: perhaps Lemoine's conclusion will strike the public as far less crazy once these neuromorphic systems are released!
Resources
"Appleseeds to Apples: Catholicism and The Next ChatGPT - An Interview with Blake Lemoine." Hank Nexus Journal, https://www.hanknexusjournal.com/appleseedstoapples.
Mattson, Anna. "New AI Circuitry That Mimics Human Brains Makes Models Smarter." Scientific American, 7 Feb. 2024, https://www.scientificamerican.com/article/new-ai-circuitry-that-mimics-human-brains-makes-models-smarter/.