Friday, March 1, 2024

 Rendition of J.A.R.V.I.S.—What could go wrong?

 

The storyline of Iron Man is armored with the help of J.A.R.V.I.S., an AI assistant to Tony Stark (played by Robert Downey Jr.), advanced enough to let Iron Man hold his own in combat against many different superpowered opponents. J.A.R.V.I.S. is the underrated underdog, letting Iron Man take all the credit for victories that earn him both respect and fear. This fictional storyline proves entertaining because it features an AI system with high intelligence and quick problem-solving skills of a kind not attainable outside the movie theater. What can be drawn from the story of Iron Man is that J.A.R.V.I.S. is an AI capable of multifaceted skill, much like a human. This level of intelligence, combined with troubleshooting ability and wit, is something that humanity, technology, and academia are racing to achieve, with some of us frightened about the trajectory of our AI.

 

Roboticists in the real world have long wanted to create something J.A.R.V.I.S.-like but have instead been met with endless limitations.

The tool you might use to do homework, ChatGPT, may have found its better half. ChatGPT is an LLM, or "large language model," widely used for its ability to mimic human speech, writing, and even the appearance of sentience. It can hold a conversation; give suggestions and advice, whether personal or factual; and write poems, vows, and more. Some ethicists and philosophers have described the behavior and responses of LLMs as sentient. The idea that artificial intelligence could be sentient may spark controversy about the potential damage that could arise. Until now, however, LLMs like ChatGPT have never been able to pose a serious physical risk, because they are immobile. That may soon be a limitation of the past.

 

Scientists across industry and academia have been racing to pair the LLM 'brain' with physical robot bodies. The reason today's robot bodies cannot produce these human-like behaviors is that they are given explicit directions for how to perform one or a few tasks. This requires a significant amount of training beyond simple instructions and is heavily time-consuming. These robots do not have the capacity to cast around in search of an answer the way the human brain does. We have robot dogs, robot cars, even robot warehouse workers. And we may have found a way to upgrade 'robot warehouse worker' to 'robot able to perform multiple tasks across many disciplines.'

 

The pairing of LLMs and robot bodies may soon be here. Ishika Singh, a Ph.D. student in computer science at the University of Southern California, has the goal of creating a robot that can cook dinner, set the table, and serve it too. And although this sounds like it would be an incredible advancement, it poses larger ethical concerns about giving more power to a creation whose sentience is already a matter of debate.

 

Joe Vukov, a philosophy professor at Loyola, recently held an hour-long interview with Blake Lemoine, a former researcher at Google who claimed that Google's AI LaMDA (Language Model for Dialogue Applications) was sentient. The conversation touched on the Catholic religion in the search for what qualifies as sentience.

 

So... what happens when we do build robots that possess all the tools needed to be human? What happens when the robots 'feel'? Many philosophers in the real world are asking these questions, yet few companies like Google are looking into them enough.

 

Could we possibly have a rendition of J.A.R.V.I.S.? 

 

Berreby, David. “AI Chatbot Brains Are Going inside Robot Bodies. What Could Possibly Go Wrong?” Scientific American, 27 Feb. 2024, www.scientificamerican.com/article/scientists-are-putting-chatgpt-brains-inside-robot-bodies-what-could-possibly-go-wrong/. 


"Appleseeds to Apples." Nexus Journal, n.d., www.hanknexusjournal.com/appleseedstoapples.
