Thursday, March 9, 2017

Teaching Self-Driving Cars Morality


At the rate technology is developing, it would be no surprise if self-driving cars soon started appearing in the marketplace. Although the idea sounds incredibly safe and time-efficient in theory, the question of safety protocols in emergency situations must be addressed before such cars are manufactured. The New York Times article “Should Your Driverless Car Hit a Pedestrian to Save Your Life?” studies and discusses this dilemma. Through various surveys, researchers found that “respondents generally thought self-driving cars should be programmed to make decisions for the greatest good.” What exactly constitutes the ‘greatest good’, however, is not clear. In one particular series of quizzes, researchers found that most people would rather save themselves than spare others. Is that the choice that provides the greatest good, though?
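
To make the tension concrete, here is a minimal, purely hypothetical sketch of how these two competing rules might be encoded. The Outcome structure, the numbers, and both policy functions are my own illustrative assumptions, not anything described in the article:

```python
# Hypothetical sketch: two candidate policies for an unavoidable-crash scenario.
# "Outcome" is an assumed toy structure; real systems weigh far more factors.

from dataclasses import dataclass

@dataclass
class Outcome:
    occupants_harmed: int    # people inside the car
    pedestrians_harmed: int  # people outside the car

def greatest_good(options):
    """Utilitarian rule: minimize total harm, regardless of who is harmed."""
    return min(options, key=lambda o: o.occupants_harmed + o.pedestrians_harmed)

def self_preservation(options):
    """Occupant-first rule: minimize harm to the car's own passengers."""
    return min(options, key=lambda o: o.occupants_harmed)

# The same facts, two defensible answers:
options = [Outcome(occupants_harmed=1, pedestrians_harmed=0),
           Outcome(occupants_harmed=0, pedestrians_harmed=3)]
print(greatest_good(options))      # harms 1 occupant, spares 3 pedestrians
print(self_preservation(options))  # spares the occupant, harms 3 pedestrians
```

Even in this toy form, the two rules disagree on identical facts, which is exactly the ambiguity the surveys surfaced.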

In a talk at Loyola University Chicago, Jordan Grafman presented his research on the human brain and the mechanisms behind making moral, legal, religious, and political decisions. Using PET scans, researchers captured brain activity in patients as they judged or ranked various moral, legal, religious, and political statements presented to them. Prior to the presentation, I read “The neural basis of human moral cognition”, a journal article to which Grafman contributed. The article defines morality as referring to the “consensus of manners and customs within a social group, or to an inclination to behave in some ways but not in others.” It attributes moral cognition to the interplay of cortical regions (anterior prefrontal cortex, lateral and medial orbitofrontal cortex, dorsolateral prefrontal cortex, anterior temporal lobes, superior temporal sulcus), subcortical structures (amygdala, ventromedial hypothalamus, septal area/nuclei, basal forebrain, third ventricle, rostral brainstem tegmentum), and large areas of the frontal and temporal lobes, brain stem, basal ganglia, and other subcortical structures. Clearly, many different parts of the brain work together to complete the task. In his talk, Grafman went into more depth about which brain structures are activated when working with a particular belief. The general conclusion, though, was that there is no single moral, legal, political, or ‘God’ spot in the brain, nor a dedicated brain network unique to each specific belief. Further research is necessary to better understand how the human brain produces these judgments.

Given the conclusions Grafman presented, how could we expect a machine to make such a decision when we don’t even fully understand how the human brain makes moral decisions? Additionally, the article mentions that different parts of the brain are activated for different decisions, all of which vary with cultural context. The New York Times article suggested creating an assortment of car algorithms to reflect the variety of values in different societies. If there were various algorithms, it would be difficult to assess whom to blame in potentially harmful incidents: the buyer or the algorithm? Rather than relying solely on the machine, researchers emphasize that this must be a “partnership between the human and the tool, and the person should be the one who provides ethical guidance.” Before this invention becomes an everyday luxury, its philosophical aspects must be clearly settled and only then incorporated into the technology.
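
As a thought experiment, the “assortment of algorithms” idea might amount to little more than a configuration table. The sketch below is entirely hypothetical (the region names, policy names, and the harm tuples are my own assumptions, not anything from the article), but it shows why accountability gets murky once the ethical rule is just a settable parameter:

```python
# Hypothetical sketch of region-specific ethical policies, as the article's
# "assortment of car algorithms" idea might look in code. All names and the
# (occupants harmed, pedestrians harmed) tuples are illustrative assumptions.

# Each policy scores an outcome; a lower score is preferred.
POLICIES = {
    "utilitarian": lambda occupants, pedestrians: occupants + pedestrians,
    "occupant_first": lambda occupants, pedestrians: (occupants, pedestrians),
}

# A per-market table mapping each region to the policy sold there (assumed names).
REGION_POLICY = {"region_a": "utilitarian", "region_b": "occupant_first"}

def choose_outcome(options, region):
    """Pick the least-bad outcome under the policy configured for this region."""
    score = POLICIES[REGION_POLICY[region]]
    return min(options, key=lambda o: score(*o))

options = [(1, 0), (0, 3)]  # (occupants harmed, pedestrians harmed)
print(choose_outcome(options, "region_a"))  # (1, 0): minimizes total harm
print(choose_outcome(options, "region_b"))  # (0, 3): protects the buyer
```

When the moral rule reduces to one entry in a lookup table, it becomes genuinely unclear whether responsibility lies with the engineer who wrote the policies, the market that selected one, or the buyer who drove off with it.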


References:

Markoff, John. "Should Your Driverless Car Hit a Pedestrian to Save Your Life?" The New York Times, The New York Times Company, 23 June 2016, www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html?_r=0. Accessed 8 Mar. 2017.

Moll, Jorge, et al. "The Neural Basis of Human Moral Cognition." Nature Reviews Neuroscience, vol. 6, Oct. 2005, pp. 799-809.

Grafman, Jordan. "The Believing Brain." Loyola University Chicago, 28 Feb. 2017, Chicago. Lecture.


