Loyola’s Dr. Gail Baura spoke on sleep deprivation research in long-haul
truck drivers, and its relevance to the current debate on self-driving
cars. Dr. Baura’s assigned article, “The
Sleep of Long-haul Truck Drivers”, from the New England Journal of Medicine,
reported on a study, conducted in 1988, that used multiple methods of data recording: a questionnaire on sleep habits, electroencephalogram and eye-movement recordings, infrared video of the driver and of the truck’s speed and position, and polysomnography during each night’s sleep period. Together these data were designed to analyze
the drowsiness of truck drivers and the effects it has on their
performance. The study followed 80 drivers on normal, revenue-producing routes, split among two 10-hour schedules and two 13-hour schedules, each starting at a different time of day.
On average, the drivers reported needing 6 to 8 hours of sleep each night to be fully alert, but over the course of the study they averaged only 4.78 hours of measured sleep per night. The results also suggested a circadian influence: episodes of measured stage-1 sleep at the wheel occurred only between 11 p.m. and 5 a.m.
These findings show that shift workers need to be informed of the importance of sleep, of a reliable schedule that follows the body’s natural circadian rhythm, and of the risks that come with sleep deprivation.
Dr. Baura explained the problems with this study in particular and the
topic as a whole. She said there are many proposed measures of driver drowsiness and performance; this one is neither the best nor the most recent, and there is no true “winner” in this field of study. Each method has its flaws, and it is up to the researcher’s judgment to determine which has the best trade-offs.
This is because drowsiness varies from subject to subject: some people have droopy eyes and a low heart rate, while others show a glassy-eyed stare or rapid blinking just before drifting into the first stage of sleep. This variability makes it hard to design a device that can record and recognize these symptoms, and it could lead to a high rate of false alarms if such devices were ever implemented as safety systems in commercial and noncommercial vehicles.
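To see why false alarms could dominate, consider a quick back-of-the-envelope calculation. All of the numbers below are hypothetical, not from the study or the talk; the point is only that even a fairly accurate detector, watching for something as rare as momentary drowsiness, ends up with mostly false alarms.

```python
# Illustrative only: the prevalence, sensitivity, and specificity
# values are assumptions, not figures from the study or the talk.
def false_alarm_share(prevalence, sensitivity, specificity):
    """Fraction of all alarms that are false, via Bayes' rule."""
    true_alarms = sensitivity * prevalence          # drowsy and flagged
    false_alarms = (1 - specificity) * (1 - prevalence)  # alert but flagged
    return false_alarms / (true_alarms + false_alarms)

# Suppose drowsiness is present in 1% of monitored moments and the
# detector is 90% sensitive and 90% specific:
share = false_alarm_share(prevalence=0.01, sensitivity=0.90, specificity=0.90)
print(f"{share:.0%} of alarms are false")  # roughly 92%
```

Even with a detector that is right 90% of the time in both directions, more than nine out of ten alarms would be false, simply because alert moments vastly outnumber drowsy ones.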
According to Dr. Baura, this is already happening: the safety systems implemented in noncommercial vehicles generate overwhelming numbers of false alarms, making the warnings more of an annoyance than a life-saver. She attributed this largely to problems with the sensors. Most sensors
used in manual and self-driving cars are less than ideal, because each has its own “blind spots”: satellite imagery, for example, is useless on a cloudy day. Some manufacturers try to get around this by combining multiple types of sensors, as do researchers in this field of study, but Dr. Baura emphasized that this does not mean the sensors add up to a more accurate analysis; it just combines a lot of iffy data and gives false hope for a safer solution. The Scientific American article “Redefining ‘Safety’ for Self-Driving Cars” cites this as one example of how self-driving cars are far from perfect.
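Dr. Baura’s point about fused “iffy” data can be illustrated with a toy simulation; the scenario, distances, and noise levels below are invented for illustration. When every sensor is degraded by the same underlying cause, such as bad weather, averaging their readings does not make that shared error go away.

```python
import random

random.seed(0)
TRUE_DISTANCE = 50.0  # metres to an obstacle (made-up scenario)

errors = []
for _ in range(10_000):
    # A shared error term, e.g. weather degrading every sensor at once
    shared = random.gauss(0, 2.0)
    # Three sensors whose readings all include that shared error,
    # plus smaller independent noise of their own
    readings = [TRUE_DISTANCE + shared + random.gauss(0, 1.0) for _ in range(3)]
    fused = sum(readings) / len(readings)  # naive fusion: plain averaging
    errors.append(abs(fused - TRUE_DISTANCE))

mean_error = sum(errors) / len(errors)
print(f"mean fused error: {mean_error:.2f} m")
```

Averaging shrinks only the independent noise; the fused estimate stays off by roughly the scale of the shared error, which is the “false hope” Dr. Baura warned about.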
This problem is rooted in the main obstacle for self-driving cars today: human error. We have yet to find a perfect way to analyze a car’s surroundings without risking false assumptions, and poor sensors are only the beginning of the problem. Even when the right assumption is made, the
car has to be programmed to find the safest response to its environment. Human error has been the sole source of self-driving car accidents, aside from one incident in which a truck backed up and hit a self-driving shuttle that had sensed the truck but was programmed only to stop and wait. The truck hit the shuttle’s bumper and then stopped, so no serious injuries occurred. Still, this exposes the greater issue of unpredictable human error and the countless ways it can put passengers in danger. Self-driving cars must obey
traffic laws, but car accidents usually take place when drivers disobey these
laws, and the safest response is often to disobey laws as well. The article explains that when a situation
does not have a predetermined response, self-driving cars will pull over and
stop until the environment returns to normal.
This sounds logical, but Scientific
American argues that this is not always the right choice. What if the best response is to speed up and
avoid a collision, or to swerve before there’s time to signal?
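The pull-over default the article describes can be sketched as a tiny lookup table with a catch-all; the situation names and responses here are invented for illustration, not taken from any real vehicle’s software.

```python
# Toy sketch of the fallback behavior described above: any situation
# without a predetermined response collapses to "pull over and stop".
RESPONSES = {
    "clear_road": "continue",
    "red_light": "stop_at_line",
    "pedestrian_ahead": "brake",
}

def choose_action(situation: str) -> str:
    # The single catch-all default is exactly the weakness Scientific
    # American points out: sometimes speeding up or swerving would be
    # the safer response, but an unrecognized situation never gets one.
    return RESPONSES.get(situation, "pull_over_and_stop")

print(choose_action("red_light"))        # stop_at_line
print(choose_action("truck_reversing"))  # pull_over_and_stop
```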
In both drowsiness research and self-driving programming, the biggest
obstacle is creating a system that accounts for all human variability, but
human behavior is largely unpredictable.
Until we can find a way to program for this variability, self-driving cars will never be 100% accident-free while sharing the road with human drivers, drowsy or not.
Sources
Mitler, M. M., et al. “The Sleep of Long-Haul Truck Drivers.” The New England Journal of Medicine, Massachusetts Medical Society, 1997, https://luc.app.box.com/v/neuroseminar/file/251218239087
Saripalli, Srikanth. “Redefining ‘Safety’ for Self-Driving Cars.” Scientific American, The Conversation US, Inc., 29 Nov. 2017, www.scientificamerican.com/article/redefining-ldquo-safety-rdquo-for-self-driving-cars/.