Freewheeling down the hills around San Francisco on two wheels, Chris Gerdes often thinks about computers on four – driverless cars, in other words. On one recent ride, the mechanical engineering professor had one of his eureka moments.
As cars passed him they crossed the double centre line, in contravention of the California traffic code, and he asked himself whether a driverless car would do the same.
“We have acquired human behaviour that is even expected by society – to make way for cyclists, even though it’s not strictly legal,” says Gerdes, who works at Stanford University.
“How do we teach a driverless car to break the rules in this way, and does the centre line have any meaning for a car controlled by a robot?” he wonders.
Gerdes expects a driverless car to assess any situation better than a person would: it will only overtake the cyclist when there is nothing coming the other way.
Gerdes carries out his research in Silicon Valley with the help of postgraduate students. His current work involves Shelley, a retooled Audi that is confronted with unexpected situations, for example a pedestrian suddenly stepping into the road from behind a parked car.
“We can never construct a perfect system, but we have to try to make it as safe as possible,” Gerdes says.
Gerdes recently received an e-mail from Patrick Lin, a professor of philosophy in San Luis Obispo, halfway between San Francisco and Los Angeles. “Are you thinking about all the ethical questions that driverless cars will raise?” Lin queried.
The two academics are now working in tandem: the philosopher dreams up a scenario, and the engineer looks for the answer.
For example: suppose the car has to swerve suddenly. If it turns left, it will kill an 80-year-old grandmother; if it turns right, it will smash into an 8-year-old girl. How does the driverless car choose?
Carmakers are uncomfortable with questions of this kind, stressing that their vehicles are not designed to choose between collision victims but to avoid collisions altogether, especially with pedestrians and cyclists.
And they are sure of one thing: driverless cars will have far fewer accidents than human drivers do today.
Lin nevertheless calls for a public debate on the ethics. “How do the programmers arrive at their choices? Have they thought through the consequences?” he asks.
He recommends that the automotive sector discuss these ethical questions openly before the vacuum is filled with speculation and groundless fears.
The first fatal “driverless accident”, involving a Tesla and a truck in Florida in May 2016, provoked a media storm. The manufacturer stressed that the car was not fully driverless but was operating in Autopilot mode.
After investigating the crash, in which the driver was killed while watching a Harry Potter movie, the National Highway Traffic Safety Administration (NHTSA) concluded that the system had worked as intended, but that the driver should not have relied on it to the extent that he did.
Incidents like this generate instant headlines, but the expectation is that this phase will end as the novelty wears off and the technology improves. Even so, polls show that motorists remain sceptical of driverless technology.
Google began testing driverless cars on public roads as early as 2009, forcing established carmakers to follow suit. German companies are also testing the technology in Silicon Valley, and BMW plans to bring a fully autonomous car to market in collaboration with Intel by 2021.
Dirk Wisselmann, responsible for ethical issues at BMW, says any algorithm based on the idea “child comes before old lady” would never be programmed. “It violates the German constitution. The answer can only be: material damage rather than personal injury,” he says.
Given the speeds that these vehicles are programmed to travel at, Wisselmann sees little danger for pedestrians.
“At 30 kilometres an hour, braking distance is around four metres. At this speed a car can swerve around 50 centimetres to the left or right. So how realistic is this dramatic scenario?” he asks.
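Wisselmann’s figures hold up under simple kinematics. The following minimal sketch checks them, assuming constant deceleration at the limit of tyre grip and ignoring reaction time; the friction coefficient is an illustrative assumption, not a figure from BMW.

```python
# Rough sanity check of the quoted braking distance, assuming constant
# deceleration at the limit of tyre grip and no reaction delay. The
# friction coefficient below is an illustrative assumption, not BMW data.

MU = 0.9   # assumed tyre-road friction coefficient (dry asphalt)
G = 9.81   # gravitational acceleration in m/s^2

def braking_distance(speed_kmh: float) -> float:
    """Stopping distance in metres, from v^2 = 2*a*d with a = MU * G."""
    v = speed_kmh / 3.6           # convert km/h to m/s
    return v ** 2 / (2 * MU * G)  # d = v^2 / (2a)

print(f"Braking distance from 30 km/h: {braking_distance(30):.1f} m")
# Prints roughly 3.9 m, in line with the "around four metres" quoted above.
```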
One thing Gerdes is sure about is that an emergency override is not the answer.
“Today most accidents are caused by people reacting wrongly to an unexpected situation,” he says, noting that hitting an emergency button switches control to the human driver at the worst possible moment. “No, the car must be able to take its own decisions,” he says. – DPA