The recent fatal crash of a self-driving Tesla in Florida raises serious questions about the safety of autonomous vehicles, and it shows just how far technology and regulation must advance before self-driving cars make the roads safer.
Last year, more than 35,000 people were killed in car crashes in the United States, according to the National Highway Traffic Safety Administration. The vast majority of those crashes resulted from the decisions and actions of people, who may text, drink, or try to read an e-book while driving. If you take people out of the equation and give a car a strict set of best practices in the form of rules to follow, would these accidents still happen?
With self-driving cars, driving decisions are not made at the moment of an accident. They are made at the time of programming, subject to whatever regulatory requirements apply. And some of those decisions will require more thought than you might expect.
Consider this dilemma. Suppose a car is traveling down the street at 40 MPH when three pedestrians suddenly step into the crosswalk. At that speed, braking will not stop the car in time, and the only way to avoid hitting the pedestrians is to swerve and slam into a stone retaining wall, which would likely kill the passenger on impact. Researchers publishing in Science posed this type of dilemma to survey participants, and the responses were contradictory: people agreed the car should be programmed to minimize harm by sparing the pedestrians, but they certainly didn't want to ride in that car themselves.
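The claim that "braking would not stop the car in time" can be checked with a back-of-the-envelope calculation. The sketch below is purely illustrative; the deceleration, sensing latency, and distance to the crosswalk are assumed values, not specifications of any real vehicle:

```python
# Toy calculation: can braking alone avoid a collision?
# All parameters below are hypothetical assumptions for illustration.

def stopping_distance(speed_ms: float, decel_ms2: float, latency_s: float = 0.0) -> float:
    """Distance traveled before stopping: reaction distance + braking distance."""
    return speed_ms * latency_s + speed_ms ** 2 / (2 * decel_ms2)

MPH_TO_MS = 0.44704
speed = 40 * MPH_TO_MS   # 40 MPH is roughly 17.9 m/s
decel = 7.0              # assumed hard-braking deceleration on dry pavement, m/s^2
latency = 0.2            # assumed sensing/actuation delay, s
crosswalk = 20.0         # assumed distance to the pedestrians, m

needed = stopping_distance(speed, decel, latency)
if needed > crosswalk:
    # Braking fails, so the software must choose between swerving
    # (endangering the passenger) and continuing (endangering pedestrians).
    print(f"Cannot stop in time: needs {needed:.1f} m, has {crosswalk:.1f} m")
```

Under these assumptions the car needs roughly 26 meters to stop but has only 20, so the programmed decision rule, not the brakes, determines the outcome.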
So it comes down to perspective. One way or another, cars will have to be programmed to respond to situations like this, and those choices will provoke serious ethical arguments on both sides. With sound regulation and improved technology, self-driving cars could reduce accidents, but legislators and courts will have to make some tough decisions that will shape the future of our relationship with this technology.