Red flags from Tesla’s first fatal crash
The old saying goes something like, “The price of safety is eternal vigilance.”
Well, it looks like the price of artificial intelligence will be eternal vigilance.
On May 7, 2016, a 40-year-old man from Canton, Ohio, had just left a family vacation at Walt Disney World in Orlando and was behind the wheel of his Tesla Model S, an all-electric sedan he had nicknamed “Tessy” and had put 40,000 miles on in his first nine months of ownership. The vehicle was traveling down a divided highway in Florida with its Autopilot system engaged when a tractor-trailer crossed the highway perpendicular to the Model S. Against the bright sky, neither the driver nor the Model S’s camera could distinguish the white side of the trailer. The Model S passed beneath it, and the bottom of the trailer smashed the car’s windshield. It was the first known fatality involving a car driving in autonomous mode.
‘Extremely rare circumstances’
Tesla wrote in a blog post that the circumstances of the crash were extremely rare. The disclaimer shown before Autopilot is enabled describes it as an “assist feature that requires you to keep your hands on the steering wheel at all times.” Even so, Tesla found early in testing that people would still become inattentive drivers, sometimes using the system recklessly and treating it as a substitute for driving.
The “Tesla Team” posted on June 30, 2016, “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert… Every time that Autopilot is engaged, the car reminds the driver to ‘Always keep your hand on the wheel. Be prepared to take over at any time.’ The system also makes frequent checks to ensure the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”
To be sure, this tragedy is a rarity. According to Tesla, its Autopilot system drove 130 million miles before the May fatality, while human drivers in the U.S. average one car fatality every 94 million miles. Drivers have also documented many occasions on which Autopilot helped them avoid a potentially fatal crash.
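The rate comparison above can be made explicit with some back-of-the-envelope arithmetic. The sketch below uses only the figures Tesla reported; with a single Autopilot fatality in the sample, the Autopilot estimate carries enormous statistical uncertainty, so this is an illustration, not an analysis.

```python
# Back-of-the-envelope comparison of the fatality rates cited above.
# Both figures are as reported in Tesla's June 30, 2016 blog post.

autopilot_miles_per_fatality = 130_000_000  # Autopilot miles driven before the crash
human_miles_per_fatality = 94_000_000       # U.S. average cited by Tesla

# Express both as fatalities per 100 million miles for an apples-to-apples view.
autopilot_rate = 100_000_000 / autopilot_miles_per_fatality
human_rate = 100_000_000 / human_miles_per_fatality

print(f"Autopilot:     {autopilot_rate:.2f} fatalities per 100M miles")  # ~0.77
print(f"Human drivers: {human_rate:.2f} fatalities per 100M miles")      # ~1.06
```

On these numbers Autopilot's rate looks lower, but a sample of one fatality is far too small to support that conclusion on its own.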
Still, self-driving car innovators know there will be more days like May 7. Dark days when artificial intelligence glitches combine with inevitable human error to cause tragedy. The self-driving revolution is in its infancy, with many more years and millions of miles of closed-environment testing to come before autonomous cars are marketed for mass consumption. But that’s not stopping Tesla, Google, Uber, Ford Motor, Volvo, BMW and a host of other automakers from rushing ahead to develop driverless technologies.
The bigger picture
Let’s think beyond driverless cars for a moment. Industrial robots of all varieties are a booming population. Worldwide shipments of multipurpose industrial robots were forecast to exceed 207,000 units in 2015, up from around 159,000 in 2012, according to Statista (www.statista.com). By 2018, some 2.3 million units will be deployed on factory floors, more than twice as many as in 2009 (1.0 million), according to the 2016 World Robotics Statistics issued by the International Federation of Robotics (IFR).
The robot population is particularly dense in automobile manufacturing. BMW’s typical installation involves 800 to 1,000 industrial robots. Until recently, robots have been limited to the automated welding and joining of auto frame and body components, because working around high-speed robots can obviously be dangerous. But engineers foresee technology enabling closer collaboration between robots and humans. Some industrial robots stop quickly on contact with a person, allowing them to work near people without hurting them. BMW and other automakers have begun installing these robots on assembly lines. The same human-friendly robots can also shuttle workpieces in and out of machines, pack products, run tests, and place pepperoni slices on pizzas alongside people.
Fatal accidents involving industrial robots are similarly rare. OSHA reports that fatalities involving industrial robots in the U.S. have occurred at an average rate of roughly one per year during the past 15 years. To be sure, the robotics industry has spent the past 30 years creating safety standards to keep workers safe while working with robots. Still, OSHA reports, “Studies indicate that many robot accidents occur during non-routine operating conditions, such as programming, maintenance, testing, setup, or adjustment. During many of these operations the worker may temporarily be within the robot’s working envelope where unintended operations could result in injuries.”
This is where we come back to vigilance. There’s evidence that artificial intelligence is advancing faster than safety protocols can keep up. These protocols must take into account how robots can change human behavior, give a false sense of security, and increase complacency. But to paraphrase and expand on what one automotive expert says, until robots penetrate industrial markets en masse to an even greater extent, it will be difficult to account for how people use them and behave differently.
OSHA has no regulations governing human-robot interactions. The National Highway Traffic Safety Administration has not set standards for fully automated vehicles, although it vows to do so in the next six months. Of course, this comes after the Tesla tragedy. Federal regulators so far leave such decisions to state and local governments, and only a handful of states have passed laws on the testing of autonomous cars.
Now is the time to be proactive with robot safety training, situational awareness, risk assessments and controls. OSHA and the Robotic Industries Association (www.robotics.org) offer a slew of resources, including training as well as hazard recognition and control guidance.