First Of Its Kind Car Crash Raises A Host Of Legal Issues

It was reported last week that a Tesla Model S with the Autopilot system activated was involved in a fatal crash, the first known fatality involving a car that was operating, at least semi-autonomously, without human input. The crash occurred when a tractor trailer drove across a highway, perpendicular to the Tesla. The driver of the Tesla, a 40-year-old Ohio resident named Joshua Brown, was killed when the windshield of his Tesla struck the semi’s trailer. Mr. Brown had previously posted multiple YouTube videos of himself testing the Autopilot feature, and it was also reported that he was watching a Harry Potter movie at the time of the crash.

According to Tesla, Mr. Brown’s death was "the first known fatality in just over 130 million miles where Autopilot was activated," while a fatality occurs once every 60 million miles of driving worldwide. It has been reported that the collision resulted from a failure by both the driver and the Tesla’s Autopilot system to detect the white side of the tractor trailer against the backdrop of a bright white sky. If the witness reports of the Harry Potter video are to be believed, it would hardly be the first instance of a driver being disengaged from the driving experience while in a Tesla on Autopilot. Other YouTube videos show Tesla “drivers” playing games, taking naps and generally keeping their hands off the wheel.

Autonomous vehicles (i.e., those with automated cruise control, lane recognition, automatic braking and the like), currently being developed by Tesla, Google and others, promise to make our streets safer and vastly reduce the number of collisions, injuries and deaths, and all signs point to the realization of this goal within a decade or two. In the meantime, however, the Brown crash highlights the rocky road and growing pains ahead before the industry reaches that goal.

To a large extent, the early entrants into this exciting new technology are victims of their own success. The autonomous features work so well that, within a very short time, drivers simply trust the vehicle to work and start doing other things, like texting, gaming or otherwise taking their hands off the wheel, all while traveling on a freeway at 60 miles per hour. And while it seems crazy at first blush to think drivers would act this way, the autonomy and freedom to get other things accomplished during long commutes is the main hook promoted by the companies developing this technology, and it is what drives the excitement and investment in it.

The Brown crash brings into stark relief the lack of federal rules or regulations governing autonomous driving technology. Indeed, automakers do not need National Highway Traffic Safety Administration (NHTSA) approval before putting these cars on the highway. Accordingly, for now it will be up to the courts to sort out the legal issues that arise when something like the Brown collision occurs, which presents a host of questions over who is responsible in tragic situations such as these. Does the manufacturer of the vehicle bear responsibility for a defective product? Is it the driver’s fault for trusting in the technology even though it is promoted as providing autonomy to the driver?

The NHTSA ranks self-driving cars based on the level of control they cede to the vehicle, with 1 being the lowest and 5 the highest. In the Brown case, Tesla’s Autopilot was developed as a Level 2 technology, meaning it was capable of staying in the center of a lane, adjusting speed according to traffic and changing lanes.
Tesla also provided instructions and warnings making clear that the driver of the car remained responsible for its operation at all times. But what about Google, which is developing Level 4 and 5 technologies that would cede complete control of the car to the software and take the human driver almost entirely out of the picture? Where does responsibility lie in crashes involving that technology? Is it with the company that builds the car? Is it with Google, which aims simply to supply the software to auto manufacturers?

Undoubtedly, there will be a tremendous amount of additional development on Tesla’s Autopilot to engineer around the circumstances that led to the Brown crash. But what happens when the car is perfected to pick up on everything in its surroundings yet still has to make split-second, life-and-death decisions? For instance, what if a tractor trailer pulls in front of an autonomous car at the last second, and the only way to avoid that collision is for the software to cause the car to veer right, onto a sidewalk where an elderly man is walking? Does the car crash you into a likely fatal obstacle, or does it hit the pedestrian and risk his life? Who, then, is responsible for the crash once that decision is made for you?

On balance, autonomous technology seems poised to fulfill the promise of a much safer drive, but it carries with it a host of legal and moral conundrums that will have to be addressed in order for the technology to fulfill its potential.