The hypothetical scenario is frightening. You’re enjoying a relaxing ride in your automated car. Windows down, soft rock on the radio, chowing on a cheeseburger without a care in the world. Your smart car has safely navigated you to and from work a thousand times, and you trust it implicitly.
But one day, something in the car’s CPU goes haywire, and it doesn’t recognize a detoured area until it’s far too late. The car suddenly computes that it will have to sharply turn left or right since braking hard would cause a rear-end accident. The only problem is that to the left is a crowd of tourists taking pictures of a statue, and to the right is a single mother carrying her child.
What does the car do? How can a car choose between endangering the lives of the people behind you and veering into the path of pedestrians? When a potentially deadly accident is unavoidable, what are the implications for automated vehicles?
Although technology has advanced to a point where fully-automated cars are now being rolled out onto the busy streets of major cities all over the world – including Boston’s South Shore – it has not advanced to a point where we can design computers to bear the heavy burden of making complex moral choices. Computers do not yet have a conscience, after all.
The dilemma is sure to cause much tension as more and more driverless cars take to the roadways, and it will only intensify in the likely event that these cars eventually cause bodily harm to their occupants or to pedestrians on the streets.
Surrendering control for safety?
The proponents of automated vehicles argue that if every car drove itself – according to carefully-designed algorithms that process their surrounding environments and connect with other vehicles – the roads would actually be much safer than with purely human drivers at the wheel.
After all, over 35,000 people died in traffic accidents in 2015, according to the National Highway Traffic Safety Administration. Although this number has dropped since the days when cars were made of heavy steel, had no airbags and no seatbelts, the fact remains that driving – even in today’s safety-conscious world – is still a very dangerous activity.
If cars operated autonomously, in harmony with one another, able to instantly process risk and navigate out of danger, it would remove the human unpredictability that causes so many accidents – coffee spilling on a lap, texting while driving, falling asleep at the wheel.
But the conundrum is that people currently feel safer having total control over their vehicles. Surrendering that control to a computer when your life is on the line is by no means easy, especially given decades of societal conditioning that has taught us technology will always fail eventually.
Will society one day come to a point where we feel comfortable enough giving up control of our vehicles? If the numbers prove the theory that roads will be safer, perhaps. But the morality question will likely linger.
As more driverless cars hit the roads, it is important to have a team of professionals behind you to navigate the complex and constantly-changing legal landscape behind automated driving.
The legal professionals at Altman & Altman LLP have over 40 years of experience dealing with every variation of automobile accident, and the future of driverless cars is an area we strive to stay on top of and monitor closely. Should you get into an accident or be victimized by a driverless car, we’ll work tirelessly to figure out your rights and get you financial compensation whenever and however possible.
Call us for a free consultation today at 617-492-3000 or toll-free at 800-481-6199. We are available 24/7.