Is the Vision System a Step Towards Coding Ethics into Self-Driving Vehicles?
Featured image credit: JasonDoiy/istockphoto.com

In March 2018, a 49-year-old woman walking her bicycle across the street in Tempe, Arizona, was struck and killed by an autonomous vehicle being tested by Uber, which was traveling at about 40 miles per hour. Two months later, in May 2018, a Tesla in Autopilot mode briefly accelerated before hitting the back of a fire truck, injuring two people.

Such crashes sparked debate about the ethics of autonomous vehicles. Is driverless car technology really safer than human drivers? And, in the event of an accident, who should be held responsible: the human driver who failed to catch the system's failure, or the manufacturer who created the faulty software?

This ethics discussion has largely been focused on the moment of impact. Defining and implementing ethics in self-driving vehicles is, at its heart, a version of the trolley problem: a hypothetical ethical brain teaser frequently brought up in the debate over self-driving cars.

Autonomous vehicles are programmed to be rule-followers. But if an automated vehicle faces an unavoidable fatal crash, whose life should it save? Should it prioritize the life of the passenger or the pedestrian? Should it save the young or the elderly?

This decision should fall into the hands of consumers, suggest researchers at the European University Institute. They propose an ‘ethical knob’ through which the consumer would set the software's ethical decision-making to impartial (equal importance to all parties), egoistic (preference for the vehicle's passengers), or altruistic (preference for third parties) in the case of an inevitable crash.
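
In software terms, such a knob could amount to little more than a consumer-chosen weighting over the parties a crash might harm. The Python sketch below is a hypothetical illustration of that idea: the three setting names mirror the researchers' modes, but the weights and the harm-scoring function are invented here, not taken from their paper.

```python
from enum import Enum

class EthicalKnob(Enum):
    IMPARTIAL = "equal importance to all parties"
    EGOISTIC = "preference for the vehicle's passengers"
    ALTRUISTIC = "preference for third parties"

# Hypothetical harm weights (passengers, third parties) per setting;
# a higher weight means the planner works harder to spare that group.
WEIGHTS = {
    EthicalKnob.IMPARTIAL:  (1.0, 1.0),
    EthicalKnob.EGOISTIC:   (2.0, 1.0),
    EthicalKnob.ALTRUISTIC: (1.0, 2.0),
}

def weighted_harm(knob, passengers_at_risk, third_parties_at_risk):
    """Score a candidate maneuver; the planner would pick the maneuver
    with the lowest score. Illustrative only -- not the EUI model."""
    w_pass, w_third = WEIGHTS[knob]
    return w_pass * passengers_at_risk + w_third * third_parties_at_risk

# Example: swerving endangers 1 passenger; braking endangers 2 pedestrians.
swerve = weighted_harm(EthicalKnob.ALTRUISTIC, 1, 0)
brake = weighted_harm(EthicalKnob.ALTRUISTIC, 0, 2)
print("swerve" if swerve < brake else "brake")  # altruistic -> swerve
```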

Interestingly, in a series of surveys, researchers found that people endorse utilitarian ethics when it comes to self-driving cars: automated vehicles should minimize casualties in the case of an unavoidable accident. Yet few people would be keen on riding in a car that might value the lives of several other people over the few people in the car.

Luckily, recent research offers some hope. By zeroing in on a person's gait, symmetry, and foot placement, a group of researchers at the University of Michigan is trying to develop an algorithm that takes into account not only what a pedestrian is doing but how he or she is doing it. They claim this will help self-driving vehicles recognize and predict pedestrian movements with greater accuracy than current technologies are capable of.

According to Ram Vasudevan, U-M assistant professor of mechanical engineering, prior work in this area has typically looked only at still images, not at how people move in three dimensions. But by utilizing video clips of humans in motion, captured through cameras, LiDAR, and GPS, the U-M system can study the first half of a snippet to make its predictions and then verify their accuracy against the second half.
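
In outline, that predict-then-verify scheme is simple to express. The Python sketch below is a minimal stand-in, assuming a synthetic pedestrian track and a deliberately crude constant-velocity predictor in place of the U-M team's actual learned model: it forecasts from the first half of the track and scores the forecast against the second half.

```python
import numpy as np

def constant_velocity_forecast(observed, n_future):
    """Extrapolate future positions from the mean velocity of the
    observed frames. A crude stand-in for a learned motion model."""
    velocity = np.diff(observed, axis=0).mean(axis=0)  # avg step per frame
    steps = np.arange(1, n_future + 1)[:, None]        # 1, 2, ..., n_future
    return observed[-1] + steps * velocity

# Synthetic pedestrian track: 40 frames of (x, y) positions in meters.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(scale=0.1, size=(40, 2)), axis=0)

# Predict from the first half of the snippet...
first_half, second_half = track[:20], track[20:]
predicted = constant_velocity_forecast(first_half, n_future=20)

# ...then verify the predictions against the second half.
error = np.linalg.norm(predicted - second_half, axis=1).mean()
print(f"mean prediction error: {error:.3f} m")
```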

The researchers are training the system to recognize motion and make predictions about things like stop signs, as well as where a pedestrian's body will be at the next step, and the next, and the next. Keeping such an eye on pedestrians and predicting their next steps is a major part of any autonomous vehicle's vision system. If such vehicles can recognize that people are present and understand where they are, it will make a huge difference in how they operate.
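
That "next step and the next and the next" phrasing suggests an autoregressive rollout, in which each predicted position is appended to the history and fed back in for the next prediction. The sketch below is hypothetical; last_velocity_step is a toy single-step predictor standing in for whatever learned model actually produces each step.

```python
def rollout(step_fn, history, n_steps):
    """Autoregressive rollout: predict one position, append it to the
    history, and predict again from the extended history."""
    path = list(history)
    for _ in range(n_steps):
        path.append(step_fn(path))
    return path[len(history):]

# Toy single-step predictor: carry the last observed velocity forward.
def last_velocity_step(path):
    (x0, y0), (x1, y1) = path[-2], path[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

# Two observed positions, three predicted steps ahead.
print(rollout(last_velocity_step, [(0.0, 0.0), (0.4, 0.1)], n_steps=3))
```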

The results of the research have been encouraging so far and show that this new system improves an autonomous vehicle's capacity to recognize what is most likely to happen next. But it is too early to expect this algorithm to form part of an upcoming autonomous vehicle. The approach is still young, and validating it is the first step toward making it an integral part of an autonomous vehicle's vision system.

Final Thoughts

Leading technology, rideshare, and automotive companies, including Lyft and General Motors, already have self-driving projects in progress. And, according to an estimate by the Brookings Institution, over $80 billion has already been invested in self-driving technology. So, it is time to seriously consider the ethics of self-driving cars before the potential of automated vehicle technology is fully realized.

Can the vision system solve the problem? Only time will tell. In the meantime, to stay on the safe side, it is advisable to rely on proven technologies like telematics until the ethical fog clears.

What do you think about the ethics of autonomous vehicles and the role vision systems will play? Do you have any questions? Please feel free to leave your comments below.

Want To See For Yourself How Route4Me Can Boost Your Profits?

Whether you want to slash the time it takes you to plan routes for your drivers, increase the number of stops they can make, or keep your customers satisfied knowing that your drivers show up on time… Route4Me helps you achieve that!
Start Free 7 Day Trial

About author: Rahul Dasgupta

With a master’s in computer science and over two decades in logistics technology, Rahul Dasgupta is an authority in route optimization and last mile logistics. At Route4Me, Rahul uses his expertise to help businesses maximize delivery efficiency through strategic route planning and innovative logistics solutions, ensuring optimal fleet performance and cost-effectiveness.

About Route4Me

Route4Me has over 40,000 customers globally. Route4Me's Android and iPhone mobile apps have been downloaded over 2 million times since 2009. Extremely easy to use, Route4Me's apps create optimized routes, synchronize routes to mobile devices, enable communication with drivers and customers, and offer turn-by-turn directions, delivery confirmation, and more. Behind the scenes, Route4Me's operational optimization platform combines high-performance algorithms with data science, machine learning, and big data to plan, optimize, and analyze routes of almost any size in real time.