Ethical Dilemma: Self-Driving Cars

Zoe Petroianu
5 min read · Dec 13, 2020


James Vincent / Source

The Rules

Isaac Asimov set out the Three Laws of Robotics in 1942.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added a fourth, the Zeroth Law:

  0. A robot may not harm humanity or, by inaction, allow humanity to come to harm.

A world of self-driving cars is something many people in the artificial intelligence field fantasize about. However wonderful you believe this fantasy may be, we can’t disregard the ethical concerns that many researchers have raised. Can a car truly make the right decision when someone’s life is on the line?

When you’re on the road and an accident is about to become unavoidable, you simply react in that moment. It is sudden and understood as just that: a reaction. In the world of self-driving cars, however, there is no driver to react. Instead, a programmer decides the car’s course of action beforehand. If you were to swerve left into a motorcycle, that was your reaction. If the car were to turn left into a motorcycle, that choice was made in advance, and it starts to look like premeditated murder.

Many argue that self-driving cars will significantly decrease car crashes because they eliminate the risk of human error. Although this may be true, crashes will still occur, and their outcomes can still be seen as unethical. What happens in a car crash will have been determined years in advance by programmers. Even if we say “minimize harm” is the answer, questions remain. If the car needs to decide between crashing into one of two motorcyclists, which one should it choose? The one wearing a helmet or the one without? If you choose the one with the helmet because it “minimizes harm”, then you are punishing the responsible citizen. And in the reverse situation, the car wouldn’t be “minimizing harm” anymore.
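To make the paradox concrete, here is a tiny, purely hypothetical sketch of what a naive “minimize harm” rule might look like in code. The injury estimates and the helmet check are invented for illustration; no real vehicle is programmed this way.

```python
# Hypothetical illustration of a naive "minimize expected harm" rule.
# The injury probabilities below are made up for the example.

def expected_harm(rider):
    """Rough chance of serious injury in a collision (invented numbers)."""
    return 0.4 if rider["helmet"] else 0.9

def choose_target(rider_a, rider_b):
    """Steer toward whichever collision the rule scores as less harmful."""
    return rider_a if expected_harm(rider_a) < expected_harm(rider_b) else rider_b

helmeted = {"name": "rider with helmet", "helmet": True}
bareheaded = {"name": "rider without helmet", "helmet": False}

# The rule always picks the helmeted rider, the very person who took
# the responsible precaution.
print(choose_target(helmeted, bareheaded)["name"])
```

A rule this simple looks reasonable on paper, but the moment it is written down it hard-codes exactly the kind of bias described above.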

The algorithms in self-driving cars will favor certain objects to crash into, with no fault assigned to the humans involved. Many people also question whether they would buy a car that saves as many lives as possible or one that saves its passenger no matter what. Should self-driving cars be programmed to make a random decision, or a decision based on expected fatalities and priorities? If you were to ride in a self-driving car that made a random decision, there would be no morals involved at all.

The Trolley Problem

David Engber / Source

It’s all a question of morals, so it helps to look at a famous philosophical problem: the Trolley Problem.

If a trolley with broken brakes were approaching two paths, as depicted above, and you had control over which path it would take, which would you choose? More importantly, why would you make that decision? This example is fairly simple, a choice between five lives and one. However, MIT built the Moral Machine, where you can test what you would do when many more factors are involved, including age, species, gender, social values, and fitness.

MIT Moral Machine / Source

Whatever we think we would do, chances are we either wouldn’t actually do it in the moment, or it just isn’t the morally correct choice.

There are a few things that all autonomous vehicles should do:

  • Align with our values
  • Perform even better than humans
  • Use machine learning

Machine learning is often thought of as a way to solve abstract problems using logic and math. To tackle this one, manufacturers are exploring an evolutionary approach. By setting out simple rules at the very beginning, the machine can learn on its own what is good and bad. As it evolves, we can provide checkpoints: present it with a situation, then analyze whether its decision makes sense and whether people agree with it. If not, we tell the machine that the decision was wrong, and it keeps learning from there. A good analogy is raising a child. You don’t tell them everything; they figure out a lot on their own. In the end, the self-driving car would come to match our own values.
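The sketch below is a minimal, hypothetical illustration of that evolutionary idea: a population of candidate decision rules is scored against a handful of human-reviewed “checkpoint” scenarios, and the rules that agree with people most often survive and mutate. The scenarios, features, weights, and scoring are all invented for the example and are not based on any real manufacturer’s system.

```python
import random

# A purely illustrative sketch of the evolutionary idea described above.
# The scenarios, features, and scoring are invented for this example.

# Each checkpoint is (features of path A, features of path B, human-approved path).
# Features: [number of people at risk, path leaves the road onto the sidewalk (1/0)]
CHECKPOINT_SCENARIOS = [
    ([5, 0], [1, 0], "B"),   # reviewers preferred endangering one person over five
    ([1, 0], [2, 0], "A"),
    ([1, 1], [1, 0], "B"),   # reviewers preferred staying on the road
]

def decide(weights, path_a, path_b):
    """Pick the path with the lower weighted 'harm' score."""
    score_a = sum(w * f for w, f in zip(weights, path_a))
    score_b = sum(w * f for w, f in zip(weights, path_b))
    return "A" if score_a <= score_b else "B"

def fitness(weights):
    """Count how often this rule agrees with the human-reviewed checkpoints."""
    return sum(decide(weights, a, b) == approved
               for a, b, approved in CHECKPOINT_SCENARIOS)

# Evolve a small population of candidate rules toward human agreement.
population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    # The next generation is made of mutated copies of the best rules.
    population = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                  for _ in range(20)]

best = max(population, key=fitness)
print("learned weights:", best,
      "| agreement:", fitness(best), "of", len(CHECKPOINT_SCENARIOS))
```

In practice the checkpoints would be far richer than three toy scenarios, but the shape of the loop is the same: propose a rule, compare its decisions against human judgment, correct it, and repeat.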

Although not everyone may agree, this is believed to be the best approach to resolving the dilemma. There will be much more debate over what is morally correct, but we need to settle on something, because the benefits of self-driving cars are astronomical.

If anyone in high school is interested in learning more about AI, I highly recommend checking out InspiritAI. I hope you enjoyed my three-part blog series on self-driving cars, and stick around for more AI-related content in the future.

Zoe Petroianu is a Student Ambassador in the Inspirit AI Student Ambassadors Program. Inspirit AI is a pre-collegiate enrichment program that exposes curious high school students globally to AI through live online classes. Learn more at https://www.inspiritai.com/.

Sources:

Lin, P. (2015, December 08). The ethical dilemma of self-driving cars — Patrick Lin. Retrieved December 13, 2020, from https://www.youtube.com/watch?v=ixIoDYVfKA0

Meaker, M. (2019, August 28). How Should Self-Driving Cars Choose Who Not to Kill? Retrieved December 13, 2020, from https://onezero.medium.com/how-should-self-driving-cars-choose-who-not-to-kill-442f2a5a1b59

Pachter, J. (2018, July 25). Ethics in Autonomous Cars | Josh Pachter | TEDxUniversityofRochester. Retrieved December 13, 2020, from https://www.youtube.com/watch?v=KnD27GdhZxUe

Three Laws of Robotics. (2020, December 07). Retrieved December 13, 2020, from https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
