Sunday, January 24, 2016

Programmable Ethics

Last week in class, we discussed some different schools of thought regarding ethics and morals.  Because individuals hold their own ethical and moral codes, there are no absolutes.  The image I carried away from class was that of a person holding a knife, another person dead at his feet.  I immediately jumped to the conclusion that the knife guy had murdered the dead guy.  I followed the classic Model 1 as outlined in the reading.  What I lacked was information.  Could the knife-holding guy have just picked the knife up, oblivious to the events that had unfolded?  Could he have just wrestled the knife away from the now-dead guy?  Or, supposing that he HAD killed the guy on the floor, could knife-guy have been acting in self-defense, or in defense of a loved one?  And would that make the killing morally acceptable?

People raised in the same culture, same community, and even the same household can have different moral codes and can react to last week's model differently.  This week's readings - largely focused on programmable ethics - made me wish that there WERE a common moral code.  Honestly, I hadn't given much thought to self-driving cars prior to reading these articles.  It seems as though the over-arching guideline with regard to harming others is: "The basic intuition, known to philosophers as the doctrine of double effect, is that deliberately inflicting harm is wrong, even if it leads to good.  However, inflicting harm might be acceptable if it is not deliberate, but simply a consequence of doing good" ("The Robot's Dilemma: Working Out How to Build Ethical Robots Is One of the Thorniest Challenges in Artificial Intelligence").  This doctrine explains why most people wouldn't have sacrificed the perfectly healthy gentleman in the hospital to save the five patients dying from their injuries in one of last week's examples.

I did some additional research (beyond what was assigned) and stumbled upon an article from Wired magazine, "Here's a Terrible Idea: Robot Cars with Programmable Ethics."  The idea of programmable ethics blew my mind.  How could we possibly standardize ethics for robot cars when we can't even come up with a standard set of ethics for humans?  The article contends that attempting to pre-program self-driving cars to react to specific scenarios would be virtually impossible.  We would have to identify every possible scenario and program every variable, including the possible number of passengers, their ages, the presence of animals, construction, and road conditions.  The list is truly mind-boggling!
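Just to make the scale of that list concrete, here is a minimal sketch (my own illustration, not anything from the Wired article) of how quickly the cases multiply once a programmer tries to enumerate even a few coarse variables.  The variables and categories below are assumptions I invented for the example.

```python
from itertools import product

# Hypothetical, coarse-grained crash variables a programmer might try to enumerate.
passenger_counts = range(0, 8)                      # 0-7 occupants
passenger_ages   = ["child", "adult", "elderly"]    # crude age buckets
animals_present  = [True, False]
construction     = [True, False]
road_conditions  = ["dry", "wet", "icy", "gravel"]

# Every combination would need its own pre-programmed response.
scenarios = list(product(passenger_counts, passenger_ages,
                         animals_present, construction, road_conditions))
print(len(scenarios))  # 8 * 3 * 2 * 2 * 4 = 384 distinct cases from just five variables
```

And that is before adding pedestrians, cyclists, weather, speed, or any of the other factors a real intersection presents.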

Another distinction the Wired article makes is that, if the car were programmed to react a particular way to minimize damage - kill one person instead of five - that wouldn't change the fact that one person was dead.  And because that outcome was programmed in advance, it could mean the difference between manslaughter and homicide.  Who would be liable for that person's death?  The car company and its programmers?  The owner of the self-driving car?  In attempting to identify and program for every scenario, we would be eliminating the very human (and often excusable) panic reaction.  Again, the difference between manslaughter and homicide.


In "The Ethical Dilemma of Self-Driving Cars," (Ted.com) Patrick Lin makes several salient points.  If a car were in an unavoidable crash scenario, where one option was to hit a person on a motorcycle wearing a helmet and the other was to hit someone on a motorcycle NOT wearing a helmet, what would the car do?  By hitting the person with the helmet - the one with the greatest chance of survival - that person would be penalized for following safety protocols.  But the converse exposes similar problems.
