Sunday, January 24, 2016

Are these moral dilemmas overly simplified?

Starting with the NYT article "Can moral disputes be resolved?" I'd have to say no in the case of hard-line true believers. The problem with moral disputes based on religion is that the prescribed morality within each group leaves no room for critical thinking or compromise. More moderate members grant themselves permission to think and decide. It's still an uphill battle on some issues, but at least there's a chance. The same is true for political issues: we've become so polarized that nothing gets done. It's one of the reasons I like to joke that when it comes to politics, I'm non-denominational.

This week's reading cluster was interesting, but I think the questions posed were a little too binary. The NYT article showed how answering the question of "honor killing" takes more than a "right or wrong" answer. There are many other factors involved. Simplifying moral dilemmas to yes or no does a disservice to the respondent by forcing an answer based more on emotion than on available facts.

Programming robots and self-driving cars to make ethical decisions, and the implications that follow, are often explored in literature and film. Asimov's "Three Laws of Robotics" often present a conflict within the robotic characters of his stories: the Third Law (self-preservation) conflicts with the First (do no harm) and the Second (obey humans). The replicants in Blade Runner choose to kill in hopes of increasing their lifespans. In The Terminator, Battlestar Galactica and The Matrix, machines programmed with decision-making abilities nearly wipe out the human race. HAL in 2001: A Space Odyssey kills the astronauts because of conflicting directives in his programming. I, Robot (the film version) features a man saved by a robot because he had a higher chance of survival than a child; the man ends up hating robots until he investigates one that killed its creator.

I don't think the public will accept self-driving cars until all cars are self-driving. Choosing whether the car's default loyalty leans toward the passengers or the pedestrians shouldn't be a consumer choice at all. If I opt for the car saving me, am I liable for vehicular manslaughter because I chose not to defer to pedestrians? If I opt to save pedestrians over passengers, could my life insurance refuse to pay out because of that choice? Could family members of the other passengers sue over my car's settings? Removing the choice removes a lot of the headaches associated with pondering the choice, just like religion.

For BuzzFeed's quiz, my answers fell in line with the majority on most of them. The few that differed are due to my personal background rather than any real moral quandary. This is why research questions need more details or additional options; a single variable could change the outcome. Would I tell a close friend about a cheating partner? Yes, but that is not my first choice. I would confront the cheater first and compel them to come clean, and only expose their infidelity if they refused. In the case of the email, the cheater requesting the email from me is an idiot and deserves to be caught. I wouldn't worry too much about company policy, because there are often policies against personal use of work systems anyway, so the cheater was wrong on both counts. Again, they'd get the ultimatum: tell your spouse or I will.

As for the EMT and organ scenario, the answer boils down to triage. All emergency medical responders are trained to stop massive bleeding first. Triage aims to save the patients who can be saved and not waste supplies on those who can't. Many times it comes down to this: the patient will die, or they won't, no matter what you do. As for the organs, if the doctor stole them from the one patient, that doctor might as well steal organs from travelers in third-world countries and leave them in tubs of ice with a note. Taking the organs violates the Hippocratic Oath.

Lastly, the killing-Hitler scenario. I really dislike this one because it leaves so much out. The reading mentions that respondents need to believe in time travel or the question becomes irrelevant. If a person believes in time travel, then that person would believe in time paradoxes too. Think of Marty McFly in Back to the Future: he stopped his parents from meeting and therefore couldn't have been born. If killing Hitler prevents WWII, who's to say the resulting world would be any better? It could become much worse. What if your grandparents or great-grandparents met while fighting in WWII? Preventing that meeting would prevent your birth, and then how could you travel back in time to kill Hitler in the first place? Hitler was instrumental, but he had lieutenants who were architects of the war as well. To be certain, wouldn't you also need to kill men like Himmler, Rommel and Goebbels? Many more things would change besides the genocide of the Jews. There'd be no Israel, but also no violence between Israel and Palestine. Where would the space program be? Without German scientists, we wouldn't have made it to the moon. Killing Hitler might stop the war in Europe, but it may not have prevented war with Japan. Development of nuclear weapons would be delayed, but then who develops them first, or worse yet, where are they deployed? If time travel existed, much of the theory would no doubt be based on the work of scientists like Einstein and von Braun, who left Europe around WWII to work in the U.S. Could time travel itself be prevented if those men had stayed home? There's no telling what could happen. Maybe the question should be, "If you traveled back in time and killed Hitler, thereby preventing WWII, but returned home to a world in utter chaos, would you travel through time again in an attempt to stop yourself from killing Hitler?"

The point is, simplifying research on moral decision making by providing only two choices with limited background seems to give skewed results. It seems to me that using a scale rating strength of agreement, or providing more options, would yield better results. Redesign the test and see if the results match up.
