Sunday, January 24, 2016

Week 3 Reading Response

CAN MORAL DISPUTES BE RESOLVED?

In this reading, the author poses the question: “So, which is it: right because God chose it, or chosen by God because right? Most people think it’s the latter.” The question led me to another one that I think complements it well: did the moral and religious person gravitate toward religion because of their already established morality, or did religion make the person moral? Having grown up in a religious environment, I've seen my fair share of converts who gravitated toward religion initially because they'd reached a low point - a complete breakdown in their ability to be moral - and wanted to remedy it through religion. Like an army recruit during basic training, this type of convert is often characterized as a moral blank slate that is then instructed anew on morality through religious teachings. If morality is fluid and permits different - sometimes opposing - views depending on one's culture or religious affiliation, it's easy to see this type of convert aligning him/herself with the "right because [their] God chose it" camp. This type of convert can often be found in the "pre-conventional" phase of moral evolution.

Another type of convert, however, is the one who picks up a few religious practices here and there out of admiration for their practitioners and gradually gravitates toward the active practice of that religion. This type of convert would most likely align him/herself with the "chosen by [their] God because right" camp, I think. This is typically the convert who 1) found validation in religious teachings for things they were already practicing on their own or as part of a non-religious subculture, or 2) converted from a previous religion that they'd perhaps been born into and had effectively been nurtured by. They'd fall into the conventional or post-conventional phase of moral evolution.

Breezing through the comments below the reading, I found it interesting that a few commenters bring up the golden rule (do unto others...) as a solution for resolving cross-cultural moral disputes like honor killings. This struck me as pretty naive on their part. Someone even goes so far as to call it the "most universal moral/ethical principle," which I find false, because even here in the Western world there are many situations where it would be unwise to visit upon others the kinds of things we as unique individuals - with our separate tastes and tics and desires and quirks - wish to have done to us. Some people like being mistreated. Some people like being talked down to. Some like being objectified. Some like being mishandled. This is in no way a judgment on people for the things they like; it's just an illustration of how problematic that line of thinking can be. The other thing about the golden rule is that it doesn't take into account the reasons people formulate in their heads for why they're really doing something to someone, why they feel that something has to be done, and the "greater good" that, in their view, will ultimately result from the act.

WHY SELF-DRIVING CARS MUST BE PROGRAMMED TO KILL

I thought about the similarities and differences between the trolley problem and the self-driving car having to choose between the pedestrians in its path and its owner/passenger. It is interesting that in the few variations of the trolley problem I've read so far, the passengers inside the trolley are never in danger of being killed themselves, or even mentioned at all. If such a variation exists and I just haven't read it yet, I stand corrected; but back to my point. I'm making the comparison to raise the issue of trust between man and machine, and also to ask whether it would ever be ethical, or even feasible, to include the human passenger of a self-driving car in the decision between mowing down a bunch of unsuspecting pedestrians and careening into the nearest wall. Unlike the AI aboard the self-driving car, the trolley operator's decision can easily be influenced by the passenger(s), whether through civil conversation or through mixed martial arts. I haven't read anything on the self-driving car's equivalent of a roundhouse kick to the trolley operator's face. Perhaps a manual override option could be included in the self-driving car's settings.

People board a trolley expecting its operator to be 1) well-trained and 2) accountable for his decisions behind the wheel. However, in moments like the ones described in the trolley problem, I also think people expect the trolley operator to 3) think and act a lot like his passengers would, were they in his shoes, and 4) take his passengers' considerations into account. This is what builds trust between the operator and his/her passengers. I think one of the easiest ways designers of self-driving cars could get people to trust their products is to let passengers know that they'd be able to influence the car's decision if ever faced with such a scenario. In the trolley, there's give and take between the autonomy the operator enjoys and the autonomy the passengers have. In the self-driving car, it seems to be all autonomy for the car and none for its passenger(s).
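
Purely to make that manual-override idea concrete, here's a toy sketch in Python of how a passenger-facing "ethics setting" might feed into the car's choice in an unavoidable-collision scenario. Everything here - the EthicsSettings object, the decide function, the two-option menu - is hypothetical and mine, not something any real self-driving system actually exposes.

# Hypothetical sketch only: no real autonomous-driving system exposes settings
# like these. It just illustrates "passenger influence" on the car's decision.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class HarmChoice(Enum):
    PROTECT_PASSENGER = "stay the course; risk the pedestrians"
    MINIMIZE_TOTAL_HARM = "swerve into the wall if that harms fewer people"


@dataclass
class EthicsSettings:
    """Preferences a passenger might set ahead of time (hypothetical)."""
    default_choice: HarmChoice = HarmChoice.MINIMIZE_TOTAL_HARM
    allow_live_override: bool = True


def decide(pedestrians_at_risk: int, passengers_on_board: int,
           settings: EthicsSettings,
           live_override: Optional[HarmChoice] = None) -> HarmChoice:
    """Choose an action in an unavoidable-collision scenario.

    The passenger's pre-set preference is the default; a live override (the
    self-driving equivalent of arguing with the trolley operator) is honored
    only if the settings permit it.
    """
    if settings.allow_live_override and live_override is not None:
        return live_override
    if settings.default_choice is HarmChoice.MINIMIZE_TOTAL_HARM:
        return (HarmChoice.MINIMIZE_TOTAL_HARM
                if pedestrians_at_risk > passengers_on_board
                else HarmChoice.PROTECT_PASSENGER)
    return settings.default_choice


if __name__ == "__main__":
    settings = EthicsSettings()
    # Five pedestrians vs. one passenger, no override: minimize total harm.
    print(decide(pedestrians_at_risk=5, passengers_on_board=1, settings=settings))
    # Same scenario, but the passenger overrides in the moment.
    print(decide(5, 1, settings, live_override=HarmChoice.PROTECT_PASSENGER))

Whether it would ever be ethical (or legal) to let a passenger flip that live_override switch is, of course, exactly the open question.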

On a darker and more fear-mongering note, imagine a group of people bent on killing a single person and deciding that the best way to do it would be to run out, all at once, in front of the person's self-driving car while he's inside it.

THE ROBOT'S DILEMMA

This was the most enjoyable read for me this week. I appreciated the distinction between the "machine-learning" approach and the "rule-based" approach to programming robots to make moral decisions, and I definitely favor the "rule-based" approach. The reading also got me thinking about one of my favorite sci-fi films of recent years - "Moon." In the film, the main character has a robot companion named GERTY who is very reminiscent of the AI HAL 9000 from "2001: A Space Odyssey". In "Moon," GERTY has moral directives that have been programmed into it in service of its owner's nefarious plans. However, the main character essentially programs his own morality into GERTY simply through interaction with it over time and through manipulation of an almost uniquely human kind. For an even better movie example of this method of overriding a robot's morality, see another movie titled "Robot and Frank."
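
To make that contrast concrete for myself, here's a toy Python sketch - entirely made up, not taken from the Deng reading - of the difference: a rule-based module can only ever do what its hard-coded directives allow, while a "machine-learning"-style module's sense of what's permitted drifts with whatever feedback its teacher gives it, which is more or less what Frank does to Robot.

# Toy illustration of the two approaches from the Deng reading. All names and
# rules here are made up; this is not how GERTY or any real robot is built.
from typing import Callable, Dict, List, Tuple

Action = str

# Rule-based approach: explicit, hand-written directives.
RULES: List[Callable[[Action], bool]] = [
    lambda action: action != "steal",        # never steal
    lambda action: action != "harm_human",   # never harm a human
]


def rule_based_permits(action: Action) -> bool:
    """Permit an action only if every hard-coded rule allows it."""
    return all(rule(action) for rule in RULES)


# Machine-learning approach: behavior drifts with observed feedback.
class LearnedMoralModel:
    """Crude stand-in for a learned model: it just tallies approvals."""

    def __init__(self) -> None:
        self.feedback: Dict[Action, Tuple[int, int]] = {}

    def observe(self, action: Action, approved: bool) -> None:
        good, bad = self.feedback.get(action, (0, 0))
        self.feedback[action] = (good + int(approved), bad + int(not approved))

    def permits(self, action: Action) -> bool:
        good, bad = self.feedback.get(action, (0, 0))
        return good > bad  # drifts toward whatever its "teacher" rewards


if __name__ == "__main__":
    print(rule_based_permits("steal"))      # False, and it always will be
    model = LearnedMoralModel()
    model.observe("steal", approved=True)   # a Frank-style "lesson"
    model.observe("steal", approved=True)
    print(model.permits("steal"))           # True: the learned morality has drifted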

5 comments:

  1. Hi Olu -- if possible, can you bring in clips from either or both movies to help illustrate the point? I think you have a very interesting point!

    1. ROBOT AND FRANK:

      First, the trailer, which lays out the basic premise: Robot is given to Frank to help him around the house, keep him healthy, and keep his brain active. Frank gradually enlists Robot in helping him commit crimes:

      https://www.youtube.com/watch?v=c_EHfppp8PE

      This next clip is a pairing of two scenes that bookend Robot's transition over the course of the film. I think it's one example of the "machine-learning" approach. Frank encounters Robot being harassed by a bunch of kids and simply asking that the kids leave it alone. Frank then tells Robot how to handle a similar scenario the next time it occurs. It just so happens that the next time Robot finds itself in that scenario, it's the town sheriff and his deputies trying to get information out of it that could incriminate Frank:

      https://www.youtube.com/watch?v=6zrkHQdJ7wg

      In this clip, after Robot inadvertently shoplifts an item for Frank, Frank queries it about its programming on stealing and abiding by the law:

      https://www.youtube.com/watch?v=PwQgdJo1i5Y

      These next two clips are of Frank and Robot planning the burglary and then trying to hide the evidence afterwards. Frank has successfully negotiated with Robot, convincing it at first simply to permit the planning phase because it will be good exercise for his deteriorating brain. They both agree to the planning, but not to going through with the actual burglary. Frank is later able to manipulate Robot into actually going through with it, but only after proving to Robot that his plan is foolproof:

      https://www.youtube.com/watch?v=OWlbPbbsEDY
      https://www.youtube.com/watch?v=UinmuTrwoMs

    2. MOON

      It's a little more difficult to find clips from "Moon" that illustrate my point without giving away the film's big twist. Sam Rockwell plays Sam - a miner isolated on the Moon for a three-year stint by the company that employs him. During this time, the company has provided him with GERTY - a robot companion. Here's the trailer:

      https://www.youtube.com/watch?v=k4v39SJswUA

      GERTY's arc showcases some of the things the scientists in the Deng reading cite as limitations of the "rule-based" approach. GERTY has the robot equivalent of a breakdown in its conscience (a breakdown in its programming) when faced with a dilemma it isn't familiar with. It is initially programmed to see the company's bottom line as the ultimate greater good while also seeing Sam's well-being as a high priority. The plot of the film involves those two objectives clashing, which leads to character growth for GERTY.

      The Deng reading cites a scientist as saying that it is currently very difficult to program into robots the ability to visualize counterfactuals. GERTY's design seems to fall in line with this. By bringing up a counterfactual and carefully walking GERTY through it, Sam is able to convince GERTY to do something that misleads its owners and puts someone else in danger but leads to Sam's safety toward the end of the film. What GERTY ends up doing is equivalent to pushing a bystander onto the track in order to stop the trolley - the variation of the trolley problem that illustrates the "double-effect" doctrine.

      With that out of the way, here are a few non-spoilery clips showing Sam's interactions with GERTY:

      https://www.youtube.com/watch?v=aPj6aNTXaoo
      https://www.youtube.com/watch?v=mZr0s6M2CFQ

      Here's one that shows Sam manipulating GERTY into disobeying its owner's order not to let him outside. He does this by being dishonest and by negotiating - things that I think come more easily to humans than they would to a robot like GERTY:

      https://www.youtube.com/watch?v=EZKrmyIbNCo

      Finally,

      SPOILER WARNING:

      Here's a clip that best depicts the moment GERTY stops seeing the company’s bottom line as the greater good and begins to see Sam’s right to know the truth as the greater good:

      https://www.youtube.com/watch?v=8ExWdszTWjs

    3. Thank you Olu -- these are great! We'll play some of the clips in class tonight!
