Sunday, January 24, 2016

Ethical Decision-Making - Week Three

Rosenberg’s discussion on the resolution of moral disputes kind of made me yawn at first. I didn’t necessarily disagree; on the contrary, what he was saying seemed more like common-sense type stuff. But his points on politics and religion caught my attention and made me dig a little deeper. Rosenberg says that political disputes occasionally have a moral aspect, but mostly the arguers (Republican, Democrat, etc.) share the same ends and just don’t agree on the means. But, in a foggy sort of way, he’s also saying that political disputes don’t always have a religious aspect; sometimes they do, and in that case, the political dispute becomes a moral dispute. Am I reading that right? 

Anyway. I like politics. I find them both fascinating and amusing, in more or less equal parts, and observing them play out, particularly on social media, is a thing I do for fun. I spent the entirety of the Fall semester trolling Donald Trump on Twitter so I could analyze him for my Capstone project. It was the most fun I’ve ever had doing homework, but I’m saying that to say this: 

We’ve been watching the 2016 Presidential Election Circus for a few months now, and I’d be hard-pressed to find a candidate who didn’t at least pay lip service to religious interests. For this election’s round of Republican candidates, religion seems to be a most necessary component. We have Ben Carson with his Seventh-day Adventism, Ted Cruz and Mike Huckabee with their Southern Baptist faith, and, of course, Donald Trump, whose views seem to change depending on who he’s talking to at the moment. Then there’s the ‘Is Obama a Muslim?’ situation, Mitt Romney’s Mormonism, and Kim Davis checking out on her job and calling it standing up for her religious beliefs. So is every political dispute now a moral dispute as well? 

There is some question, of course, as to whether religion, or, more specifically, God, is a moral issue at all, but if that’s the case, then we may as well just throw in the towel now, because nobody can really win. 

Rosenberg suggests that moral disputes are intractable. Me being me, I looked that word up in the dictionary just to be sure. Intractable means hard to control or deal with, and in a person, it means stubborn. 

Seems about right. 

Teaching robots (or cars) to be ethical may be an easier job than teaching politicians to be ethical. Teaching a machine to be ethical has its complications, to be sure. But I suppose that as long as you don’t teach the machine to also have free will, you’ll have more success with it than you’d have with a person. 

That most people would applaud others’ use of a utilitarian car as long as they don’t have to use one themselves is telling. Once we have to start thinking about all the implications of this, our heads start to bubble their way toward explosion. Or mine does, anyway. 

It isn’t just the autonomous car itself that’s the problem; it’s also how the autonomous car is used. Take, for example, the movie The Fifth Element. I’m sure there are other movies that depict the same sort of thing, but that one just happens to be one I was forced to watch because I’m married. I won’t attempt to get technical here, because that would be painful to watch, but basically, the cars are all connected to one big system. The police can not only stop a car, they can stop a car and open the door. Now, you say, what’s wrong with that? It eliminates dangerous car chases, blah, blah, etc. The police pull you over, you stop. They want you to open the door, you open it. I’m not advocating the evasion of arrest in any scenario, but don’t you like feeling like the choice is THERE? And what about scenarios in which the police are actually the enemy? Again, I’m only saying what if, but that’s been known to happen. Shouldn’t we have the option to drive to a well-lit, well-populated area? Shouldn’t we have the option to keep a door closed if we fear for our safety? 

It’s a conundrum. 

I suppose that’s where Asimov’s Three Laws of Robotics come into play. As a system, it does seem rather foolproof, but with regard to the discussion on machine learning, there is that catch: machines can learn. This can be a good thing and a bad thing, I guess, because it seems like if a machine can learn, then a machine can learn to break laws. Especially in the interest of self-preservation. 
