No can respond: 9 moral dilemmas broke my brain.
But seriously,
the Buzzfeed article promised to break my brain, and it didn't deliver. Maybe, I would argue, that's because the brain isn't where
ethical decisions always take place.
“Logic is how
we reason and come up with our ethical choices,” Luís Moniz Pereira said
in Boer Deng’s Nature article, yet
our other readings seem to contradict this bold stance. Emotions play a huge role in what we decide
is right or wrong.
Rosenberg argues
as much, quite exhaustively, in his New
York Times piece. Morality can’t be
solved like an equation or proven like a theorem. Because ethics are culture-specific, there is no
way to argue objectively about something like honor killing or abortion. In fact, our personal emotional reactions to
a situation are a big part of what we perceive as right or wrong, which is
visible, Rosenberg claims, in brain imaging data.
I know personally
that in almost all of the 9 moral dilemmas, I had a clear feeling about what I
would do. A few took a bit of thinking
over, weighing options and predicting outcomes.
But most of the time, one option simply “felt” like the thing to do.
And that might
get to the heart of some of the issues about programming robots and
self-driving cars to behave ethically.
Morality isn’t logical. As the MIT Technology Review article puts it: “people
are in favor of cars that sacrifice the occupant to save other lives—as long
as they don’t have to drive one themselves.”
That brings up
an interesting question: the same article states that self-driving cars may
have different moral algorithms depending on who they are carrying. But should they also have different moral
algorithms for different cultures? Is it
possible that programmers are searching for one answer to what is right, when
in fact they should be looking for many?
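If I were to sketch that question in code (purely hypothetically; none of the readings describe an actual implementation, and every region, policy, and number below is invented for illustration), a car's "moral algorithm" might just be a policy that gets swapped out depending on the region or the passenger:

    # Hypothetical sketch only: the dilemma, regions, and policies are made up
    # to illustrate the question, not taken from any of the articles.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Dilemma:
        occupants_at_risk: int    # people inside the car
        pedestrians_at_risk: int  # people outside the car

    # A "moral algorithm" here is just a function from a dilemma to an action.
    Policy = Callable[[Dilemma], str]

    def utilitarian_policy(d: Dilemma) -> str:
        # Minimize total harm, even at the occupants' expense.
        if d.pedestrians_at_risk > d.occupants_at_risk:
            return "protect_pedestrians"
        return "protect_occupants"

    def occupant_first_policy(d: Dilemma) -> str:
        # Always protect whoever is riding in the car.
        return "protect_occupants"

    # The open question: should this lookup vary by culture, or by who is in the car?
    POLICIES: Dict[str, Policy] = {
        "region_A": utilitarian_policy,
        "region_B": occupant_first_policy,
    }

    def decide(region: str, dilemma: Dilemma) -> str:
        return POLICIES[region](dilemma)

    print(decide("region_A", Dilemma(1, 3)))  # protect_pedestrians
    print(decide("region_B", Dilemma(1, 3)))  # protect_occupants

The uncomfortable part, of course, is that someone still has to decide what goes in that lookup table.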