Sunday, March 27, 2016

Privacy in the Digital Age - Response to Readings



Americans’ Views About Data Collection and Security – Pew

The first thing that stood out to me in this research was the notion, attributed to “information scholars,” that privacy is not something one can simply “have”; rather, it is something people must work to “achieve.” I find this an interesting point of view, especially in light of the later readings about data protection and privacy laws in Europe, where privacy does seem to be regarded by many as something one can “simply have” – a right a person is entitled to, not something they must actively work to achieve.

I was also drawn to the data showing that people with greater knowledge or awareness of government surveillance programs, including those justified as anti-terrorism efforts, were more likely to hold the strongest views on the subject: believing that certain records should not be saved for any length of time, that social media sites should not save any data, and that government programs lack adequate limitations, and generally disapproving of the programs. I find it interesting that the more people learn about these programs, the more unsettled they are by them. This suggests that most people are in the dark about how much data is actually being collected about them and what is being done with it. In a class last semester, our professor showed us the huge storage facilities for government-collected data, and we discussed the concerns this kind of collection and retention raises. Laws and cultural norms change over time; if data is stored indefinitely, without clear rules or limitations on those storing it, data collected today about a person doing something perfectly legal and culturally acceptable could one day be used against that person should laws or norms shift.

Foiling Electronic Snoops in Email 

I think the title of this article is a little misleading. I was disappointed to learn that the author cannot actually provide a way to “foil” email snoops; readers are left with the conclusion that while there are some things they can do, those options are either incomplete or require taking further risks with their privacy. While I did change the settings on my iPhone, overall this article leaves you feeling more informed but also more unsettled. I find it particularly disturbing that companies are attempting to learn my location and what device I am using whenever I open their emails.
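
The article doesn’t spell out the mechanics, but the standard technique behind this kind of snooping is a “tracking pixel”: a tiny, uniquely named image embedded in the email. The moment your mail client fetches it, the sender’s server learns that the message was opened, when, from what IP address (a rough location), and with what client and device. Here is a minimal sketch of such a server in Python using Flask; the route and names are hypothetical, for illustration only:

```python
# A minimal sketch of an email tracking-pixel server (illustrative only;
# the route and names here are hypothetical, not from the article).
from datetime import datetime, timezone

from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF: the classic "tracking pixel".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

@app.route("/open/<message_id>.gif")
def track_open(message_id):
    # The mere act of the mail client fetching this image tells the
    # sender which message was opened, when, from what IP address
    # (a rough location), and with what client/device (the User-Agent).
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"message={message_id} "
          f"ip={request.remote_addr} "
          f"client={request.headers.get('User-Agent')}")
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run()
```

The email itself would just contain something like `<img src="https://tracker.example.com/open/abc123.gif" width="1" height="1">` (a made-up URL). Blocking remote images in your mail settings – presumably what the iPhone setting change accomplishes – prevents that fetch from ever happening.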

Behind the European Privacy Policy

My first reaction to this piece was: I bet the lawyers who spoke to Schrems’s class are kicking themselves now that he has won his case. I had heard of this case last semester and, as I said earlier, I think it makes an interesting juxtaposition with how Pew says “information scholars,” presumably in the United States, view privacy. For Silicon Valley businesses, infringing on people’s privacy and profiting off their data is just part of doing business, and it is disturbing how far that mindset has spread among the general public: people feel they have to actively work to keep things private, rather than enjoying what Europeans consider every person’s due – the right to privacy and the “right to be forgotten.” I think Schrems put it well when he said, “the tendency of Silicon Valley companies is to beg forgiveness rather than ask permission.”

I think this article just adds to the multitude of reasons why Facebook and Google need some kind of oversight or limitation – some check and balance. They are not governments, they are not elected, and they answer to no one, yet they wield enormous power over a large portion of the world’s population. The article mentions that Facebook is currently facing challenges from regulators in Belgium seeking to stop it from tracking consumers who have not even joined the service. Is this behavior – spying on and collecting data about people who have not consented in any way, shape, or form – all that different from governments spying on the citizens of other countries? Yet Facebook is not a government and is not answerable to anyone. I think this should concern people far more than it does, especially considering Facebook’s psychological experiment, Uber’s changing terms of service, and other questionable actions.

On the Death of Privacy / Facebook’s experiments

Larry Page, Google’s chief executive, as quoted by the Guardian, has expressed his frustration with people’s lack of trust in big data companies like Google. He argues that if people would just let companies mine their health data, 100,000 lives could be saved every year. “We get so worried about [data privacy] that we don’t get the benefits,” he claims. And yet more and more people (myself included) are downloading Ghostery, and more and more stories keep leaking about just how much we shouldn’t trust companies like Google and Facebook.

Mr. Mosley, the man whose “orgy” was falsely tied to Nazism, says there is not a lot of difference between the actions of the government of East Germany and the actions of Google – a claim that is quite sobering if you stop to think about it. The case of Spain’s Mr. González is equally interesting, and it made me think of what we are always told here in the US and frequently tell each other: be careful what you put (or let get put) on the internet, because it will never go away and will haunt you for the rest of your life. We automatically assume there is no “right to be forgotten”; the responsibility rests solely and wholly on the consumer, the individual. This contrasts sharply with what Schrems says in the earlier article: just as you expect the company that constructs a building to keep it from falling down on your head, you also have a right to expect big data businesses to behave responsibly – you have a right to privacy and a right to be forgotten, especially once years have passed and the data, photo, or document in question is no longer representative or even factually correct.

Facebook’s experiments:

This goes back to my point about the lack of oversight of companies like Facebook. In a linked Guardian article (below), the author describes how on election day 2010, “Facebook offered one group in the US a graphic with a link to find nearby polling stations, along with a button that would let you announce that you'd voted, and the profile photos of six other of your "friends" who had already done so. Users shown that page were 0.39% more likely to vote than those in the "control" group, who hadn't seen the link, button or photos. The researchers reckoned they'd mobilized 60,000 voters - and that the ripple effect caused a total of 340,000 extra votes - significantly more than George Bush won by in 2000, where Florida hung on just 537 votes.”

He goes on to make a disturbing suggestion: what if Mark Zuckerberg – or some future Facebook chief – decided he wanted to influence an election? The article states that a growing body of data suggests that subtly influencing people’s opinions and turnout could be enough to tilt voting in one direction or another. While subliminal advertising has been banned in most countries for decades, the author argues that Facebook has reinvented it, using it for things that are not advertising at all. Just because further “research” experiments haven’t been revealed doesn’t mean they aren’t happening, especially after Facebook revised its terms of service. This almost sounds unbelievable – like a dystopian science fiction novel – but it’s not something we should just ignore or dismiss. There is enough suspicious and shady behavior on the part of these companies that we should perk up and pay attention to what is going on.
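
Just to put the quoted figures in perspective, here is a quick back-of-envelope calculation using only the numbers from the quote above (nothing from the study itself):

```python
# Back-of-envelope arithmetic on the figures quoted from the Guardian.
direct_voters = 60_000        # voters the researchers reckoned were directly mobilized
total_extra_votes = 340_000   # total extra votes, including the ripple effect
florida_margin_2000 = 537     # the margin Florida "hung on" in 2000

print(f"Ripple multiplier: {total_extra_votes / direct_voters:.1f}x")         # ~5.7x
print(f"Vs. Florida margin: {total_extra_votes / florida_margin_2000:.0f}x")  # ~633x
```

In other words, each directly mobilized voter rippled into nearly six extra votes, and the total came to more than six hundred times the margin that decided the 2000 election.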

The problem is: how would oversight of such huge companies even work? I doubt many people would trust the government to do it, especially considering Facebook’s claim that its controversial study was government-sponsored: the article cites Facebook as saying that the “aim of the government sponsored study was to see whether positive or negative words in messages would lead to positive or negative content in status updates.” This is a question I don’t have an answer to, but one I think people should be asking.

A final note on youarewhatyoulike.com: 

We were shown this algorithm in my class last semester. In my case, just as for the author of this week’s article, it got a few things wrong – namely my age and “life satisfaction” – although it did identify me as politically liberal and female. I did find it interesting just how much it got wrong about the author, though: age, gender, sexual orientation, and marital status – all wrong.

Michal Kosinski, from the site, says most technologies have their bright and dark sides, and that the ability of machines to better understand people would lead to better customer experiences, products, and so on. This seems a little dubious in light of everything the algorithm got wrong about the author, though perhaps it holds if you are marketing to the author’s projected persona. But Kosinski also points to a dark side: what if the same technology were used to predict deeply personal things about people – who is gay, who holds certain political views, even who is HIV-positive? Taking that further, given that the model has already shown itself to be inaccurate about things like age and gender, the consequences could be enormous: not only people being targeted because they are gay or hold certain beliefs, but people being targeted incorrectly for such things. As he said, “lynches are not unlikely to follow.” And all of this stems from information gathered through your Facebook profile.
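
For context on how this kind of prediction works: Kosinski’s published research describes reducing an enormous user-by-like matrix with singular value decomposition and feeding the components into a simple regression model. Here is a minimal sketch of that idea on synthetic data – my own illustration of the general approach, not the site’s actual code:

```python
# A sketch of trait prediction from "likes": SVD-reduce a user-by-like
# matrix, then fit a simple classifier on the components. Synthetic
# data throughout; illustrative of the approach, not the real model.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_likes = 2000, 500
likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)  # sparse 0/1 user-by-like matrix
hidden = rng.normal(size=n_likes)                   # invented association of likes with a trait
trait = (likes @ hidden + rng.normal(size=n_users)) > 0        # synthetic binary trait

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

svd = TruncatedSVD(n_components=50, random_state=0)  # compress likes into 50 components
clf = LogisticRegression(max_iter=1000)
clf.fit(svd.fit_transform(X_train), y_train)

probs = clf.predict_proba(svd.transform(X_test))[:, 1]
print(f"AUC: {roc_auc_score(y_test, probs):.2f}")    # good, but far from perfect
```

The point of the sketch is that the predictions are statistical, never certain – which is exactly why the misses on my profile and the author’s are unsurprising, and why acting against individuals on the strength of such predictions is so dangerous.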

http://www.theguardian.com/technology/2014/jun/30/if-facebook-can-tweak-our-emotions-and-make-us-vote-what-else-can-it-do
 
