Tuesday, March 29, 2016

In-class exercise


POPULATION

Colors: Green, Green, Green, Green, Red, Red, Red, Orange, Orange, Orange
Shapes: Circle, Circle, Circle, Square, Square, Square, Triangle, Triangle
Letters: A, A, A, B, B, B, C, C


ROLES

Two police officers; two suspects  
*The colors listed above include those of the two police officers

DURATION

15 minutes


RULES

1. The police officers win if they correctly identify both suspects (the right color, shape, and letter combination). If they fail to identify both suspects, however, the rest of the class wins

2. The police officers should first try to identify everyone’s color, shape, and letter. I will give the police 3 related hints when there are 5 minutes left in the game.

3. The police officers can ONLY ask, “Are you [shape] and [letter]?” They can ask whomever they want, however many times they want, but they cannot ask the same person consecutive questions, as that would be considered police harassment.

4. Non-police officers can only answer “YES” or “NO.” Please do NOT reveal your identity to either the police or each other. Protect your identity and keep it to yourself! (A toy sketch of this answering rule appears after the rules.)
  • YES if the officers get either the shape or the letter right, but do NOT specify which one they got right
  • NO if the officers get both the shape and the letter wrong

5. Each police officer gets to officially guess twice by the end of the game. You are out of the game if either officer accuses you of being a suspect, even if you are wrongly accused
  • Please return your piece of paper to Dr. Lee, without revealing your true identity to anyone, if you are (a) eliminated as a potential suspect by both officers or (b) identified as a suspect, even if you’re wrongly accused
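
For anyone who wants to see Rule 4 in action, here is a toy Python sketch of the answering rule (my own illustration, not part of the official exercise):

```python
# Toy sketch of Rule 4: a non-police player answers YES if the officer
# gets the shape OR the letter right, and NO only if both are wrong.

def answer(player_shape, player_letter, asked_shape, asked_letter):
    """Return the answer a non-police player must give."""
    if asked_shape == player_shape or asked_letter == player_letter:
        return "YES"  # at least one attribute matches, but never say which
    return "NO"       # both attributes are wrong

# Example: a (circle, A) player is asked two different questions.
print(answer("circle", "A", "square", "A"))  # YES (the letter matches)
print(answer("circle", "A", "square", "B"))  # NO  (both are wrong)
```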


PRIZE

Winner(s) get(s) a 2-day extension on the next paper


Sunday, March 27, 2016

Week 12 Reading Response

This week's reading touched on privacy from a few different perspectives. I'll be honest: I've always taken the view on privacy that Max Schrems has, which is that personally, I'm "interested in privacy more in principle than in practice." I'd even take it a step further and say that I generally get bored hearing or reading about privacy.

Privacy should be a basic human right, but even if it were, it would still get complicated whenever we decide to invite companies like Facebook and Google into our personal lives. Whether they admit it or not, Facebook and Google have gotten a lot of mileage out of treating users' privacy as a courtesy they extend rather than a basic right. The people who react the most to articles like the one on electronic snoops in email eventually get off Facebook for good once they realize this. Every complication I've ever seen attributed to privacy in the digital age can be traced to this harsh reality: to users, privacy is a right; to companies, it is a courtesy.

These complications are nothing new, which is maybe why I get so bored seeing the myriad think-pieces that try to present each new complication as something urgent. Facebook is always going to collect information on you. It's how it keeps the lights on. A dozen Max Schremses aren't suddenly going to make the company re-evaluate and change course.

The article on Schrems did get me thinking about what the clear difference is between public and private online. He is reported to have been shocked to see information he thought he'd deleted still being stored by Facebook. While I sympathize, I think Facebook's explanation - that it was a conversation between two friends, one of whom had not deleted his copy of the exchange - might actually be fair. It got me thinking that something shared between two people could be understood by both as private, but the level of privacy involved could differ depending on who you ask. Schrems obviously thought it was something worth deleting; his friend did not. Who does Facebook defer to when deciding just how private the information is? This is an issue that clearly went beyond merely tweaking a privacy setting, and I'd be interested to see more cases like this and how they turned out.

The Pew survey on data collection isn't really that revelatory. The findings on "do not disturb" zones did get me thinking about what constitutes a "do not disturb" zone online. In the reading, they use the word "home." In our increasingly connected world, where exactly does outside end and home begin?

Privacy in the Digital Age

I've never googled myself.  I probably should, just to know what traces of my existence exist on the web.  It's not that I'm not curious; I'm just afraid to see the condensed, regurgitated-for-public-consumption version of myself that other people (might) find.  And after all - and as the articles we read this week so carefully point out - there's not too much I can do about it.

Up until this point, I've been pretty ignorant of more than just my unintentional public profile; I've also not paid much attention to my browsing history, cookies, or newsletter sign-ups.  I was one of the many who blissfully traded their privacy for a 10% discount.  As the Singer article states, "People are always willing to trade privacy and information when they see the direct value of sharing that information."  If I get something for it, it's fine. But when a company profits from my information without my direct consent, I'm not so fine.  The most poignant quote of the week was, for me, "We have traded our privacy for the wealth of information the web delivers to us..."

The whole concept of selling my privacy (and personal data) for a coupon code makes me feel pretty dirty. As the Harvard Business Review's "A Penny for Your Privacy" (https://hbr.org/2012/10/a-penny-for-your-privacy) makes clear, we've all been doing it for quite a while now.  Our Kroger cards and frequent flyer accounts have been luring us in for years.  And as long as the benefit is understood, and tangible to the account/internet/card user, it's acceptable.  But that's not always - or even usually - the case.

UCLA's Anderson School of Management defines "data mining" as follows:

 Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships.
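
To make that definition concrete, here is a toy sketch of my own (not from the Anderson page) of "analyzing data from different dimensions" on a small purchase log:

```python
# A minimal sketch of data mining as "summarizing data from different
# dimensions": aggregate a toy purchase log by customer and by product.
from collections import defaultdict

purchases = [
    ("alice", "coffee", 4.50),
    ("alice", "bagel", 2.25),
    ("bob",   "coffee", 4.50),
    ("bob",   "coffee", 4.50),
]

by_customer = defaultdict(float)
by_product = defaultdict(float)
for customer, product, price in purchases:
    by_customer[customer] += price  # one "dimension": spend per customer
    by_product[product] += price    # another: revenue per product

print(dict(by_customer))  # {'alice': 6.75, 'bob': 9.0}
print(dict(by_product))   # {'coffee': 13.5, 'bagel': 2.25}
```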

I get that data mining is a thing.  And it can be extremely useful, as in the MIT Technology Review article, "Data Mining Reveals the Four Urban Conditions That Create Vibrant City Life": https://www.technologyreview.com/s/601107/data-mining-reveals-the-four-urban-conditions-that-create-vibrant-city-life/
But the idea that my data is being mined - and sold - for profit is annoying, to say the least.  An article from the ACLU (https://www.aclu.org/blog/eight-problems-big-data) states that data mining might be useful should it be used, for example, to identify risks for Parkinson's or cancer.  It then goes on to list eight problems, including misuse by law enforcement and eventual anti-social behavior driven by fear.  It seems as though the big benefit of data mining is in the for-profit market, not the altruistic one.



The Tiny Constable Part II

Happy Easter! I think it’s time to talk about tiny constables again.

United States v. Jones, the 2012 Supreme Court case I mentioned last month, fits these readings well.  The issue at stake: whether or not attaching a GPS tracker to someone’s car without a warrant was an “unreasonable search or seizure,” forbidden by the Fourth Amendment.  The court held, unanimously, that warrantless GPS tracking was unconstitutional.

As I said last time, I find this very interesting because all the Justices, even the archconservatives, agreed that GPS tracking is a technological development so revolutionary that there is no way to know what the framers of the Constitution would have thought about it.  Alito tries to picture it.  “Is it possible to imagine a case,” Alito writes, “in which a constable secreted himself somewhere in a coach and remained there for a period of time in order to monitor the movements of the coach’s owner?”

That situation is ridiculous, of course. It would have required, according to Alito, “either a gigantic coach, a very tiny constable, or both.” When Madison, Jefferson, and the rest of the gang argued about what a police search was, the modern reality of GPS tracking would never have entered the conversation.  So, the entire court agreed, we have to extrapolate based on the laws we have and the ideals we believe in to deal with this new situation.

In the digital age, we frequently find ourselves in tiny constable situations, where decades-old laws run up against situations with no historical parallel.  Many of the European privacy laws we read about this week are attempts to deal with just that.

Look at the “right to be forgotten” discussed in the Guardian piece.  Fifty years ago, it would have been impossible to imagine that erroneous information could persist the way it does online. If, when you were younger, a newspaper wrongly printed that you had been evicted, your employer would never be able to find a copy of it. And a library archiving that newspaper wouldn’t have to destroy its copy of that article.  But in 2016, the information about you online is a major part of who you are, and it is instantly available from anywhere in the world.  The EU is right to say that someone needs to be responsible for its accuracy.


Data security, too, is a tiny constable problem.  We are in a world of privacy issues that have no historical, legal, or moral precedent. I don’t think any one person knows the right answer. We need a clear public debate about what is acceptable and what isn’t, what we consider sacred and what we think is expendable.  We need to think about privacy in new ways.  Because the people profiting from our privacy already are.

Week 12, privacy not included.

“If you want to keep a secret, you must also keep it from yourself.”

I thought this was a fitting quote and summary for this week’s readings, as they all center on our privacy (or lack thereof) while we’re using the internet or milling about the world.

The article I spent the most time digesting, “The Death of Privacy,” made some provocative points. I think it’s interesting to ponder that the loss of our collective privacy has taken place over time, mostly without us realizing. It’s been a gradual evolution of us sharing All the Details. But not all of us share all. And many share only the pretty things (myself included). That makes me wonder: who am I, who are they, and what’s real? I digress, though. I think pretty much all of us accept that we’re being watched and tracked by someone, somewhere. So why are we bothered? We expect, even demand, that search engines like Google give us what we’re looking for, so do we really expect them not to use that information, our habits, to make our next search a better experience?

(Aside: I was surprised to learn privacy law is so new. It makes sense, but I had no idea just the same.)

Regarding the Pew Research article, “Americans’ Views about Data Collection and Security,” I think this isn’t a new concern, per se. We long ago knew our credit card companies and retailers participated in such practices, that they partnered to share our habits (shop here, and I’ll mail you this catalog for eternity...). It’s more in our face now, is all, due to social media, our look-at-me lifestyles, and algorithms that make us realize more starkly that these platforms are sharing data across the board, so that no matter where we check in, they know where we’ve been…and what we’re likely to do next.

The stat about people thinking they were entitled to privacy at work was mind-boggling. It’s work, not your home. It’s someone else’s network/platform. How and why would we ever have the expectation of privacy there? It’s equivalent to thinking you can go to someone's house and, say, search for porn, and hope they’d be cool with it.

Pew Research also collected people’s thoughts about surveillance cameras. I ask: why do we need to avoid them? What’s the privacy violation? I get that public surveillance is prevalent. Cameras protect a business’s bottom line, they protect consumers, and they protect private citizens. To be honest, I don’t think about them that much. I think we’re probably safer and better for having them because they can be used to track and stop criminals.

If I’m being naive here, I look forward to being told why, but I’m going to throw this out there: I don’t see why all of this is such a big deal - being monitored at work, having companies or organizations save data about my shopping habits, having cameras follow me. Do I like being tracked and marketed to across platforms? Not especially. In some instances I’ve been alarmed by it. It’s life, though. There are laws. I’m going to go ahead and follow them.

One thought about the Poynter article about The Intercept. I think it’s great that one company has found a way to protect its users’ privacy, but how does what one company is doing effect change at all companies? I don’t think this is going to start a trend. I don’t even think it’ll be a blip on the radar. Companies have to want to respect our privacy, and it’s not looking like we’re headed in that direction anytime soon.


A final note: I find it intriguing that there are companies being created to protect us from the companies tracking our moves. It feels…full of irony. What data are these track-blocker companies using to analyze their consumers’ behavior? What are they doing with the information they’re collecting about us in the process? I’m too jaded to believe someone when they say they’re keeping my data private. Sadly, it’s not the world we live in anymore.

Week 12 Readings

I am generally a private person, but I'll usually answer direct questions with direct answers. In that scenario, I retain the choice of whether or not to answer. Data mining and tech companies don't respect that choice, though.
The argument that data collection in exchange for coupons is equitable is laughable. I rarely, if ever, use coupons. When I do, more often it's a paper coupon clipped from a newspaper or generic flyer, i.e. not digital or personally identifiable.
The problem with privacy rights activists is that they present vague arguments for privacy legislation. Even this week's block of readings was light on concrete examples. Mostly the argument is that your personal data could potentially be used against you. And until someone experiences that, it's a difficult scenario to envision. Perhaps elderly citizens believe in privacy more because they remember events like the 1950s "Red Scare" led by Sen. McCarthy. Older people remember careers and lives ruined because someone once attended a communist rally during college or read a newsletter. Then, years later, the incident was revealed in the U.S. Senate and some individuals lost everything. How many people under 40 would immediately recognize the term "Hollywood Blacklisting" in reference to that time period?
I think privacy advocates should push comparisons of data collection further than dystopian sci-fi references. It's been over 20 years since I read Orwell's 1984. I'm sure that's true for a lot of people. I get the connection, but it isn't as prevalent in my mind given how long ago I read it. Maybe activists would have a better shot at convincing the public if they compared data mining to actual crimes.

Extortion - The obtaining of property from another induced by wrongful use of actual or threatened force, violence, or fear, or under color of official right.

Coercion - The intimidation of a victim to compel the individual to do some act against his or her will by the use of psychological pressure, physical force, or threats.

Think about it. Apps aren't trying to physically harm you, but app creators do think they have an official right to your data. Why Angry Birds requires access to my contacts list is beyond me, but if I want to play, that's part of the price. Many digital services and apps require the user to hand over information to use a product, but are intentionally vague about what happens with the info. If the user knew more details, would they still want to participate?

Kidnapping - The crime of unlawfully seizing and carrying away a person by force or fraud, or seizing and detaining a person against his or her will with an intent to carry that person away at a later time. Generally, kidnapping occurs when a person, without lawful authority, physically transports (i.e., moves) another person without that other person's consent, with the intent to use the abduction in connection with some other nefarious objective.

If you accept the idea that online profiles and data constitute a digital version of the self, then companies are collecting "people" for their own uses. Digital data mining could be viewed as kidnapping users' virtual identities.

These two examples might be a stretch, but the point is that they would resonate with the public more than digital doomsday prophecies.

Another issue for privacy advocates to think about is programs like Intrado's Beware. Used by the Fresno PD, Beware takes an address, mines information about the residents, assigns a potential threat level, and sends the result to the officer on scene.
Here's a link to an article about it.
https://www.washingtonpost.com/local/public-safety/the-new-way-police-are-surveilling-you-calculating-your-threat-score/2016/01/10/e42bccac-8e15-11e5-baf4-bdf37355da0c_story.html

First, it's not always accurate: information about previous residents can be included in assessing the threat. Also, Intrado won't explain how the program works or how it forms its conclusions.
It's not hard to imagine police officers behaving differently when given green vs. red threat assessments. Racial profiling is illegal; digital profiling should be as well.

I like the way Europe approaches digital privacy, but I do have some issues with "The Right to be Forgotten." Legitimate news should not be erased because a person doesn't like the information reported. But I do agree with the idea of relevance being used as a metric for removing information. I think European support for digital privacy law stems from lessons learned under various governments over time. The U.S. has never experienced fascism or dictators as government leaders. In that regard, Americans may be a little naive about the possibilities of a public under constant and secret surveillance.

Personally, I'll trade a little security for a little more privacy.

Privacy - March 29

The Death of Privacy

As I began reading this article I thought about every person on Facebook and their content. Facebook is a give-and-take platform where people give their privacy away freely while the platform secretly takes what privacy is left over. With social media, people have to know that anyone can see what they post and it will always be out in cyberspace. People who freely post photos or statuses on social media have no place to complain about losing privacy.

The internet is a pot of water on the stove. When it first came to exist, people had no idea what it could or would become. By the time we understood that we had lost our privacy, the water was boiling and we were cooked. For people who joined websites, posted pictures and statuses, joined chat rooms, and bought things or browsed online when the Internet was new, there was no apparent threat of lost privacy. Now that the Internet is what it is, there is no excuse for new Internet users to claim they thought they had privacy.

It is hard to separate the Internet from the people who post on the Internet. The Internet didn’t sneak photos of Max Mosley, a person did. The Internet didn’t write articles about him, people did. Google is perpetuating content placed on the Internet by people.



Sharing Data, but Not Happily

Honesty is an effective method of persuasion. If a company states that it is collecting data in exchange for a discount or better service, a consumer could be more compliant than if the company secretly took the data. When an app or site asks visitors whether it can have access to their camera or photos, they have the option to say yes or no. Being upfront about the data collection allows the consumer to decide if the app, site, or service is worth giving up the data.


Most of these “better services” are unnecessary and not a fair trade for data mining. I don’t want better ads, more “relatable” content, or “better” search results in exchange for my data. It’s like a car salesperson secretly upgrading you from a standard rear-view mirror to one that displays the temperature, in exchange for your web browsing history. Not necessary and not worth the trade.

Privacy in the Digital Age

The older I get, the more confirmation I see of the saying, “ignorance is bliss.” The stats provided in the Pew Research Center survey in this week’s reading are the latest confirmation. The survey found that 74 percent of those who are more aware of government surveillance efforts feel there are not adequate safeguards in place, compared with 62 percent of people who have heard only a little about such programs.

I expected to find more generational differences in attitudes about surveillance and privacy – that digital natives might be less concerned about privacy – but most of the survey results didn’t show a difference based on age.

I also thought it was interesting that social media users felt they had “a lot” of control over how much information is collected about them and how it is used, given that social networking sites such as Facebook are notorious for claiming your data and using it however they want.

The Guardian article on the death of privacy brought up some privacy cases that I had never heard of, and kind of still wish I hadn’t. I spent way too much time trying to find the Max Mosley video before deciding I really do not need to watch it to understand the case, and I also spent too much time trying to sort through 2004-2040.com. When I read that Alberto Frigo had posted about dreaming of women besides his wife, and saw his response, “She did not accept that I dreamed of other girls” (in my mind, he was posting it for everyone to know he was dreaming about other women), I decided I wasn’t going to spend much time on his site. Philosophically the concept of the site is intriguing, and I understand why brutal honesty is important for his project and in some ways can admire his dedication to his cause at the expense of his relationship, but I wonder if, at the end of his life, he will say the site was worth it. Going back to last week’s article about waiting five minutes – giving something contrarian to your own views a second thought – maybe one day I’ll appreciate the site more.


I thought the premise of The Intercept – that they will measure the tendencies of readers without compromising their identities – was encouraging. I hope they will find a way to sustain what they are doing.

Privacy in the Digital Age - Response to Readings



Americans’ Views About Data Collection and Security – PEW

The first thing that stuck out to me about this research was the notion, attributed to “information scholars,” that privacy is not something one can simply “have”; rather, it is something that people must seek to “achieve.” I find this concept to be an interesting point of view, especially in light of the later readings about data protection and privacy laws in Europe, where it does seem that privacy is regarded by many as something that one can “simply have” – a right that a person is entitled to and not something that they actively have to work to achieve.

I also was drawn to the data on how people who had a greater knowledge or awareness of government surveillance programs, including those that claim to fall under anti-terrorism efforts, were more likely to have the strongest views on the subject: believing that certain records should not be saved for any length of time, that social media sites should not save any data, that government programs did not have adequate limitations, and generally disapproving of the programs. I find it interesting that the more people learn about these things, the more unsettled they are by them. This seems to lend itself to the idea that most people are in the dark about how much data is actually being collected about them and what’s being done with it. In a class last semester, our professor showed us the huge storage locations of government-collected data, and we discussed possible concerns about this kind of collection and retention – including how laws and cultural norms change over time, so that data collected today about a person participating in something quite legal and culturally acceptable could one day be used against that person, especially if the data is stored indefinitely without clear rules or limitations on those storing it.

Foiling Electronic Snoops in Email 

I think the title of this article is a little misleading. I was disappointed to learn that the author cannot actually provide a way to “foil” email snoops, and he leaves readers with the conclusion that while there are some things they can do, these options are either incomplete or require taking further risks with your privacy. While I did change the settings on my iPhone, overall this article leaves you feeling more informed, but also more unsettled. I find it particularly disturbing that companies are attempting to learn my location and the device I use when opening emails.
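
For context on how those snoops work (a generic sketch of the well-known tracking-pixel technique, with a hypothetical URL; none of this is from the article): the sender embeds a tiny remote image in the email, and the request your mail client makes to fetch it hands the sender your IP address (rough location) and device information:

```python
# Minimal sketch of an email tracking pixel. The sender embeds
# <img src="http://tracker.example/pixel.gif?id=recipient123"> in the
# email; this toy server logs whoever fetches it. URL and IDs invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

# 1x1 transparent GIF, the classic tracking-pixel payload
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "snooping": the IP reveals rough location, the User-Agent
        # the device, and the timestamp when (and how often) it was opened.
        print(f"open: path={self.path} ip={self.client_address[0]} "
              f"device={self.headers.get('User-Agent')}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), TrackingHandler).serve_forever()
```

Which is presumably why changing the image-loading settings on a phone helps: an image that is never fetched reports nothing.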

Behind the European Privacy Policy

My first reaction to this piece was: I bet the lawyers who spoke to Schrems’s class are kicking themselves now that he’s won his case. I had heard of this case last semester and, as I said earlier, I think this is an interesting juxtaposition to how PEW states that “information scholars,” presumably in the United States, view privacy. For Silicon Valley businesses, infringing on people’s privacy and profiting off their data is just part of business, and it’s disturbing how far that mindset has spread among the general public, making people feel they have to actively work to keep things private instead of having what Europeans consider to be every person’s right: the right to privacy and the “right to be forgotten.” I think Schrems put it well when he said, “the tendency of Silicon Valley companies is to beg forgiveness rather than ask permission.”

I think this article just adds to the multitude of reasons why Facebook and Google need some kind of oversight or limitations – some kind of check and balance. They are not governments, are not elected, and answer to no one, yet wield huge amounts of power over a large portion of the world. The article mentions how Facebook is currently facing challenges from regulators in Belgium to stop it from tracking consumers who have not even joined the service. Is this behavior – spying on and collecting data about those who have not consented in any way, shape, or form – all that different from governments spying on citizens of other countries? But they are not a government; they are not answerable to anyone. I think this should be much more concerning to people than it is, especially considering Facebook’s psychological experiment, Uber’s changing terms of service, and other questionable actions.

On the Death of Privacy / Facebook’s experiments

Larry Page, Google’s chief executive, as quoted by the Guardian, has expressed his frustration with people’s lack of trust in big data companies like Google. He argues that if people would just let companies mine their health data, we could save 100,000 lives every year. “We get so worried about [data privacy] that we don’t get the benefits,” he claims. And yet more and more people (including myself) are downloading Ghostery, and more and more stories are leaking about just how much we shouldn’t trust companies like Google and Facebook. 

Mr. Mosley, the man whose “orgy” was falsely tied to Nazism, says there’s not a lot of difference between the actions of the government of East Germany and the actions of Google – a claim that can be quite sobering if you stop to think about it. The case of Mr. González of Spain is equally interesting and made me think of what we are always told here in the US and frequently tell each other: be careful what you put (or let get put) on the internet, because it will never go away and will haunt you for the rest of your life. We automatically assume that there is no “right to be forgotten.” The responsibility is solely and wholly on the consumer, the individual. This contrasts sharply with what Schrems says in the earlier article: just as you expect the company that constructs the building you enter to prevent the building from falling down on your head, you also have a right to expect big data businesses to behave responsibly – you have a right to privacy and a right to be forgotten, especially once years have passed and the data, photo, document, or what have you is no longer representative or even factually correct.

Facebook’s experiments:

This simply goes back to my point about the lack of oversight on companies like Facebook. In a linked Guardian article (below), the author describes how on election day 2010, “Facebook offered one group in the US a graphic with a link to find nearby polling stations, along with a button that would let you announce that you'd voted, and the profile photos of six other of your "friends" who had already done so. Users shown that page were 0.39% more likely to vote than those in the "control" group, who hadn't seen the link, button or photos. The researchers reckoned they'd mobilized 60,000 voters - and that the ripple effect caused a total of 340,000 extra votes - significantly more than George Bush won by in 2000, where Florida hung on just 537 votes.” He goes on to make a disturbing suggestion: what if Mark Zuckerberg – or some future Facebook chief – decided that he wanted to influence a future election? The article states that a growing body of data suggests that subtly influencing people’s opinions and voting turnout could be enough to engineer voting in one direction or another. While subliminal advertising has been banned in most countries for decades, the author argues that Facebook has reinvented it, though it is being used for things that are not truly advertising. Just because further “research” experiments haven’t been revealed doesn’t mean they aren’t happening, especially after Facebook revised its terms of service.  This almost sounds unbelievable – like a dystopian science fiction novel – but it’s not something we should just ignore or dismiss. There’s enough suspicious and shady behavior on the part of these companies that we should probably perk up and pay attention to what’s going on.
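
Out of curiosity, here is a back-of-envelope check on the scale implied by those quoted figures (my own arithmetic, not the article’s):

```python
# Back-of-envelope arithmetic on the quoted figures (my own, not the
# article's): roughly how many users a naive reading implies saw the
# banner, and how the estimated effect compares to Florida in 2000.
lift = 0.0039            # 0.39% more likely to vote
mobilized = 60_000       # voters the researchers credit to the banner
ripple_total = 340_000   # total extra votes including the ripple effect
florida_margin = 537     # Bush's 2000 margin in Florida

# If 0.39% of the treated group equals 60,000 people, that group was
# roughly 15 million users -- taking the quote at face value.
print(f"{mobilized / lift:,.0f} users")            # ~15,384,615 users
print(f"{ripple_total / florida_margin:.0f}x")     # ~633x the margin
```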

The problem is: how would oversight of such huge companies even work? I doubt many would trust the government to do it, especially considering Facebook’s claim that its controversial study was government-sponsored: the article cites Facebook as saying that the “aim of the government sponsored study was to see whether positive or negative words in messages would lead to positive or negative content in status updates.” This is a question I don’t have an answer to but that I think people should be asking.

A final note on youarewhatyoulike.com: 

We were shown this algorithm in my class last semester. In my case, just like in that of the author of this week’s article, it got a few things wrong: namely my age and “life satisfaction,” although it did identify me as politically liberal and female.  I did find it interesting just how much it got wrong about the author, though: age, gender, sexual orientation, and marital status – all wrong.

Michal Kosinski, from the site, says most technologies have their bright and dark sides and that the ability of machines to better understand people will lead to better customer experiences, products, etc. This seems a little dubious in light of all it got wrong about the author, but perhaps it could be said to be true if marketing to the author’s projected persona. But he also mentions a dark side: what if the same technology were used to predict personal things about people – who is gay, who holds certain political views, even who is HIV-positive? Taking that further, with the model already shown to be inaccurate about things like age and gender, it could have huge consequences: not only people being targeted because they were gay or held certain beliefs, but people being wrongly targeted because of such predictions. As he said, “lynches are not unlikely to follow.” And all this stems from information gathered through your Facebook.
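
For a rough sense of how a site like youarewhatyoulike.com can guess traits from likes at all, here is a toy sketch of the general approach described in the research – a linear model over a user-by-like matrix. The page names, labels, and data are all invented for illustration; this is not the site’s actual code.

```python
# Toy sketch of trait prediction from Facebook likes: represent each
# user as a binary vector over pages they've liked, then fit a linear
# classifier. Page names and trait labels are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["hiking", "nascar", "opera", "gaming"]  # hypothetical likes
# rows = users, columns = 1 if the user liked that page
X = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # some made-up binary trait label

model = LogisticRegression().fit(X, y)
new_user = np.array([[1, 0, 1, 1]])   # a previously unseen user
print(model.predict_proba(new_user))  # predicted trait probability
```

With only a handful of likes per user, such predictions are inevitably noisy, which squares with how much the model got wrong about the author.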

http://www.theguardian.com/technology/2014/jun/30/if-facebook-can-tweak-our-emotions-and-make-us-vote-what-else-can-it-do