Panoptic Sorting, Social Games and Facebook Privacy

02/23/2011

I usually prefer to reserve my blog for posting ideas explicitly related to social game design. In this post, however, I’d like to address an issue that has been a popular, albeit disconcerting, topic around water coolers lately: Facebook’s privacy controls. In particular, the general anxiety over the fact that Facebook requires users to opt out if they wish to keep their information private, which of course makes most of their information public by default. You’ve read about it in the news, you’ve read the expert analysis and you’ve even seen very public displays of virtual suicide in protest. Naturally, I thought I’d offer my $.02 on the issue. (I’d also recommend an interesting article by Michael Zimmer and Chris Jay Hoofnagle in the Huffington Post about this issue and what they dub “blowforward” privacy PR: they claim Facebook rolls out the “new” before seeking consensus, then uses PR to backtrack when complaints are made.)

[Image: “Facebook: The privacy saga continues”, by Ruth Suehle for opensource.com]

On the surface, the uproar about Facebook’s privacy controls appears simple: when we signed up, Facebook operated a particular way, and then it was changed in ways that made many of us uncomfortable. For example, many have objected to the News Feed feature, where users’ actions are automatically posted on friends’ pages, believing it makes it too easy for “unknown people” to track down individual activities. “It’s not very private,” goes the complaint. But isn’t being able to see other people’s information the main point and inherent nature, the “It Factor,” of Facebook? You can’t tell me you don’t like checking out your friends’ Hawaiian holiday photos at your leisure; we all do. Otherwise, why are we even using Facebook? Besides, what did we really think we were signing up for when we joined an open, information-intensive platform and community of 4 million, 40 million, 400 million people? The business model of social networks and online communities depends on millions of users being able to view, very easily, all sorts of personal information. To have that feature (and it sounds strangely obvious to say), this information needs to be… viewable. The socialization of information, a system designed to maximize the easy sharing of it, has always been the goal of these kinds of sites. It’s what we have been asking for, so it shouldn’t be a surprise that we are herded towards making our privacy settings as open as possible.

This leads me to my second point: we should perhaps be less concerned about exposing our personal information to each other, and a little more concerned about who is brokering that exchange. It’s important to keep in mind that while individual privacy settings can be made more private and you can hide things from other site members, all of this information is still available to the owner of the network. Nothing is private from them. What should really concern us is that we simply don’t know, and may never have clear visibility into, what these organizations are actually doing with all of our information. And I’m not just picking on Facebook here; that’s too easy. We are constantly being asked to share our private information with companies and organizations of all types, and we have very little transparency, let alone control, over what is done with it (clickthrough privacy policies that can be changed at any time by the vendor are hardly a safeguard).

This point makes for a great segue into what I think is the more pressing issue here: data mining. Michael Zimmer, PhD, an assistant professor in the School of Information Studies at the University of Wisconsin-Milwaukee, perhaps best illustrates this idea in an essay (linked in the references below). As Zimmer points out, it isn’t Facebook privacy alone, or even privacy on social networks in general, that we need to be concerned with; what we really need to worry about is the data mining that goes on every time we give out bits of our personal information. Behavioural targeting, or what Oscar Gandy, a famous political economist, first identified in the early 1990s as “panoptic sorting” (a system wherein individuals are continually identified, assessed and classified for the purpose of coordinating and controlling their access to consumer goods and services), is the real worry.
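
To make those three steps concrete, here is a minimal, purely illustrative sketch of panoptic sorting as code: collect attributes about a person, assess them with a scoring model, and classify the result into a tier that gates what they are offered. Every field name, weight and cutoff below is invented for the example; real systems use far richer data and far more opaque models.

```python
# Toy illustration of "panoptic sorting": identify, assess, classify.
# All attributes, weights and tier cutoffs are invented for the example.

from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    median_income_of_zip: int   # inferred from residence
    clicks_on_luxury_ads: int   # observed behaviour
    friends_count: int          # network data

def assess(p: Profile) -> float:
    """Reduce a person to a single 'value' score."""
    return (0.5 * p.median_income_of_zip / 1000
            + 2.0 * p.clicks_on_luxury_ads
            + 0.1 * p.friends_count)

def classify(score: float) -> str:
    """Bucket the score into a tier that gates offers and service."""
    if score >= 80:
        return "premium"      # best offers, fastest support
    if score >= 40:
        return "standard"
    return "low-value"        # fewer offers, slower service

profiles = [
    Profile("alice", 95000, 12, 400),
    Profile("bob", 28000, 0, 45),
]
for p in sorted(profiles, key=assess, reverse=True):
    print(p.user_id, classify(assess(p)))
```

The person never sees the score or the tier; they only experience its downstream effects.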

Once this information is obtained, behavioural targeting becomes possible, and it raises not only privacy concerns but also concerns about discrimination. Imagine, if you will, that a man calls a large company’s technical support line for assistance. Since he has bought the product and registered it with the company, it’s fair for him to assume he will get service without discrimination. It’s also fair to assume that the company has a lot of his personal information, and a strong profit motive. The company either knows or can infer information such as race and income (based on his residence), has access to a great deal more through public or semi-public searches, and potentially still more through information-sharing business relationships (you do read all the small print about information sharing in every privacy policy you consent to, don’t you?). It’s not hard to imagine that if the company feels our man, based on his profile, is likely to exhibit behaviour that isn’t profitable, there would be an incentive to preserve the company’s resources by, say, putting him at the bottom of the support queue. Given that a company’s mandate is generally to produce as much profit as possible for its shareholders, you couldn’t blame its executives for feeling they had an obligation to do so. A case in point was the common discriminatory practice of redlining, which banks used to deny credit to those who lived in low-income neighbourhoods until legislation like the Community Reinvestment Act (CRA) was brought in to reduce this type of discrimination in the credit market. The risk is that we’ve made redlining easy, cheap, precise and widely available, and we have no good mechanism for discovering when it’s being used.
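
To underline how cheap this has become, here is a hypothetical sketch of the support-queue scenario above. The scoring function and its inputs are invented; the point is how few lines it takes to silently order callers by predicted value rather than by arrival time.

```python
# Hypothetical sketch: a support queue ordered by predicted "customer
# value" instead of arrival order. The scoring inputs are invented.

import heapq
import itertools

counter = itertools.count()  # tie-breaker so equal scores stay first-come-first-served

def predicted_value(caller: dict) -> float:
    # Stand-in for a real model fed by purchased and inferred data.
    return caller["inferred_income"] * 0.001 + caller["past_purchases"] * 5.0

queue: list = []

def enqueue(caller: dict) -> None:
    # heapq is a min-heap, so negate the score to serve high value first.
    heapq.heappush(queue, (-predicted_value(caller), next(counter), caller))

enqueue({"name": "first caller", "inferred_income": 30000, "past_purchases": 1})
enqueue({"name": "second caller", "inferred_income": 120000, "past_purchases": 8})

while queue:
    _, _, caller = heapq.heappop(queue)
    print("serving:", caller["name"])
# Prints "second caller" first, even though they called second.
```

Nothing in the caller’s experience reveals that the ordering was anything but a long hold time.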

Looking at it from this point of view, it starts to feel (to me, at least) like it doesn’t really matter that someone has to wade through 50 settings and 170 options on Facebook to shield their profile data from other individual viewers. In any case, the genie is out of the bottle: we can’t simply delete our profiles from public spaces, and it wouldn’t make much difference now even if we did. What we need is a way to watch the watchmen. What do you think? Email me at michael [at] ayogo [dot] com.

References:

http://michaelzimmer.org/2007/09/18/panoptic-sorting-on-the-rise-as-myspace-enters-behavioral-targeting-foray/

https://fiq.ischool.utoronto.ca/index.php/fiq/article/view/89/230

http://www.washingtonpost.com/wp-dyn/content/article/2010/05/23/AR2010052303828.html

http://www.theglobeandmail.com/news/technology/personal-tech/lisan-jutras/the-freak-out-over-facebook/article1570877/

http://www.nytimes.com/interactive/2010/05/12/business/facebook-privacy.html

http://www.pcworld.com/article/196410/facebook_privacy_secrets_unveiled.html

http://www.nytimes.com/2010/05/13/technology/personaltech/13basics.html

http://www.huffingtonpost.com/chris-jay-hoofnagle/how-to-win-friends-and-ma_b_598572.html

http://www.seppukoo.com/

http://www.theonion.com/articles/entire-facebook-staff-laughs-as-man-tightens-priva,17508/

http://en.wikipedia.org/wiki/Oscar_gandy