Some Further Thoughts On Privacy

The US has a (largely religion-driven) abstinence-until-marriage movement; in some states, schools are not required to provide sexual education to teens, and where it is provided, abstinence from intercourse is promoted as the best method of maintaining sexual health. But a 2007 meta-study found that abstinence-only education at best had no effect at all on teen sexual health, and at worst led to higher rates of sexually-transmitted infections: in communities where more than 20% of teens were in abstinence-only programs, STI rates were over 60% higher than in communities with regular programs.

Ignorance of their options meant these teens were less likely to use contraception when they did have sex, more likely to engage in oral and anal sex, and less likely to seek medical testing or treatment.

I worry that ‘total privacy’ advocates are causing similar ignorance in people online. An article in the latest Wired UK plays up the scare of your data being publicly available, without offering any explanation of why that’s bad or how you can take back control, beyond blocking all data sharing. By promoting zero-tolerance privacy, encouraging people to leave social networks or uninstall apps that share data, total privacy advocates fail to educate people on the privacy options that are available to them, and the ways they can use data to their own advantage.

Facebook, for example, has excellent explanations of how they use your data, filters and preferences that let you control it, and links to external websites that explain and provide further controls for digital advertising.

My concern is that, if you advise only a zero-tolerance policy, you run the risk of driving people to alternatives that are less forthcoming with their privacy controls, or of making them feel helpless to the point where they ignore the subject entirely. Either way, they’ve lost power over the way they control their personal data, and are missing out on the value it could give them.

And I strongly believe there is value in my data. There is value in it for me: I can use it to be more informed about my health, to get a smarter personal assistant, to see ads that can be genuinely relevant to me. And there is value in it for everyone: shared medical data can be used to find environmental and behavioural patterns and improve the quality of public preventative healthcare.

I’m not blithe about it; I don’t want my data sold to unknown third parties, or used against me by insurers. I’m aware of the risks of the panopticon of small HD cameras that could lead to us all becoming witting or unwitting informants, and of the monitoring of communications by people who really have no business monitoring them.

What we need is not total privacy, but control over what we expose. We need transparency in seeing who gets our data, we need legislation to control the flow of data between third parties, we need the right to opt out, and we need better anonymisation of our data when we choose to release it into large datasets.

Knowledge is power, and I’d rather have control of that power myself than completely deny it a place in the world.

The United States of Authoritarianism

I’m reading Eric Schmidt and Jared Cohen’s ‘The New Digital Age’ at the moment. It’s a fairly dry look at the near future, both personal and political, and the impact of digital technology. It’s (obviously) in favour of everything Google are doing – to the extent that anonymity is seen as a generally unfavourable aim, except in extreme circumstances – and has the occasional out-of-place digression (I’m not sure how the robotic hairdressing machine fits into the new digital age), but overall it’s much more interesting than not.

One thing that’s obvious, however, is that it was written before the NSA/GCHQ leaks, as government surveillance isn’t mentioned as something that we in the West would do. In fact there’s a section on the difference between authoritarian regimes and democracies, in which it says:

[Authoritarian] regimes will compromise devices before they are sold, giving them access to what everybody says, types and shares in public and in private.

Which, if the allegations/rumours/conspiracies about the Intel backdoor and the Apple SSL hole (for example) turn out to be true, and to be based on creating security flaws rather than exploiting them, would put the US very much in the authoritarian camp.

Privacy, permission, and opting out

Earlier today I got an update notification for the Facebook app for Android, and to install the update I had to agree to some new permissions:

[screenshot of the new permissions dialog]

The thing is, I don’t agree to those new permissions. So I tweeted this:

Looks like this new update to Facebook for Android means it's time to uninstall the app.

It seemed to hit a nerve and got retweeted a handful of times, but then I started to get replies telling me I was in error, or having a knee-jerk reaction. Twitter’s 140 characters are great for short bites but somewhat lacking in context, so I thought I’d (hastily) put together this explanation.

I don’t believe that my personal data should be a condition for installing an app. I believe that when an app or service wants my data, it’s entering into an exchange with me. For me to be happy with the exchange, I need a satisfactory answer to these three questions:

  1. For what purpose do you want my data?
  2. What do I get in return?
  3. How can I get my data deleted if I change my mind?

In my opinion, Facebook’s explanations aren’t satisfactory. In the case of the SMS permission, they give the example of using SMS confirmation codes for authorisation. That’s a reasonable example, but the wording makes clear that it is only an example of what they require the permission for.

That causes what is, to me, an unacceptable ambiguity: a permission may be granted for a use I deem reasonable now, but once granted it doesn’t have to be requested again for a reason which I may find unreasonable.

Perhaps it doesn’t mean that, and maybe I’m being paranoid, or uncharitable, or thinking the worst, but to be honest, I’m a very light Facebook user and I don’t need the hassle of working out whether that’s the case or not.

So I don’t agree to the latest permission requests, and as they’re not optional I took the only course of action open to me and uninstalled the app. I’m not thinking of terminating my Facebook account; I can avoid the permissions issue by using the mobile website instead, so I will.

If Android had an optional permissions model, or if there were definite guarantees from Facebook about what these permissions were required for, this would have all passed without incident.
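To illustrate the all-or-nothing model: at the time of writing, Android permissions are declared in an app’s manifest and must all be accepted together at install or update time. A minimal sketch (the package name and permission set here are hypothetical, chosen to mirror the SMS case above):

```xml
<!-- AndroidManifest.xml (sketch; package name is hypothetical) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.socialapp">

    <!-- Install-time permissions: the user must grant ALL of these
         to install or update the app. There is no way to accept
         the update while declining an individual permission. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.RECEIVE_SMS" />
    <uses-permission android:name="android.permission.READ_SMS" />

    <application android:label="SocialApp">
        <!-- activities, services, etc. -->
    </application>
</manifest>
```

A per-permission model, where each entry could be granted or declined individually, would have resolved exactly the objection above.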

There are, of course, much bigger conversations being held about personal data and privacy, but it’s almost Christmas and I should stop writing this.

OK, Computer

Ever since Star Trek: The Next Generation I’ve harboured a dream of having a computer like the one on The Enterprise; one that uses natural language parsing to understand your question, can give you the answer to almost anything, and can reply to you audibly. Of course, today this is no longer a dream; with Siri, Google Now* and various similar internet-enabled applications the sci-fi dream is only the press of a button away.

But there’s one important aspect of the Star Trek computer that everyone seems less keen on: the voice command activation. The TV show computer is activated with a prefix: “Computer: …”. Now we have products like Google Glass, the Moto X, and the Xbox One Kinect which promise the same functionality (“OK Glass: …”; “OK Google Now: …”; “Xbox on: …”), and the public reaction has tended towards doubt, fear or outright rejection. People I know who are otherwise fully-fledged technophiles have expressed worries about the always-on listening service.

It’s interesting that this reaction has persisted even though representatives of the companies involved have taken great pains to emphasise your privacy. The Moto X has a chip dedicated solely to listening for your voice speaking the exact phrase “OK Google Now”, and the Xbox One Kinect behaves similarly; in neither case is any data sent – or even, as far as I know, a network connection required. But that hasn’t been enough to reassure some people.
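The filtering principle behind that design can be shown with a toy sketch. Real devices do keyword spotting on raw audio in dedicated hardware; this text-based Python version (the function name and wake phrase handling are my own illustration, not any vendor’s code) just shows the idea that everything before the wake phrase is discarded and only what follows it is passed on:

```python
def wake_word_listener(stream, wake_phrase="ok google now"):
    """Yield only the commands that follow the wake phrase.

    Everything else in the stream is dropped immediately: it is
    never stored, and nothing is transmitted anywhere. This mirrors
    (in spirit) a hardware keyword-spotting chip that ignores all
    audio except the exact activation phrase.
    """
    for utterance in stream:
        text = utterance.lower().strip()
        if text.startswith(wake_phrase):
            # Hand off only the command portion after the wake phrase.
            yield text[len(wake_phrase):].strip()
        # Any utterance without the wake phrase falls through and is lost.

# Example: only the prefixed utterance survives.
heard = ["What's the weather?", "OK Google Now set a timer", "private chat"]
commands = list(wake_word_listener(heard))
print(commands)  # ['set a timer']
```

The point of the design is that the sensitive part – deciding whether you said the phrase – never needs to leave the device at all.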

This reaction might seem understandable, except that we already carry around with us all day a device fully capable of listening to us and transmitting our words to unknown parties, and at home and work we use other devices equally capable of doing the same.

Could this fear be down to timing? This news came at the same time as we heard about the full extent of NSA (or GCHQ here in the UK) spying, so it wouldn’t be unreasonable to think that privacy was foremost in people’s minds.

Is it perhaps a general distrust about what big companies are doing with your data? Google in particular have been fighting many privacy cases in courts across the globe, and a $15 billion lawsuit against Facebook for cookie tracking is still ongoing (I think).

Or are people blanching just because this formalised voice activation now makes it explicit that we can be listened to?

I was genuinely going to make a ‘final frontier’ joke to end this piece, but luckily I thought better of it.

* So pervasive is the image of the Star Trek computer that it’s claimed that Google’s ‘obsession’ is to build their services in its image.