Shame and Social Engineering

Just finished reading So You’ve Been Publicly Shamed, Jon Ronson’s zeitgeisty book about social media pile-ons. There were many, many good points in the book, but I forgot to highlight them as I was enjoying reading it so much. One thing that has stuck in my mind, however, is an email exchange with the film-maker Adam Curtis, in which he talks about feedback loops and the social media echo chamber:

Feedback is an engineering principle, and all engineering is devoted to trying to keep the thing you are building stable.

It’s undeniably true that I now self-censor a lot more on Twitter than I did in the past, for fear of a strong negative reaction. I don’t think I’m alone in this; anecdotal evidence suggests many people are also becoming more tame to avoid the Twitter mobs. The net effect is, as Jon Ronson himself says:

We see ourselves as nonconformist, but I think all of this is creating a more conformist, conservative age. ‘Look!’ we’re saying. ‘WE’RE normal! THIS is the average!’

I recommend you read the book yourself to see all of this in much greater context. And I wonder if Twitter and Facebook shouldn’t give away a free copy to all their users.


Blogging the Highlights: Smarter Than You Think

I make no secret of the fact that I love Russell Davies’ blog, and recently he’s been running a series of posts in which he blogs the portions he highlights in books on his Kindle. I think this is a great idea, so I’m stealing it wholesale, except I have a Kobo.

The first book is Clive Thompson’s Smarter Than You Think, which looks at common complaints against modern technology (It makes us stupid! It makes us antisocial!) and gently attempts to debunk them. It’s not cyber-utopian, but it is pro-technology. I really enjoyed the book, and agree with its conclusions.

Here are the bits I highlighted:

In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could play chess – a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”

I have a hobby of collecting dire predictions about the perils of technology. This is an example.

The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.”

That’s another example.

Each time we’re faced with bewildering new thinking tools, we panic – then quickly set about deducing how they can be used to help us work, meditate, and create.

This is kind of a distillation of the book. Each new technology seems overwhelming, there is a small outcry against it, then we adapt ourselves to it (and it to us).

“Blogging forces you to write down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.”

Gabriel Weinberg of DuckDuckGo said this, and I endorse the message. It’s what this very blog is for.

U.S. neurologist George Miller Beard diagnosed America’s white-collar population as suffering from neurasthenia. The disorder was, he argued, a depletion of the nervous system by its encounters with the unnatural forces of modern civilization, most particularly “steam power”, “the telegraph”, “the periodical press”, and “the sciences.”

Today we blame modern technology for memory and attention disorders instead.

Sociologists have a name for this problem: pluralistic ignorance. It occurs whenever a group of people underestimate how much others around them share their attitudes and beliefs.

“I’m not racist myself, but I couldn’t employ a black person as my colleagues wouldn’t accept it.”

Complaining is easy – much easier than getting out of your chair. Many critics have worried about the rise of so-called slacktivism, a generation of people who think clicking “like” on a Facebook page is enough to foment change. Dissent becomes a social pose.

The book’s position is that online activism helps act as an instigator of, rather than a replacement for, real-life protest. Really, I just liked the phrasing of the last sentence.

“It strikes me that social media embodies the connection between action and expression.”

Charlie Beckett said this, about the theory in the previous quote.

… this reflexively dystopian view is just as misleading as the giddy boosterism of Silicon Valley. Its nostalgia is false; it pretends these cultural prophecies of doom are somehow new and haven’t occurred with metronomic regularity, and in nearly identical form, for centuries.

(Standing ovation) I share this opinion, and I was delighted to read it in the epilogue. We’ve always had scares about new technologies, and we always will; just read some history and you’ll find it’s an inescapable conclusion. There never was a more innocent time; we’re not all doomed because we read on our smartphones instead of newspapers; no-one is becoming more stupid because we have better tools to outsource some of our processing to. Everything old is new again.


Samsung, Voice Control, and Privacy. Many Questions.

It’s interesting to see the fuss around Samsung’s use of voice control in its Smart TVs, because we’re going to see this happening with increasing frequency and urgency as voice-powered devices are more deeply integrated into our personal spaces. As well as other Smart TV models, Microsoft Kinect is already in millions of homes, and Amazon Echo is beginning to roll out.

These devices work in similar ways: you activate voice search with an opt-in command (“Hi TV”; “Xbox On”; “Alexa”). Android (“OK Google”) and iOS (“Hey Siri”) devices also function this way, but usually require a button press to use voice search (except when on the home screen of an unlocked device) – although I imagine future iterations will more widely use activation commands, especially on home systems like Android TV and Apple TV (with HomeKit).

Whatever system is used, once it’s activated by voice a brief audio clip of the user’s command or query is recorded and transmitted to a cloud server stack, which runs the deep learning algorithms needed to make sense of human speech.
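The pipeline described above can be sketched in a few lines. This is purely illustrative: the names (`detect_wake_word`, `CloudASR`) are hypothetical, and real systems run small on-device keyword-spotting models over audio frames rather than matching strings.

```python
# Illustrative sketch of the wake-word pipeline, assuming text frames
# stand in for audio. Nothing leaves the "device" until the wake word
# is detected locally.

WAKE_WORDS = {"hi tv", "xbox on", "alexa"}

def detect_wake_word(frame: str) -> bool:
    # Stand-in for a lightweight, always-on, on-device keyword spotter.
    return frame.lower().strip() in WAKE_WORDS

class CloudASR:
    """Stand-in for the server-side speech recogniser."""
    def transcribe(self, clip: list) -> str:
        return " ".join(clip)

def run_pipeline(frames):
    """Record and upload only the audio captured *after* the wake word."""
    asr = CloudASR()
    transcripts = []
    listening = False
    clip = []
    for frame in frames:
        if not listening:
            if detect_wake_word(frame):  # local check; no upload yet
                listening = True
        else:
            clip.append(frame)
            if frame.endswith("."):      # crude end-of-utterance marker
                transcripts.append(asr.transcribe(clip))
                clip, listening = [], False
    return transcripts
```

The privacy concern maps directly onto this structure: everything before the wake word stays on the device, but the few seconds after it are uploaded – including anything said by accident.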

The fear is that with any of these devices you could accidentally activate the voice service, then reveal personal data in the following few seconds of audio, which would be transmitted to the cloud servers – and potentially made available to untrusted third parties.

Given that this risk is present on all devices with voice activation, the differences I can see in the case of Samsung’s Smart TV are:

  1. the terms explicitly warn you that a data leak is a possibility;
  2. the voice analysis uses third-party deep learning services instead of Samsung’s own;
  3. Samsung don’t say who those third parties are, or why they’re needed; and
  4. it’s on your TV.

This leaves me with a lot of questions (and, I’m afraid, no good answers yet).

Could the first point really be at the root of the unease? Is it simply the fact that this potential privacy breach has been made clear and now we must confront it? Would ignorance be preferable to transparency?

If Microsoft’s Kinect is always listening for a voice activation keyword, and uses Azure cloud services for analysing your query, does the only difference lie in Samsung’s use of a third party? Or is it their vague language around that third party; would it make a difference if they made clear it would only be shared with Nuance (who also provide services for Huawei, LG, Motorola and more)? When the Xbox One launched there were concerns around the ‘always listening’ feature, which Microsoft alleviated with clear privacy guidelines. Is better communication all that’s needed?

If our options are to put trust in someone, or go without voice control altogether (something that’s going to be harder to resist in the future), then who do you trust with the potential to listen to you at home? Private corporations, as long as it’s them alone? No third parties at all, or third parties if they’re named and explained? What if a government set up a central voice data clearing service; would you trust that? What safeguards and controls would be sufficient to make us trust our choice?

Aside: what would be the effect if the service we’ve trusted with our voice data began acting on it? Say, if Cortana recognised your bank details, should it let you know that you’ve leaked them accidentally? What are the limits of that? Google in Ireland displays the phone number of the Samaritans when you use text search to find information about suicide; would it be different if it learned that from accidental voice leaks? What if a child being abused by an adult confided in Siri; would you want an automated system on Apple’s servers to contact an appropriate authority?

Finally, could the difference be as simple as the fact that Samsung have put this in a TV? Is it unexpected behaviour from an appliance that’s had a place in our living rooms for sixty years? If it were a purpose-built appliance such as Amazon’s Echo, would that change the way we feel about it?

This is just a small selection of the types of questions with which we’re going to be confronted with increasing frequency. There’s already a tension between privacy and convenience, and it’s only going to become stronger as voice technology moves out of our pockets and into our homes.

As I said, I don’t have answers for these questions. I do, however, have some (hastily considered) suggestions for companies that want to record voice data in the home:

  • Privacy policies which clearly state all parties that will have access to data, and why, and give clear notice of any changes.
  • A plainly-written explanation of the purpose of voice control, with links to the privacy policy, as part of the device setup process.
  • The ability to opt out of voice activation, with a hardware button to instigate actions instead.
  • Obvious audio and visual indicators that voice recording has started, and is taking place.
  • An easily-accessible way to play back, manage and delete past voice clips.

Many companies supply some or all of these already; I think we should be looking at this as a minimum for the next wave of devices.

Update: Here’s a look at how other companies communicate their privacy policies on monitoring.


Some Further Thoughts On Privacy

The US has a (largely religion-driven) abstinence-until-marriage movement; in some states, schools are not required to provide sexual education to teens, and where it is provided, abstinence from intercourse is promoted as the best method of maintaining sexual health. But a 2007 meta-study found that abstinence-only education at best had no effect on teen sexual health, and at worst led to higher rates of sexually-transmitted infections: in communities where more than 20% of teens were in abstinence-only programs, rates of STDs were over 60% higher than in communities with regular sex-education programs.

Ignorance of their options meant these teens were less likely to use contraception when they did have sex, were more likely to engage in oral and anal sex, and less likely to seek medical testing or treatment.

I worry that ‘total privacy’ advocates are fostering a similar ignorance in people online. An article in the latest Wired UK heavily plays up the scare of your data being publicly available, without offering any explanation of why that’s bad or how you can take back control, beyond blocking all data sharing. By promoting zero-tolerance privacy, and encouraging people to leave social networks or uninstall apps that share data, total privacy advocates fail to educate people on the privacy options available to them, and the ways they can use data to their own advantage.

Facebook, for example, has excellent explanations of how they use your data, filters and preferences that let you control it, and links to external websites that explain and provide further controls for digital advertising.

My concern is that advising only a zero-tolerance policy runs the risk of driving people to alternatives that are less forthcoming with their privacy controls, or of making them feel so helpless that they decide to ignore the subject entirely. Either way they’ve lost power over the way they control their personal data, and are missing out on the value it could give them.

And I strongly believe there is value in my data. There is value in it for me: I can use it to be more informed about my health, to get a smarter personal assistant, to see ads that can be genuinely relevant to me. And there is value in it for everyone: shared medical data can be used to find environmental and behavioural patterns and improve the quality of public preventative healthcare.

I’m not blithe about it; I don’t want my data sold to unknown third parties, or used against me by insurers. I’m aware of the risks of the panopticon of small HD cameras that could lead to us all becoming witting or unwitting informants, and monitoring of communication by people who really have no business monitoring it.

What we need is not total privacy, but control over what we expose. We need transparency in seeing who gets our data, we need legislation to control the flow of data between third parties, we need the right to opt out, and we need better anonymity of our data when we choose to release it into large datasets.

Knowledge is power, and I’d rather have control of that power myself than completely deny it a place in the world.

Sources and further reading