Blogging the Highlights: Smarter Than You Think

I make no secret of the fact that I love Russell Davies’ blog, and recently he’s been running a series of posts in which he blogs the portions he highlights in books on his Kindle. I think this is a great idea, so I’m stealing it wholesale, except I have a Kobo.

The first book is Clive Thompson’s Smarter Than You Think, which looks at common complaints against modern technology (It makes us stupid! It makes us antisocial!) and gently attempts to debunk them. It’s not cyber-utopian, but it is pro-technology. I really enjoyed the book, and agree with its conclusions.

Here are the bits I highlighted:

In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could easily play Chess – a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”

I have a hobby of collecting dire predictions about the perils of technology. This is an example.

The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.”

That’s another example.

Each time we’re faced with bewildering new thinking tools, we panic – then quickly set about deducing how they can be used to help us work, meditate, and create.

This is kind of a distillation of the book. Each new technology seems overwhelming, there is a small outcry against it, then we adapt ourselves to it (and it to us).

“Blogging forces you to write down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.”

Gabriel Weinberg of DuckDuckGo said this, and I endorse this message. That’s what this very blog is for.

U.S. neurologist George Miller Beard diagnosed America’s white-collar population as suffering from neurasthenia. The disorder was, he argued, a depletion of the nervous system by its encounters with the unnatural forces of modern civilization, most particularly “steam power”, “the telegraph”, “the periodical press”, and “the sciences.”

Today we blame modern technology for memory and attention disorders instead.

Sociologists have a name for this problem: pluralistic ignorance. It occurs whenever a group of people underestimate how much others around them share their attitudes and beliefs.

“I’m not racist myself, but I couldn’t employ a black person as my colleagues wouldn’t accept it.”

Complaining is easy – much easier than getting out of your chair. Many critics have worried about the rise of so-called slacktivism, a generation of people who think clicking “like” on a Facebook page is enough to foment change. Dissent becomes a social pose.

The book’s position is that online activism acts as an instigator of, rather than a replacement for, real-life protest. Really, I just liked the phrasing of the last sentence.

“It strikes me that social media embodies the connection between action and expression.”

Charlie Beckett said this, about the theory in the previous quote.

… this reflexively dystopian view is just as misleading as the giddy boosterism of Silicon Valley. Its nostalgia is false; it pretends these cultural prophecies of doom are somehow new and haven’t occurred with metronomic regularity, and in nearly identical form, for centuries.

(Standing ovation) I share this opinion, and I was delighted to read this in the epilogue. We’ve always had scares about new technologies, and we always will; just read some history and you’ll find it’s an inescapable conclusion. There never was a more innocent time, we’re not all doomed because we read on our smartphones instead of newspapers, no-one is becoming more stupid because we have better tools to outsource some of our processing to. Everything old is new again.

Samsung, Voice Control, and Privacy. Many Questions.

It’s interesting to see the fuss around Samsung’s use of voice control in its Smart TVs, because we’re going to see this happening with increasing frequency and urgency as voice-powered devices are more deeply integrated into our personal spaces. As well as other Smart TV models, Microsoft Kinect is already in millions of homes, and Amazon Echo is beginning to roll out.

These devices work in similar ways: you activate voice search with an opt-in command (“Hi TV”; “Xbox On”; “Alexa”). Android (“OK Google”) and iOS (“Hey Siri”) devices also function this way, but usually require a button press to use voice search (except when on the home screen of an unlocked device) – although I imagine future iterations will more widely use activation commands, especially on home systems like Android TV and Apple TV (with HomeKit).

Whatever system is used, after it’s activated by the voice, a brief audio clip of the user’s command or query is recorded and transmitted to a cloud server stack, which is required for running the deep learning algorithms necessary to make sense of human speech.
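A minimal sketch of that flow, with entirely hypothetical names and audio frames simulated as strings: the wake word is matched locally on the device, and only the brief clip that follows it is ever queued for the cloud.

```python
# Sketch of the opt-in voice pipeline described above (hypothetical names).
# Frames are simulated as strings; a real device would process audio buffers
# and run an on-device keyword-spotting model instead of string matching.

WAKE_WORDS = {"hi tv", "xbox on", "alexa"}
CLIP_FRAMES = 3  # only a brief clip after activation leaves the device

def frames_sent_to_cloud(frames):
    """Return the frames that would be uploaded for cloud analysis.

    Everything before a wake word stays on the device; once a wake
    word is heard, the next CLIP_FRAMES frames are recorded and sent.
    """
    sent = []
    i = 0
    while i < len(frames):
        if frames[i].lower() in WAKE_WORDS:
            clip = frames[i + 1 : i + 1 + CLIP_FRAMES]
            sent.extend(clip)
            i += 1 + len(clip)
        else:
            i += 1  # local-only: this frame is discarded, never transmitted
    return sent

stream = ["chatter", "Alexa", "play", "the", "news", "more", "chatter"]
print(frames_sent_to_cloud(stream))  # → ['play', 'the', 'news']
```

The same sketch also shows where the worry comes from: anything said in the window right after an activation, accidental or not, is exactly what gets transmitted.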

The fear is that with any of these devices you could accidentally activate the voice service, then reveal personal data in the following few seconds of audio, which would be transmitted to the cloud servers – and potentially made available to untrusted third parties.

Given that this risk is present on all devices with voice activation, the differences I can see in the case of Samsung’s Smart TV are:

  1. the terms explicitly warn you that a data leak is a possibility;
  2. the voice analysis uses third-party deep learning services instead of their own;
  3. Samsung don’t say who those third parties are, or why they’re needed; and
  4. it’s on your TV.

This leaves me with a lot of questions (and, I’m afraid, no good answers yet).

Could the first point really be at the root of the unease? Is it simply the fact that this potential privacy breach has been made clear and now we must confront it? Would ignorance be preferable to transparency?

If Microsoft’s Kinect is always listening for a voice activation keyword, and uses Azure cloud services for analysing your query, does the only difference lie in Samsung’s use of a third party? Or is it their vague language around that third party; would it make a difference if they made clear it would only be shared with Nuance (who also provide services for Huawei, LG, Motorola and more)? When the Xbox One launched there were concerns around the ‘always listening’ feature, which Microsoft alleviated with clear privacy guidelines. Is better communication all that’s needed?

If our options are to put trust in someone, or go without voice control altogether (something that’s going to be harder to resist in the future), then who do you trust with the potential to listen to you at home? Private corporations, as long as it’s them alone? No third parties at all, or third parties if they’re named and explained? Or what about if a government set up a central voice data clearing service, would you trust that? What safeguards and controls would be sufficient to make us trust our choice?

Aside: what would be the effect if the service we’ve trusted with our voice data began acting on it? Say, if Cortana recognised your bank details, should it let you know that you’ve leaked them accidentally? What are the limits of that? Google in Ireland reports the phone number of the Samaritans when you use text search to find information about suicide; would it be different if it learned that from accidental voice leaks? What if a child being abused by an adult confided in Siri; would you want an automated system on Apple’s servers to contact an appropriate authority?

Finally, could the difference be as simple as the fact that Samsung have put this in a TV? Is it unexpected behaviour from an appliance that’s had a place in our living rooms for sixty years? If it were a purpose-built appliance such as Amazon’s Echo, would that change the way we feel about it?

This is just a small selection of the types of questions with which we’re going to be confronted with increasing frequency. There’s already a tension between privacy and convenience, and it’s only going to become stronger as voice technology moves out of our pockets and into our homes.

As I said, I don’t have answers for these questions. I do, however, have some (hastily considered) suggestions for companies that want to record voice data in the home:

  • Privacy policies which clearly state all parties that will have access to data, and why, and give clear notice of any changes.
  • A plainly-written explanation of the purpose of voice control, with links to the privacy policy, as part of the device setup process.
  • The ability to opt out of using voice activation, with a hardware button to instigate actions instead.
  • Obvious audio and visual indicators that voice recording has started, and is taking place.
  • An easily-accessible way to play back, manage and delete past voice clips.

Many companies supply some or all of these already; I think we should be looking at this as a minimum for the next wave of devices.

Update: Here’s a look at how other companies communicate their privacy policies on monitoring.