Samsung, Voice Control, and Privacy. Many Questions.

It’s interesting to see the fuss around Samsung’s use of voice control in its Smart TVs, because we’re going to see this happening with increasing frequency and urgency as voice-powered devices are more deeply integrated into our personal spaces. As well as other Smart TV models, Microsoft Kinect is already in millions of homes, and Amazon Echo is beginning to roll out.

These devices work in similar ways: you activate voice search with an opt-in command (“Hi TV”; “Xbox On”; “Alexa”). Android (“OK Google”) and iOS (“Hey Siri”) devices also function this way, but usually require a button press to use voice search (except when on the home screen of an unlocked device), although I imagine future iterations will more widely use activation commands, especially on home systems like Android TV and Apple TV (with HomeKit).

Whatever system is used, once it’s activated by voice a brief audio clip of the user’s command or query is recorded and transmitted to a cloud server stack, which runs the deep learning algorithms necessary to make sense of human speech.
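That pipeline, local wake-word spotting followed by a short capture shipped to the cloud, can be sketched in a few lines of Python. This is an illustrative model only: the frame representation, wake word, and clip length are my own assumptions, not any vendor’s actual implementation.

```python
def extract_command_clip(frames, wake_word="hi tv", clip_frames=3):
    """Simulate the on-device stage of a voice assistant.

    `frames` stands in for a stream of locally processed audio frames.
    Nothing is eligible for cloud upload until the wake word is spotted;
    only the few frames that follow it are returned for transmission.
    """
    for i, frame in enumerate(frames):
        if wake_word in frame.lower():
            # Only this short clip would ever leave the device.
            return frames[i + 1 : i + 1 + clip_frames]
    return []  # wake word never heard: nothing is transmitted


stream = ["(background chat)", "Hi TV", "what's", "on", "tonight?", "(private talk)"]
clip = extract_command_clip(stream)  # the portion that would go to the cloud
```

The privacy worry is precisely the case where the wake word fires accidentally, so that `clip` captures speech the user never meant to transmit.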

The fear is that with any of these devices you could accidentally activate the voice service, then reveal personal data in the following few seconds of audio, which would be transmitted to the cloud servers and potentially made available to untrusted third parties.

Given that this risk is present on all devices with voice activation, the differences I can see in the case of Samsung’s Smart TV are:

  1. the terms explicitly warn you that data leak is a possibility;
  2. the voice analysis uses third-party deep learning services instead of Samsung’s own;
  3. Samsung don’t say who those third parties are, or why they’re needed; and
  4. it’s on your TV.

This leaves me with a lot of questions (and, I’m afraid, no good answers yet).

Could the first point really be at the root of the unease? Is it simply the fact that this potential privacy breach has been made clear and now we must confront it? Would ignorance be preferable to transparency?

If Microsoft’s Kinect is always listening for a voice activation keyword, and uses Azure cloud services for analysing your query, does the only difference lie in Samsung’s use of a third party? Or is it their vague language around that third party; would it make a difference if they made clear it would only be shared with Nuance (who also provide services for Huawei, LG, Motorola and more)? When the Xbox One launched there were concerns around the ‘always listening’ feature, which Microsoft alleviated with clear privacy guidelines. Is better communication all that’s needed?

If our options are to put trust in someone, or go without voice control altogether (something that’s going to be harder to resist in the future), then who do you trust with the potential to listen to you at home? Private corporations, as long as it’s them alone? No third parties at all, or third parties if they’re named and explained? Or what if a government set up a central voice data clearing service; would you trust that? What safeguards and controls would be sufficient to make us trust our choice?

Aside: what would be the effect if the service we’ve trusted with our voice data began acting on it? Say, if Cortana recognised your bank details, should it let you know that you’ve leaked them accidentally? What are the limits of that? Google in Ireland shows the phone number of the Samaritans when you use text search to find information about suicide; would it be different if it learned that from accidental voice leaks? What if a child being abused by an adult confided in Siri; would you want an automated system on Apple’s servers to contact an appropriate authority?

Finally, could the difference be as simple as the fact that Samsung have put this in a TV? Is it unexpected behaviour from an appliance that’s had a place in our living rooms for sixty years? If it were a purpose-built appliance such as Amazon’s Echo, would that change the way we feel about it?

This is just a small selection of the types of questions with which we’re going to be confronted with increasing frequency. There’s already a tension between privacy and convenience, and it’s only going to become stronger as voice technology moves out of our pockets and into our homes.

As I said, I don’t have answers for these questions. I do, however, have some (hastily considered) suggestions for companies that want to record voice data in the home:

  • Privacy policies which clearly state all parties that will have access to data, and why, and give clear notice of any changes.
  • A plainly-written explanation of the purpose of voice control, with links to the privacy policy, as part of the device setup process.
  • The ability to opt out of voice activation, with a hardware button to instigate actions instead.
  • Obvious audio and visual indicators that voice recording has started, and is taking place.
  • An easily-accessible way to play back, manage and delete past voice clips.
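The last point, a reviewable and deletable voice history, is the easiest to make concrete. Here is a hypothetical minimal store that could back such a screen; the names and structure are my own invention, not any vendor’s API:

```python
class VoiceClipStore:
    """Hypothetical store behind a 'review and delete your voice clips' screen."""

    def __init__(self):
        self._clips = {}   # clip_id -> transcript (a stand-in for audio data)
        self._next_id = 1

    def record(self, transcript):
        """Store a captured clip and return its id."""
        clip_id = self._next_id
        self._clips[clip_id] = transcript
        self._next_id += 1
        return clip_id

    def list_clips(self):
        """Let the user see everything held about them."""
        return dict(self._clips)

    def delete(self, clip_id):
        """User deletion is unconditional and immediate, no questions asked."""
        self._clips.pop(clip_id, None)


store = VoiceClipStore()
first = store.record("what's on tonight?")
store.record("play some music")
store.delete(first)  # the user changed their mind about the first clip
```

The design choice worth noting is that `delete` never refuses: if the user asks for a clip to go, it goes.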

Many companies supply some or all of these already; I think we should be looking at this as a minimum for the next wave of devices.

Update: Here’s a look at how other companies communicate their privacy policies on monitoring.


Some Further Thoughts On Privacy

The US has a (largely religion-driven) abstinence-until-marriage movement; in some states, schools are not required to provide sexual education to teens, and where it is provided, abstinence from intercourse is promoted as the best method of maintaining sexual health. But a 2007 meta-study found that abstinence-only education at best had no effect at all on teen sexual health, and at worst led to higher rates of sexually-transmitted infections: in communities where more than 20% of teens were in abstinence-only programs, rates of STDs were over 60% higher than in communities with regular programs.

Ignorance of their options meant these teens were less likely to use contraception when they did have sex, more likely to engage in oral and anal sex, and less likely to seek medical testing or treatment.

I worry that ‘total privacy’ advocates are fostering similar ignorance in people online. An article in the latest Wired UK heavily hypes the scare of your data being publicly available, but without offering any explanation of why that’s bad or how you can take back control, beyond blocking all data sharing. By promoting zero-tolerance privacy, encouraging people to leave social networks or uninstall apps that share data, total privacy advocates fail to educate people on the privacy options that are available to them, and the ways they can use data to their own advantage.

Facebook, for example, has excellent explanations of how they use your data, filters and preferences that let you control it, and links to external websites that explain and provide further controls for digital advertising.

My concern is that if you advise only a zero-tolerance policy, you run the risk of driving people away to alternatives that are less forthcoming with their privacy controls, or making them feel so helpless that they decide to ignore the subject entirely. Either way, they’ve lost control over their personal data, and are missing out on the value it could give them.

And I strongly believe there is value in my data. There is value in it for me: I can use it to be more informed about my health, to get a smarter personal assistant, to see ads that can be genuinely relevant to me. And there is value in it for everyone: shared medical data can be used to find environmental and behavioural patterns and improve the quality of public preventative healthcare.

I’m not blithe about it; I don’t want my data sold to unknown third parties, or used against me by insurers. I’m aware of the risks of the panopticon of small HD cameras that could lead to us all becoming witting or unwitting informants, and of the monitoring of communication by people who really have no business monitoring it.

What we need is not total privacy, but control over what we expose. We need transparency in seeing who gets our data, we need legislation to control the flow of data between third parties, we need the right to opt out, and we need better anonymity for our data when we choose to release it into large datasets.

Knowledge is power, and I’d rather have control of that power myself than completely deny it a place in the world.


The United States of Authoritarianism

I’m reading Eric Schmidt and Jared Cohen’s ‘The New Digital Age’ at the moment. It’s a fairly dry look at the near future, both personal and political, and the impact of digital technology. It’s (obviously) in favour of everything Google are doing, to the extent that anonymity is seen as a generally unfavourable aim except in extreme circumstances, and has the occasional out-of-place digression (I’m not sure how the robotic hairdressing machine fits into the new digital age), but is overall much more interesting than not.

One thing that’s obvious, however, is that it was written before the NSA/GCHQ leaks, as government surveillance isn’t mentioned as something that we in the West would do. In fact there’s a section on the difference between authoritarian regimes and democracies, in which it says:

[Authoritarian] regimes will compromise devices before they are sold, giving them access to what everybody says, types and shares in public and in private.

Which, if the allegations/rumours/conspiracies about the Intel backdoor and Apple SSL hole (for example) turn out to be true and based on creating security flaws rather than exploiting them, would put the US very much in the authoritarian camp.


Privacy, permission, and opting out

Earlier today I got an update notification for the Facebook app for Android, and to install the update I had to agree to some new permissions:

The thing is, I don’t agree to those new permissions. So I tweeted this:

Looks like this new update to Facebook for Android means it’s time to uninstall the app.

It seemed to hit a popular nerve and got retweeted a handful of times, but then I started to get people telling me I was in error or having a knee-jerk reaction. Twitter’s 140 characters are great for short bites but somewhat lacking in context, so I thought I’d (hastily) put together this explanation.

I don’t believe that my personal data should be a condition for installing an app. I believe that when an app or service wants my data, it’s entering into an exchange with me. For me to be happy with the exchange, I need a satisfactory answer to these three questions:

  1. For what purpose do you want my data?
  2. What do I get in return?
  3. How can I get my data deleted if I change my mind?

In my opinion, Facebook’s explanations aren’t satisfactory. In the case of SMS permissions, they give the example of using SMS confirmation codes for authorisation. This is a reasonable example, but the wording is clear that it is only an example of what they require the permission for.

That causes what is, to me, an unacceptable ambiguity: a permission may be granted for a use I deem reasonable now, but once granted it doesn’t have to be requested again for a use I may find unreasonable.

Perhaps it doesn’t mean that, and maybe I’m being paranoid, or uncharitable, or thinking the worst, but to be honest, I’m a very light Facebook user and I don’t need the hassle of working out whether that’s the case or not.

So I don’t agree with the latest permission requests, and as they’re not optional requests I took the only course of action open to me and uninstalled the app. I’m not thinking about terminating my Facebook account; I can avoid the permissions issue by using the mobile website instead, so I will.

If Android had an optional permissions model, or if there were definite guarantees from Facebook about what these permissions were required for, this would have all passed without incident.

There are, of course, much bigger conversations being held about personal data and privacy, but it’s almost Christmas and I should stop writing this.