Shame and Social Engineering

Just finished reading So You’ve Been Publicly Shamed, Jon Ronson’s zeitgeisty book about social media pile-ons. There were many, many good points in the book, but I forgot to highlight them as I was enjoying reading it so much. One thing that has stuck in my mind, however, is an email exchange with the film-maker Adam Curtis, in which he talks about feedback loops and the social media echo chamber:

Feedback is an engineering principle, and all engineering is devoted to trying to keep the thing you are building stable.

It’s undeniably true that I now self-censor a lot more on Twitter than I did in the past, for fear of a strong negative reaction. I don’t think I’m alone in this; anecdotal evidence suggests many people are becoming more tame to avoid the Twitter mobs. The net effect is, as Jon Ronson himself says:

We see ourselves as nonconformist, but I think all of this is creating a more conformist, conservative age. ‘Look!’ we’re saying. ‘WE’RE normal! THIS is the average!’

I recommend you read the book yourself to see all of this in much greater context. And I wonder if Twitter and Facebook shouldn’t give away a free copy to all their users.

Blogging the Highlights: Smarter Than You Think

I make no secret of the fact that I love Russell Davies’ blog, and recently he’s been running a series of posts in which he blogs the portions he highlights in books on his Kindle. I think this is a great idea, so I’m stealing it wholesale, except I have a Kobo.

The first book is Clive Thompson’s Smarter Than You Think, which looks at common complaints against modern technology (It makes us stupid! It makes us antisocial!) and gently attempts to debunk them. It’s not cyber-utopian, but it is pro-technology. I really enjoyed the book, and agree with its conclusions.

Here are the bits I highlighted:

In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could easily play chess – a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”

I have a hobby of collecting dire predictions about the perils of technology. This is an example.

The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.”

That’s another example.

Each time we’re faced with bewildering new thinking tools, we panic – then quickly set about deducing how they can be used to help us work, meditate, and create.

This is kind of a distillation of the book: each new technology seems overwhelming, there’s a small outcry against it, then we adapt ourselves to it (and it to us).

Blogging forces you to write down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.

Gabriel Weinberg of DuckDuckGo said this, and I endorse this message. That’s what this very blog is for.

U.S. neurologist George Miller Beard diagnosed America’s white-collar population as suffering from neurasthenia. The disorder was, he argued, a depletion of the nervous system by its encounters with the unnatural forces of modern civilization, most particularly “steam power”, “the telegraph”, “the periodical press”, and “the sciences.”

Today we blame modern technology for memory and attention disorders instead.

Sociologists have a name for this problem: pluralistic ignorance. It occurs whenever a group of people underestimate how much others around them share their attitudes and beliefs.

I’m not racist myself, but I couldn’t employ a black person as my colleagues wouldn’t accept it.

Complaining is easy – much easier than getting out of your chair. Many critics have worried about the rise of so-called slacktivism, a generation of people who think clicking “like” on a Facebook page is enough to foment change. Dissent becomes a social pose.

The book’s position is that online activism acts as an instigator of, rather than a replacement for, real-life protest. Really, I just liked the phrasing of the last sentence.

It strikes me that social media embodies the connection between action and expression.

Charlie Beckett said this, about the theory in the previous quote.

… this reflexively dystopian view is just as misleading as the giddy boosterism of Silicon Valley. Its nostalgia is false; it pretends these cultural prophecies of doom are somehow new and haven’t occurred with metronomic regularity, and in nearly identical form, for centuries.

(Standing ovation) I share this opinion, and I was delighted to read this in the epilogue. We’ve always had scares about new technologies, and we always will; just read some history and you’ll find it’s an inescapable conclusion. There never was a more innocent time; we’re not all doomed because we read on our smartphones instead of newspapers, and no-one is becoming more stupid because we have better tools to outsource some of our processing to. Everything old is new again.

Samsung, Voice Control, and Privacy. Many Questions.

It’s interesting to see the fuss around Samsung’s use of voice control in its Smart TVs, because we’re going to see this happening with increasing frequency and urgency as voice-powered devices are more deeply integrated into our personal spaces. As well as other Smart TV models, Microsoft Kinect is already in millions of homes, and Amazon Echo is beginning to roll out.

These devices work in similar ways: you activate voice search with an opt-in command (“Hi TV”; “Xbox On”; “Alexa”). Android (“OK Google”) and iOS (“Hey Siri”) devices also function this way, but usually require a button press to use voice search (except when on the home screen of an unlocked device) – although I imagine future iterations will more widely use activation commands, especially on home systems like Android TV and Apple TV (with HomeKit).

Whatever system is used, after it’s activated by voice a brief audio clip of the user’s command or query is recorded and transmitted to a cloud server stack, which runs the deep learning algorithms necessary to make sense of human speech.
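The flow described above can be sketched as a simple loop. This is a toy illustration, not any vendor’s actual implementation: the wake words, the clip length, and the idea of working on already-transcribed frames are all assumptions made for the example.

```python
# Toy sketch of the wake-word flow: nothing is transmitted until an
# activation command is heard, then a short clip is captured for the cloud.
# Wake words and clip length here are illustrative, not real specifications.

WAKE_WORDS = {"hi tv", "xbox on", "alexa"}
CLIP_FRAMES = 3  # how many frames of audio to capture after the wake word

def listen_loop(audio_frames):
    """Scan a stream of (already transcribed) frames; on a wake word,
    capture the next few frames as the clip that would go to the cloud."""
    clips = []
    frames = iter(audio_frames)
    for frame in frames:
        if frame.lower().strip() in WAKE_WORDS:
            # Only audio AFTER the wake word leaves the device;
            # everything before it is discarded locally.
            clip = [next(frames, "") for _ in range(CLIP_FRAMES)]
            clips.append(" ".join(f for f in clip if f))
    return clips
```

The point of the sketch is the privacy boundary: the always-on listener only matches against the wake-word set, and it’s the few seconds after activation that get recorded and sent onwards – which is exactly where the accidental-leak worry below comes in.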

The fear is that with any of these devices you could accidentally activate the voice service, then reveal personal data in the following few seconds of audio, which would be transmitted to the cloud servers – and potentially made available to untrusted third parties.

Given that this risk is present on all devices with voice activation, the differences I can see in the case of Samsung’s Smart TV are:

  1. the terms explicitly warn you that data leak is a possibility;
  2. the voice analysis uses third-party deep learning services instead of their own;
  3. Samsung don’t say who those third parties are, or why they’re needed; and
  4. it’s on your TV.

This leaves me with a lot of questions (and, I’m afraid, no good answers yet).

Could the first point really be at the root of the unease? Is it simply the fact that this potential privacy breach has been made clear and now we must confront it? Would ignorance be preferable to transparency?

If Microsoft’s Kinect is always listening for a voice activation keyword, and uses Azure cloud services for analysing your query, does the only difference lie in Samsung’s use of a third party? Or is it their vague language around that third party; would it make a difference if they made clear it would only be shared with Nuance (who also provide services for Huawei, LG, Motorola and more)? When the Xbox One launched there were concerns around the ‘always listening’ feature, which Microsoft alleviated with clear privacy guidelines. Is better communication all that’s needed?

If our options are to put trust in someone, or go without voice control altogether (something that’s going to be harder to resist in the future), then who do you trust with the potential to listen to you at home? Private corporations, as long as it’s them alone? No third parties at all, or third parties if they’re named and explained? Or what if a government set up a central voice data clearing service – would you trust that? What safeguards and controls would be sufficient to make us trust our choice?

Aside: what would be the effect if the service we’ve trusted with our voice data began acting on it? Say, if Cortana recognised your bank details, should it let you know that you’ve leaked them accidentally? What are the limits of that? Google in Ireland reports the phone number of the Samaritans when you use text search to find information about suicide; would it be different if it learned that from accidental voice leaks? What if a child being abused by an adult confided in Siri; would you want an automated system on Apple’s servers to contact an appropriate authority?

Finally, could the difference be as simple as the fact that Samsung have put this in a TV? Is it unexpected behaviour from an appliance that’s had a place in our living rooms for sixty years? If it were a purpose-built appliance such as Amazon’s Echo, would that change the way we feel about it?

This is just a small selection of the types of questions we’re going to face with increasing frequency. There’s already a tension between privacy and convenience, and it’s only going to become stronger as voice technology moves out of our pockets and into our homes.

As I said, I don’t have answers for these questions. I do, however, have some (hastily considered) suggestions for companies that want to record voice data in the home:

  • Privacy policies which clearly state all parties that will have access to data, and why, and give clear notice of any changes.
  • A plainly-written explanation of the purpose of voice control, with links to the privacy policy, as part of the device setup process.
  • The ability to opt out of voice activation, with a hardware button to instigate actions instead.
  • Obvious audio and visual indicators that voice recording has started, and is taking place.
  • An easily-accessible way to play back, manage and delete past voice clips.

Many companies supply some or all of these already; I think we should be looking at this as a minimum for the next wave of devices.

Update: Here’s a look at how other companies communicate their privacy policies on monitoring.

Some Further Thoughts On Privacy

The US has a (largely religion-driven) abstinence-until-marriage movement; in some states, schools are not required to provide sexual education to teens, and where it is provided, abstinence from intercourse is promoted as the best method of maintaining sexual health. But a 2007 meta-study found that abstinence-only education at best had no effect at all on teen sexual health, and at worst led to higher rates of sexually-transmitted infections: in communities where more than 20% of teens were in abstinence-only programs, rates of STDs were over 60% higher than in those with regular programs.

Ignorance of their options meant these teens were less likely to use contraception when they did have sex, more likely to engage in oral and anal sex, and less likely to seek medical testing or treatment.

I worry that ‘total privacy’ advocates are causing similar ignorance in people online. An article in the latest Wired UK heavily hypes the scare of your data being publicly available, without offering any explanation of why that’s bad or how you can take back control, beyond blocking all data sharing. By promoting zero-tolerance privacy, and encouraging people to leave social networks or uninstall apps that share data, total privacy advocates fail to educate people on the privacy options available to them, and the ways they can use data to their own advantage.

Facebook, for example, has excellent explanations of how they use your data, filters and preferences that let you control it, and links to external websites that explain and provide further controls for digital advertising.

My concern is that if you advise only a zero-tolerance policy, you run the risk of driving people away to alternatives that are less forthcoming with their privacy controls, or making them feel so helpless that they decide to ignore the subject entirely. Either way, they’ve lost power over their personal data, and are missing out on the value it could give them.

And I strongly believe there is value in my data. There is value in it for me: I can use it to be more informed about my health, to get a smarter personal assistant, to see ads that are genuinely relevant to me. And there is value in it for everyone: shared medical data can be used to find environmental and behavioural patterns and improve the quality of public preventative healthcare.

I’m not blithe about it; I don’t want my data sold to unknown third parties, or used against me by insurers. I’m aware of the risks of a panopticon of small HD cameras that could lead to us all becoming witting or unwitting informants, and of the monitoring of communication by people who really have no business monitoring it.

What we need is not total privacy, but control over what we expose. We need transparency in seeing who gets our data, we need legislation to control the flow of data between third parties, we need the right to opt out, and we need better anonymity for our data when we choose to release it into large datasets.

Knowledge is power, and I’d rather have control of that power myself than completely deny it a place in the world.

Sources and further reading