The Future Is Coming Faster Than We Think

Today I read a fascinating article in the London Review of Books. The Robots Are Coming, by John Lanchester, is about the rise of cheap automation and the effect it’s going to have on the workforce and society at large. In his introduction he talks about the Accelerated Strategic Computing Initiative’s computer, Red, launched in 1996 and eventually capable of processing 1.8 teraflops — that is, 1.8 trillion calculations per second. It was the most powerful computer in the world until about 2000. Six years later, the PS3 launched, also capable of processing 1.8 teraflops.

Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.

This makes me think of IBM’s Watson, a deep learning system, ten years in the making at a cost in excess of $1 billion, with hardware estimated at $3 million powering it, and coming soon to children’s toys.

Shame and Social Engineering

Just finished reading So You’ve Been Publicly Shamed, Jon Ronson’s zeitgeisty book about social media pile-ons. There were many, many good points in the book, but I forgot to highlight them as I was enjoying reading it so much. One thing that has stuck in my mind, however, is an email exchange with the film-maker Adam Curtis, in which he talks about feedback loops and the social media echo chamber:

Feedback is an engineering principle, and all engineering is devoted to trying to keep the thing you are building stable.

It’s undeniably true that I now self-censor a lot more on Twitter than I did in the past, for fear of a strong negative reaction. I don’t think I’m alone in this; anecdotal evidence suggests many people are also becoming more tame to avoid the Twitter mobs. The net effect is, as Jon Ronson himself says:

We see ourselves as nonconformist, but I think all of this is creating a more conformist, conservative age. ‘Look!’ we’re saying. ‘WE’RE normal! THIS is the average!’

I recommend you read the book yourself to see all of this in much greater context. And I wonder if Twitter and Facebook shouldn’t give away a free copy to all their users.

Blogging the Highlights: Smarter Than You Think

I make no secret of the fact that I love Russell Davies’ blog, and recently he’s been running a series of posts in which he blogs the portions he highlights in books on his Kindle. I think this is a great idea, so I’m stealing it wholesale, except I have a Kobo.

The first book is Clive Thompson’s Smarter Than You Think, which looks at common complaints against modern technology (It makes us stupid! It makes us antisocial!) and gently attempts to debunk them. It’s not cyber-utopian, but it is pro-technology. I really enjoyed the book, and agree with its conclusions.

Here are the bits I highlighted:

In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could easily play chess – a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”

I have a hobby of collecting dire predictions about the perils of technology. This is an example.

The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.”

That’s another example.

Each time we’re faced with bewildering new thinking tools, we panic – then quickly set about deducing how they can be used to help us work, meditate, and create.

This is kind of a distillation of the book. Each new technology seems overwhelming at first; there is a small outcry against it, then we adapt ourselves to it (and it to us).

Blogging forces you to write down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.

Gabriel Weinberg of DuckDuckGo said this, and I endorse the message. That’s what this very blog is for.

U.S. neurologist George Miller Beard diagnosed America’s white-collar population as suffering from neurasthenia. The disorder was, he argued, a depletion of the nervous system by its encounters with the unnatural forces of modern civilization, most particularly “steam power”, “the telegraph”, “the periodical press”, and “the sciences.”

Today we blame modern technology for memory and attention disorders instead.

Sociologists have a name for this problem: pluralistic ignorance. It occurs whenever a group of people underestimate how much others around them share their attitudes and beliefs.

“I’m not racist myself, but I couldn’t employ a black person as my colleagues wouldn’t accept it.”

Complaining is easy – much easier than getting out of your chair. Many critics have worried about the rise of so-called slacktivism, a generation of people who think clicking “like” on a Facebook page is enough to foment change. Dissent becomes a social pose.

The book’s position is that online activism helps act as an instigator of, rather than a replacement for, real-life protest. Really, I just liked the phrasing of the last sentence.

It strikes me that social media embodies the connection between action and expression.

Charlie Beckett said this, about the theory in the previous quote.

… this reflexively dystopian view is just as misleading as the giddy boosterism of Silicon Valley. Its nostalgia is false; it pretends these cultural prophecies of doom are somehow new and haven’t occurred with metronomic regularity, and in nearly identical form, for centuries.

(Standing ovation) I share this opinion, and I was delighted to read it in the epilogue. We’ve always had scares about new technologies, and we always will; just read some history and you’ll find it’s an inescapable conclusion. There never was a more innocent time, we’re not all doomed because we read on our smartphones instead of newspapers, and no-one is becoming more stupid because we have better tools to outsource some of our processing to. Everything old is new again.

Samsung, Voice Control, and Privacy. Many Questions.

It’s interesting to see the fuss around Samsung’s use of voice control in its Smart TVs, because we’re going to see this happening with increasing frequency and urgency as voice-powered devices are more deeply integrated into our personal spaces. As well as other Smart TV models, Microsoft Kinect is already in millions of homes, and Amazon Echo is beginning to roll out.

These devices work in similar ways: you activate voice search with an opt-in command (“Hi TV”; “Xbox On”; “Alexa”). Android (“OK Google”) and iOS (“Hey Siri”) devices also function this way, but usually require a button press to use voice search (except when on the home screen of an unlocked device) — although I imagine future iterations will more widely use activation commands, especially on home systems like Android TV and Apple TV (with HomeKit).

Whatever system is used, once it’s activated by voice a brief audio clip of the user’s command or query is recorded and transmitted to a cloud server stack, which is required to run the deep learning algorithms necessary to make sense of human speech.
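To make that flow concrete, here’s a minimal sketch in Python of the activation pattern these devices share: listen locally for a wake word, then capture only a brief window of audio and send that (and only that) off-device. It’s purely illustrative; the wake-word list, the five-second clip length, and the event format are my assumptions, not any vendor’s actual implementation:

```python
# Illustrative sketch only: the shared activation pattern of voice devices.
# Wake words, clip length, and the event format are hypothetical.

WAKE_WORDS = {"hi tv", "xbox on", "alexa"}
CLIP_SECONDS = 5  # assumed length of the capture window after activation

def process_stream(events):
    """Walk a stream of (timestamp, audio-chunk-as-text) events.

    The device listens locally for a wake word; only after activation does
    it record a short clip for cloud analysis. Returns the list of clips
    that would actually leave the device.
    """
    uploaded = []
    recording_until = None  # end time of the current capture window, if any
    clip = []
    for t, chunk in events:
        if recording_until is not None:
            if t < recording_until:
                clip.append(chunk)  # inside the window: captured
                continue
            # window closed: this clip is the only audio sent off-device
            uploaded.append(" ".join(clip))
            recording_until, clip = None, []
        if chunk.lower() in WAKE_WORDS:
            recording_until = t + CLIP_SECONDS  # start a capture window
    if recording_until is not None:  # stream ended mid-window
        uploaded.append(" ".join(clip))
    return uploaded
```

The privacy question in the rest of this section comes down to what ends up in that `uploaded` list: a false activation puts whatever you said in the next few seconds into it.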

The fear is that with any of these devices you could accidentally activate the voice service, then reveal personal data in the following few seconds of audio, which would be transmitted to the cloud servers — and potentially made available to untrusted third parties.

Given that this risk is present on all devices with voice activation, the differences I can see in the case of Samsung’s Smart TV are:

  1. the terms explicitly warn you that data leak is a possibility;
  2. the voice analysis uses third-party deep learning services instead of their own;
  3. Samsung don’t say who those third parties are, or why they’re needed; and
  4. it’s on your TV.

This leaves me with a lot of questions (and, I’m afraid, no good answers yet).

Could the first point really be at the root of the unease? Is it simply the fact that this potential privacy breach has been made clear and now we must confront it? Would ignorance be preferable to transparency?

If Microsoft’s Kinect is always listening for a voice activation keyword, and uses Azure cloud services for analysing your query, does the only difference lie in Samsung’s use of a third party? Or is it their vague language around that third party; would it make a difference if they made clear it would only be shared with Nuance (who also provide services for Huawei, LG, Motorola and more)? When the Xbox One launched there were concerns around the ‘always listening’ feature, which Microsoft alleviated with clear privacy guidelines. Is better communication all that’s needed?

If our options are to put trust in someone, or go without voice control altogether (something that’s going to be harder to resist in the future), then who do you trust with the potential to listen to you at home? Private corporations, as long as it’s them alone? No third parties at all, or third parties if they’re named and explained? Or what if a government set up a central voice data clearing service, would you trust that? What safeguards and controls would be sufficient to make us trust our choice?

Aside: what would be the effect if the service we’ve trusted with our voice data began acting on it? Say, if Cortana recognised your bank details, should it let you know that you’ve leaked them accidentally? What are the limits of that? Google in Ireland reports the phone number of the Samaritans when you use text search to find information about suicide; would it be different if it learned that from accidental voice leaks? What if a child being abused by an adult confided in Siri; would you want an automated system on Apple’s servers to contact an appropriate authority?

Finally, could the difference be as simple as the fact that Samsung have put this in a TV? Is it unexpected behaviour from an appliance that’s had a place in our living rooms for sixty years? If it were a purpose-built appliance such as Amazon’s Echo, would that change the way we feel about it?

This is just a small selection of the types of questions with which we’re going to be confronted with increasing frequency. There’s already a tension between privacy and convenience, and it’s only going to become stronger as voice technology moves out of our pockets and into our homes.

As I said, I don’t have answers for these questions. I do, however, have some (hastily considered) suggestions for companies that want to record voice data in the home:

  • Privacy policies which clearly state all parties that will have access to data, and why, and give clear notice of any changes.
  • A plainly-written explanation of the purpose of voice control, with links to the privacy policy, as part of the device setup process.
  • The ability to opt out of voice activation, with a hardware button to instigate actions instead.
  • Obvious audio and visual indicators that voice recording has started, and is taking place.
  • An easily-accessible way to play back, manage and delete past voice clips.

Many companies supply some or all of these already; I think we should be looking at this as a minimum for the next wave of devices.

Update: Here’s a look at how other companies communicate their privacy policies on monitoring.