Robots and Homeopaths

How robots mess with our minds is a great article on emotional responses to robots by artist and engineer Alexander Reben. In it he explores how people are able to speak intimately to machines, and the implications of that as robots gain ‘personality’. I was particularly interested in a quote from the philosopher John Campbell:

One of the possibilities this opens up is automating aspects of our emotional lives where we usually depend on other people for sympathy and support. Rather than relying on your partner to listen to the problems you’ve had at work all day, why not explain them to a sympathetic robot that makes eye contact with you, listens with apparent interest, making all the right noises, remembers and cross-indexes everything you say?

This made me think of homeopathy. Although there’s no evidence at all that the treatment itself is effective, many people who use it claim that after treatment they do feel better, or even fully cured. One explanation for this is that a consultation with a practitioner can have a positive effect on well-being; just having someone listen compassionately can make you feel better.

So in the future we could be prescribed a session with a sympathetic robot to make us feel better.

People And Robots Working Together

So many great insights in this piece by Dr James E. Young about managing people and robots working together. Like how even a simple, small robot could pressure people to up their game:

In our research, we showed how a simple, small robot could pressure people to continue a highly tedious task—even after the people expressed repeated desire to quit—simply with verbal prodding.

The tendency to anthropomorphise, assigning a personality to a non-human object, is well known, but it’s still amusing to think of people cursing their robot co-worker:

Most surprising was not that people obeyed the robot, but the strategies they employed to try to resist the pressure. People tried arguing with and rationalizing with the robot, or appealing to an authority who wasn’t present (a researcher), but either continued their work or only gave up when the robot gave permission.

I once read something (can’t find it now) about our natural deference to authority leading us to presume infallibility in computers, even if that means satnav leads us into the sea. I can see this happening:

One could imagine a robot giving seemingly innocuous direction such as to make a bolt tighter, change a tool setting or pressure level, or even to change which electronic parts are used. However, what if the robot is wrong (for example, due to a sensor error) and yet keeps insisting? Will people doubt themselves given robots’ advanced knowledge and sensor capability?

The very notion of a sarcastic robot with a shit-eating grin made me laugh too much:

Research has shown people feel less comfortable around robots who break social norms, such as by having shifty eyes or mismatched facial expressions. A robot’s personality, voice pitch or even the use of whispering can affect feelings of trust and comfort.

Working with a robot that always grins while criticizing you, stares at your feet while giving recommendations, stares off into space randomly or sounds sarcastic while providing positive feedback would be awkward and uncomfortable and make it hard to develop one’s trust in the machine.

I began reading this as a cute, slightly funny piece about the future, then realised that this is happening right now and it stopped being quite so funny. I, for one, welcome our new robot co-workers.

The Future Is Coming Faster Than We Think

Today I read a fascinating article in the London Review of Books. The Robots Are Coming, by John Lanchester, is about the rise of cheap automation and the effect it’s going to have on the workforce and society at large. In his introduction he talks about the Accelerated Strategic Computing Initiative’s computer, ASCI Red, launched in 1996 and eventually capable of processing 1.8 teraflops — that is, 1.8 trillion calculations per second. It was the most powerful computer in the world until about 2000. Six years later, the PS3 launched, also capable of processing 1.8 teraflops.

Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.
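As a back-of-envelope sketch of just how steep that drop is (the $55 million and £200 figures are from the article; the exchange rate of roughly $1.60 to the pound is my own assumption for the conversion):

```python
# Back-of-envelope: cost per teraflop, ASCI Red vs the PS3.
# Figures from Lanchester's article, except the PS3 dollar price,
# which assumes an exchange rate of about $1.60 to the pound.
red_cost_usd = 55_000_000       # ASCI Red's build cost
ps3_cost_usd = 200 * 1.60       # ~£200, converted (assumed rate)
teraflops = 1.8                 # both machines, per the article

red_per_tf = red_cost_usd / teraflops   # ~$30.6 million per teraflop
ps3_per_tf = ps3_cost_usd / teraflops   # ~$180 per teraflop

print(f"Cost per teraflop fell ~{red_per_tf / ps3_per_tf:,.0f}x in a decade")
# -> Cost per teraflop fell ~171,875x in a decade
```

Roughly five orders of magnitude in ten years, on these rough numbers.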

This makes me think of IBM’s Watson, a deep learning system, ten years in the making at a cost in excess of $1 billion, with hardware estimated at $3 million powering it, and coming soon to children’s toys.