Robots and Homeopaths

How robots mess with our minds is a great article on emotional responses to robots by artist and engineer Alexander Reben. In it he explores how people are able to speak intimately to machines, and the implications of that as robots gain ‘personality’. I was particularly interested in a quote from the philosopher John Campbell:

One of the possibilities this opens up is automating aspects of our emotional lives where we usually depend on other people for sympathy and support. Rather than relying on your partner to listen to the problems you’ve had at work all day, why not explain them to a sympathetic robot that makes eye contact with you, listens with apparent interest, making all the right noises, remembers and cross-indexes everything you say?

This made me think of homeopathy. Although there’s no evidence at all that the remedies themselves are effective, many people who use homeopathy claim to feel better afterwards, or even fully cured. One explanation is that the consultation with a practitioner can have a positive effect on well-being; just having someone listen compassionately can make you feel better.

So in the future we could be prescribed a session with a sympathetic robot to make us feel better.

People and Robots Working Together

So many great insights in this piece by Dr James E. Young about managing people and robots working together. Like how even a simple robot could pressure people into sticking at a task:

In our research, we showed how a simple, small robot could pressure people to continue a highly tedious task—even after the people expressed repeated desire to quit—simply with verbal prodding.

The tendency to anthropomorphize, assigning a personality to a non-human object, is well known, but it’s still amusing to think of people cursing their robot co-worker:

Most surprising was not that people obeyed the robot, but the strategies they employed to try to resist the pressure. People tried arguing with and rationalizing with the robot, or appealing to an authority who wasn’t present (a researcher), but either continued their work or only gave up when the robot gave permission.

I once read something (can’t find it now) about how our natural deference to authority leads us to presume computers are infallible, even if that means the satnav leads us into the sea. I can see this happening:

One could imagine a robot giving seemingly innocuous direction such as to make a bolt tighter, change a tool setting or pressure level, or even to change which electronic parts are used. However, what if the robot is wrong (for example, due to a sensor error) and yet keeps insisting? Will people doubt themselves given robots’ advanced knowledge and sensor capability?

The very notion of a sarcastic robot with a shit-eating grin made me laugh too much:

Research has shown people feel less comfortable around robots who break social norms, such as by having shifty eyes or mismatched facial expressions. A robot’s personality, voice pitch or even the use of whispering can affect feelings of trust and comfort.

Working with a robot that always grins while criticizing you, stares at your feet while giving recommendations, stares off into space randomly or sounds sarcastic while providing positive feedback would be awkward and uncomfortable and make it hard to develop one’s trust in the machine.

I began reading this as a cute, slightly funny piece about the future, then realised that this is happening right now, and it stopped being quite so funny. I, for one, welcome our new robot co-workers.

The Future Is Coming Faster Than We Think

Today I read a fascinating article in the London Review of Books. The Robots Are Coming, by John Lanchester, is about the rise of cheap automation and the effect it’s going to have on the workforce and society at large. In his introduction he talks about the Accelerated Strategic Computing Initiative’s computer, Red, launched in 1996 and eventually capable of processing 1.8 teraflops – that is, 1.8 trillion calculations per second. It was the most powerful computer in the world until about 2000. Six years later, the PS3 launched, also capable of processing 1.8 teraflops.

Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.

This makes me think of IBM’s Watson, a question-answering system ten years in the making at a cost in excess of $1 billion, with an estimated $3 million of hardware powering it – and coming soon to children’s toys.