Small Numbers, Huge Changes

In a recent interview, Sundar Pichai of Google discusses improvements in the accuracy of their voice recognition:

Just in the last three years, we have taken things like error in word recognition down from about 23 percent to 8 percent.

That’s the difference between misunderstanding one word in four and one word in twelve; the difference between completely unusable and merely annoying.

Andrew Ng, formerly of Google and now of Baidu, expands on this:

Most people don’t understand the difference between 95 and 99 percent accurate. Ninety-five percent means you get one-in-20 words wrong. That’s just annoying, it’s painful to go back and correct it on your cell phone.

Ninety-nine percent is game changing. If there’s 99 percent, it becomes reliable. It just works and you use it all the time. So this is not just a four percent incremental improvement, this is the difference between people rarely using it and people using it all the time.

It’s fascinating to see how these small numbers make a huge difference; you might think Google’s 92% accuracy is only a little short of Baidu’s 95%, but in practical terms there’s a big gulf. And it gives me pause to think about the money, human resources and computing power spent on chasing those small, huge increases in accuracy.
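The arithmetic behind this can be sketched in a few lines. The function below simply converts a word-accuracy figure into the expected number of corrections per message; the message length and the comparison points are illustrative, not taken from the interviews:

```python
def expected_errors(accuracy: float, words: int) -> float:
    """Expected number of misrecognised words in a message of `words` words,
    assuming each word is recognised independently at the given accuracy."""
    return (1 - accuracy) * words

# Compare a few accuracy levels over a 100-word dictated message.
for accuracy in (0.92, 0.95, 0.99):
    errors = expected_errors(accuracy, 100)
    print(f"{accuracy:.0%} accurate -> ~{errors:.0f} corrections per 100 words")
```

At 95% you’re fixing roughly five words per hundred; at 99% just one, which is why the last few percentage points matter far more than the raw numbers suggest.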

A conversation with a bot

It’s approaching 3 AM on Christmas Day in 2013, and a South Korean teenage girl who goes by the Twitter handle @jjong_gee texts her friend, Junmyun, to confess a personal secret. She’s depressed, and she needs support. “There was a man named Osho who once said ‘don’t be too serious, life is like a moving picture,’” replied Junmyun. “If you treat what comes at you like a game, happiness will come. I want to see you happy.” The girl tweeted a screenshot of the text, thanking him for the kind words. But Junmyun, with his words of wisdom, is not a real person. Junmyun is a bot programmed inside a popular Korean texting app called FakeTalk, or Gajja-Talk in Korean.

The App That Lets Depressed Teens Text with Celebrities and Dead Friends. Every time I read something like this I remember how brilliantly prescient Black Mirror can be.

Of course, I had to have a go myself:

A conversation with a chat bot which ends with it declaring its love for me

What’s interesting is how, despite the bot coming nowhere near passing the Turing test, I still felt compelled to continue the conversation, and became quite nervous at the abrupt turn it took at the end. After all, I don’t want to hurt the feelings it doesn’t have.

An interview with Andrew Ng

The Huffington Post series, Sophia, brings ‘life lessons from fascinating people’. Their latest interview is with Andrew Ng, a Stanford University professor, co-founder of Coursera, and key member of the deep learning teams first at Google and now at Baidu. I really like some of the insights in his interview, the practicality with which he treats innovation and the easy way he explains hard concepts.

For example, here he talks about career advice:

I think that “follow your passion” is not good career advice. It’s actually one of the most terrible pieces of career advice we give people. Often, you first become good at something, and then you become passionate about it. And I think most people can become good at almost anything.

On retraining the workforce for a more heavily automated future:

The challenge that faces us is to find a way to scalably teach people to do non-routine non-repetitive work. Our education system, historically, has not been good at doing that at scale. The top universities are good at doing that for a relatively modest fraction of the population. But a lot of our population ends up doing work that is important but also routine and repetitive. That’s a challenge that faces our educational system.

On why machine learning is suddenly more popular, despite the technology being around for decades in some cases:

I often make an analogy to building a rocket ship. A rocket ship is a giant engine together with a ton of fuel. Both need to be really big. If you have a lot of fuel and a tiny engine, you won’t get off the ground. If you have a huge engine and a tiny amount of fuel, you can lift up, but you probably won’t make it to orbit. So you need a big engine and a lot of fuel. We finally have the tools to build the big rocket engine – that is giant computers, that’s our rocket engine. And the fuel is the data. We finally are getting the data that we need.

And the challenges of talking to computers:

Most people don’t understand the difference between 95 and 99 percent accurate. Ninety-five percent means you get one-in-20 words wrong. That’s just annoying, it’s painful to go back and correct it on your cell phone. Ninety-nine percent is game changing. If there’s 99 percent, it becomes reliable. It just works and you use it all the time. So this is not just a four percent incremental improvement, this is the difference between people rarely using it and people using it all the time.

It’s a really interesting interview; I encourage you to read it in full.

On filters and feelings

An interesting article from Wired on a recent study into how people use filters on their digital photographs.

“Serious hobbyists” use filters only to correct a problem—say, correct the exposure. “More casual photographers” are more likely to manipulate their images with filters or adjustments that make them appear more “artificial,” according to the study.

I used to be a bit of a filter snob, but now I tend to think that, certainly on Instagram, using a filter is less about trying to capture a moment than about capturing the way that moment felt. When you take a picture on a hot day and the heat doesn’t come across in the photo, filters like Sierra or Valencia are tools to help communicate that feeling to everyone else who views it.