Talking to Léonie Watson about computer vision and blindness

I recently had the pleasure of talking to Léonie Watson for a segment on the rehab Tech Talks podcast, which I co-present for the company I work for. Léonie works with The Paciello Group on web standards accessibility, and I've met her on a handful of occasions through conferences that she attends and speaks at. She is also completely blind, having lost her sight as an adult due to complications of diabetes (I highly recommend her personal account of how it happened, Losing Sight).

As our podcast topic was computer vision—extracting semantic data from photos and video using machine learning—I was keen to find out how this could help people with impaired vision. I was absolutely amazed and delighted by what I learned, and I wanted to share this extract from our conversation.


Learning To Live With Learning Machines

In my recent talk, OK Computer, I briefly mention the importance of privacy in systems powered by machine learning, and hint at potential difficulties facing Hello Barbie, the new AI-powered doll from Mattel, when the wider world becomes aware that third parties could—or, will—be listening to what children say to it. Well, the wider world has become aware.

Hell No Barbie is a consumer campaign to raise awareness about Hello Barbie, and to prevent parents from buying it. They give eight reasons why Hello Barbie is bad, ranging from the right to private conversations, to the right to be free from being advertised to.

I’m not unsympathetic to these arguments. I agree with most of them (to varying degrees). And it certainly looks as though there are very valid concerns around the security of Hello Barbie, with it reportedly being open to hacking.

But I think an outright dismissal, a refusal to engage with AI-powered toys, misses out on the opportunities that they can bring. CogniToys Dino is another toy for children, but with a sharper focus on education. And I know from personal experience how much my young nieces and nephews like asking questions of Google Voice Search. With these interfaces, the children are gaining knowledge; but each carries the same privacy and security implications as Hello Barbie.

I think we need not to reject AI toys for children, but to engage with them on better terms. We need to ask the questions necessary to create an ethical framework for accepting AI into our homes. In the article A Toy That Wants to Phone Home on Mediamocracy.org, they suggest some questions that we might want to start with, around data, privacy, commercialisation, and social implications.

This is the approach that I endorse: learning to live with new technology; understanding it, controlling it, and making it work to our benefit.