Learning To Live With Learning Machines

In my recent talk, OK Computer, I briefly mention the importance of privacy in systems powered by machine learning, and hint at potential difficulties facing Hello Barbie, the new AI-powered doll from Mattel, when the wider world becomes aware that third parties could, or will, be listening to what children say to it. Well, the wider world has become aware.

Hell No Barbie is a consumer campaign to raise awareness about Hello Barbie, and to prevent parents from buying it. They give eight reasons why Hello Barbie is bad, ranging from the right to private conversations, to the right to be free from being advertised to.

I’m not unsympathetic to these arguments. I agree with most of them (to varying degrees). And it certainly looks as though there are very valid concerns around the security of Hello Barbie, with it reportedly being open to hacking.

But I think an outright dismissal, a refusal to engage with AI-powered toys, misses out on the opportunities that they can bring. Cognitoys Dino is another toy for children, but with a sharper focus on education. And I know from personal experience how much my young nieces and nephews like asking questions of Google Voice Search. In the case of these interfaces, the children are gaining knowledge; but each carries the same privacy and security implications as Hello Barbie.

I think we need not to reject AI toys for children, but to engage with them on better terms. We need to ask the questions necessary to create an ethical framework for accepting AI into our homes. In the article A Toy That Wants to Phone Home, some questions are suggested that we might want to start with, around data, privacy, commercialisation, and social implications.

This is the approach that I endorse: learning to live with new technology; understanding it, controlling it, and making it work to our benefit.


Also published on Medium.