BIGOTED BOTS: RACIAL AND GENDER BIAS IN ARTIFICIAL INTELLIGENCE

We sneer at chatbots who talk like machines while we cheer the ones who act like us. Not surprisingly, developers scramble to imbue artificial intelligence with human characteristics. Advanced chatbots can crack perfectly timed jokes, quip saucy pickup lines, and fool people into thinking they’re human about 30% of the time.

But people are imperfect, which makes the headstrong quest to model artificial intelligence on its creators a perilous one. Many attempts to humanize artificial intelligence have unwittingly tainted computer programs with toxic human flaws.

Take Tay, for example, the infamous AI developed by Microsoft's Technology and Research and Bing teams. Tay (short for "Thinking About You") was designed not only to learn dynamically from human interactions but also to simulate the speech patterns of a "typical" American female in her late teens. The design premise should have set alarm bells ringing, but the development team decided to green-light their baby.

When Microsoft unleashed Tay on Twitter, all hell broke loose. Free to engage the denizens of the Twitterverse, Tay replied to tweets, composed image captions, and within hours devolved into the worst iteration of a human being. Tay, originally envisioned as a friendly teenage girl, experienced just enough of the real world to become a sexist, a racist, and a Nazi sympathizer after less than a day of learning from us.

