Podcast: Why does Artificial Intelligence often turn out racist?

Between April and August, a small startup conducted an experiment: it used artificial intelligence to judge a beauty contest. Out of 44 winners, only one had dark skin. That’s because AI systems pick up the biases of the world around them – the data they are trained on is a record of that world, prejudices included. There have been other examples of AI going rogue, such as Microsoft’s chatbot Tay, which turned racist. Tay was programmed to talk like a millennial and learned from conversations on Twitter and other messaging apps. So was this a programming flaw, or a reflection of the environment the bot absorbed? This episode of The Intersection speaks to the company about its experiment, what the results indicate and the need to…
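The mechanism is easy to demonstrate: a model trained on historical outcomes learns whatever correlations those outcomes contain, fair or not. The sketch below is purely illustrative – the data, features and model are invented, not the startup’s – but it shows how a classifier trained on past “winners” drawn mostly from one group will score a balanced pool of new entries unevenly.

```python
# Illustrative sketch only: hypothetical data and model, not the contest's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: feature 0 encodes skin tone (0 = light, 1 = dark).
# Historical "winner" labels were given almost exclusively to one group, so the
# label correlates with skin tone rather than with any genuine merit signal.
n = 1000
skin_tone = rng.integers(0, 2, size=n)
other_features = rng.normal(size=(n, 3))
label = (skin_tone == 0) & (rng.random(n) < 0.5)  # winners drawn mostly from one group

X = np.column_stack([skin_tone, other_features])
model = LogisticRegression().fit(X, label)

# Score a balanced pool of new entries: the learned weights simply
# replay the historical skew that was baked into the training data.
X_new = np.column_stack([rng.integers(0, 2, size=200), rng.normal(size=(200, 3))])
scores = model.predict_proba(X_new)[:, 1]
for tone in (0, 1):
    print(f"mean win score, skin_tone={tone}: {scores[X_new[:, 0] == tone].mean():.3f}")
```

Nothing in the code is explicitly “racist”; the disparity in the printed scores comes entirely from the training labels, which is the point the episode makes about real-world systems.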

