How a Machine Learns Prejudice

If artificial intelligence takes over our lives, it probably won’t involve humans battling an army of robots that relentlessly apply Spock-like logic as they physically enslave us. Instead, the machine-learning algorithms that already let AI programs recommend a movie you’d like or recognize your friend’s face in a photo will likely be the same ones that one day deny you a loan, lead the police to your neighborhood or tell your doctor you need to go on a diet. And since humans create these algorithms, they’re just as prone to biases that could lead to bad decisions—and worse outcomes. These biases create some immediate concerns about our increasing reliance on artificially intelligent technology, as any AI system designed by humans to be absolutely “neutral” could still reinforce humans’ prejudicial thinking instead…

Link to Full Article: How a Machine Learns Prejudice
