Is Artificial Intelligence Prejudiced?

If the information being fed to a computer is biased, experts say, artificial intelligence will be biased too.

Artificial intelligence (AI) may be quicker and more capable than humans, but the one thing it hasn’t yet overcome is bias. It’s true: computers can be as prejudiced as humans. A computer program is only as good as the information that it’s fed, according to Andy Hickl, chief product officer at Intel’s Saffron Cognitive Solutions Group. “Artificial intelligence ultimately has bias baked into its decisions,” he said.

This can result from assumptions humans made when designing algorithms that attempt to replicate human judgment, or from assumptions machines make when they learn from data. “If the machine only has information about how a portion of people act, and no knowledge of how the rest of the world speaks, acts or behaves, then we implicitly bake bias into the results produced by artificial intelligence technology,” said Hickl.

Underlying Stereotypes

One example is the growing trend of using “word embeddings” for screening resumes. This technique uses word associations to teach computers how to identify potential job candidates (a sketch of how such associations can be inspected follows below). If there’s a possibility of bias, some AI systems are designed to ask for a human to examine the…
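To see how word associations can carry that kind of skew, here is a minimal Python sketch, assuming the gensim library and the publicly available “glove-wiki-gigaword-50” GloVe vectors. It is an illustration of inspecting embeddings for gendered associations, not the resume-screening system Hickl describes, and the occupation words are chosen arbitrarily.

    # Minimal sketch: inspect pretrained word embeddings for gendered skew.
    # Assumes the gensim library; "glove-wiki-gigaword-50" is a public GloVe
    # vector set chosen here for illustration, not the article's system.
    import gensim.downloader as api

    # Download/load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
    vectors = api.load("glove-wiki-gigaword-50")

    # Arbitrary occupation words used only to illustrate the comparison.
    occupations = ["engineer", "nurse", "programmer", "receptionist"]

    for word in occupations:
        # Cosine similarity between the occupation and each gendered pronoun;
        # a large gap suggests the training text links the job to one gender.
        to_he = vectors.similarity(word, "he")
        to_she = vectors.similarity(word, "she")
        print(f"{word:>14}  he={to_he:.3f}  she={to_she:.3f}  gap={to_he - to_she:+.3f}")

A screening tool built on such vectors inherits whatever associations its training text contains, which is the kind of baked-in bias Hickl warns about.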


