Machine learning systems appear vulnerable to unconscious biases

Dive Brief:

Machine learning systems can be vulnerable to discriminatory biases, according to Yieldify CTO Richard Sharp in an article for Entrepreneur.com. A number of studies have shown that unconscious bias can slip into machine learning algorithms, such as those behind personalized online advertising, if no effort is made to ensure the algorithms are fair. This could be particularly vexing as machine learning moves into areas like credit scoring, hiring and criminal sentencing.

Dive Insight:

An examination of machine learning systems found they can discriminate by propagating prevailing social biases. "If you train a machine learning algorithm on real data from the world we live in, it will pick up on these biases," Sharp wrote. "And to make matters worse, such algorithms have the potential to perpetuate or even exacerbate these biases when deployed." Sharp suggests…
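The mechanism Sharp describes can be illustrated with a minimal sketch. The data and "model" below are entirely hypothetical: historical hiring records encode a past bias between two groups, and a naive model that simply learns the empirical hire rate per group reproduces that bias in its predictions.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired).
# The labels encode a past bias: group "A" candidates were hired far
# more often than group "B" candidates.
history = ([("A", 1)] * 70 + [("A", 0)] * 30 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train(records):
    """Learn the per-group hire probability from historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

model = train(history)
print(model)  # {'A': 0.7, 'B': 0.3} -- the model mirrors the historical bias
```

The model commits no explicit discrimination; it simply fits the data it was given, which is exactly how biased training data becomes biased predictions.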


Link to Full Article: Machine learning systems appear vulnerable to unconscious biases