Machines Are Learning to be Sexist Like Humans. Luckily, They’re Easier To Fix.

People like to think computer programs are objective, but technology is actually learning from our human biases. Last month, a team of researchers from Boston University and Microsoft Research came up with a solution: an algorithm capable of identifying and eradicating offensive stereotypes in writing. The algorithm analyzes how often seemingly gender-neutral words are paired with gendered pronouns like “she” or “he” to find out which words a computer would most likely associate with men or women. The team expected the algorithm, which analyzed word representations covering three million words from Google News stories, to exhibit gender bias. But the results were even more extreme than they had predicted: for occupations, women were disproportionately billed as a “hairdresser”, “socialite” or “nanny”, while men were associated with “maestro”, “skipper” or “protégé”.
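The co-occurrence idea described above can be sketched in a few lines. This is a toy illustration, not the researchers' actual method or data: the mini-corpus, the window size, and the `gender_lean` scoring function are all invented here for demonstration. It simply counts how often each word appears near “she” versus “he”.

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only.
corpus = [
    "she works as a nanny and she loves it",
    "he is a skipper and he sails daily",
    "she is a hairdresser in town",
    "he became a maestro last year",
]

WINDOW = 4  # words on each side counted as "nearby" (an assumption)
PRONOUNS = {"she", "he"}

# word -> Counter({"she": n, "he": m})
counts = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok in PRONOUNS:
            lo, hi = max(0, i - WINDOW), i + WINDOW + 1
            for neighbor in tokens[lo:i] + tokens[i + 1:hi]:
                if neighbor not in PRONOUNS:
                    counts[neighbor][tok] += 1

def gender_lean(word):
    """Score in [-1, 1]: +1 means the word only occurs near 'she',
    -1 means only near 'he', 0 means balanced or unseen."""
    c = counts[word]
    total = c["she"] + c["he"]
    return 0.0 if total == 0 else (c["she"] - c["he"]) / total

print(gender_lean("nanny"))    # positive: occurs only near "she" here
print(gender_lean("skipper"))  # negative: occurs only near "he" here
```

In the actual study the association was measured geometrically in a learned word-embedding space rather than by raw counts, but the intuition is the same: a word's “gender lean” is how strongly its usage tilts toward one pronoun.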
Link to Full Article: Machines Are Learning to be Sexist Like Humans. Luckily, They’re Easier To Fix.