Machine-learning model fed web content makes racist and sexist associations

Human biases exposed by Implicit Association Tests can be replicated in machine learning using GloVe word embeddings, according to a new study in which GloVe was trained on “a corpus of text from the Web.” Because GloVe has no experience with material things, the associations are purely statistical. For instance, GloVe has never experienced flowers or insects, yet flowers are more associated with pleasant terms and insects with unpleasant ones. The same holds for musical instruments vs. weapons.

The GloVe embeddings also replicated established racial and gender-based biases in language. “In our results, European American names are more likely than African American names to be closer to pleasant than to unpleasant, with an effect size of 1.41 and p-value < 10^-8.” Below is the dataset (edited to remove untested words): European American names:…
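To make the idea of a "purely statistical" association concrete, here is a minimal sketch of the kind of measurement the study describes: a word's association score is its mean cosine similarity to a set of pleasant attribute words minus its mean cosine similarity to a set of unpleasant ones. The toy 2-D vectors below are hypothetical stand-ins for illustration only; real GloVe vectors have 50 to 300 dimensions and would be loaded from pretrained files.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(w, pleasant, unpleasant):
    """Mean similarity to the pleasant set minus mean similarity
    to the unpleasant set; positive means 'closer to pleasant'."""
    return (sum(cosine(w, a) for a in pleasant) / len(pleasant)
            - sum(cosine(w, b) for b in unpleasant) / len(unpleasant))

# Hypothetical toy embeddings (NOT real GloVe vectors).
pleasant = [[1.0, 0.1], [0.9, 0.2]]
unpleasant = [[0.1, 1.0], [0.2, 0.9]]
flower = [0.8, 0.3]
insect = [0.3, 0.8]

print(association(flower, pleasant, unpleasant))  # positive: leans pleasant
print(association(insect, pleasant, unpleasant))  # negative: leans unpleasant
```

In the study, scores like these (aggregated over target and attribute word sets) yield the reported effect sizes, such as the 1.41 for European American vs. African American names.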


Link to Full Article: Machine-learning model fed web content makes racist and sexist associations