Bernard Murphy
Moderator
The ACM writes a monthly blog for a general audience. This month's post covers the hidden unfairness/bias that can lurk in training datasets for machine learning, and it is interesting for at least a couple of reasons. Training datasets are built on data accumulated to date, so they unavoidably reflect current social preconceptions, whether subtle or not so subtle. Tools trained on these datasets will inevitably reinforce those same preconceptions rather than being objective. As ML is increasingly used in resume screening and candidate selection, policing, sentencing, and many other applications, it is vital that the algorithms be fair. This is not just a moral concern; it is equally a matter of self-interest. The point of using algorithms rather than people to make these decisions should be to optimize outcomes, not merely to offload the decision-making, which in turn should grow the economy and make us safer and more fulfilled, right?
How Machine Learning Advances Will Improve the Fairness of Algorithms | HuffPost