Facebook forms a special ethics team to prevent bias in its A.I. software

On Wednesday, at its F8 developer conference, Facebook revealed that it has formed a special team and developed dedicated software to ensure that its artificial intelligence systems make decisions as ethically as possible, without biases.

The tool, called Fairness Flow, is an internal project that Facebook says can determine whether a machine learning algorithm is biased, meaning it systematically gives certain groups worse results along lines of race, gender, or age.
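Facebook has not published how Fairness Flow works, but the idea it describes, checking whether a model systematically produces worse results for some groups, can be illustrated with a minimal sketch. The function names and the metrics below (per-group positive-prediction rate and accuracy) are illustrative assumptions, not Facebook's actual implementation:

```python
# Hypothetical sketch of a group-fairness check (NOT Facebook's actual
# Fairness Flow): compare a model's positive-prediction rate and accuracy
# across demographic groups and measure the largest gap.

from collections import defaultdict

def group_fairness_report(predictions, labels, groups):
    """Return per-group positive-prediction rate and accuracy.

    predictions: list of 0/1 model outputs
    labels:      list of 0/1 ground-truth labels
    groups:      list of group identifiers (e.g. an age bracket)
    """
    stats = defaultdict(lambda: {"n": 0, "pos": 0, "correct": 0})
    for pred, label, group in zip(predictions, labels, groups):
        s = stats[group]
        s["n"] += 1
        s["pos"] += pred
        s["correct"] += int(pred == label)
    return {
        g: {"positive_rate": s["pos"] / s["n"],
            "accuracy": s["correct"] / s["n"]}
        for g, s in stats.items()
    }

def max_gap(report, metric):
    """Largest difference in a metric between any two groups."""
    values = [r[metric] for r in report.values()]
    return max(values) - min(values)

# Toy example: group "b" receives fewer positive predictions and
# lower accuracy than group "a", which a bias check should flag.
report = group_fairness_report(
    predictions=[1, 0, 1, 1, 0, 0],
    labels=     [1, 0, 1, 0, 1, 0],
    groups=     ["a", "a", "a", "b", "b", "b"],
)
print(max_gap(report, "positive_rate"))  # a gap near 0 suggests parity
print(max_gap(report, "accuracy"))       # a large gap flags potential bias
```

In practice a tool like this would also need statistical significance tests, since small groups produce noisy rates, but the core comparison is the same: compute a metric per group and alert when the gap is large.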

Facebook, like other big tech companies with products used by large and diverse groups of people, is more deeply incorporating AI into its services. Facebook said this week it will start offering to translate messages that people receive via the Messenger app. Translation systems must first be trained on data, and the ethics push could help ensure that Facebook’s systems are taught to give fair translations.

Facebook said these efforts are not the result of any changes that have taken place in the seven weeks since it was revealed that data analytics firm Cambridge Analytica misused personal data of the social network’s users ahead of the 2016 election. But it’s clear that public sentiment toward Facebook has turned dramatically negative of late, so much so that CEO Mark Zuckerberg had to sit through hours of congressional questioning last month.

The tool is still in its early stages of development, a Facebook spokesperson said, and the team is talking with other internal teams to see how it can be applied elsewhere.


