Facebook has apologized after its AI mislabeled a video of Black men as "primates," calling it an "unacceptable error" that it is investigating to make sure it never happens again. As reported by The New York Times, users who watched the June 27 video, posted by the British tabloid the Daily Mail, received an automated prompt asking whether they would like to "keep seeing videos about primates."
As soon as Facebook realized what was happening, it disabled the entire topic recommendation feature, a spokesperson told The Verge in an email on Saturday.
"This was clearly an unacceptable error," the spokesperson said, adding that the company was investigating the cause to prevent it from happening again. "As we have said, while we have made improvements to our AI, we know it's not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations."
This case is just the latest example of an AI tool displaying gender or racial bias; facial recognition tools in particular have been shown to have problems misidentifying people of color. In 2015, Google apologized after its Photos app tagged photos of Black people as "gorillas." Last year, Facebook said it was investigating whether its AI-trained algorithms, including those used by Facebook-owned Instagram, were racially biased.
In April, the U.S. Federal Trade Commission warned that AI tools that exhibit "troubling" racial and gender bias could violate consumer protection laws if used to make decisions about credit, housing, or employment. As FTC privacy attorney Elisa Jillson put it in a post on the agency's website: "Hold yourself accountable, or be ready for the FTC to do it for you."