Go read this story on how Facebook’s focus on growth stopped its AI team from fighting misinformation

Facebook has always been a company focused on growth above all else. More users and more engagement mean more revenue. The cost of that single-mindedness is spelled out clearly in this fantastic story from MIT Technology Review. It details how attempts by the company’s AI team to address misinformation using machine learning were hampered by Facebook’s unwillingness to limit user engagement.

“If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored,” writes Karen Hao of Facebook’s machine learning models. “But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.”
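To make the gate Hao describes concrete, here is a minimal sketch of that kind of engagement check. The threshold, metric, and function names are assumptions for illustration, not Facebook’s actual pipeline.

```python
# Hypothetical sketch of an engagement-based deployment gate.
# Names and the 1% threshold are illustrative assumptions, not Facebook's code.

def engagement_gate(candidate_model, baseline_engagement, run_ab_test,
                    max_relative_drop=0.01):
    """Deploy a candidate model only if it doesn't reduce engagement too much."""
    # Measure engagement (e.g., likes, comments, shares per session) with the
    # candidate model serving a slice of traffic.
    test_engagement = run_ab_test(candidate_model)

    relative_drop = (baseline_engagement - test_engagement) / baseline_engagement
    if relative_drop > max_relative_drop:
        # The model hurts engagement beyond the allowed margin: discard it.
        return "discarded"

    # Otherwise ship it and keep monitoring the same metric in production.
    return "deployed and monitored"
```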

On Twitter, Hao noted that the article is not about “corrupt people [doing] corrupt things.” Instead, she says, “it’s about good people genuinely trying to do the right thing. But they’re trapped in a rotten system, doing their best to push a status quo that won’t budge.”

The story also adds more evidence to accusations that Facebook turned a blind eye to right-wing misinformation because of its desire to placate conservatives during Donald Trump’s presidency. This seems to have been due, at least in part, to Joel Kaplan, a former member of the George W. Bush administration who now serves as Facebook’s vice president of global public policy and is the company’s “highest-ranking Republican.” As Hao writes:

Every Facebook user has some 200 “traits” attached to their profile. These include various dimensions submitted by the user or estimated by machine learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using these traits to assemble custom user segments that reflected largely conservative interests (for example, users who engaged with conservative content, groups, and pages). Then, according to a former researcher whose work was subject to those reviews, they would run special analyses to see how content moderation decisions would affect posts from those segments.
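As a rough illustration of the workflow that excerpt describes, the sketch below groups users into a trait-based segment and measures how a moderation decision would affect that segment’s posts. The trait names and data layout are hypothetical.

```python
# Hypothetical sketch of trait-based segment analysis.
# The trait key "engages_with_conservative_pages" and the dict layout are
# illustrative assumptions, not Facebook's real trait schema.

def build_segment(users, trait, value):
    """Collect IDs of users whose profile trait matches the given value."""
    return {u["id"] for u in users if u["traits"].get(trait) == value}

def moderation_impact(posts, flagged_post_ids, segment_user_ids):
    """Fraction of the segment's posts that a moderation decision would touch."""
    segment_posts = [p for p in posts if p["author_id"] in segment_user_ids]
    if not segment_posts:
        return 0.0
    affected = sum(1 for p in segment_posts if p["id"] in flagged_post_ids)
    return affected / len(segment_posts)

# Example: how much would a given moderation decision hit one segment?
# segment = build_segment(users, "engages_with_conservative_pages", True)
# print(moderation_impact(posts, flagged_post_ids, segment))
```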

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often.

But members of Kaplan’s team followed the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, the former researcher told me, they blocked a medical misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns. They said the model could not be deployed until the team fixed the discrepancy, but that effectively made the model meaningless. “There’s no point, then,” the researcher said. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
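The two competing readings of “fairness” in the passage above can be made concrete with a short sketch: under the Fairness Flow reading, flag rates should track how much misinformation each group actually posts; under the reading attributed to Kaplan’s team, the model must simply not affect conservatives more than liberals. The counts, tolerance, and function names below are assumptions for illustration.

```python
# Hypothetical comparison of the two fairness definitions described above.
# All numbers and names are illustrative, not real Facebook data.

def proportional_fairness(flagged, misinfo_posted, tolerance=0.05):
    """Fairness Flow reading: each group's share of flagged content should
    track its share of actual misinformation (per some external consensus)."""
    total_flagged = sum(flagged.values())
    total_misinfo = sum(misinfo_posted.values())
    return all(
        abs(flagged[g] / total_flagged - misinfo_posted[g] / total_misinfo) <= tolerance
        for g in flagged
    )

def equal_impact(flagged, posts):
    """Reading attributed to Kaplan's team: the model must not affect
    conservatives more than liberals, regardless of base rates."""
    conservative_rate = flagged["conservative"] / posts["conservative"]
    liberal_rate = flagged["liberal"] / posts["liberal"]
    return conservative_rate <= liberal_rate

# If conservatives post more misinformation and the model flags it proportionally,
# it passes the first test but fails the second, which is the tension the article
# describes.
flagged = {"conservative": 80, "liberal": 20}
misinfo_posted = {"conservative": 800, "liberal": 200}
posts = {"conservative": 10_000, "liberal": 10_000}

print(proportional_fairness(flagged, misinfo_posted))  # True
print(equal_impact(flagged, posts))                    # False
```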

The story also suggests that the work of Facebook’s AI researchers on algorithmic bias, in which machine learning models unintentionally discriminate against certain groups of users, has been embraced at least in part because it preempts accusations of anti-conservative bias and forestalls potential regulation by the US government. But pouring more resources into bias work has meant ignoring problems involving misinformation and hate speech. Despite the company’s lip service to AI fairness, the guiding principle is still the same as ever: growth, growth, growth.

[T]esting algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

You can read Hao’s full story over at MIT Technology Review here.
