Go read this story on how Facebook’s focus on growth stopped its AI team from fighting misinformation

Facebook has always been a company focused on growth above all else. More users and more engagement mean more revenue. The cost of that single-mindedness is laid out clearly in this excellent story from MIT Technology Review. It details how attempts by the company's AI team to tackle misinformation with machine learning were hampered by Facebook's unwillingness to limit user engagement.

"If a model reduces engagement too much, it's discarded. Otherwise, it's deployed and continually monitored," writes Karen Hao of Facebook's machine-learning models. "But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff."

On Twitter, Hao noted that the story is not about "corrupt people [doing] corrupt things." Instead, she says, "It's about good people genuinely trying to do the right thing. But they're trapped in a rotten system, trying their best to push a status quo that won't budge."

The story also adds more evidence to the accusation that Facebook's desire to appease conservatives during Donald Trump's presidency led it to turn a blind eye to right-wing misinformation. This appears to be due, at least in part, to Joel Kaplan, a former member of the George W. Bush administration who is now Facebook's vice president of global public policy and its "highest-ranking Republican." As Hao writes:

Every Facebook user has some 200 "traits" attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan's team began using these traits to assemble custom user segments reflecting largely conservative interests (for example, users who engaged with conservative content, groups, and pages). Then they would run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, "fairness" does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater share of misinformation, as judged by public consensus, then the model should flag a greater share of conservative content. If liberals are posting more misinformation, it should flag their content more often, too.

But members of Kaplan's team followed the opposite approach: they took "fairness" to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, the former researcher told me, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns. They said the model could not be deployed until the team fixed the discrepancy, which effectively rendered it meaningless. "There's no point, then," the researcher said. A model altered in that way "would have literally no impact on the actual problem" of misinformation.
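To make the distinction in that excerpt concrete, here is a minimal sketch contrasting the two readings of "fairness." All names and numbers are hypothetical illustrations, not data from the story or from Facebook's actual systems: one reading ties flagging to how much misinformation each segment actually posts, the other demands equal impact across segments regardless of base rates.

```python
# Hypothetical illustration of the two competing "fairness" readings described
# above. All figures are made up for the example.

posts = {"segment_a": 1000, "segment_b": 1000}          # posts reviewed per segment
actual_misinfo = {"segment_a": 120, "segment_b": 60}    # posts judged misinformation
flagged = {"segment_a": 110, "segment_b": 55}           # posts the model flags


def flag_rate(segment: str) -> float:
    return flagged[segment] / posts[segment]


def proportional_fairness() -> None:
    """The reading in the Fairness Flow case study: flagging should track how
    much misinformation each segment actually posts, so a segment posting more
    misinformation is flagged more."""
    for s in posts:
        print(f"{s}: misinfo rate {actual_misinfo[s] / posts[s]:.0%}, "
              f"flag rate {flag_rate(s):.0%}")


def equal_impact_fairness() -> None:
    """The opposite reading attributed to Kaplan's team: the model must not
    affect one segment more than the other, regardless of base rates."""
    rates = [flag_rate(s) for s in posts]
    print("equal impact?", max(rates) - min(rates) < 0.01)


proportional_fairness()
equal_impact_fairness()
```

In this toy example the model satisfies the first definition (it flags segment_a roughly twice as often, matching its higher misinformation rate) but fails the second, which is exactly the tension the excerpt describes.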

The story also suggests that the work Facebook's AI researchers have done on algorithmic bias, in which machine-learning models unintentionally discriminate against certain groups of users, was undertaken at least in part to preempt accusations of anti-conservative bias and head off potential regulation by the US government. But pouring more resources into bias has meant ignoring problems involving misinformation and hate speech. Despite the company's lip service to AI fairness, the guiding principle, Hao says, is still the same as ever: growth, growth, growth.

[T]esting algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook's news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definitions to use in any given situation, they aren't enforced.

You can read Hao's full story at MIT Technology Review here.
