Facebook’s reckless censorship reinforces systemic biases
After Facebook came under fire for not doing enough to prevent the spread of misinformation and fake news during the 2016 US presidential election, Mark Zuckerberg made fixing Facebook his New Year’s resolution for 2018, whether that means “protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent.” There is no doubt that, given Facebook’s level of influence and reach, it has a responsibility to be more editorial and to remove content created with the intent to deceive or to promote hate. However, the way Facebook has been editing out certain voices has disproportionately silenced marginalized groups.
In early January, Facebook shut down Sesh Safety, a group providing harm-reduction information to drug users, on the basis that it violated Facebook’s Community Standards. The drug-related sections of the Community Standards indicate that Facebook does not allow the sale of drugs and “prohibit[s] the use of Facebook to facilitate or organize criminal activity that causes physical harm to people.” However, Sesh Safety is not a platform for drug sales; in fact, it educates people about the safest ways to use drugs in order to reduce physical harm and save lives.
Facebook’s decision to deactivate the group without warning took away a support network for over 40,000 people. Though it was eventually reactivated, the group has been shut down repeatedly throughout January and, most recently, on February 9th.
Facebook’s censorship of harm-reduction discussions reflects a narrow and stigmatized view of drug use. The group is being flagged automatically simply because it talks about drugs, a pattern that points to automatic biases on the part of the human reviewers as well as the coders who developed these algorithms. Having this community space is all the more crucial in the context of societal stigma against substance use, which makes governments hesitant to support harm-reduction policies. Facebook’s move took away one of the few resources available to people who already receive so little assistance and compassion from society.
Sesh Safety is an example of the kind of community-building that Facebook should be trying to promote and encourage. A grassroots effort, the group fills a gap on the Internet by providing harm-reduction advice in real time, from people with lived experience. Fundamentally, it provides a space where people are comfortable both offering and asking for advice, an exchange that offers immediate, practical help and also works to dispel internalized stigma.
Zuckerberg mentions in his resolution post that: “A lot of us got into technology because we believe it can be a decentralizing force that puts more power in people’s hands. (The first four words of Facebook’s mission have always been ‘give people the power’.)” Content on Facebook should absolutely be user-driven; it should not reflect and perpetuate systemic biases.
Unfortunately, there are many other examples of Facebook targeting marginalized groups. Black activists have had their accounts banned for speaking out against racism; activist Leslie Mac, co-founder of Safety Pin Box, was temporarily banned from Facebook after posting, “White folks. When racism happens in public—YOUR SILENCE IS VIOLENCE,” which apparently violated Community Standards. When Korryn Gaines broadcast her standoff with the police during the last few moments of her life, Facebook took her account offline at the request of the police department, consequently removing video evidence of police brutality. Facebook also bowed to pressure from the Israeli government to remove content the government considers “incitement,” that is, the accounts of Palestinian activists. It has likewise taken down accounts of Rohingya activists who speak out about the injustices they experience at the hands of the Burmese military.
Facebook seems to consistently follow the position of governments, apparently naïve to the fact that governments do not always, and in fact rarely, have the interests of marginalized groups at heart. Facebook claims to empower “the people,” but it is doing the opposite by simply reinforcing systemic biases.
Facebook has declared that it is committed to actively monitoring content on the site, but warns that there are bound to be errors, since “the cases we review aren’t the easy ones: they are often in a grey area where people disagree,” and Facebook must moderate a high volume of them. These challenges are certainly overwhelming, but they neither excuse nor fully explain the frequency of the mistakes made. Facebook’s actual approach to reviewing content is systematically problematic and ill-suited to the task at hand.
Facebook defines hate speech as “content that directly attacks people based on their: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, or gender identity, or serious disabilities or diseases.” Not included on this list are categories such as age and social class, which Facebook deems “unprotected.” Facebook will only take action against content that attacks what it calls “protected groups.” First off, the fact that Facebook gets to decide who is worthy of protection is unsettling.
Furthermore, a ProPublica article from June 2017 published Facebook’s internal training materials for reviewers, which included the quiz question: “Which of the below subsets do we protect? Female drivers, black children or white men?” The correct answer was white men.
This question conveys a sick sense of humour. According to Facebook’s logic, if a group belongs to a protected category in one respect and an unprotected category in another, it is considered unprotected. Since “drivers” and “children” are unprotected categories, content attacking female drivers or black children would be allowed. Even from a purely logical standpoint, this does not make any sense. As astrophysicist Chanda Prescod-Weinstein remarked on Twitter, “Where did @facebook’s engineers take their math classes? Members of subset C of set A are still members of A.”
https://twitter.com/ibjiyongi/status/880063075151425537
Moreover, Facebook completely misses the point of moderating content for hate speech. Its procedure systematically protects those who already benefit the most from the system, perpetuating inequality. It is equally alarming that nobody at Facebook appears to have questioned the effectiveness of an anti-hate-speech procedure that protects white men over black children, or to have treated that outcome as an indication that the procedure is flawed.
Another absurd example: migrants can be called “filthy” but not “filth.” The document states that such an attack is not permissible “when the comparison is in the noun form.” The rule here is even more blatantly arbitrary, and just as ineffective.
In an article entitled “Facebook’s Community Standards: How and Where We Draw the Line,” Facebook’s Head of Global Policy Management Monika Bickert stated that “being as objective as possible is the only way we can be consistent across the world.” This emphasis on so-called “objectivity” seems to be reflected in the standardized procedure prescribed to reviewers, which, in addition to being illogical and regressive, does not encourage reviewers to pay attention to context (which would make it obvious that black children are a subset of black people, and therefore vulnerable to hate-speech attacks). There is little intrinsic value in standardizing reviewers’ responses; it is far more important that those responses be appropriate and make Facebook a safe environment for those who are not white men.
Facebook should provide better equity training to its staff. Equity is not a dichotomous breakdown into groups that deserve protection and groups that don’t. There is no exhaustive list of the forms discrimination can take, so it is futile to prescribe formulas for identifying hate speech. Instead, Facebook should educate its staff about the historical events that led to current systemic inequality, and about the important role and responsibility Facebook has in breaking down these barriers.
It would also help to hire a more diverse staff. In 2017, women held only 19% of tech jobs at Facebook, and in the US, Hispanic people made up 5% of its employees while Black people made up just 3%.
Marginalized groups have already demonstrated their drive and willingness to carve out spaces for themselves on Facebook. Facebook has the potential to be a platform where communities are free to contribute underrepresented opinions, influence public discourse, and push back against systemic oppression and stigma. Facebook should be a place where people reclaim power, where people come first—and not just white men.
This article is part of an ongoing series in which The Strand tackles issues relating to systemic oppression, privilege, and identity. All are welcome to contribute to the discussion. Pitches should be directed to [email protected].