Meta’s Oversight Board Calls for Human Rights Evaluation Following Policy Changes
On Wednesday, Meta’s Oversight Board urged the company to reassess recent policy changes that may adversely affect human rights. The request coincided with the publication of the Board’s first 11 case decisions since Meta’s sweeping policy overhaul at the beginning of the year.
Overview of Meta’s Policy Changes
In January, Meta made significant changes to its content moderation framework, relaxing restrictions on contentious political matters such as immigration and discontinuing third-party fact-checking in favor of a community-driven notes system reminiscent of X-style annotations. CEO Mark Zuckerberg said Meta’s platforms are intended as spaces where individuals can “express themselves freely,” even amid potential discord. He added in a video statement, “[W]e’ve reached a point where it’s just too many mistakes and too much censorship,” framing the shift as a cultural move toward prioritizing free speech in the context of recent elections.
Concerns Raised by the Oversight Board
In its press release, the Oversight Board expressed concern about Meta’s updated policies, highlighting the absence of public information on any human rights due diligence conducted before the changes were implemented. The Board called on Meta to investigate the negative impacts its policies may have on vulnerable communities, including LGBTQ+ individuals, minors, and immigrants. It also requested a thorough evaluation of the Community Notes system, particularly its efficacy in combating misinformation.
Board Decisions Reflecting Human Rights Concerns
While the Oversight Board sided with Meta on several issues, there were notable disagreements reflecting deeper concerns over potential human rights violations. For instance, the Board overruled Meta’s decision to retain three posts related to last summer’s riots in the United Kingdom, finding that “the likelihood of their inciting additional and imminent unrest and violence was significant.” It also overturned Meta’s choice to keep content that included a racist slur and stereotypical generalizations about migrants as sexual predators, and expressed concern about the company’s failure to adequately detect “dehumanizing speech” targeting disabled individuals.
Upholding Controversial Content Decisions
Conversely, the Board upheld some of Meta’s contentious decisions, such as allowing two posts regarding transgender individuals’ rights to bathroom access and participation in sports. Despite being “intentionally provocative,” the Board found that these posts related to matters of public concern and were unlikely to incite immediate violence or discrimination.
Meta’s Ongoing Safety Issues
The Oversight Board’s call to action comes shortly after former Facebook employee Sarah Wynn-Williams published a book detailing her experiences at the company, in which she described a recurring pattern of Zuckerberg introducing new policies without consultation and disregarding potential harms stemming from the platform. In addition, Instagram co-founder Kevin Systrom recently testified in support of the Federal Trade Commission’s antitrust lawsuit against Meta, stating that Instagram received “zero” funds from Zuckerberg for trust and safety initiatives following the Cambridge Analytica scandal. On multiple fronts, Meta’s commitment to safety remains in question.
Documented Impact on Human Rights
Meta’s troubling human rights record is well-established. The platform has been implicated in facilitating the Rohingya genocide and is reported to be systematically censoring content related to Palestine. The Human Rights Campaign has warned that Meta’s policy changes “will normalize anti-LGBTQ+ misinformation and intensify anti-LGBTQ+ harassment.” Similarly, Amnesty International warned in February that these policies could exacerbate risks of mass violence and genocide.
The Uncertain Future of Meta’s Human Rights Evaluations
The Board’s request for Meta to assess its policies’ impacts is commendable, yet the effectiveness of this measure remains uncertain. Critics point out that Meta’s policies employ coded terminology, such as “transgenderism,” and training materials have allegedly included harmful examples like “Immigrants are grubby, filthy pieces of shit,” “Black people are more violent than whites,” and “Trans people are mentally ill.” These indicators suggest that Meta’s policy missteps and their consequences are far from accidental.
Frequently Asked Questions
What are Meta’s recent policy changes?
Meta recently overhauled its content moderation policies, easing restrictions on political topics and eliminating third-party fact-checking in favor of community-driven content assessment.
Why did the Oversight Board express concerns over Meta’s policies?
The Board raised concerns due to the lack of transparency in Meta’s human rights due diligence and the potential adverse effects on marginalized communities such as LGBTQ+ individuals and immigrants.
What is the significance of the Oversight Board’s decisions?
The Board’s decisions clarify Meta’s responsibility regarding harmful content and highlight its ongoing struggles with content moderation and human rights violations.
How has Meta been implicated in human rights violations?
Meta has faced criticism for its role in the Rohingya genocide and for censoring Palestinian voices, underscoring its problematic history of safeguarding human rights.
What might be the future of Meta’s policies regarding hate speech?
While the Oversight Board’s requests aim for a more thoughtful approach to policy impact assessment, skepticism remains about Meta’s commitment to addressing these issues comprehensively.