Meta Criticized For Content Moderation Amid Israel-Hamas Conflict

Meta's independent Oversight Board has challenged the company's approach to moderating content during the ongoing Israel-Hamas conflict, urging a better balance between expression and safety.

How does Meta moderate conflict-related content?

Meta's independent Oversight Board, after reviewing the company's actions following the October 7 attack by Hamas on Israel, expressed concern over Meta's aggressive use of automated moderation tools. The board's report, released on Tuesday, found that Meta's lowered thresholds for content removal increased the likelihood of mistakenly deleting significant, non-violating content, including posts that shed light on human suffering on both sides of the conflict. As of December 11, these thresholds had not been restored to pre-October levels.

Challenges And Accusations Of Bias

The report arrives as social media giants, including TikTok, Meta, and Google's YouTube, grapple with a surge in conflict-related content, ranging from graphic war imagery to misinformation. Allegations of bias have surfaced on both sides: some US lawmakers have accused TikTok of amplifying pro-Palestinian voices while neglecting antisemitic content, while human and civil rights organizations have criticized Meta for suppressing Palestinian voices and allowing anti-Palestinian content to spread on its platforms.

As of Q3 2023, Facebook has 3.049 billion monthly users, making it the world's largest app (image: Dado Ruvic)

Recommendations And Meta's Response To Content Issues

The board urged Meta to adapt swiftly to evolving conditions on the ground, striking a balance between freedom of expression and safety. It also raised concerns about the removal of content that could serve as evidence of human rights violations.

The Oversight Board, which began operating in 2021, typically takes 90 days to review a case, but expedited two cases within 12 days. These concerned the removal of a post depicting the aftermath of a strike on Gaza and a video of an Israeli woman taken hostage. The board ruled that both posts had significant public-interest value and should be allowed with appropriate warnings. Meta acknowledged the decisions, noting that the content had been reinstated with warning screens, and emphasized its commitment to balancing expression and safety on its platforms.

