digital decision

Meta grapples with regulating violent content in Israel-Hamas war

The Oversight Board, an independent entity that reviews the company’s content-moderation decisions, found that the company’s policies after Oct. 7 limited users’ freedom of expression

Mark Zuckerberg, CEO of Meta, leaves the “AI Insight Forum” at the Russell Senate Office Building on Capitol Hill on September 13, 2023, in Washington, D.C.

Nathan Howard/Getty Images

The social media giant Meta on Tuesday offered a window into its handling of content related to the Israel-Hamas war, as the body tasked with overseeing its content-moderation decisions called for Facebook and Instagram to allow more graphic posts about the war to be shared in real time.

The Oversight Board, an independent entity created by Meta to review its actions removing or hiding certain content, issued two decisions on Tuesday — its first since the Oct. 7 Hamas terror attacks in Israel. Each was a response to users’ appeals, after posts were removed for violating Meta policies limiting the sharing of videos depicting terrorism or violent content. 

The decisions revealed that after Oct. 7, Meta lowered the bar for when it would automatically remove content that might violate the platforms’ standards on hate speech, violence, incitement and harassment, in response to what it described as “an exceptional surge in violent and graphic content.” But the Oversight Board determined that the change resulted in the preemptive removal of content that users should have been allowed to post. 

In one case, a user appealed Meta’s decision to remove their Facebook post of a video depicting an Israeli woman being taken hostage by Hamas. They had posted it with a caption urging people to watch the video to understand the threats faced by Israel. But Meta removed the post, citing a policy that prohibits the sharing of videos of terrorist attacks. 

After the attack, Meta saw a change in the reason people were posting such videos: not to glorify them, but to “condemn and raise awareness” and “to counter emerging narratives denying the October 7 events took place or denying the severity of the atrocities,” according to the Oversight Board. So Meta reversed its decision to take down these posts, but continued to exclude them from its “recommendations” algorithm. The Oversight Board said that keeping those posts from being recommended “does not accord with the company’s responsibilities to respect freedom of expression.”

The board reached a similar conclusion in the second case: the removal of a graphic post, and its exclusion from Instagram’s “recommended” algorithm, violated users’ freedom of expression. The post, which showed victims of an Israeli attack on Al-Shifa Hospital in Gaza, had been removed for violating the company’s policy against violent images that depict internal organs.

When both posts were reinstated, they included a “disturbing” warning and were not visible to users younger than 18.

The two decisions were the first issued under the Oversight Board’s new expedited review process, illuminating how the slow-moving body, created in 2020, has struggled to respond during fast-paced conflicts like this one. Meta’s bureaucratic web of content-moderation policies, fine-tuned by the company over years of external pressure on its approach to trust and safety, was not entirely equipped to handle the nuanced reasons people post violent content.

Prior decisions from the board related to antisemitism also reveal how Meta’s systems often mistakenly flag content that calls out or highlights violent speech rather than supporting it.

Another decision released this week reversed Facebook’s removal of four users’ posts quoting the Nazi propaganda minister Joseph Goebbels, none of which supported Nazism; rather, they highlighted the dangers of misinformation.

A September decision found that Meta erred in removing an Instagram post that showed someone criticizing the rapper Ye’s antisemitic remarks. The video had been flagged by the platform’s content-moderation systems as supporting hate speech when it was actually condemning it.
