Meta Oversight Board rules against Holocaust denial content

The board ruled that Instagram erred in leaving up a post that questioned the number of Holocaust victims and existence of crematoria

Photo illustration: the Instagram logo displayed on a smartphone among other social media networks, Brussels, Belgium, January 22, 2024. (Jonathan Raa/NurPhoto via Getty Images)

Citing a submission from a prominent Jewish group, the Meta Oversight Board overturned the social media giant’s decision to leave up an Instagram post that spread false information about the Holocaust, according to an announcement made on Tuesday. 

The Oversight Board, an independent entity created by Meta to review its decisions to remove or hide certain content, urged the company, which runs Facebook and Instagram, to adopt updated measures for tracking Holocaust denial. In its decision, the board pointed to a submission by the American Jewish Committee and its Jacob Blaustein Institute for the Advancement of Human Rights (JBI), whose comment on the case argued that Meta’s prohibition of Holocaust denial is “fully consistent with international human rights standards.” 

The content at the center of the case, originally posted on Instagram in September 2020, featured a meme of the “SpongeBob SquarePants” character Squidward. Under a speech bubble titled “Fun Facts About The Holocaust,” it questioned the number of victims of the Holocaust and the existence of crematoria at the Auschwitz-Birkenau concentration camp, according to the board. Instagram had allowed the post to remain on the platform when the board took up the case; Meta removed it in August, after the board announced it had selected the case for review.  

The Instagram post came one month before Meta updated its hate speech guidelines to explicitly prohibit Holocaust denial. It was reported six times for hate speech, four of those before the policy update, according to the board’s case decision, which noted that two of the six reports led to human reviews. The others were reviewed by automated systems and either deemed non-violating or automatically closed under Meta’s COVID-19 automation policies. 

In its review, the board found that the COVID-19 automation policy, created at the start of the pandemic when Meta asked reviewers to stay home and its review capacity shrank, was still in place as of May 2023. That policy led to some reports of the Holocaust post being automatically closed, the board said.  

Ted Deutch, AJC CEO, told Jewish Insider, “Jews are facing unprecedented antisemitic hatred online and offline today. In this troubling context, the Oversight Board’s affirmation that online Holocaust denial causes real harm to Jews – both through engendering fear in them and by spreading toxic conspiracies and stereotypes among users – is timely and encouraging.” 

Deutch called on other companies to take similar action to “prohibit this pernicious form of hatred, as Meta has for several years, and to ensure their policies against this and other forms of hate speech are enforced consistently.”

Felice Gaer, director of JBI, added in a statement, “Holocaust denial and distortion are never acceptable discourse; they are antisemitic attacks that fuel and perpetuate harmful conspiracies and stereotypes about Jews.” 

She called on other social media and technology companies to “heed the Oversight Board’s decision and explicitly prohibit and remove Holocaust denial content on their platforms and products, subject only to extremely limited exceptions, such as for content posted for the purpose of condemnation.” 

In addition to overturning the decision on the content, the Oversight Board recommended that Meta build a system for labeling enforcement data, which would help the company track Holocaust denial content and enforce the ban. A fifth of Americans ages 18-29 believe the Holocaust was a myth, according to a December poll from The Economist.

The overturn comes one month after two decisions by the Oversight Board in the wake of the Israel-Hamas war illuminated how the tech company is grappling with regulating violent content. 

In December, the social media giant issued two decisions, each a response to users’ appeals after posts were removed for violating Meta policies limiting the sharing of videos depicting terrorism or violent content. 

The decisions revealed that after Oct. 7, Meta lowered the bar for automatically removing content that might violate the platforms’ standards on hate speech, violence, incitement and harassment, in response to what it described as “an exceptional surge in violent and graphic content.” But the Oversight Board determined that the change resulted in the preemptive removal of content that users should have been allowed to post. 

In one case, a user appealed Meta’s decision to remove their Facebook post of a video depicting an Israeli woman being taken hostage by Hamas. They had posted it with a caption urging people to watch the video to understand the threats faced by Israel. But Meta removed the post, citing a policy that prohibits the sharing of videos of terrorist attacks. 

The second case reached a similar conclusion: that the removal of a graphic post, and its exclusion from Instagram’s “recommended” algorithm, violated users’ freedom of expression. In that case, an Instagram post showing the victims of an Israeli attack on Al-Shifa Hospital in Gaza had been removed for violating the company’s policy against violent images that depict internal organs. When both posts were reinstated, they carried a “disturbing” warning and were not visible to users younger than 18.

Jewish Insider’s Washington correspondent Gabby Deutch contributed reporting. 
