Ads Endorsing ‘Holocaust’ Against Palestinians Get Green Light on Facebook

Facebook recently came under scrutiny for approving ads endorsing violence and genocide against Palestinians, as reported by The Intercept. The ads, which contained explicit calls for violence, bypassed Facebook’s content moderation filters, raising concerns about the platform’s enforcement of its own policies.

Founder of 7amleh Expresses Concerns

Nadim Nashif, founder of the Palestinian advocacy group 7amleh, pointed out Meta’s consistent failures in addressing issues affecting the Palestinian community. Speaking to The Intercept, Nashif expressed worries about Meta’s bias and discrimination against Palestinians, citing the approval of controversial ads as a notable example.

Policy Violations in Hebrew and Arabic Ads

Submitted in both Hebrew and Arabic, the test ads blatantly violated Facebook and Meta’s policies by promoting violence and even advocating the murder of Palestinian civilians. Nashif initiated the test of Facebook’s automated content filtering system after encountering a real ad calling for the assassination of Palestinian rights activist Paul Larudee.

Concerns about Moderation Effectiveness

Despite the explicit violations, the sponsored post initially passed through Facebook’s machine-learning tools designed to moderate harmful content. Although the ad was eventually removed following a complaint, questions linger about the initial approval and the overall efficacy of Facebook’s content moderation tools.

Ad Kan’s Involvement in the Controversy

The ads calling for the murder of Larudee were reportedly sponsored by Ad Kan, an Israeli right-wing group founded by former military and intelligence personnel. Ad Kan’s mission, as stated on its website, is to target “anti-Israeli organizations.”

Challenges in Automated Content Moderation

An external audit last year exposed Facebook’s lack of algorithms to detect violent Hebrew content targeting Arabs. The recent revelations cast doubt on whether Facebook’s AI tools are effective at combating hate speech at all, or whether they are applied selectively to content related to the Israeli-Palestinian conflict.

Parallels with Meta’s Previous Shortcomings

Drawing parallels with Meta’s previous shortcomings in protecting marginalized communities, Nashif referenced the Rohingya crisis in Myanmar. The perceived bias in Facebook’s content moderation becomes more evident when its aggressive censorship of Arabic content is compared with its apparently lax handling of Hebrew content.

Facebook’s Response and Ongoing Concerns

Facebook spokesperson Erin McPike attributed the accidental approval of the ads to inevitable mistakes in both machine and human moderation. This admission raises concerns about the consistency and accuracy of Facebook’s content review processes.

Implications for Palestinians

These incidents reinforce the perception that the world’s leading social platform applies its rules selectively, raising critical questions about its impartiality. In an environment marked by ethnic tension, such double standards can have severe real-world consequences, as observed in Myanmar, where Facebook posts are believed to have played a role in the genocide of Rohingya Muslims.