Facebook stockpile defense
1/15/2024

A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and the wiping out of “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi, the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Forces and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through filtering by Facebook’s automated, machine-learning-based process, which allows its global advertising business to operate at a rapid clip. “Our ad review system is designed to review all ads before they go live,” according to a Facebook ad policy overview.

As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies. While these technologies allow the company to skirt the labor issues associated with human moderators, they also obscure how moderation decisions are made behind secret algorithms.

Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, it had no equivalent algorithm in place to detect “Hebrew hostile speech” like racist rhetoric and violent incitement. Following the audit, Meta claimed it had “launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content.” Content, that is, like an ad espousing murder.

Amid the Israeli war on Palestinians in Gaza, Nashif was troubled enough by the explicit call in the ad to murder Larudee that he worried similar paid posts might contribute to violence against Palestinians. Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide. (Last year, another group ran test ads inciting against the Rohingya, a project along the same lines as 7amleh’s experiment; in that case, too, all the ads were approved.)

In light of assurances from Facebook that safeguards were in place, Nashif and 7amleh, which formally partners with Meta on censorship and free expression issues, were puzzled. The quick removal of the Larudee post didn’t explain how the ad had been approved in the first place.

Curious whether the approval was a fluke, 7amleh created and submitted 19 ads, in both Hebrew and Arabic, with text that deliberately and flagrantly violated company rules, as a test for Meta and Facebook.

“Meta has a track record of not doing enough to protect marginalized communities.”