A new study has revealed that social media giants Meta and X approved advertisements containing violent hate speech against Muslims and Jews in the lead-up to Germany’s federal elections.
The research, conducted by corporate responsibility group Eko, tested the platforms’ ad review processes by submitting ads with inflammatory content, including calls for violence and AI-generated images depicting mosques and synagogues being burned.
The findings showed that most of the ads were approved within hours. X allowed all ten ads submitted, while Meta approved five and rejected the rest, flagging them for potential political or social sensitivity.
The approved ads, however, contained language that dehumanized Muslim refugees and called for violence against them. Some also included antisemitic messages linking Jewish people to conspiracies.
Eko also noted that Meta failed to label the AI-generated images in the ads, despite its policy requiring disclosure of AI-generated content in political advertising.
The organization disabled all test ads before they went live to prevent exposure to users.
The study raises concerns about content moderation on social media, especially during elections.
The European Commission, responsible for enforcing the Digital Services Act (DSA), has ongoing investigations into both Meta and X over election security and illegal content.
However, no final decisions have been made on potential penalties.
With Germany’s elections just hours away, critics argued that the EU’s online governance laws have done too little to stop harmful content from spreading.
Other studies have also suggested that X and TikTok’s algorithms favor far-right content in Germany.
Regulators are now being urged to take stronger action against tech companies failing to enforce their own policies.