OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation targeting the U.S. presidential election.
The operation used ChatGPT to generate fake news articles and social media posts, though OpenAI reports it gained little traction with audiences.
This move follows previous actions by OpenAI against similar state-affiliated misuse of ChatGPT.
In May, for instance, OpenAI disrupted five covert campaigns that sought to manipulate public opinion using its models.
The Iranian group, known as Storm-2035, has been active since 2020.
Its apparent goal is to sow division rather than promote any specific policy, using AI to churn out fake news articles and misleading social media posts.
OpenAI’s investigation into this group was supported by a Microsoft Threat Intelligence report, which helped identify Storm-2035 and its activities.
The group operated several fake news websites, such as "evenpolitics.com," and spread misinformation on social platforms including X (formerly Twitter) and Instagram, though its posts drew minimal engagement.
With the U.S. presidential election approaching, OpenAI anticipates more such operations, as AI tools make it easier and cheaper to create misleading content quickly.