OpenAI Bans ChatGPT Use in Political Campaigns

OpenAI, the innovative force behind ChatGPT, has taken a stand against disinformation with the launch of tools designed to safeguard upcoming elections worldwide. Citing concerns raised by the World Economic Forum about AI’s potential disruption in elections across the US, UK, EU, and India, OpenAI underscored its commitment to responsible AI use.

OpenAI said it is not allowing ChatGPT to be used for political campaigning and lobbying until the technology's potential for influence is better understood. The company said it is prioritizing safe and responsible deployment of its tools during elections, focusing on preventing misuse such as deepfakes, influence operations, and chatbots impersonating candidates.

To improve safety, OpenAI red-teams new systems, gathers feedback from users and external partners, and builds in safeguards to reduce potential harm. Notably, DALL·E includes guardrails that reject requests to generate images of real people, including candidates.

To maintain transparency and trust, OpenAI explicitly prohibits chatbots that pose as real people or institutions. Applications that misrepresent voting processes or eligibility, or that discourage voting, are also strictly disallowed, reinforcing the company's commitment to election integrity.