OpenAI is facing a new privacy complaint in Europe after its AI chatbot, ChatGPT, generated false and damaging information about an individual.
The case involves a man from Norway who discovered that the chatbot falsely claimed he had been convicted of murdering two of his children and attempting to kill the third.
The privacy rights group Noyb is backing the complaint, arguing that OpenAI failed to provide a way for users to correct inaccurate personal data.
Under the European Union’s General Data Protection Regulation (GDPR), individuals have the right to have inaccurate personal data about them corrected.
The law also requires data controllers to ensure that the personal data they process is accurate.
This is not the first time ChatGPT has been accused of fabricating false personal details.
Previous complaints involved incorrect birth dates and biographical information, but this case stands out due to the severity of the false claims.
Noyb argues that OpenAI’s practice of adding a small disclaimer about possible mistakes is not enough to justify spreading falsehoods.
If found to have breached the GDPR, OpenAI could face fines of up to 4% of its global annual turnover. European regulators have been slow to address AI-related privacy issues, but this complaint may push them to take stronger action.
Italy’s privacy watchdog previously forced OpenAI to make changes after temporarily blocking ChatGPT in 2023, and the company was later fined €15 million for processing people’s data without a legal basis. Other European regulators have taken a more cautious approach, however, with some investigations, such as one opened in Poland in September 2023, still unresolved.
Noyb says this is not an isolated incident, citing other cases where ChatGPT falsely accused individuals of crimes. The organization also warns that even if OpenAI stops generating specific false claims, past inaccuracies might still exist in the AI’s system.
The complaint has been filed with Norway’s data protection authority, though OpenAI’s Irish entity may also be drawn into the investigation. A similar case in Austria was referred to Ireland’s Data Protection Commission, which is still reviewing it.
OpenAI has been contacted for a response. As investigations continue, the case raises broader concerns about AI-generated misinformation and the accountability of the companies developing these technologies.