OpenAI, the organization behind the AI chatbot ChatGPT, is facing a privacy complaint in Norway after the chatbot produced false and defamatory claims about an individual. The case highlights ongoing concerns about the reliability and accuracy of content generated by AI systems.
A Norwegian citizen discovered that ChatGPT had falsely claimed he had been convicted of murdering two of his children and attempting to kill a third. These fabricated allegations caused significant emotional distress and threaten to tarnish his reputation.
The privacy advocacy organization NOYB (None of Your Business) is supporting the affected party by filing a complaint with Norway’s data protection authority, Datatilsynet. The group asserts that OpenAI’s ChatGPT is in violation of the General Data Protection Regulation (GDPR) by producing and disseminating false personal information. Joakim Söderberg, a data protection lawyer at NOYB, stated, “The GDPR is clear. Personal data has to be accurate. If it’s not, users have the right to have it changed to reflect the truth.”
Under the GDPR, organizations must ensure the accuracy of the personal data they process, and individuals have the right to have inaccurate information about them corrected. The false information ChatGPT generated about the complainant could therefore constitute a violation of these provisions. Confirmed GDPR violations can result in fines of up to €20 million or 4% of a company’s worldwide annual revenue, whichever is higher.
This incident is not an isolated case; ChatGPT has faced scrutiny for inaccuracies, commonly referred to as “hallucinations,” that have led to legal actions:
- In 2023, an Australian mayor considered legal recourse after ChatGPT incorrectly claimed he had been incarcerated for bribery.
- In 2024, Italy’s data protection authority issued a €15 million penalty against OpenAI for improperly handling personal data without a valid legal basis.
- In the U.S., a defamation suit was filed against OpenAI after ChatGPT fabricated legal allegations against a radio host.
These instances underscore the broader problem of AI-generated misinformation and its potential legal ramifications.
OpenAI has acknowledged that ChatGPT can sometimes generate incorrect information and now includes disclaimers advising users to verify the chatbot’s outputs. Critics argue, however, that disclaimers alone do little to mitigate the damage false information can cause. If the Norwegian data protection authority finds that OpenAI has violated the GDPR, the company could face substantial fines and be required to adopt measures to prevent similar inaccuracies in the future.
The complaint filed in Norway is part of a growing examination of AI systems like ChatGPT in relation to their adherence to data protection standards. As AI technology advances, ensuring the accuracy and dependability of AI-generated content remains a critical concern for both developers and regulatory bodies.