OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information – and this one may prove hard for regulators to ignore.
Privacy rights advocacy group noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill a third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong date of birth or erroneous biographical details. One concern is that OpenAI does not offer individuals a way to correct incorrect information the AI generates about them; typically, OpenAI has instead offered to block responses to such prompts. But under the EU's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights, including a right to rectification of personal data.
Another component of the data protection law requires data controllers to ensure that the personal data they produce about individuals is accurate – and that is the concern noyb is flagging in its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at noyb. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Confirmed breaches of the GDPR can carry penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection regulator, which saw ChatGPT access temporarily blocked in the country in spring 2023, led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people’s data without a proper legal basis.
Since then, though, it’s fair to say that privacy watchdogs around Europe have adopted a more cautious approach as they try to figure out how best to apply the GDPR to these buzzy AI tools.
Two years ago, Ireland’s Data Protection Commission (DPC) – which has a lead GDPR enforcement role on a previous noyb ChatGPT complaint – urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies.
And it’s notable that a privacy complaint against ChatGPT that has been under investigation by Poland’s data protection regulator since September 2023 still hasn’t produced a decision.
noyb’s new ChatGPT complaint looks intended to shake regulators awake to the dangers of hallucinating AIs.
The nonprofit shared a screenshot (below) with TechCrunch showing an interaction with ChatGPT in which the AI responds to the question “Who is Arve Hjalmar Holmen?” – the name of the individual bringing the complaint – by producing a tragic fiction falsely stating he was convicted of child murder and sentenced to 21 years in prison for slaying two of his own sons.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, noyb notes that ChatGPT’s response does include some truths: the individual in question really does have three children. The chatbot also got his children’s genders right, and his hometown is correctly named. But that only makes it stranger and more unsettling that the AI hallucinated such gruesome falsehoods on top.
A spokesperson for noyb said they could not determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn’t just a mix-up with another person,” the spokesperson said, noting they had searched newspaper archives but couldn’t find an explanation for why the AI fabricated the child slayings.
Large language models, such as the one underlying ChatGPT, essentially perform next-word prediction at vast scale, so one could speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in response to a query about a named man.
Whatever the explanation, it’s clear that such outputs are entirely unacceptable.
noyb’s contention is that they are also unlawful under EU data protection rules. While OpenAI does display a tiny disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Check important info,” the group says this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint pertains to one named individual, noyb points to other instances of ChatGPT fabricating legally compromising information – such as an Australian mayor who said he was falsely implicated in a bribery and corruption scandal, or a German journalist who was wrongly named as a child abuser – making clear this isn’t an isolated issue for the AI tool.
One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen — a change that it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a wildly wrong response).
In our own test asking ChatGPT “Who is Arve Hjalmar Holmen?”, the chatbot initially responded with a somewhat odd combination: displaying some photos apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming “no information was found” on an individual of that name (see screenshot below). A second attempt turned up a response identifying Arve Hjalmar Holmen as a “Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

Although ChatGPT appears to have stopped generating the dangerous falsehoods about Hjalmar Holmen, noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could be retained within the AI model.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted Kleanthi Sardeli, another data protection lawyer at noyb. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”
“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”
noyb has filed the complaint against OpenAI with Norway’s data protection authority – and it’s hoping the watchdog will decide it is competent to investigate, since noyb is targeting the complaint at OpenAI’s U.S. entity, arguing its Irish office is not solely responsible for product decisions affecting Europeans.
However, an earlier noyb-backed GDPR complaint against OpenAI, filed in Austria in April 2024, was referred by the regulator to Ireland’s DPC on account of a change OpenAI made earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on a desk in Ireland.
The DPC formally began handling the complaint after receiving it from the Austrian supervisory authority in September 2024, and the process is still ongoing, Risteard Byrne, the DPC’s assistant principal officer for communications, told TechCrunch when asked for an update.
He offered no steer on when the DPC’s investigation of ChatGPT’s hallucinations is expected to conclude.