The Italian Data Protection Authority has fined ChatGPT maker OpenAI €15 million ($15.66 million) over how the generative artificial intelligence service handles personal data.
The penalty comes almost a year after the Garante found that ChatGPT processed users' personal information to train its service in violation of the European Union's General Data Protection Regulation (GDPR).
The authority said OpenAI failed to notify it of a security breach that took place in March 2023, and that the company processed users' personal information to train ChatGPT without a sufficient legal basis for doing so. It also accused the company of violating the principle of transparency and the related information obligations towards users.
“Furthermore, OpenAI has not provided mechanisms for age verification, which could lead to the risk of exposing children under the age of 13 to inadequate responses in relation to their degree of development and self-awareness,” the Garante said.
In addition to the €15 million fine, the company was ordered to carry out a six-month communication campaign across radio, television, newspapers, and the internet to promote public understanding of how ChatGPT works.
This covers, in particular, the nature of the data collected from both users and non-users for the purpose of training its models, as well as the rights individuals can exercise to object to, correct, or delete that data.
“Through this communication campaign, users and non-users of ChatGPT will need to know how to object to the training of generative artificial intelligence with their personal data and thus be able to effectively exercise their rights under the GDPR,” the Garante added.
Italy was the first country to impose a temporary ban on ChatGPT at the end of March 2023, citing data protection concerns. Access to ChatGPT was restored almost a month later, after the company addressed the issues raised by the Garante.
In a statement shared with the Associated Press, OpenAI called the decision disproportionate and said it intends to appeal, noting that the fine is almost 20 times the revenue it made in Italy during the relevant period. It added that it aims to offer beneficial artificial intelligence that respects users' privacy rights.
The ruling also follows the European Data Protection Board's (EDPB) opinion that an AI model which unlawfully processes personal data but is then anonymised prior to deployment does not constitute a breach of the GDPR.
“If it can be demonstrated that the subsequent operation of the AI model does not entail the processing of personal data, the EDPB considers that the GDPR would not apply,” the Board said. “Hence, the unlawfulness of the initial processing should not impact the subsequent operation of the model.”
“Furthermore, the EDPB considers that if controllers subsequently process personal data collected during the deployment phase, after the model has been anonymised, the GDPR will apply in relation to these processing operations.”
Earlier this month, the Board also published guidance on handling transfers of personal data to authorities in non-European countries under the GDPR. The recommendations are open for public consultation until January 27, 2025.
“Judgments or decisions of third-country authorities cannot be automatically recognized or enforced in Europe,” it said. “When an organization responds to a request for personal data from a third-country authority, that data flow constitutes a transfer and the GDPR applies.”