Italy’s data protection authority, the Garante, has ordered OpenAI to stop processing Italians’ data with immediate effect, citing concerns that its AI chatbot, ChatGPT, is breaching the European Union’s General Data Protection Regulation (GDPR). The Garante has opened an investigation into OpenAI’s allegedly unlawful processing of people’s data and the lack of a system to prevent minors from accessing the technology. OpenAI has 20 days to respond to the order and faces penalties of up to 4% of annual turnover or €20 million if it fails to comply. Notably, any data protection authority under the GDPR can intervene, as Italy has done, when it sees risks to local users.
If you’re interested in learning what ChatGPT is, check out our article HERE!
ChatGPT Failed to Comply with EU Policies
OpenAI’s ChatGPT, a powerful generative AI model, has been processing the personal data of European Union (EU) users, raising concerns about compliance with the region’s General Data Protection Regulation (GDPR). The GDPR requires entities processing personal data to adequately protect that information. It also requires companies to notify the relevant supervisory authorities of significant data breaches within tight timeframes.
The Italian Data Protection Authority (Garante) is investigating two issues in particular: the lawfulness of OpenAI’s processing of people’s data and the absence of any system to prevent minors from accessing the technology. ChatGPT can produce biographies of named individuals in the region on demand and was trained on data scraped from the internet, including forums such as Reddit. This method of data collection raises concerns about the lawfulness of OpenAI’s data processing.
The GDPR recognizes several legal bases for processing personal data, ranging from consent to public interest, but the scale of processing required to train large language models complicates the question of legality. The regulation also emphasizes data minimization, transparency, and fairness. The Garante’s statement points to the mass collection and storage of personal data, the lack of information provided to users and other interested parties whose data OpenAI collects, and inaccuracies in the personal data the system processes.
If OpenAI has processed Europeans’ data unlawfully, data protection authorities (DPAs) across the bloc could order the data deleted. It remains unclear, however, whether that would force the company to retrain models built on unlawfully obtained data. The apparent absence of a legal basis justifying the mass collection and storage of personal data is central to the Garante’s concerns about the lawfulness of OpenAI’s processing.
ChatGPT and Exposure to Minors
The Garante is also concerned that minors’ data is being processed, since OpenAI does not actively prevent people under the age of 13 from signing up to use the chatbot. The GDPR requires companies to take appropriate measures to protect minors’ data, such as applying age verification technology.
The GDPR regulates data processing and protects users’ privacy. If ChatGPT is found to breach the regulation, OpenAI could face penalties of up to 4% of annual turnover or €20 million.
The Garante has been active in pursuing child safety concerns, recently ordering a ban on Replika, a virtual friendship AI chatbot, and forcing TikTok to purge over half-a-million accounts belonging to underage users. If OpenAI cannot confirm the ages of its Italian users, it may be required to delete their accounts and implement a more robust sign-up process.
The Future of OpenAI’s Chatbot
Lilian Edwards, an expert in data protection and Internet law at Newcastle University, described the denial of a lawful basis as the real time-bomb for machine learning systems such as OpenAI’s ChatGPT. She pointed to the ‘right to be forgotten’ case involving Google search, in which the European courts established individuals’ right to ask search engines to remove inaccurate or outdated information about them. Notably, the courts did not strike down Google’s processing of personal data on lawfulness grounds, seemingly because the search engine provides a public utility. Large language models like ChatGPT, by contrast, do not currently offer EU data subjects remedies such as rights of erasure and rectification; enforced retraining of models may be one potential solution.
It remains unclear whether technologies like ChatGPT have broken data protection law. OpenAI has yet to respond to the Garante’s order.
News Source: CNN