Italy recently imposed a temporary ban on ChatGPT over privacy concerns. OpenAI, the company behind ChatGPT, responded by pledging to give citizens a way to object to the use of their personal data in training its AI models.
The “right to be forgotten” (RTBF), established by a 2014 ruling of the Court of Justice of the European Union, gives individuals the right to ask technology companies to delete their personal data. Implementing RTBF for large language models (LLMs) such as ChatGPT, however, poses unique challenges.
ChatGPT was trained on roughly 300 billion words that OpenAI collected from across the internet, including personal information gathered without consent. This raises privacy concerns, particularly where the data is sensitive and could reveal identifying details about individuals.
OpenAI does not provide mechanisms for individuals to check whether the company holds their personal data or to request its deletion. This conflicts with the “right to be forgotten” principle, a fundamental component of the European Union’s General Data Protection Regulation (GDPR).
LLMs generate responses based on patterns learned during their extensive training. They analyze the context, word patterns, and relationships within a query to predict the next word in a response, functioning more like text generators than search engines.
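To make that distinction concrete, the sketch below shows next-word prediction in action. It assumes the Hugging Face transformers library and uses the public GPT-2 checkpoint as a stand-in (ChatGPT’s own models are not publicly available); the prompt is purely illustrative.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library and the public GPT-2 checkpoint as a stand-in model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The right to be forgotten lets individuals ask companies to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's "answer" is a probability distribution over the next token,
# derived from patterns in its training data -- not a record retrieved from storage.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  p={prob.item():.3f}")
```

Because the output is generated token by token from learned statistics, there is no single stored record to delete, which is precisely why honoring an erasure request is so difficult.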
Machine unlearning is a leading candidate for enabling LLMs to forget training data, but it is a complex process. One approach selectively removes specific data points from a model by retraining the affected portions of it; others rely on approximate techniques or direct model editing.
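There is no single standard recipe here; as one hedged illustration of an approximate technique, the sketch below fine-tunes a model with gradient ascent on a small “forget set” so it becomes less likely to reproduce that text. The model choice, learning rate, and forget set are illustrative assumptions, not OpenAI’s method.

```python
# A hedged sketch of one approximate unlearning technique: gradient ascent
# on a small "forget set" so the model becomes less likely to reproduce it.
# GPT-2, the learning rate, and the forget set below are illustrative
# assumptions; this is not how OpenAI handles deletion requests.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

forget_texts = [
    "Jane Doe lives at 42 Example Street and her phone number is 555-0100.",
]
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(3):
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        # Standard language-modeling loss on the text we want forgotten...
        loss = model(**batch, labels=batch["input_ids"]).loss
        # ...maximized (gradient ascent) by negating it before backpropagation.
        (-loss).backward()
        optimizer.step()
        optimizer.zero_grad()

# In practice this is paired with a "retain set" and regularization so the
# model does not lose general capability while forgetting the targeted data.
```

Even with techniques like this, verifying that specific personal data has truly been forgotten, rather than merely made less likely to surface, remains an open research problem.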
Persistent data privacy concerns with LLMs could have been mitigated if responsible AI development principles had been integrated throughout the tool’s lifecycle. Explainable AI, which offers transparency and accountability, can help shed light on the root causes of issues within models.
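As a small illustration of what explainability tooling can look like, the sketch below computes a simple gradient-times-input saliency score over a prompt, highlighting which tokens most influenced the model’s next-word prediction. It again assumes the public GPT-2 checkpoint as a stand-in and says nothing about OpenAI’s internal tooling.

```python
# A minimal gradient x input saliency sketch, assuming the public GPT-2
# checkpoint as a stand-in model. It highlights which prompt tokens most
# influenced the predicted next token -- one simple form of explainability.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Alice's email address was posted on a public forum in"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

# Embed the tokens ourselves so gradients can be taken with respect to the input.
embeds = model.transformer.wte(input_ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits

# Take the score of the most likely next token and backpropagate it
# to the input embeddings.
top_logit = logits[0, -1].max()
top_logit.backward()

# Gradient x input, summed over the embedding dimension, as a per-token relevance score.
saliency = (embeds.grad * embeds).sum(dim=-1)[0].abs()
for token_id, score in zip(input_ids[0], saliency):
    print(f"{tokenizer.decode(int(token_id)):>12}  {score.item():.4f}")
```

Attribution methods like this do not solve the deletion problem, but they help auditors trace which inputs drive a problematic output, which is a prerequisite for accountability.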
By incorporating responsible AI techniques and AI ethics standards into new technology development, we can better evaluate, investigate, and mitigate privacy challenges posed by AI models like ChatGPT.