WOLFcon 2024 - Understanding and Using AI Workflows with FOLIO

23 September 2024


Privacy Concerns in Large Language Models

With the widespread release of Large Language Models (LLMs) by various organizations, significant privacy issues [1] have emerged, including:

  • Personal details in training data: Names, addresses, and financial information may be included in the training data for these models.
  • Logged prompts containing sensitive information: User inputs, including private details, can be logged by the companies managing these LLMs.
  • Re-identification of individuals: Even in anonymized training data, individuals can potentially be re-identified through the model outputs and usage patterns.

Training Data

The privacy of individuals can be compromised if their personal information is included in the large datasets used to train LLMs. People have often not consented to this use of their data, and anonymization techniques are vulnerable to prompt engineering attacks, so there is a significant risk that sensitive details from the raw training data can be exposed by models such as OpenAI's ChatGPT [2].
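
As a concrete illustration of one common mitigation, the sketch below scrubs personally identifiable information (PII) from text before it enters a training corpus. The regex patterns are deliberately minimal and illustrative; production pipelines typically pair pattern matching with NER-based detectors such as Microsoft Presidio.

    import re

    # Illustrative patterns only; real pipelines combine regexes with
    # NER-based detectors to catch names and other free-form identifiers.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub_pii(text: str) -> str:
        """Replace common PII patterns with typed placeholders."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    sample = "Contact Jane Doe at jane.doe@example.com or 555-867-5309."
    print(scrub_pii(sample))
    # Contact Jane Doe at [EMAIL] or [PHONE].

Note that the name "Jane Doe" survives the scrub, which is exactly why regex-only redaction is not sufficient on its own.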

Inference Data Privacy: Revealing Too Much in Prompts

As people engage with LLMs on personal topics such as relationship counseling, medical advice, and psychological concerns, the privacy risks associated with disclosing sensitive information increase [3].

These prompts are often collected and logged by organizations like OpenAI or Google, creating privacy concerns if the sensitive details they contain become part of the future training data for these models.
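
A practical defense is to redact identifiers on the client side, before a prompt ever reaches the provider. The sketch below assumes the official openai Python client; redacted_chat is a hypothetical helper, the model name is a placeholder, and the single email regex stands in for a fuller redaction pass like the one above.

    import re
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def redacted_chat(prompt: str) -> str:
        """Mask obvious identifiers before the prompt leaves the machine."""
        safe_prompt = EMAIL.sub("[EMAIL]", prompt)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute your own
            messages=[{"role": "user", "content": safe_prompt}],
        )
        return response.choices[0].message.content

    print(redacted_chat("Draft a reply to jane.doe@example.com about an overdue item."))

Providers also publish data-retention and opt-out policies, but redaction applied before sending is the only control that does not depend on trusting the operator's logging practices.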

Re-identification

Even when personal information in LLM training data is anonymized, it can be vulnerable to re-identification. By analyzing LLM responses, individuals could potentially be identified, thereby compromising their privacy when interacting with these AI systems.
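
The classic mechanism here is a linkage attack: "anonymized" records still carry quasi-identifiers (ZIP code, birth date, sex) that are close to unique for much of the population and can be joined against public datasets. The toy sketch below uses entirely made-up data; the same logic applies when model outputs leak such attribute combinations.

    import pandas as pd

    # Toy data: the "anonymized" table has direct identifiers removed
    # but keeps quasi-identifiers (ZIP code, birth date, sex).
    anonymized = pd.DataFrame([
        {"zip": "02138", "birth_date": "1960-07-31", "sex": "F",
         "borrowing_history": "restricted-collection-A"},
    ])

    # A public list (e.g., a voter roll) with names and the same fields.
    public_list = pd.DataFrame([
        {"name": "J. Smith", "zip": "02138", "birth_date": "1960-07-31",
         "sex": "F"},
    ])

    # An inner join on the quasi-identifiers re-attaches the name.
    reidentified = anonymized.merge(public_list, on=["zip", "birth_date", "sex"])
    print(reidentified[["name", "borrowing_history"]])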