ChatGPT Privacy: What You Need to Know Before You Type

ChatGPT’s Footer​

In a recent ChatGPT session, I noticed the footer: “Don’t share sensitive info. Chats may be reviewed and used to train our models. Learn more.” I’m not sure whether it was recently added or whether our ChatGPT friends have had it there from the beginning. Either way, it prompted today’s topic: sharing sensitive information in AI prompts. My example is ChatGPT, but I feel this advice applies to all AI interactions.
[Image: AI numbers moving through the brain]

Why You Should Keep Personal Info Out of ChatGPT, and All AI Engines​

Keep ePHI (electronic protected health information) out of AI prompts.
When using ChatGPT or any AI engine, be careful about what information you share. Avoid putting personal details or health information in your prompts; this protects your privacy and keeps sensitive data safe. ChatGPT uses the information you give it to create responses. OpenAI, the company behind ChatGPT, says it tries to keep data private, but I’d treat that as a goal rather than a guarantee. Here’s why:

  1. Your conversations might be reviewed by AI trainers to improve the system.
  2. De-identified information could be used for research.
  3. There’s a chance your input could appear in responses to other users.
OpenAI encrypts the data you enter, but they don’t sign the kind of special agreements (such as a HIPAA business associate agreement) required to protect electronic protected health information (ePHI). This means it’s best to assume anything you type into ChatGPT could potentially be seen by others.

Protecting Your Privacy When Using ChatGPT​

To use AI engines safely:
  • Don’t enter any personal details like names, addresses, or phone numbers.
  • Avoid sharing health information or financial data.
  • Be careful not to include details that could identify you or others.
  • Remember that deleting prompts later isn’t always possible for regular users.
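Part of the checklist above can even be automated before you paste anything into a chat window. Here is a minimal sketch in Python; the patterns and the `scrub` helper are my own illustration (not part of any AI vendor’s tooling), and real PII detection needs far more than three regexes:

```python
import re

# Hypothetical helper: redact obvious PII from a prompt before
# pasting it into ChatGPT or any other AI engine.
# These patterns are illustrative, not exhaustive.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"(?:\+?1[-.\s]?)?(?:\(\d{3}\)\s*|\d{3}[-.\s])\d{3}[-.\s]\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace each match with a [REDACTED-<KIND>] tag."""
    for kind, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{kind.upper()}]", prompt)
    return prompt

print(scrub("Email jane.doe@example.com or call (555) 123-4567 about claim 123-45-6789."))
```

A scrubber like this is a seatbelt, not a substitute for judgment: it catches the obvious formats but will miss names, addresses, and free-text health details, so the checklist above still applies.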
If you’re using ChatGPT for work, especially in healthcare or finance, it’s extra important to follow these guidelines. Your company might have rules about what information can be shared with outside services.

By being careful about what you share, you can enjoy using AI while keeping your personal information secure. It’s always better to be safe than sorry when it comes to protecting private data online.

More​

AI ain’t so tough. See https://cybersafetynet.net/category/ai/ for help understanding and using artificial intelligence.
 
As far as I know, ChatGPT has done this all along. And for that matter, it seems that most commercial AI chatbots do this as well. I would highly recommend that anyone worried about this (as I was/am) check out GPT4All, an open-source framework for running open-source LLMs. A couple of summers back I put together copies of GPT4All and Stable Diffusion running as VMs; both can be set up as a VM running in host-only mode for privacy. GPT4All specifically lists privacy as its number one feature or 'selling point'. You can check them out yourself here:



Lee
 
Thank you, Lee, for responding. We may be moving toward an AI incognito mode like we have in our web browsers: we sign in but then have the option to be recorded or not recorded. Let's see who is first to propose that.