"I've learned a lot from OpenAI's ChatGPT over the past few years. And the bot has learned a lot about me.
The AI picked up factoids from our many conversations and banked them in its memory. It remembered that I like eggs, that I have a baby who nurses to sleep, and that I need to modify my exercise because of my achy back.
It also remembered even more personal things I won't repeat here.
It doesn't matter which chatbot you choose: The more you share, the more helpful it can be. Patients upload their blood work for analysis, and engineers paste snippets of unpublished code for debugging help. Some AI researchers say we should be more discerning about what we tell these human-sounding tools.
Some information is especially risky to share, such as your Social Security number or your company's proprietary data.
The AI companies are hungry for data to improve their models, but even they don't want our secrets. "Please don't share any sensitive information in your conversations," OpenAI urges. And Google implores its Gemini users: "Don't enter confidential information or any data you wouldn't want a reviewer to see."
Chats about your weird rash or your financial flubs might be used to help train tomorrow's AI -- or come out in a data breach, say AI researchers. Here's what to keep out of your prompts, and how to have more private conversations with AI.
Keep confidential
Chatbots can sound eerily human, leading people to be surprisingly open in their conversations with them. When you type something into a chatbot, "you lose possession of it," says Jennifer King, a fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
In March 2023, a ChatGPT bug allowed some users to see what other people initially typed in their chats. The company has since issued a fix. OpenAI also sent subscription confirmation emails to the wrong people, exposing users' first and last names, email addresses and payment information.
Your chat history could also be included in a hacked data trove or as a part of what's turned over if the AI company is served a warrant. Protect your account with a strong password and multifactor authentication. And skip these specifics:
-- Identity information. This includes your Social Security, driver's license and passport numbers, as well as your date of birth, address and phone number. Some chatbots work to redact them. "We want our AI models to learn about the world, not private individuals, and we actively minimize the collection of personal information," an OpenAI spokeswoman said.
-- Medical results. Confidentiality is a core value in healthcare to prevent discrimination and embarrassment, but chatbots are typically not part of the special protection given to health data. If you're tempted to ask AI to interpret lab work, crop the image or edit the document before uploading: "Try to keep it just to the test results and redact it," King advises.
-- Financial accounts. Guard your bank and investment account numbers, which could be used to monitor or access your funds.
-- Proprietary corporate information. If you're using the mainstream versions of popular chatbots for work purposes, you could inadvertently expose client data or nonpublic trade secrets -- even with something as small as drafting an email. Samsung banned ChatGPT after an engineer leaked internal source code to the service. If AI is useful in your job, your company should subscribe to an enterprise version, or have its own custom AI with company-specific protections.
-- Logins. With the rise of AI agents that can perform real-world tasks, there are more reasons to hand over account credentials to a chatbot. These services weren't built as digital vaults -- save your passwords, PINs and security questions for your password manager.
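The redaction advice above can be sketched as a simple pre-processing step. This is an illustrative script, not anything from the article or from any AI vendor: the patterns and the `redact` helper are my own, and simple regexes like these catch only the most obvious identifiers, so a real redaction pass should be reviewed by hand before anything is pasted into a chatbot.

```python
import re

# Illustrative patterns for common U.S.-style identifiers. Real
# redaction tools use far more robust detection than these regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Call me at 555-867-5309; SSN 123-45-6789, email jane@example.com."
print(redact(note))
```

Run before pasting, and anything the patterns recognize is swapped for a bracketed placeholder, leaving the substance of the question intact.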
Cover your tracks
When you give feedback about a bot's response -- thumbs-up or -down, typically -- you might be giving permission for your prompt and its output to be evaluated and even used for training. If the conversation gets flagged for safety (if you mention violence, for example), company employees might review it.
Anthropic's Claude by default doesn't use your chats for training and deletes data after two years. OpenAI's ChatGPT, Microsoft's Copilot and Google's Gemini do use conversations, but offer an opt-out in settings.
If you're privacy conscious, try these tips:
-- Delete often. Extra-paranoid users should delete every conversation after it's over, says Jason Clinton, Anthropic's chief information security officer. Companies typically purge "deleted" data after 30 days.
An exception? DeepSeek, which has servers in China. According to its privacy policy, it can retain your data indefinitely, and it offers no opt-out.
-- Use temporary chat. ChatGPT's Temporary Chat, found in the top right corner of the chat window, is like your browser's Incognito Mode. Turn it on to stop ChatGPT's memory bank from adding that material to your profile. The chat won't appear in your history, and the contents won't be used to train models.
-- Ask anonymously. Duck.ai, by privacy search engine DuckDuckGo, anonymizes prompts to the AI model of your choice, including Claude and OpenAI's GPT, and says data isn't used for model training. It can't do everything a full-service bot can, such as analyze files.
Just remember: The chatbots are programmed to keep the conversation going. It's up to you to hold back -- or hit "delete."" [1]
1. Nguyen, Nicole. "The Five Things You Shouldn't Tell ChatGPT or Other AI Tools: Don't let your mystery rash become training fodder -- or turn up in a data breach." Wall Street Journal, Eastern edition, New York, N.Y., 02 Apr 2025: A12.