Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out

Anthropic announced today that it is updating its Consumer Terms and Privacy Policy, with plans to train its Claude AI chatbot on user data.

New users will be able to opt out at signup. Existing users will receive a popup that allows them to opt out of Anthropic using their data for AI training purposes.

The popup is labeled "Updates to Consumer Terms and Policies," and unchecking the "You can help improve Claude" toggle when it appears will prevent your chats from being used. Accepting the policy with the toggle on allows all new or resumed chats to be used by Anthropic for training. Users must make a choice by September 28, 2025, to continue using Claude.

Opting out can also be done by going to Claude's Settings, selecting the Privacy option, and toggling off "Help improve Claude."

Anthropic says that the new training policy will allow it to deliver "even more capable, useful AI models" and strengthen safeguards against harmful usage like scams and abuse. The updated terms apply to all users on Claude Free, Pro, and Max plans, but not to services under commercial terms like Claude for Work or Claude for Education.

In addition to using chat transcripts to train Claude, Anthropic is extending data retention to five years. If you opt in to allowing your data to be used for training, Anthropic will keep your information for that five-year period. Deleted conversations will not be used for future model training, and for users who do not opt in, Anthropic will continue its current practice of retaining data for 30 days.

Anthropic says that a "combination of tools and automated processes" will be used to filter sensitive data, and that no user information will be provided to third parties.

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.


Top Rated Comments

turbineseaplane
2 days ago at 10:48 am
gross

I hate switcharoos like this
Score: 24 Votes
dontwalkhand
2 days ago at 11:01 am
Deleted all my AI apps because they are all worthless. Inaccurate mess. Literally pointless.
Score: 17 Votes
canadianreader
2 days ago at 11:04 am

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.
This is the main reason many ChatGPT users switched to Claude. Enshi**ification continues.
Score: 11 Votes
mdatwood
2 days ago at 11:07 am

Despite these limitations they're still very useful tools. Just be sensible about what you share.
Yeah, basically treat them like you would the open internet.
Score: 11 Votes
routine_analyst
2 days ago at 11:04 am
this has been the plan all along. create a compelling product, get you to use it, you train it for free (you're paying a fee) and then it replaces you a few years down the road. why have human employees when you can have AI bots that have no rights?
Score: 8 Votes
novagamer
2 days ago at 11:04 am
Note: you need to delete your conversations for the 30 day window to apply.

Also, if you violate trust and safety and it gets flagged by their systems, it's 2 years of retention and 7 years of the classification score.

TL;DR don't do anything extraordinarily nefarious with any of these tools, which should be obvious, but people that might do those things are dense.

The fact that they do delete data after 30 days of you doing so is still notable and commendable; OpenAI may not train on your data if you opt out but right now they aren't deleting anything unless you have a ZDR policy with them due to the NYT lawsuit.

If the outcome of that lawsuit is in OpenAI's favor they will purge the backups, if not and especially if it becomes material for discovery processes, oh boy.

TL;DR #2: Don't use ChatGPT for anything sensitive at all, full stop.

Despite these limitations they're still very useful tools. Just be sensible about what you share.
Score: 8 Votes