Anthropic’s Claude will soon start learning from you.
Anthropic announced in a blog post on Thursday that it will begin using data from user chats and coding sessions to train its models.
The change takes effect immediately if you opt in. After September 28, it will apply automatically unless you opt out.
Anthropic will use data from interactions with its consumer products, like its chatbot Claude, in the Free, Pro, and Max tiers. The new policy does not apply to Anthropic’s commercial products, including Claude Gov, Claude for Education, or API use.
Users can opt out by unchecking the box on the pop-up window titled “Updates to Consumer Terms and Policies.”
Note the fine print: These changes take effect immediately upon confirmation. Anthropic also says it will retain user data in its secure backend for up to five years. Previously, it retained user data for only 30 days.
When asked for comment, an Anthropic spokesperson directed Business Insider to a section of the company’s blog post addressing data retention.
“The extended retention period also helps us improve our classifiers — systems that help us identify misuse — to detect harmful usage patterns. These systems get better at identifying activity like abuse, spam, or misuse when they can learn from data collected over longer periods, helping us keep Claude safe for everyone,” the post says.
Claude users can also adjust privacy settings at any time in the “Help improve Claude” bar.
In an email to Business Insider, an Anthropic spokesperson said the policy changes will help improve its data training process.
“Training on real-world conversations and coding data will help us make Claude better. When a developer debugs code with Claude or someone gets help writing an email, those interactions provide the model with valuable signals on what works and what doesn’t,” the spokesperson said. “This creates a feedback loop that helps future models improve on similar tasks. The five-year retention also helps our safety classifiers learn to detect harmful usage patterns over time.”
The changes came a day after Anthropic published a report that said its chatbot Claude had been weaponized by cybercriminals. In one instance, Anthropic noted that a threat actor used Claude Code to an “unprecedented degree” to “automate reconnaissance, harvesting victims’ credentials, and penetrating networks.” Anthropic dubbed it “vibe-hacking.”
To that end, Anthropic also said in its blog post that the privacy changes will help “strengthen our safeguards against harmful usage like scams and abuse.”