You can opt out, but the fact that it's opt-in by default and made to look like a simple T/C update prompt leaves a sour taste in my mouth. The five year retention period seems... excessive. I wonder if they've buried anything else objectionable in the new terms.
It was the kick in the pants I needed to cancel my subscription.
Everywhere else in Anthropic's interface, yes/no switches show blue when enabled and black when disabled. In the dialog announcing this change, the toggle shows grey in both states: compare it with the one in your preferences to see the difference! It's not just disappointing but also kind of sad that someone went to the effort of doing this.
This is probably because some countries have laws restricting how these buttons/switches can look (think cookie banners, where there is sometimes a huge green button to accept and only a tiny greyed-out text somewhere for the settings).
Yes, it's a very big loophole. And if it's a generative model, you can just launder the data into future models through synthetic generation/distillation.
I believe the big models currently get built from scratch (with random starting weights). That wasn't my point, though. I meant that a model, once created, might be used for a very long time. Maybe they even release the weights at some point ("open source").
This is somewhat true, but I'm not sure how load-bearing it is. For one, I think it's going to be a while until "we asked the model what Bob said" is as admissible as the result of a database query.
Implicit consent is not transparent and should be illegal in all situations. I can't tell you that unless you opt out, you have agreed to let me rent you an apartment.
You can say the analogy is not directly comparable, but the overall idea is the same. If we enter a contract for me to fix your broken windows, I cannot extend it by implicit consent to do anything else in the house that I see fit.
Courts in various jurisdictions have generally found clickwrap agreements to be valid only for provisions one would commonly expect to find in such agreements.
Essentially, because they are presented in a form that is so easy to bypass and so very common in our modern online life, provisions that give up too much to the service provider or would be too unusual or unexpected to find in such an agreement are unenforceable.
As a real-world counterexample, the medical system in the USA does this shit all the time.
A local office will do a blood draw, send it to a third-party lab for analysis that isn't covered by insurance, and then bill you in full. And you had NO contractual relationship with the testing company.
Same scam. And it's all because our government is completely captured by companies and oligopolies. Our government hasn't represented the people in a long time.
> The interface design has drawn criticism from privacy advocates, as the large black "Accept" button is prominently displayed while the opt-out toggle appears in smaller text beneath. The toggle defaults to "On," meaning users who quickly click "Accept" without reading the details will automatically consent to data training.
It's not. And also, whether you move the toggle to on or off, you still have to click "Accept", which makes it really unclear whether you're agreeing to share your data or not.
Granted, it is a stretch and nowhere near Claude's feature set (no code, etc.), but at least Proton's Lumo [0] is very privacy oriented.
I have to admit, I've used it a bit over the last few days and still reactivated my Claude Pro subscription today, so... let's say it's OK for casual stuff? Also useful for casual coding questions. So if you care about this, it's an option.
If you aren't using it for coding or advanced uses like video, etc, you can try running models locally on your machine using Ollama and others like it.
Self-plug here: if you aren't technical and still want to run models locally, you can try our app [1]
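If it helps, here is a minimal sketch of that local route, assuming Ollama is installed and running and the ollama Python package is available; "llama3" is just an example of a model you'd pull first.

```python
# Minimal local-chat sketch (assumes: `ollama serve` is running and
# `ollama pull llama3` has been done; "llama3" is just an example model).
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Write an ImageMagick command to resize a PNG to 50%."}],
)
# Print the assistant's reply; nothing leaves your machine.
print(response["message"]["content"])
```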
Since I don't use LLMs to directly write code for me, I'm going to (mis?)place my trust entirely in Kagi Assistant for the time being. It claims not to associate prompts with individual accounts. The small friction of keeping a browser tab open is worth it for me for now.
Out of the frying pan, into the fire. I think the reality, proven by history and even by just these short five years, is that no company will hold onto its ethics in this space. This should surprise no one, since the first step of the whole enterprise is hoovering up the world's data without permission.
The 5-year retention is the real kicker. Over the next 5 years I find it doubtful that they won't keep modifying their TOS and re-presenting that opt-out "option", so that all it will take is one accidental click and they have all your data from the start. Also, what is to stop them from removing the opt-out? Nothing says they have to give that option. 4 years and 364 days from now: a TOS change with no opt-out and a retention increase to 10 years. By then the privacy decline will already have been so huge that nobody will even notice this "option" was never real.
Oh, I see, thanks. That's a dark pattern, hiding stuff like that.
No one cares about anything else in there, but they pad it with lots of superfluous text and call it "help us get better", blah blah. It's really "help us earn more money and potentially sell or leak your extremely private info", so they are lying.
Considering cancelling my subscription right this moment.
I hope the EU at least considers banning, or extreme-fining, companies that try to retroactively use people's extremely private data like this; it's completely over the line.
EU or not, it baffles me that people don't see this glaring conflict of interest. AI companies both produce the model and rent out inference. In other words, you're expecting that the company that (a) most desperately craves your data and (b) also happens to collect large amounts of high-quality data from you will simply not use it. It's like asking a child to keep your candy safe.
I'd love to live in a society where laws could effectively regulate these things. I would also like a Pony.
This is why we need actual regulation, and not the semi-fascist, monopolist corporatocracy we've evolved into.
It's only utopian because things have become so incredibly bad.
We shouldn't expect less, and we shouldn't push guilt or responsibility onto the consumer; we should push for more. Unless, that is, you actively want your neighbour, your mom, and 95% of the population to be in constant trouble with absolutely everything from tech to food safety, chemicals, and healthcare. Most people aren't rich engineers like on this forum, and I don't want to research for 5 hours every time I buy something because some absolute psychopaths have removed all regulation and sensible defaults so someone can party on a yacht.
It's almost like this multi-billion-dollar company is misanthropic, despite its platitudes. Should I not hold my breath on Anthropic helping facilitate "an era of AI abundance for all"? (To quote a rejected PR applicant to Anthropic from the front page.)
I wonder what happens if I don't accept the new T&C? I've been successfully dismissing an updated T&C prompt in a popular group messaging application for years without issue -- I lack the time and legal acumen to process it.
Also, for others who want to opt-out, the toggle is in the T&C modal itself.
The new privacy policy automatically becomes effective on September 28 if you don't agree to it before then. Anthropic states that "After September 28, you'll need to make your selection on the model training setting in order to continue using Claude."
Has anyone asked why OpenAI has two very separate opt-out mechanisms (one in settings, the other via a formal request that you need to lodge via their privacy or platform page)? That always seemed likely to me to be hiding a technicality that allows them to train on some forms of user data.
“If you choose not to allow us to use your chats and coding sessions to improve Claude, your chats will be retained in our back-end storage systems for up to 30 days.”
It seems really badly designed, or maybe it is meant to be confusing. It does not make it clear that the two are linked together, and you have to "Accept" both together even though there is only a toggle on the "help us make the model better" item.
OpenAI's temporary chat still advertises that chats are stored for 30 days, while there is a court order that everything must be retained indefinitely.
I wonder why they are not obligated to state this quite extreme retention requirement.
Two weeks left in the sub to figure it out, but I'm not yet sure. I was never all-in on all the tooling; I mostly used it as a smart search (e.g. ImageMagick incantations) and for trivial scripting that I couldn't be bothered writing myself, so I might just stick to whatever comes with Kagi and see if that doesn't cover me.
How does Kagi (claim that they) enforce privacy rights on the major LLM providers? Have they negotiated a special contract?
I'm looking at
> "When you use the Assistant by Kagi, your data is never used to train AI models (not by us or by the LLM providers), and no account information is shared with the LLM providers. By default, threads are deleted after 24 hours of inactivity. This behavior can be adjusted in the settings."
And I'm trying to reconcile those claims with this thread. Anthropic is listed as one of their back-end providers. Is that data retained for five years on Anthropic's end, or 24 hours? Is that data used for training Anthropic's models, or has Anthropic agreed in writing not to do so for Kagi clients?
I'm mostly replying because I was truly using it for an ImageMagick incantation yesterday.
I use the API rather than chat, if that's an option for you. I put $20 into it every few months and it mostly does what I need. I'm using Raycast for quick and dirty questions and AnythingLLM for longer conversations.
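For anyone considering the same route, here's a minimal sketch using Anthropic's official Python SDK; the model ID below is a placeholder for whatever is current, and the key is assumed to be in the ANTHROPIC_API_KEY environment variable.

```python
# Minimal API sketch (assumes: `pip install anthropic` and ANTHROPIC_API_KEY set;
# the model ID is a placeholder, substitute whichever model you actually use).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    messages=[{"role": "user", "content": "Give me an ImageMagick incantation to batch-convert HEIC to JPEG."}],
)
print(message.content[0].text)
```

Whether the API tier is actually covered by different data-handling terms than the consumer chat product is worth verifying in the policy itself; I wouldn't just assume it.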
I'd like to think using OpenRouter is better, but there's absolutely no guarantee from any of the individual providers with respect to privacy and no-logging.
Nitpicking: “opt in by default” doesn’t exist, it’s either “opt in”, or “opt out”; this is “opt out”. By definition an “opt out” setting is selected by default.
To be fair it trips people up all the time. Even precise terminology isn't great if people misuse it. Maybe it would have been better to just use "enabled by default".
And here it is, evolving before your eyes: we're killing off the maladaptive mutant which was "opt-in by default". That's the evolution that is happening here.
It would be fair to compare it to selective breeding, rather than natural selection. The flip side of rejecting usage is promoting neologisms. We can do both things deliberately, I see no rule saying that language is only allowed to evolve naturally. A reasonable criticism would be that trying to change it on purpose makes for a lot of unnecessary fuss, but we can be moderate about it.
That's called opt-out. You're doing exactly what I described: gaslighting people into believing that opt-in and opt-out are synonymous, rendering the entire concept meaningless. The audacity of you labeling people as "political" while resorting to such Orwellian manipulation is astounding. How can you lecture others about the purpose of language with a straight face when you're redefining terms to make it impossible for people to express a concept?
These are examples of what "opt-in by default" actually means. It means having the user manually consent to something every time, the polar opposite of your definition.
It's also just pure laziness to label me as "hysterical" when PR departments of companies like Google have, like you, misused the terms opt-out and opt-in in deceptive ways.
I completely agree with you from a correctness standpoint, ...
> Diluting the distinction between opt-in and opt-out is gaslighting
> That seems like an ungenerous and frankly somewhat hysterical take.
... however, this comment was a reasonable response.
Projective framing demonstrates your own lack of concern for accuracy, clarity, or conviviality, which is 180 degrees at odds with the point you are making and the site you are making it on.
I can somewhat understand the parent. If you control the language, you control the discourse. This is like the famous "I'm appalled at the negativity here on HN" comment threads that accompany product launches, etc., or using euphemisms to avoid calling a spade a spade. [0] People are fed up with these tricks, hence the emotional reactions.
Regardless of whether it's opt-in or opt-out, the business will need to confirm whatever it opted you into by asking. If you don't select the opposing choice in a timely fashion, the business assumes that it opted correctly in your interest and on your behalf.
> So, I think this is opt-in, until Sept 28.
If the business opted for consent, then you will effectively have the choice for refusal, a.k.a. opt-out.