I disagree with this take. I get that LLM-produced text is filled with crappy, over-the-top writing in pretty much all cases, but if a prompter/writer/blogger is using it iteratively, the LLM output is going to be way better than their own writing. Also, if a person is using LLMs to write articles, do you really want to see their likely even worse writing?
It's absolutely nutso that we (the users) have to guess what the actual limits are. And now they throw this into the mix. I love using Claude Code, but if they don't offer some transparency soon re: token limits (other than a status bar), ... I don't know what I'll do, but I will continue to not be happy.
This is actually a really good response though. The act of having a device blaring demonstrates contempt for everyone around them, and it's hard to act in a hateful way toward someone who just offered you something for free.
The main issue is that SOTA LLMs can only reason one way, forwards, and can't go back and revise a prior statement. Being able to revise would remove a whole lot of "it's not this, it's that" and "the big takeaway here is" and so on. Those kinds of framing ideas typically sit at the beginning of a human writer's output structure. An LLM can't go back and edit the first paragraph, because it has to reason (for whatever that means for an LLM) its way through it to get to the big idea of the paragraph/structure. I haven't played with diffusion text models enough to know if they're a remedy for that kind of output.
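To make the forward-only point concrete, here's a toy sketch (my illustration, not a real model): autoregressive decoding appends one token at a time, each conditioned only on the prefix so far, and nothing in the loop ever rewrites an earlier token.

```python
# Toy stand-in for a language model: the next token depends only on
# the prefix already emitted (here, just its length, for simplicity).
def next_token(prefix):
    return "token%d" % len(prefix)

def generate(n):
    out = []
    for _ in range(n):
        # Each step reads the frozen prefix and appends; no step can
        # go back and revise out[0] once the "big idea" becomes clear.
        out.append(next_token(out))
    return out

print(generate(3))  # ['token0', 'token1', 'token2']
```

A diffusion-style text model, by contrast, refines the whole sequence over several passes, which is why it's plausibly a remedy here.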
When LLMs are good enough to not be detectable, what happens then? They aren't that far away atm, so it's only a matter of time until _everyone_ is assumed to be an LLM.
Ultimate user here. I assume this doesn't kill "quick answers" for pro users, which I use frequently when I need a quick summarization. For assistant use directly, I've been thinking about stepping back from Ultimate, as I use Claude for AI rubber ducking, which works better than all of the LLMs available on Assistant.
> Kagi Assistant's web tool uses Kagi Search, and that has nothing to do with this subscription plans discussion, we're not changing anything there. The same applies to LLM-powered features in Kagi Search, like Quick Answer.
I have maxed out my Ultimate usage before, and when that happened the quick summarization tools stopped working, indicating I had hit my limit. So I assume it would affect those, but that might be part of how they break it up.
If a person needs an LLM to figure out where a semicolon goes, an LLM is not going to help them code.