
I'm uninformed about this, it may just be superstition, but my feeling while using Kagi in this way is that after using it for a few hours it gets a bit more forgetful. I come back the next day and it's smart again, for while. It's as if there's some kind of soft throttling going on in the background.

I'm an enthusiastic customer nonetheless, but it is curious.




I noticed this too! It's dramatic even in the same chat. I'll come back the next day, and even though I still have the full convo history, it's as if it completely forgot all my earlier instructions.


Makes sense. Keeping the conversation implies that each new message carries the whole history again. You need to create new chats from time to time, or switch to a different model...
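
For the curious, here's a minimal sketch of what that means in practice, assuming a typical chat-completions-style setup where the client resends the entire message list on every turn. The function names below are made up for illustration, not any vendor's real SDK:

    # Illustrative only: a toy loop showing how a chat client resends the full
    # history on every turn. call_model and rough_token_count are placeholders
    # invented for this sketch.

    def call_model(messages):
        # Stand-in for the real API call; just echoes how much context it saw.
        return {"role": "assistant",
                "content": f"(reply based on {len(messages)} prior messages)"}

    def rough_token_count(messages):
        # Very crude ~4-characters-per-token estimate, only to show growth.
        return sum(len(m["content"]) for m in messages) // 4

    history = [{"role": "system",
                "content": "You transform test cases to match interface changes."}]

    for user_msg in ["transform test A", "transform test B", "transform test C"]:
        history.append({"role": "user", "content": user_msg})
        history.append(call_model(history))  # every earlier message rides along
        print(f"~{rough_token_count(history)} tokens now sent with each request")

    # Once that running total exceeds the model's context window, the oldest
    # messages (often the original instructions) get truncated or summarized
    # away, which looks like the model "forgetting" what you told it.

Which is also why starting a fresh chat tends to bring the "smarts" back.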


This is my biggest gripe with these LLMs. I primarily use Claude, and it exhibits the same behavior described here. I'll find myself in a flow state and then somewhere around hour 3 it starts to pretend it isn't capable of completing specific tasks that it had been performing for hours, days, weeks. For instance, I'm working on creating a few LLCs with their requisite social media handles and domain registrations. I _used_ to be able to ask Claude to check all US state LLC registrations, all major TLD domain registrations, and USPTO against particular terms and similar derivations. Then one day it just decided to stop doing this. And it tells me it can't search the web or whatever. Which is bullshit, because I was verifying all of this data and ensuring it wasn't hallucinating - which it never was.


Could it be that you're running out of available context in the thread you're in?


Doubtful. I started new threads using carbon-copy prompts. I'll research some more to make sure I'm not missing anything, though.


Did you ever read Accelerando? I think it involved a large number of machine generated LLCs...


No, but I'll give the wikipedia summary a gander :)


Is that within the same chat?


The flow lately has been transforming test cases to accommodate interface changes, so I'm not asking it to remember something from several hours ago, I'm just asking it to make the "same" transformation from the previous prompt, except now to a different input.

It struggles with cases that exceed 1000 lines or so. Not that it loses track entirely at that size, it just starts making dumb mistakes.

Then after about 2 or 3 hours, the size at which it starts to struggle drops to maybe 500. A new chat doesn't seem to help, but who can say, it's a difficult thing to quantify. After 12 hours, both the AI and I are feeling fresh again. Or maybe it's just me, idk.

And if you're about to suggest that the real problem here is that there's so much tedious filler in these test cases that even an AI gets bored with them... Yes, yes it is.



