Hacker News

A good prompt: you don't just ask it, you tell it how to behave and give it a ton of context.
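As a rough illustration of "tell it how to behave and give it context", a prompt can be assembled from three parts: behavior instructions, context snippets, and the actual task. This is just a toy sketch; the function name, tags, and example strings are made up, not any particular tool's format.

```python
# Toy sketch: a prompt built from behavior instructions + context + task.
# All names here are hypothetical, for illustration only.
def build_prompt(behavior: str, context_snippets: list[str], task: str) -> str:
    context = "\n\n".join(f"<context>\n{s}\n</context>" for s in context_snippets)
    return f"{behavior}\n\n{context}\n\nTask: {task}"

prompt = build_prompt(
    behavior="You are a careful senior Python reviewer. Answer only from the given context.",
    context_snippets=["def add(a, b):\n    return a + b"],
    task="Write a unit test for add().",
)
print(prompt)
```

The point is only the shape: behavior first, clearly delimited context in the middle, the concrete ask last.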





With Claude the context window is quite small. But adding too much context often seems to make it worse. If the context isn't carefully and narrowly picked, or is too unrelated, the LLM often starts doing things unrelated to what you've asked.

At some point it's no longer worth crafting the perfect prompt; just code it yourself. That also saves the time spent carefully reviewing the AI-generated code.
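One way to keep context "carefully and narrowly picked" is to rank candidate snippets by some crude relevance measure against the task and keep only the top few. A toy sketch, assuming simple word overlap as the relevance score (real tools use embeddings or other retrieval methods):

```python
# Toy sketch: keep only the few context snippets that share the most words
# with the task, so unrelated material doesn't drown the model.
def top_snippets(task: str, snippets: list[str], k: int = 3) -> list[str]:
    task_words = set(task.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(task_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

snips = ["parse config file", "render html page", "read config values from env"]
print(top_snippets("fix config parsing bug", snips, k=2))
```

Even this naive filter illustrates the trade-off in the thread: the value of context depends on relevance, not volume.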


Claude's context window is not small; isn't it larger than ChatGPT's?

I just looked it up; it seems it's actually the rate limit that's kicking in for me.

Yes, that's it! It's frustrating for me, too. You have to start a new chat with all the relevant data and a detailed summary of the progress/status.
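The "start a new chat with relevant data and a progress summary" workflow can be sketched as building a handoff message for the fresh conversation. A minimal sketch, assuming you track the goal, finished steps, and remaining steps yourself (the function and field names are hypothetical):

```python
# Hypothetical sketch: when a chat hits its limit, carry the state over
# into a new conversation as a single structured "handoff" message.
def handoff_message(goal: str, done: list[str], next_steps: list[str], data: str) -> str:
    lines = [f"Goal: {goal}", "Done so far:"]
    lines += [f"- {d}" for d in done]
    lines.append("Next steps:")
    lines += [f"- {n}" for n in next_steps]
    lines.append("Relevant data:")
    lines.append(data)
    return "\n".join(lines)

msg = handoff_message(
    goal="Migrate the billing module to the new API",
    done=["Ported invoice creation", "Updated tests for invoices"],
    next_steps=["Port refund flow"],
    data="refund endpoint: POST /v2/refunds",
)
print(msg)
```

Pasting something like this as the first message of the new chat replays the status without replaying the whole transcript.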

That doesn't prevent it from hallucinating; it only reduces hallucinations by a single-digit percentage.

Personally I’ve been finding that the more context I provide the more it hallucinates.

There's probably a sweet spot, same as with people. Too much context (especially unnecessary context) can be confusing or distracting, and so can being too vague, since that leaves room for multiple interpretations. But generally, I find the more refined and explicit you are, the better.


