Yeah, I mention this in the post, but this variant of LLaMA isn't storing any of the conversation in memory, so it has no context on prior questions. You're starting fresh with each prompt. We have some ideas for how to improve this, though... more soon :)



The simplest way to improve this is just to re-feed the whole conversation as the prompt on each turn, roughly like the sketch below.
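
A minimal sketch of that, assuming a generate(prompt) helper that wraps whatever inference call you're actually using (that helper is a placeholder, not a real API):

    # Keep a running transcript and re-send all of it on every turn,
    # since the model itself is stateless between prompts.
    history = []

    def generate(prompt):
        # Placeholder: swap in your real model call here
        # (llama.cpp binding, HTTP endpoint, etc.).
        raise NotImplementedError

    def chat(user_message):
        history.append("User: " + user_message)
        # The whole conversation so far becomes the new prompt.
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate(prompt).strip()
        history.append("Assistant: " + reply)
        return reply

The catch is that the prompt grows every turn, so past a point you have to truncate or summarize older turns to stay inside the model's context window.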


ah, ok - thanks!



