I've always defaulted to using https://yarnpkg.com/ to search for packages because the npmjs.com search is so slow, but while the yarnpkg.com search is super fast, actually clicking through to a package's details page takes forever.
This is super fast for both search and the details page, and it's super keyboard friendly which makes it even faster to use in practice. Definitely going to become my go-to search now. Love it, thanks for building it!
Follow up Q: what are you supposed to do when the context becomes too large? Start a new conversation/context window and let Claude start from scratch?
Context filling up is sort of the Achilles' heel of CLI agents. The main remedy is to have it output some type of handoff document and then run /compact, which leaves you with a summary of the latest task. It sort of works, but by definition it loses information, and you often find yourself having to re-explain or re-generate details to continue the work.
I made a tool[1] that lets you just start a new session and injects the original session file path, so you can extract any arbitrary details of prior work from there using sub-agents.
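To make the "extract details from the prior session file" idea concrete, here's a rough sketch. The JSONL layout below (one JSON object per line, each holding a `message` dict with `role` and `content`) is an assumption for the demo, not necessarily the exact schema Claude Code writes:

```python
import json
import os
import tempfile

def extract_messages(session_path, role="user"):
    """Pull message texts for a given role out of a JSONL session file.

    Assumes each line is a JSON object with a "message" dict holding
    "role" and "content" keys -- the real session-file schema may differ.
    """
    out = []
    with open(session_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            msg = record.get("message") or {}
            if msg.get("role") == role:
                out.append(msg.get("content"))
    return out

# Demo with a synthetic session file
records = [
    {"message": {"role": "user", "content": "Refactor the auth module"}},
    {"message": {"role": "assistant", "content": "Done, see auth.py"}},
]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
    path = f.name

print(extract_messages(path, "user"))  # → ['Refactor the auth module']
os.unlink(path)
```

In practice you'd point something like this at the injected session path (or hand that job to a sub-agent) rather than at a synthetic file.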
Either have Claude /compact, or have it output things to a file it can read in on the next session. That file would be a summary of progress for work on a spec or something similar. It's also good to prime it again with the README or any other higher-level context.
It’s a good idea to have Claude write down the execution plan (including todos). Or you can use something like Linear / GH Issues to track the big items, and keep the small, tactical todos in session todos.
This approach means you can just kill the session and restart if you hit limits.
(If you hit context limits you probably also want to look into sub-agents to help prevent context bloat. For example, any time you are running and debugging unit tests, it’s usually best to start with a sub-agent to handle the easy errors.)
It feels like one could produce a digest of the context that works very similarly but fits in the available context window: not just by getting the LLM to use succinct language, but also mathematically, like reducing a sparse matrix.
There might be an input that produces that sort of effect. Perhaps it looks like nonsense (like zipped data), but when the LLM ingests it, the outcome is close to having consumed the whole original context?
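The byte-level redundancy is real, at least: agent transcripts are full of repeated tool-call boilerplate and re-read file contents, which is why they shrink so dramatically under ordinary compression. A minimal sketch (the transcript is made up, and nothing here implies an LLM can actually consume the compressed bytes):

```python
import zlib

# A toy "context window": agent transcripts repeat a lot of structure
# (tool call boilerplate, file contents read multiple times).
transcript = (
    "TOOL CALL: read_file src/app.py\n"
    "RESULT: def handler(request): ...\n"
) * 200

compressed = zlib.compress(transcript.encode())
ratio = len(compressed) / len(transcript.encode())
print(f"original={len(transcript.encode())}B "
      f"compressed={len(compressed)}B ratio={ratio:.3f}")
```

The gap between the two sizes is a rough upper bound on how much of the transcript is pure repetition; a semantic digest could plausibly do even better, since it can drop detail rather than just deduplicate it.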
I ask it to write a markdown file describing how it should go about performing the task. Then have it read the file next time. Works well for things like creating tests for controller methods where there is a procedure it should follow that was probably developed over a session with several prompts and feedback on its output.
Start in plan mode, generating a markdown file with the plan, keep it up to date as it is executed, and after each iteration commit, clear the context and tell it to read the plan and execute the next step.
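A minimal sketch of the "read the plan and execute the next step" half of that loop, assuming the plan file tracks steps as markdown checkboxes (the plan contents and helper below are hypothetical, not part of any tool):

```python
import re

# A plan file as described above: kept up to date as steps complete.
PLAN = """\
# Plan: add rate limiting
- [x] Sketch middleware interface
- [x] Write unit tests for token bucket
- [ ] Implement token bucket
- [ ] Wire middleware into the router
"""

def next_step(plan_text):
    """Return the first unchecked '- [ ]' item, or None if all done."""
    m = re.search(r"^- \[ \] (.+)$", plan_text, flags=re.MULTILINE)
    return m.group(1) if m else None

print(next_step(PLAN))  # → Implement token bucket
```

After clearing the context, the fresh session only needs the plan file plus "execute the next unchecked step", so nothing important lives solely in the old context window.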
It's definitely not super straightforward, but there's plenty of recent prior art to steal from. Ruby was probably not the best place to solve this for the first time given the constraints (similar to pip), but there's no reason the Ruby ecosystem shouldn't now benefit from the work other ecosystems have done to solve it.
Not even close to the IBM one. Think of icons/pictograms or animations: you can use a library for those, but you'll have to do your own custom styling to match your theme. Carbon is the most encompassing one; it comes with lots of things you might need, already integrated in style and patterns with the core framework.
Zulip in general is a great example of a large open source Django app that's been maintained and actively developed for a long time. I use it as a reference quite a lot.