Hacker News

What do you use to throw a set of documents and/or a nontrivial code base into an LLM workspace and ask questions about it? What the cloud-based services provide goes way beyond a simple chat interface or mere code completion (as you know, of course).

I use my https://github.com/simonw/files-to-prompt tool like this:

  files-to-prompt . -e py -e md -c | pbcopy
Now I have all the Python and Markdown files from the current project on my clipboard, in Claude's recommended XML-like format (which I find works well with other models too).
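To make the "XML-like format" concrete, here is a minimal shell sketch of what that `-c` wrapping looks like, based on my reading of the tool's output (the exact tags are an assumption here, not quoted from files-to-prompt's source; `wrap_files` is my own hypothetical helper):

```shell
# Hypothetical stand-in for files-to-prompt's -c output:
# each file becomes a <document> element containing its path
# in <source> and its text in <document_contents>.
wrap_files() {
  i=0
  for f in "$@"; do
    i=$((i + 1))
    printf '<document index="%d">\n<source>%s</source>\n<document_contents>\n' "$i" "$f"
    cat "$f"
    printf '</document_contents>\n</document>\n'
  done
}
```

The point of the structure is that the model can cite files back by their `<source>` path when answering questions about the project.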

Then I paste that into the Claude web interface or Google's AI Studio if it's too long for Claude and ask questions there.

Sometimes I'll pipe it straight into my own LLM CLI tool and ask questions that way:

  files-to-prompt . -e py -e md -c | \
    llm -m gemini-2.0-flash-exp 'which files handle JWT verification?'
I can later start a chat session on top of the accumulated context like this:

  llm chat -c
(The -c flag means "continue the most recent conversation in the chat".)

Thanks. Google AI Studio isn’t local, I think, is it? I’ll have to test this, but our project sizes and specification documents are likely to run into size limitations for local models (or for the clipboard at the very least ;)). And what I’d be most interested in are big-picture questions and global analyses.

No, it's not. I've not seen any local models that can handle 1M+ tokens.

I haven't actually done many experiments with long context local models - I tend to hit the hosted API models for that kind of thing.



