Show HN: Ell – A command-line interface for LLMs written in Bash (github.com/simonmysun)
214 points by simonmysun 47 days ago | hide | past | favorite | 84 comments
Hi HN!

I've created a CLI tool called "ell" that allows you to interact with LLMs directly from your terminal. Designed with the Unix philosophy in mind, ell is simple, modular, and extensible. You can easily pipe input and output to integrate with other tools. Its templates and hook-based plugins enable you to customize and extend its functionality to suit any needs. Check out the README for usage instructions and examples.

I developed this tool because existing solutions often felt too heavy, with many dependencies, or they weren't friendly to piping and customization. By contrast, I wrote ell in almost pure Bash with minimal dependencies. Additionally, I found a lack of tools that could read past terminal output as context. Imagine encountering an issue in your terminal and being able to directly ask an LLM for help with a simple command: this is now possible with ell (see the demo video).

Known limitations:

- To maintain simplicity and efficiency, jq is used for JSON parsing.

- curl cannot be avoided for sending HTTPS requests. If only there were SSL / TLS support in `/dev/tcp/`!

- Perl is used to handle terminal escape sequences because regex in Bash does not support looking around.

- Markdown syntax highlighting is not perfect due to the need for streaming output. It relies on a simple state machine instead of a full parser, which may produce incorrect results.

- Other known issues are listed in GitHub Issues. Please help add more!
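To illustrate the streaming constraint from the highlighting limitation above: a highlighter that must emit each line as it arrives cannot look ahead, so it can only keep minimal state, such as whether it is currently inside a code fence. A toy sketch of that idea (an illustration of the approach, not ell's actual code):

```shell
#!/usr/bin/env bash
# Toy streaming "highlighter": toggles a single state bit on fence lines
# and colors everything inside a fence. A full parser could do better,
# but it would have to buffer output, which defeats streaming.
highlight_stream() {
  local in_code=0 line
  while IFS= read -r line; do
    if [[ $line == '```'* ]]; then
      (( in_code = 1 - in_code ))     # flip in/out of code-fence state
      printf '%s\n' "$line"
    elif (( in_code )); then
      printf '\033[36m%s\033[0m\n' "$line"   # color code lines cyan
    else
      printf '%s\n' "$line"
    fi
  done
}
```

Because the state machine sees one line at a time, an unclosed fence or a stray ``` in prose will mis-color everything after it, which is exactly the class of imperfection described above.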

I welcome any criticism and suggestions, whether it's about the idea or code!




Does ell have the ability to pipe things INTO the tool?

I use that with my https://llm.datasette.io/ tool all the time - things like this:

    cat somecode.py | llm -m claude-3.5-sonnet "Explain this code"
Or you can separate the instructions from the piped content by putting them in a system prompt instead like this:

    cat somecode.py | llm -m claude-3.5-sonnet --system "Explain this code"
Being able to pipe content like this INTO an LLM is really fun, it lets you do things like scrape a web page and use it to answer questions: https://simonwillison.net/2024/Jun/17/cli-language-models/#f...


I have been an LLM skeptic for a long time, but the llm CLI and your review of Claude 3 Opus (and subsequently discovering how comparatively cheap 3.5 Sonnet is) has started to turn LLMs into something I use daily.

Exactly that kind of piping comes in handy all the time. I use it to estimate the reading time of things on the web, through

    curl <url> | llm -m claude-3.5-sonnet -s 'How long does the main content of this article take to read? First count words, then convert using a slow and fast common reading speed.'
It gets the word count wrong a little too often for my taste, but it's usually within the right order of magnitude which is good enough for me.

One of my most used shell scripts recently is one I named just `q` which contains

    #!/bin/sh

    llm -s "Answer in as few words as possible. Use a brief style with short replies." -m claude-3.5-sonnet "$*"
This lets me write stupid questions in whatever terminal I'm in and not be judged for it, like

    [kqr@free-t590 tagnostic]$ q How do I run Docker with a different entrypoint to that in the container?
What's nice about it is that it stays in context. It's also possible to ask longer questions with heredocs, like

    [kqr@free-t590 tagnostic]$ q <<'EOF'
    > I have the following Perl code
    > 
    >     @content[sort { $dists[$a] <=> $dists[$b] } 0..$#content];
    > 
    > What does it do?
    > EOF
I have meant to write about this ever since I started a few weeks ago but I would like my thoughts to mature a bit first...


I love that ‘q’ script, definitely going to try that myself.


I did turn it into an article with some more examples, if you're curious: https://two-wrongs.com/q.html


Definitely! For example,

  cat somecode.py | ell -f -
If you prefer adding another piece of prompt instantly instead of adding it in the template:

  (cat somecode.py; echo "Explain this code") | ell -f -
I should've added this to the README.

I really love your "llm" and the blog posts but somehow I missed them before. I believe I would be a lot less motivated to write ell if I had read your post first.


> I really love your "llm" and the blog posts but somehow I missed them before. I believe I would be a lot less motivated to write ell if I had read your post first.

I mean, doing a simple search like "CLI interface for LLMs" shows multiple tools made by people over the years. Not to bash your work (pun intended), but I don't see the point of creating yet another CLI interface for LLMs at this point.


To the creator, ignore this person. Thank you for sharing!

To the parent: prefer that you hold opinions like this to yourself.


>> To the parent: prefer that you hold opinions like this to yourself.

It seems weirdly inconsistent that you expect people to hear your voice while you try to shut down another person expressing a viewpoint you don't agree with. You would have been better off with just the first half of your post.


Well, either I'm not good at googling or Google is not good at searching. I did search for similar products and I have listed them in the README. Perhaps I just didn't pick the right keywords. I'm sorry that many wonderful similar products are not listed, but so far I haven't found any of them that completely covers the features of ell.


How do you get that claude-3.5-sonnet model to use locally with llm? I wasn't able to figure it out reading the plugin docs.


Hi, they're also trying to do something similar with shell. I'm not sure who's better. [demo](https://x-cmd.com/mod/gemini) [source code](https://github.com/x-cmd/x-cmd/blob/main/mod/gemini/lib/main)


Cool! This looks a lot fancier.

EDIT: I was wrong. Ignore the next paragraph.

~~I haven't looked into the details, but it looks like it reads from somewhere like `.bash_history`. That's a good place to get user input from. But as far as I can tell, it cannot use the terminal output as context. I might be wrong. I should read more about its implementation.~~

It turns out it cannot make use of terminal output. But I like that it uses awk to process the response. I might also be able to use awk to get rid of the jq and perl dependencies. Thank you for letting me know about this.

I will add it to the related projects section of the README.


It looks beautiful and has many features; why are there so few stars?


I also wonder. It didn't appear in my search because, I guess, it has too many features and the feature I want to search has a relatively low weight. I also searched x-cmd on HN but there aren't many positive comments... I would expect it's more popular on HN because it's written in POSIX shell and awk.


I wrote a similar tool I'm no longer maintaining: https://github.com/llimllib/gpt-bash-cli/ . Here are my suggestions:

- save the conversations in a sqlite db. ~everyone has sqlite available and it allows the user to do things with the data more easily than a text file

- use XDG directories instead of suggesting ~/.ellrcd (https://wiki.archlinux.org/title/XDG_Base_Directory)

- I prefer using system secret stores to environment variables; I don't want to give every program I run access to my API keys. You can see how I did that in my program


Thanks for the suggestions! I read your code and the support of images is awesome.

I would not assume everyone has sqlite but this can be done optionally with a plugin. Will consider writing a demo for this.

Using XDG directories and system secrets sounds a lot better than what I did. I will learn how to use them and try to integrate them with my code!


> I would not assume everyone has sqlite but this can be done optionally with a plugin. Will consider writing a demo for this.

Used to be everyone used BerkeleyDB or some similar key-value store; for a great many use cases SQLite is just pragmatically better.

And it's arguably less exotic than perl.

You should of course do what you want, but "just use SQLite" is pretty solid advice when dealing with structured data, and almost certainly better than a "smart" text file.


I can't deny the benefits. But in my mind, this is not what ell should take care of. It doesn't intend to store anything, whether in a text file or any other format. It should, however, give users the possibility to store things in any way they like.


Fair enough. I seem to recall a project for keeping infinite bash history that did leverage SQLite - interfacing with such a project might be more interesting.

I was more thinking from gp comment that the project might store context or history in its own files - and then SQLite might be a good fit.



This looks nice! Thanks for mentioning it. I should definitely install this on my servers.


You may be thinking of McFly -- it's very good.


Shameless plug: we maintain a cross-platform/cross-language secrets store (with CLI tooling available) to portably read and write secrets (but it doesn't use the OS facilities for encryption).

Linking to the rust implementation because it’s the fastest and most easily portable: https://github.com/neosmart/securestore-rs


>I prefer using system secret stores to environment variables

What is the recommended way to store secrets on a Linux dev machine? The requirement is that random scripts and programs should be able to load their secrets, like API keys, at runtime with minimum hassle, and the secrets shouldn't be stored on disk in plain text.

I see you recommended keyring [1]. Is this "the GNU/Linux way"? I see another possibility being storing them in an encrypted filesystem (whether FUSE-based or not)

[1]: https://github.com/llimllib/gpt-bash-cli/blob/841682affe2d0e...
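One common GNU/Linux answer to the question above is libsecret's `secret-tool` CLI, which talks to whatever org.freedesktop.secrets provider is running (e.g. GNOME Keyring). A sketch, assuming the `libsecret-tools` package is installed and a keyring is unlocked; the `service`/`user` attribute names here are arbitrary labels, not required values:

```shell
# Store a secret once; secret-tool prompts for the value interactively,
# so the key never ends up in shell history.
secret-tool store --label="OpenAI API key" service openai user me

# Then, in scripts, pull it into the environment only at runtime:
export OPENAI_API_KEY="$(secret-tool lookup service openai user me)"
```

The secret is encrypted at rest by the keyring daemon rather than sitting in a dotfile, though any process running as your user can still look it up once the keyring is unlocked.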


I did a fair amount of looking to try and support a Linux secret store! My conclusion was that I was too confused and so I punted to keyring which seemed to paper over a few different stores.

It seems like a classic story of unfortunate Linux fragmentation


On the contrary, please don't use the keyring; it's annoying and some systems don't have it. Your LLM key is not that critical, and you should trust what runs on your system.

Poetry demands access to my keyring and I don't even use Poetry (the bug has been open for years; it doesn't even need access).


I know that it's controversial. It would be an option, or even a customized way of using ell mentioned in the documentation. I wouldn't force users to adopt any immature or uncommon standard.


I don't have sqlite, and I wouldn't know how to use it.

I would much prefer text files.

Thank you.


You can visit the website https://www.sqlite.org/ where you can find copies of the program, along with instructions on how to use it. I would prefer to use a more advanced file format to hold records. Standing on the shoulders of giants, instead of on their toes, as it were. Hopefully we can advance technology beyond the 1980s. Thank you for your understanding.


I also have a similar tool called https://autocomplete.sh https://github.com/closedloop-technologies/autocomplete-sh

I really just wanted the feeling of tab-based auto-complete to just work in the terminal.

It turns out that getting the LLM responses to 'play nice' with the expected format for bash_completion was a bit of a challenge, but once that worked, I could wrap all the LLMs (OpenAI, Grok, Claude, local ones like Ollama).

I also put some additional info in the context window to make it smarter: a password-sanitized recent history, which environment variables are set, and data from `--help` of relevant commands.

I've just started to promote it around the Boston area and people seem to enjoy it.


The demo video is epic. Nicely done! https://youtu.be/IAgkjerCvz8


Thank you! It was so much fun to make. And for once my son waking me up at 2am had a positive result!


Wow, that's very useful! I have also thought about completion, but my idea was more like Copilot. The user experience of your script should be better. I'm glad I didn't start writing that.

Regarding history in context, I suggest adding a record mode like ell's. This really helps.

The password sanitizer is great. I will also add one as a plugin. Thank you for the idea!


Thanks for checking it out and the record mode is a great idea. I've been playing around with ways to get the terminal outputs but so far I haven't loved the UX of my solutions. Your co-pilot approach that can explain the commands and iterate is really valuable.

If you're open to joining, I have a small AI engineer / open source dev Slack community in Boston. I'd love to have you (https://smaht.ai)


I am open to joining any community. As long as you don't mind the fact that I'm not in Boston, why not? I have just submitted your Google form. Thanks for inviting me!


I've watched autocomplete-sh in action at the AI Tinkerers meetup in Cambridge, MA. Was impressed. It is very well integrated with the shell. The idea of writing it directly in bash - bold! But an effective idea to keep it portable.


Looks interesting!

Does it work with the Fish shell? And, in case, how do I update or uninstall it?


`autocomplete remove` will delete it. I haven't tested it in fish / zsh shells.

Now that I have some Mac iOS dev work to do I'll probably build and test it


Looks great! I work on a number of different machines, so having something lightweight (like a shell script) is always desired.

Out of curiosity, can someone explain to me why certain commands start with a colon? Like `: "${ELL_LOG_LEVEL:=2}";`[1] I thought it was useful only as a no-op? [1]: https://github.com/simonmysun/ell/blob/main/ell.sh#L19C1-L19...


The : basically just tells bash to do nothing with the result of the line. So `: "${ELL_LOG_LEVEL:=2}";` would initialize `ELL_LOG_LEVEL` to 2 if it's not already set without producing any output.
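The idiom is easy to verify at a prompt. A minimal demonstration (using the same variable name as ell for illustration):

```shell
#!/usr/bin/env bash
# ':' is the shell's built-in no-op: it expands its arguments and discards
# them. Expanding "${VAR:=default}" as its argument triggers the assignment
# side effect without executing the expanded value as a command.
unset ELL_LOG_LEVEL
: "${ELL_LOG_LEVEL:=2}"   # variable was unset, so it is assigned 2
echo "$ELL_LOG_LEVEL"     # -> 2

ELL_LOG_LEVEL=5
: "${ELL_LOG_LEVEL:=2}"   # already set and non-empty: left untouched
echo "$ELL_LOG_LEVEL"     # -> 5
```

Without the leading `:`, the shell would try to run the expanded value (`2`) as a command name and fail.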


Thanks!

The colon is here to make sure the result is not executed. I learned that from here: https://stackoverflow.com/a/28085062/2485717


This is cool! Using pure Bash and Unix tools is an interesting approach. I built Plandex[1], which has some similar goals (no dependencies, terminal-based, supports piping into context) but takes quite a different route to get there: I'm using Go and compiling static binaries. It's also 'higher level' and specifically focused on coding, whereas ell seems like a very lightweight and general-purpose LLM tool. It reminds me a lot of Simon Willison's `llm` tool[2]. Are you familiar with it?

The recording feature also reminds me of savvy[3].

1 - https://github.com/plandex-ai/plandex

2 - https://github.com/simonw/llm

3 - https://github.com/getsavvyinc/savvy-cli


Thanks! Plandex is also nice! I never thought of such a workflow.

Unfortunately, I did not know about Simon Willison's `llm` tool. I should have guessed he would have written such software. It supports more in-depth manipulation of LLMs. ell lacks those capabilities and only uses the most common and most basic interfaces, but it offers more user-experience improvements, like pagination and syntax highlighting, while staying as lightweight as possible. I should mention `simonw/llm` in the README and point users who need deeper LLM control there.


That’s quite an honest and emotionally mature response, and I am always glad to find Real People(tm) around the internet. Rare these days. Your product looks great btw! Consider me another Stargazer, and keep building it.


Thank you!


Cool. The link to "Risks" in the README is broken.

What I would love: `ell -r` automatically, and an alias `fix` that proposes a fix, including making changes to a file. For example, say I have a typo in main.cc and do `gcc main.cc`. When I run `fix`, I want ell to propose a fix in the file with a diff. If I accept, it should make that change. Then it should propose running `gcc` again - and run it for me if I accept.


> The link to "Risks" in the README is broken.

Fixed. Thanks for pointing out!

> `ell -r` automatically, and an alias `fix` that proposes a fix, including making changes to a file.

Good idea! `ell -r` can be added to `.bashrc`, but I'm not sure whether it will conflict with users' existing configurations or cause other issues. Apart from confirming a patch, I think this is feasible with templates and plugins, but making actual changes to files is challenging for me, both technology-wise and in terms of user interface design. I will try to figure out what is possible.


Regarding running ell -r automatically, you can just add it to your .bashrc
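For anyone trying that, a guarded snippet might look like the following (an untested sketch; `ELL_RECORDING` is a made-up guard variable for this example, not an ell feature, used to avoid starting a recording inside a recording):

```shell
# In ~/.bashrc: start ell's record mode only for interactive shells,
# and only if this shell isn't already inside a recording session.
if [[ $- == *i* && -z "$ELL_RECORDING" ]]; then
    export ELL_RECORDING=1
    ell -r
fi
```

The interactive-shell check matters because `.bashrc` is also sourced by non-interactive shells on some setups, where spawning a recorder would break scripts.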


Yup. But the rest of the functionality is missing, I think.


Will check it out. Personally been using aichat[0] for this.

It's interesting you say there's no need for a more complex language than Bash for something like this. Doesn't the need for jq/curl/perl argue the opposite?

[0] https://github.com/sigoden/aichat


Indeed. That's why I list them as limitations. My original idea was to get everything done with Bash. That is, however, not feasible for the reasons listed. Maybe I could get rid of jq and perl using awk, but that would sacrifice a lot of the simplicity and readability of the code.

I think the syntax highlighter is the limit of what I will insist on doing in Bash. I would prefer not to write anything more complex than that in Bash; such features will be either not supported, or supported via external plugins.


Second aichat. Super good. For Linux I created a little bash script that downloads the latest binary and unzips it into /home/me/bin


Interesting - unfortunately it displays typical LLM issues in the demo video:

> It's important to note that using `1<>` can lead to unexpected behavior if the file already exists, as it will be overwritten.

> To avoid this, you can use the `-a` option to append to the file instead of overwriting it. For example:

> `ls 1<> output.txt`

> This will append the output of the `ls command the file `output.txt` if it already exists, or create the file if it doesn't.

Note that the example is wrong and not in line with the explanation.

Ed: AFAIK the closest thing that works would be:

    ls >> output.txt
Not sure if there are any invocations using "1<> output.txt" that would make sense in this context? Maybe binding a custom file descriptor like 3, and using "tee --append"?
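A scratch-directory experiment shows concretely why `1<>` is neither truncation nor append: it opens the file read-write and writes from offset 0, clobbering only the leading bytes:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"                     # scratch dir so nothing real is touched
printf 'aaaaaaaaaa\n' > output.txt    # 11 bytes of pre-existing content

echo hi 1<> output.txt                # fd 1 opened read-write at offset 0:
                                      # "hi\n" overwrites the first 3 bytes,
                                      # the remaining 7 a's survive
cat output.txt

echo hi >> output.txt                 # >> is what actually appends
cat output.txt
```

So after the `1<>` write the file is `hi` followed by a line of leftover `a`s, confirming the LLM's "append" explanation in the video was wrong.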


You are right. I will replace the video. Actually, the last time I recorded it, it wasn't this bad. The script was kept and I didn't change much when recording it again.

Here's the first video I recorded for an earlier version: https://github.com/simonmysun/ell/blob/d4fc5468157fa6adc8f9f...

Unfortunately, LLMs are not stable.

For reference, here's the link to the video with mistake:

https://github.com/user-attachments/assets/1355ad08-6fbf-4c0... https://github.com/simonmysun/ell/blob/553d38f60ad104893b2a3...


Huge fan of Charmbracelet's mods. I've been using it for months now and it works great. Very customizable and the output is clean.

https://github.com/charmbracelet/mods


Thank you for letting me know!

It does well with conversation, but ell itself is, on the contrary, stateless (with respect to user input and generated content). Conversational use of ell depends on `script` to record the terminal output. Though I could support managing historical dialogues via a plugin with side effects, I need to consider whether this suits the idea and philosophy of ell.

Well, either I'm not good at googling or Google is not good at searching. I did search for similar projects but never found these powerful tools that HN users have posted.


Very cool!

I wrote a similar tool (in Node.js, though), but was trying to make it extensible with plugins.

https://github.com/hiquest/nicechat


(Reading your comment and code reminds me that I might have confused users by conflating the plugins I proposed with the plugins in popular LLM backends. I will make this clear in ell's documentation.)

What kinds of plugins are you going to integrate? I implemented the hook system but actually don't have many ideas to add. Currently I have only added paginator and syntax highlighting plugins, and both of them are applied after getting the response from LLM backends.


So, last weekend I wanted to use an LLM to review some documents, and the problem I had was not so much the interface but that it's necessary to have some workflow management to re-run failed jobs and run the aggregate job once its dependencies are done. I ended up writing my own to do it, but I wondered if there are off-the-shelf solutions that already provide this kind of workflow.


I don't have a solution yet, but it's also a problem I am trying to address. LLM output is not stable or robust, and currently we can only adjust the prompt to improve it. Fundamental tools like shell pipes cannot easily handle this. You must either rerun the whole pipeline or start writing a more complex script that validates and parses the output.

I checked your solution and it looks promising. Will you make it a general purpose LLM workflow scheduler?


Thank you for your reply; that is good to know (especially coming from someone with great Bash skills). My intent was to have the simplest way to solve my problem and nothing more. If I were going to make it more robust, I would probably switch to Airflow; Luigi has nice features that keep things simple, but more limitations. I think Flowise fits the bill for LLM workflow management, but I haven't had the time to investigate yet.


Thx! Will look into that.


I don't know why i keep getting the error:

"FATAL Template not found: ~/.ellrc.d/templates/default-openai.json"

after having cloned the repo into my home directory and created the configuration file .ellrc in my home directory. I don't know, probably I'm doing something wrong... I'm new to bash projects. Why does it search for the templates in .ellrc.d? What's the .d part? I don't understand.


Oh sorry, that's my bad. The target clone path did not match the default value of the template path.

Please make sure you either clone the repo to `~/.ellrc.d` or set ELL_TEMPLATE_PATH to where you store your templates (with a trailing `/`).


Thank you. I always assume there's some magic going on behind the scenes that I don't understand, especially in things I'm not familiar with... In fact it was just a path mismatch, as the error suggested.


You are welcome. Please feel free to file any issues you encounter.


I love the idea of piping my error messages into an LLM to help me debug. Would also love solid local-only LLM support, so this basically works in offline mode as well.


Please stay tuned. Support for more providers is on the way!


The program seems to assume you'll clone it in your home directory, and has paths hardcoded to `~/.ellrc.d/`.

This is just bad.


I wouldn't say these paths are hardcoded. They are just default values. You can set the variables manually.

What is hard-coded is that it looks for configurations in `$HOME/.ellrc` and `$PWD/.ellrc`, which have the lowest precedence. Environment variables and command-line arguments will override them.


Convention over configuration isn't bad per se, as the alternatives tend to devolve to bikeshedding.


Ell is really cool!

I'm building a similar product called Savvy(https://github.com/getsavvyinc/savvy-cli) and considered an approach similar to yours (writing in pure bash) but ultimately decided to use Go for a few reasons:

- charmbracelet makes it super easy to build rich TUIs

- Go produces a single binary that's compatible across many platforms and keeps installation simple

- It's simpler to support multiple shells.


Thanks!

Another user[1] also mentioned Savvy, but I misunderstood its purpose. Now I understand it does have similar functionality for analyzing a terminal recording! Your approach allows more chances to let the LLM explain what happened, while in my case, asking ell immediately destroys the original context (the user may have to rerun the failing command again and cause more damage). However, exiting and reentering recording mode also feels tedious. I must find a better way to interact.

https://news.ycombinator.com/item?id=41139040


You're right, to counteract the friction I also allow users to create runbooks from their shell history.

Here's the source code: https://github.com/getsavvyinc/savvy-cli/blob/8c6a834c5a140b...


Just checked out Savvy; is the runbook-generating code (‘savvy record’) also in that repository? The one hosted at api.getsavvy.

Very interesting idea! Your terminal screenshots are excellent as well, very compelling imagery. Love the font.


Here's the code for savvy record: https://github.com/getsavvyinc/savvy-cli/blob/8c6a834c5a140b...

Lots of users realize they should have recorded something only after they've done it. That's why savvy also allows you to select commands from your shell history with `savvy record history`.

The API source code is in a different repo.

Thanks! All credit to the dracula theme for tmux and Kitty terminal emulator.

If you have any questions or feedback feel free to email me at shantanu@getsavvy.so


This is really cool.


Thx!


Sounds cool.


Thx!


Been using the LLM cli by simonw and love it.

https://github.com/simonw/llm

https://llm.datasette.io/en/stable/

Pro tip: use `$(pbpaste)` to inject clipboard contents into a prompt


I don't have pbcopy and pbpaste on my machine, but injecting clipboard contents sounds interesting.
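For what it's worth, on Linux the usual stand-ins are `xclip` or `xsel` under X11 and `wl-clipboard` under Wayland. Rough aliases, assuming one of those packages is installed:

```shell
# X11: emulate macOS pbcopy/pbpaste with xclip
alias pbcopy='xclip -selection clipboard'
alias pbpaste='xclip -selection clipboard -o'

# Wayland alternative (from the wl-clipboard package):
# alias pbcopy='wl-copy'
# alias pbpaste='wl-paste'
```

With either pair defined, the `$(pbpaste)` prompt-injection trick above works the same way as on macOS.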


I love how a few years ago everyone was fretting over how to keep an AGI in a leakproof box, http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf

And now, a few years later, we just give it a bash shell and full access to the internet. So much for the box! I can't believe how naive we were to think that the people who developed an AI would prioritize anything over profit, or how fast the genie would escape. It's not even AGI yet, but at the point where it becomes AGI, with this kind of precedent set, it's laughable to think that we will put any sort of guardrails on it.

All this time we thought it was a technical or philosophical problem but the real problem was capitalism and the glacial pace that the general public picks up on what is going on. It would take decades to get the people at large to agree that the threat is even real and decades more to get to the point where public opinion was decisive enough to counteract all that money, and only then do you get to even try to keep it contained.

The project itself is very well executed though


That truly is a visionary article; thank you for sharing it. We still cannot afford to be complacent, but I believe it is the responsibility of the users to ensure safe and ethical usage.



