I've had a positive experience building a ChatGPT shell for Emacs [1]. Not having to context switch between the editor and browser is great. With Emacs being a text paradise, there are all sorts of possible integrations, like org-babel integration for elisp [2] or SwiftUI [3].
In addition to a shell, functions for inserting GPT responses can be pretty neat too. For example, creating org tables [4].
There are plenty (perhaps far too many) tools for doing basically `curl` to OpenAI.
Local LLM tools are still needed, however, and are much better for deploying systems on terabytes of data at a fraction of the cost.
I use the aichat [1] command line tool a lot for this sort of ad hoc chat. It takes piped input and has nice configurability for setting up a variety of system prompts ("roles"), etc.
If you want to use GPT-4 to manipulate and edit files in your local file system, you can use my CLI tool aider [2]. It's intended for generating and editing code, but you can use it to chat with GPT-4 to read, edit, and write any text files on your local machine. If the files are under git source control, it will also commit the changes as they happen.
Here’s a transcript of aider editing the ANSI-escape codes in an asciinema screencast recording, for example[3].
I wonder how many different ways people use to do basic ChatGPT queries.
My preferred method is to run a WhatsApp bot; this way I can easily use the LLM on my phone as well. On a computer I just use WhatsApp Web, which I keep running anyway. This method also natively supports iterated conversations.
The OpenAI API is available to everyone. I've spent well over $100 just trying various things out over the past two months, and I was not trying to save on it. You can do quite a lot even on $10; just make sure to do some napkin maths before you hit an endpoint a lot of times. For example, it's a lot easier to spend a lot on DALL-E than it is on GPT-3.5.
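For a rough sense of scale, here's the napkin math, assuming the mid-2023 list prices of roughly $0.002 per 1K gpt-3.5-turbo tokens and $0.02 per 1024x1024 DALL-E image (illustrative numbers only; check the current pricing page, these drift):

    # Illustrative napkin math, not current prices.
    price_per_1k_tokens = 0.002   # USD, gpt-3.5-turbo (mid-2023 list price)
    price_per_image = 0.02        # USD, 1024x1024 DALL-E image (mid-2023 list price)

    budget = 10.0                 # USD
    tokens = budget / price_per_1k_tokens * 1000
    images = budget / price_per_image
    print(f"${budget:.0f} buys ~{tokens:,.0f} GPT-3.5 tokens or ~{images:,.0f} images")
    # $10 buys ~5,000,000 GPT-3.5 tokens or ~500 images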
Try a Google search for GitHub projects that do that. Or really any other GPT idea. People are building many copies of everything, so I'm not even going to recommend the one that I'm using because there's probably a better one already :). It's simple code so you can also modify it to your liking.
I keep plugging my own… yet another API invoker, written in Go, with parallel queries, templates, and config files:
https://github.com/tbiehn/thoughtloom
It has some interesting examples, but I expect the population of users to be constrained to the five of us who enjoy the CLI, jq, and writing bash scripts.
Really like the design of these tools: you can easily pipe between them, which is a good way to make things composable. Also really cool to see all the other CLI tools folks have posted here; lots I wasn't aware of.
I've been experimenting with CLI/LLM tools and found my favorite approach is to make the LLM constantly accessible in my shell. The way I do this is to add a transparent wrapper around whatever your shell is (bash, zsh, etc.), send commands that start with capital letters to ChatGPT, and manage a history of local commands and GPT responses. This means you can ask questions about a command's output, autocomplete based on ChatGPT suggestions, etc.
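A minimal sketch of that idea, assuming the pre-1.0 `openai` Python package and an OPENAI_API_KEY in the environment; the capital-letter convention and the details here are just for illustration, not the actual wrapper:

    #!/usr/bin/env python3
    # Toy wrapper: capitalized input goes to ChatGPT, everything else to the shell.
    import subprocess
    import openai  # pip install "openai<1.0"

    history = []  # shared log of shell output and GPT replies

    while True:
        try:
            line = input("$ ")
        except EOFError:
            break
        if line and line[0].isupper():
            # Route to ChatGPT with recent history as context.
            history.append({"role": "user", "content": line})
            reply = openai.ChatCompletion.create(
                model="gpt-3.5-turbo", messages=history[-20:]
            )["choices"][0]["message"]["content"]
            history.append({"role": "assistant", "content": reply})
            print(reply)
        else:
            # Run as a normal command; keep its output so GPT can be asked about it.
            out = subprocess.run(line, shell=True, capture_output=True, text=True)
            print(out.stdout + out.stderr, end="")
            history.append({"role": "user", "content": f"$ {line}\n{out.stdout}"})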
Another enthusiastic vote for https://github.com/charmbracelet/mods - this is precisely the UX I was looking and waiting for. The day I cloned it and started using it within my terminal was the day I no longer needed to window out to Firefox, and it feels very natural to compose with pipes, wrap into shell scripts, etc.
Early days, but you can see some of the ways this is already helping me out quite a bit (and increasing my enjoyment of things I already like to do): github.com/zackproser/automations
It's still a bit hacky in the current PyPI version of LMQL, but you can also use it from the command line, just like `python -c`:
echo "Who are you?" | lmql run "argmax '\"Q:{await input()} A:[RESULT]';print(RESULT) from 'chatgpt'" --no-realtime
Gives you: I am an AI language model created by OpenAI.
I am one of the LMQL devs, and we plan to add a somewhat more seamless CLI interface as well, e.g. to support processing multiple lines of text (for quick classification tasks and the like).
I wanted a simple chat history and rudimentary web searches in the terminal, so I wrote my own [0] (bring your own OpenAI API key).
It was a very novel experience writing the API for the simple "tools" in plain English in the system prompt (e.g. to search the web, read a website), though I never managed to make GPT-4 successfully use the "execute JavaScript" one.
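As a hypothetical illustration of describing "tools" in plain English in the system prompt (the tool names and the TOOL: convention here are made up, not the linked project's actual protocol), using the pre-1.0 `openai` package:

    import openai  # pre-1.0 style API, OPENAI_API_KEY in the environment

    SYSTEM_PROMPT = """If you need a tool, reply with exactly one line:
    TOOL: search <query>  -- I will run a web search and paste back the results
    TOOL: read <url>      -- I will fetch the page and paste back its text
    Otherwise, answer the user directly."""

    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What's on the front page of example.com?"},
    ]
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    text = reply["choices"][0]["message"]["content"]
    if text.startswith("TOOL:"):
        # A real client would execute the tool here, append the result as a new
        # message, and call the API again with the extended conversation.
        print("model asked for:", text)
    else:
        print(text)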
I have a basic Python script running in one of my terminal tabs to talk to OpenAI. It's not even 50 lines. It's basically just a while loop for user input, which it sends to the ChatGPT API and prints the response. Add a try/except for rate limits and connection issues and that's it.
It's really nice to have an always-open ChatGPT equivalent in one of my terminal tabs that I can switch to at any time.
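Something along these lines (a sketch using the pre-1.0 `openai` package, not the poster's actual script):

    #!/usr/bin/env python3
    # Minimal terminal chat loop against the ChatGPT API (sketch).
    import time
    import openai  # pip install "openai<1.0", OPENAI_API_KEY in the environment

    messages = []
    while True:
        try:
            user = input("> ")
        except (EOFError, KeyboardInterrupt):
            break
        messages.append({"role": "user", "content": user})
        try:
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo", messages=messages
            )
        except (openai.error.RateLimitError, openai.error.APIConnectionError) as err:
            print(f"[error: {err}; retrying in 5s]")
            time.sleep(5)
            messages.pop()  # drop the message so it can be re-entered
            continue
        reply = resp["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        print(reply)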
Very cool, but curious whether you see people directly interacting with LLMs vs. calling them from a script as part of a larger application? I see myself needing debugging, visualizing output, etc. so much that an IDE makes more sense to me as an interface, so I want to learn about cases where that doesn't hold.
That's right. TUI usability depends on startup speed, and Python scripts are slow to start. The "fast access" is tmux integration, so it runs in the background. As for "optimised for macOS": it works on Linux, but there are a few bugs to iron out.
I made a TypeScript-based CLI and package [0] you can import into projects, extend [1], and get metrics from [2]. Hopefully others find this useful. I built in response validation and lots of configurable options, and it's fully tested.
I always like the tools this author builds; they are very useful. A question on the strip-tags tool: can we use it in a general way, to extract content from any page?
[1]: https://xenodium.com/chatgpt-shell-available-on-melpa
[2]: https://xenodium.com/images/chatgpt-shell-available-on-melpa...
[3]: https://xenodium.com/images/chatgpt-shell-available-on-melpa...
[4]: https://raw.githubusercontent.com/xenodium/chatgpt-shell/mai...