I'd consider using something besides 'list all files' as the first example in the gif. Anyone who opens a terminal already knows how to do that, so listing files in a directory and hitting a spinner isn't very inspiring.
The second curl example is better since plenty of people won't know that off the top of their head.
Meh. I am fairly certain the number of people who would both use this and also fail to consider possible use cases half a second into the ls demo is zero.
A reminder to users of GitHub Copilot who may be unaware: there is also a terminal version, included in your subscription, that you can install. It's ok. (Edit) There is a waitlist.
I recommend using ChatGPT functions for more reliability. I'm making use of them[0]. I wanted to use JavaScript to extend it with a plugin system, haha.
I've been using it for the past few weeks and it's really sleek. I actually don't use the AI features very often because I'm fluent in the terminal, but it's nice to know it's there if I need it.
Some people get weirded out by the fact that it's VC funded and they collect telemetry, but I think you can turn that off.
I loved Warp for its speed, but the keybindings and lack of configuration didn't work for me. For example, it doesn't support ctrl+x ctrl+e, which I use daily for editing long command lines in vim.
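(For anyone unfamiliar: in bash's default emacs-mode readline bindings that shortcut works out of the box; zsh needs it wired up manually. A sketch, assuming vim as the editor:)

```sh
# bash: C-x C-e is readline's edit-and-execute-command by default;
# it opens the pending command line in $VISUAL (falling back to $EDITOR).
export VISUAL=vim

# zsh: the equivalent widget has to be bound explicitly.
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^x^e' edit-command-line
```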
I recently switched to Kitty (https://sw.kovidgoyal.net/kitty/) and I don't think I'm ever going back. It's not flashy and doesn't have any AI features. It's just a donation-funded, wickedly fast, highly configurable, dotfiles-friendly, modern terminal emulator.
The only thing holding me back from loving Kitty was its behavior when ssh’ing, even if ssh was aliased to kitty +kitten ssh. Namely, that if you sudo’d to another user, the TERM settings were lost and so backspace became space (among other things).
Today, I finally figured out that exporting TERM=xterm before sudo fixes it. Hallelujah.
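(For anyone hitting the same thing, the fix looks roughly like this — the target user is a placeholder:)

```sh
# kitty sets TERM=xterm-kitty, which the sudo target's environment often
# has no terminfo entry for. Falling back to plain xterm before sudo
# keeps backspace & friends working. (xterm-256color usually works too.)
export TERM=xterm
sudo -u otheruser -i
```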
Collecting telemetry, and being able to tie it to people's GitHub profiles, is why Warp was able to secure millions of VC dollars to build a fucking mOdErN terminal emulator.
I was surprised that Whiz could use tools like ffmpeg even though it tells ChatGPT to only use available shell commands. I asked it to "convert demo.mov to an mp4" and "cut the first five seconds of demo.mp4" and it came back with the correct commands. I guess that was enough to allow it to assume ffmpeg is probably installed. Pretty cool!
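(From memory, the commands it came back with were something like the following — output filenames are mine, and I read "cut" as "remove":)

```sh
ffmpeg -i demo.mov demo.mp4                     # convert mov -> mp4
ffmpeg -ss 5 -i demo.mp4 -c copy trimmed.mp4    # drop the first ~5s (keyframe-accurate)
```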
Looking at the code, the system prompt says "You MUST NOT use functions that are not available." but apart from the OS info, it's not giving GPT any info on what commands are/aren't available, so I'd imagine it's a bit of a crapshoot.
Seems like a perfect use case for local models. Not sure I want to be sending my .bash_profile or .bash_history (or local env vars...) to OpenAI. And I can't imagine doing anything in the terminal that llama2-code-7b couldn't make sense of. That can trivially run on an M1 with 8GB.
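(That's already doable today with llama.cpp — model path and quantization below are illustrative, not a specific recommendation:)

```sh
# A 4-bit-quantized 7B code model fits comfortably in 8 GB of RAM.
./main -m models/codellama-7b.Q4_K_M.gguf \
  -p "Shell command to find files larger than 100MB, explain briefly:"
```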
I was able to understand the Whiz source (~200 lines) in about sixty seconds, which is a big win in my opinion. The shell_gpt code is more convoluted. Would be interesting to compare how they perform.
We need good locally installed LLMs (and cheap hardware to run them). I hope there can be some kind of breakthrough here similar to what Stable Diffusion did for image generation. I tried to generate some simple code using a few of the llama models small enough to run on my computer and they did surprisingly well, but still far from good enough to be useful.
I was wondering if Whiz was using agent-style planning prompts, but it's much simpler and passes a single `function` parameter named `shell` to the OpenAI API.
I thought that was clever, because shell commands are highly composable, so it's likely that something can be done in "one shot".
It will also be way more efficient with token usage.
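(To make that concrete, the request shape is roughly the following — paraphrased, not copied from the Whiz source; the `command` property name is my guess:)

```sh
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "convert demo.mov to an mp4"}],
    "functions": [{
      "name": "shell",
      "description": "Execute a shell command",
      "parameters": {
        "type": "object",
        "properties": {
          "command": {"type": "string", "description": "The command to run"}
        },
        "required": ["command"]
      }
    }],
    "function_call": {"name": "shell"}
  }'
```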
The issue is ChatGPT hallucinating a function that doesn't exist. I'm figuring out how to guardrail against this. Thanks for trying it out and reporting with an example.
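(One cheap guard, sketched with jq — `response.json` is a hypothetical file holding the API response:)

```sh
# Reject any response whose function_call names something other than
# the single declared function, instead of blindly executing it.
name=$(jq -r '.choices[0].message.function_call.name // empty' response.json)
if [ "$name" != "shell" ]; then
  echo "model hallucinated function '$name', retrying" >&2
fi
```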
I updated the instructions and added a feature to change the model used by whiz. Could you please update whiz_cli via npm and add `export WHIZ_LLM_MODEL=gpt-4` to your shell profile?
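(i.e., assuming a global npm install, something like:)

```sh
npm update -g whiz_cli
echo 'export WHIZ_LLM_MODEL=gpt-4' >> ~/.bashrc   # or your shell's rc file
```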