Curious to check it out, but a quick question: does it have autocomplete (GitHub Copilot-style) in the chat window? IMO one of the biggest missing features in most chat apps is autocomplete. Typing messages in these chat apps quickly becomes tedious, and autocompletions help a lot with this. I'm regularly shocked that it's almost year three of LLMs (depending on how you count) and none of the big vendors have thought of adding this feature.
Another mind-numbingly obvious feature: hitting Enter should just create a new line, and Cmd-Enter should submit. Or at least make this configurable.
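Roughly what I have in mind, as a sketch for a plain textarea (the handler and the setting are made up here, not any vendor's actual code):

```typescript
// Hypothetical keydown handler for a chat input textarea.
// "mode" would be a user-configurable setting.
type SubmitMode = "enter" | "mod-enter";

function handleKeyDown(
  event: KeyboardEvent,
  mode: SubmitMode,
  submit: () => void
): void {
  if (event.key !== "Enter") return;

  const modHeld = event.metaKey || event.ctrlKey; // Cmd on macOS, Ctrl elsewhere
  const shouldSubmit =
    mode === "mod-enter" ? modHeld : !event.shiftKey && !modHeld;

  if (shouldSubmit) {
    event.preventDefault(); // stop the textarea from inserting a newline
    submit();
  }
  // Otherwise fall through: the default behaviour inserts a new line.
}

// Wiring it up (assumes an element with this id exists on the page):
const input = document.querySelector<HTMLTextAreaElement>("#chat-input");
if (input) {
  input.addEventListener("keydown", (e) =>
    handleKeyDown(e, "mod-enter", () => console.log("send:", input.value))
  );
}
```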
I don't think this would be good UX, except maybe once you've already typed ~20 characters or so. If the model were good enough to predict from the first keystroke, it would already have included the info you're asking for in the previous response. It could also work for short commands like "expand" or "make it concise", but I can also see it being distracting when the prediction is wrong.
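If I were building it, I'd gate the suggestion on a minimum prefix length plus a pause in typing, something like this (hypothetical sketch; fetchSuggestion and showGhostText are stand-ins, not a real API):

```typescript
// Hypothetical gating logic for when to ask the model for an inline suggestion.
const MIN_PREFIX_CHARS = 20; // don't suggest from the first keystroke
const DEBOUNCE_MS = 300;     // wait for a pause in typing

let timer: ReturnType<typeof setTimeout> | undefined;

function onDraftChanged(
  draft: string,
  fetchSuggestion: (draft: string) => Promise<string>,
  showGhostText: (suggestion: string) => void
): void {
  if (timer !== undefined) clearTimeout(timer);
  if (draft.trim().length < MIN_PREFIX_CHARS) return;

  timer = setTimeout(async () => {
    const suggestion = await fetchSuggestion(draft);
    // A real implementation would also drop the suggestion if the user
    // has kept typing since the request was fired.
    if (suggestion) showGhostText(suggestion);
  }, DEBOUNCE_MS);
}
```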
> Typing messages in these chat apps quickly becomes tedious and autocompletions help a lot with this.
If you're on a Mac, you can use dictation: focus the text input, double-tap the Control key, and just speak.
In the Zed editor, GitHub Copilot autocomplete is enabled in the chat assistant, and it's incredibly useful when I'm iterating on code generations.
The autocomplete is so good that even for non-coding interactions I tend to just use the Zed chat assistant panel (which can be configured to use a different LLM via a drop-down).
More generally, in multi-turn conversations with an LLM you're often refining things that were said before, and a context-aware autocomplete is very useful for that. It should at least be configurable.
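To make "context-aware" concrete: the completion request would need the earlier turns in it, roughly like this (a made-up sketch, not how Zed or Copilot actually does it):

```typescript
// Hypothetical sketch: include prior turns in the completion prompt so the
// suggestion can reference things already said in the conversation.
interface Turn {
  role: "user" | "assistant";
  content: string;
}

function buildCompletionPrompt(history: Turn[], draft: string): string {
  // Keep only the last few turns to stay within a context budget.
  const transcript = history
    .slice(-6)
    .map((t) => `${t.role === "user" ? "User" : "Assistant"}: ${t.content}`)
    .join("\n");

  return [
    "Continue the user's partially typed message.",
    "Return only the continuation, nothing else.",
    "",
    transcript,
    `User (typing): ${draft}`,
  ].join("\n");
}

// With the transcript included, a draft like "make the second" can be
// completed to something like "make the second paragraph more concise".
```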
The default Mac dictation is OK for non-technical things, but for anything code-related it would suck, e.g. if I'm referring to things like MyCustomClass.
I personally agree, but I would assume most people are on the "Enter to submit" train nowadays.
Most of my messaging happens on Discord or Element/Matrix, and sometimes Slack, where this is the norm. I don't even think about hitting Shift+Enter for a carriage return nowadays.
There are a lot of basic features missing from the flagship LLM services/apps.
Two or so years ago I built a localhost web app that lets me trivially fork convos, edit upstream messages (even bot messages), and generate an audio companion for each bot message so I can listen to it while on the move.
I figured these features would quickly appear in ChatGPT’s interface but nope. Why can’t you fork or star/pin convos?
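For the curious, the forking part is just a tree of messages; simplified from memory (the names here are made up), it's roughly:

```typescript
// Every message points at its parent, so a "conversation" is the path from a
// leaf back to the root, and forking is attaching a new message to any
// upstream message.
interface Message {
  id: string;
  parentId: string | null; // null for the first message in a thread
  role: "user" | "assistant";
  content: string;
}

type Store = Map<string, Message>;

// Reconstruct one linear conversation by walking from a leaf to the root.
function pathToRoot(store: Store, leafId: string): Message[] {
  const path: Message[] = [];
  let current = store.get(leafId);
  while (current) {
    path.unshift(current);
    current = current.parentId ? store.get(current.parentId) : undefined;
  }
  return path;
}

// Forking: add a new message under any existing one; siblings are branches.
function fork(store: Store, parentId: string, content: string): Message {
  const msg: Message = {
    id: crypto.randomUUID(), // available in browsers and recent Node
    parentId,
    role: "user",
    content,
  };
  store.set(msg.id, msg);
  return msg;
}
```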