I don't think this would be good UX. Maybe once you've already typed ~20 chars or so. If it were so good at predicting from the first keystroke, it would have already included the info you're asking for in the previous response. It could also work for short commands like "expand" or "make it concise", but I can also see it being distracting when the prediction is wrong.
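A minimal sketch of that heuristic in TypeScript, with hypothetical names (`MIN_PREFIX`, `getSuggestion` are made up for illustration, not from any real app): only ask the model for a completion once the user has typed enough for the prediction to be plausible.

```typescript
// Sketch: gate autocomplete on a minimum typed prefix (~20 chars),
// so early, low-confidence predictions never distract the user.
// All names here are hypothetical.
const MIN_PREFIX = 20;

async function maybeSuggest(
  input: string,
  getSuggestion: (prefix: string) => Promise<string | null>,
): Promise<string | null> {
  if (input.length < MIN_PREFIX) return null; // too early to predict usefully
  return getSuggestion(input);
}
```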
> Typing messages in these chat apps quickly becomes tedious and autocompletions help a lot with this.
If you're on a Mac, you can use dictation: focus the text input, double-tap the Control key, and just speak.
In the Zed editor, GitHub Copilot autocomplete is enabled in the chat assistant, and it's incredibly useful when I'm iterating on code generations.
The autocomplete is so good that even for non-coding interactions I tend to just use the Zed chat assistant panel (which can be configured to use a different LLM via a drop-down).
More generally, in multi-turn conversations with an LLM you're often refining things that were said before, and a context-aware autocomplete is very useful. It should at least be configurable.
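One way such a context-aware completion could work (purely a sketch; the `Turn` shape and `complete` function are assumptions, not any specific product's API): send the prior turns along with the partial draft, so the model can complete references to earlier messages.

```typescript
// Sketch of context-aware autocomplete: prior conversation turns are sent
// along with the user's partial draft, so the completion can refer back to
// earlier messages. `complete` is a stand-in for any LLM client call.
interface Turn {
  role: "user" | "assistant";
  content: string;
}

async function completeDraft(
  history: Turn[],
  draft: string,
  complete: (prompt: string) => Promise<string>,
): Promise<string> {
  const context = history
    .map((t) => `${t.role}: ${t.content}`)
    .join("\n");
  // Ask the model to continue the user's partially typed message.
  return complete(`${context}\nuser (draft, continue it): ${draft}`);
}
```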
The Mac's default Dictation is OK for non-technical things, but for anything code-related it would suck, e.g. if I'm referring to things like MyCustomClass, etc.