Me: I don’t like having fact-based conversations with LLMs (e.g. "can my cat eat cooked chicken on the bone?"). I prefer open-ended conversations with LLMs where being vaguely in the right direction is useful (e.g. practicing Chinese with ChatGPT voice mode).
I use Claude Code, and veer between finding it useful (because my memory of specific functions is average) and being a bit bummed out when I have to spend a bunch of time cleaning up poor logic/flows in the application.
Curious to know the intricacies of how you interact with LLMs too.
When a book doesn’t explain something clearly, I ask for a deeper explanation — with examples, and sometimes exercises.
It’s like having a quiet teacher nearby who never gets frustrated if I don’t get it right away. No magic. Just thinking.
I also started building my own terminal-based GPT client (in C, of course). That’s a whole journey in itself — and it’s only just begun.