Has anyone here come up with rules of thumb, or found good writing, on how to use LLMs to balance learning with getting things done?
One possible heuristic: give the task to the LLM, let it do the work, and make sure you understand everything it spits back out. The drawback is that there's real value in having to figure things out yourself. When learning a language like French, you wouldn't get very good at speaking or writing if you told the LLM "write this paragraph for me in French" and then just verified whether it was correct. At the same time, it massively accelerates getting things done. You kind of need to fail and have something correct you.
On the other side, for something like Advent of Code, which is purely self-motivated and for fun, you could write everything yourself, but you could also just feed it to the LLM and skip the tedium of typing out code you already know will work, especially for the first ten or so days. That keeps a fast feedback loop on the "meat," which is "do I know how to solve this problem?"
I'm having a hard time figuring out when to reach for LLMs and when not to.
Hopefully someone who has more understanding of educational pedagogy can chime in :)
Using AI in particular removes the need to seek help from real people, which initially seems more efficient, but it cuts out human interaction, which is ultimately one of the best reasons for learning: it imbues you not only with the subject matter but also with knowledge of another person.
The fast feedback loop, as you call it, is faster than is healthy for human learning. It may work and be amusing in the short term, but it emphasizes knowledge that can be easily digitized and ignores the greater wisdom that comes from slower, more traditional methods.
In short, while learning with AI can appear to have benefits, the net result is a detrimental transformation into a more educated but more mechanical person, further removed from biological reality.