>I recommend you sometimes go look at second-hand book markets. That is, less selective antiquarians, and especially something like the Salvation Army or flea markets. Places that accept donated books and then resell them very cheaply are good picks.
This is a very dangerous recommendation! You will inevitably leave with more books, and soon enough you'll find you've filled a bookshelf and need another, which will then look bare with only a few books, so you go and acquire more books... next thing you know, your house is primarily composed of bookshelves.
Tool use is fine when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.
AI is not just a labour-saving device; it allows the user to bypass thinking and learning. It robs the user of an opportunity to grow. If you don't have the experience to know better, it may be able to masquerade as a teacher and a problem solver, but beyond a trivial level, relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training data, and come to realise they have no real foundation to rely on.
Code-generating AI, as it currently exists, is a poisoned chalice.
If you know the enemy and know yourself, you need not fear the result of a hundred battles.
- Sun Tzu
Failing that, I have heard that removing your jacket, wrapping it around one arm, and allowing the dog to bite that arm is a decent move, in that it will hopefully protect you long enough to make use of humanity's strengths (finding a tool to do extreme violence with, and community to rescue you).
If there are multiple dogs and no one around to help, you're probably screwed.
> but I'm yet to find an instance of it explaining a concept wrong.
How do you know for sure? LLM output is often plausible-sounding but incorrect - usually it's fairly obvious, but it can be subtle enough that I would not suggest relying on it until you've learned the old-fashioned way and can better judge when the LLM is wrong.
I'd be curious to know if there's progress being made behind the scenes.