> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
At least in theory that’s not what homework is. Homework should be exercises to allow practicing whatever technique you’re trying to learn, because most people learn best by repeatedly doing a thing rather than reading a few chapters of a book. By applying an LLM to the problem you’re just practicing how to use an LLM, which may be useful in its own right, but will turn you into a one trick pony who’s left unable to do anything they can’t use an LLM for.
You are assuming that they only know how to use an LLM; that doesn't follow from the fact that someone uses one. Chances are they have other skills too. A bit like someone who doesn't know how to make a fire and relies on a lighter: he can't do it from scratch, but he knows how to get started, knows how to ask for help, knows where to look for help, and knows how to apply the information he receives.
In the context of homework, how likely is it that someone still in school, who probably considers homework an annoying chore, will actually do this?
I can't really see an optimistic long-term result from that. It's similar to giving kids an iPad at a young age to get them out of your hair: shockingly poor literacy, difficulty with problem solving and critical thinking, and an exacerbation of the attention-span problems that 'content creators' who target kids capitalise on.
I'm not really a fan of the concept of homework in general but I don't think that swapping brain power with an OpenAI subscription is the way to go there.
It was the same way I think a lot of us used textbooks back in the day. Can’t figure out how to solve a problem, so look around for a similar setup in the chapter.
If AI is just a search over all information, this makes that process faster. The downside is that there was arguably something to be learned from searching through the chapter as well.
Homework problems are normally geared to the textbook being used for the class. They take you through the same steps, developing the knowledge in the same order. Using another source is probably going to mess you up.
Depends. Do they care about the problem? If so, they'll quickly hit diminishing returns on naive LLM use, and be forced to continue with primary sources.
Um, get with the times, luddite. You can use an LLM for everything, including curing cancer and fixing climate change.
(I still mentally cringe as I remember the posts about Disney and Marvel going out of business because of Stable Diffusion. That certainly didn't age well.)
It would be great if all technologies freed us and gave us more time to do useful or constructive stuff instead. But the truth is, and AI is a very good example of this, a lot of these technologies are just making people dumb.
I'm not saying they are inherently bad, or that they are not useful at all, far from it. It's about the use they are put to.
> You can use an LLM for everything, including curing cancer and fixing climate change.
Maybe, yes. But the danger lies rather in all the things you no longer feel any need to do yourself, like learning a language, or how to write properly, or read.
LLM for everything is like the fast-food of information. Cheap, unhealthy, and sometimes addicting.