I think the impact of COVID lockdowns and de-SAT-ing would be harder to reverse than building logical and mathematical reasoning into ChatGPT. The former is a political problem and the latter a technical one. Our whole industry, which is great at solving technical problems, is throwing tens of billions at the latter today, and it will be hundreds tomorrow. So math skills for the majority of the population are probably going the way of "paper map" skills.
The problem is that mathematics education isn't just about learning times tables.
It's also the primary medium schools use to communicate analytical reasoning and deductive analysis. If you cut math as a goal without fundamentally reworking the rest of the curriculum, you'll end up with a ton of graduates who are much worse at evaluating the validity of competing logical arguments.
> It's also the primary medium schools use to communicate analytical reasoning and deductive analysis.
That would be news to History and English teachers, among others. You need to learn to make a structured, coherent argument based on evidence in many subjects, and the evidence for transfer learning is so weak that it seems unlikely basic Math makes the average, or even 75th-percentile, student better at reasoning. Most people follow a formula at best, and forget even that rapidly after the end of school.
I'd differentiate here between deductive and inductive reasoning. While English and history, through argumentation, expose students to some deductive reasoning, the vast majority of it is inductive.
>evaluating the validity of competing logical arguments
ChatGPT will soon do that for you better than you can do it yourself. That raises the question: what is the core competency of humans? So far I see only one: the ability to discover, to dig further into the unknown. I think, though, that once we get ChatGPT a bit further along, such an ability would be pretty easy to add, and it would do much better than humans, since it could generate, prune, evaluate, and verify hypotheses much faster and at incomparably larger scale.
> ChatGPT will soon do that for you better than you can do it yourself.
And this is based on what? LLMs suffer from all the logical fallacies humans do, as far as I know. They perform extremely poorly where there's no training data, since they're mostly pattern matching. Sure, some logical reasoning is encoded in the patterns, but clearly not at a sophisticated level. They're also easily tricked by reusing well-known patterns with variations, e.g. posing a riddle like the wolf-goat-cabbage puzzle with a different premise makes them fall into a training-data honeypot.
And as for the main argument: to replace critical thinking, of all things, with a pattern parrot is the most techno-naive take I've heard in a long time. Hot damn.
You're listing today's deficiencies. ChatGPT didn't exist several years ago, and it will be history in several more.
>to replace critical thinking, of all things
Nobody is forcing you to replace anything. ChatGPT would just do it better and faster, since it's a pretty simple thing once the toolset gets built up, which is happening as we speak.
>they're mostly pattern matching
That is only one component of ChatGPT. The other is the emergent model (the result of what looks like simple brute-force training) over which that pattern matching is performed.
I already rely on it for translation from languages I don't know, i.e. almost all human languages. Soon it will be doing other types of thinking better than I do, too.
For anything load-bearing, I really wouldn't trust it for translation. More than once I've gotten output with subtly different meanings or implications than the input. A human translator is going to be significantly more interrogatable here, and the same goes for a human logical reasoner.