So you gave the easy solution. What's the hard solution?
Honestly, the pervasiveness of LLMs looks likely to erode the critical thinking of entire future generations. Whatever the solution, we need to take these existential threats a lot more seriously than we treated social media (the plague before this current plague).
Ask students to solve harder problems, assuming they will use AI to learn more effectively.
Invert the examination process to include teaching others, which you can’t fake. Or rework it to bring the viva voce into evaluation earlier than the PhD.
There are plenty of ideas. The problem is that a generation of teachers likely needs to be cycled through for this to really work, and that's much harder with tenured professors.
Every technological revolution “threatened to erode the critical thinking of a generation”, and sure, the printing press meant that fewer texts were memorized by rote… not to say there are no risks this time, but rather that it’s hard to predict them in advance. I can easily imagine access to personalized tutors making education much better for those who want/need to learn something.
I’m more worried about post-truth civilization than post-college writing civilization for sure.
> Ask students to solve harder problems, assuming they will use AI to learn more effectively.
What does this look like? Like asking children learning to read to demonstrate they can read Shakespeare?
A staple of modern education is scaffolding learning, where skills are learnt incrementally and build on previously learnt, simpler skills. Much of what students learn in high school and early on at university is meant as a stepping stone to acquiring more applicable skills. Just as you can't start assessing reading ability with Shakespeare, students need to be told whether they have mastered those simpler skills before moving up. Doing away with assessing simpler skills just because AI can now perform them isn't the solution to a lack of critical thinking.
What needs to happen is for early subjects to stop being used for gatekeeping. Rather than treating education as an adversarial game, we should make it a collaborative one: instead of trying to make it more difficult for students to pass assessments with AI (which mechanically makes them harder to pass without AI as well), we should give students a stake in learning what they need to learn.
> What does this look like? Like asking children learning to read to demonstrate they can read Shakespeare? A staple of modern education is scaffolding learning
We definitely need to rethink some assumptions.
I view education as serving at least two goals: one is job-specific training (e.g. you will go on to do an English Lit PhD) and the other is proving you can sit in a chair for long enough to do a job, and do critical thinking (you will graduate and get a “degree required” job).
I think we are mostly worried about how the death of the essay affects the latter. In some sense, what you are studying is irrelevant for this case; what is important going forward is developing critical reasoning skills _in partnership with AI_.
Not claiming to have solved this, but some ideas would be:
- ask students to produce far more papers, with you as an editor checking for high-level understanding and catching falsehoods/errors.
- stop asking students to write essays; instead, offer them simulated conversations with AI experts where they need to display knowledge to keep up (e.g. “Art Museum Curator Simulator”), or have them tutor kids from lower years.
- stop asking people to do 4-year degrees and use AI to run much better apprenticeships for the real work (this might be my favorite).
> we should give students a stake in learning what they need to learn.
I strongly agree with this one; I think for many people the college degree has fallen victim to Goodhart’s law. Trade schools might actually be better, but it took ZIRP to create them for software engineering.
> Every technological revolution “threatened to erode the critical thinking of a generation”
Objectively, many of them did erode some amount of critical thinking, but they led to skill transfer to other domains, so maybe the net effect was neutral. Some of them were productivity boons, and we got the golden age that boomers hail from. Other revolutions have been a straight degradation in QOL; social media and LLMs seem to be in that vein. I'd also throw in gambling ads/micro-transactions and smoking as things that haven't exactly helped society. Out of those four examples, we only tried to course-correct on smoking and, after a long period of time, we can see it's a net benefit not to smoke.
> I’m more worried about post-truth civilization than post-college writing civilization for sure.
These are the same civilizations on the same timeline.
My opinion is that even if capabilities halted now, LLMs would be more economically valuable than the internet (compared over the same 50-year trajectory). And I predict that they will not halt any time soon.
Maybe this yields more resources to invest in the kind of education the OP author provides, and we end up more enriched than ever before:
> I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale.
The only thing I’m confident about is volatility; the range of outcomes is wide.
> Maybe this yields more resources to invest in the kind of education the OP author provides, and we end up more enriched than ever before:
Maybe maybe maybe
Should we gamble the lives of future generations on some economic maybes, or should we take a minute to think through all probable outcomes and build out some safeguards?
I think the hard solution is to massively increase expectations. Think Star Trek where the grade schoolers are learning quantum mechanics. If everyone has access to the oracle of all human knowledge, then you should teach and test to the maximum of what a student could do with all that power. Find the frontier where the AI fails and the human adds value and teach there.
Learning requires both reasoning and knowledge. Grade schoolers almost universally lack the reasoning ability needed to understand QM; simply having access to the information isn't enough to learn the subject.
So many on here keep saying stuff like this, but it just ignores any theory of learning. “Just make it harder.” Sure, any examples of how that’d work? “Quantum physics.” OK then, problem solved. That isn’t really explaining anything about how this would work.
> Honestly, the pervasiveness of LLMs looks likely to erode the critical thinking of entire future generations.
Yes and no.
Upper middle class parents as a group will still instill critical thinking skills in their kids.
But the above comment reveals more about SES (socioeconomic status) and education in general than about anything specific to critical thinking or LLMs. The current education environment in the US heavily favors kids from higher-SES families for a number of reasons. LLMs won’t change this.
The challenge for the education system, imho, is to find a way for lower-SES kids to thrive in an LLM environment. Pre-LLM, this was already a challenge, but it was possible. Post-LLM, the LLM crutch may be too easy for some lower-SES folks to lean on, such that they never develop the skills they need to build higher-order ones.
I suspect this is the true Fermi paradox: once a civilization reaches a certain point, automation becomes harmful to the point that no one knows how to do anything on their own. Societal collapse could send us back to the Bronze Age, if not further.
You don't need AI for this. So much individual productivity depends on the civilization-level platform. Even when you decide to bootstrap stuff and do it from scratch, you're still operating in an environment deeply shaped by the billions of other people around and before you.
Yes, every single math class I've ever had in my life - primary or secondary education - banned calculators or (in engineering) required us to perform a full memory reset in front of the TA.
Using a machine to do the very thing you are supposed to be demonstrating a proficiency in is cheating and harms the legitimacy of the accreditation of the school.
If we don't regulate AI correctly, future gens will probably not be able to work a calculator. Calculator producers will then go out of business - so no need to ban 'em.
Most of these bills are not set up for laymen to consume. Probably a good application for an LLM, honestly. It's like reading a EULA... which I'm sure we all do every time.
Maybe their product team is also just run by Gemini, and it's changing its mind every day?
I also just got the email for Gemini Ultra, and I couldn't even figure out what was being offered compared to Pro, outside of 30 TB of storage vs 2 TB!
If you're slaving away in interviews and only have one offer, I can see why the offer is compelling. In this current market, I totally get it. But just a couple of short years ago, the parent poster's approach was probably sound.
Also, people don't always interview when they are jobless. If you are already employed at a salary of $0.9X, gambling on offers between $X and $1.25X makes perfect sense: even the worst outcome is a raise.
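To make the arithmetic concrete, here's a minimal sketch in Python, assuming a purely illustrative reference salary of X = $100k (the figures are hypothetical, not from the comment above):

    # Hypothetical figures; X is an arbitrary reference salary.
    X = 100_000
    current = 0.90 * X       # staying put: $90k
    worst_offer = 1.00 * X   # low end of the offer range: $100k
    best_offer = 1.25 * X    # high end of the offer range: $125k

    print(f"worst-case raise: {worst_offer / current - 1:.0%}")  # 11%
    print(f"best-case raise:  {best_offer / current - 1:.0%}")   # 39%

Even the low end of the range beats the status quo, so the gamble carries no downside relative to staying put.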
> "Linus Torvalds is an antisocial jerk, and he's a genius, therefore if I am an antisocial jerk I must be doing genius-level work."
There's way too much of this in general. People use a talented individual with problematic behaviors to justify their own problematic behaviors. So many talented ICs are absolute dickheads to work with.
This is doing some heavy lifting