I Quit Teaching Because of ChatGPT (time.com)
74 points by williamstein 11 days ago | 160 comments

I read this and it struck a chord. A decade or more ago, a friend of mine teaching English in a major state university told me that he was getting lots of students who couldn't seem to read a novel. They could read the words on the page, but they couldn't focus enough to really understand it.

My experience is that writing things out always improves my understanding of the subject. It's similar to forcing yourself to write a proof in mathematics, to verify your understanding. The formalism is hugely helpful.

If the result of chatgpt is even more kids skating through life without really picking up these skills... it may have tragic consequences. Learning to think well is the single most important thing I got from education. I would argue that improvements in human thinking -- via philosophy, logic, mathematics, the scientific method, and yes, literacy -- have made a huge difference in human lives over the past few thousand years.

Not that chatgpt would be the root cause. Chatgpt would just be a symptom of a much bigger problem.


> If the result of chatgpt is even more kids skating through life without really picking up these skills... it may have tragic consequences. Learning to think well is the single most important thing I got from education.

Do teachers have no role here? "If many kids make a certain choice, there will be big trouble". Maybe teach them to do better? Is the role of educator reduced to giving assignments and grading them in a mindless way? Seems like this would be exactly the point.

After all, without schools, people learned less. So now we have schools, and people learn more. We didn't just lament "well if people don't learn, we're all screwed". We did something about it. Is this AI thing really beyond any solution? What has been tried so far?


Most teachers have large classes. Their ability to monitor each kid and prevent them from using chatgpt or similar tools may be limited.

I come from a family with many teachers. The harsh reality is that teachers can provide a conducive environment, but they just don't have enough time per kid to make nearly the impact that people expect, which is why your home environment is so critical.

Teachers don't really teach. They create an environment to help you teach yourself. They can provide lessons, but you always have to do the work of internalizing it. Watching youtube videos is not the same as struggling to write programs. Etc.


> Most teachers have large classes. Their ability to monitor each kid and prevent them from using chatgpt or similar tools may be limited.

I'm not a teacher, and I very well may be missing your point due to my own shortcomings. But I have to ask, would this sentence not apply to people not wanting to learn math because it's hard? (or learn anything at all)

> Most teachers have large classes. Their ability to monitor each kid and prevent them from cheating or simply not learning may be limited.

I have been in school, and have learned a few things, not many. Next to me were people who learned even less, because they didn't want to. ChatGPT didn't exist then, nor smartphones, and we didn't even have internet access.

That just points to schools being imperfect, and seems to have zero connection to ChatGPT.


Math is usually assessed via quizzes and tests where you have to show your work, which makes it easier for the teacher to tell if you are getting it. It's a lot harder to assess whether a student has read a novel -- you basically have to make them write a paper on it (and be aware of what they would get from CliffsNotes). Grading papers is a ton of work, and that limits that technique.

Also, it's easy to see if someone is using a calculator. But needing to show your work makes that less of an issue anyhow.


> But I have to ask, would this sentence not apply to people not wanting to learn math because it's hard?

It is pretty easy for a teacher to find a student not learning math; they'll fail the test.

I wouldn't fault a teacher for not reliably detecting every student using a lot of LLMs to write their longer papers.


You can lead a horse to water, but you can't make it drink.

We as educators can show the interesting parts, the wonders of a field, but we can't do more than that. The author of the article seems not to have managed to get through. Truth be told, this happens every day in schools, because most schools use completely outdated ways to teach -- frontal pedagogy itself is one -- and so ChatGPT is not their biggest problem, but it certainly makes a bad situation worse. Recommended watch: Ken Robinson's TED talk "Do Schools Kill Creativity?". Recommended reading: Carl Rogers' Freedom to Learn: A View of What Education Might Become. For a shorter read, Aronson's The Social Animal has a chapter.


> You can lead a horse to water, but you can't make it drink.

This must have been a problem before the invention of LLMs. How was it solved before? Why didn't teaching go out of fashion when TV was invented, or why didn't math disappear when calculators were built?

You yourself highlight problems that existed before LLMs, different problems in teaching. Obviously not all teachers quit because of those, and we still have schools. People still learn. Perhaps fewer, and less, but they still do. And perhaps more do. I'm not sure how this can be accurately measured for the entire planet. After all, science still progresses. We've invented LLMs.

Once again, as with almost all discussions about AI, I feel the main point of this post is "ChatGPT is different than anything that has come before, and will change everything", without actual supporting arguments. Sounds a lot like advertising.


> This must have been a problem before the invention of LLMs. How was it solved before?

- Encouraging the people who were bored to tears by 8th grade to go work in a lumber mill about it.
- Encouraging parents to read to their children just all the fucking time.
- Kicking out the troublemakers.

But the main solution has always been parents caring about education enough to get involved and spend time with their kids on it. It's something parents need to prioritize right when things are getting complicated and interesting in their own lives, and it's way easier to see schools as a community creche rather than a de facto cooperative at that point.


> why didn't math disappear when calculators were built?

Because you can see a child using a calculator.

> Why didn't teaching go out of fashion when TV was invented

Laws (which put students in classrooms), and a lack of TV use in classrooms -- what TV use there was in classrooms was generally driven by the teachers.

> "ChatGPT is different than anything that has come before, and will change everything"

It's effectively undetectable when used, unless you have students write essays in your presence and on paper. That's what makes it so much harder to counter than previous aids.

WRT writing large essays on paper - it's a skill that's not useful and is thus not emphasized in any class - a trend that's existed longer than ChatGPT. Reversing it now would impact all grades, and face major pushback from parents.


You can see someone using AI too.

The problem here is not with children. Children learn in classrooms with teachers who supervise them closely all day. Preventing them using calculators, smartphones, laptops, or whatever else, is very easy assuming you have the basics of classroom discipline in place. This genre of articles bemoaning the effect of LLMs on cheating is coming exclusively from universities, and it's noticeable how the most obvious solution of having students just do the work in class whilst the teacher watches is completely unthinkable to them. It works fine in high school, but universities refuse to do it.

They could do, of course. It doesn't have to be the professors who supervise. Anyone can patrol a hall, or heck, why not use AI to do it? The local CS department can easily knock up some ML models that detect students using smartphones via CCTV. Provisioned laptops can be locked down with recorded screens, etc. Fundamentally, supervising students can be done at scale, but universities would apparently rather let their classes and credentials sink (further) into meaninglessness than change their ways.


>Because you can see a child using a calculator.

Besides, a calculator no more threatens the study of math than a word processor does writing, or a rhyming dictionary poetry. One reason is that if you hand me a TI-84, I'm not magically going to understand how to do calculus. Maybe I could somewhat piece things together, but probably not without accidentally . . . learning calculus. The other reason is that, as I am led to believe, the particular values and operations that happen in the field of Serious Math are much more incidental to the actual concepts in question than it may seem early on.

Hell, calculators haven't even really managed to supplant the study of arithmetic. They seem to have settled nicely into where they're useful.


Of course it has been a problem; I provided you with books from the 60s and the 70s to show a better way to educate people.

See https://www.youtube.com/watch?v=GEmuEWjHr5c for why TV or anything else didn't obsolete teaching.


I don't think chatgpt created the problem, although it may contribute to it.

Here's a hypothesis: The problem is that thinking analytically, rigorously, is hard work. People do not often seem to come to it by accident. It's something we have learned to do, and schools/teachers are one of many mechanisms we have used to culturally transmit these skills. Until (pick a date -- 1950, say) it was assisted by the fact that one of the most widespread forms of entertainment was reading. Yes, even by then movies were a thing, but most people couldn't afford to go to a movie every night, and they were limited to what was currently showing. So most people in western countries, where literacy was high, spent a fair amount of time reading. Reading, by its nature, tends to focus the mind. You learn to spend long periods ingesting and contemplating information.

We also taught writing -- and we still do -- but it benefits hugely from all that time spent reading. It is much easier to learn to write if you read a lot. And writing further develops your thinking skills.

The modern era has given us wonderful technologies, but there is a lot of evidence that people are struggling to focus in the way we once did. People read less, write less, and spend more time on other media. I'm not slagging on film or games or cell phones -- conceivably, other useful skills are imparted, and people enjoy them. But our ability to focus seems to be declining. Chatgpt is just another time-saver that makes it easier to avoid learning critical thinking skills.

It's not impossible that teachers will find a way to compensate. I think there are some things working against that: 1) we don't fund schools in a way that is going to allow them to spend additional time per student. 2) societal expectations seem to be that teachers magically pour knowledge into children, rather than learning, fundamentally, being something the child has to do, and which requires a large parental investment. 3) a lot of what led to focus in the first place didn't come from the schools, but from the social environment -- i.e., reading was the most practical source of entertainment -- and so we are expecting schools to compensate for a problem they didn't create and for which we may have no totally satisfactory solution.

I suspect that's only part of it. The pace of life is very different than it once was. Even an illiterate in 1800 probably worked on fewer things per day and perhaps had more chances to focus. But that's sheer speculation.


> Maybe teach them to do better? Is the role of educator reduced to giving assignements and grading them in a mindless way?

Please don't present a strawman point when the linked article is written by a teacher who is clearly not doing that.

"As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it."


It's the parents' job to reinforce a love of learning and an understanding of what the heck we are all here for.

How many dystopian sci-fi stories did we read where the majority of the populace can no longer read or write, and they just press icons?

If we add "the result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed" as Hannah Arendt put it, the consequences are indeed catastrophic.


> Not that chatgpt would be the root cause. Chatgpt would just be a symptom of a much bigger problem.

I work in support. The number of people who need to be shown how to do things an IT professional must already know how to do is growing. Now it's 40-50% of the requests I receive. Nobody can read a simple email anymore, let alone parse an entire support article.


> Nobody can read a simple email anymore, let alone parse an entire support article.

IMO it's rather a case of: "I don't have time to read this, just show me already!"


As long as LLM companies can hang on for another 15 years or so, there will be an entire generation of humans utterly incapable of living without LLMs, in the same way that most people in developed countries are incapable of growing their own food. Intellectual lock-in will be their moat.

This is terrifying. Our ability to think has been our biggest differentiator as a species. LLMs threaten this, and I'll die on that hill.

I’ll share my experience and the experience of my kids so far.

Aside from blindly copying and pasting a response, in which case the learner wasn't interested in learning and probably would have plagiarized from somewhere else anyway, I have found LLMs to be incredible, endlessly patient teachers that I'm never afraid to ask a question of.

My kids, who are in the tween and teenage years, are incredibly skeptical and dismissive of AI. They regard AI art as taking away creative initiative from artists and treat LLMs similarly to the way we treated Google growing up, if they use them at all. It's a tool which can be helpful for answering questions, one part of the landscape of their knowledge building.

That knowledge acquisition includes school, YouTube and other short videos, their peers (online and off), Internet searches, and asking AI. Generally, I regard asking AI as one of the least problematic sources of info in that environment.

While I tend to be optimistic as a default, I truly do think that the ability to become less ignorant by asking questions is a net positive for humanity.

The only thing I truly lean on AI for right now is as an editor, helping me turn my detailed bullet points into decently crafted prose, and for generating clear and concise transcripts and takeaways from long meetings. To me that doesn’t seem like the downfall of human knowledge.


> [Writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

- Socrates (written down by Plato)


Thank you. Luddite tendencies are as deep within humans as the desire to kill is. It is a personification of the death-drive and all self-destructive tendencies within humans.

It's also the tendency towards the "precautionary principle," AKA Nietzschean "last-man"-style thinking, applied to the world infinitely.

We should root this kind of thinking out aggressively, at least from the academy.


What's your opinion on calculators?

Update: I meant to compare calculators to something like a slide rule for logarithms. I'm not from the US and I tend to forget that some people use calculators to take 20% of 500.


Where I live these are not allowed in the classroom until 7th grade or so, i.e. when the kids have learned the skills and can then employ calculators mindfully.

This seems reasonable. When I was in school we started using calculators and other technology in 9th grade. That was in 1980 though.

I will occasionally do long multiplication in my mind's eye just to make sure I can, lol. Anything more complicated than that, most people will not be doing anyway. University students, however, almost universally do need to write sometimes. Similarly, if I had decided to do something maths-heavy at uni, I would be expected to be able to do some pretty complex maths without a calculator first, even if I don't need to do that all the time. It's pretty standard that higher education requires a level of intellectual rigour that is totally unnecessary for day-to-day life. In the case of ChatGPT, it's allowing people to completely bypass that process even in those settings. Meaning you NEVER learn to do it, not just that you don't do it day to day.

>Similarly if I had decided to do something maths heavy at uni I would be expected to be able to do some pretty complex maths without a calculator first, even if I don't need to do that all the time

I got an engineering degree and don't remember ever being required to do math without a calculator. Of course, some things are easier if you don't need to bust out a calculator for everything.


In order to get into the engineering degree, though, you would have had to do maths exams in school that required a proficiency in doing maths without a calculator that the average person does not have and will never need. I did not do an engineering degree, but I did do a higher-level maths class in school. I failed 2/3 of the exams, but that still means I have learnt to do calculator-free maths to a higher level than the vast majority of the population has ever even considered. And they don't need to. It probably doesn't impact their lives at all. That doesn't mean that there aren't some people who really should learn to do that stuff. There's a reason I abandoned the idea of Physics or similar disciplines as a university degree choice - because I did not have the maths foundation to build on. Any arts and humanities degree, if not any degree, really needs a foundation of writing skills in the same way.

Does calculating numbers based on concrete rules require "thinking" in the same way OP talks about? I think not.

You don't allow students to use calculators for operations they haven't personally mastered. If you don't learn how to add two numbers on your own, the rest of your learning is in serious jeopardy.

This is the author's lament. These students are skipping over personal mastery.


False equivalence.

With a calculator, the end result is still the same: a (typically numerical) answer of some kind. Writing one's own essay vs. getting an LLM to regurgitate it results in vastly different outcomes.


Not a super big fan, honestly. I'm a bit horrified when I see high school seniors who are smart, and have been through the entire HS math sequence... dig around in their backpack for a calculator to find 5 times 1.5 or 20% of 11.

I'm glad that we have calculators and computing devices, but I'm not glad that they have made teens with basic numeracy into an endangered species. Many tools we use expand our understanding, but the calculator causes our arithmetic skills to atrophy.


From my experience, the more advanced math you learn, the worse you become at arithmetic. I knew a lot of math majors in college, and all of them used calculators all the time.

Yes, I've gotten worse at arithmetic, too.

The point is, one is hard-pressed to find anyone who can do much arithmetic -- even trivial things.


Depends on how far in basic arithmetic they get.

But if the symbolic manipulation is done by hand, and the numbers are just plopped in to get the final result plus an estimate of whether the answer is realistic enough, well, I think that is fair enough.

And spreadsheets are also useful when you need to add up a bunch of things or multiply them.


A more apt comparison might be asking the abacus what it thinks of the calculator.

Calculators are reliable and predictable, so losing skill at that kind of calculation is a safe, compartmentalized offloading. We offload an extremely clearly defined set of tasks, and it gets executed cheaply, immediately, and perfectly.

LLMs are different.

A closer analogy would be something like computer algebra systems, especially integration. We can offload differentiation cheaply, immediately, and perfectly, but integration will frequently have an "unable to evaluate" result. I genuinely wonder whether workers who need to compute integrals are better or worse at it as a result of growing up with CAS tools. People on the periphery (a biologist, for example) are undoubtedly better off, since they get answers they couldn't get before, but people on the interior (maybe a physicist) might be worse at some things they wish they could do better, relative to those who came up without those tools.
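A minimal sketch of that asymmetry, in Python with sympy (assuming the library is installed; the example function is just an illustration): the derivative always comes back in closed form, while the integral is frequently returned unevaluated.

    import sympy as sp

    x = sp.symbols('x')
    f = x**x  # classic example with no elementary antiderivative

    # Differentiation is mechanical and always succeeds:
    print(sp.diff(f, x))       # x**x*(log(x) + 1)

    # Integration frequently comes back unevaluated:
    print(sp.integrate(f, x))  # Integral(x**x, x), i.e. "unable to evaluate"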


I think this is overestimating the impact of LLMs.

Fact is, even if they are capable of fully replicating and even replacing actual human thought, at best they regurgitate what has come before. They are, effectively, a tutor (as another commentator pointed out).

A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

I personally still don't see the actual value of LLMs being realized vs their cost to build anytime soon. I'll be shocked if any of this AI investment pays off beyond some minor curiosities - in ten years we're going to look back at this period in the same way we look at cryptocurrency now - a waste of resources.


> A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

What changes is the educational history of those humans. It's like how the world is getting obese. On average, there are areas where we empirically don't choose our own long-term good over our short-term gain. Apparently homework is one of those things, according to teachers like the one in TFA. Instead of doing their own homework, they're having their "tutor" do their homework.

Hopefully the impact of this will be like the impact of calculators, but I also fear that the impact will be like having tutors do your homework and take your tests until you hit a certain grade and suddenly the tools you're reliant on don't work, but you don't have practice doing things any other way.


I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to to avoid thinking for themselves. Ex: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh uhm I don't know" and he fumbles an answer out. Especially in the context of learning, people who have self control and use AI deliberately will benefit. But those who use AI as a crutch to keep up with everyone else are ill prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can get a degree like everybody else.

I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.

Perpetuating a broken system isn't an argument about the threat of AI. It's just highlighting a system that needs revitalization (and AI/LLMs is not that tool).


>at best they regurgitate what has come before

I keep seeing this repeated, but it seems people either take it as being self evident or have a false assumption about how transformers work.


You can already see it in people who cannot navigate their automobile without google maps or some equivalent.

I think what is more readily apparent is that younger folks cannot reverse their car without looking at the camera. Older folks still turn around and look.

I cannot reverse my car without looking at the camera but that's because cars these days are so high off the ground, the rear windows are so tiny, and the pillars are so huge that I can't see much turning around.

Someone about four feet tall can easily walk past the rear of my car, and aside from the half second they appear in the side mirrors (which I can't see if I'm looking backwards anyways), there's no way I'd be able to see them looking backwards. But with the ultra-wide angle camera mounted low on the gate, I'd see them easily.

Plus, with the position of the camera at the end of the vehicle and its FOV, there are angles I wouldn't possibly be able to see with a car parked on either side of me. Coupled with radar sensors able to detect cross traffic before it's even possibly visible in that situation, the sensors greatly enhance my ability to read the situation compared to relying entirely on turning around.

I do turn around while reversing, but more of just a double check of the overall scene. Same with double checking side mirrors while reversing. But a large amount of what I'm looking at is the screen.


Exactly. I was learning to drive in an ancient Fiat from the 60s; it was like sitting in an aquarium, every direction was super visible. Next I had a Daewoo Lanos, which had reduced visibility in comparison, but still good. Now, in a big modern sedan, I wouldn't see anything when turning my head, and the mirrors have dead zones. The camera is the second-best car upgrade I've had in past decades, after the automatic gearbox.

I wouldn't call those the same. My wife's car has a camera and mine doesn't. Since having children, I prefer backing up in her car because I'm so much more aware of the blind spots I have without it. We also live somewhere with lots more kids now, which adds to it. I can back up my own (camera-less) car, obviously, but it made me realize that I can't back it up without a bit of "cross my fingers and hope no kid darts into a blind spot from a blind spot."

For 13 years I drove stick and for 18 years I drove without a backup camera. Nowadays I don't turn around and look back. Automatic and camera are so much better.

I wish I had a camera in my Jeep. As I get older it gets harder to just turn around, and new cars have pretty bad sight lines. I think this is just using the tools available to you.

> new cars have pretty bad sight lines

This is really a big part of the reason for them. Newer cars are all aerodynamically contoured and this results in a high rear deck and a shallow angle on the rear window. It's much harder to see behind the car.

I still turn my head and also use the wing mirrors to reverse. It's habit and I find it easier than looking at a screen.


Nah, this "older folk" loves my cameras and it's an excellent enhancement

I'm old. I like the camera. It gives a better view for many things.

What is this based off of? Most young people (who have a car) have an old car, without backup cameras.

Backup cameras were mandated in the US in 2018 and were already getting to be pretty popular on even midrange trim vehicles by that time. Every six-year-old car has a backup camera, and a large percentage of 7–10-year-old cars do as well.

Seems vaguely worrying but maybe I'm just becoming fuddy-dudded or something. I've never used a car with a backup camera (did have a rear dashcam in one though) so maybe I just don't know. But the fields of view I see in images of backup cam displays aren't confidence-inspiring for me. It seems like the kind of thing that might be useful if you routinely forget to check for random toddlers/pets/bicycles having picnics directly behind your vehicle before you pull out of the driveway in the morning. Or if maybe you have a significant problem with young children, who you had so far completely failed to take notice of, dashing with uncanny timing from weird "blind areas" as you back out of a parking space, etc.

I would blame my (pretty small) commercial driving experience. But no, not really. Before I ever took a Smith System course, I would have had trouble understanding this. I'm regularly concerned, to say the least, at the complacency/obliviousness I've seen. I don't get how someone could be comfortable with not having active situational awareness of these things.

And that's the only way I can imagine it being a realistic concern that children or other drivers will suddenly and unexpectedly appear in their path. Which is the only serious kind of thing I can imagine those camera FOVs helping with. Which they shouldn't have to, because people should be cognizant that backing is one of the more risky parts of driving in terms of likelihood of any kind of collision during it, so they should be on guard -- not merely as they are backing up, but the whole way to the car. Any children in the area should be easily noticed -- unless they are lying in wait on purpose for some reason -- and positive knowledge of their locations should be actively maintained until backing is done. Nearby cars parked but still running, or with people getting in or out of them, etc., should also be fairly trivially observed and noted. If positive awareness has been broken, then so has confirmation of a clear path, and in that case it's completely reasonable to go as far as getting back out of the car and walking a few steps and taking a couple of seconds to evaluate who is within "reach" and what they could conceivably do to screw up your day in the time it takes you to get out of the parking space. It seems like it isn't that much of a reach to do some simple due diligence that should, in an ideal world, make the backup camera a solution in search of a problem.

I don't know if you're a network person or what (in 30-second hindsight, networking knowledge may not be necessary to understand the "metaphor") but often it feels as if I'm on Collision Avoidance and everyone ("everyone") else is using Collision Detection.


Why would you want to? Camera is better in every way.

Define "cannot navigate"? I've been driving for decades, but where I live traffic is everywhere so I don't go more than a few miles without using Waze and its real-time traffic monitoring.

People who can't guide themselves without relying entirely on the app.

Normally, I just glance at what action I should take next (i.e. continue straight for 2km and then take a left turn at X) and then follow the road signs to do so. But I've noticed some Uber drivers and other people who keep their eyes almost permanently on the app for locating themselves, even when there's a clear GPS or network disconnection, like in an underground pass. This often results in wasted time when the app doesn't catch up with reality, and the driver misses a turn.


I can confidently say that a car would be useless to me without gps. I can navigate the small town (10k people) I grew up in, but that’s it. I wouldn’t try to navigate the city I live in now (~350k people) in a car, beyond the street I live on. If I had to and knew gps was broken (hypothetically), then I’d try to reschedule, instead of attempting to navigate with signs and a paper map.

I could probably work with printed turn by turn instructions, but that’s about it.

And I think that probably comes close to what the other post had in mind with "cannot navigate".


You are in for a fun adventure, if you have an afternoon to yourself and no particular destination. My grandmother always began our day trips that way, in her shiny Buick. Remember to stop to rest.

That's unfathomable to me. I live in a metropolitan area of about a million people and I can go basically anywhere without a navigation system; in fact, I'll only pull up the GPS if I'm going to some remote neighborhood.

P.S: I was born here and lived here all my life.


GPS is super-convenient when you're in an unfamiliar area. I sometimes catch myself thinking how did we ever get along without it. But we did. You just looked at a map before you set off, noted street names and turns, and paid attention. You would do the same thing and manage pretty well if you had to.

Of course it's convenient. But I learned to drive before we had it, and learned to find my way around without it. Maps helped, but you can do a lot with logic and by understanding direction.

You don't ever walk in your city without gps? Wouldn't you know the streets after a while?

Lots of people don't live in walkable cities. There's no realistic way they'd get to where they're going by walking. They might walk their neighborhood and know that, but these days lots of people don't even bother walking around their neighborhood.

So they'll end up driving everywhere they go. Work, groceries, restaurants, etc. Always driving. Many won't go down paths that weren't previously suggested by their GPS. And often those destinations aren't designed to be walkable either. Massive parking lots separating the various storefronts. Corporate campuses completely surrounded by a sea of parking lots and garages. Nowhere to walk.

That said, there's still exploring possible in a car-centric place. I tend to take alternate paths to get places, purposefully "get lost" driving around, and explore places I've never been before. But that has costs and lots of people don't bother doing that.


I do, but only for things in my immediate area. Stuff like grocery shopping, haircuts, doctors appointments, etc. Those are all reachable on foot and I know how to get there, because I do it regularly. But the first few times I used gps to find it. Now I know the route.

But that doesn’t help me for anything beyond that radius and probably not even there, because those routes go through parks and other stuff, so the car routes would be different.


Find your way around an unfamiliar city or area without google maps or similar.

I don't really see that as a bad thing. Einstein famously didn't memorize phone numbers because you could look them up in a book.

Quick external lookups are a huge productivity boost since you don't have to spend all the time memorizing something.


This is true to a point. You can get huge performance gains using L1 cache rather than accessing network storage.

I like your analogy. I agree. This doesn't fit all cases, and it really depends on how often and how long it takes you to look something up.

Memorizing directions though? Nah. Takes a while to memorize, is quick to look up, and isn't used frequently.

Knowing basic syntax for a programming language? That needs to be in RAM.


It's not a bad thing so long as GPS works forever. I also think driving is much more enjoyable when you don't need to be thinking about GPS; it's nice to just focus on where you're going.

I have an awesome sense of direction, which I attribute to almost never using navigation apps unless I'm really worried about traffic, and even then, I can usually find a less crowded non-recommended route, because I know my way around.


A conversation between Sherlock Holmes and Dr. Watson from A Study in Scarlet, by Arthur Conan Doyle:

> That any civilized human being in this nineteenth century should not be aware that the earth travelled round the sun appeared to me to be such an extraordinary fact that I could hardly realize it.

> "You appear to be astonished," he said, smiling at my expression of surprise. "Now that I do know it I shall do my best to forget it."

> "To forget it!"

> "You see," he explained, "I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose..."

> "But the Solar System!" I protested.

> "What the deuce is it to me?" he interrupted impatiently; "you say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work."


It can be pretty hard to tell in advance which knowledge will eventually make a difference (or would have, if you had it).

This is a very inapropos quote, because A Study In Scarlet was the first Holmes story, and Conan Doyle later retconned this because it turns out to be very important for a detective of Holmes' sort to be able to draw on seemingly-useless knowledge in order to make serendipitous leaps of logic.

I maintain that it's relevant because it illustrates how unreliably people assess just what knowledge is likely to be useful. If anything, the fact that it was retconned reinforces that such assessments can be wrong.

For example, there's a notion in this thread that relying on GPS makes you soft and that unless you learn how to navigate the way your forebears did you'll be unprepared or something. I find this proposition just as dubious as the notion that actively forgetting that the earth goes around the sun helps you solve murder mysteries.

It seems to me as though such assessments are often driven by self-interest and the desire to maximize the social utility of one's own knowledge and experience. Of course, those who believe that "GPS makes you soft" is unquestionably true may find fault with my perspective.

I also just find that passage hilarious. It's simultaneously so misguided and yet so compellingly crafted, which is what makes for great satire whether intentional or not. My Dad read Sherlock Holmes stories aloud to me when I was a kid and that passage has always stuck with me. But perhaps this wasn't the right audience to share it with.


I wouldn't say it makes you "soft", but it makes you inexperienced at it, and people often freeze when they don't know how to do something.

Drive around the country and you'll notice that sometimes Google Maps doesn't work because your phone doesn't have reception.


That’s a good description of my programming.

I use the documentation panel of the inspector in Xcode all the time, and write my code to generate this documentation.

I'm too grizzled to care much about being sneered at by insecure folks. I am able to get the job done quickly and to an extremely high quality.


8 billion people growing their own food wouldn't be able to live on our planet even if they had all the necessary skills. Industrial agriculture and related technologies have radically increased the amount of food we can grow compared to individual small farmers.

Both of these points can be true at the same time (and IMO are). Tech changes society, but society should also reflect on these changes - and decide whether this is the direction we want to go, the tradeoff we want to have.

> 8 billion people growing their own food wouldn't be able to live in our planet even if they had all the necessary skills

In the way we currently live, no they wouldn't. But it sounds like you're saying "wouldn't be able to", full stop, which I don't believe is true. There is no reason that all 8 billion people (with the "necessary skills") couldn't grow their own food, theoretically. It would require the west giving up the way we live, but it's worth noting that this doesn't make it impossible.


Well, if by "the way we live" you mean "not having widespread malnutrition problems and hunger periodically killing significant parts of the population", then sure thing.

Maybe in the future the "alien intelligences" for the next generations will be people who had classical educations before social media and LLMs.

Sooner or later, think tanks are going to start coming up with price charts showing how much your bills would be with and without crypto and AI. And that's going to be a huge political hot potato for these companies to manage. I hope they're ready to invest in massive solar farms.

But think of 8 billion people paying $40 / month to OpenAI.

Put yourself on the OpenAI IPO waiting list right now!

/s


I find Claude to be better...

I wonder what can be done. It's terrifying to realize how dependent students are going to become on these tools. Too many people are just not willing to live with the discomfort that comes with learning something difficult, when the alternative is so readily accessible. Short term gain over long term gain, exemplified.

I like the use of the word discomfort here. It does take an acceptance of some level of discomfort to engage with material you don't yet understand. Similar to how you experience some physical discomfort / strain when pushing your limits exercising. As you engage with discomfort your tolerance builds and what was once a difficult exercise becomes routine. The reward is worth the effort but I worry what the future looks like with many just opting out.

> I like the use of the word discomfort here.

It really is spot on. I've been reading "The Coddling of the American Mind" [0] by Greg Lukianoff and Jonathan Haidt. The book's premise feeds directly into this idea and has been a fun read thus far. It seems that LLMs will feed into that "coddling" described in the book in a very negative way, as they provide discomfort avoidance.

[0] https://www.thecoddling.com/


Indeed. I tell my students it's supposed to be a workout. It's the workout that makes the learning happen.

I'm starting to think that maybe we need to start failing students again. However brutal and harmful it seems.

And go back to in-person examinations. It sucks for those who have issues with them, but we don't need to limit time too much.


Technology has always brought fear of dumbing us down, but it rarely does. When the internet came along, people worried we'd stop remembering things. When Google Maps appeared, people thought we'd forget how to read maps.

Wait... I'm not sure anyone can read a map... maybe you're right.


Socrates warned against the spread of writing and the subsequent loss of the ability to memorize.

Some things never change, apparently- one wonders if "Get off my lawn, you damn kids!" has been around since the First Grandfather

I don't trust LLMs to create novel ideas, and I'd never copy-paste any AI output. Would I rework the output or maybe use certain thought processes? Maybe. But I'm not trusting an AI blindly; it's a statistical capture of its training data, not a magic wand.

That said, most professors still force students to write code on paper with no autocorrect, Google, or even access to the reference documentation, which seems counterproductive. I remember when people were calling auto-completing text editors cheating, yet despite how much we accept autocomplete or how many professional developers use AI tools, we're still forcing college students to use pen and paper and to memorize syntax and all the functions of a library, all because professors can't be decent enough to even allow access to notes or the basic language or library docs.

The AI or the documentation isn't going to solve your problems. It's enabling you to not memorize an entire language that you might not even work with in your career. Do I think there are people who can and will misuse the technology? Yep, you bet ya. But we need to stop forcing people to suffer due to the people who disregard the rules.

You can also tell a major difference between someone who just uses AI to write the essay or paper, and someone who uses it to develop better ideas or arguments and develop multiple ways of phrasing something. LLMs are nothing new, just like how computer vision has been good for a long time (see OpenCV, YOLO, and Darknet).


I agree with this. I'll use AI as a starting point to begin to understand a new topic or help flesh out definitions of things, but it's a starting point of knowledge not the final destination. Kind of like looking up something in an encyclopedia or Wikipedia; I'm using it for a quick essence of the knowledge, so I know more of what to dig into later.

I'm probably in a small minority but I don't use auto-completion in code editors. I dislike it, and find it disrupts my thinking.

I'm also in that minority. And I think there's something to be said for memorizing that stuff instead of letting autocomplete always handle it. I think it might improve the mental tapestry you draw on to solve problems.

It might be slower and better to not autocomplete everything.

ChatGPT says that autocomplete is better. :) :) :)


I just went back to teaching! I’m hopeful that AI makes the classroom experience better for both students and instructors.

It's upheaval, but it's good overall, I think. I'm enjoying figuring out where students should and shouldn't use AI, and also--holy crap--ChatGPT can write quiz questions like nobody's business. :)

I also use it for coming up with ideas for class, and asking it to challenge my knowledge on a topic. (Just be sure to verify all its claims!)

Still trying to figure out how to convince students that the act of writing is so valuable to them that they really should do it. Something that kinda works on college students is the idea that anything easily done with AI will pay no money. So they'd better get their asses in gear and do the hard work if they want to pay off that enormous loan.

This quarter I'm having them do some low-risk (i.e. hard to be wrong, not graded on style) short writing assignments. I'm hoping that it's easier for them to write the low-risk thing than it is to try to come up with a good prompt. Also, I've asked them for their personal opinion of the topics explicitly, which might make them a little less comfortable asking an AI to speak on their behalf.

With AI, though, I think we're going to have to start small and really work people back up to writing bigger essays. If you just jump in, they'll just punt to AI. And nothing takes the wind out of my sails faster than someone telling me I have to grade a bunch of shit written by ChatGPT.


> With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.

There's been an important shift in education in the last 20 years: a push for lessons to be short, entertaining and unchallenging. When students struggle with tasks, in many cases they are just told not to do them.

LLMs are the next wave of this shift - what could be awesome tools for research and writing will become a crutch. It's not so much an issue with LLMs as it is a shunning of any discomfort while learning in favour of amusement and enjoyment.


> There's been an important shift in education in the last 20 years: a push for lessons to be short, entertaining and unchallenging. When students struggle with tasks, in many cases they are just told not to do them.

I was in school 20 years ago. That is not my recollection. Lessons were long, focused and challenging if you engaged.


That's what they said.

I'm excited for these kids, to be honest. My experience in the education system in the 90's was a goddamn nightmare. I didn't make it to the 9th grade. It just wasn't designed for someone with my ADHD and chaotic situation at home. I didn't care about most of the subjects they were teaching me, and I would get beaten regularly for doing poorly. I get hyper focused on things I care about, and that system provided very few things that I cared about. Today, I'm a senior DevOps engineer. Guess what I do care about?

And it's not just that I only care about computers. I became an autodidact after I left school, and learned about the things that interested me, and only those things. I still got a great education and know a lot of things that provide value to society, and enrich others. It was just that the education system packaged my value as a human being into one big bundle that was graded in aggregate.

I have high hopes that our world's societies can have such an amazing tool at their disposal that kids don't feel like they have to cram the entirety of human existence into their brains for 12/16/18/20 years or suffer the consequences of a failed life; that they can be productive through a creative use of the tools at their disposal, and feel accomplished even if their brains don't work the same way as others'.

Not to mention the social benefits of having nearly instantaneous fact checking available, and building their opinions around it. Then, they can also be good people instead of allowing lazy idiot talking heads to convince them that their situation is an immalleable doom spiral, locking them into an ecosystem of fear and idolatry whose only return is manifest destiny.


> The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities or assigning simple summaries.

So much of education always comes down to efficiency, scale, and standardization. The best way to teach or test someone is by having a one-on-one conversation with them, coincidentally the least standardized. The most scalable and standardized way to do it is to publish some material and then asynchronously collect answers to a predefined set of multiple choice questions.

It seems like everything you can fake or cheat on is a byproduct of choosing to be further along the efficiency/standardization/mechanization scale. Which is a byproduct of many many systemic factors, not least of which is funding.

It's frustrating as hell that so many of our teaching problems have obvious answers that are in many ways better for the students anyways, but those answers just aren't systemically or economically feasible. Maybe they'll have to become more viable as our current, ultra-mechanical system breaks.


> It's frustrating as hell that so many of our teaching problems have obvious answers that are in many ways better for the students anyways, but those answers just aren't systemically or economically feasible. Maybe they'll have to become more viable as our current, ultra-mechanical system breaks.

I think it’s a lie that they aren’t feasible, just like it’s a lie that sustainable, small-scale agriculture isn’t feasible. It’s more that the powerful are only interested in developing centralized systems because those are the ones they can control and profit from. You can’t indoctrinate without a standardized education system, you can’t price-fix without a monopoly/oligopoly, you can’t brainwash an entire population unless everyone’s watching the same thing controlled by the same entity. That said, it’s true that decentralized, sustainable systems aren’t feasible so long as there are people on earth vying for power over others.


This kind of thing is going to be the split between the technically inclined and the non-technically inclined. Those that believe thinking and understanding is too hard or that school is not cool or whatever will have technology to slide by on. They will be a part of the non-technical society and they will languish there. Comfortable in their lower pay and social media and grocery store tomatoes. Truly one of the herd of corporate loyalty.

Fine, I say, we can’t elevate everyone. It’s just a bit amusing though that for a long time the youth was smarter than the previous generation due to embrace of tech. Now the youth is going to be more and more clever and not necessarily more intelligent instead because that is what is valued, taking a shortcut that technology enables.

Oddly it might end up being a rift between the lower class and the middle class. Or perhaps more of a reshuffling, as those who are willing to put in the effort to understand will be a part of the techno-literate class and those who "don't care" will be a part of the "don't care" class.

Digital welfare for the non-nerdy.


My concern is the assessment divide. In the district where I live, I happen to be on a parent board that's providing input to the leadership with respect to technology. Privacy and security are a huge component of the LLM discussion currently, but beyond that I think the biggest area of interest is the assessment divide.

Currently the district is looking at it through the lens of having students still "test" in traditional ways so that if, and when, assessment doesn't align with daily work they can start to understand where this divide exists.

I really like the Ted Chiang quote from the article: "Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way". I can already see this divide in some of the surrounding friend-circle, wherein a lot of young kids (under the age of 12) are leveraging LLMs very heavily to direct them. I fear that these kids will lose the confidence, at a very early age, to even start something. It's widely discussed how getting started is often the most cumbersome part of a task timeline, and so without making things uncomfortable enough for this young generation to work at this skill, I feel as though we're going to see the start of a significantly handicapped generation, because they will over-rely on these tools. And, really, this is just one of many issues of over-reliance.

> Now the youth is going to be more and more clever and not necessarily more intelligent instead because that is what is valued, taking a shortcut that technology enables.

I 100% agree with this. A lot of students will fool their parents, and even a number of educators, by leveraging this new "cleverness". But it's going to hit a dead end as soon as they're forced to perform without it. I also think you're right in that this may become a class divide that further segments the population and also provides facilities for unfortunate control over the lower class through trained and targeted LLM responses.


They will also be passed over by others who put in the work and can use logic, reasoning, arguments to defend or attack work. When the time comes to review work or output, what will the clever LLM kids say without an LLM?

We’re basically asking people to stop being interested and stop having agency because a computer might have some incorrect but accessible summary of whatever topic.

A solution might be to eliminate homework, since anything outside the classroom can be cleverly mounted. What happens when students can only work on their paper during class time? Or maybe handing out paper assignments again. They can cheat all they want, but filling out the answers with a pen would go a little way toward instilling the non-researched answer.


The students use AI to complete assignments; the teachers use AI to create and grade them. It's a giant self-licking ice cream cone.

I'm thinking of going back into teaching because of ChatGPT.

I created an entire scheme of work for my wife today, including all the lesson plans, and next I'll work on some student resources and quizzes. It took about thirty minutes. She'll need to check them over to make sure they're okay, but still a massive time saver.

I've taken photos of my son's revision activities and had ChatGPT mark them. It's surprisingly accurate given his awful handwriting.

Report writing becomes a thing of the past, as I can upload a CSV of grades along with a sentence or two of description, and have it generate unique reports for each student.
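For a sense of how little plumbing that workflow needs, here is a minimal sketch in Python using the official openai package; the file name, column names, and model are hypothetical placeholders, not anything from the comment above:

    import csv
    from openai import OpenAI  # official OpenAI Python client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical CSV with columns: name, grade, notes
    with open("grades.csv", newline="") as f:
        students = list(csv.DictReader(f))

    for s in students:
        prompt = (
            f"Write a short, constructive end-of-term report for {s['name']}, "
            f"who earned a grade of {s['grade']}. Teacher's notes: {s['notes']}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(s["name"], "->", resp.choices[0].message.content)

The teacher would still need to proofread each generated report, of course, just as with the lesson plans.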

This would all allow me to do what I used to love. I can just spend my time with students in the classroom, engaging them, teaching them, discussing with them. I won't be bringing home mounds of paperwork that eat into my evenings and weekends. I'll go into work each day feeling fresh and ready to actually educate kids.

ChatGPT takes away the busy work from both teachers and students.


Why would anyone pay you to be a teacher when they could just use ChatGPT directly? Prompt it to develop a lesson plan to learn whatever, create the learning materials, and then evaluate their mastery of it. And be endlessly available, nights and weekends, for further discussion and help with any difficulties?

>further discussion and help with any difficulties

This is the part where you "draw the rest of the teacher"(1)

The part where all the magic happens. Public school teachers' day jobs these days are rather focused on curriculum delivery and exam prep. They cherish the chances to actually engage in teaching -- not (necessarily) lecturing, not traffic-policing the classroom, not admin overhead, but connecting with a student and their understanding of a thing, and navigating the ways to conceptualize and illustrate it, and the ecstasy of the Click, and the pride of watching them sail forth into brilliance. That's the whole part of teaching that anyone ever became a teacher for (I hope, but not optimistically) -- namely, the part involving teaching.

If this seems unclear, it could be a semantic thing. Here, "teaching" refers to the actual essence of the profession, its locus of fundamental distinction from other professions, and the true target of a passion or fascination for teaching. Much as not everything a doctor or developer does at their job can accurately be described as "practicing medicine" or "developing software". Some of the activities that are not the essential teaching or developing or practicing medicine are necessary, or at least ancillary, but there are not-insignificant amounts of stuff occupying the "not exactly what I got into this for"-to-"actively a waste of time" range.

I'm not trying to say that a One True Pure Essence of the Sacred Art of Teaching exists and is the sole motivator for all teachers everywhere forever, or anything. It's just that it seems like you thoroughly gathered up all the parts that are distinctly not seen as teaching proper (at least in my world, which I hope isn't an unusual perspective) and said something like, look, GPT can do all the support-work/busy-work/paper-work. What do we need the teacher for anymore? After all, GPT has got to be faster than the teacher at coming up with different ways to phrase explanations too, right? And so it is, I don't doubt that. But good teaching goes way deeper, and I have my doubts that an LLM is near the point of being able to act upon a nuanced theory-of-mind of a student's current understanding of a concept in context of their previous experience and learning/personality style and aptitudes. For example.

Maybe we're on the same page, so let me not be uncharitable: There are surely many people employed as teachers who seldom teach anything to anyone (as opposed to, say, merely informing them of it). I would agree that their work is well within GPT's wheelhouse, and all speed to them on the way out, along with the content marketers who write pretend-useful articles all day, et al.

1. https://knowyourmeme.com/memes/how-to-draw-an-owl


When I “scan for correctness” a code change (e.g., in code review), I do my best to look for code correctness. Often that change has to go through product review for visual correctness. A lot of my scanning is brief, determining the logic with the variables I know. However, I often exclude the cases that have already been tested through e2e and unit tests. These tests are valuable to ensure regressions don’t occur.

Please tell me: what validation and regression testing can you guarantee when you have an LLM generate a lesson plan? Why is it important to have your own uniquely generated lesson plan, even if that lesson plan is just a common template with synonyms swapped out?

You’ve eliminated a bunch of extra work for yourself but have no long-standing regression check on the output of this generator.

These “actually LLMs are great for X topic” comments are just here for evangelism then? What do students gain from having you generate partially random lesson plans? Please don’t tell me “time savings”.


IMO most of the reason this teacher quit is students using ChatGPT to get around the puffery and wasteful production assigned.

It is important to have the skills to be able to do that kind of task, and transforming information rather than just transcribing it is a key way of training our biological supercomputers. However I can't recall even one example of that being the case when I was in school at any level. I think if it ever did happen it was as an accidental side effect rather than an intended part of the process.

It would be much harder for such generative tools to regurgitate accurate content for novel things. As an example, a report about what the student was presently working on or had just completed in labs, or a design for something they'd like to do. Maybe a focus on the hypothesis or 'request for funding' for a project would better model real world writing and have sufficient local focus to require human writing.


> I think if it ever did happen it was as an accidental side effect rather than an intended part of the process.

It's a side-effect but it's not accidental.

The objective of most school work (below PhD-level research) is knowledge mastery and skill development. This comes from repeated "practice" assignments. Look at any code you wrote as an undergrad compared to what you write today. You weren't inventing anything new, and you will probably laugh at your coding style and quality, but that puffy and wasteful production was what developed the skills to think and the mastery of the information and techniques necessary to move on to solving novel problems and perhaps creating new approaches.


The rote repetition for math is more obvious. Drilling in the repeated proof that yes, this is how things proceeded from start to finish and it is known.

However, at least my History classes wanted to instill trivia rather than a general overview and mastery of the general course of events and knowledge. Precise dates and figures, rather than knowing what to look up if I ever did need __specific__ data. Which is exactly what I'd want to do if I did have a need for that data: make sure I had the correct date and spelling, and maybe refresh my context years or decades later.


The real problem seems to be with the testing and grading system, and the fact that some people game them to receive some title or other credentials.

If one is really interested in learning to write well, or to understand what good writing is, then this particular teacher might still have liked their job.

That approach probably doesn't make much money though.


I don't know. When have you heard a single student or educator in a class suggest, "All submitted papers should be available to be read by everyone in the class"? Think deeply about the impacts of that. Other than a creative writing class, I can't recall a single time when I could read someone else's papers without their express permission. I'm bringing this up because you're invoking a tired trope (that making money or credentials are the conspiratorial force deciding everything you don't like in education), which isn't specific to writing and thus doesn't really engage with the article; when really, there are much simpler and more toxic aspects of writing education that educators could reverse in an afternoon.

>Think deeply about the impacts of that

I gave it a shot, but I wasn't able to get very deep about it in the first place. I couldn't even begin to come up with any ideas of what could really go wrong if everyone's papers could be freely read by the rest of the class. I couldn't read someone else's papers either, but this was generally because there was no motivation to go to the trouble of gaining access to them, not because it would have been wholly unacceptable.

Maybe I'm just tired, but I think I'm not really grasping how this whole thing follows from / connects to the parent, or within itself; you might have to spoonfeed or state the seemingly obvious or something


I'm not sure I understand your argument, except for the needless stab.

All Master's thesis papers at my university were publicly available at the library, apart from some that did theirs at a company which did not allow this.

Do you suggest that group reading would be a better approach for grading texts?


In 10th grade English, once a week we as a class reviewed one student's writing (without being told who the student was). By the end of the year everyone had a "peer review" of his or her writing at least once.

> However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. “It makes my writing look fancy,” one PhD student protested when I pointed to weaknesses in AI-revised text.

IMO teachers, and academia specifically, are to blame for what AI is doing to students because they’re the ones that have defined good writing as “fancy writing”. The hardest part of learning to be a good writer has been undoing all the years of style over substance ingrained in high school and college. If we’re taught to write like robots, is it any surprise we just have robots write for us given the opportunity? And is it any surprise we don’t see the point of learning to write if robots seemingly do it so much better?

A friend of mine teaches an intro to web development course - it's a two-day thing, usually over the weekend at a co-working place that does events, and the stories he tells me are insane. These are mostly college graduates and white-collar professionals who want to learn some code. They are often perplexed by the idea of files and folders. Right-clicking something is novel for some. He gives all the instructions ahead of time so they can get started right away. That first day is almost exclusively spent helping people get the "dev environment" set up. It's just a folder on their desktop with some HTML, CSS, and JS files.

Yeah this is now a common story, also for undergrads who may only use an actual computer (for anything) for the first time when they get to university. Science classes have to start by teaching the basics of desktop computing.

Funnily enough, I don't remember anyone predicting this outcome when smartphones and tablets came out.

It may not matter though. Desktop OSes aren't so different. Files, folders, windows, right-clicking: it can all be learned pretty quickly. Compared to the rest of learning software development, it's a nice gentle learning curve.


I imagine it's difficult to be a good teacher and find effective ways to encourage students to rigorously think about things they care about in spite of the discomfort it might cause.

I also believe increasingly capable and sophisticated AI systems will play a formative role in transforming education, not as the current chatbots that are disrupting education as mentioned in the article, but as active participants in the reimagined classrooms of the future. The transition will probably be rough, but it has the potential to bring about a better future and more fruitful learning and writing.


I want to point out here: these are PhD students that the author is talking about. Granted, it seems like the class is a bit of a 'BS' requirement from the department, like ethics classes and HR trainings, i.e. a class that isn't on your quals and has no effect on graduation.

Still, these aren't middle schoolers. They are all 'big kids' when it comes to intellectual attainment. They're past the pre-med and pre-law madness, they are in academia by choice over money (the author mentions comp-sci students), and they even bother to attend this class at all instead of just going with a 'Gentleman's A'.

And even these folks, even when confronted, Just. Do. Not. Care.

I get the author's point of view, and agree with their decision to quit.

But, as LLMs get better here in the next 5 years ... yeah, the class is pointless.

Hell, the research is probably pointless too, and the students know it. The truth of the pointlessness is very debatable, once the students get out and past the grad school blues.

I'd love to know what the students are thinking about the class and about it all too. They're the ones being trained on this, after all, and they clearly do not see any point to it.

I mean, it just feels like the students are some of those Imperial Chinese bureaucracy applicants, sitting alone in those open air cubicles, all next to thousands of others, and trying to make the absolute perfect calligraphic strokes, one chance only, or their entire village's hopes and dreams fall to dust. And no one gives a rat's ass that the words they are writing are saying 'Time is short, dance in the flowers, laugh, run, play, the universe has spared us a precious few'.


Is this all bad?

The conversational model of learning, the dialectic, can be better for learning than just reading walls of text.

Instant access to a lot of information, in a more conversational model. I've found it to be more natural.

Are we reaching the stage where every kid has a "Young Lady's Illustrated Primer" from the Diamond Age? That would be a good thing.


If they can solve the hallucination problem, sure. Otherwise we're going to have a lot of people who "know" things about the world that are simply made up.

That is and always has been the status quo. The question is if proportionally more or less is made up now. People may be able to learn more true facts thanks to LLMs.

I think "LLMs hallucinate facts less often than people" is a strong stance to take at the moment.

Was going to say it was getting better. But all the articles I could find said GPT-4o was worse than previous versions.

Would he have this problem with smaller classrooms of kids who elect to be in the class rather than having an academic obligation to be there? There will always be kids who skate by; it just seems like AI is exposing a limitation of choices for students in higher education.

Corporate writing (websites, marketing, internal HR emails) before LLMs might as well have been written by an LLM, given its predictable platitudes, word usage, and patterns.

Related:

The Elite College Students Who Can't Read Books

https://news.ycombinator.com/item?id=41707605


> I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.

> So I quit.

This strikes me as a non-sequitur. Her students were making a certain class of mistakes, so... she quit? Don't students always make mistakes of one kind or another? Teach them to do better, in this case by not using AI, or revising manually the AI output, or some other way. Isn't that the job?


Can’t you empathise? The thought of having to grade and give advice on AI generated content makes me want to run away and live in a remote forest.

Well I am not very good at empathy, indeed. Sorry.

But really. Teachers have complained about stupid or uninterested kids forever. This does not make schools pointless. I am not good at teaching, but some people are, and manage to do it despite some setbacks. Is this AI thing really insurmountable? Did she at least attempt to fix the problem before quitting?

Feels to me like "we've tried nothing, and we're all out of ideas".


He did those things. He said his students recognized the flaws and hazards of relying on AI. But they used it anyway.

Yeah, it seems like they quit to make a point, but the point is lost on me, unless the point is that they can write an article about how they quit.

Solution: go back to in-person pen-and-paper tests, with those bags you put your phones into and get back at the end of the test.

It bodes ill.

The hand of fate doth steer our course,
As wisdom wanes to machines' force.
A hollow voice, devoid of soul,
Whispers false, yet takes its toll.
Oracles warned of knowledge lost—
Now we reap the bitter cost.


I feel for the author, being a part-time teacher myself and seeing the impact of ChatGPT first-hand.

However, I think there is a viable alternative:

1. Spend the beginning of the year/semester showing the potentially disastrous effects of GenAI (e.g. through various exercises involving GenAI)

2. Once students have been "vaccinated" against ChatGPT, assume they will still cheat and switch to a type of teaching that leaves little room for cheating with ChatGPT, e.g. long in-person sessions where students write in the classroom. Then grade that in-class production (i.e. don't leave room for them to update it after class).

The world is changing fast and brutally, but teachers are the tip of the spear against mass enshittification.


I ask my college students how much they would pay someone to solve a problem with ChatGPT. They say "zero". Then I tell them that's how much they'll get paid for doing it.

They ask, "Then why are we solving this problem for class?"

And then we can get into the reasons they're really here.

It's not a bad discussion to have.


Super interesting technique!

Good.

One of the rock-solid findings of pedagogy is Bloom's two-sigma problem (kids who are tutored perform at roughly the 95th-98th percentile compared to standard education).

https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem

If you're not privately tutoring your kids, you're failing them relative to their potential. AI gives everyone the potential to have a private tutor.

Current AI systems are by far the worst that they will ever be. 5-10 years from now, I will trust AI tutors/teachers and will make sure my kids are on the right side of the achievement curve.

Edit: Seems that HN is full of teachers' pets who don't see them for the authoritarian tyrants that they are (mass reflexive downvotes). I fking hate cultural marxism, but you all who want to downvote me should give Pedagogy of the Oppressed by Freire a read. He's pretty much in 100% agreement with how shitty, authoritarian, and tyrannical mainstream education/educators are - and that book is considered "required reading" in many (most?) education departments.

https://en.wikipedia.org/wiki/Pedagogy_of_the_Oppressed

"As of 2000, the book had sold over 750,000 copies worldwide.[1]: 9 It is the third most cited book in social science.[2]"


> 5-10 years from now

Doesn't anyone think the future is unpredictable anymore? Does your life experience not include things taking an unexpected turn, ever?

I can't believe how many commenters here and elsewhere take the future trajectory of AI for granted. Where are the flying cars, the moon bases, the nuclear powered appliances? Things don't always go up on an exponential curve, and even when they do, results are not always what we thought initially. Isn't this obvious, and rather basic?


Recursive self improvement is not a mirage.

Synthetic dataset generation which is actually good (i.e. adds meaningful information by probing a latent space in truly unique ways + very high temperature sampling + including outside feedback for diversity seeding) will solve the "internet is polluted now with LLM outputs" problem

Barring that, mass swarms of AI agents going into the world and retrieving new information, using weak/active learning to mass label, and improving themselves with increasingly less human intervention is INEVITABLE.

The issue with folks like yourselves, and frankly most of mainstream ML research today, is a failure of imagination. As it turns out, ML/AI PhDs seemed to be too busy lying about their models' scores in NeurIPS papers to read sci-fi or other speculative fiction which imagines what technological changes will do to society. This is a huge risk, as rich people with a lack of moral compass often view works of speculative fiction as blueprints.

Here's a few which imagined just a tiny bit of a world impacted by the exact sorts of AI systems which are moving far faster than we could have ever imagined:

1. https://en.wikipedia.org/wiki/Permutation_City

2. https://en.wikipedia.org/wiki/The_Jetsons

3. https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...

4. https://en.wikipedia.org/wiki/Erewhon

5. https://en.wikipedia.org/wiki/The_Diamond_Age

6. https://en.wikipedia.org/wiki/Master_of_the_World_(1934_film...

7. https://en.wikipedia.org/wiki/Ghost_in_the_Shell_(1995_film)

The future is teleological (the academy unjustly treats that word like a slur), and you should listen a bit more to those who fear AI and warn you about just how wrong you are in claiming that AI "won't go up on an exponential curve".

https://en.wikipedia.org/wiki/Nick_Bostrom

https://en.wikipedia.org/wiki/Eliezer_Yudkowsky


Your response to my comment is to tell me a number of things that will happen in the future?

You think writing "is INEVITABLE" in caps proves something? You think Science Fiction novels prove something about what will happen in the future?

Is this satire? Do you not understand the meaning of the word FICTION?


Life imitates art, not the other way around.

https://en.wikipedia.org/wiki/Life_imitating_art

Yes, fiction turning to reality is inevitable. LLMs by definition create "artificial life" out of art. I repeat, reality is teleological.


This was spot-on until the final sentence. And even that was more an individual position ("I will trust") than a general prescription ("You should trust")... so I am not sure why this is unpopular.

AI does have the potential to be a private tutor. Think of that scene from Star Trek (2009)[1], or The Young Lady's Illustrated Primer.

Companies are making 11-12 figure bets that AI will be huge, so this comment isn't out of line.

[1] https://www.youtube.com/watch?v=KvMxLpce3Xw


> Current AI systems are by far the worst that they will ever be. 5-10 years from now, I will trust AI tutors/teachers and will make sure my kids are on the right side of the achievement curve.

Pretty sure I read that line about self driving cars 8 or 9 years ago.

Realistically, kids will always take the easiest path. Why learn when AI will do all the thinking for them? And if AI does all the thinking and people begin to depend on it, what autonomy do these people have left? They've been completely neutered.


That's just an example of https://en.wikipedia.org/wiki/Moravec%27s_paradox

The lack of knowledge about this among the general public is (ironically) a failure of the current education system.


With this progress, what will be the point of being in any upper percentile at all? Everyone will be in it if they have access to the LLM wizard.

We're outsourcing our sovereignty to a machine blob in a black box, the blob will do the thinking for all of those children.

edit: for the authoritarian tyrants that they are (mass reflexive downvotes)

Your response is to rely on opaque knowledge agents produced by the largest and shadiest corporations that humanity has ever known. Your lack of paranoia is a severe vulnerability. Good luck.


> Everyone will be in it if they have access to the LLM wizard.

You're arguing that everyone might be good at sports if they had a personal trainer. How is this not a good thing? If you take these models only by the base value they provide, you might be able to eliminate private tutoring from the equation and make performance of students more comparable.

I am far from happy that these models originate from shady for-profit corporations with dubious incentives, but a state actor could just as well train a model for support in education.

> Your response is to rely on opaque knowledge agents produced by the largest and shadiest corporations that humanity has ever known.

This is ironic. My teachers organized a big week about the importance of the personal carbon footprint when I was still in high school (~2015). The personal carbon footprint is criticized for shifting blame for climate change onto personal consumer behavior rather than corporations, and it was the subject of a big advertising campaign from BP [0].

I think you overestimate the average teacher in terms of their ability to think critically and overall skill in their respective subjects.

[0] https://en.wikipedia.org/wiki/Carbon_footprint#Shifting_resp...


>what will be the point of being in any upper percentile at all

Not the OP, but throughout human history, being in the upper percentile for intelligence has been an advantage. The safe bet is that LLMs won't change that.

>opaque knowledge agents

By definition this is true for every knowledge agent in history. If Person A is ignorant and Person B offers to teach them, there is always (by definition) an asymmetry. Aristotle was an opaque knowledge agent to Alexander.


It won’t be a competitive advantage against other humans. But it will improve our species’ ability to communicate and learn.

What's opaque about Mistral Large? https://huggingface.co/mistralai/Mistral-Large-Instruct-2407

I have total control over the model, literally dozens of generation parameters to play with, and literally thousands of open source projects like this

https://github.com/Mihaiii/llm_steer or https://colab.research.google.com/drive/1a-aQvKC9avdZpdyBn4j...

which allow me to jailbreak the model to my heart's content - except I didn't need to do that, since Mistral isn't actually aligned anyway!

While you're the one relying on "opaque" knowledge agents, I will have DPO'd my open access model with my custom huge debate evidence dataset to be a better competitive debater than any living human on the planet.

Good luck.
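To make the "dozens of generation parameters" point concrete, here is a rough sketch with Hugging Face transformers (Mistral-Large is enormous, so in practice you'd substitute any smaller open checkpoint; the prompt and sampling values below are arbitrary illustrations, not recommendations):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-Large-Instruct-2407"  # any open checkpoint works; this one needs serious hardware
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires accelerate

    inputs = tok("Outline both sides of a policy debate on homework.", return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        max_new_tokens=300,
        do_sample=True,          # sample instead of greedy decoding
        temperature=0.8,         # higher = more varied output
        top_p=0.9,               # nucleus sampling cutoff
        repetition_penalty=1.1,  # discourage loops
    )
    print(tok.decode(out[0], skip_special_tokens=True))

Every one of those knobs is visible and adjustable locally, which is the sense in which an open-weights model is less opaque than a hosted chatbot.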


The downvotes are likely not because of the criticism of teachers, but because of the lack of recognition that AIs are not private tutors, and that AI companies are working very hard to make their products more authoritarian than any human system could ever be.


