Teaching with AI (openai.com)
452 points by todsacerdoti on Aug 31, 2023 | 312 comments



The main issue that is not addressed is that students need points to pass their subjects and get a high school diploma. LLMs are a magical shortcut to these points for many students, and therefore very tempting to use, for a number of normal reasons (time-shortage, laziness, fatigue, not comprehending, insecurity, parental pressure, status, etc.). This is the current, urgent problem with ChatGPT in schools that is not being addressed well.

Anyone who has spent some time with ChatGPT knows that the 'show your work' (plan, outline, draft, etc.) argument is moot, because AI can retroactively produce all of these earlier drafts and plans.


\devil's advocate: If the augmented-student pair performs at the required level, what's the problem? The test should be how good they are at using LLMs. Tools should be absorbed.

Similar to today, when grades are a proxy for ability, but private tutoring puts an average student in the top 2% (Bloom's 2σ problem) for that stage, yet doesn't boost the student's general intelligence for the next stage. Also, hard work, self-discipline, and focus will increase grades but not GI. (Of course, students do learn that specific stage, necessary for the next stage, so this criticism is just of its use as a proxy).

We might say what is really being evaluated is the ability to get good grades (duh) - whether through wealth or work.

The same argument can be applied to LLMs. Using them is an important ability... so let's test that. This is the future. Similar to calculators and open-book exams.


I won't speak about upper-division courses, but in the introductory computing courses, I'm dealing with students who still don't know the fundamentals but want to use the LLMs to "augment" their skills. But it's like trying to rely on a calculator before you learn how to do addition by hand... and the calculator sometimes misfires. They don't know enough to debug the stuff coming out, because they still don't yet have the fundamental problem-solving skills or understand the core programming techniques. In the hands of an expert (or even someone with moderate knowledge), I think these tools can be great. But there needs to be a period of development WITHOUT these tools too. "If you're nothing without the suit, then you shouldn't have it."


> But it's like trying to rely on a calculator before you learn how to do addition by hand

I use salt to season my food but I have no idea how salt is mined. I have used books of log tables in the past to do math work. Back when I first used these lookup books, I had a shaky understanding of logarithms.
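
For those who've never used one, the whole trick is that log(a*b) = log(a) + log(b), so a multiplication becomes two table lookups, one addition, and one antilog lookup. A quick worked example:

    log10(12) ≈ 1.0792
    log10(55) ≈ 1.7404
    1.0792 + 1.7404 = 2.8196
    10^2.8196 ≈ 660    (and indeed 12 * 55 = 660)

You can carry out that procedure mechanically with only the shakiest grasp of what a logarithm actually is.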


> I use salt to season my food but I have no idea how salt is mined.

Culinary counterpoint: I recently came across a very entertaining subreddit [0] which collects screenshots of scathing reviews of online recipes in which the reviewer admits having made changes in the recipe which are often clearly the reason for their bad results.

What I'm getting at is that you may not know how salt is mined, but you are certainly aware that sugar and flour are never substitutes for salt, however white and powdery they all may be. You also know that skipping the salt will completely ruin the recipe, but skipping the nutmeg will not. You probably safely assume that you can replace table salt with kosher salt, or table salt with coarse rock salt (provided it will dissolve in cooking). However, you hopefully know that if the recipe calls for kosher salt you may not be able to use iodised table salt instead (if doing a fermentation), and that table salt is not a happy substitute for coarse salt for finishing.

You absolutely do need some sort of basic understanding of a process to carry it out successfully, even when you have the full instructions available or a helper tool, because reality is never fully reflected in theoretical descriptions. That's why you must know the very basics of cooking before following a recipe, the basics of arithmetic before using a calculator, and yes, the basics of critical thinking and background knowledge before productively using an LLM.

[0] https://old.reddit.com/r/ididnthaveeggs/top/?sort=top&t=all


also https://slate.com/human-interest/2012/05/how-to-cook-onions-... (onions take more than 10 minutes to caramelize, but people filter for short cooking time, so the lie propagates)


> You probably safely assume that you can replace table salt with kosher salt, or table salt with coarse rock salt (provided it will dissolve in cooking). However, you hopefully know that if the recipe calls for kosher salt you may not be able to use iodised table salt instead (if doing a fermentation), and that table salt is not a happy substitute for coarse salt for finishing.

Hopefully he'd also know that 99% of recipes that use kosher salt (presumably because it sounds fancier) will work just fine with table salt (iodised or not). For most uses the grain size will not make any difference at all, and for the remaining ones it is still not essential.


The main question IMO is this:

> when I first used these lookup books, I had a shaky understanding of logarithms.

this implies that now you do have an understanding of logarithms. how did you acquire this understanding?

The fear that many here share is that LLMs will let too many people get by without learning anything substantial from the ground up. Take away their LLM access and they're completely useless - at least that's what people are afraid of. Myself, I'm on the fence - it was always the case that a certain type of person seeks to understand how things actually work, while others just want to apply recipes. These trends will likely make the recipe-appliers more powerful at first. I'm afraid, though, that soon enough they will be driven out by further automation coming on top.


> this implies that now you do have an understanding of logarithms. how did you acquire this understanding?

Not knowing logarithms (a) didn't prevent me from successfully using the log table books and (b) using the log table books didn't prevent me from developing a better understanding of logarithms. I expect something similar will happen with GPT-based education.

I acquired a deeper understanding of logarithms through a bit of reflection and thought.

The math lessons and teachers didn't help illuminate logarithms in any significant way. They simply rattled off the theory of logarithms and moved on to teaching us how to use them. I had to go out of my way to understand it myself.

Is this any different from a GPT giving you a superficial, regurgitated level of knowledge which you then have to deepen through some effort on your part?


It's not any different, but in my opinion both are not good.

Someone should teach you how logarithms work before giving you a log table. Someone at some point should show you a trigonometric circle before telling you to press the cos button in the calculator to calculate a cosine.

That said, some"one" could be a GPT chatbot. The problem is that you might not know the right question to ask.


Yes, agreed. Again, though, the question is what jobs will be left in a couple of years for people besides those like you who went and sought out the foundational knowledge to be able to actually check the work the LLMs are going to do. And from my experience it needs a lot of checking.


These are not at all comparable? This would be more like seasoning with salt in predefined amounts and not knowing how to season to taste, which will fall apart the moment you deviate from the recipe or change up the seasoning mix at all.

ChatGPT might tell you how much salt to use but you'll be screwed when it tells you to use 5 tablespoons for a small steak and you don't have the fundamentals to question it or figure out where it went wrong.


Salt never accidentally turns itself into cyanide. This argument is moot so long as LLMs hallucinate.

LLMs are not a source of reliable knowledge as a result. At best, they are an augmentation of an underlying knowledge base. So really you are arguing students should read the knowledge base; the "augment" is just a thin conversational summarizing veneer.


Similar to basic arithmetic?

Perhaps the experience of symbolic calculators (like Mathematica) is instructive, since surely, for many years now, students have been able to have them do their homework for them. How do teachers handle that?


They give an exam where Mathematica is not allowed. The ones that didn’t do the homework are in trouble.


Huh, straightforward. So it just becomes the old problem of getting students to do their homework without cheating.

\aside Some exams allow calculators and textbooks, and are really hard. Consider: what would an exam assess, where LLMs were allowed?


25 years ago, in the post-TI-92/HP-48 era, we would sometimes get in-class paper assessments without calculators in HS.

University would just ban them for maths.


Writing is as much about organizing your thoughts and learning how to build an argument as it is about putting words on the page. It is about learning how to think. If students rely on an LLM, they will never get a chance to practice this essential skill, and, in my opinion, will be a lot dumber as a result.


My thoughts are along similar lines: People learn to think, to inquire, to persuade, etc. We don't know precisely how. Maybe it's correlated to education, but even the strength of a causal link is debatable.

I certainly don't oppose reforming contemporary education, but at the same time, letting it be replaced by something else by default because that thing has exponentially more engagement power invokes Chesterton's Fence. I'm not sure we even know what we're giving up.


Education has been in dire need of reform for a long time. The writing was on the wall even with the advent of the Internet. We now have supercomputers in our pockets and hyperintelligent assistants at our fingertips.

Chesterton could certainly not have predicted the exponential growth in access to tooling and knowledge. And we're only at the start of the curve.

In my opinion we should go back to absolute basics. Forget grades. Focus on health, language, critical thinking, tool usage, and creativity. Skills that are intrinsic to humans.

Make sure education is fun with lots of play. The main advantage humans have over computers is empathy and creativity. I'm not sure AI will ever "get it".

Provide each student to the extent possible a path to follow their own curiosity and talents. Advanced maths, programming, writing, chemistry, physics, etc available for those interested, even at a young age.

But the baseline education should focus on learning the absolute minimum to survive and otherwise maximize fun, creativity, and empathy.


GPT-4 can already convincingly analyse and explain why a drawn comic is funny or ironic. It's really unbelievable when you see it do that.


For me, and I assume others, the act of writing is an important part of the learning process. Writing on paper even more so than typing on a keyboard.

Writing forces me to organize my thoughts, summarize, elaborate, reference other parts of the text, iterate until the pure essence is clear, remove redundancies, and this cements the concepts in my mind.

Merely reading, or worse, hearing, or worse copy/pasting something is only the first part of learning.

It's similar with programming but I would take it even further. I never really understand complicated code until I've written tests and run the debugger and been very close to it.

An AI chat bot is a powerful tool, but if you just use it to generate assignments you won't learn much. Inevitably it will be used both well and poorly and the results will be mixed.


> Writing on paper even more so than typing on a keyboard.

How do you get to that conclusion? I find that if I have a text editor, I can write my thoughts down and then visually put them in order or encase them in a more general concept with ease, which I couldn't do when writing on paper.


It's worth noting that they were speaking about their personal experience in that paragraph. So probably for them, the "how do you get to that conclusion" is "trial and error".

But I've noticed that many many many people report the same effect, that there's something about pen-and-paper writing that's more effective for thought-lubrication. I resisted for a long time, but now I too am a convert to this school of thought.

Similarly almost everyone notices the downside: it's easier to reorder, reorganize, cross-link, etc, those thoughts, in a text editor (to say nothing of more sophisticated software tools). Some people have systems for doing complicated things with paper that they say mitigates this downside, but I am not currently one of them.

I guess it's possible that your brain just doesn't have this pattern in it. (That is, the pattern of finding pen-and-paper more effective for getting the thoughts to flow.) I mean, for all I know, maybe the huge silent majority doesn't have this pattern.


> there's something about pen-and-paper writing that's more effective for thought-lubrication

In my personal experience, a helpful feature of pen and paper is that it is less efficient than a keyboard: it takes me more time and focus to write things down. Maybe this gives the rest of my brain more time to catch up and understand the things that I am writing.

Written text is also less efficient when it comes to searching. This forces me to organize my thoughts better because I know I won't be able to CTRL+F random keywords later.


Another benefit of text is search, especially across documents. Also great for spellchecking and rearranging phrases, sentences, paragraphs.

But writing enables arrows, lines, crossing-out, small-text notes, circling, variable pressure, colour, etc. Richer than ordering/indenting text. Also higher contrast.


I will go even further. The physical sensation of pen on paper ... including the texture and pressure you apply to key words, capitalization, underlines, etc ... all of it being fed back though your muscles into your brain and getting processed/stored/intertwined with those very ideas and thoughts you are putting on paper ... and the mental faculties you allocate to aligning the text against the margins, ensuring neat spacing, etc ... puts your brain in a zone better tuned to the task at hand IMO.

Some of it may be just ... overloading your brain so it cannot think of anything else ... so you stay focused for lack of choice.


I mostly came to this conclusion in math classes at uni. Handwriting my notes was far superior (in terms of knowledge retained) to using my laptop. My recall was excellent. You could argue that writing math is hard in a text editor (due to the required symbols) but I think it was deeper than that. Writing on paper requires more mental focus than smashing keys, it takes longer and that feels good when you're digesting abstract concepts.

I don't write code on paper for (probably) obvious reasons, and I tend to write essays in a text editor although I also enjoy the act of writing on paper in that situation.


Different people undoubtedly come to different conclusions on this.

Personally, I find while the computer provides powerful writing tools, it also provides powerful distractions.

Maybe you get notifications you just quickly want to check; that slack message from the boss could be urgent. Maybe you decide to just check you're using that obscure word right, or to research a detail for your writing, and an hour later you haven't written the number of pages you set as your goal. Or maybe sitting in your netflix-watching chair looking at your netflix-watching screen just doesn't put you in the right mindset.


My habit for college essays was to write scratch notes on paper, with lots of bullet points, and arrows for re-arranging text, to get the outline of the essay in place, but to actually put the real words together on the computer.

I remember my mother once doing the opposite: she wrote a long letter on the computer, so she could edit and re-write until she was happy with it, then printed it out and copied it by hand, for the personal touch of a hand-written letter (a peacemaking letter to a relative).


> Writing is as much about organizing your thoughts and learning how to build an argument as it is about putting words on the page.

If you cannot organize your thoughts, explain your reasoning, etc. then you’re not going to get very far leveraging an LLM. Sure, it’ll spit out a book report, but unless you can explain — in well-structured writing - what you’re looking for, you’re not going to get what you need for the vast majority of writing assignments.


I think writing for yourself is very different than writing for school or for someone else.

I suffered through writing classes for years and it made me hate writing.

It was only when I started journaling that I started liking it, and liking that I got better at it.

Is it worth making students better at writing if it means as adults they'll not want to write again?


Essays are often about organizing others' thoughts, from references and other source material, and your own thesis based on that.

Organizing and arguing your own thought would be a good test. I think an assisting tool could still be reasonable - choosing between different organizations. Though it's unclear how to assess for original thought - the only such "essays" I know of are PhD theses.

People said similar about log tables and slide rules. And they were right - something was lost (e.g. the sense of nearby solutions). Yet here we are.


ChatGPT and writing are a match made in heaven though. Think of it like a faster, lower-latency, more insightful spell checker.


> If the augmented-student pair performs at the required level, what's the problem?

There is one legal problem with the AI-student pair: the student doesn't own the copyright on what was produced with the AI. Meaning, any work submitted by the student that was at least partially generated by an AI is legally not 100% produced by the student.

So the comparison with an open-book exam or using a calculator doesn't hold: if I search for information in a book, or use a calculator to compute a number, and make my own report in the end, I own the resulting product. I'm the sole author of that product. If I use ChatGPT, ChatGPT is the co-author of my work.

So, using ChatGPT is the equivalent of calling your dad during the exam and asking him to answer the questions for you. Is that really what we want to evaluate?


If you use a calculator, you did not perform the calculation, the calculator did. In a similar vein, an LLM could be seen as a calculator for text. With a calculator, you give it a task like: perform the calculation 12*55. With an LLM you give it a task like: write an outline of an essay on topic X. In both cases you used a tool to perform the task; the only difference is that the tools are becoming more powerful.

Still, learning to calculate without using a calculator and learning to write without using an LLM are in themselves useful skills that can improve the thinking process, so both should be taught.


That's not what the recent court decisions said. If I'm using my calculator to compute 12 * 55 and write "the result of 12 * 55 is 660", I own the copyright on that sentence. I'm the author. If I submit that sentence to my teacher, I submit what is legally my own work.

If I ask ChatGPT "i need to compute 12 * 55. Can you help me?" I get the following result:

"To compute 12 multiplied by 55, you simply multiply the two numbers together:

12 * 55 = 660

So, 12 multiplied by 55 equals 660."

I don't own the copyright on that paragraph. Nobody does. It's public domain. Meaning, I'm legally not the author of that work. If I give that to my teacher, I submit something I'm not the author of. It's legally the same as copy-pasting a block of text from a book or from the web and pretending I wrote it. That's not something teachers want to evaluate. In fact, not only will I fail my exam, but I'm also legally in trouble.

Now, if I take ChatGPT's output and make my own content out of it, for instance rephrasing it as "according to ChatGPT, the result of 12 multiplied by 55 is 660", then it's my own work again, and ChatGPT was just a source.

As a teacher, I cannot accept answers that are not produced by students. Whether they are produced by dad, by a domain expert who wrote a book, by wikipedia authors or by ChatGPT. But I can accept personal works that were inspired by those sources. Big, big difference.


You are conflating copyright and plagiarism rules at school. Copyright stems from the copyright act; plagiarism rules stem from the academic code of conduct. Any similarity between the two is coincidental. What a teacher can accept or not accept has nothing to do with copyright.

And nobody is ever in "legal trouble" for a copyright violation in a typical school assignment... because unless the student work is publicly published, the owner of the copyright is never going to know what you turned in to your teacher. It's a tree falling in the forest, with nobody present to hear it.


Different countries go by different laws I guess. Here in France the rule is "no plagiarism", and even if there is no legal definition of "plagiarism", the usual definition is "you must submit your own work and, when using something that is not your own work, you must quote it correctly."

If I'm asking you "write a program that solves this problem", or "write an essay about that topic", you can certainly find a solution online that has a very permissive licence letting you use it any way you want. Good for you, but that's still not your own work and will potentially put you in deep trouble if you use it. Ditto with anything produced by ChatGPT: not your work. You don't own that. You can write "according to ChatGPT, ..." (although you probably won't impress the teacher with that), or you can get inspiration from the output to produce your own work, but not use it as is and pretend you did it.

> And nobody is ever in "legal trouble" for a copyright violation in a typical school assignment... because unless the student work is publicly published, the owner of the copyright is never going to know what you turned in to your teacher.

Many universities automatically run a plagiarism checker on anything submitted by students. Sometimes one of them gets caught. In France, that's enough to be, worst-case scenario (although that rarely happens), banned from taking any public exam or working for the government, for your whole life.


I'm not disagreeing with what you just wrote, but I am pointing out plagiarism and copyright violations are not the same thing. Any similarity is coincidental.

Consider a film student who uses a modern pop song without permission in their student film, which they credit in the movie credits. No plagiarism has occurred-- but they did violate the copyright of the band's publisher.

Consider a student who finds an essay written in 1893 and passes the words in the essay off as their own- plagiarism has occurred but there is no copyright violation as works from 1893 are public domain.


> We might say what is really being evaluated is the ability to get good grades (duh) - whether through wealth or work.

Isn't that a good thing? Grades shouldn't be an IQ test. IQ is pretty much meaningless unless you're significantly below average or using it for super specialized tasks. Getting good grades means you can sit your ass in a seat (be it at a public library or with a private tutor) and learn enough to do well on some task.

Yes, good grades don't necessarily mean you'd be good at some task. But they show you can succeed in something that requires effort.


Ultimately a potential problem lies further downstream in the reduction of creation of original work and knowledge.

If you become an expert in performing via LLM, the LLM capabilities and underlying data represent the limits of knowledge creation.

To your point about calculators and open-book exams, part of the challenge is for educators to rethink learning objectives and assessments in terms of outcomes that are outside the scope of LLMs.


Knowledge & skill are important, not just general intelligence.

We need kids to know how to write, and how to elaborate their thoughts, for when they will need to do so in life.

The LLMs can't guess what points you want to make to your boss or to your colleagues about the complex professional context you are in.


This. No matter how intelligent you are, you cannot make connections between things you don't know about. If you externalise all knowledge, you are ultimately just an extension of that knowledge source.


> If the augmented-student pair performs at the required level, what's the problem?

The problem is that they have none of the general knowledge themselves and their brain is exclusively optimised to seek out openAI interfaces for guidance. They'll be a drone, an AI zombie.


Current education is a ranking signal to see if we should continue investing in a student. Adding LLMs into things reduces the clarity of that signal, because the purpose is not to assess the quality of the students' work, but the capabilities of the students.


You need the concepts in your head to have intuition. Bouncing each idea all the way out to an external resource is too slow.

Can you fake it with an LLM for a while? Sure. But you hit a point where you need to know the right things to prompt with, and how to evaluate its output.


If you're testing for the skill of using LLMs, then no problem.


Let me try another way: if a teacher requires students to write an article about Nazism, and a generated article passes the class, does that fit the teacher's purpose?


I suspect it’s not being addressed well because it’s one of the fundamental challenges of school in the first place. For many, assessment and grades are the end goal, and any learning that happens is secondary.


The status quo is a miserable mess, but consider that "assessment and grades" are the best apparent evidence of the ultimate goal, "learning". Is that not reasonable for people who pay for education to ask for?

If it is reasonable, then the problem is likely the form of evidence and not its requirement per se.


What barbarous society makes people pay for education?


Every society where education is available. There are different approaches to who covers the bills and how, but someone must; if no one else is covering the costs, the educators do.


What utopia has teachers who teach for free and buildings that materialize from thin air?


One whose residents value continued education differently from each other and thus present the costs to the individual to make their own decision. What kind of world makes the poor person working a manual job pay for someone else going to college?


Then tax the rich properly, and they'll bear the lion's share of anything publicly funded.

Ruining public schooling by underfunding it and having expensive private education for those who can afford it is just the neoliberal agenda. If there were no private schools, if they were illegal, along with home schooling -- not saying they should be, but if they were -- the wealthy would pump funds into the public schooling system before you can say "fuck you got mine". (Kinda like no private bunkers would mean a sudden interest in mitigating climate change globally and for everyone, hah)


Every society since Adam & Eve when knowledge grew on trees.


Are educators not paid? From whence come those wages?

I only meant to acknowledge the societal investment, not imply private education


Not sure what the problem is: just have a test every Friday in class... no computers. Make them 50% of the grade.


That won't work in many national and international systems, due to externally prescribed exam or coursework conditions.


I think there'll be a return of oral tests. Looks like a whole generation is going to get good at handwritten essays and whiteboard math and coding from day 1.


I think your argument is similar to the one we had with calculators and later with the Internet. I think ChatGPT is another tool. For sure there are going to be lazy people who use it and won't learn anything, but it is also sure to be a boost for many people. We will adapt.


Calculators solve problems that have exactly one correct answer. You cannot plagiarize a calculator. They are easy to incorporate into a math curriculum while ensuring that it stays educationally valuable to the students.

LLMs, the internet, even physical books all tend to deal primarily with subjective matters that can be plagiarized. They're not fundamentally different from each other; the more advanced technologies like search engines or LLMs simply make it easier to find relevant content that can be copied. They actually remove the need for students to think for themselves in a way calculators never did. LLMs just make it so easy to commit plagiarism that the system is starting to break down. Plagiarism was always a problem, but it used to be rare enough that the education system could sort-of tolerate it.


I argue that calculators are overtly harmful to arithmetic prowess. In summary, they atrophy mental arithmetic ability and discourage practice of basic skills.

It pains me (though that's my problem) to see people pull out a calculator (worse, a phone) to solve e.g., a multiplication of two single digit numbers.


Sure, calculators made people worse at mental arithmetic, but arithmetic is mechanical. It's helpful sometimes, but it's not intellectually stimulating and it doesn't require much intelligence. Mathematicians don't give a shit about arithmetic. They're busy thinking about much more important things.

Synthesizing an original thesis, like what people are supposed to do in writing essays, is totally different. It's a fundamental life skill people will need in all sorts of contexts, and using an LLM to do it for you takes away your intellectual agency in a way that using a calculator doesn't.


Engineers care about arithmetic. Carpenters do too. Any number of other creative endeavors require (or at least are dramatically improved by) the ability to make basic calculations (even if approximate) quickly in your head.

Arithmetic is the "write one sentence" of composition. The ability to think through a series of calculations with real-world context and consequences is the 5-paragraph essay. If you're not competent with the basics, you won't be able to accomplish the more advanced skill. Being tied to a calculator (not merely using, but being unable to not use) takes away intellectual agency in the same way as an LLM-generated essay (though, I'll agree, to a lesser degree).


> Mathematicians don't give a shit about arithmetic

Sure, once you know how to multiply you don't care about it. But try learning first-year CS math without being able to multiply - without perfect command of the multiplication table.


Exactly. My wife tutors kids at the high school who never mastered arithmetic and are trying to learn algebra. It's hopeless.


That was true before calculators too. Correlation, causation.


I'm not sure what you mean. These kids can't do arithmetic without a calculator. While it was possible to simply not learn arithmetic before calculators, it wasn't possible to hobble onward using the calculator as a crutch.


If those kids were truly applying themselves to the algebra, I think they'd quickly internalize arithmetic too as they used it. But whatever reason led those kids to not do arithmetic without a calculator could well be a reason they don't do well at more advanced math.


My point is failing to learn the basics is a huge hurdle to learning more advanced things. You posit that one could learn the basics and the advanced math at the same time. Maybe, but that would clearly be harder than doing them in order.

Fluency in arithmetic isn't something drilled into kids just to be obnoxious, it's foundational to almost all future math skills.


> They're busy thinking about much more important things.

Generally I agree (because the content of modern mathematics is largely abstract), but to nitpick a bit, number theory is part of mathematics too!

Ramanujan and Euler, for example, certainly cared a lot about 'arithmetic', and historically, many parts of mathematics have been just as 'empirical' in terms of calculating things as they've been based on abstract proof.


Two single digit numbers is indeed sad, but I pull out a calculator daily to do math I could have done in my head. I don’t feel that that is inherently bad.


Not exactly related, but your comment about plagiarism made me think of my days of writing papers and citing APA style. How do you cite a source if it came from ChatGPT and it likely doesn’t fully understand where it got its information?


You don't. You're only supposed to cite primary sources and peer-reviewed secondary sources. ChatGPT is a tertiary source, like dictionaries and encyclopedias. You use tertiary sources to get a quick overview of a topic before you begin delving into primary and secondary sources, but you never include tertiary material in your paper.


Good to know. Thank you for the response!


It'll happily generate sources for you -- just be aware that most of the citations will be bogus. Not sure how many teachers/professors test the validity of citations.


I’m guessing they will have to start checking. Even if it’s just sampling a few for validity.


Facts cannot be plagiarized.

Copyright protects specific expression, and reproducing specific expression is specifically a non-goal of LLMs.


Plagiarism and copyright violation are subtly different. Plagiarism is just presenting someone (or something) else's work as your own. It may or may not be a copyright violation.


This semester, I regularly conduct RFC / whitepaper / chapter reading sessions during my hours. I let students use perplexity.ai, bard, chatgpt to help them understand what they otherwise can't.

Once they're done, they submit a one-pager on 3 to 5 subtle / smaller things they find the most interesting or counterintuitive. At the end of the semester, I intend to share all their one-pagers among their classmates and give an open-book test on them. Let's see how that pans out.


I hope it is successful. I'm too old to be in primary education anymore, but I would have loved to have access to an LLM during that time, one I could pester with an infinite number of questions until I grokked the subject matter.


A calculator is an impressive single-function tool. LLMs and other forms of AI are multi-function problem solving tools. ChatGPT and other AI tools are closer to the introduction of the world wide web than they are to the invention of the calculator.


> The main issue that is not addressed is that students need points to pass their subjects and get a high school diploma.

This is a solution (evidently imperfect and arguably obsolete) the education system uses to address the problem of proving that a student has gained desirable knowledge and experience. It can and should be deprecated altogether once we come up with a better solution. And we certainly can. For example, we can invent an AI which would interview a student/candidate in a way proven (by numerous comparisons to their actual performance measured with other methods) to estimate their level of expertise and capability with a sufficient degree of precision.

In case such a new way proves reliable and efficient, we can even decouple expertise measurement from actual education on a mass scale - let people gain knowledge whatever alternative way they want and can, then just come and pass the test to receive an accredited degree. This way we can automate the production of an unlimited stream of certified experts.


I'm not saying it's a good or bad idea - but the idea of a future where an interview with a computer judges people's ability and decides whether they get to go to college or not sounds like something from Futurama.


I can see no problem here if it is objectively proven to be capable of reliably confirming whether a candidate is an expert. Unless you artificially rule that candidates can only apply once. For cases when a candidate emerges who believes he somehow really can't pass the AI-powered test despite being perfectly competent, there obviously should be fallback alternatives available, i.e. a jury of experts who would talk to him and tell whether there is a chance his claim is legitimate or he's just kidding.


It's a good idea, but doesn't solve the problem of the student then using AI to answer the questions in the interview. We're back to square one.


When the calculator came out, people had the same worries.


I'm ambivalent on LLMs, but I have found one really good use for it: helping me with language learning. I'm now at a level (C1) with my second language that it's really difficult to find resources or even tutors to help refine it.

So what I've been doing is chatting with Claude and asking it to correct whatever faults I make or asking it to give me exercises on things where I need to focus. For example, "Give me some exercises where I need to conjugate the past tense and choose the correct form."

It's like a personal language learning treadmill.


I'm surprised languages aren't more of a focus in the LLM hype. They're like if Rosetta Stone ads were true. They translate at state of the art levels, but you can also give and ask for context, and they're trained on native resources and culture. There hasn't been a jump in machine translation this big and fast, ever.


I'm applying this to Mandarin (link in profile), and while there might not be much publicity/hype, there's definitely been an avalanche of people doing the same thing.

Current problems to solve:

- even the "best" LLM (GPT-4) frequently generates explanations/grammar that are plain wrong. Even its "correct" output isn't quite native (not as bad as round-tripping a sentence through Google Translate, but it's just slightly off).

- LLMs from Chinese companies (Qwen/Baichuan/etc) are immensely better at producing natural Mandarin (but fall short in other respects, which is unsurprising because they're smaller). I haven't tried fine-tuning LLaMa-2 yet, but I've had good success fine-tuning Qwen.

- in my opinion, 90% of the market doesn't need open-ended conversation about random topics. They need structured content with a gradual progression and regular review. You can use LLMs to generate this (which is what I'm doing), but it's not like a random newbie student is going to be able to design this themselves.

Not saying any of these problems aren't solvable, just pointing out the work that still needs to be done.

For me, the most exciting prospect is automated grammar correction during spoken conversation. I've made things harder for myself because I wanted to keep everything on-device so users could be assured that if they purchase something, they'll have access to it forever[0]. The downside is that I can't (yet) practically deploy any of these cutting-edge LLMs at the edge so I'm kind of handicapped in what I can do.

[0] subject to iOS/Android forced upgrades, which I have no control over. It's all cross-platform though, so I'll make a macOS/Linux/Windows version available at some point.


I’m not sure if it reaches the level of hype, but in at least one country where English is not the dominant language—Japan, where I live—using LLMs for language learning is frequently mentioned in the press and elsewhere. Some educators are starting to use them in classes, too:

https://mainichi.jp/english/articles/20230824/p2a/00m/0na/02...


Pretty much all the major LLMs are trained on 99% English text, so it's good for people who want to learn English, not so good the other way around.


I agree, they are perfect for language learning. Reusing another project, I put together a site as a weekend project, mainly for myself to learn Dutch, but decided to release it publicly for free (invite only) to see how others interact with it, as it's almost free to run and looks good in my portfolio. It's an audio chat app; it's not my idea, I've seen many of these, but I wanted to create one for my own needs.

It uses Chrome's speech-to-text (the Web Speech API) for input, then sends the transcript to ChatGPT 3.5, and I use Google's text-to-speech API to generate an audio response along with the text. So practically I can talk to ChatGPT.
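
The core loop is small. Here's a stripped-down sketch of the pipeline in TypeScript - with the browser's built-in speechSynthesis standing in for Google's TTS, and the model name and system prompt as illustrative placeholders rather than what the site actually uses:

    const OPENAI_KEY = "sk-..."; // placeholder; proxy through a server in practice

    // Chrome exposes speech-to-text as webkitSpeechRecognition.
    const recognition = new (window as any).webkitSpeechRecognition();
    recognition.lang = "nl-NL"; // Dutch, per the use case above

    recognition.onresult = async (event: any) => {
      const heard: string = event.results[0][0].transcript;

      // Forward the transcript to the chat completions endpoint.
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: "Bearer " + OPENAI_KEY,
        },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [
            { role: "system", content: "You are a patient Dutch conversation partner." },
            { role: "user", content: heard },
          ],
        }),
      });
      const data: any = await res.json();
      const reply: string = data.choices[0].message.content;

      // Speak the reply back (stand-in for the Google text-to-speech call).
      speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
    };

    recognition.start();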

I also want to add a dictionary module where I can generate and add example sentences and images to words and phrases.

I'm sure there are complete teams working on apps like this as it's such a straightforward use case.

You can try it, it's all Hungarian but you can translate the page. https://convo.hu/ Invite code: CONVO


>They translate at state of the art levels

They are actually much better than previous state of the art. For the couple dozen languages with enough representation in the training set, GPT-4 is by far the best translator you can get your hands on, even without the whole "ask for context" etc


Just remember that you have no guarantees that it will be correct.

Use a combination of external sources to cross-verify. Also, spoken-form generation is very important if you plan to interact with people.

Combining it with real conversation will definitely help.

But I can see how it can be absolutely awesome to play around, as an extra tool.


Yeah, this Show HN convinced me: https://news.ycombinator.com/item?id=36973400

Unfortunately it's no longer free to try, but it worked well.


I'm using it; it works fine for what it is.


Underresourced languages are also underresourced in terms of training data for LLMs, and so for smaller languages LLMs do have significantly more problems, sometimes generating something that's completely weird and wrong not only in terms of facts but also in terms of language, word choice, or grammar.


> smaller languages LLMs do have significantly more problems

Yeah, I would never try this with, for example, Chichewa or Icelandic. Which is both understandable (like you said, less training data) and a shame, because there aren't many good resources for language learners now.


I've tried to get ChatGPT to correct my French whenever I make (small) mistakes, but it never worked. Is Claude better in that regard? Ideally I want the LLM to correct every single mistake I make and provide an explanation.


Claude is much better. I could never get ChatGPT to reliably correct my mistakes, even when I slipped in egregious ones.


Please ignore this comment, I'm only typing it here to get frustration with this topic off my chest. The talk and marketing from these places is all about improving humanity and using AI to benefit everyone whereas the reality is far from this. A few are benefiting and profiting right now, while open AI became closed AI.


>while open AI became closed AI

OpenAI was never Open though. Remember, pre-GPT-3, they had this whole "Omg we made something but it's so dangerous we're never going to make it public" bs?

https://techcrunch.com/2019/02/17/openai-text-generator-dang...

>OpenAI built a text generator so good, it’s considered too dangerous to release


Well, it’s ok to vent about things like that as long as it’s substantive. One way to facilitate this kind of discussion is to remember that topics like this need to become more substantive, not less. As it stands there’s very little in your comment, let alone a substantive point about why OpenAI is bad.

It’s hard to argue that OpenAI hasn’t helped pretty much everyone who’s used ChatGPT at all. That they do it for a profit doesn’t really change that equation much. Obviously I’d love to see them open source GPT-4 too, since it was trained on the world’s data. But they’re not compelled to.


Of course they're "not compelled to". Why would they be? The legal/regulatory "situation" allows them to get away with it at this point. I hope we fix this kind of pilfering soon.

We're still navigating the sleaziness (or whatever you call it) involved in training unfettered on open source and public data and not meaningfully propagating the benefits back to the contributors. This is uncharted terrain as of now.


Surprised not to see any discussion here of Khanmigo[0], which I believe has been using GPT-4 as a tutor for quite a while now (in a beta form). It's been long enough that I've actually been (idly) trying to find any efficacy data. I'm sure that by now Khan academy has it, but I haven't seen them release it anywhere.

The famous tutoring 2-sigma result (referenced elsewhere in the comments) only took place over 6 weeks of learning, and Khanmigo should have over 6 months (I believe) of data by this point.

[0] https://www.khanacademy.org/khan-labs


It's not efficacy data exactly but there was a news article a while back with some interesting perspectives on implementation.

https://www.nytimes.com/2023/06/26/technology/newark-schools...


I don’t understand why they’ve made it US only during the beta period, seems like a weird decision.


I imagine they're doing a lot of development and it can be hard to make sure new products are, e.g., GDPR compatible (not to mention the fact that they might want to have it do different things in order to help students in other countries).


I've personally found AI to be a great help whenever I'm diving into a topic that I'm less familiar with. Recently I used it to help me prep for an interview as well. My partner uses it to help explain STEM concepts that she didn't cover in her schooling.

I do wonder how far away we are from an actual Young Lady's Illustrated Primer. Three years ago I'd say we were 50 years away. Now it feels more like 10.


I just don't know about this. I also find it answers great when I'm not familiar with a topic. However, when I am familiar with a topic I find all sorts of inconsistencies or wrong facts. I'm concerned the same inconsistencies are there in the topics I'm not familiar with, I just don't know enough about the subject to spot them.


I'm normally worried about that kind of Gell-Mann amnesia effect when I'm reading articles or watching YouTube videos that dive into a topic I'm unfamiliar with.

But when it comes to LLMs, I'm conscious of the fact that I'm asking deep, specific questions when it comes to programming and software, about things on the fringes that the LLM can't have read much about. In contrast, with things outside my area of expertise, I'm asking very superficial questions that any practitioner of the field could answer, and so there are likely many, many sources for the LLM to pull information from.

With fields outside my area of expertise, I'm most often asking questions that are comparable to "what is a variable" or at most "how does A* search work". That gives me confidence that the quality of answers is likely to be much better with these questions.


I would expect the distribution of inconsistencies and errors to be the same for both sets of topics, in the absence of any other information.


I've noticed the same thing, specifically when asking about programming topics. It reminds me of the Gell-Mann Amnesia effect, but at least you're aware of the inaccuracies.

https://en.wikipedia.org/wiki/Michael_Crichton#GellMannAmnes...


Conceptually, I get it.


The same for me, I love it for this sort of thing. I can bounce ideas off of it and it'll give me a solid response without getting tired of my questioning. And it'll explain in detail why I'm wrong. I really can't express how useful this is for my style of learning - I like to take things apart and figure out how they go back together.


This is exactly it, this sort of infinitely patient tutor dialog interaction is absolutely perfect for my style of learning and I can't help but be a little bit sad that I didn't have something like that available during school. That being said, I'm beyond thrilled to have it available now, makes it easy to never stop learning.


> Three years ago I'd say we were 50 years away. Now it feels more like 10.

I think those agents could actually reason, though. LLMs do not do any reasoning. They produce plausibly reasonable text.


At a certain point, the difference between reasoning and producing reasonable text disappears.

2 + 2 = 4

2 + 2 = 4

Which of the above lines was generated by an LLM, and which was manually written by a human?


Today I asked ChatGPT to implement an OAuth 2.0 token flow for me in bash. I tweaked a few parameters so it'd feed directly from a settings file in the folder I'm running it from. Then I realized we needed it in powershell; I pasted the tweaked script in and said "write this in powershell" and the script just WORKED. It was great. I don't care about the OAuth 2 flow - it's well documented and implemented in our code in 50 places - I just wanted the script so I could integrate it into some automated testing. It would have taken me probably an hour of messing up syntax in bash and then figuring out equivalents in powershell to finish up the script.
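
For flavor, the script it produced had roughly this shape - shown here as a TypeScript sketch rather than the actual bash/powershell, and assuming a client-credentials grant with made-up settings-file field names (the real flow and fields are provider-specific):

    import { readFileSync } from "fs";

    // Hypothetical settings file layout.
    interface Settings {
      tokenUrl: string; // e.g. https://auth.example.com/oauth2/token
      clientId: string;
      clientSecret: string;
      scope?: string;
    }

    async function fetchToken(path = "./settings.json"): Promise<string> {
      const cfg: Settings = JSON.parse(readFileSync(path, "utf8"));

      // Standard client-credentials token request.
      const res = await fetch(cfg.tokenUrl, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "client_credentials",
          client_id: cfg.clientId,
          client_secret: cfg.clientSecret,
          ...(cfg.scope ? { scope: cfg.scope } : {}),
        }),
      });

      if (!res.ok) throw new Error("token request failed: " + res.status);
      const data: any = await res.json();
      return data.access_token;
    }

    fetchToken().then((token) => console.log(token));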

My coworkers were absolutely thrilled at my turnaround time. I was thrilled I didn’t have to do this boring task. I saw this as an absolute win.


I built a Chrome plugin called Revision History [1] that I released about 2.5 weeks ago, so I've been talking to a good number of educators about this recently. I'd say the majority of teachers are terrified of AI, because it means that they have to completely change how they teach with only a few months' notice. It's not easy to change your lesson plans or assignment structures that quickly and it'll take time to see where this all lands.

Some teachers are looking for ways not to adapt, which is why there's a surge of interest in AI detection (which doesn't work well), but the sharpest educators I talk to are cognizant of the fact that there is no going back. So the plan is to incorporate AI into their curriculum and try to make assignments more "AI proof". This means more in-class work (e.g., the "flipped classroom" model [2]). Others are looking for ways to encourage students to use AI on assignments, but to revise and annotate what the AI generates for them (this is what I am marketing my plugin for). Either way, it's going to be very rough over the next few years as educators scramble to keep up with a monstrous change that came about practically overnight.

[1] https://www.revisionhistory.com. The plugin helps teachers see the students' process for drafting papers, unlike many other plugins that are trying to be "AI detectors".

[2] https://bokcenter.harvard.edu/flipped-classrooms#:~:text=A%2....


I can't help but feel that people are completely missing the forest for the trees with AI and education. Which isn't particularly surprising when you realize most people haven't ever made the connection that the primary point of education is to make effective economic contributors in your society, rather than just being something you do because it's just what we do.

We are going to use powerful AI to teach kids to do jobs that AI will almost certainly do better in 10-20 years?

Like I get that there is a notion of "What else are we supposed to do?", but it still just feels so silly and futile to go along with. Like "Let's use AI to teach kids how to program!"... uhhh, the writing is on the wall.


It's been so long since calculators hit that I guess we all forgot what that was like, but Wolfram Alpha can solve all of the problems in a typical math textbook. Now writing based classes have the same problems, but the solutions are pretty similar:

- Make kids show their work (outlines, revision histories)

- Retool to focus on the things where the tools can't do all the work (proofs, diagrams, word problems for math; research, note gathering, synthesizing for writing)

- After kids learn the basics, incorporate the tools into the class in a semi-realistic way (using a TI-whatever in the last years of high school math education)


> Wolfram Alpha can solve all of the problems in a typical math textbook

I mostly disagree (and I used WolframAlpha thoroughly during my engineering education). Wolfram can solve well-encapsulated tasks (solve for X, find the integral, etc.). Even then it often gives you a huge expression, whereas doing it by hand you can achieve a much simpler one.

It doesn't really handle complex problems, at least like the ones you'd find in a college level math or physics course. It can be a tool for solving certain steps within those problems (like a calculator), but you can't plug the whole thing into Wolfram and get an answer.

GPT4 is kinda OK at this, like 50%+ success rate probably, highly dependent on how common the problem is.


> It doesn't really handle complex problems, at least like the ones you'd find in a college level math or physics course

Those complex problems are taught and asked in college level classes because of Wolfram Alpha. Like the parent commenter mentioned, classes had to adapt to new technology and since Wolfram Alpha could solve the straightforward questions, college classes started asking more complex ones with multiple steps and where the problem needs to be reframed in order to actually solve it.


Nah, you can check any math or physics book that's many decades old and they had the same kind of problems.


What is the point of making kids do all this work if, at the end, there will be zero careers for them and the computers will be doing everything? I mean, why learn any arithmetic at all if you can just ask your phone to sort out your personal finances for you?

I have a feeling that the outcome might be that we actually become dumber and lazier. I personally won't be bothered learning as much if the machines already know everything; it would just seem like a waste of energy. I'll probably just continue to learn survival skills and first aid, in case some type of catastrophic event happens and I happen to survive.

If these systems don't work out how to replicate, build, and run themselves really quickly, then we're headed into some very uncharted waters. When they do work out how to replicate, build, and run themselves, we're probably fucked anyway.

We're at an interesting point in history here where populations are rapidly declining and the youngest generations are so fucking distracted by technology and new shiny they aren't going to be as interested in developing it as the boomers were. Going to be interesting to see where it goes.


ChatGPT can also "show work". Requiring students to show work doesn't prevent cheating.


Show work in the context of ChatGPT means showing each prompt and response, and any edits or collation that you performed.


What would be particularly nice is if it became the norm for LLM users to be expected to not only supply the log of prompts that produced a given output, but also the specific LLM, making the entire process reproducible and verifiable.

Of course this would possibly exclude using SaaS-based LLMs like ChatGPT in places like schools, and as such it might make sense to require students to only use open ones. Or maybe OpenAI could provide a verification service whereby a prompt could be checked against the output it supposedly produced at some point in the past (even if the behavior of the chatbot had since changed).
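
To make that concrete, here's a purely hypothetical sketch of what a verifiable log entry could look like (every field name is invented; no such verification service exists today):

    import { createHash } from "crypto";

    interface PromptLogEntry {
      model: string;     // exact model snapshot, e.g. "gpt-4-0613"
      prompts: string[]; // every prompt in the session, in order
      output: string;    // the final text the student submitted
      digest: string;    // SHA-256 over the fields above
    }

    function makeEntry(model: string, prompts: string[], output: string): PromptLogEntry {
      const digest = createHash("sha256")
        .update(JSON.stringify({ model, prompts, output }))
        .digest("hex");
      return { model, prompts, output, digest };
    }

A school (or the LLM vendor, acting as verifier) could recompute the digest, and for open models even re-run the prompts to check that the claimed output is plausible.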


> - Make kids show their work (outlines, revision histories)

I don't like this. It seems to me that this would lead to teachers forcing formulaic approaches to writing on children (even more than they already do). No uniform process is going to work on everyone and having students write using a process like that produces bland, uninspired and unmotivated essays.

Furthermore, outlines and revision histories also seem easy enough to fake with a GPT-like AI: you can just ask the AI to write an outline and then have it iterate on that.


> make effective economic contributors in your society, rather than just being something you do because it's just what we do.

That's your utilitarian take, which would eliminate many things you might consider superfluous: social sciences, history, art, useful personal skills that might not contribute directly to the economy (like home-related stuff, or questioning authority).

It should be about empowering citizens in many ways, regardless of how much they'll end up contributing to society. As for historical examples, women and wealthy people who couldn't work or didn't need to still studied.

If you subscribe to the purely resource-exploitation view, you end up on the road to optimizing for the elites in power, through lobbying and corporate manipulation.

Having a population that thinks for itself may actually lead to more unrest and economic uncertainty, the opposite of all the KPIs that corporations usually love.

Of course, corporations will want education to be a specialized training ground for future human-resource exploitation, and governments might want to create voters for their party (or nation-build through ideology and principles). But just because either might get their way to a high degree doesn't mean that "the primary purpose" is whatever they get away with.


>the primary point of education is to make effective economic contributors in your society

I see zero evidence that this is true. It is neither the stated nor the revealed preference of a significant portion of the population.


People don't spend thousands of dollars and years of their lives to get a degree because the process is inherently fun. The reason to go to college is so you can get a good job that pays more than if you had stopped at high school. Same with getting a high school diploma (though less so).

What more evidence than that do you need?


I think you conflate economic contribution with having a good job too much. They often don’t overlap much.


For one thing, a lot of people spend that money and get a degree that will not meaningfully help them in the labour market.

I see your point though, I was thinking about school, not university.


You might want to watch this video for a different perspective:

https://youtube.com/watch?v=3g5HkfUrna0


"We are going to use powerful AI to teach kids to do jobs that AI will almost certainly do better in 10-20 years?"

I think understanding how to work well with AI and what its limitations are will be helpful regardless of what the outcome is.

Even if silicon brains achieve AGI or super-intelligence, I think it's highly unlikely that they will supersede biological brains along every dimension. Biological brains use physical processes that we have very little understanding of, and so they will likely not be possible to fully mimic in the foreseeable future even with AGI. We don't know exactly how we'll fit in and be able to continue being useful in the hypothetical AGI/super-intelligence scenario, but I think it's almost certain there will be gaps of various kinds that will require human brains to be in the loop to get the best results.

And even if we do assume that humans get superseded in every conceivable way, AGI does not imply infinite capacity, and work is not zero sum. Even if AI completely takes over for all the most important problems (for some definition of important), there will always be problems left over.

Right now, just because you aren't the best gardener in the world (or even if you're one of the worst), that doesn't mean you couldn't make the area around where you live greener and more beautiful if you spent a few months on it. There is always some contribution you can make to making life better.


Assignments are usually just a proxy for the skill which we're actually trying to teach or evaluate. The fact that a course assignment can be done by using a chatbot does not mean that the skill which that course is teaching can be done by using that chatbot.

If a coach assigns someone to repeatedly lift heavy weights in order to become stronger, lifting them with a forklift doesn't achieve the goal, because in real life that strength is intended for situations where you won't use a forklift. The same goes for exercising various "mental muscles".


I agree, except all those kids just go on to work in an Amazon warehouse driving a forklift anyway.

We need to start valuing human intellect more otherwise it will simply become a thing we have outsourced to machines like washing clothes and transporting our bodies.


I'm glad I learned how to figure out the shape of a function, even though graphing calculators were already a mature technology at the time I was learning that (and had been made even more obsolete by Jupyter notebooks by the time I entered the workforce).

It's all very tricky to figure out what foundational knowledge will be useful in 15 years (that's why we pay educators the big bucks ... oh wait ...), but just because it's hard and uncertain doesn't mean it isn't valuable to try to figure it out.


It's not certain we'll get to that point, and if we do, we'll probably need to rethink society as a whole. We have a lot of training data on human knowledge, discussion, and Q&A, but very little on humans actually working and going through their thought process, which I suspect is why projects like AutoGPT aren't really that good [1].

[1]: https://www.reddit.com/r/AutoGPT/comments/13z5z3a/autogpt_is...

Relatively high fidelity and public data for some domains does exist, however (think all github commits, issues, discussions and pull requests as a whole). For those domains, it indeed might be only a matter of time.


Indeed, the problem is that AI puts the whole education system into question. Even before AI it could have been argued (The Case Against Education: it's mostly signaling, etc.). At this point I'm wary of putting my 5-year-old through this pointless 12-year exercise; unfortunately, in our country there is effectively no choice, and home schooling is not allowed. If I could, I would offer an environment where he could pursue any and all interests together with other kids, under the guidance of competent people (and AI). Cultivating your strange passions is probably the only way to contribute and thrive in a post-AI world.


It's hard to stare in the face of the abyss.


Especially when the abyss stares back at you and it's chatty and apologetic.


The purpose of education is not labor preparation :)


It absolutely, unequivocally is. People can romanticize it anyway they want, but formal education is very different than religion camp, painting class, or spiritual retreat training.

I feel for people who don't get this or perhaps never contemplated it, but the system is designed to breed good workers and sort them into bins. And it's not a bad thing either. Sure, there are non-economic, self-contained benefits, but those are perks, not purposes.


>> The purpose of education is not labor preparation :)

> It absolutely, unequivocally is. People can romanticize it anyway they want, but formal education is very different than religion camp, painting class, or spiritual retreat training.

No it isn't "absolutely, unequivocally." What specific formal education are you talking about?

Especially in the past, but continuing somewhat into the present day, formal education has mainly been about enculturation, not "labor preparation." That can be seen clearly in the former emphasis on dead classical languages and the continued (though lessened) emphasis on literature and similar subjects. There's zero value in reading Shakespeare or Lord of the Flies from a "labor preparation" standpoint.

However, I do see a modern trend where many people are so degraded by economics that they have trouble perceiving or thinking about anything except through the lens of economics or some economics-adjacent subject.


Enculturation is just to make it so people who otherwise cannot bear much economic fruit can at least not be producing negative value. Studying the classics, if nothing else, should at least produce a well adjusted human. That in and of itself has value.

But those are the fringes of the education system. The core focus is on producing high value citizens that will produce far more than they take. This is abundantly clear if you look at the social valuations of high caliber students with fruitful degrees.


> There's zero value in reading Shakespeare or Lord of the Flies from a "labor preparation" standpoint.

There is zero labor prep value in learning to extract information from text (that you possibly have no interest in reading)?


It is one important purpose, but education has lots of stakeholders who each have their own purposes.

The students want to learn things and socialize with peers.

Teachers want to teach, earn a living, get respect of society.

Parents want their children to be taught, but also want their kids to be taken care of by other adults so they can go to work in peace. Poorer parents in particular also need their kids to be fed and sometimes schools have to do that too.

Governments want an educated citizenry that is productive, pays taxes, knows the basics of law, civics and so on. They also want to monitor and protect unfortunate children who have bad parents.

If schools only had one purpose you wouldn't see the stakeholders fight each other so often. But in reality parents fight governments over the curriculum, students fight teachers over the amount of work, teachers fight government/parents over their wage and so on.


And most of that fighting goes away when parents assume the responsibility of teaching their own children. This particular responsibility is presently only available to those who build their lives around the idea of a nuclear family and home schooling. I used to think that one had to achieve some middle/upper class financial status to make this viable (and having money does make this easier), but I've seen poor families manage home schooling quite well. This requires community (a church with others who are home schooling, a home school co-op) because the kiddos will age out of your ability to teach rather quickly (10-12 and suddenly they're doing math you haven't touched in 2 decades, or more involved history or literature that the average parent may not be equipped to teach well, or electives that fall outside the experience of the parents).


> And most of that fighting goes away when parents assume the responsibility of teaching their own children.

No, because you've forgotten one of the important stakeholders here, which is society at large, which has an interest in ensuring a general level of shared education. Which once again results in fighting, as parents who are teaching their own children run up against government requirements that they may not agree with.


> The students want to learn things and socialize with peers.

No. Students want to socialize with peers or play sports/video games. Not learn.

> Teachers want to teach, earn a living, get respect of society.

This is correct

> Parents want their children to be taught, but also want their kids to be taken care of by other adults so they can go to work in peace. Poorer parents in particular also need their kids to be fed and sometimes schools have to do that too.

Also correct. Parents want schools to be daycare, or for elite families, schools are networking opportunities

> Governments want an educated citizenry that is productive, pays taxes, knows the basics of law, civics and so on. They also want to monitor and protect unfortunate children who have bad parents.

Correct. But a population can be productive while being largely uneducated (see China)

But despite the different priorities of these groups, "the student learning" is not one of them.


> But a population can be productive while being largely uneducated (see China)

China's a terrible example in trying to support your point. If the pitch is "education makes better workers" then you shouldn't be looking at GDP, you should be looking at GDP per capita, aka "Are the workers more productive in more educated countries?". And China has a terrible GDP per capita. It ranks 64th in the world to the US's 7th. Applying slightly more rigorous comparison across the world, there's a clear correlation between GDP/capita and average educational attainment.

And you have a very dismal view of students. In my area, at least at the honors level, students were pretty well engaged in learning. Now, that was mostly in order to get into good colleges and appease their parents' desire for them to learn, but they definitely were eager to have the knowledge that was being taught. By the time you get to college, a fair fraction of the students are truly engaged with the material for the material's sake. Even moreso in degrees that aren't glorified trade school programs.


> No. Students want to socialize with peers or play sports/video games. Not learn.

This is frightfully incorrect. Students definitely love to learn. They do not like to be stuffed in a chair and lectured at and forced to do rote activities. But who does?


I don't think the population of China is "largely uneducated" in the sense that began this thread. It is not rare for Chinese kids to go to school, and those schools are not only used for job training.


One of the initial proponents of public education in America, Horace Mann, saw education in a two pronged manner.

First, a functional democracy requires that the citizenship be well informed and capable of critical thinking: "A republican form of government, without intelligence in the people, must be, on a vast scale, what a mad-house, without superintendent or keepers, would be on a small one."

He also saw the economic side, saying that education was an equalizer for people in terms of helping them to reach their full potential.

I quite agree with his assessment. In a system where everyone has a vote, it becomes quite important that everyone have a sense of things that extends beyond their career vocation. His imagery of an uneducated republic being a madhouse makes much sense from this perspective.

Insofar as we have given up any optimism about the democratic enterprise, then certainly we could look at education as purely to put people into economic bins, but at least in my own public school education in the US, every student did get significant doses of math, history, science, etc., outside of their expected career direction.

This to me suggests that there is a tension, not fully resolved and HOPEFULLY never fully resolved, between education-for-economics and education-for-democracy. I think it's quite pessimistic though to give up the ghost on the education-for-democracy aspect.


Ok it’s hard to say anything “absolute” about the purpose of education since it’s a philosophical/political stance and not a physical phenomenon, but I appreciate your cynicism. I see how the rich and powerful have shaped our public education institutions, and agree that American schools at least often push students into rote labor-focused paths.

That said, the discussion is about the purpose of classrooms in a world of AI, and I think it’s a good time to remember the less economic purposes of education that have always been there under the surface. I think few teachers are more driven by bringing economic benefits to their students than enriching/exciting/interesting them, and secondary and post secondary education has always had a huge variety of non-occupational courses, from ancient history to obscure languages to nice math.

Overall, I imagine we agree on the most important thing: if education does end up changing immensely as AGI gains footing, we should change it to be less economic


> It absolutely, unequivocally is.

This is a category error. You're talking about the education system as though it was designed from accurate first principles towards a specific intended outcome. Like, you can say that the absolute unequivocal purpose of a nuclear reactor is to heat water. But when we're looking at sociopolitical organizations, that have been codified through various political forces over tens of generations, through the demands of ever-shifting stakeholders, etc this is not a useful framing.

> but the system is designed to breed good workers and sort them into bins. And its not a bad thing either. Sure there are non-economic self contained benefits, but those are perks, not purposes.

I think a more accurate framing is that the system is currently evolved into strongly emphasizing this mode of behavior.


This is true, and it is puzzling how people think that there are geniuses, evil or otherwise, who have planned the educational system so as to achieve some sort of results for the society at large that go beyond the mundane.

Where the mundane is keeping young people out of the streets, maybe teach them arithmetic and some grammar. And the leaders, that is, the teachers, want, most of the time, just to bring home a salary, not funnel the masses from schools to office desks or assembly lines.


It absolutely, unequivocally, is one of the reasons universal education to a general level is a valuable investment for a society.

But it is not the only reason, or (in my view) even the most important reason.

Maybe before assuming people haven't contemplated what you're saying, you could try to contemplate what else general education might be buying us. Maybe by imagining how it would look if school was actually just job training starting in elementary school, rather than covering all these other things.


I share your view on this. I guess it all hinges on whether AGI is possible and, if so, how fast it is coming. If we don't achieve AGI, then education is still necessary to push knowledge further, and since we don't know who is going to achieve this, it makes sense (right now) to still push education for all as a social obligation.


I think (and this does not directly contradict your post, because I do think you are not far off) that formal education is supposed to help a person find their place in society. Not everyone becomes a plumber, electrician, lawyer, MBA, or engineer. Some become artists, activists or, heavens forfend, politicians.


Depends on who you ask.

"Purpose" in an entirely subjective thing.


Workaccount2 beat me to it. But it's well documented, e.g. https://qz.com/1314814/universal-education-was-first-promote...

"Much of this education, however, was not technical in nature but social and moral. Workers who had always spent their working days in a domestic setting, had to be taught to follow orders, to respect the space and property rights of others, be punctual, docile, and sober. The early industrial capitalists spent a great deal of effort and time in the social conditioning of their labor force, especially in Sunday schools which were designed to inculcate middle class values and attitudes, so as to make the workers more susceptible to the incentives that the factory needed."


I really take issue with describing this as some sort of well documented absolute fact that education is about labor. It isn’t the 1850s! Just because capitalists interested in skilled labor were “some of” the biggest supporters of English public schools in the 1850s doesn’t mean we should forever commit our society to their designs.

From the Marxist paper backing that article:

  England initiated a sequence of reforms in its education system since the 1830s and literacy rates gradually increased. The process was _initially motivated by a variety of reasons_ such as religion, enlightenment, social control, moral conformity, socio-political stability, and military efficiency, as was the case in other European countries (e.g., Germany, France, Holland, Switzerland) that had supported public education much earlier.15 However, in light of the modest demand for skills and literacy by the capitalists, the level of governmental support was rather small.16
  In the second phase of the Industrial Revolution, consistent with the proposed hypothesis, the demand for skilled labor in the growing industrial sector markedly increased (Cipolla 1969 and Kirby 2003) and the proportion of children aged 5 to 14 in primary schools increased from 11% in 1855 to 25% in 1870 (Flora et al. (1983)).17
Sorry if I sound challenging or rude - just hurts my soul to imagine people giving in to the capitalist’s desire for us to interpret our prison as a fact of nature


Even pre-AI, it seems like we're in some weird place where IQ scores are dropping in ways that don't match the Flynn effect, and people like Bryan Caplan argue that education is just signalling to gain access to higher-status, better-paying bullshit jobs (David Graeber). Everyone here talks about how real that is in the world of whiteboard interviews and rest-and-vesters. It seems pretty true elsewhere, in the world of white-collar nepotism.

What are we really educating kids for these days? To have advantage over other kids because we have no fair meritocratic way to allocate resources or meaning in society?

Won’t AI just make this infinitely worse?

Like, one vision is more teachers and more students learning better; another is fewer teachers, more babysitters, and lower government budgets for education, leaving students with the equivalent of an automated telephone answering service menu instead of a real human call centre.


Nah, POSIWID.


Depends on how you define "purpose".

To me it means "the reason why <person> does <thing>", so the phrase "purpose of a system" doesn't make sense without a particular human subject who's interacting with the system.


Funny, I was going to use exactly the same acronym to explain why schools have little to do with labour preparation.


It is today. It used to not be. Blame Reagan when he was governor of California: https://www.chronicle.com/article/the-day-the-purpose-of-col...


Certainly a major purpose.


Good job!

Btw your site is exposing the .git directory https://www.revisionhistory.com/.git/config

Might want to set a filter rule for that


OMG... feedback like this is soooo helpful! Thank you. Nothing concerning in the .git directory, but yeah, I probably shouldn't be showing that. I will update my sync process to exclude that. Thank you!

Edit: should be fixed now :)


You were just being true to your name, by offering your revision history!

On a more serious note, this is a great example of how to handle a vulnerability report - fix it, change your processes, and say thank you! (geek_at could probably have done better by disclosing this in private first, though)


I figured there wouldn't be any secrets in the git, and also, if your site is on Hacker News (or at the top of a comment thread on HN) you're glued to it, so I thought they'd fix it fast.


> You were just being true to your name, by offering your revision history!

lol

> geek_at could probably have done better by disclosing this in private first, though

I'm glad geek_at let me know quickly and also made this a learning experience for others. No harm done.


Is there any extension to do that, or do you do it manually? I have ADHD, so these types of tools save my life. This happened to me a few months back: I killed the VM host and made a new OS. Maybe too overboard, but it works.


I'm using DotGit [1] which checks for .git and .env files for every site you visit. You wouldn't believe the things I randomly found (and reported)

https://chrome.google.com/webstore/detail/dotgit/pampamgoihg...
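
For anyone curious, the check itself is trivial to script along the same lines; a minimal sketch (the URL is a placeholder, and the [core] heuristic is just one way to avoid false positives):

    import requests

    def git_exposed(base_url: str) -> bool:
        """Check whether a site publicly serves its .git/config file."""
        resp = requests.get(f"{base_url}/.git/config", timeout=5)
        # A real git config starts with a [core] section; checking for it
        # avoids false positives from catch-all 200 error pages.
        return resp.status_code == 200 and "[core]" in resp.text

    print(git_exposed("https://example.com"))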


The fact that homework has become so prevalent that it takes more than an hour each day (on top of what is already a full-time job's worth of classes) is a crime against childhood anyway. Good riddance.


I was frustrated with the sheer number of essays I was helping my nephew with. It felt like every single week he had an essay from multiple classes, and it was always the same b.s. for every single one. I just couldn't see the point of having to do so many of the same type of "research" assignment over and over again.

When chatgpt beta first became available I was overjoyed, it worked wonders. It worked so well that I figured teachers would have to let go of the essay crutch they had been relying on so much.


My high school experience was largely teachers mentally assuming that their homework was the only homework we were assigned, and not realizing that the time we’d need to spend on all of it was 6 or 7 times what they personally assigned.

I had one teacher assign four separate (essentially busy-work) assignments over Christmas break. Ridiculous.


Argh, way too much busywork nonsense indeed. I slightly suspect the intention behind it is to mould kids into mindless worker drones that corporations want, but on the other hand per Hanlon's razor it's more likely to just be teachers exercising their ego because they can.


Amen. I did great in school, except I hated homework from a very early age onwards. Tedious crap I already understood.

The occasional take-home project is probably fine, but otherwise let school be school, and leave it at that.


If you're not using the latest, best tools available to teach your students - and if you're not teaching them about those tools - then you are a bad teacher. Period.

Language models should be introduced in classrooms because they're a part of society now, and they're here to stay. Kids should learn about them - how they work, where they came from, how to use them - just like they should learn how to type or send an email.

It does remind me of my experience as a middle schooler in 2002, when our class took a trip to the library, and the librarian gave us a lesson on "how to use search engines properly." In retrospect, the societal worry at the time was about search engines replacing librarians, so it was perhaps notable that this librarian had the humility to teach us how to use her "replacement." Surely the same applies to teachers and ChatGPT: a good teacher will not be worried about whatever impact ChatGPT might have on them personally, but will instead take the opportunity to teach their students about the new horizons opened up by this technology.

(The funny part of that seminar in the library was that the lesson emphasized the need to construct efficient, keyword-based queries, rather than asking natural language questions to the search engine directly - but twenty years later we've come full circle and now you actually can just ask your question to the language models.)


How does your extension differ from Draftback? (https://chrome.google.com/webstore/detail/draftback/nnajoiem...)

Just curious.


Not entirely different but built more specifically for teachers, so that they get relevant information without having to watch the video every time. Also, draftback doesn't integrate with Google Classroom.


Thanks!


I have to say, I think I would have hated this growing up. I have a tendency to become emotionally-invested in the quality of my writing, and I don't like people seeing it in a state I don't consider presentable.

Maybe this tool would have forced me to get over that, I don't know.


I had a college professor (English 101) tell the class on the first day, "You're going to learn how to mutilate and kill your babies." He was brutal in draft reviews, but he pushed me to learn the process of drafting and writing. I produced work in that class that I didn't think I was capable of.


I feel it's a lot more intimate for the teacher/professor to see all your revisions as you're typing them. The ones you don't want to submit.


Thank you for the feedback. My wife feels this way as well. It’s good to hear perspectives on it.


Never know until you try


I'm sure of it. In fact, I agree that children should be taught modern technologies, and you shouldn't assume that AI will replace everyone. Remember how in the beginning there were a lot of reports about students writing essays using ChatGPT? My friends and I still use the https://edubirdie.com/write-my-essay-for-me service, because they write reliable papers that get us good grades. AI often writes nonsense, and it should be borne in mind that its database is rarely updated.


Just wanted to say I had this _exact_ same idea a week ago and was googling around to see if anyone had done this yet. I guess I don't have to build it now haha. Hopefully you can sell this to universities/the right people and make some headway on it!


There are a bunch that purport to do “AI detection” and a few others that are similar to mine (and more coming, I’m sure), but I like to think that mine is the most convenient to use :)


> because it means that they have to completely change how they teach with only a few months' notice

Why? Doesn't it only mean they need to change how they test understanding?


No, because most assignments aren't really to test understanding but for different pedagogical goals (you could go down a rabbit hole of educational theory about different types of assessment and how and why they are used in different circumstances).

In quite a few courses a key part of the actual valuable learning work is expected to happen outside of the classroom by practicing some activity or putting in time&effort to creatively think about some topic. And for many students this work happens only if there are adequate means for controlling that it has been actually done. So if students can trivially fake having done the practice or thinking, that course design isn't working any more, and you need some completely different structure of the coursework or activities so that the students will put in that work required for the learning outcomes.

Like, if someone assigns an essay about (for example) the impact of foobar on widget manufacturing, it's not because anybody cares what the students think about this topic, and (in most courses) not because they want the students to practice writing essays, and not even to evaluate whether students know about the impact of foobar on widget manufacturing. Usually the goal of such an assignment is (a) to have students put in some time to read and think about these topics as a whole, and (b) to see if they have some specific misconceptions that should be corrected with feedback (which is substantially different from testing the level of understanding; if you wanted to do that, a different assessment type would probably be chosen: formative vs. summative assessment).

If the student has someone or something else write that essay, both these goals fail - but these things are needed for the course, so now the course needs to be redesigned to throw away the essay (because it spends time but doesn't contribute to the goals due to it being faked) and add some new activities that will achieve these goals - for example, extensive in-class discussion or debates that will require preparation and will reveal those misconceptions. But that requires changing how that course is taught, which takes time and effort.


If people don't want to learn, let them cheat themselves. Just that grades (and whatever determines whether a student gets to go to a good university) should be assessed 100% in person.


My sister (who is a middle school teacher) and I developed a real training program for teachers, and this "guide" from OpenAI is quite underwhelming. It doesn't address 90% of the problems teachers actually face with AI... this is mostly a brochure on how to use ChatGPT to get info.

If you are a teacher or know a teacher who is struggling to adapt this school year, I'd be honored to speak with them and see if we can help.


This looks like a promotional comment to sell some kind of paid "AI Training" [1], doesn't address anything in the linked article.

[1] https://max.io/teacher-training.html


Thanks for the detective work! On the one hand, I don’t have a problem with someone mentioning a helpful resource they developed in a relevant thread, even if it’s paid. But it would be more honest to disclose that’s what’s being offered rather than disguising it as an offer of a free resource.


Oh, I think I just fell for it. I was asking them if they could share their knowledge...


Yeah you have to be careful with this AI landscape. It is fraught with a lot of the same issues as the crypto landscape, with many of the same players.


I wonder whether generative AI will be thrown out with the bathwater once this issue hits critical mass the same way cryptocurrencies were. Both technologies provide real solutions to real problems faced by small sets of honest users, but they can also be abused by bad actors to assist and amplify their actions in pretty much the same way.


> If you are a teacher or know a teacher who is struggling to adapt this school year, I'd be honored to speak with them and see if we can help.

This is a worldwide issue.

I think it's great what you two did, maybe it would be more effective if you did a small article or video on it?

Many would be honored to get help from your insights; it's needed. I see how teachers are struggling in Germany, even while they are open to embracing this technology.


Thanks for the kind words and I agree!

I prefer to do the teacher training workshop in person for various reasons, but we have considered recording it.

I've also given 2 open lectures at different libraries (and have been asked to do more) for the general public. I should certainly record that, since it's more general audience.


I thought it gave good guidance.

Of course it's not a 4-hour in-person workshop, like what you're proposing. But it already adds positive value.

It covers a good amount of the topics your course covers, I think. Introductory-level, perhaps, but it's a start.

Honestly? I don't understand your comment. I read it as negative towards OpenAI (am I wrong?)

I'd expect someone like you to praise OpenAI's willingness to contribute in this space.


Yeah I read this and was repeatedly surprised and thankful they finally put some of these things in writing. That section about whether or not detectors work is going to be hugely helpful to students wrongly accused of using AI to generate their essays or something. Take that page and show it to your teacher "Look! The publisher of the thing says those detectors aren't accurate!"

I'm with you; the parent seems more like an ad, plus negativity towards OpenAI.


<< I'd expect someone like you to praise OpenAI's willingness to contribute in this space.

Why would you assume the OP's position in this case? There are multiple valid, albeit unstated, reasons why the company in question may not be the best vessel for those efforts. And, just to make sure it is not left unsaid, it's not like OpenAI is doing this for altruistic reasons.

I do agree that it is not bad starting material, but I think you will agree that it is clearly not targeted at the group that gathers at HN.


It gave me the impression this person is concerned about helping teachers navigate this new and challenging reality.

Call me crazy, but that was the impression I had from the comment and the course website.


TIL about "CoderMindz Game for AI Learners! NBC Featured: First Ever Board Game for Boys and Girls Age 6+. Teaches Artificial Intelligence and Computer Programming Through Fun Robot and Neural Adventure!" https://www.codermindz.com/ https://www.amazon.com/gp/aw/d/B07FTG78C3/

Codermindz AI Curriculum: https://www.codermindz.com/stem-school/

https://K12CS.org K12 CS Curriculum (and code.org, and Khanmigo,) SHOULD/MUST incorporate AI SAFETY and Ethics curricula.

A Jupyter-book of autogradeable notebooks (for AI SAFETY first, ML, AutoML, AGI,) would be a great resource.

jupyter-edx-grader-xblock https://github.com/ibleducation/jupyter-edx-grader-xblock , Otter-Grader https://otter-grader.readthedocs.io/en/latest/ , JupyterLite because Chromebooks

What are some additional K12 CS/AI and QIS Curricula resources?


Can you share some of the outline or problems your guide solves?



Prompt 4, the "AI teacher", is pretty good for learning group theory, at least. (Just trying it right now on ChatGPT 4.0.)


I found lots of good value in their publication as well.

Especially for teachers, who I believe (most at least) have no clue about prompt engineering and how to talk to an LLM.


IMO, "prompt engineering" is an indication that LLMs are really immature technology. There is no intrinsic value in prompt engineering; it's OK to wait a bit until LLMs get a proper product shell you don't need to walk on eggshells around. I would not promote LLMs as production-ready offerings until this aspect gets better.

Using an LLM is like having a therapy session where you, the user, are the therapist. Humans should not need to learn en masse to become AI therapists; that's the inverse of what should happen :D


You might be influenced by the perception that LLMs are autonomous.

They're just tools. The outcome depends heavily on the user's skills. That's why prompt engineering is a thing.

Heck, even Google results vary depending on the searcher's skills, and we're not calling that tool immature...

As regards therapy, I have had the opposite experience to the one you described.


I agree, most don't even know they can tell it how to behave.


Those who really desire to understand how things work will be undeterred by the temptation of AI. There are two types of people: those who care to know and really understand and those who don’t. Should we really force people, past a certain point, to care when it’s clear they don’t and are only doing something because they are forced to? I would argue that people should spend more time on the things they truly care about. That’s the critical difference; when you care about something and get enjoyment and satisfaction out of it, you want to understand all the fine details and have a thirst for knowledge and true insight. When you don’t care, you take the absolute shortest path so you can make time to do whatever it is that brings you true satisfaction. That’s perfectly okay with me because I do it all the time for things I couldn’t care less about.

If someone who wants to be a software engineer can’t be bothered to learn and understand the fundamentals I’d argue that software engineering isn’t the discipline for them. The more you understand, the larger the surface area of the problem you have for which to explore further.


ChatGPT (and other LLMs) still cannot perform well in physics in any consistent manner (and probably never will). I don't think physics departments are worrying much about AI. The only thing that can help students in a more reliable way is some coding projects, which is okay, because in most of these classes (computational physics) students are encouraged to work together, seek help, and even ask on the internet (even before ChatGPT, etc.). It was always about how to explain and describe the thinking. AI (at least in its current form) is very weak at the problem-solving aspects and at understanding concepts.

On the other hand, as a non-native English speaker, it saves me a lot of time paraphrasing poorly worded thoughts and writing that would otherwise take me an hour to express in a good formal manner. It can guide you through some aspects of coding tasks, introduce you to some APIs, etc. This is actually a good tool, one that I agree a good student (or researcher) could use wisely to gain some knowledge and save some time.

It will not help much with solving a cart on an inclined plane with some friction and a pendulum hanging from the cart. No, it will not be able to give you the normal modes.

This is just a personal experience and opinion, though. It might be completely different in other areas.


You should really think twice before making statements like "AI will probably never do X well." Many formal linguists made very strong statements about the impossibility of AI learning (insert feature here, such as pragmatic implicature), and they are now being shown to be wrong.

For instance, Miles Cranmer's work on using GNNs for symbolic regression is a start towards useful new discoveries in physics. Transformers are just GNNs with a specific message-passing function and position embeddings. It's not hard to see that, whether by a different architecture, augmentation, or potentially even just more of the same, we can get to new discoveries in physics with AI. The GNN symbolic regression work is evidence that it's already happened.

As for grounding knowledge in the LLMs we have at exactly this moment (a rather short-sighted view), there is plenty of interest and work in the area, which I expect will be addressed in a multitude of ways. Their ability with grounded physics knowledge is not perfect, but it's very good relative to the common knowledge of a human off the street. External sources alone make it much better, and that's just the exceedingly short-sighted analysis of what we have today.


Really? The only reason ChatGPT is more adept at coding problems is that there is vastly more training data. There's nothing fundamentally different between solving a coding problem and solving a physics problem. Like all the others before it, I don't think this comment will age well.


Really? Granted, LLMs might be a little weaker in physics than in other areas, but if someone figures out how to get LLMs to use a Mathematica API, and trains them some more, I can imagine some rapid progress.


The elephant in the room here is that these LLMs still have problems with hallucinations. Even if it's only 1% or even 0.1% of the time, that's still a huge problem. You could have someone go their whole life believing something they were confidently taught by an AI which is completely wrong.

Teachers should be very careful using a vanilla LLM for education without some kinds of extra guardrails or extra verification.


This is also the case when taught by any educator who happens to trust the source they looked up. The internet, textbooks, and even scientific articles can all be factually incorrect.

GNNs (of which LLMs are a subclass) have the potential to be optimized in such a way that all the knowledge contained within them remains as parsimonious as possible. This is not the case for a human reading some internet article without having gained extensive context within the field.

There are plenty of people that strongly believe in strange ideas that were taught to them by some 4th grade teacher that was never corrected over their life.

While your statements are correct in this minuscule snapshot of time, it's exceedingly short-sighted to assert that language modeling is to be avoided due to some issues that exist this month, and to disregard the clear improvements that will come very soon.


Damned, I'd have loved if my teachers only hallucinated 1% of the time. Instead we had the southern Baptist football coaches attempting to teach us science... poorly.


> The elephant in the room here is that these LLMs still have problems with hallucinations. Even if it's only 1% or even 0.1% of the time, that's still a huge problem.

If you heard the bullshit that actual teachers say (both inside and outside of class), you would think that “1% hallucinations” would be a godsend.

Don’t get me wrong, some teachers are amazing and have a “hallucination rate” that is 0% or close to it (mainly by being willing to say they don’t know or they need to look something up), but these folks are the exceptions.

Education as a whole attracts a decidedly mediocre group of minds who sometimes (often?) develop god complexes.


My middle school history teacher hallucinated much more than 1%. Much more than 10%, really. He was so bad that I needed to "relearn" history in high school.


In my experience, it's sometimes 100% of the time, even after repeated attempts to correct it with more specific prompts, even on simple problems involving divisions or multiples of numbers from 1 to 10 with one additional operation.


What does "sometimes 100% of the time" mean exactly? You seem to be taking the "30% of the time it works every time" joke a bit literally.


The parent post probably means that it's not a random chance independent of the question, that while for some (many!) types of questions the hallucination rate is low, there exist some questions or groups of questions for which it will systematically provide misleading information.


I'm surprised OpenAI is encouraging large system-style prompts for the main ChatGPT webapp, where they are less effective.

Now that the ChatGPT Playground is the default interface for the ChatGPT API with full system prompt customization, they should be encouraging more use there, with potential usage credits for educational institutions.
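
For reference, here's a minimal sketch of setting a real system prompt through the API (this uses the pre-1.0 openai Python client; the tutoring instructions are just an example):

    import openai

    openai.api_key = "sk-..."  # your API key

    # Unlike pasting instructions into the web UI, the API lets you set a
    # dedicated system message that shapes every turn of the conversation.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a patient tutor. Ask "
             "diagnostic questions before explaining anything."},
            {"role": "user", "content": "Help me understand fractions."},
        ],
    )
    print(response["choices"][0]["message"]["content"])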


from their own FAQ linked from this page:

    Is ChatGPT safe for all ages?

    ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT. 
so in other words: no

it's grossly irresponsible to be pushing "Teaching with AI" in this scenario


you act like parental consent wasn’t listed as a requirement. though it may not be broadly recognized as such, that requirement is an admission that it is foolish to hand a child access without guidance.

you know, like in the form of a parent. parental guidance. which starts with parental consent.

so in other words: it depends.

it’s grossly irresponsible to treat a hammer as inherently dangerous.


I disagree. Ethical teachers audit and examine all content they intend to be consumed by students -- it is their responsibility regardless of what medium or agents are used to create them. It is common for people to disregard that generative AI is currently a tool without agency whose use requires a selection process. Just as a camera needs to be aimed, AI does as well.


can you prove to me in a verifiable way that no matter what prompt I put into ChatGPT, it won't give me pornography back?


> can you prove to me in a verifiable way that no matter what prompt I put into ChatGPT, it won't give me pornography back?

No, but if it's turning you away even when you're explicitly asking for it, it's probably doing good enough. Nobody held Yahoo, Lycos or Altavista to this standard.

If accidental erotica is the worst outcome you can imagine for the shortcomings of AI teaching, please leave worrying about this to the professionals. Consider flawed chemistry lessons, where it tells some kid to mix two things they shouldn't. That will actually cause material harm to everyone around them.


No, but then I can't prove that to you about Google either and I don't see schools trying to ban that.


Can you prove in a verifiable way that, no matter what you ask your teacher, they won't give you pornography back?


It is the teacher's responsibility to evaluate any materials they present to students. If they are given an output they interpret to be pornographic, they decide whether to provide it or not to students. I imagine it is possible that you might determine something to be pornographic that a given teacher may not. Pornography is an interpretation, which varies culturally and politically. Regardless, it is definitely not my responsibility to prove what ChatGPT will provide whatsoever, I don't work for OpenAI.


Teaching doesn't presuppose that students are younger than 18.


Can’t imagine I’d have bothered engaging with any subject I wasn’t interested in if ChatGPT existed back then.

I'll always remember the glorious few months when I had Encarta at home, before too many students had it and before teachers clocked on, when homework became just printing off the page on the subject after removing the identifying bits.


You make a strong case about lack of education quality and make-work time-wasting foisted upon children.

Education is not a problem the human race has solved despite progress made.


"Please teach our models how to replace you"


The reality is that effective LLMs, combined with some kind of knowledge retrieval, are coming close to becoming the idealized individual tutor. This is also a daily reminder that studies show that individual tutoring is objectively the best way to educate people:

https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem


Personal tutoring and coaching is basically mandatory for mastery; name a professional concert pianist or athlete who doesn't have one. I act as a personal tutor for comp sci students, and I'm envious of them. I didn't have one, and I think it really limited my growth.


[deleted because don't want to be drawn into flamewar]


ChatGPT does tutoring just fine. I've had it draw up a lesson plan for me and execute it with hardly any special prompt engineering at all, just something like: "Please tutor me on French adverbs; please start by asking me a few questions to find out what I already know," and it dialed in fairly well to my level.


> [deleted because don't want to be drawn into flamewar]

Good on you. That is an inspiring demonstration of restraint.


[flagged]


You are replying to swyx.


Thank you, updated indirect references to direct. I know it's un-HN, but Jesus Christ am I tired of hearing this person's garbage quoted ad nauseam like gospel.


Jeez bro. This is a pretty intense reaction to a lukewarm and reasonable take. Personally I appreciate an "AI influencer" being down to earth and being willing to say that the technology isn't magic, amidst a huge amount of hype. If you think people are parroting swyx uncritically - that's hardly a criticism of swyx, is it?

I think you should keep reflecting on your realization about how people got swept up in the cryptoasset hype. You can believe this technology is promising and will improve dramatically without being a fanatic. You can disagree without going for the jugular.


To hear: "the man talks about things he has no clue about with disturbing loudness"

And turn it into: "that's actually not a criticism of the guy at all!"

Maybe when you start translating everything into what you want to hear, it's hard to turn off.


> To hear: "the man talks about things he has no clue about with disturbing loudness"

> And turn it into: "that's actually not a criticism of the guy at all!"

Ironically, that's not at all what I wrote. You made statements such as these:

> But they read tweets like this, and they have the typical developer blindspot of not questioning motive enough and they believe it!

> I'm tired of hearing this person's garbage quoted ad nauseam like a gospel

And so I observed:

> If you think people are parroting swyx uncritically - that's hardly a criticism of swyx, is it?

To be clear, if you feel other people are repeating swyx uncritically, that is a criticism of those people.

When it comes to people being loud while not adding knowledge to the discussion - with all due respect, you should consider the composition of your house before casting that stone.

I apologize if my comments are frustrating for you to read; I am doing my best to give you useful feedback. You behaved in a pretty extreme way, which is not considered acceptable in this community (or most communities). And I take it you knew better:

> I know it's un-HN but...


> that's actually not a criticism of the guy at all!"

> that's hardly a criticism of swyx, is it?

"Ironically, that's not at all what I wrote"

The most extreme idea in this thread is thinking anyone would consider your feedback based on the level of coherence you've shown: maybe give it a rest?


Alrighty. Best of luck.


Who?

It has been known that LLMs cannot reason transparently, nor can these black boxes explain themselves without regurgitating and rewording their sentences to sound intelligent; instead they are confident sophists, no matter what any random person tells you otherwise.

EDIT: This is the context before it was deleted by the grandparent comment:

>> i have yet to see any ai system properly implement individual level-adjusting tutoring. i suspect because the LLM needs a proper theory of mind (https://twitter.com/swyx/status/1697121327143150004) before you can put this to practice.

My point still stands.


You're showing why I'm so annoyed by this perfectly!

It's malicious to rope theory of mind into justifying that point because it's just wrong enough.

If the reader doesn't think deeply about why on earth you would ever rope theory of mind into this, their brain will happily go down the stochastic parrot route:

"How can it have theory of mind, theory of mind is understanding emotions outside of your own, the LLM has no emotions"

But that's a complete nerdsnipe.

If instead you suspect this person's underlying motivation is not genuine intellectual curiosity, but rather presenting a statement that is easily agreed with even at the cost of being wrong... you examine that comment at a higher level:

What is theory of mind adding here besides triggering your typical engineer's well established "LLMs are over-anthropomorphized" response? Even in psychology it's a hairy non-universally accepted or agreed upon concept!

Theory of mind gives two things at the highest level:

inward regulation: which is nonsensical for the LLM, you can tell it what emotion it's outputting as, it does not need theory of mind to act angry

outward recognition: we've let computers do this with linear algebra for over 2 decades. It's what 5 of the largest companies in technology are built on...

Commentary like that account's is built on being just wrong enough:

You calmly state wild opinions. There are people who want to agree with any calm voice because they're seeking guidance in the storm of <insert hype cycle>. They invent a foothold in your wild statement, some sliver of truth they can squint and maybe almost make out.

Then you gain a following, which starts to add a social aspect: if I don't get it, but this is a figurehead, I must be looking at it wrong. Now people are squinting harder.

This repeats itself until everyone has their eyes closed following someone who has never actually said anything with any intention other than advancing their own influence.

They don't care how many useful ideas die along the way, there's no intellectual curiosity to entice them to even stumble upon something more meaningful, it's just draining the energy out of what should be a truly rewarding time for self-thinking.


Emphatic assertions "it has been known" are anti-convincing.


As parent deleted, which tweet was being referenced?


https://twitter.com/swyx/status/1697121327143150004

There was no need to delete except being so trivially shown to be wrong, I didn't chase them to twitter or something.

But that's the MO for the tech grifter:

- you herd the few people who are unsure and will listen to any confident voice

- the people who know the most about <insert tech> tend to not like that, but when the herd is small just defer to their confrontations with humility and grace, and use that show of virtue to continue herding

- the more people you herd, the easier it is to get incrementally smarter people to follow: We're all subject to certain blindspots in a large enough crowd

- the more people who follow someone who's clearly wrong, the more annoyed people who are knowledgeable about <insert tech> will get about the grifter

- This makes each future confrontation more heated, so now the heated nature of the confrontation is justification to disengage without deferring. Just be confident and continue herding.

- rinse and repeat until people who don't follow the grifter gospel are a minority.

The actual VC dollars start chasing whatever story their ilk has weaved by then. And eventually it all collapses, because there was no intellectual underpinning: just self-enrichment.

That realization from the crowd exhausts any good will that was left for <insert tech> and the grifters move on to the next bubble.


Thanks for ref!

I share your frustration with those who confidently and prematurely write off rapidly changing AI tech based on dated examples, cherry-picked anecdotes from the unskilled, and zero extrapolation based on momentum. They do a double disservice to those who trust them: first, by discouraging beneficial work on ripe, solvable challenges, and second, by encouraging a complacency about rapid new capabilities that may leave vulnerable people at the mercy of others who were better prepared.

But, not being familiar with the account in question, I don't see those attitudes in that tweet. It seems more an assessment "no one has quite nailed this yet" than defeatism over whether it's possible.


The tweet was just a reference in their comment:

> i have yet to see any ai system properly implement individual level-adjusting tutoring. i suspect because the LLM needs a proper theory of mind (https://twitter.com/swyx/status/1697121327143150004) before you can put this to practice.

But to be perfectly transparent, I'd never respond so harshly to someone for just that tweet, or even that comment.

Instead it's the fact they're currently a synecdoche for the crypto-ization of AI. This person doesn't usually dismiss AI, instead they heavily amplify the least helpful interpretations of it.

_

This is one of the largest voices behind the new "the rise of the AI engineer" movement in which this author specifically claimed researchers were now obsolete to AI due to the tooling they built: https://news.ycombinator.com/item?id=36538423

Like, I get wanting to make money by capturing value as much as the next person... but basing an entire brand on declaring that the people who are enabling your value proposition are irrelevant just to create a name for yourself is pointlessly distasteful.

The only thing he gained by saying researchers don't matter and understanding Attention doesn't matter is exactly what I described above: a wild opinion that attracted the unsure, pissed off the knowledgeable, and served as a wedge with which he could carve out increasingly large slices of the pie for himself.

Fast forward 2 months, and now that the process has done its thing, the "AI engineer" conference is being sponsored by the research-driven orgs because they don't want to be on the wrong side of the steamroller.


Interesting prompts! IME the quality of the answers the users give to the ChatGPT questions in these prompts will make or break the experience.

I played around with this use case in the spring when my teenage daughter was looking for extra test prep materials. At first the experience was interesting but there was an "AI uncanny valley" shaped problem: the material just didn't seem to fit. It felt wrong.

This uncanny valley was significantly reduced, even eliminated in some instances, by including the entirety of our school district's online material about the course: information about the core competencies (across communication, thinking, personal & societal), the big ideas, and the curricular competencies and content from the learning standards. Our district has a pretty good website with all of this information laid out for each course and grade level.

Including all of this information in the prompt context resulted in relevant and harmonious content when asking to generate course outlines, student study-prep handouts, and even sample study session pre-tests (although ChatGPT wasn't strong at reliably creating answer sheets for the pre-tests).
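
For anyone wanting to try the same, the prompt skeleton looked roughly like this (wording approximate from memory, not my exact text):

    You are preparing study materials for [course name, grade level].
    Use only the district curriculum pasted below for competencies
    and standards:

    [pasted course page: core competencies, big ideas, curricular
    competencies, content / learning standards]

    Task: generate a course outline / study-prep handout / pre-test.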

Context is key!


An interesting trick I found here was to ask ChatGPT to produce tables of concept definitions and include a metaphor for each concept to help understanding. It was quite good at coming up with metaphors and that actually felt kind of magical.
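
Roughly, from memory (not the exact wording I used):

    For each of the following concepts, produce a table with columns
    "Concept", "Definition", and "Metaphor", where the metaphor is an
    everyday comparison that helps build intuition: [list of concepts].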


> Building quizzes, tests, and lesson plans from curriculum materials

Example prompts that OpenAI shared here are a great start. However, I think these use-cases are better served as micro apps built on top of these prompts. For example, a teacher will keep coming back to use this prompt with the same or a similar set of responses for most of the year. On top of that, enriching the context with additional information pulled from local sources will quickly become a need.

ChatGPT's custom instructions will help with not having to repeat prompts, but the interface falls short when it comes to repeated, narrow use cases. This is where, imo, LLM apps shine. A simple app built with langchain or some low-code platform, providing local data from a vector store, can be super powerful.

We recently open-sourced LLMStack (https://github.com/trypromptly/LLMStack), a platform that allows users to build these micro apps to automate their workflows. Our goal is to make these workflows sharable so someone can download a yaml file for this prompt and chain and start using it in their job.
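
To make this concrete, here's a rough sketch of the micro-app shape I mean, using langchain with a local FAISS vector store. It assumes the current (mid-2023) langchain API, an OPENAI_API_KEY in the environment, and "curriculum.txt" as a stand-in for whatever local material you'd load:

    from langchain.chat_models import ChatOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.chains import RetrievalQA

    # Split the local curriculum material and index it in a vector store.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    docs = splitter.create_documents([open("curriculum.txt").read()])
    store = FAISS.from_documents(docs, OpenAIEmbeddings())

    # Wire retrieval into a QA chain so every answer is grounded in local data.
    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(temperature=0),
        retriever=store.as_retriever(),
    )
    print(qa.run("Build a 10-question quiz on unit 3, with an answer key."))

Wrap that in a tiny UI and a teacher can re-run the same narrow workflow all year.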


Exams that allow calculators assess tasks harder than calculation.

By analogy, if LLMs were allowed in an exam, what harder tasks would be assessed?


Likewise, you learn how calculation works before being let loose on the calculator.

Similarly, pilots must learn to fly under various regimes of automation, despite primarily using high-automation settings.


A better way to test whether students have learned is to look at the questions and answers the student went through while interacting with the chatbot, and especially to evaluate the follow-up questions the student asked, which show how much they actually read and understood. Technically, the Q&A from the student's chat interaction on a subject could be fed back into the chatbot to assess their understanding, then followed up with verbal and other forms of testing, maybe writing on paper. I have learned a great deal using these tools since they came out. I think people are looking at them the wrong way, focusing on just prompting and getting answers, versus the thought process that goes into understanding the concepts.
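
As a naive sketch of that assessment step, using the openai Python client as of 2023 (the rubric wording here is mine, purely for illustration):

    import openai

    def assess_transcript(transcript: str) -> str:
        # Feed the student's full Q&A transcript back in for grading.
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You grade a student's understanding from a tutoring transcript."},
                {"role": "user",
                 "content": "Rate the depth of the student's follow-up questions and "
                            "list any concepts they appear to have misunderstood:\n\n"
                            + transcript},
            ],
        )
        return resp["choices"][0]["message"]["content"]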


ChatGPT, write a believable interaction between a smart curious student and ChatGPT that demonstrates the student learning from the interaction.


Soon to be: teachers ARE AI.

You're all having fun now. But you'll regret using AI for anything because soon humans will become mostly fit for manual labour while AI concentrates the wealth of the world into the hands of the tech elite.

Then, without a human connection in teaching, children will grow up into psychologically damaged adults.


> Soon to be: teachers ARE AI.

> soon humans will become mostly fit for manual labour

> without a human connection in teaching, children will grow up into psychologically damaged adults.

If humans are only going to be doing manual labor, what will the AI teacher be teaching? Do you need 16+ years of education for manual labor?

Just taking your argument at face value, I don't understand how "AI replaces nearly all human knowledge workers" leads to "children become psychologically damaged adults."

It seems like it would free them from being strapped into a chair for 16 years and denied the opportunity to be children in an attempt to prepare them for a life of knowledge work? Unless we just keep up the ruse of an entire childhood of classroom based education for ... reasons?

To push past your argument: society and knowledge aren't zero-sum.

I'm not writing software because it's the single most important thing in the universe for me to focus on right now. It's actually pretty low on the list of important things on the grand scale of important things. I'm writing software because it's the work that needs to be done right now and there isn't a replacement for me doing it.

I feel like you are asserting that plugging numbers into spreadsheets as an accountant or doing string transformations "at scale" to convert DB queries into HTML and JSON is both: 1) A fulfilling life 2) The only thing humans could possibly be doing of value right now; if you take this away there is nothing left

There are a tonne of fundamental questions/problems about life, the universe, interstellar travel, preservation of our species, etc. that I _just don't have time for_ right now because I'm over here trying to figure out how to take these bytes coming over the wire from an SQL query and pack them into a JSON object so a browser can hydrate this bit of HTML. And I'm sorry, but, this isn't how I'd choose to live my life if there was someone else I could put in this seat.

Please AI take my job so I can be free to focus on all of the stuff that comes with the next layer of abstraction/automation.


> I feel like you are asserting that plugging numbers into spreadsheets as an accountant or doing string transformations "at scale" to convert DB queries into HTML and JSON is both: 1) A fulfilling life 2) The only thing humans could possibly be doing of value right now; if you take this away there is nothing left

No, that is bad too. I am rather asserting that we already have enough technology to work LESS, live more SIMPLY, and use our time to develop a sustainable way of living minimally without spending time endlessly seeking economic growth and industrial progress.


Or teachers focus more on helping kids with the fundamental social and organizational skills necessary for learning and cooperating while AI handles the individualized lesson plans for each of the topics. The kids become much better adjusted and much more knowledgeable and go on to use AI in their working lives to create unimaginable amounts of wealth and productivity.

In other words: you know what beats one elite with an AI? Ten thousand well educated people each with their own AI.


> In other words: you know what beats one elite with an AI? Ten thousand well educated people each with their own AI.

So now we'll be constantly fighting each other with endlessly evolving AIs. The world is already a cutthroat place with fierce competition.

Think of this analogy: imagine a karate match with people fighting each other. Sure, some people will get hurt but it's mostly controlled. Now imagine a world of people fighting each other by flying airplanes into each other...everyone dies.

We are unprepared for a new world where thousands of people fight for their slice of the pie with advanced AI. It will result either in insanity or in a hugely more efficient use of resources (the FIRST thing people will do is figure out how to use advanced AI to get a bigger slice of the pie), so that the natural biosphere will be destroyed even more efficiently.


I’m sorry you think that way. Just make more pie. Commodity prices go down over time. Intelligence makes it a non zero sum game.


Seems like we could head towards a world where people go to school from home, learn from AI, work remotely, get food delivered, find entertainment in VR. Apartments get smaller and smaller, until most people are essentially just renting a room in a large dorm, which they almost never leave.


Exactly the world described more than a century ago in The Machine Stops, which I think should be required reading in all CS curriculums. Free to read here: https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/....


I'm 10 pages in and think it should be required reading not only for CS curriculums, and I regret not having been exposed to it earlier.

Thanks for sharing.


No. VR is to the WWW what the WWW was to the internet. It will bring the rest of the world onto the net, where previously only print, video, and audio were. AI will be the next UI medium: NLUI (natural language user interface) or SUI (spoken user interface); somebody will come up with a better name.


I made https://anylearn.ai, an education app built on OpenAI. If you click the settings icon, then the teach tab, it will generate a teaching guide on any topic. Try it!


I put in "8th grade french" and it gave me a guide on how to develop a teaching guide, not the teaching guide itself. Like "Step 4: Prepare instructional materials", "Step 5: sequence the lesson", etc., with generic instructions for each. The Test Questions tab has questions about my knowledge of lesson planning, not questions for French students.

"College-level calculus" was similar, just vague generic high-level advice with no lesson plan or specific guide.


Good catch. Will modify the prompts to make it produce the desired content.


If there were a tab with a code example when the lesson is related to programming, it would be perfect, as the chat doesn't render markdown code blocks.


Somebody write a textbook chunker that generates context from textbooks for LLMs to build Anki cards, please.

Extra credit if you build a new anki that dynamically generates cards with different text and the same meaning to prevent answer memorization.
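
A starting point for the chunker half might look like this (untested sketch against the 2023 openai API; character-count chunking is the crudest possible choice):

    import openai

    def chunks(text: str, size: int = 3000):
        # Naive fixed-size slices; a real chunker would split on sections.
        for i in range(0, len(text), size):
            yield text[i:i + size]

    def cards_for(chunk: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": "From this textbook excerpt, write 3 Anki-style "
                                  "flashcards as 'Q: ... / A: ...' pairs:\n\n" + chunk}],
        )
        return resp["choices"][0]["message"]["content"]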


Anecdotally, my friend who's just starting out teaching high school physics has used ChatGPT to generate worksheet questions, with mixed results: they have to throw out the majority of what it generates but still save time overall.


Just asking it to make up 10 questions isn't a great way of doing it most of the time.

It turns out that writing a single good question is really a bunch of different questions in itself. You have to ask of each question: "How can this be misinterpreted?", "Can the question be written better?", "Is this a challenging question that actually causes a person to learn?"
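
Something like this critique pass, where `ask` is a hypothetical helper wrapping a single chat-completion call:

    CRITIQUES = [
        "How can this question be misinterpreted?",
        "Can the question be written better?",
        "Does answering it actually cause a person to learn?",
    ]

    def vet_question(ask, question: str) -> list[str]:
        # Collect the model's feedback on the candidate question for each critique.
        return [ask(f"{c}\n\nQuestion: {question}") for c in CRITIQUES]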

A lot of human-generated questions are just confusing hot garbage in and of themselves. Quite often we encode cultural biases in the questions. Or a person who actually knows the topic can get the question wrong, if they are only supposed to formulate the answer from the paragraph shown to them.

The AI Explained channel on youtube just had an episode about this in relation to the tests we're giving AI. Turns out a lot of the questions just suck.


Personally, using GPT4 has been absolutely invaluable to me when attempting to learn. I’m extremely jealous that kids these days have access to tools this powerful.

The problem with these types of learning tools, same as with the Khan Academy one, is that they're too "safe-guarded" for general public release. That's obviously needed for a public launch, but giving someone free rein of the models, with some prompt teaching and an understanding of hallucinations, could help kids learn so much more efficiently in the future.


Who teaches the AI? I think that will become a mostly unnoticeable but persistent and growing problem in the coming decades.

We currently have a huge amount of pre-AI knowledge to train them on, but in the future that could slowly get replaced by AI output with all its inaccuracies, which then gets used to train future AIs, and transitively the people they'll have taught, etc.


Something that's confused me about detector products is: presumably OpenAI stores all prompts and responses. Wouldn't it make sense for them to provide a detector service that checks against their own records? Sure, students could use another LLM but it seems like low-hanging fruit to at least offer that as a product to higher ed.
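
Even a crude matching step would catch the laziest cases; something like this (real matching would need to be far more paraphrase-robust):

    from difflib import SequenceMatcher

    def matches_stored_output(submission: str, stored: list[str],
                              threshold: float = 0.85) -> bool:
        # Flag submissions nearly identical to a previously generated response.
        return any(SequenceMatcher(None, submission, r).ratio() >= threshold
                   for r in stored)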


Finally OpenAI admits AI detectors are useless.


Here’s my thing. It’s kind of like calculators and computers. It’s fine to use calculators later on because once you’ve mastered basic division you don’t need to do it 40 million more times. But if you gave calculators to first graders I struggle to believe they’re gonna be learning shit lol


I don't get this whole commotion with LLMs and education. Aren't show-and-tell and in-person classes enough to weed out the cheaters? LLMs are a killer for homework, but not when you are in a class with a teacher present.


“Teaching students about critical thinking”

This is an odd point. You're already in school to learn to think critically, and why not just google it if you need to check its accuracy anyway?


That's how I've been using it for a week now, to clarify certain concepts from computer science that I always had little confidence in, and it has been excellent.


Care to share an example? (the concept and how it was clarified)


Teaching with AI sounds kinda meh. But I would like to see it in motion.


Great, highly informative knowledge!


I can't be the only one thinking, given how much ChatGPT gets confidently wrong, that it's way too early to be talking about funneling this into classrooms?

The internet is bursting with anecdotes of it getting basics wrong. From dates and fictional citations, to basic math questions... how on earth can this be a learning tool for those who are not wise enough to understand the limitations?

OpenAI's examples include making lesson plans, tutoring, etc. Just like with self driving cars - too much too quick, and many are not capable of understanding the limits or blindly trust the system.

ChatGPT isn't even a year old yet...


It's probably the perfect time to be talking about it, given how fast the advancements occur.

They probably won't still be using that model a year from now, while people will be using that website for many years.


It doesn’t get the kind of things taught in most classrooms wrong in the way it gets business applications wrong, because there’s a (mostly) correct response that isn’t going to vary a ton from source to source. The weighting will always push its responses towards the right answer, though in moments of relative uncertainty I guess if you had the temperature turned super high you might get some weird responses.

It’ll (mostly) always know about the Sherman Antitrust Act and what precipitated its passage, for example.

That said, OpenAI repeatedly suggests verifying responses and says, “make it your own” which IMO includes spot checking for correctness.


> It doesn’t get the kind of things taught in most classrooms wrong in the way it gets business applications wrong, because there’s a (mostly) correct response that isn’t going to vary a ton from source to source.

It's fabricated legal cases and invented citations to back up its statements.

The issue is, it can be difficult to know when it's wrong without putting in a lot of effort. Students won't put in the effort, and that's assuming they're even capable of understanding when/where it's wrong in the first place.

Just like self driving cars - we can say "pay attention and keep your hands on the wheel at all times"... but that's not what everyone does and we've seen the consequences of that already.

We need to be careful here. This tech is new. ChatGPT hasn't even existed (publicly) for a year. Getting it wrong and going too fast has consequences. In the education space in particular, those consequences can be profound.


This is nothing at all like self driving cars; firstly the risks are not even in the same ballpark, and secondly every piece of advice given includes, “check the response independently.” It says nothing about a tool like this if people choose to misuse it.

At some point, using LLMs like ChatGPT recklessly is on the user, not the tool.


The internet is sampling the interesting samples, not necessarily a realistic picture.

I'd love to see a good research study on this that shows the actual error rate as well as a comparison with other non-human alternatives (e.g. googling, using textbook only, etc) as well as possibly human (personal tutor, group instructor, ...)


> The internet is sampling the interesting samples, not necessarily a realistic picture.

A tutor is expected to know the subject and guide the student. If, say, 10% of the time it guides the student into a false understanding, the damages are significant. It's very hard to unlearn something, particularly when you have confidence you know it well.

My personal adventures with ChatGPT are probably close to a 50% success rate. It gets some stuff entirely wrong, a lot of stuff non-obviously wrong, and even more stuff subtly wrong - and it's up to you to be knowledgeable enough to wade through the BS. Students, learning a subject in school are by definition not knowledgeable enough to discern confident BS from correctness.

Will ChatGPT be useful in the future? Yes, almost certainly. But let's not rush this and get it very wrong. The consequences can be staggering in the education space - children or adults.


I'm getting north of 90% success with GPT4, and while a dedicated tutor or a group instructor would definitely be better, none of the other non-human alternatives come close. Searching the internet and youtube tutorials can also lead to wrong information and false understanding - all self-directed methods have this pitfall. ChatGPT, however, is the only one where you can probe deeper once you find problems.

If I had to place it somewhere, it would be between a study buddy and a tutor, closer to a study buddy.

Still - a well designed study will give us a much better picture of where we actually are. I think that would be extremely valuable.


It is reasonably trivial to weed out whether students are producing their own work, or are cheating using LLMs.

Have the students produce their work in the classroom under examination-style conditions, with no electronic devices allowed in a classroom. Pens / pencils are allowed. Paper books can be allowed. No electronics. Let's see you write out that insightful essay about social hierarchy in Romeo and Juliet in longhand. Calculate the roots of that quadratic equation using pencil and paper, please. Explain the links between the Treaty of Versailles and World War II using what you learned in class, and a (paper) textbook for reference if you must.

We have literally been doing this for hundreds of years. We were doing it when I was at school ~20 years ago. Any kid caught using a calculator in the classroom was told to put it away one time, or the teacher would confiscate it. Obviously, no laptops. The teachers used a blackboard and chalk; and the occasional slide projector.

I don't completely understand why people act as if it's impossible to teach properly now that we have LLMs; but perhaps a general over-reliance on laptops and electronic devices in the classroom is the reason. As it is, kids (and adults) have a huge problem with screen time, so it would serve us well to get away from it.


I see a general model here. In "learning mode" (information injection & assimilation) you can "gear up" all you want. In "testing mode" (info retrieval and presentation) you're on your own.



