Conversational AI is a great tool for education (twitter.com/vishnuhx)
43 points by vishnuharidas on Nov 25, 2023 | 31 comments



I completely agree. The power of LLMs and conversational AI will give each student their own private tutor. Of course there are issues today, but the future will be wonderful for whoever is able to embrace this technology.

I have already used GPT to help me tutor my kids, and my kids use it themselves when they get stuck. They get unstuck faster. They are critical, but also a bit too willing to accept its responses as fact; we discuss this regularly and they seem to be getting the point.

So many kids get left behind because a teacher is unable to spend time with them. How amazing would it be for each student to have their own supporting teacher?

Hopefully we will be able to harness AI for good.


I find it draining to have to be on the lookout constantly for hallucinations, or omissions a person wouldn't make. I imagine as long as I'm walking well known paths -- well known to many but not me -- I'm safe, but the moment I need nuances I can expect that one of those nuances is completely and convincingly made up, except I don't know which one.


> be on the lookout constantly for hallucinations, or omissions a person wouldn't make.

--- Why is the sky red at dawn?

The angels are baking, honey! ---

You give a lot of credit to people.


LLMs are especially fit for teaching students to formulate good questions. As LLMs have endless patience, bad and boring questions won't exist anymore.


I think this is an absolutely critical point. From my own tutoring experience, insight with struggling students usually comes only _after_ engaging many "bad" questions in a row. Usually I find that students struggle because they do not understand, or are not aware of, some fundamental fact about the topic they are studying. Once you answer the last "dumb" question, they quickly learn all the other stuff they didn't know, because they have a sure footing.

In the future I expect AI tutors to ask more questions of students to quickly identify where their knowledge gaps are (so they won't even have to formulate the "bad" questions themselves).


I’d think that the most efficient exchange would be 50/50 with both student and AI answering and asking questions. My opinion is that the only problem with “bad questions” is emotional. They’re actually an incredibly efficient vehicle for identifying weaknesses.


I think as long as they also learn to stay somewhat skeptical and question things, this is great! Even for adult learning, this is a great approach.


> the future will be wonderful for whoever is able to embrace this technology

I'd change that to "the future will be wonderful for whoever is able to afford this technology." AI has the power to give some people a massive competitive advantage over others, and some people will be willing to spend a lot to keep that to themselves. It'll be democratized to an extent, but you can be absolutely certain "AI tutors" will be a market, and people at lower incomes won't be able to afford the best ones (which is fine; it's just like human tutors. That's capitalism).


I disagree here: there is a noticeable marginal cost to having additional human tutors. It's quite different with chatbots.

Also, I personally think you are under a misconception. Commoditised high-quality AI tutors would increase variance/inequality of outcomes, not decrease it, as they would increase the learning rate of gifted individuals even more drastically than that of the general population.

The more edtech improves and becomes cheaper, the more the limits of individuals will be set by their biological constraints, which are mostly genetic.


Education is already low quality as it is. Do we want to make it even lower quality using procedural text generators?


This. It's really hard to discern nuance when you do not understand a specific field. The value of procedural text generators is higher when you, as an individual with the knowledge, understand when the AI is hallucinating or spewing nonsense versus when you are rubber-ducking your way into solutions.


Breathless anecdata like this always talk about self-education.

The OP is saying "I learned <TECHNICAL_JARGON>," but did they? How are they quantifying learning? How do they know that what they "learned" is even correct?

I agree with the headline, but I think it needs a qualifier: "in the presence of an educator." The educator, which can be a technical text, is there to sanity-check the conversational agent.

In my experience the best use of such agents at this time is in a domain where I’m already an expert and I want it to do some remedial work or lookups for me.


Wouldn't you be afraid that the AI is wrong? I've seen AI explain some things well, but I've also seen it manufacture completely fake concepts to fill in the blank. Maybe textbooks aren't perfect either, but at least they're written by accountable parties and read hundreds of times before being published.


I think that in many ways this is a plus. It's actually really hard, outside of actually attending a class with fellow students, to find examples of something "wrong" that you can then attempt to cross-reference and correct. But when studying with other students, that's what you do all the time.

The friction involved is counterbalanced by the conversational aspect which makes it feel less tedious or even fun.

I often like to have the AI (a rough sketch follows the list):

- provide exercises

- play the student, asking me, the teacher, to clarify and answer questions

- put on a little theater play with different students
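
For the "play the student" part, here's a minimal sketch of what I mean, assuming the OpenAI Python client (the model name and prompt wording are just placeholders, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder system prompt: the model plays the student, I play the teacher.
    messages = [{
        "role": "system",
        "content": "You are a student learning recursion. Ask me, the teacher, "
                   "one short clarifying question at a time, and push back "
                   "politely when my explanation is unclear.",
    }]

    while True:
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        question = reply.choices[0].message.content
        print("student>", question)
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": input("teacher> ")})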

I was talking about this earlier on mastodon:

- https://hachyderm.io/@mnl/111471750626975201

- https://hachyderm.io/@mnl/111467764783141450

- https://typeshare.co/go-go-golems/posts/ai-driven-self-educa...

I also find it one of the most effective ways of learning, and complement it with the usual book/video learning and (very importantly) writing down the important points learned during the conversation (potentially coming up with flashcards, which can then also be "cosplayed" by GPT), because GPT transcripts are just indigestible.


I've used LLM chat for finding statistics references many times; the important part is often surfacing the missing keyword or jargon needed to find the actual literature on a topic. If there's actual literature, you find it; if it's a hallucination, then you get a sense that the idea is at least very obscure, if not entirely unstudied. Though I have found some very obscure topics by this method...


Yes, you need to be aware of the possibility of confabulation by the AI. But you can minimize that problem by:

- Using an LLM that hallucinates less. GPT-4 is more reliable than GPT-3.5, and the just-released Claude 2.1 is reportedly better than its predecessor. In my experience, Bard confabulates too much to be useful for many purposes.

- Using the AI to explore relatively general topics. In my tests, GPT-4 is excellent for getting an overview of, say, linguistic theories, the history of ethics, or the differences between quantum and classical physics. The more focused the topic is--how a particular verb conjugates in Romanian, what David Hume said about the death penalty, how gravity affects neutrinos--the more you need to double-check with other sources.

- Focusing not on learning facts but on interactive exploration. One interesting exercise is to discuss counterfactuals: How might human civilization have developed if electricity had not been harnessed? What would have happened if a fifty-meter-diameter asteroid had struck the Rhine Valley in May 1944? There are no right or wrong answers to such questions, but exploring them with the AI can be very rewarding.

I agree with the OP: Interaction with LLMs can be a great way to learn, and it will only get better as their reliability improves further and they become more customizable for the individual learner. What I want most now is for them to have a persistent memory of our past conversations. Better multimodal capabilities would also be nice.


Yes, I have experienced this multiple times; for example, ChatGPT and Bard giving totally opposite answers to the same questions. So I learned to take their answers with a pinch of salt and to do further research (either Google it, or ask another Conv-AI tool) when I start to smell something wrong.

IMO, these Conv-AI tools should indicate to the user when they are hallucinating.


> IMO, these Conv-AI tools should indicate to the user when they are hallucinating.

If they could do that, then it would be fairly easy to "not hallucinate".

To put it another way, the "don't hallucinate" problem and the "warn me if you're hallucinating" problem are in the same difficulty class.


Like many human habitual bullshitters, they do not even know when they're doing it.


Yeah but no but...

Generally, the areas where GPT starts to become unreliable are usually fairly "off-piste". My gut feeling is that in most standard educational contexts it would probably be fine.

Curious to hear specific examples where people have found this not to be the case.


No more than I would be afraid that a person I’m having a conversation with could be wrong.


Surely some people are more trustworthy than others?

Some people are quite introspective, have pretty good meta-awareness, and hedge when they're not sure about something, while other people are known to just bullshit whenever they don't know something (or when they just feel like it), and don't change this behaviour even when lots of people have pointed it out.

How much more would you be worried if you were having a conversation with a known bullshitter?


It's a trade-off. You get handholding and customized learning at the expense of accuracy. Over time, accuracy will improve and the ratio will skew more and more towards favoring AI.


Depends on how far into the weeds you get and the model you're using. GPT-4 is pretty reliable at explaining concepts at a high level, and it's better at it than any textbook or wiki page can be because of its ability to answer follow-up questions in context. It's basically the next best thing to a college professor's office hours.


Yes, even the original author of the tweet acknowledges it: https://twitter.com/vishnuhx/status/1727796397150224518


It's more akin to having a conversation with a well-educated person. They aren't perfect either, but it's good enough to pick up general knowledge. It's an amazing tool.


Honestly this is a perfectly valid concern. I mostly use it as a jumping off point. Sometimes we don't have the language required to do our own search on a subject.

If the LLM can give me a bird's eye view of the subject, then it enables me to go off and do my own research and come with my own conclusions, even if they don't align with what the LLM originally told me.

The fact is, there's a ton of misinformation on the Internet. Doesn't matter if you're getting your info from an LLM or not, you should almost always be trying to get your info from multiple sources if possible.


Obviously some work is needed to reduce/eliminate hallucination, but LLMs will definitely transform learning. At worst you'll occasionally need to text-search your notebook to confirm certain facts. Stuff like RAG and fine-tuning is already being done to improve on those issues, and I assume improvements will be made at the model level eventually.
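
For what it's worth, the RAG idea in a bare-bones sketch, assuming the OpenAI Python client (the model and embedding names are assumptions, and the "notebook" is just a hard-coded list here):

    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    notes = ["Bayes: P(A|B) = P(B|A) * P(A) / P(B)",
             "The central limit theorem needs finite variance."]  # your notebook

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
        return np.array([d.embedding for d in resp.data])

    note_vecs = embed(notes)

    def ask(question):
        q = embed([question])[0]
        # OpenAI embeddings are unit-length, so a dot product is cosine similarity.
        best = notes[int(np.argmax(note_vecs @ q))]
        prompt = f"Answer using only this note:\n{best}\n\nQuestion: {question}"
        out = client.chat.completions.create(
            model="gpt-4", messages=[{"role": "user", "content": prompt}])
        return out.choices[0].message.content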

When combined with the ability to run code and read images, I think it will really help with learning math. Show it your work, have it tell you why you got a wrong answer, and then it can tell you which concepts you need to review.
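
And a rough sketch of that show-your-work flow, assuming the OpenAI Python client and a vision-capable model (the file and model names are placeholders):

    import base64
    from openai import OpenAI

    client = OpenAI()
    with open("worksheet.png", "rb") as f:  # a photo of the handwritten work
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # placeholder vision-capable model name
        messages=[{"role": "user", "content": [
            {"type": "text",
             "text": "Here is my worked solution. Where did I go wrong, "
                     "and which concepts should I review?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}])
    print(resp.choices[0].message.content)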


This is certainly true, and there will be many ways to improve on the default ChatGPT experience to provide a full tutor/educational experience. I wonder who's working in this space (besides Khanmigo)?


I just generated a reading comprehension story about two Pokemon characters, along with multiple-choice questions. This is pretty amazing for helping my kid WANT to read something.
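
For anyone curious, a single prompt was all it took; a minimal sketch assuming the OpenAI Python client (the character names and wording here are made up):

    from openai import OpenAI

    client = OpenAI()
    prompt = ("Write a 300-word adventure story starring Pikachu and "
              "Squirtle for a seven-year-old, followed by four "
              "multiple-choice comprehension questions with an answer key.")
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    print(resp.choices[0].message.content)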


I had a lot of fun having ChatGPT generate a fake diary of, say, Adolf Hitler for certain dates in history, as well as time-machining him into the present. I just like being able to pick up a piece of history and turn it around and look at it from different sides.

The big disclaimer is that you can really only do that with subjects that you already know very well. Then again, precisely that can be part of the fun: seeing where the model gets it right versus where some details conflict with something you know from other sources.



