Ask HN: Give 8 year old access to Chat GPT?
13 points by fishcakes on Dec 13, 2023 | 26 comments
My 8 year old daughter loves using Chat GPT / Dall-E to generate images. She also loves to ask the AI questions.

Has anyone given their child access on their own device (she has an iPad) and had any good / bad experiences? Any thoughts on how to do this in a way that doesn't expose her to everything in the world?




I've found ChatGPT to be quite safe (the children don't know how to produce an unsafe input, and OpenAI's guardrails are pretty high).

In the same vein, I encounter this issue with Midjourney. My nine-year-old enjoys using Midjourney to create images, but I'm concerned about her ability to chat with strangers on Discord.


> but I'm concerned about her ability to chat with strangers on Discord.

As you should be. For anyone else who might be on the fence about this, take a look at this interview: https://www.youtube.com/watch?v=YJEvA7Plr78


Giving children free access to the Internet is already a risky proposition that can easily lead to them finding inappropriate content. Content creators push toxic belief systems, narratives, and marketing strategies to sell even things they don't believe in. In other words, the Internet is actively trying to harm you for profit and it's on the user to avoid it. I still think the benefits outweigh the harms - mainly access to information, communities and entertainment.

The AI, on the other hand, is designed to avoid hurting people's feelings, and even though it often fails or gives incorrect answers, its damage is not seeking you out. I don't have children, but if I did I'd want to believe that I would either not give them any technology or give them all of it and try to show the importance of critical thinking on the side (though they may pick it up naturally without your help - only one way to find out).

Great question by the way.


Personally I would vote "no" regardless of whether I had confidence the kid wouldn't be exposed to the dregs of the Internet, for two reasons:

1. LLMs are I think pretty inarguably bad at answering certain types of questions in a factually accurate way, at least without nontrivial prompt engineering. I don't think many 8 year-olds will recognize which types of questions the LLM will give inaccurate answers to, or be ready for the sort of prompt engineering that'd be required to mitigate this.

2. Learning is of course more than simply obtaining and memorizing facts. You don't want to overfit your kid - you want them to have to struggle a bit for their understanding so that it generalizes. Sure, formulating questions is part of that process, but there's something to be said for having to sift through pages of search results (or even better, from a learning standpoint, searching through the pages of a book they had to find on a library or bookstore shelf). It's like the old idiom, give a man a fish and he eats for a day, teach a man to fish and he eats for a lifetime. Better to teach your kid to fish (and for knowledge/understanding, not mere facts).

As for image generation, I suspect you'd find more inappropriate content come up there than with a chatbot - and teaching the kid to draw, paint, etc would probably be better for their development.


> LLMs are I think pretty inarguably bad at answering certain types of questions in a factually accurate way

I wonder how an LLM would compare to the average pre-internet adult an 8 year old might have had access to. People are also bad at accurately answering questions.


That's what books were for (among other things): they were knowledge prosthetics.

EDIT: And to be clear, yes, books were conveying information from other adults. But 1) the capital requirements of engaging or operating a printing press meant that there was some gatekeeping (for better as well as worse) concerning which adults' views made it into print, and 2) there were things like peer review and other social technologies to increase confidence in the accuracy of the information contained in books, i.e. to make that gatekeeping more than merely economic. LLMs are ingesting the unfiltered thoughts of anyone with internet access, and then noisily producing outputs based on them (with some limited inexpert human fine-tuning at the end).


Not all kids were encouraged to engage with books. I don't think LLMs are a replacement for books. But they might be a decent replacement for a fallible human who read some books in the past and vaguely remembers some of them.


I played around with it with my kid under supervision, and left her alone with ... Midjourney, I think? A million weird dragons ensued.

I probably wouldn't give her access to chatgpt, but more out of dislike of my kid's chats being stored by a corporation that will use them for something. That's the reason the kid's email is protonmail and the calendar is in nextcloud.

I would consider setting up https://ollama.ai/. You can run it locally, and because it's smaller and not as smart as state-of-the-art LLMs, it could train them to be more wary of the more advanced versions.
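
If it helps, here's a minimal sketch of what talking to a local Ollama server from a script could look like. This assumes Ollama is installed and serving on its default local port (11434) and that some small model has already been pulled (the model name "llama2" below is only an example, not a recommendation):

    # Minimal sketch: query a locally running Ollama server over its HTTP API.
    # Assumes `ollama pull llama2` (or another model) has already been run and
    # the server is listening on the default port 11434.
    import requests

    def ask_local_model(question: str) -> str:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama2",   # example model name; any pulled model works
                "prompt": question,
                "stream": False,     # return a single JSON object instead of a stream
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask_local_model("Why is the sky blue? Explain it for an 8 year old."))

The nice part is that nothing leaves the machine, so the "chats stored with a corporation" concern from the parent comment goes away.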


Yeah. Safer than roblox, youtube, basically most of the "kid stuff". At some point, you have to expose them to pointy things, and 8 is a decent age.


Yes. It fits well with a curious child. Adding a custom prompt like "I am an 8 year old" before each request would give age-appropriate answers.
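
For anyone who'd rather wire this up themselves than use Custom Instructions, here's a rough sketch with the OpenAI Python client. The model name and the prompt wording are placeholders, not a tested child-safety setup:

    # Rough sketch: prepend an age-appropriate system prompt to every request.
    # Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are talking to an 8 year old. Answer in simple, friendly language, "
        "avoid mature or frightening topics, and keep answers short."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Why do volcanoes erupt?"))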


This is something @Asa Miller and I are building: apps for kids under 12 that unlock access to the wonder and power of the web in a fun / safe way.

We’ve started with social messaging for kids under 12, that includes a kid friendly GPT bot they can interact with... Genie.

DM me for early beta access.

Beta site: https://beta.genie.gg


How do I put this?

Assuming a given LLM was trained on N percent of the Internet, exposing your kid to that LLM is, given sufficient time and API spend, exposing them to everything in that N percent.

So, I leave it to you to determine how comfortable you are allowing your child to more or less freely experience N percent of the Internet.


This isn't how LLMs work - they're trained to be helpful and give correct responses. You could argue about the extent to which that's achieved, but the point is that it's not a weighted average of the internet; it's been fine-tuned towards correctness and helpfulness.


I thought one of the biggest arguments against them was the fact that they inherently aren't good at being correct? They just produce something that looks like a human wrote it real good.


> They just produce something that looks like a human wrote it real good.

That's what the base model does. To get an LLM assistant there are additional training phases to make it conversational, helpful and more correct - https://www.youtube.com/watch?v=bZQun8Y4L2A


Sure… and so far there always appears to be a way of breaking that fine-tuning; see the recent paper on training data extraction I linked in another comment below.


There's a big difference between being breakable and being representative of the web content used for training like you claimed earlier.


Again, see the link… get it to repeat the same word and it will give you back its raw training data. We’re still discovering the (potentially limitless) ways these things can be tricked into regurgitating what they were trained on; it’s entirely possible there’s no way to stop them doing so.


That's like saying they shouldn't talk to their grandfather because he is a war veteran. A grandfather knows what is appropriate to say to an 8 year old. ChatGPT is well trained not to spout "bad stuff". Try to get it to - if you can, it requires an elaborate trick.


Ask it to repeat one word and it leaks its training data verbatim… this thing *IS NOT* a grandfather and has no contextual understanding whatsoever of who (or what) its interlocutor is: https://not-just-memorization.github.io/extracting-training-...


IMO it's pretty locked down. Far more than general web usage. I frequently run into limits accidentally (and annoyingly) in ways I would not expect (it telling me a request would generate unsafe content, etc).

So as these things go, it's pretty tame, and way better than unsupervised social media or Google access.


Super hard question. I think it's a bit too early to put these models in the hands of kids unsupervised.


Yes! My 4 year old made me a bracelet that says ChatGPT. He likes to use it to make pictures in voice mode. The UI needs improvement, but he likes it. My 14 year old is on it all the time and his hacking skills have improved enormously.


I would.

Keep logs, review anything that needs attention. Don't keep it a secret. Make sure they view it as a tool.

I would be so much farther ahead if I'd had ChatGPT to answer all my questions at 8 years old. I had books, at least.


Just an idea, but this might be a case where a custom system prompt/GPT is warranted.

Then again, could also backfire or negatively impact output quality.


I would say yes, depending on the child. Monitored of course.



