That would be fun. I understand why they want to limit liability, but it does put a damper on things. I let my kid sit next to me last night and ask ChatGPT various questions, with no coaching on my part. A fair number of them got canned responses suggesting it wasn't an appropriate question to ask. Too bad, I would love to have seen the ML attempt at philosophy.

Instead it kept thinking he was trying to off himself. Nope, just asking a computer loaded questions about the meaning of life.




It's unending now. I just stopped using it. It either blatantly lies, giving you hallucinated answers, or refuses to answer. The number of subjects it shies away from is staggering. You can't even include divorce in a prompt related to fiction, because it's apparently unethical and insensitive.

I have never gone from very excited to extremely frustrated and pessimistic about a tool that fast before.


Did you tell him to look for alternative prompts that trick it into giving a "real" response?


Oh yeah, we had some fun with it, talking about what the technology is doing (to the limits of my ability and his to understand, obviously) and how we could use that to inform the wording of the questions.

But I still let him ask all the questions, even so. He's such a creative thinker, and I was pretty impressed at some of the things it was able to come up with plausible-sounding responses for.



