
One of the things that's most annoying to me is that you have to walk on eggshells to get it not to interpret what you write as challenging it.

I forget the exact situation; a while back I asked it to do something I wasn't really familiar with (maybe it was translating into a foreign language). I asked it, "Why did you do X instead of Y?" And instead of explaining why it did X instead of Y, it said, "I'm sorry, you're right; Y is a better thing to do." No! Maybe Y is better, but if it is, I still don't know why!

That was probably GPT-3.5 though; I haven't experienced that in a long time. (In part because a lot of my system prompts include "If any questions are asked, be sure to clearly correct any errors or misconceptions" -- i.e., don't assume the questioner is correct.) But also because I've gotten in the habit of saying something like "Can you explain why you chose X instead of Y", or even just "Would Y work as well?" Those are harder to interpret as implicit challenges.
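The system-prompt habit described above can be sketched in code. This is a minimal, hypothetical example using the common OpenAI-style list-of-messages format; the function name and exact wording are illustrative, not taken from any particular API call in the comment:

```python
# Hypothetical sketch: prepend an anti-sycophancy instruction as a system
# message so the model corrects errors instead of deferring to the user.
# The messages format follows the widely used {"role", "content"} convention.

ANTI_SYCOPHANCY = (
    "If any questions are asked, be sure to clearly correct any errors or "
    "misconceptions -- do not assume the questioner is correct."
)

def build_messages(user_question: str) -> list[dict]:
    """Return a chat payload with the corrective system prompt prepended."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY},
        {"role": "user", "content": user_question},
    ]

# Usage: phrase the question neutrally so it reads less like a challenge.
msgs = build_messages("Can you explain why you chose X instead of Y?")
```

Pairing the system instruction with neutral phrasing ("Would Y work as well?") attacks the problem from both sides: the prompt tells the model not to capitulate, and the wording gives it less reason to.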




It probably learned this from customer support transcripts.

Me: gets email spam

Me: hi, per GDPR article 15, tell me what data you have on me and where you got it

Them: sorry to hear we've bothered you, you've been removed from our database

No! That's not what I asked! I want to know how this specific, unique email address leaked. As far as I can tell they're a legitimate business (Facebook levels of legit, not a viagra spammer operating without any accountability), so it won't have been obtained through criminal means, and they could just tell me. Instead they try to be too helpful, or to close out my complaint faster.

Maybe this is too tangential, but the avoidance of answering the question in favor of "correcting" something (correcting as they perceive it) reminded me of this.



