>would combat a lot of the ways folks have found to break this.
Isn't it more that ChatGPT is broken, and users are pointing it out?
I suppose it depends on what ChatGPT is for, but I assume the end goal isn't just 'chat' (god I hope not), so it does actually need to reliably know correct answers, or say when it doesn't. BSing is the worst option.
I mean, arguably we BS all the time; at how many pub quizzes do people confidently assert an answer they're not sure of? So I don't think that's totally the issue.
It's slightly different to the point I was making, but the thing about ChatGPT I find "uncanny" right now is how formal and expert it sounds about everything. A person would vary in certainty and would sometimes show their working, etc.
So again it's a demonstration of how AI currently works: a best fit to the form and words of the question rather than a genuine determination of the answer.
(I am not at all an AI expert, but my understanding is that essentially all of this is not an engineering problem but a compute-scale problem of training on a large enough corpus; someone expert please correct me! A toy sketch of what I mean by "best fit to form" is below.)
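To make that "best fit to the form of the question" idea concrete, here's a rough toy next-word sampler. Everything in it (the transition table, the words, the names) is invented for illustration and is nothing like how GPT is actually built; it only shows the flavour of the failure mode, which is that fluent, confident-sounding text can come purely from word-to-word statistics with no check that the claim is true.

    import random

    # Toy "next word" model: it only knows which word tends to follow which
    # in its training text; it has no notion of whether a statement is true.
    transitions = {
        "<start>":   ["the"],
        "the":       ["capital"],
        "capital":   ["of"],
        "of":        ["australia"],
        "australia": ["is"],
        "is":        ["canberra", "sydney"],  # both follow "is" somewhere in the corpus
        "canberra":  ["<end>"],
        "sydney":    ["<end>"],
    }

    def generate():
        word, words = "<start>", []
        while True:
            # pick any statistically plausible continuation of the previous word
            word = random.choice(transitions[word])
            if word == "<end>":
                return " ".join(words)
            words.append(word)

    print(generate())  # sometimes "the capital of australia is sydney": fluent, confident, wrong

Real models condition on far more context than one previous word, but the point stands: the output is chosen because it looks like an answer, not because it was verified.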
My point was more that the promise of AI isn't better BSing; if I wanted that, I'd go to the pub.
The promise of AI is better answers, more correct answers, answers that are easier to get. It's supposed to enhance what computers can do and complement their strengths, not undermine them.
A computer program that gives an incorrect answer is buggy. Why should AI be held to a different standard?