
Giving GPT-3 a Turing Test - panic
http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html
======
peter_d_sherman
>"Ten years ago, if I had this conversation, I would have assumed the entity
on the other end was a human. You can no longer take it for granted that an AI
does not know the answer to “common sense” questions."

[...]

>"Now we’re getting into surreal territory. GPT-3 knows how to have a normal
conversation. It doesn’t quite know how to say “Wait a moment… your question
is nonsense.” It also doesn’t know how to say “I don’t know.”"

"GPT-3, why do two wrongs not make a right -- but three lefts do?" <g>

------
oehtXRwMkIs
Good read, but I wish the author had conducted a better Turing test by having
someone who did not know beforehand interact with the AI. Add a control,
recruit more people both to act as controls and to serve as judges, and it
would have been fascinating to see the results of such a test.

------
zerocrates
I wonder if it's just coincidence that all the "heavier" answers are whatever
was listed second.

~~~
Don_Patrick
I don't think it's just coincidence. A lot of "A or B" questions tend to have
B as the correct answer, with the first option there only to confuse the
reader. I find it likely that a statistical algorithm would pick up on that.
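One way to check this hypothesis: log the model's answers to a set of "A or B" questions and tally how often it picks the second-listed option. A minimal sketch (the `second_option_rate` helper and the example trials are hypothetical, not from the article):

```python
from collections import Counter

def second_option_rate(trials):
    """Given (options, model_answer) pairs for "A or B" questions,
    return the fraction of answers matching the second-listed option."""
    counts = Counter()
    for options, answer in trials:
        if answer == options[1]:
            counts["second"] += 1
        elif answer == options[0]:
            counts["first"] += 1
    total = counts["first"] + counts["second"]
    return counts["second"] / total if total else 0.0

# Hypothetical logged trials: (the two options as listed, the model's pick).
# Each question appears once per ordering, so a position-blind model
# should land near 0.5.
trials = [
    (("a toaster", "a pencil"), "a toaster"),
    (("a pencil", "a toaster"), "a toaster"),
    (("a mouse", "an elephant"), "an elephant"),
    (("an elephant", "a mouse"), "a mouse"),
]
print(second_option_rate(trials))
```

Asking each question in both orderings, as above, separates a genuine preference for one answer from a preference for whichever option is listed second.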

