Hacker News new | past | comments | ask | show | jobs | submit login
Will AI pass the Turing test by 2029 Jan 1? (Market on Kapor vs. Kurzweil bet) (manifold.markets)
18 points by iNic on Feb 8, 2023 | hide | past | favorite | 21 comments



Surely it already has. Not consistently and repeatably, and not quite under the conditions Turing proposed, but still, in a way that's rather spectacular when it happens. Last year gave us this news story: "The Google engineer who thinks the company’s AI has come to life". [1]

It's not that hard today to find someone who has chatted with a language model and come out with the impression it is a person. The Replika e-friend service is in this grey area, too. People have claimed to fall in love with their virtual friends. I see no reason to doubt their claims. And I think doing that requires projecting human-like qualities on the AI it does not have. I.e., being convinced on some level it really is a person. Though I'm sure some might disagree with that interpretation.

Are these systems not, in effect, passing the Turing Test? At least with some people?

Based on my use of ChatGPT, I believe I personally could be fooled for a significant amount of time, by a system along those lines, one designed to be a little more informal and conversational, with fewer taboo areas. I worry a bit that this will become widespread. That many people will become convinced AIs are sentient thinking beings, because they're just so incredibly good at pretending to be.

[1] https://www.washingtonpost.com/technology/2022/06/11/google-...


> Surely it already has. Not consistently and repeatably, and not quite under the conditions Turing proposed

That seems like a long winded way of saying it hasn’t passed. The details are quite important in the actual Turing test; if the standard was fooling anyone at any time, then even ELIZA could pass the Turing test.


Turing said:

> I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

The standard is more than 70% odds of detecting a fake. If you did below 50% it would be weird because if you were more likely to think a robot is a human than an actual human, you'd just flip your prediction and be above 50%

https://plato.stanford.edu/entries/turing-test/


A lot of people have been fooled by ChatGPT and similar language models like at character.ai or at least to the point where they have a lot of doubts about their own judgement, including me.


Not really "the Turing test" since whether or not the condition is met is based on if a computer can pass as a person [2]. This differs from Turing's test as it was about having two unknown remote (A,B) communicators with a third (C) person and C had to identify which of A,B was the woman. [1]

[1]: https://en.wikipedia.org/wiki/Turing_test#Imitation_game [2]: https://longbets.org/1/#adjudication_terms


the "Turing Test" as commonly understood differs from the exact one described by Turing. It's still the Turing Test though.


Based on current trends, all you have to do is talk about typical controversial topics that are discussed openly on factory floors (or anywhere blue collar/emergency workers are found).

The AI will say it’s not appropriate to discuss. Boom, failed.


Plenty of humans clam up about controversial topics just to avoid being called a "libtard" or "y'all-quaida" on the internet, or whatever the insults have shifted to since I stopped paying attention.

Or at least, such has been the stated reason of various friends I know IRL who disappeared from social media.

You don't notice them online because they went silent.

The actual discussions I remember from the factory job I did in a summer holiday during my A-levels? (Paraphrased) "I bought a plasma TV for my son" "How old is he?" "Two" and "Dave maxed out all his credit cards, about ten grand" "Tsk, he's never going to be able to pay that off".


If you want it to pass a Turing test you would not put the breaks on. It would be able to talk about whatever un-woke thing freely.


That's funny, because one of the blue collar workers where I am tried talking with me about a divisive topic, and that's what I told him (in a much nicer way).

I don't think I'm an AI.

He understood and wasn't put off by that at all, by the way. He took it how I intended it: the workplace isn't where we should have charged debates.


Except that's false. These LLMs can easily be tuned or prompted to say just about anything you want them to. It's only the systems that have large investment in guardrails that do that. But there is nothing about the bet that says you have to use one of those guardrail systems.


That's a new and additional nerf on the tech which is presented to you. Vanilla chatgpt could absolutely fool you.


Computers that can pass the Turing test have already happened. But the Turing test isn't really a very good test. Even Turing thought that.


Has anyone performed the Turing test using e.g. chatgpt or lamda? Blake Lemoine's experience doesn't count, since he knew he was talking to a machine.


Not sure, why you're down-voted. I wished people who disagree would do so in writing, rather with a single button-click.

Perhaps it would help to state why you think the Turing test isn't good. I think, while it has some charm and poses a considerable challenge to the industry, it is unsuitable in meeting its goal. Turing underestimated how easily humans are being fooled.


> Turing underestimated how easily humans are being fooled.

No I think that's his point. Humans are actually quite stupid so it's not difficult for a machine to emulate one.

It reminds me of a joke from a fictious park ranger about how hard it is to design a trash can because the dumbest tourists are dumber than the smartest bears.


Down-votes don't bother me, but I agree that it would be nice to know what I said that caused that response. But such is life.

Turing didn't really intend it to be a real test. It was more of a thought experiment. The reason it's a poor test is because it's so subjective. Whether or not a computer can fool a human depends as much on the human as on the computer.


Given the amount of investment and progress made so far, I suspect it will pass by the end of the decade. That doesn't mean we would have a sentient AI or AGI. If we did, that would be a truly revolutionary moment. Like the discovery of fire or the wheel.


What’s passing in this case? It already passed and passed for a long time. You want it to pass for a 160 iq professor I guess, but the test doesn’t specify that. If player A is the local writer here who is very smart but annoying and player B is chatgpt and player C is any of my neighbours, 10/10 they will pick B as the human. People just keep raising the bar for what is intelligence as we don’t have a good definition or test. The Turing test is not it though and indeed it will pass for close to 100% for all humans in a few years. We will find many reasons (and you already accounted for that in your post) to not call it intelligent though.


The terms of the bet are fairly stringent (2 hour interviews, both sides get some say on judges). Given that, there's no way Kurzweil could win given current tech. He will have to hope for a lot more progress before 2029.


You seem doubtful that there will be a lot more progress. It's been well documented how much progress we have been through and how many new paradigms have arisen when we think we can't progress.

It's spectacularly short-sighted, in my opinion, to assume that we won't make progress.

Also, some of the prompted or fine-tuned LLMs today are actually very close.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: