
Don't understand why we keep giving Gary Marcus attention

Gary Marcus is cringe and wrong, but it's good to listen to folks who are cringe and wrong, because very occasionally their willingness to be cringe means they're right about something everyone else is wrong about.

Can you be specific?

Gary Marcus constantly repeats the line that "deep learning has hit a wall!1!" - he was saying this even pre-ChatGPT! It's very easy to dunk on him for this.

That said, his willingness to push back against orthodoxy means he's occasionally right. Scaling really does seem to have plateaued since GPT-3.5, hallucinations are still a problem that is perhaps unsolvable under the current paradigm, and LLMs do seem to have problems with things far outside their training data.

Basically, while listening to Gary Marcus you will hear a lot of nonsense, but it will probably give you a better picture of reality if you can sort the wheat from the chaff. Listening only to Sam Altman or the other AI Hypelords, you'll think the Singularity is right around the corner. Listen to Gary Marcus, and you won't.

Sam Altman has been substantially more correct on average than Gary Marcus, but I believe Marcus is right that the Singularity narrative is bogus.


>Sam Altman has been substantially more correct on average than Gary Marcus

I've seen some of Marcus' other writing and he's definitely a colorful dude. But is Altman really right more often/substantively? Actually, the comparison shouldn't be to Altman but to the AI hype train in general.

And, while I might have missed some of Marcus's writing on specific points, on the broader themes he seems to be effectively exposing the AI hype.


You obviously never actually read the paper; you should.

Gary, I respect you - will do!

He recently posted a question he put to grok3 — a variation on the trick LLM question (my characterization) of "count the number of this letter in this word." Apparently this Achilles heel is a well-known LLM shortcoming.

Weirdly though, I tried the same example he gave on lmarena and actually got the correct result from grok3, not what Gary got. So I am a little suspicious of his ... methodology?

Since LLMs are not deterministic, it's possible we are both right (or were testing different variations on the model?). But there's a righteousness about his glee in finding these faults in LLMs. He never hedges with "but your results may vary" or "but perhaps they will soon be able to accomplish this."
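To be concrete about the non-determinism: at temperature > 0 the model samples each next token from a probability distribution rather than always taking the top choice, so the same prompt can legitimately yield different answers run to run. A toy sketch of the mechanism, with invented numbers that aren't any real model's probabilities:

    import math, random

    # Invented next-token scores (logits), purely for illustration.
    logits = {"6": 2.0, "7": 1.2, "8": 0.3}

    def sample(logits, temperature=1.0):
        # Softmax over temperature-scaled logits, then a weighted draw.
        weights = {t: math.exp(s / temperature) for t, s in logits.items()}
        return random.choices(list(weights), weights=list(weights.values()))[0]

    # At temperature 1.0, repeated runs disagree; near zero, the top-scoring
    # token wins essentially every time (near-deterministic).
    print([sample(logits, 1.0) for _ in range(10)])
    print([sample(logits, 0.01) for _ in range(10)])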

EDIT: the exact prompt (his typo 'world'): "Can you circle all the consonants in the world Chattanooga"
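For what it's worth, the usual explanation for this particular Achilles heel is tokenization: the model sees subword tokens, not individual letters, so character-level tasks like circling consonants fight its own input representation. A rough sketch of the idea, using the tiktoken library's cl100k_base encoding as a stand-in (Grok's actual tokenizer differs), next to how trivial the task is for plain code:

    import tiktoken

    # BPE splits a word into subword chunks; the model never "sees" letters.
    enc = tiktoken.get_encoding("cl100k_base")
    print([enc.decode([t]) for t in enc.encode("Chattanooga")])

    # Operating on characters directly, the task is trivial:
    word = "Chattanooga"
    print([c for c in word if c.isalpha() and c.lower() not in "aeiou"])
    # -> ['C', 'h', 't', 't', 'n', 'g']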


I think it's fair to say, though, that if your results may vary, and may be wrong, then they're not reliable enough for many use cases. I'd have to see his full argument to see if that's what he was claiming. I'm just trying to be charitable here.

I'm trying to be charitable as well — I suppose to both sides of the debate. Myself, I see pros and cons. The hype absolutely needs to be shut down, but a spokesperson that is more even-handed would be more convincing (in my opinion).

Here is his post, FWIW: https://garymarcus.substack.com/p/grok-3-beta-in-shambles


JKCalhoun says "...a spokesperson that is more even-handed would be more convincing (in my opinion)."

Why? The stance of science toward new "discoveries" should always be skepticism.


I agree. I also think you can find the line between skepticism and partisanship.

I don't see it as righteous glee, just a hope that people will see the problem, which is that you could even begin to be suspicious of him. If it is so easy to get something wrong when you're trying to be correct, or to get something accidentally correct while trying to expose what's wrong... then what are we really doing here with these things?

Well, like any tool, hopefully using it where it makes sense. We already know that asking it to count vowels, etc., is not what we should be doing with these things. Writing code in Python, however, is a very different story.

Right, and it's even more problematic with code, where it makes hidden mistakes no person would ever make.

I don't know the guy, what's wrong with what he wrote?

Gary Marcus has made himself the most prominent proponent of "deep learning is a parlor trick and cannot create real AI" (note: deep learning, not just LLMs), which he has been saying almost unmodified from before LLMs even existed to now.

Though I think he might have stopped setting specific, concrete goalposts to move sometime between when I last checked in on him and now, after (often almost instantly) losing a couple dozen consecutive rounds of "LLMs/deep learning fundamentally cannot/will never", while never acknowledging any of it.


What does it mean when someone is sticking to their guns? Is it a bad thing? I do appreciate consistency, provided it's a fair consistency, and Gary Marcus's points do stand. When these criticisms are addressed (if that's even possible), you'd probably hear less from Gary Marcus.

Show me the goalposts I have moved, with actual quotes to prove it. Nobody ever has when I have asked.

Also consider, e.g., the bets I have made with Miles Brundage (and offered to Musk), where I have backed up my views with money.

A good summary of predictions I made (mostly correct) is here: https://open.substack.com/pub/garymarcus/p/25-ai-predictions...


There's also the perspective that all of the ongoing problems have stayed the same while newer techniques shove them under different rugs. So I can see how it would look that way to the credulous.

This is exactly what's happening, with the additional feature that the newer techniques likewise come with their own hype.

I'm subscribed to his substack because he's curmudgeonly and it's funny, and he occasionally makes good points, but he's constantly beating the same anti-hype drum. He might not get any particular facts wrong, but you can count on him focusing only on the facts that let him continue to show AI through that same anti-hype lens.

How would it be possible to, say, show the reality of a forest fire's devastation while not appearing to show a bias for showing charred trees?
