
This is such a case of a bad comment that seems clever and insightful. It boils down to saying we don't need to debate, or even consider, the content of his arguments because we can assume he's motivated only by prestige and money (without considering the second-order effects on his credibility and funding if he actually turns out to be proven substantially wrong in the future).

I don't know how right or wrong he is - none of us do. That's why it's all still being debated.

The one thing I know is that we can only truly understand a topic by fully understanding the arguments for and against all the claims. I also know that the pro-LLM set has far more money (double-digit billions, as we saw just this week) and credibility to lose over this topic than Gary Marcus does.




I completely understand your comment; it's reasonable to want sources. I didn't have enough time when I made my original comment to do good sourcing, but I have that time now. In the interest of not making this comment absurdly long and not repeating work that's already been done, I'll try to be brief and link to secondary sources that contain accurate analysis where possible, rather than linking to primary sources with mostly my own analysis.

>but without considering the second-order effects on his credibility and funding if he actually turns out to be proven substantially wrong in the future

He has continually made predictions about how bad deep learning is and how it would never achieve much, unlike his "much better" neurosymbolic theories, which have never shown results anywhere near as significant (there might be good ideas there, but that doesn't make his continual goalpost shifting good science). Those predictions have been proven wrong again and again over the last decade, including a number of predictions specifically about LLMs. This is relatively normal in academia; people have hobby horses. But he's now engaging in public discourse and advocacy about a genuinely contentious topic, where different standards apply, so I'm going to respond in that context. If you're not familiar with him at all, that's fine; here's some of his track record and explanations of why he's wrong:

For deep learning in general, in case you want an overview of where he's coming from (he stands by everything in this article today, don't let the age inspire sympathy, he's a deep learning skeptic to the core): https://www.newyorker.com/news/news-desk/is-deep-learning-a-...

For GPT-2 (a moderately important retrospective is in the comments): https://www.lesswrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psyc...

For GPT-3: https://www.tumblr.com/nostalgebraist/628024664310136832/gar...

CTRL-F "Gary Marcus" on this page to find a bunch of sourced, specific claims from Marcus alongside evidence they are wrong: https://gwern.net/gpt-3-nonfiction

He has never stopped, and probably never will. He's the student of the original guy who first buried neural nets back in the sixties or seventies; he's royalty in symbolic AI. The only concession I've ever seen him make to the continued progress of deep learning and LLMs, in spite of his repeated early proclamations of their death, is, bizarrely, that he is concerned about existential risk from current AI research. I don't personally see how he squares "deep learning and LLMs are all nonsense, have hit a wall, are an off-ramp on the road to AGI" with "current AI research [which is overwhelmingly deep learning, even if he doesn't say that part out loud] presents an existential risk", but he does it somehow.

Basically, no matter what new AI research or artifacts come out, if they are made with neural nets he will explain:

1. They are not that important

2. It's a dead end

3. They're harmful because people might wrongly get the impression that they work

4. They're nowhere near as good as his favoured AI architectures are, if only his favoured theories had as much funding as that broken deep learning crap


OK, sure, it's a better answer, given that it actually tries to address his arguments and includes sources.

But you've also included some very unfair/invalid lines of attack. You've used quote marks when attributing arguments to him, but these are not actual quotes. You've asserted that he argues that "current AI research ... presents an existential risk", which he recently said he specifically doesn't believe [1].

For the record, I've been aware of Gary Marcus and his positions on AI research since he was on EconTalk in 2014 [2]. I don't consider him the most compelling skeptic on contemporary AI, but I find it worthwhile to hear and consider what he has to say. I'm also very familiar with ‘Mr Wern’ and his approach to discussing this and other topics. I've personally debated him on this website over a different topic, and whilst I benefited from learning new things about that topic, I was also surprised by how readily somebody of that persuasion would resort to emotion-charged, flawed or fallacious arguments when pressed.

And this is the main issue I want to confront: whether it's in this thread, or on LW/OB, or from Sam Altman himself (in this tweet [3], which to me was a pretty alarming mask-slip for someone who has worked so hard to portray himself as a good operator over the years), it's striking how quickly the arguments turn personal and nasty rather than simply pointing to proof that he's wrong. That is further evidence that discussion of this topic is driven far more by emotion and grandiosity than by sober examination of evidence.

As for Gary Marcus: sure, for someone who speaks and writes publicly a lot, you can always find specific details on which they might be wrong at a point in time. Where he's clearly factually wrong he should be corrected, and he should welcome that. But I've yet to see any slam-dunk evidence that the central position he's continued to argue and explain for years is wrong.

Indeed, this quote from a post of his from 2018 [4] seems to be identical in substance to what Sam Altman said at Cambridge this very month [5]:

"Possibly. I do think that deep learning might play an important role in getting us to AGI, if some key things (many not yet discovered) are added in first."

[1] https://www.france24.com/en/live-news/20230604-human-extinct...

[2] https://www.econtalk.org/gary-marcus-on-the-future-of-artifi...

[3] https://x.com/sama/status/1512471289545383940

[4] https://medium.com/@GaryMarcus/in-defense-of-skepticism-abou...

[5] https://www.reddit.com/r/singularity/comments/17wknc5/altman...



