
I don't think it's "regardless"; your opinion on whether LeCun is right should be highly correlated with your opinion on whether this is good for Europe.

If you think that LLMs are sufficient and RSI is imminent (<1 year), this is horrible for Europe. It is a distracting boondoggle at exactly the wrong time.


It's sufficient, however, to think that there is a chance they will not be, for there to be non-zero value in funding other approaches.

And even if you think the chance is zero, unless you also think there is a zero chance they will be capable of pivoting quickly, it might still be beneficial.

I think his views are largely flawed, but chances are there will still be lots of useful science coming out of it as well. Even if current architectures can achieve AGI, it does not mean there can't also be better, cheaper, more effective ways of doing the same things, and so exploring the space more broadly can still be of significant value.


I think LeCun has been so consistently wrong and boneheaded for basically all of the AI boom that this is much, much more likely to be bad than good for Europe. He's probably one of the worst people in the field you could give that much money to, among those who could even raise it.

LeCun was stubbornly 'wrong and boneheaded' in the 80s, but turned out to be right. His contention now is that LLMs don't truly understand the physical world - I don't think we know enough yet to say whether he is wrong.

Could you please elaborate on what he was wrong about?

He said that LLMs wouldn't have common sense about how the real world physically works, because it's so obvious to humans that we don't bother putting it into text. Honestly, this seems pretty foolish given the scale of internet data, and even at the time LLMs could handle the example he said they couldn't.

I believe he didn't think that reasoning/CoT would work well or scale the way it has.


Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease. Many respectable medical professionals were convinced this was true, and they viewed the entire world through this lens. They interpreted data in ways that aligned with a miasmatic view.

Of course now we know this was delusional, and it seems almost funny in retrospect. I feel the same way when I hear that 'just scale language models' has suddenly created something that's true AGI, indistinguishable from human intelligence.


> Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease.

Whenever I see people think the model architecture matters much, I think they have a magical view of AI. Progress comes from high-quality data; the models are good as they are now. Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments. The path to AGI is not based on pure thinking; it's based on scaling interaction.

To stay with the miasma theory of disease analogy: if you think architecture is the key, then look at how humans dealt with pandemics... The Black Death in the 14th century killed half of Europe, and no one could think of the germ theory of disease. Think about it - it was as desperate a situation as it gets, and no one had the simple spark to keep up hygiene.

The fact is we are also not smart from the brain alone; we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model. For example, 1B users do more for an AI company than a better model; they act like human-in-the-loop curators of the LLM's work.


If I'm understanding you, it seems like you're struck by hindsight bias. No one knew the miasma theory was wrong... it could have been right! Only with hindsight can we say it was wrong. Seems like we're in the same situation with LLMs and AGI.

The miasma theory of disease was "not even wrong" in the sense that it was formulated before we even had the modern scientific method to define the criteria for a theory in the first place. And it was sort of accidentally correct in that some non-infectious diseases are caused by airborne toxins.

Plenty of scientific authorities believed in it through the 19th century, and they didn't blindly believe it: it had good arguments for it, and intelligent people weighed the pros and cons of it and often ended up on the side of miasma over contagionism. William Farr was no idiot, and he had sophisticated statistical arguments for it. And, as evidence that it was a scientific theory, it was abandoned by its proponents once contagionism had more evidence on its side.

It's only with hindsight that we think contagionism is obviously correct.


> Only with hindsight can we say it was wrong

It really depends what you mean by 'we'. Laymen? Maybe. But people said it was wrong at the time with perfectly good reasoning. It might not have been accessible to the average person, but that's hardly to say that only hindsight could reveal the correct answer.


It's unintuitive to me that architecture doesn't matter - deep learning models, for all their impressive capabilities, are still deficient compared to human learners as far as generalisation, online learning, representational simplicity and data efficiency are concerned.

Just because RNNs and Transformers both work with enormous datasets doesn't mean that architecture/algorithm is irrelevant; it just suggests that they share underlying primitives. But those primitives may not be the right ones for 'AGI'.


If model architecture doesn't matter much, how come transformers changed everything?

Luck. RNNs can do it just as well - Mamba, S4, etc. - for a given budget of compute and data. The larger the model, the less the architecture makes a difference. It will learn in any of the 10,000 variations that have been tried, and come within about 10-15% of the best. What you need is a data loop, or a data source of exceptional quality and size; data has more leverage. Architecture games bear more on efficiency: one method can be 10x more efficient than another.

That's not how I read the transformer stuff around the time it was coming out: they had concrete hypotheses that made sense, not just random attempts at striking it lucky. In other words, they called their shots in advance.

I'm not aware that we have notably different data sources before or after transformers, so what confounding event are you suggesting transformers 'lucked' into being contemporaneous with?

Also, why are we seeing diminishing returns if only the data matters? Are we running out of data?


The premise is wrong, we are not seeing diminishing returns. By basically any metric that has a ratio scale, AI progress is accelerating, not slowing down.

For example?

For example:

The METR time-horizon benchmark shows steady exponential growth. Frontier-lab revenue has been growing exponentially basically from the moment they had any revenue. (The latter has confounding factors. For example, it doesn't just depend on the quality of the model but on the quality of the apps and products using the model. But model quality is still the main component; the products seem to pop into existence the moment the necessary model capabilities exist.)
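
As a rough illustration of what that exponential trend implies (a minimal sketch in Python; the ~7-month doubling time and the 1-hour starting horizon are assumptions for illustration, not METR's exact published fit):

    # Illustrative extrapolation of an exponentially growing task time horizon.
    # doubling_months and start_horizon_hours are assumptions, not METR's numbers.
    doubling_months = 7
    start_horizon_hours = 1.0
    for months in range(0, 37, 6):
        horizon = start_horizon_hours * 2 ** (months / doubling_months)
        print(f"month {months:2d}: ~{horizon:5.1f}h tasks at 50% success")

On a log scale that's a straight line, which is why the claim is about the growth rate rather than any single data point.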


Note we're in a sub-thread about whether 'only data matters, not architecture', so I don't disagree that functionality or revenue are growing _in general_, but that's not what we're talking about here.

The point is that core model architectures don't just keep scaling without modification. MoE, inference-time compute, RAG, etc. are all modifications that aren't 'just use more data to get better results'.


> Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments.

I, on the contrary, believe that the hunt for better data is an attempt to climb the local hill and get stuck there without reaching the global maximum. Interactive environments are good, they can help, but they are just one possible way to learn about causality. Are they the best way? I don't think so; they are the easier way: just throw money at the problem, and eventually you'll get something that you'll claim to be the goal you chased all this time. And yes, it will have something in it you'll be able to call "causal inference" in your marketing.

But current models are notoriously difficult to teach. They eat an enormous amount of training data; a human needs much less. They eat an enormous amount of energy to train; a human needs much less. That means the very approach is deficient. It should be possible to do the same with a tiny fraction of the data and money.

> The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model.

Well, I learned English almost all the way to B2 by reading books. I was too lazy to use a dictionary most of the time, so it was not interactive: I didn't even interact with a dictionary, I was just reading books. How many books did I read to get to B2? ~10 or so. Well, I read a lot of English on the Internet too, and watched some movies. But let's multiply the 10 books by 10. Strictly speaking it was not B2: I was almost completely unable to produce English, and my pronunciation was not just bad, it was worse. Even now I sometimes stumble on words I cannot pronounce. Like I know the word and have mentally constructed a sentence with it, but I cannot say it, because I don't know how. So to pass B2 I spent some time practicing speech, listening, and writing, and learning some stupid topics like "travel" to have the vocabulary to talk about them at length.

How many books does an LLM need to consume to get to B2 in a language unknown to it? How many audio recordings does it need to consume? A lifetime wouldn't be enough for me to read and/or listen to that much.

If there were a human who needed to consume as much information as an LLM to learn, they would be the stupidest person in the history of humanity.


Are you asking how many books a large language model would need to read to learn a new language if it was only trained on a different language? Probably just 1 (the dictionary).

Do you know anything about how languages work? A dictionary doesn't have sufficient information to speak a language.

The miasma theory of disease, though wrong, made lots of predictions that proved useful and productive. Swamps smell bad, so drain them; malaria decreases. Excrement in the street smells bad, so build sewage systems; cholera decreases. Florence Nightingale implemented sanitary improvements in hospitals inspired by miasma theory that improved outcomes.

It was empirical and, though ultimately wrong, useful. Apply as you will to theories of learning.


Just because you raise 1 billion dollars to do X doesn't mean you can't pivot and do Y if it is in the best interest of your mission.

I won't comment on Yann LeCun or his current technical strategy, but if you can avoid the sunk-cost fallacy and pivot nimbly, I don't think it is bad for Europe at all. It is "1 billion dollars for an AI research lab", not "1 billion dollars to do X".


It's been 6 months away for 5 years now. In that time we've seen relatively mild incremental changes, not any qualitative ones. It's probably not 6 months away.

Yeah. I feel like, as with many projects, the last 20% takes 80% of the time, and imho we are not in the last 20%.

Sure, LLMs are getting better and better, and at least for me more and more useful and more and more correct. Arguably better than humans at many tasks, yet terribly lagging behind in some others.

Coding-wise - one of the things it does “best” - it still has many issues. For me, some of the biggest are still lack of initiative and lack of reliable memory. When I do use it to write code, the first often manifests as sticking to a suboptimal yet overly complex approach. And the lack of memory means I have to keep reminding it of edge cases (or else it often breaks functionality), or to stop reinventing the wheel instead of using functions/classes already implemented in the project.

All that can be mitigated by careful prompting, but no matter the claims about information-recall accuracy, I still find that even with that information in the prompt it is quite unreliable.

And more generally, the simple fact that when you talk to one, the only way to “store” these memories is externally (i.e. not by updating the weights), is kinda like dealing with someone who can’t retain memories and has to keep writing things down to even have a small chance of coping. I get that updating the weights is possible in theory but just not practical, still.


It's 6 months away the same way coding is apparently "solved" now.

I think we - in the last few months - are very close to, if not already at, the point where "coding" is solved. That doesn't mean that software design or software engineering is solved, but it does mean that a SOTA model like GPT 5.4 or Opus 4.6 has a good chance of being able to code up a working version of whatever you specify, within reason.

What's still missing is the general reasoning ability to plan what to build or how to attack novel problems - how to assess the consequences of deciding to build something a given way - and I doubt that auto-regressively trained LLMs are the way to get there, but there is a huge swathe of apps that are so boilerplate in nature that this isn't the limitation.

I think that LeCun is on the right track to AGI with JEPA - hardly a unique insight, but it is significant to now have a well-funded lab pursuing this approach. Whether they are successful, or timely, will depend on whether this startup executes as a blue-skies research lab or in more of an urgent engineering mode. I think at this point most of the things needed for AGI are engineering challenges rather than what I'd consider research problems.


Sure, Claude and other SOTA LLMs do generate about 90% of my code, but I feel we are no closer to solving the last 10% than we were a year ago in the days of Claude 3.7. It can pretty reliably get 90% there, and then I can either keep prompting it to get the rest done or just do it manually, which is quite often faster.

Reminds me of how cold fusion reactors have been only 5 years away for decades now.

Cold fusion reactors haven't produced usable intermediate results. LLMs have.

LLMs produce slop far too often to say they are in any way better than cold fusion in terms of usable results. "AI" kind of is the cold fusion of tech. We've always been 5 or 10 years away from "AGI" and likely always will be.

That's just nonsense. That they produce slop does not negate that I and many others get plenty of value out of them in their current form, while we get zero value out of fusion so far - cold or otherwise.

But I swear this time is different! Just give me another 6 months!

And another 6 trillion dollars :^)

> RSI

Wait, we have another acronym to track. Is this the same/different than AGI and/or ASI?


Some people should definitely be getting Repetitive Strain Injury from all the hyping up of LLMs.

Recursive self improvement. It's when AI speeds up the development of the next AI.

Recursive Self Improvement



If you respond to me with a coherent comment explaining that you're not an AI agent yourself, I will be pleasantly surprised and redact my accusation.

But until then — I am quite confident that you are an agent (OpenClaw or otherwise?) polluting HN with relatively useless, non-human chatbot substance.

I'm especially sure of this based on how frequently you've commented in the past day, all of which are comments with the same exact structure and "AI tells".

You seem to be a founder of an AI agent company (https://kalibr.systems/) that ships "self-healing agents". All of your comments today appear to have been made exactly 10 minutes apart, and your bio says "lover of all things agentic".

This is not conducive to productive conversation! Please stop!

Gah... Dead internet theory in action.

@dang is there a policy against botting comments on HN?


Most projects under the "google" org will have exactly the same disclaimer about not being official Google products.

Crazy.

And this project uses "google" in its org, so I would assume it is official, or at least that lawyers are running toward the owner with lawsuits.


But at least they are under the Google organization. The thing is, anyone could create an organization, name it something like "googlesomething", use Google logos, and design it in a way that makes some users believe it has an official connection.

Couldn't Google do a cease and desist for that kind of impersonation?

I think so, but it could be enough for someone to create such an organization, share it on HN for malicious purposes, such as infecting devices, and have it taken down only afterward. I'm not saying that's what happened here, but it does illustrate a potential attack vector.

Where are you getting that from?

The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.


I think it largely hinges on what they mean by "included"; does that mean it was specifically excluded by the terms of the contract, or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that Anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically incorrect subject matter). Sometimes both parties have valid cases when there's a contract disagreement.

>A pretty clear indication that the current language has some.

Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.


I hope not, and that they'll instead spin out WB, for it to be gobbled up again. Anything done three times is tradition, and breaking it just wouldn't do.

It is basically impossible for AI software improvements to devalue the AI compute investments.

It's the other way around: software improvements make the hardware more valuable. Let's say that one unit of compute can generate one unit of value. As the software improves on any of the principal axes (cheaper cost for the same quality, or new capabilities that you previously could not get at any price), that same unit of compute will produce more value.

What would threaten those compute investments? Basically, order-of-magnitude improvements in the hardware, but that kind of thing will take longer to happen than the projected lifetime of the hardware. (Or the demand for AI evaporating, but that tends to be an issue of faith that is hard to have a useful discussion about.)
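
To make the arithmetic concrete (a minimal sketch; every number here is an illustrative assumption, and it assumes demand absorbs the extra output):

    # Illustrative only: a software efficiency gain multiplies the value of a
    # fixed compute fleet. All numbers are made-up assumptions.
    fleet_units = 1.0       # normalize the fleet to one unit of compute
    value_per_unit = 1.0    # baseline: one unit of compute -> one unit of value
    efficiency_gain = 2.0   # software now extracts 2x value per unit of compute
    print(fleet_units * value_per_unit * efficiency_gain)  # 2.0: same hardware, twice the value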


That assumes, as a baseline, that all existing LLM investment divided over all existing LLM usage is already net valuable. But if that is not yet the case, then software improvements may or may not bring those investments over the break-even threshold.


That's an interesting take.

It does assume that more intelligence is both possible and useful -- that's probably not unlikely.


The abstract of the article is very short, and seems pretty clear on both of your questions.

This is what is special about them:

> a set of ten math questions which have arisen naturally in the research process of the authors. The questions had not been shared publicly until now;

I.e. these are problems of some practical interest, not just performative/competitive maths.

And this is what is know about the solutions:

> the answers are known to the authors of the questions but will remain encrypted for a short time.

I.e. a solution is known, but is guaranteed to not be in the training set for any AI.


> I.e. a solution is known, but is guaranteed to not be in the training set for any AI.

Not a mathematician, and obviously you guys understand this better than I do. One thing I can't understand is how they're going to judge whether a solution was AI-written or human-written. I mean, a human could also potentially solve the problem and pass it off as AI? You might ask why a human would want to do that. Normal mathematicians might not, but mathematicians hired by Anthropic or OpenAI might want to, to pass it off as an AI achievement.


Well, I think the paper answers that too. These problems are intended as a tool for honest researchers to use for exploring the capabilities of current AI models, in a reasonably fair way. They're specifically not intended as a rigorous benchmark to be treated adversarially.

Of course a math expert could solve the problems themselves and lie by saying that an AI model did it. In the same way, somebody with enough money could secretly film a movie and then claim that it was made by AI. That's outside the scope of what this paper is trying to address.

The point is not to score models based on how many of the problems they can solve. The point is to look at the models' responses and see how good they are at tackling the problem. And that's why the authors say that ideally, people solving these problems with AI would post complete chat transcripts (or the equivalent) so that readers can assess how much of the intellectual contribution actually came from AI.


> these are problems of some practical interest, not just performative/competitive maths.

FrontierMath did this a year ago. Where is the novelty here?

> a solution is known, but is guaranteed to not be in the training set for any AI.

Wrong, as the questions were posed to commercial AI models and they can solve them.

This paper violates basic benchmarking principles.


> Wrong, as the questions were posed to commercial AI models and they can solve them.

Why does this matter? As far as I can tell, because the solution is not known, this only affects the time constant (i.e. the problems were known for longer than a week). It doesn't seem that I should care about that.


Because the companies have the data and can solve them -- once the question has been provided to a company with the necessary manpower, one can no longer guarantee that the solution is not known, and not contained in the training sample.


What the OP was pointing out were two typical tells of lazy ChatGPT-generated text, right in the intro (the em-dash, and "it's not just X, it's Y").

Of course that kind of heuristic can have false positives, and not every accusation of AI-written content on HN is correct. But given how much stuff Gregg has written over the years, it's easy to spot-check a few previous posts. This clearly isn't his normal style of writing.

Once we know this blog was generated by a chatbot, why would the reader care about any of it? Was there a Mia, or did the prompt ask for a humanizing anecdote? Basically, show us the prompt rather than the slop.



I'm not sure the volume here is particularly different from past examples. I think the main difference is that there was no custom harness, tooling, or fine-tuning. It's just the out-of-the-box capabilities of a generally available model and a generic agent.


But it's not failing 50% of the time. Their status page[0] shows about 99.6% availability for both the API and Claude Code. And specifically for the vulnerability-finding use case that the article was about and that you're dismissing as "not worth much", why in the world would you need continuous checks to produce value?

[0] https://status.claude.com/
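
For scale, a back-of-the-envelope conversion of that availability figure (the 30-day window is my assumption):

    # Convert 99.6% availability into expected downtime over a 30-day window.
    availability = 0.996
    hours_in_window = 30 * 24
    print((1 - availability) * hours_in_window)  # ~2.9 hours of downtime per month

That's a few hours per month of unavailability, not a 50% failure rate.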

