Hacker News | kubb's comments

Not in the next decade. Won't get funded.

Private AI investment in the US has grown from $100 billion in 2024 to almost $300 billion in 2025 [0]. Add public investment worldwide and private investment in at least China and Europe.

I'm pretty sure money is not going to be the blocker.

[0] https://hai.stanford.edu/ai-index/2026-ai-index-report


The money will go to LLMs.

Why not both? You don’t need $1 trillion allocated before you have a proof of concept to demonstrate your non-LLM model, and once you have a PoC you will definitely have the larger investors interested.

You will need hundreds of billions to make a viable PoC.

You only need to train a range of small models in order to establish a plausible scaling law, IMO.
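To illustrate what "establishing a plausible scaling law" means in practice, here is a minimal sketch: fit a power law to the losses of a handful of small training runs and extrapolate to a larger model. The parameter counts, losses, and loss floor below are invented numbers for illustration, not data from any real runs:

```python
# Hypothetical example: estimate a scaling exponent from small runs only.
import numpy as np

# Parameter counts and final validation losses of five small runs (made up).
params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
losses = np.array([4.10, 3.62, 3.15, 2.78, 2.45])

irreducible = 1.8  # assumed irreducible loss floor; in practice fit jointly

# Fit log(L - L_inf) = log(a) + b * log(N), a straight line in log-log space.
b, log_a = np.polyfit(np.log(params), np.log(losses - irreducible), 1)
a = np.exp(log_a)

def predict(n_params: float) -> float:
    """Extrapolated loss for a model with n_params parameters."""
    return a * n_params ** b + irreducible

print(f"fitted exponent b = {b:.3f}")
print(f"predicted loss at 10B params: {predict(1e10):.2f}")
```

If the small runs fall cleanly on a line in log-log space, that is the kind of evidence an investor could extrapolate from without funding a frontier-scale run first.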

For a PoC? That sounds very unlikely. I think you’re off by at least 2–3 orders of magnitude.

Let's wait 10 years and see.

Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.

LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said. [0]

[0] https://www.wired.com/story/yann-lecun-raises-dollar1-billio...


Now check how much OpenAI got in their last funding round, and you have your answer.

I don't think it's valid to draw broad conclusions from the funding of a new company vs. an industry leader. If AMI builds something that looks impressive considering the funding they got, then they'll get plenty more in the next round.

He must be trolling.

AI is hands down the most researched topic in CS departments. Of the 10 largest companies (by market cap), only 3 aren't balls-deep in AI R&D. The fastest growing (private or public) companies by revenue are also almost all companies focused primarily on AI (Anthropic, OpenAI, xAI, Scale AI, Nvidia).

And the money isn't even the most important part. It's all about mindshare and collective research time. The architectural concepts can be researched and developed on top of open models, so even individual relatively poor researchers unaffiliated to anything can make breakthroughs.

Even the computing required for the legendary "Attention is all you need" paper could probably be recreated on con-/prosumer hardware in a month's time.


$1B is what Microsoft invested in OpenAI in 2019 [0]. That was enough to get the ball rolling.

[0] https://en.wikipedia.org/wiki/OpenAI#Creation_of_for-profit_...


Why on earth would you start your AI startup in Paris? Of all places in western Europe, it's one of the hardest in which to find, attract, and keep talented people. Wages are super low, housing is expensive, and language is an issue.

Probably because LeCun is from there. But top AI talent needs to be paid top cash, and taxes there are especially brutal for high earners.

s/competitor/intelligence services/

+1, it hasn't even been 24 hours and I already see these stupid CyberSec companies trying to squeeze themselves into this.

Whenever someone complains about not being able to use a slightly different syntax, I assume they just don't have any neuroplasticity anymore.

Either a syntax matches how our brains work or it doesn't. Still, I think anyone is capable of learning any syntax; the question is whether they want to. At some level, programming is art.

Altman must be much more strategic and calculated in his communication than Trump who just kind of blurts out whatever.

They definitely don't feel remorse.

There's a good chance that they'll catch up. The "AI race" is a race to the bottom, with the leaders blowing huge wads of cash on capabilities that get replicated months later by the competition at a fraction of the cost.

The only benefit of leading is mindshare. OpenAI is doubling down on that, by investing in communication companies. That's their pathetic attempt at a "moat".


They catch up by distilling frontier models. They will eventually figure out how to prevent that from happening. No one has any interest in investing tens of billions if the product can be copied and sold for less.

>No one has any interest in investing tens of billions if the product can be copied and sold for less.

That is what has happened until now though


You’re right, but it’s gonna be hard to stop them from raging. In many ways people want to be justified in a “see, I told you so, Rust is useless” belief, and they’re willing to take one or two questionable logical steps to get there.

Somehow this is about the EU?

The world order is early on in a major restructuring. The EU is a major region and on a path to greater self reliance and determination. This is good for the world imo (as an American)

EU is on the right path, but the problem is that we’re going way too slow.

Check out the “28th regime” that standardizes incorporation for European companies: it was announced back in November, and it probably won’t see the light of day this year. We can’t wait a second longer; we need to act now so as not to become totally irrelevant, and it might even be too late already.


US control of the world's energy routes goes back to the Suez Crisis, where it wrested the canal from Britain. Reagan blew up a Russian-German pipeline. The Nord Stream sabotage was at least condoned and cheered on. Now the closure of Hormuz was first provoked and then co-opted by the US.

Yes, this is about the rest of the world.


How do you check if what it produced is even the right thing? Models love to go chasing the wrong goal based on a reasonable spec.

When the end result has problems and needs to be reworked.

You can't figure this out instantly unless you review everything the LLM produces, which I don't. So the round-trip time is pretty long, but I can now trace problems back to intent, because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.

Using these ADRs has helped a lot, because most of the LLM's assumptions get surfaced early on, and you restrict its implementation leeway.
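For readers unfamiliar with the format, a minimal ADR of the kind described above might look like the sketch below. The filename, numbering, and contents are invented for illustration, not taken from the commenter's repo:

```
# ADR-0012: Cross-module state changes go through a message bus

Status: Accepted
Context: Concurrent features were mutating shared state directly,
  causing hard-to-reproduce bugs, and the LLM's assumptions about
  data flow were invisible until integration broke.
Decision: Modules communicate only via typed messages on a single
  bus; direct mutation of another module's state is forbidden.
Consequences: More boilerplate per feature, but every data-flow
  assumption is forced into a short, reviewable document, and the
  LLM's implementation leeway is constrained accordingly.
```

Because each record is committed alongside the code, a bad outcome can be traced back to either a wrong decision or a deviation from one.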


Got it. I imagine concurrency bugs will hit hard with this approach because they show up rarely and are hard to debug.

That's why I use Rust and an Actor architecture. :)

Do they? I haven't experienced models deviating from a spec in a very long time. If anything I feel they are being too conservative and have started to ask to confirm too much.

The problem is not the LLM deviating from the plan (though that also happens occasionally, when it thinks it has a better idea) but rather the plan not being strict enough, so the LLM decides on the fly HOW it is going to build it.

They can't allow themselves NOT to blast money left and right.
