Private investment in the US has grown from $100 billion in 2024 to almost $300 billion in 2025 [0]. Add public investment worldwide and private investment in at least China and Europe.
I'm pretty sure money is not going to be the blocker.
Why not both? You don’t need $1 trillion allocated before you have a proof of concept to demonstrate your non-LLM model, and once you have a PoC you will definitely have the larger investors interested.
Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.
LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said. [0]
I don't think it's valid to draw broad conclusions from the funding of a new company vs. an industry leader. If AMI builds something that looks impressive considering the funding they got, then they'll get plenty more in the next round.
AI is hands down the most researched topic in CS departments. Of the 10 largest companies (by market cap), only 3 aren't balls-deep in AI R&D. The fastest growing (private or public) companies by revenue are also almost all companies focused primarily on AI (Anthropic, OpenAI, xAI, Scale AI, Nvidia).
And the money isn't even the most important part. It's all about mindshare and collective research time. The architectural concepts can be researched and developed on top of open models, so even individual, relatively poor researchers unaffiliated with any institution can make breakthroughs.
Even the compute required for the legendary "Attention Is All You Need" paper could probably be recreated on consumer/prosumer hardware in a month's time.
Why on earth would you start your AI startup in Paris? Of all places in western Europe it's one of the hardest in which to find, attract, and keep talented people. Wages are super low, housing is expensive, and language is an issue.
I think a syntax either matches how our brains work or it doesn't. Anyone is capable of learning any syntax; the question is whether they want to. At some level, programming is art.
There's a good chance that they'll catch up. The "AI race" is a race to the bottom, with the leaders blowing huge wads of cash on capabilities that get replicated months later by the competition at a fraction of the cost.
The only benefit of leading is mindshare. OpenAI is doubling down on that, by investing in communication companies. That's their pathetic attempt at a "moat".
They catch up by distilling frontier models. They will eventually figure out how to prevent that from happening. No one has any interest in investing tens of billions if the product can be copied and sold for less.
You’re right, but it’s gonna be hard to stop them from raging. In many ways people want to be justified in a “see, I told you so, Rust is useless” belief, and they’re willing to take one or two questionable logical steps to get there.
The world order is early on in a major restructuring. The EU is a major region on a path to greater self-reliance and self-determination. This is good for the world, imo (as an American).
The EU is on the right path, but the problem is that we’re going way too slow.
Check out the “28th regime” that standardizes incorporation for European companies: it was announced back in November, and it probably won’t see the light of day this year. We can’t wait a second more; we need to act now so we don’t become totally irrelevant, and it might even be too late already.
US control of the world's energy routes goes back to the Suez Crisis, where it wrested the canal from Britain. Reagan blew up a Russian-German pipeline. The Nord Stream sabotage was at least condoned and cheered on. Now the closure of Hormuz was first provoked and then co-opted by the US.
When the end result has problems and needs to be reworked.
You can't figure this out instantly unless you review everything the LLM produces, which I don't. So the round-trip time is pretty long, but I can now trace problems back to the original intent because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.
Using these ADRs helped a lot because most of the LLM's assumptions get surfaced early on, and you restrict its implementation leeway.
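For anyone unfamiliar: an ADR (Architecture Decision Record) is just a short document committed alongside the code. A minimal sketch, loosely following Michael Nygard's common template — the ADR number, title, and contents below are made up for illustration:

```
# ADR-0007: Use PostgreSQL for the event store

## Status
Accepted

## Context
We need durable, ordered event storage; the team already operates PostgreSQL.

## Decision
Store events in an append-only `events` table; no new infrastructure.

## Constraints for implementation
- No ORM; plain SQL through the existing connection pool.
- Events are immutable once written.

## Consequences
Replays are simple table scans; horizontal scaling is deferred until needed.
```

The "Constraints" section is where the implementation leeway gets restricted: the more explicit it is, the less room the LLM has to improvise.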
Do they? I haven't experienced models deviating from a spec in a very long time. If anything I feel they are being too conservative and have started to ask to confirm too much.
The problem is not the LLM deviating from the plan (though that also happens occasionally, when it thinks it has a better idea) but rather the plan not being strict enough, so the LLM decides on the fly HOW it is going to build your plan.