Most human-generated methods look the same. In fact, in SWE we reward people for generating code that looks & feels the same; they call it "working as a team".
> Models don't emit something they don't know. They remix and rewrite what they know. There's no invention, just recall...
People really need to stop saying this. I get that it was the Smart Guy Thing To Say in 2023, but by this point it’s pretty clear that it’s not true in any way that matters for most practical purposes.
Coding LLMs have clearly been trained on conversations where a piece of code is shown, a transformation is requested (rewrite this from Python to Go), and then the transformed code is shown. It’s not that they’re just learning codebases, they’re learning what working with code looks like.
Thus you can ask an LLM to refactor a program in a language it has never seen, and it will “know” what refactoring means, because it has seen it done many times, and it will stand a good chance of doing the right thing.
That’s why they’re useful. They’re doing something way more sophisticated than just “recombining codebases from their training data”, and anyone chirping 2023 sound bites is going to miss that.
I don't know; my experience is a mixed bag and it's not really improving. It varies greatly depending on the programming language and the kind of problem I'm trying to solve.
The tasks where it works great are things I'd expect to be part of the dataset (GitHub, blog posts), or they are "classic" LM tasks (understand + copy-paste/patch). The actual intelligence, in my opinion, is still very limited. So while it's true that it's not "just recall", it still might be "mostly recall".
BTW: copy-paste is something that works great in any attention-based model. On the other hand, models like RWKV usually fail at it and are not suited for this IMHO (but I think they have much better potential for AGI).
> It’s not that they’re just learning codebases, they’re learning what working with code looks like.
Working in any not-in-training-set environment very quickly shows the shortcomings of this belief.
For example, Cloudflare Workers is V8 but it sure ain't Node, and the local sqlite in a Durable Object has a sync API with very different guarantees than a typical client-server SQL setup.
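To make the "very different guarantees" point concrete, here is a minimal sketch of the Durable Object side, written from my reading of the Cloudflare Workers docs; the exact class and method names may have drifted, so treat it as illustrative rather than authoritative. The key contrast is that the SQLite calls are synchronous and local to the object:

```ts
// Sketch only: API surface as I understand it from the Workers docs
// (requires a SQLite-backed Durable Object class); verify names against
// the current documentation before relying on them.
import { DurableObject } from "cloudflare:workers";

export class Counter extends DurableObject {
  increment(): number {
    const sql = this.ctx.storage.sql;
    // Synchronous, no await: the SQLite database lives with the object,
    // and the runtime handles durability behind the scenes.
    sql.exec("CREATE TABLE IF NOT EXISTS counter (id INTEGER PRIMARY KEY, n INTEGER)");
    sql.exec("INSERT OR IGNORE INTO counter (id, n) VALUES (1, 0)");
    sql.exec("UPDATE counter SET n = n + 1 WHERE id = ?", 1);
    const row = sql.exec("SELECT n FROM counter WHERE id = ?", 1).one();
    return row.n as number;
  }
}
```

A typical client-server driver (node-postgres, say) would instead be an `await client.query(...)` per statement, with network latency, connection pooling, and failure modes between every call, so code generated for that world doesn't transplant cleanly into this one.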
Even in a more standard setting, it's really hard to get an LLM to even use the current-stable APIs when its training data contains now-deprecated examples. Your local rules, llms.txt mentions, corrections, etc. slip out of the context pretty fast and it goes back to its trained data.
The LLM can perhaps "read any code" but it really really prefers writing only code that was in its training set.
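A concrete example of that drift (my illustration, not the parent's): React 18 replaced the long-standing `ReactDOM.render` entry point with `createRoot`, and models trained mostly on pre-18 code keep reaching for the old one even after being corrected. The `App` import below is a hypothetical root component, just to make the snippet self-contained:

```ts
// What a model trained mostly on pre-React-18 code tends to emit (deprecated entry point):
//   import ReactDOM from "react-dom";
//   ReactDOM.render(<App />, document.getElementById("root"));

// Current-stable equivalent since React 18:
import { createElement } from "react";
import { createRoot } from "react-dom/client";
import App from "./App"; // hypothetical root component, stands in for any app

const container = document.getElementById("root");
if (container) {
  createRoot(container).render(createElement(App));
}
```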
Trivially, humans don't emit something they don't know either. You don't spontaneously figure out JavaScript from first principles; you put together your existing knowledge into new shapes.
Nontrivially, LLMs can absolutely produce code for entirely new requirements. I've seen them do it many times. Will it be put together from smaller fragments? Yes, this is called "experience" or if the fragments are small enough, "understanding".
>> Nontrivially, LLMs can absolutely produce code for entirely new requirements. I've seen them do it many times.
I think most people writing software today are reinventing a wheel, even in corporate environments for internal tools. Everyone wants their own tweak or thinks their idea is unique and nobody wants to share code publicly, so everyone pays programmers to develop buggy bespoke custom versions of the same stuff that's been done 100 times before.
I guess what I'm saying is that your requirements are probably not new, and to the extent they are yes an LLM can fill in the blanks due to its fluency in languages.
Nothing is truly and completely new. I'm not formulating my requirements in an extinct language. My point is that "filling in the blanks" and "doing new things" are a spectrum.
LLMs have their limits, but they really can understand and productively contribute to programs that serve a purpose no program on the internet has served yet. What they are doing is not interpolation at the highest level. It may be interpolation/extrapolation at a lower level, but that goes for any skill learnt by anyone ever.
What makes you categorically say that "AIs can't"?
Based on my experience with present-day AIs, I personally wouldn't be surprised at all if you showed Gemini 2.5 Pro a video of an insect colony, asked it "Take a look at the way they organize and see if that gives you inspiration for an optimization algorithm", and it spat something interesting out.
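For readers who haven't met it, the algorithm this subthread keeps gesturing at is Ant Colony Optimization: simulated ants build routes and deposit pheromone on the good ones. A minimal sketch of the classic version follows, purely as a reference point for what "an optimization algorithm inspired by ants" looks like; it is not claimed to have been produced by any model.

```ts
type Matrix = number[][];

// Classic Ant Colony Optimization for a small symmetric TSP instance.
// Returns the best tour found (as an ordering of city indices).
function aco(dist: Matrix, ants = 20, iters = 200, alpha = 1, beta = 3, rho = 0.5, q = 100): number[] {
  const n = dist.length;
  // Pheromone on every edge starts uniform.
  const tau: Matrix = Array.from({ length: n }, () => Array(n).fill(1));
  let bestTour: number[] = [];
  let bestLen = Infinity;

  const tourLength = (t: number[]) =>
    t.reduce((sum, city, i) => sum + dist[city][t[(i + 1) % n]], 0);

  for (let it = 0; it < iters; it++) {
    const tours: number[][] = [];
    for (let a = 0; a < ants; a++) {
      const start = Math.floor(Math.random() * n);
      const tour = [start];
      const visited = new Set([start]);
      while (tour.length < n) {
        const cur = tour[tour.length - 1];
        // Score unvisited cities by pheromone^alpha * (1/distance)^beta,
        // then sample the next city proportionally to that score.
        const cand = [...Array(n).keys()].filter(j => !visited.has(j));
        const weights = cand.map(j => tau[cur][j] ** alpha * (1 / dist[cur][j]) ** beta);
        const total = weights.reduce((s, w) => s + w, 0);
        let r = Math.random() * total;
        let next = cand[cand.length - 1];
        for (let k = 0; k < cand.length; k++) {
          r -= weights[k];
          if (r <= 0) { next = cand[k]; break; }
        }
        tour.push(next);
        visited.add(next);
      }
      tours.push(tour);
      const len = tourLength(tour);
      if (len < bestLen) { bestLen = len; bestTour = tour; }
    }
    // Evaporate old pheromone, then deposit new pheromone proportional to tour quality.
    for (let i = 0; i < n; i++) for (let j = 0; j < n; j++) tau[i][j] *= 1 - rho;
    for (const t of tours) {
      const len = tourLength(t);
      for (let i = 0; i < n; i++) {
        const a = t[i], b = t[(i + 1) % n];
        tau[a][b] += q / len;
        tau[b][a] += q / len;
      }
    }
  }
  return bestTour;
}

// Example: 5 cities on a line; the best tour just walks down and back.
const d = [[0,1,2,3,4],[1,0,1,2,3],[2,1,0,1,2],[3,2,1,0,1],[4,3,2,1,0]];
console.log(aco(d));
```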
That's a good point but all it means is that we can't test the hypothesis one way or the other due to never being entirely certain that a given task isn't anywhere in the training data. Supposing that "AIs can't" is then just as invalid as supposing that "AIs can".
What makes you categorically say that "humans can"?
I couldn't do that with an ant colony. I would have to train on ant research first.
(Oh, and AIs can absolutely explore what they don't know. Watch a Claude Code instance look at a new repository. Exploration is a convergent skill in long-horizon RL.)
> Humans can observe ants and invent ant colony optimization. AIs can’t.
Surely this is exactly what current AIs do? Observe stuff and apply that observation? Isn't this the exact criticism, that they aren't inventing colony optimization from first principles without ever seeing a colony?
> Humans can explore what they don’t know. AIs can’t.
We only learned to decode Egyptian hieroglyphs because of the Rosetta Stone. There's no translation for North Sentinelese, the Voynich manuscript, or Linear A.
This doesn't make sense information-theoretically: models are far smaller than the training data they purport to hold and recall, so there must be some level of "understanding" going on. Whether that's the same as human understanding is a different matter.
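A rough back-of-envelope for that size argument, using illustrative numbers in the ballpark publicly reported for recent open-weight models (70B parameters, ~15T training tokens); the exact figures don't matter, only the order of magnitude:

```ts
// Back-of-envelope: can a model literally store its training text?
// Numbers are illustrative, roughly the scale reported for recent open models.
const params = 70e9;           // 70B parameters
const bytesPerParam = 2;       // bf16 weights
const modelBytes = params * bytesPerParam;           // ~140 GB of weights

const trainingTokens = 15e12;  // ~15T tokens
const bytesPerToken = 4;       // ~4 characters of text per token on average
const corpusBytes = trainingTokens * bytesPerToken;  // ~60 TB of text

console.log((corpusBytes / modelBytes).toFixed(0));  // ~430x more text than weights
```

At hundreds of times more text than weights, verbatim storage of the corpus is off the table; whatever is retained has to be heavily compressed into generalizations.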
It’s a lossy text compression technique. It’s clever applied statistics: basically an advanced association-rules algorithm that has been around for decades, modified to consider order and relative positions.
There is no understanding, regardless of the wants of all the capital investors in this domain.
> They remix and rewrite what they know. There's no invention, just recall...
If they only recalled, they wouldn’t “hallucinate”. What’s a lie if not an invention? So clearly they can come up with data they weren’t trained on, for better or worse.
Depends on the training? If there was, e.g., RLHF, then those connections are stronger and more likely; that's a difference (but not a category difference).
I have this feeling with LLM-generated React frontends; they all look the same.