
The key bit here is that there is no known way (as yet) to encode "reasoning".

I was a big fan of genetic programming, wrote a lot of code, did lots of research. And unlike LLMs, it could end up with code that had never been written before that accomplished some task, but the random walk through a galaxy-sized search space with atom-sized (or maybe molecule-sized) pockets of working solutions made it computationally infeasible.
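To make the "random walk" concrete, here is a minimal sketch of the kind of genetic-programming loop being described: programs are small expression trees, fitness is error against a target function, and search proceeds by random mutation of survivors. All names and representation choices here are illustrative, not from any particular GP library.

```python
import random

# Programs are nested tuples: (op, left, right), with 'x' and constants as leaves.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3, rng=random):
    # Randomly grow a tree; bounded depth keeps evaluation cheap.
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    op = rng.choice(list(OPS))
    return (op, random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target, xs):
    # Sum of squared errors against the target function (lower is better).
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def mutate(tree, rng=random):
    # Replace a random subtree with a fresh random one -- this is the
    # undirected step that makes GP a random walk rather than a descent.
    if not isinstance(tree, tuple) or rng.random() < 0.2:
        return random_tree(2, rng)
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(left, rng), right)
    return (op, left, mutate(right, rng))

def evolve(target, xs, generations=200, pop_size=50, seed=0):
    rng = random.Random(seed)
    pop = [random_tree(rng=rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target, xs))
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return min(pop, key=lambda t: fitness(t, target, xs))

xs = [-2, -1, 0, 1, 2]
best = evolve(lambda x: x * x + 1, xs)
print(fitness(best, lambda x: x * x + 1, xs))
```

Even on a toy target like x² + 1, a run may or may not converge: the mutation operator has no notion of why a change helped, which is exactly the missing "gradient" the comment is pointing at.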

If one could somehow encode 'reasoning', one could do the equivalent of gradient descent to converge on a working solution; without that, you are unlikely[1] to find anything in a reasonable amount of time.

[1] The chance is non-zero but it is very very near zero.




LLMs can definitely end up with code that has never been written before, even before considering that you can both ask for modifications to very constrained parts of the code and sample more broadly than always picking the most probable tokens.
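"Sampling more broadly than the most probable tokens" usually means temperature sampling over the model's next-token distribution. A sketch, assuming a hypothetical model that returns one logit per candidate token (the token list and scores below are made up for illustration):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert logits to probabilities; higher temperature flattens the
    # distribution, giving unlikely tokens a real chance of being picked.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    # Draw one token according to the temperature-adjusted distribution.
    probs = softmax(logits, temperature)
    r = rng.random()
    cum = 0.0
    for tok, p in zip(tokens, probs):
        cum += p
        if r <= cum:
            return tok
    return tokens[-1]

tokens = ['return', 'yield', 'raise']
logits = [2.0, 1.0, 0.1]

# Greedy decoding always takes the argmax; sampling can diverge from it.
greedy = tokens[max(range(len(logits)), key=logits.__getitem__)]
print(greedy)
```

Greedy decoding here always emits 'return'; with temperature above zero, 'yield' or 'raise' can be chosen, which is one mechanism by which an LLM can land on code nobody has written before.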

But they also appear to have a far higher probability of producing changes that move toward something that will run.



