
If LLMs are able to write better code with more declarative and local programming components and Tailwind, then I could imagine a future where a new programming language is created to maximize LLM success.


This so much.

To me it seems strange that a few good language designers and ML folks haven't grouped together to work on this.

It's clear that there is space for some LLM meta-language that could be designed to compile to bytecode, binary, JS, etc.

It also doesn't need to be textual like the code we write; it could be some form of AST that a model like Llama can manipulate with ease.
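As a rough illustration of what "non-textual" could mean: the model emits a small JSON-serializable AST, and separate backends lower it to whatever target you want. Everything here, node names included, is hypothetical:

    # Hypothetical: the model emits a small JSON-serializable AST
    # instead of free-form source text.
    import json

    program = {
        "type": "FunctionDef", "name": "add", "params": ["a", "b"],
        "body": [{"type": "Return",
                  "value": {"type": "BinOp", "op": "+",
                            "left": {"type": "Name", "id": "a"},
                            "right": {"type": "Name", "id": "b"}}}],
    }

    def to_js(node):
        # One possible backend: naive lowering to JavaScript source.
        t = node["type"]
        if t == "FunctionDef":
            body = "\n  ".join(to_js(s) for s in node["body"])
            return (f"function {node['name']}"
                    f"({', '.join(node['params'])}) {{\n  {body}\n}}")
        if t == "Return":
            return f"return {to_js(node['value'])};"
        if t == "BinOp":
            return f"({to_js(node['left'])} {node['op']} {to_js(node['right'])})"
        if t == "Name":
            return node["id"]
        raise ValueError(f"unknown node type: {t}")

    print(json.dumps(program))  # what the model would manipulate
    print(to_js(program))       # what a human or runtime would see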


At that point why not just have LLMs generate bytecode in one shot?

Plenty of training data to go on, I'd imagine.


The code would be unreviewable.


It would also be harder for the LLM to work with. Much like with humans, the model's ability to understand and create code is deeply intertwined with, and inseparable from, its general NLP ability.


Why couldn't you use an LLM to generate source code from a prompt, compile it, then train a new LLM on the same prompt using the compiled output?

It seems no different in kind to me from image or audio generation.
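A sketch of what that pipeline could look like, using CPython's built-in compiler as a stand-in for the "compile" step; llm_generate() is a hypothetical placeholder for whatever model produced the source in the first place:

    import dis

    def llm_generate(prompt: str) -> str:
        # Placeholder for the first model's output.
        return "def square(x):\n    return x * x\n"

    prompt = "Write a function that squares a number."
    source = llm_generate(prompt)

    # Compile the generated source to a CPython code object...
    code = compile(source, "<generated>", "exec")

    # ...and keep (prompt, compiled output) as a training pair for a
    # second model, shown here as disassembled bytecode for readability.
    pair = (prompt, dis.Bytecode(code).dis())
    print(pair[1])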


...by a human :)


Hence it's very important in the transitional phase we're currently in, where LLMs can't do everything yet.


Would this be addressed by better documentation of code and APIs, as well as examples? All of this would go into the training materials and then become the body of knowledge.


Readability would probably be the sticking point.


> I could imagine a future where a new programming language is created to maximize llm success.

Who will write the useful training data without LLMs? I feel we are getting fewer and fewer new things; changes will be smaller and more incremental.



