
Yesterday I told ChatGPT:

> We're going to write a program in a "new" language that is a mix of Ruby and INTERCAL. We're going to take the "come from" statement and use it to allow "hijacking" the return of a function. Furthermore, we're going to do it conditionally. "come from <method> if <condition>" will execute the following block if <method> was executed and <condition> is true. In <condition>, "result" can be used to refer to the result of executing <method>.

And that was enough to get it to understand my example code and correctly infer what it was intended to return.
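For reference, here is a toy program in that hypothetical hybrid. The syntax follows the rules quoted above; the `price` method and the specific condition are my own invented example, not something from the original session:

```
def price(qty)
  qty * 10
end

# Hijack price's return whenever the result is zero or negative.
come from price if result <= 0
  raise "invalid price: #{result}"
end

price(3)   # returns 30 as usual
price(0)   # return is hijacked: the block runs and raises instead
```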

Given how little it took, I don't think you need much code in a new language to have a reasonable starting point, as long as you can give examples and document the differences. You can then use that on a few large projects that have good test suites and work through any breakage.




You still need libraries and the ecosystem.

If it can't find those, then with no training data to draw on, the best-case scenario is that it hallucinates APIs that don't exist.


If the LLM understands the language, it can aid in creating the libraries and ecosystem, because it can also translate code. I just tested this by having ChatGPT translate one of my Ruby scripts to Python, for example.

I don't like Crystal all that much, but it's similar enough to Ruby that if ChatGPT can handle Ruby->Python, it can handle Ruby->Crystal with relatively little work.

But it doesn't need to handle it flawlessly, because every new library you translate and fix up gives you a new codebase you can use to fine-tune it.


We need to go from RLHF to RLCF/RLIF/RLEF (Reinforcement Learning from Compiler/Interpreter/Execution Feedback).
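The core idea can be sketched in a few lines: instead of a human preference signal, score candidate programs by actually executing them. This is a minimal toy sketch, not any real RLHF/RLEF pipeline; the candidate strings and reward function are assumptions for illustration:

```python
# Toy "reinforcement learning from execution feedback":
# reward candidate programs by running them against test cases,
# and keep the highest-scoring one. In a real system the candidates
# would be sampled from a model and the rewards fed back into training.

def execution_reward(source, tests):
    """Reward = fraction of test cases passed; 0.0 if the code crashes."""
    namespace = {}
    try:
        exec(source, namespace)          # the "interpreter feedback" step
        f = namespace["double"]
        passed = sum(1 for x, want in tests if f(x) == want)
        return passed / len(tests)
    except Exception:
        return 0.0                       # non-running programs get no reward

# Imagine these three candidates were sampled from an LLM.
candidates = [
    "def double(x): return x + x",       # correct
    "def double(x): return x * 3",       # wrong on most inputs
    "def double(x): return x +",         # doesn't even parse
]
tests = [(1, 2), (5, 10), (0, 0)]

rewards = [execution_reward(c, tests) for c in candidates]
best = candidates[rewards.index(max(rewards))]
```

The point is that the reward signal is fully automatic: the compiler/interpreter is the judge, so no human labels are needed.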


But that's the thing: once the libraries are written, it should be able to just read, learn, and train on them.



