I wonder if the lack of Mog code in LLM training/example datasets could make it hard for models to produce good Mog code reliably.
It feels like a custom defined DSL (domain specific language) problem.
Models are good at generating code in languages that already have a large corpus of examples, documentation, and training data behind them. A brand new language may be well suited to LLMs in principle, but it is hard for models to produce it reliably until it becomes widely used, and it is hard for it to become widely used until models can already produce it well.