
If you instead put parentheses around the lexical sequences, then you wouldn't need syntax like `[3]` to denote length.

You also wouldn't need indentation levels to be syntactically meaningful.

You could also get rid of LLM tokens like square brackets, curly braces, colons, and commas.

And you could have objects nested to arbitrary depth.

All at roughly the same character count as TOON (sometimes more, sometimes less).

(I was telling someone over the weekend that there are only a few small wins for Lisps in most AI work right now. I hadn't considered that the printed syntax itself might have a use with these huge LLM black boxes.)
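To make the comparison concrete, here is a minimal sketch of what the parent is describing: a recursive converter from JSON-like data to an S-expression, where parentheses alone carry the nesting, so no brackets, braces, colons, commas, or length markers are needed. The `to_sexpr` function and the sample record are illustrative, not from any existing library.

```python
import json

def to_sexpr(value):
    """Render a JSON-like value as an S-expression string.

    Dicts become lists of (key value) pairs, lists become
    space-separated sequences; parens alone mark structure.
    """
    if isinstance(value, dict):
        return "(" + " ".join(
            f"({k} {to_sexpr(v)})" for k, v in value.items()
        ) + ")"
    if isinstance(value, list):
        return "(" + " ".join(to_sexpr(v) for v in value) + ")"
    if isinstance(value, str):
        # Bare symbols where possible; quote anything else.
        return value if value.isalnum() else json.dumps(value)
    return json.dumps(value)

data = {"users": [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]}
print(json.dumps(data, separators=(",", ":")))
# {"users":[{"id":1,"name":"alice"},{"id":2,"name":"bob"}]}
print(to_sexpr(data))
# ((users (((id 1) (name alice)) ((id 2) (name bob)))))
```

The two outputs are within a few characters of each other, and the S-expression nests to arbitrary depth with a single delimiter pair.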



Have you tried it? Models struggle to keep track of opening/closing braces, which is exactly why XML/CSV (or TOON) tends to work better than JSON.


What is the reason that current LLMs work better with XML than with JSON? Is it the names in the element tags?



