goose-'s comments

What an unexpected and fun read to find on HN.

Bonus tip: while focused on the overlapping image in the middle, jiggle your screen, and the diff will move around while the rest remains static. This helped me solve the impossible challenge instantly.

I'd love to learn more about the underlying mechanism here. Can anyone point me in the right direction?


My takeaway after scanning the paper -

In an ideal setting, a trained model learns exactly the real-world probability distribution and generates data indistinguishable from data sampled from the real world. Training on its own output would be harmless, but pointless, since the model is already a perfect representation of the real world.

Practically, however, a model is only a lossy approximation of the real-world probability distribution. Repeated self-training would simply compound the loss, amplifying both the probable and the improbable.
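A minimal sketch of that compounding, under a toy assumption I'm making up for illustration: the "real world" is a standard Gaussian, and each generation refits a Gaussian to a small sample of the previous generation's output. The variance MLE is biased low, so the tails wash out a little more every round:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real world" is a standard Gaussian.
mu, sigma = 0.0, 1.0
n = 5              # tiny sample per generation exaggerates the effect
generations = 500

for _ in range(generations):
    sample = rng.normal(mu, sigma, size=n)
    # Refit the model to its own output. sample.std() is the biased
    # MLE (ddof=0), so each generation loses a bit of the tails.
    mu, sigma = sample.mean(), sample.std()

print(f"final sigma = {sigma:.6f}")  # shrinks far below the true 1.0
```

Real model collapse in LLMs involves far more structure than a one-parameter Gaussian, but the loss-compounding shape is the same.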


Since Data Formulator performs data transformations on your behalf to produce the desired visualization, how can we verify that those transformations are not contaminated by LLM hallucinations, and ultimately that the visualization is valid?


We can’t. Without the driver, this car runs on probability. And that's all. A capable operator is still needed in the loop.


You can see the generated code.


Do you think the people this is made for can grasp the code?


This is a constant challenge! Code is the ultimate verification tool, but not everyone can read it.

Sometimes reading charts helps, sometimes looking at the data helps, and other times only the code can serve the verification purpose...
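One middle ground between trusting the chart and reading every line of generated code is to assert invariants that must survive the transformation. A hypothetical sketch, where `pivot_sales` stands in for whatever code the LLM produced:

```python
import pandas as pd

def pivot_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Stand-in for an LLM-generated transformation: total sales per region."""
    return df.groupby("region", as_index=False)["sales"].sum()

raw = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "sales":  [10, 20, 30, 40],
})

out = pivot_sales(raw)

# Invariants a tool could check automatically, no code-reading required:
assert out["sales"].sum() == raw["sales"].sum()   # nothing gained or lost
assert set(out["region"]) == set(raw["region"])   # no invented categories
assert len(out) <= len(raw)                       # aggregation only shrinks
```

Checks like these won't catch every hallucinated transform, but they turn "trust the model" into "trust a few properties of the data".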
