Bonus tip: while focused on the overlapping image in the middle, jiggle your screen, and the diff will move around while the rest remains static. This helped me solve the impossible challenge instantly.
I'd love to learn more about the underlying mechanism here. Can anyone point me in the right direction?
In an ideal setting, a trained model learns exactly the real-world probability distribution and generates data indistinguishable from samples drawn from the real world. Training on that data would be harmless, but pointless, since the model is already a perfect representation of the real world.
In practice, however, a model is only a lossy approximation of the real-world probability distribution. Repeated self-training would simply compound the loss, amplifying both the probable and the improbable.
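To make that concrete, here is a toy sketch of the compounding effect. It is not any particular model, just a 1-D Gaussian "world" and a model that estimates only mean and standard deviation from a finite sample; the sample size and generation count are arbitrary choices for illustration. Each generation is fit purely on the previous generation's output.

```python
# Toy illustration of self-training on a lossy approximation (not any real LLM):
# fit a Gaussian to data, sample from the fit, refit, and repeat.
import numpy as np

rng = np.random.default_rng(0)

true_mean, true_std = 0.0, 1.0
data = rng.normal(true_mean, true_std, size=200)  # the "real world" sample

for generation in range(25):
    # Lossy model: finite-sample estimates of mean and standard deviation.
    mu_hat, sigma_hat = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu_hat:+.3f}  std={sigma_hat:.3f}")
    # The next generation trains only on data produced by the current model.
    data = rng.normal(mu_hat, sigma_hat, size=200)
```

Running this typically shows the fitted parameters drifting away from the true ones over generations (the standard deviation often wandering downward), which is the simple version of the collapse being described.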
Since Data Formulator performs data transformations on your behalf to produce the desired visualization, how can we verify that those transformations are not contaminated by LLM hallucinations, and ultimately that the visualization itself is valid?
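I don't know what Data Formulator does internally, but one low-tech check is to recompute the derived table independently from the raw data and compare it with what the tool produced. A minimal sketch, assuming both tables can be exported as pandas DataFrames (the column names below are made up for illustration and are not Data Formulator's API):

```python
# Sanity-check an LLM-generated transformation by recomputing it from raw data.
import pandas as pd

raw = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "sales":  [100, 150, 200, 250],
})

# Hypothetical table the tool produced for the chart: total sales per region.
transformed = pd.DataFrame({
    "region": ["east", "west"],
    "total_sales": [250, 450],
})

# Recompute the same aggregate directly from the raw data.
expected = (
    raw.groupby("region", as_index=False)["sales"].sum()
       .rename(columns={"sales": "total_sales"})
)

match = transformed.sort_values("region").reset_index(drop=True).equals(
    expected.sort_values("region").reset_index(drop=True)
)
print("transformation matches raw data:", match)
```

If the numbers the chart depends on survive an independent recomputation, a hallucinated transformation would have to be wrong in a way that still reproduces the same aggregates, which narrows the risk considerably.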