
You're assuming I'm not aware of that, and didn't try multiple times. The original program wasn't the output of a single request; it took several retries and some refining, but all the retries produced something at least mostly correct.

Doesn't matter how many times I try it now, it just doesn't work.

>> You're assuming I'm not aware of that, and didn't try multiple times.

I'm not! What you describe is perfectly normal. Of course you'd need multiple tries to get the "right" output and of course if you keep trying you'll get different results. It's a stochastic process and you have very limited means to control the output.

If you sample from a model, you can expect a distribution of results, almost by definition. Until you've drawn a large enough number of samples, there's no way to tell which results are representative and which should surprise you. And what counts as "large enough" is hard to say for a model as large as a large language model.
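A quick way to build intuition for this is to simulate the sampling. The sketch below treats each attempt as an independent draw with a fixed success probability (the 0.7 is a made-up number, purely for illustration; nothing in the thread pins down a real rate) and shows how the empirical estimate only settles down once you've drawn many samples:

```python
import random

def estimate_success_rate(p_true, n_samples, seed=0):
    """Empirically estimate the success probability of a stochastic
    process (e.g. a model emitting a correct program) from n samples.
    p_true is a hypothetical per-try success probability."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    successes = sum(rng.random() < p_true for _ in range(n_samples))
    return successes / n_samples

# With few samples the estimate can be far from the true rate;
# only with many samples does it converge (law of large numbers).
for n in (5, 50, 5000):
    print(n, estimate_success_rate(0.7, n))
```

The point is that a handful of tries, early or late, tells you very little about the underlying distribution.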

>> Doesn't matter how many times I try it now, it just doesn't work.

Wanna bet? If you try long enough you'll eventually get results very similar to the ones you got originally. But it might take time. Or not. Either you got lucky the first few times and saw uncommon results, or you're getting unlucky now and seeing uncommon results. It's hard to know until you've spent a lot of time systematically testing the model.
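To put a number on "got lucky early, unlucky now": if each try succeeds independently with some per-try probability p (hypothetical here; the thread gives no actual figure), the chance of any particular streak is just a product of probabilities, and even unlikely-looking streaks are entirely consistent with one and the same model.

```python
def streak_probability(p, successes, failures):
    """Probability that `successes` independent tries all succeed and
    the next `failures` tries all fail, given a per-try success
    probability p (hypothetical, purely for illustration)."""
    return p ** successes * (1 - p) ** failures

# E.g. with p = 0.5: three early successes followed by five failures
# is roughly a 1-in-256 event, yet needs no change in the model at all.
print(streak_probability(0.5, 3, 5))  # 0.5 ** 8 = 0.00390625
```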

Statistics is a bitch. Not least because you need to draw lots of samples and you never know how close you are to the true distribution.
