Hacker News

Your blog post has so many errors that I don't even know where to start. As another poster mentioned, the area under any (non-degenerate) probability density function is 1 by definition, whether it's uniform, Gaussian, standardized or not. What you described as a "standard uniform distribution" is really a degenerate distribution[1], meaning that you assume no uncertainty at all (stdev = 0). There's nothing "uniform" about that; you might just as well start with a Gaussian with stdev = 0.
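To make the "area is 1 by definition" point concrete, here's a quick numeric sketch (my own illustration, not from the blog post), integrating a uniform density and a standard normal density with a midpoint Riemann sum:

```python
import math

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Uniform density on [0, 1] and the standard normal density.
uniform_pdf = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0
gauss_pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

area_uniform = riemann(uniform_pdf, 0.0, 1.0)
area_gauss = riemann(gauss_pdf, -10.0, 10.0)  # tails beyond +-10 are negligible
print(area_uniform, area_gauss)  # both ~ 1.0
```

A degenerate distribution, by contrast, has no density at all (all its mass sits on a single point), so there is nothing to integrate.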

"converting from a standard uniform distribution to a Gaussian distribution" as you described does not make any sense at all. If you replace an initial assumption of a degenerate distribution with a Gaussian, as you seem to be doing, you replace a no-uncertainty (stdev=0) assumption with some uncertainty (so the uncertainty blow-up is infinite), but it doesn't affect point estimates such as the mean or median, unless you make separate assumptions about that. There is nothing in your story that leads to multiplying some initial time estimate by sqrt(pi). The only tenuous connection with sqrt(pi) in the whole story is that the Gaussian integral happens to be sqrt(pi). There are some deep mathematical reasons for that, which has to do with polar coordinates transformations. But it has nothing to do with adjusting uncertainties or best estimates.

[1] https://en.wikipedia.org/wiki/Degenerate_distribution




Thank you for your valuable feedback; it will take some time to process, and I will adjust the blog post as my insights grow (potentially discarding the whole idea, but for me it's a learning process).



