
A (deep) NN is just a really complicated data model; the way one treats the estimation of its parameters and the prediction of new data determines whether one is a Bayesian or a frequentist. The Bayesian assigns a distribution to the parameters and then conditions on the data to obtain a posterior distribution, from which a posterior predictive distribution for new data is derived. The frequentist treats the parameters as fixed quantities and estimates them from the likelihood alone, e.g., with maximum likelihood (potentially using some hacks such as regularization, which can themselves be given a Bayesian interpretation).
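
A minimal sketch of the difference in R, using a made-up conjugate toy model instead of an actual NN: for a normal mean with known variance, the frequentist MLE is just the sample mean, while the Bayesian combines a normal prior with the likelihood and gets the posterior (and the posterior predictive) in closed form.

set.seed(42)
sigma <- 1
y <- rnorm(20, mean = 1.5, sd = sigma)  # made-up data, sigma assumed known
n <- length(y)

# Frequentist: maximum likelihood estimate of the mean
mle <- mean(y)

# Bayesian: conjugate normal prior N(mu0, tau0^2) on the mean
mu0 <- 0; tau0 <- 2
post_var  <- 1 / (1/tau0^2 + n/sigma^2)
post_mean <- post_var * (mu0/tau0^2 + sum(y)/sigma^2)

# Posterior predictive for a new observation: N(post_mean, post_var + sigma^2)
c(mle = mle, post_mean = post_mean, pred_sd = sqrt(post_var + sigma^2))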


Probably this article: Hoekstra, R., Morey, R.D., Rouder, J.N. et al. Robust misinterpretation of confidence intervals. Psychon Bull Rev 21, 1157–1164 (2014). https://doi.org/10.3758/s13423-013-0572-3

Another good article on the misinterpretation of p-values and confidence intervals is: Greenland, S., Senn, S.J., Rothman, K.J. et al. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. Eur J Epidemiol 31, 337–350 (2016). https://doi.org/10.1007/s10654-016-0149-3
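
A quick R simulation of the point both papers stress (numbers made up): the "95 percent" is a long-run property of the procedure that generates the intervals, not a probability statement about any single computed interval.

set.seed(123)
mu <- 1  # true mean, known here because we simulate
covers <- replicate(10000, {
  x <- rnorm(25, mean = mu)
  ci <- t.test(x)$conf.int
  ci[1] < mu && mu < ci[2]  # does this particular interval cover the truth?
})
mean(covers)  # long-run coverage, close to 0.95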


I agree that there are many things Fisher got wrong (eugenics, tobacco lobbying, etc.), but his contributions to statistics (e.g., maximum likelihood or ANOVA) and genetics are among the most fundamental of the 20th century.


And it's all well-documented. And it's not all that damning. Here's a pretty comprehensive article about it: https://www.nature.com/articles/s41437-020-00394-6. It ends with a well-chosen quote from Fisher:

“More attention to the History of Science is needed, as much by scientists as by historians, and especially by biologists, and this should mean a deliberate attempt to understand the thoughts of the great masters of the past, to see in what circumstances or intellectual milieu their ideas were formed, where they took the wrong turning or stopped short on the right track.”


Link's Awakening is such a weird but fascinating entry in the Zelda series. I was not surprised to learn that its story was directly influenced by Twin Peaks.


My fav Zelda to date.


Tell me more!

EDIT: Ok it was easy to google. TIL.


The Gaming Historian has a good video about the development of the game (https://youtu.be/pfvk6CJ3v34)


I love this.


Nintendo ninjas coming in 3, 2, 1,....


Try the following R code:

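# Fisher's exact test of a 2x2 contingency table; matrix() fills
# column-wise, so the table rows are (786, 22999) and (298, 11156)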
fisher.test(matrix(c(786, 298, 22999, 11156), nrow = 2))
#> Fisher's Exact Test for Count Data
#>
#> data:  matrix(c(786, 298, 22999, 11156), nrow = 2)
#> p-value = 0.0003266
#> alternative hypothesis: true odds ratio is not equal to 1
#> 95 percent confidence interval:
#>  1.116046 1.469612
#> sample estimates:
#> odds ratio
#>   1.279435


Do you know about Claude Shannon's master's thesis?


Guetzli means cookie in Swiss German; I wonder what inspired the devs to choose that name.


Another compression algo is Brotli (or Brötli).

Also a popular Swiss pastry.


Brotli, Zopfli, Guetzli, Butteraugli, ... Google's Zurich branch uses this particular naming scheme for their compression research projects.


What other Swiss foods ending in -li can we expect future compression algorithms to be named after?


Weggli (a small bread that tastes a bit like a Zopf), Hörnli (the kind of pasta Americans use to make mac and cheese), Mütschli (a small bread that is a bit harder than a Weggli), Schnitteli (a piece of bread with jam or honey, etc.), Schöggeli (a small chocolate), etc.

Basically, you can add -li to almost any word in Swiss German, and there are already lots of words which end with -li. It's a way to make a word sound "cute".

As an example, "Auto" means car, but if we call it "Autöli" it means that it's either a toy car or just a little (cute) car :)


I'm guessing Gipfeli


It exists! https://github.com/google/gipfeli

It was aiming for better compression than LZ4 at much higher speeds than deflate. Now brotli and zstd on their faster settings mostly beat it in that niche.


Wow haha, that is amazing. Thanks for the find!


It's such a tragedy that he died so young. Prof. MacKay was a role model who impacted our world in many positive ways (e.g., his work in information theory, Bayesian statistics, and environmental science). I recommend his lectures on information theory, which are available on YouTube [1]. He also documented the progress of his cancer in great detail on his blog [2], which is a very interesting (and sad) read.

[1] https://youtu.be/BCiZc0n6COY

[2] http://itila.blogspot.com/?m=1


The ASA recently published a new statement which is more optimistic about the use of p-values [1]. I myself also think that correctly used p-values are, in many situations, a good tool for making sense of data. Of course, a decision should never be based on a p-value alone, but the same could be said about confidence/credible intervals, Bayes factors, relative belief ratios, and any other inferential tool available (and I'm saying this as someone who does research in Bayesian hypothesis testing methodology). Data analysts always need to use common sense and put the data at hand into a broader context.

[1] https://projecteuclid.org/journals/annals-of-applied-statist...
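
As a toy illustration in R (made-up data) of how these summaries hang together: the p-value and the confidence interval of a t-test come from the same sampling machinery, and the two-sided 95 percent interval excludes zero exactly when p < 0.05.

set.seed(1)
x <- rnorm(30, mean = 0.4)  # made-up data
tt <- t.test(x)             # H0: true mean equals 0
tt$p.value                  # p-value for the point null
tt$conf.int                 # 95 percent confidence interval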


Is the nuance here that the ASA is OK with p-values but not OK with the rhetorical phrasings around statistical significance? My take is that it is easy to casually misinterpret or misrepresent statistical results because of how fuzzy the language around it all is. Phrases like "statistically significant" imply a certain kind of causality to the reader, when the actual rigorous claims are very specific and nuanced. Moving away from such soft phrasings might force people to stick to precise and narrow claims, whereas the normalization of soft phrasings makes room for bad claims or bad interpretations.

