
Nobody evaluates a model simply on how well it fits the data and how many parameters it has; you also look at how well the model parameters are constrained and what the uncertainty bands on the fit are.

The existing practice isn't to blindly count a model's parameters without considering the actual fit; it's a strawman to pretend that it is. A model that statistically overfits the data is just as suspect as one that underfits it. If you try to publish a paper fitting a model to some data with a chi-squared value of 0.01, it will be rejected. It doesn't matter whether the model has one parameter or one hundred: a fit that tight is clearly encoding information about the particular dataset rather than representing a general model. The model presented in this paper would have a chi-squared value of essentially zero, and anyone who tried to present it at a conference would be laughed out of the room.
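To make the point concrete, here is a minimal sketch (my own synthetic example, not from the paper) of why a near-zero chi-squared is a red flag: with Gaussian noise of known sigma, a sensible model should have a reduced chi-squared near one, while a model with as many free parameters as data points interpolates the noise and drives chi-squared to essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a straight line plus Gaussian noise of known sigma.
x = np.linspace(0.0, 1.0, 10)
sigma = 0.1
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, x.size)

def reduced_chi2(y, y_model, sigma, n_params):
    """Chi-squared per degree of freedom."""
    chi2 = np.sum(((y - y_model) / sigma) ** 2)
    return chi2 / (y.size - n_params)

# Sensible 2-parameter linear fit: reduced chi-squared should be ~1.
coef_lin = np.polyfit(x, y, 1)
rchi2_lin = reduced_chi2(y, np.polyval(coef_lin, x), sigma, 2)
print(rchi2_lin)

# Degree-9 polynomial: 10 parameters for 10 points interpolates the
# data (noise included) exactly, so the raw chi-squared is essentially
# zero -- the overfit case, and there are no degrees of freedom left.
coef_over = np.polyfit(x, y, 9)
chi2_over = np.sum(((y - np.polyval(coef_over, x)) / sigma) ** 2)
print(chi2_over)
```

The usual diagnostic follows directly: reduced chi-squared far below one means the model has soaked up the noise, far above one means it fails to describe the data, and in neither case is the raw parameter count the deciding factor.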



