Hacker News

A lot of the talk about machine learning reminds me of thermodynamics. There are some states we can say will occur, and we can describe what a state is composed of in terms of microstates with certain probabilities. For some questions we have definite answers; in other situations we have to settle for a big-picture view. It all depends on the measure of the space you are working in. Nevertheless, there are ways to quantify the error of machine learning routines, and there are mathematically sound ways to reduce that error; intuition gives us even more models (to test). I don't think the debate should be framed as deterministic vs. probabilistic guarantees. The question should be more about how we can make assumptions that yield better models, and consistently test them along the way.
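The point about quantifying error can be made concrete. Here's a minimal sketch (the data, model, and loss threshold are all hypothetical, chosen just for illustration): fit a model on training data, measure its error on a held-out test set, and use Hoeffding's inequality to bound how far that test error can stray from the true error.

```python
import math
import random

random.seed(0)

# Hypothetical data: a noisy linear relationship y = 2x + noise.
def sample(n):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 0.1) for x in xs]
    return xs, ys

# Fit a slope-through-origin model by least squares on the training set.
train_x, train_y = sample(200)
slope = sum(x * y for x, y in zip(train_x, train_y)) / sum(x * x for x in train_x)

# Quantify error on a held-out test set. The loss is thresholded to {0, 1}
# (prediction off by more than 0.5 counts as a miss) so it is bounded,
# which is what Hoeffding's inequality requires.
test_x, test_y = sample(1000)
losses = [1.0 if abs(slope * x - y) > 0.5 else 0.0 for x, y in zip(test_x, test_y)]
test_err = sum(losses) / len(losses)

# Hoeffding: with probability >= 1 - delta, the true error is within
# eps = sqrt(ln(2/delta) / (2n)) of the measured test error.
delta = 0.05
eps = math.sqrt(math.log(2 / delta) / (2 * len(losses)))
print(f"test error = {test_err:.3f}; with 95% confidence, true error <= {test_err + eps:.3f}")
```

This is the "definite answer" flavor of guarantee: probabilistic, but quantitative, and it holds regardless of how the model was chosen, as long as the test set was untouched during training.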



