
Hashed probably means that some features are mapped to a desired perceptron weight via a hash function. This serves as an implicit regularization and can be much more efficient (no need for a predetermined feature vector, sparse representations, etc.). It's called the hash trick in ML. A perceptron is a single-layer NN. A dynamic predictor is a branch predictor that adapts to program input, rather than relying on some predetermined state (like a formula that was shown to be good enough for a collection of programs).
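
For anyone curious, here's a rough sketch of what that looks like. The table size, history length, threshold, and hash are all made-up illustration values, not any real design:

    #include <stdint.h>
    #include <stdbool.h>

    /* Minimal sketch of a hashed perceptron predictor. All sizes, the
     * threshold, and the hash constant are made-up illustration values. */
    #define TABLE_SIZE  1024  /* shared weight table entries */
    #define HISTORY_LEN 16    /* global history bits used as features */
    #define THRESHOLD   24    /* also train when |sum| is below this */

    static int8_t   weights[TABLE_SIZE];
    static uint32_t ghist;    /* global branch-outcome history */

    /* Hash trick: map (branch PC, history position) to a weight slot,
     * so no per-branch weight vector has to be stored. */
    static uint32_t hash_feature(uint64_t pc, int i) {
        return (uint32_t)((pc ^ ((uint64_t)i * 0x9E3779B97F4A7C15ULL))
                          % TABLE_SIZE);
    }

    /* Predict taken if the weighted vote over history bits is >= 0. */
    static bool predict(uint64_t pc, int *sum) {
        *sum = 0;
        for (int i = 0; i < HISTORY_LEN; i++)
            *sum += (((ghist >> i) & 1) ? 1 : -1)
                    * weights[hash_feature(pc, i)];
        return *sum >= 0;
    }

    /* Perceptron rule: update on a misprediction or low-confidence sum. */
    static void train(uint64_t pc, bool taken, int sum) {
        if ((sum >= 0) != taken || (sum < THRESHOLD && sum > -THRESHOLD)) {
            for (int i = 0; i < HISTORY_LEN; i++) {
                uint32_t idx = hash_feature(pc, i);
                int bit = ((ghist >> i) & 1) ? 1 : -1;
                int w = weights[idx] + (taken ? 1 : -1) * bit;
                if (w >  63) w =  63;  /* saturate the small counters */
                if (w < -64) w = -64;
                weights[idx] = (int8_t)w;
            }
        }
        ghist = (ghist << 1) | (taken ? 1u : 0u);  /* shift in outcome */
    }

Note how collisions in hash_feature just mean two features share a weight, which is exactly the implicit regularization trade-off.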



In this case, I'm pretty sure it means that multiple instruction addresses are hashed to the same perceptron (or maybe what you said is effectively the same thing in ML-speak).

Since you don't have a branch predictor for each address, you share the same prediction data across multiple addresses (this is one often overlooked cost of heavily branchy code).
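
Toy example of that aliasing (the index function and table size are assumptions, not any real CPU's design):

    #include <stdio.h>
    #include <stdint.h>

    #define TABLE_BITS 10
    #define TABLE_SIZE (1u << TABLE_BITS)

    /* Typical cheap index: low PC bits, after dropping the low 2 bits
     * for instruction alignment. Purely illustrative. */
    static uint32_t table_index(uint64_t pc) {
        return (uint32_t)((pc >> 2) & (TABLE_SIZE - 1));
    }

    int main(void) {
        uint64_t branch_a = 0x400a10;  /* hypothetical branch addresses */
        uint64_t branch_b = branch_a + (TABLE_SIZE << 2);

        /* Both print entry 644: the two branches train the same state,
         * which hurts if their behavior differs. */
        printf("A -> entry %u\n", table_index(branch_a));
        printf("B -> entry %u\n", table_index(branch_b));
        return 0;
    }

So two hot branches far apart in the binary can silently fight over one predictor entry.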



