You know what would be great for learning probability? A set of easy problems (graded in difficulty) and their answers. There are many introductions in text form, but the best presentation would be one where you also get to play with the notions, and initially this has to be easy enough.
It's like the difference between supervised learning and RL. In one you just get a dataset; in the other you get a playground - a dynamic dataset where you can explore and test hypotheses.
Selecting the problems requires real flair: it's one thing to understand the subject, and quite another to understand the usual pitfalls and stumbling blocks of students learning it.
3) The concept of a threshold has been challenged. Eugene Izhikevich, for example, presents counterexamples to the idea of a fixed threshold (cases where a spike was expected and doesn't happen, or where a spike was not expected and does happen), and provides an alternative explanation involving dynamical systems.
So, while useful, it is important to understand that most artificial neural models are simplified abstractions. Eventually we will figure out why biology does what it does, e.g. how much of it has to do with information processing, how much with keeping neurons alive, and how much with passing information around.
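For the curious: Izhikevich's "simple model" is itself only a few lines. The sketch below is a plain Euler integration of his published two-variable equations (the parameter values here are the commonly cited "regular spiking" set; the drive current `I` and step size are my choices for illustration). Note there is no explicit threshold parameter driving the spike - the quadratic dynamics carry the voltage upward, and the check at 30 mV is just the reset convention:

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.25, steps=4000):
    """Euler simulation of the Izhikevich simple model; returns spike times (ms)."""
    v = -65.0          # membrane potential (mV)
    u = b * v          # recovery variable
    spikes = []
    for t in range(steps):
        if v >= 30.0:  # after-spike reset rule, not a firing threshold
            spikes.append(t * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# With a steady input current the model fires repeatedly;
# with no input it settles to rest and never spikes.
print(len(izhikevich(I=10.0)), len(izhikevich(I=0.0)))
```

Playing with `a, b, c, d` reproduces many of the firing patterns (bursting, chattering, etc.) that threshold-based models miss, which is the heart of his argument.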
I'm enjoying the Probability for Everyone post. I'm also quickly reminded how complex the notation used in Probability Theory is. Lowercase, uppercase, script uppercase, double-struck uppercase, and uppercase Greek letters all have specific meanings (sometimes multiple meanings), and they get held together with set-theory symbols. It's even worse since some authors use different characters for inconsequential distinctions, and then there are situations (online forums) where the notation can't be typeset and substitutes are made. I even recall some books using bold letters, which made note-taking a real hassle.
Well, all of that is really just Hungarian Notation [0] expressed with typesetting rather than extra characters, so it's not 100% necessary to memorize -- just look at the definition again. The exception is standard sets, like the Reals or Natural Numbers etc. (which use double-struck letters). But that's pretty standardized.
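To make the "Hungarian Notation" analogy concrete, here is one common convention (a sketch only - usage genuinely varies by author) written out in LaTeX:

```latex
% One common probability-notation convention (authors vary):
%   \omega \in \Omega         lowercase Greek: an outcome; uppercase Greek: the sample space
%   \mathcal{F}               script uppercase: the sigma-algebra of events
%   \mathbb{P}, \mathbb{R}    double-struck: the probability measure, standard sets
%   X                         uppercase italic: a random variable
%   x                         lowercase italic: a particular value X takes
(\Omega, \mathcal{F}, \mathbb{P}), \qquad
X : \Omega \to \mathbb{R}, \qquad
\mathbb{P}(X = x) \quad \text{for } x \in \mathbb{R}
```

So when you see `P(X = x)`, the case alone tells you which symbol is the random quantity and which is the fixed value - that's the "type prefix" doing its job.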