I send this to students who feel discouraged by university-level topics. If math professors find things difficult, then you're probably OK... just keep hacking at it!
I believe a decade from now we'll see the "unbundling" of the three types of value universities provide: /1/ employability signal, /2/ knowledge, /3/ social. I don't know what will replace /1/ and /3/, but I look forward to the future where /2/ can be made more widely accessible (without the debt).
Imagine a future where everyone knows 101-level concepts in chemistry, biology, physics, etc. and is able to function in society informed by all this knowledge, rather than deferring to "experts."
It's not a complete book, but the topics it covers are well explained.
Maybe the fact that it is concise is a feature: you can quickly get an overview of each topic, without all the details a textbook author would have to cover...
It would be nice if you had a contiguous block of time to sit down and learn CALC, but it's not required. You can think of it as a Mario World level with several stages that you can complete one at a time. The bottom half of the page in this concept map shows the world map for CALC world: https://minireference.com/static/conceptmaps/math_and_physic...
The data-poor and computation-poor context of old-school statistics definitely biased the methods towards the "recipe" approach scientists are supposed to follow mechanically, where each recipe is some predefined sequence of steps, justified by analytical approximations to a sampling distribution (given lots of assumptions).
In modern computation-rich days, we can get away from the recipes by using resampling methods (e.g. permutation tests and bootstrap), so we don't need the analytical approximation formulas anymore.
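To make that concrete, here's a minimal sketch of a two-sample permutation test in Python/NumPy (the function name and the toy data are made up for illustration):

  import numpy as np

  def permutation_test_means(x, y, n_permutations=10_000, seed=None):
      # Two-sided p-value: the fraction of label shufflings whose
      # |difference in means| is at least as large as the observed one.
      rng = np.random.default_rng(seed)
      observed = abs(np.mean(x) - np.mean(y))
      pooled = np.concatenate([x, y])
      count = 0
      for _ in range(n_permutations):
          shuffled = rng.permutation(pooled)
          diff = abs(shuffled[:len(x)].mean() - shuffled[len(x):].mean())
          if diff >= observed:
              count += 1
      return count / n_permutations

  # toy data: two small samples
  x = [12.1, 14.3, 13.8, 11.9, 15.0]
  y = [10.2, 11.5, 12.0, 10.8, 11.1]
  print(permutation_test_means(x, y, seed=42))

No t-distribution and no normality assumption needed: the null distribution is generated directly by shuffling the group labels.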
I think there is still room for small sample methods though... it's not like biological and social sciences are dealing with very large samples.
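The bootstrap side is just as short. Here is a percentile-interval sketch (again, the name and data are illustrative); the procedure runs the same way at n = 8 as at n = 8000, and whether its coverage holds up at small n is exactly where the small-sample caution applies:

  import numpy as np

  def bootstrap_ci_mean(sample, n_resamples=10_000, alpha=0.05, seed=None):
      # Percentile bootstrap CI for the mean: resample with replacement,
      # collect the resampled means, then take the alpha/2 and
      # 1 - alpha/2 quantiles of that distribution.
      rng = np.random.default_rng(seed)
      sample = np.asarray(sample)
      means = np.array([
          rng.choice(sample, size=len(sample), replace=True).mean()
          for _ in range(n_resamples)
      ])
      return tuple(np.quantile(means, [alpha / 2, 1 - alpha / 2]))

  data = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 5.1]  # n = 8
  print(bootstrap_ci_mean(data, seed=1))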
> Alternative explanation I: Nonsignificant results may not be reported at all and thus they won’t appear in the dataset.
> This cannot explain the results. Remember that when results should be significant because you have adequate power to detect a real effect, only a small percentage of p-values will be in the 0.05 to 0.01 range. People should be reporting p-values that are much lower for typical significant effects, but they aren't: they're reporting suspicious effects. Publication bias is usually a bias towards significance, not a bias towards a p-value between 0.05 and 0.01. People adjust their marginal estimates of interest to get at least below 0.05, but they don't take their extremely significant estimates and adjust them so they're in the suspicious range.
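The quoted intuition is easy to check with a quick simulation (a rough sketch, assuming two-sample t-tests at roughly 90% power; the sample size and effect size here are made-up illustrations):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  # d = 0.66 with n = 50 per group gives roughly 90% power
  n, effect, sims = 50, 0.66, 20_000
  pvals = np.array([
      stats.ttest_ind(rng.normal(0, 1, n), rng.normal(effect, 1, n)).pvalue
      for _ in range(sims)
  ])
  sig = pvals < 0.05
  print(f"power ~= {sig.mean():.2f}")
  print(f"share of significant p-values in the 0.01-0.05 band: "
        f"{(pvals[sig] > 0.01).mean():.2f}")

With a real effect and high power, most significant p-values land well below 0.01, so a pile-up in the 0.05 to 0.01 band is indeed suspicious.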
I hope to finish the book by the end of this summer (the book itself won't be free, but all the notebooks and additional exercises will be free online at nobsstats.com).
My colleague swears by her iPad, but I always go the paper route.