My pick (I know the article just said "science" and didn't leave the question open): Noether's Theorem, which implies that symmetries and conservation laws come in matched pairs in physics. From this you can derive that conservation of momentum isn't just some empirical relation we have historically observed, but is critically linked to the symmetry of translation. Once you learn this you have a lot less sympathy for reaction-less drive con-artists.
> From this you can derive that conservation of momentum isn't just some empirical relation we have historically observed, but is critically linked to the symmetry of translation.
It changes conservation of momentum from an empirical relation we have historically observed to something that is derivable from a different empirical relation (translation symmetry) that we have historically observed. It's still ultimately an observed empirical relation.
It gives you something more to check. A con-man adding n^3 terms to the force equations and claiming that's the basis of their reaction-less drive is easy to dismiss: as long as their whack-job theory is still translation-invariant, even it implies conservation of momentum (and hence no reaction-less drive).
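For reference, the single-particle version of the derivation both comments are pointing at is essentially one line (a textbook sketch, with L the Lagrangian):

    If L(x, \dot{x}) does not depend on x (translation symmetry), the Euler-Lagrange equation
        \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
    reduces to
        \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = 0,
    so the momentum p = \partial L / \partial \dot{x} is conserved.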
I too have a great fondness for Noether's First Theorem! However, just a slight technical correction: the theorem only goes one way. A conservation law alone doesn't necessarily imply a (continuous) symmetry of the system. For example, there's no symmetry prescription for baryon or lepton number conservation. Of course, that doesn't bar them from arising from symmetries of some awesome future theory...
Along those lines (no pun intended), I think every STEM college graduate ought to be able to prove that the shortest path between 2 points is a line segment instead of taking it for granted.
EDIT: Whoops! I just realized the entire point of this problem was missed, and it's my fault -- I worded my comment pretty poorly in hindsight.
I meant to say that STEM college graduates ought to be able to solve problems such as this one. That would include less-trivial instances, like the light-ray example I gave in another comment where you have to find the path that light travels when it crosses between media with different velocities. Unfortunately I made the mistake of posing the most trivial instance of this kind of problem as an example (Euclidean vector space), which resulted in many people missing the point I was trying to make. Sorry about that. The point wasn't solving this specific problem, the point was this kind of problem, and my example was overly simplified and therefore not representative of what I was trying to convey.
With that aside, note that this is still not a basic geometry problem, since you need to be able to discuss the lengths of arbitrary smooth curves, not just lines/angles/circular arcs.
I'm trying not to be a smart aleck and instead I'm putting the question into terms anyone who's finished high school would understand. We're not talking general relativity here, we're just talking about plain Euclidean geometry. The kind you (or maybe everyone except you) automatically think of... the kind you learn in middle/high school.
> I'm trying not to be a smart aleck and instead I'm putting the question into terms anyone who's finished high school would understand.
How's that being a smart aleck? A geodesic is a very easy concept to understand. Not only that, but it's ridiculous to dumb down information when the whole point of the conversation was to understand truths. If this question were about history, and you said that Columbus discovered America, it would be a bit ridiculous for you to criticize me for calling you out on it, right? A straight line being the shortest distance between two points is about as true as Columbus discovering America. You have to add a bunch of qualifiers for either to be remotely true.
> We're not talking general relativity here, we're just talking about plain Euclidean geometry.
See this is the thing, you don't even need to understand general relativity to understand this. Airplane pilots understand this, they fly on great circles because of this concept, which has absolutely nothing to do with general relativity.
> or maybe everyone except you
Or maybe anyone with a math degree, 4 or 5 physics classes, an aerospace degree, airplane pilots, optical engineers. It's really not that esoteric.
Looks like we found another person who doesn't know how to do it :) look it up! In pre-college geometry this isn't something you prove, it's something you postulate and take for granted.
Well it's been 35 years ... as I recall it's an indirect proof, it starts with assuming there is some other shorter path, and then showing that to be impossible by creating a contradiction of other proven theorems and postulates.
> Well it's been 35 years ... as I recall it's an indirect proof, it starts with assuming there is some other shorter path, and then showing that to be impossible by creating a contradiction of other proven theorems and postulates.
Then I'm gonna call B.S. on this until you can show me the proof somewhere. Every single time I've searched this the only rigorous proof I've found has been using calculus (often calculus of variations).
1. Prove the triangle inequality, & extend via induction to show that [A] a straight line is the shortest polygonal path, and that [B] adding a point to a polygonal path can only make it longer.
2. Arc length is defined as the limit of the lengths of polygonal approximations to a curve. From (B), we know our approximations would approach the arc length from below.
So if a curve A->B has an arc length, there's a polygon A->B which is at least as short. And from [A] we know there's no polygon shorter than the straight line.
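Sketched in symbols, with arc length taken as the sup over inscribed polygonal paths:

    |AB| \le |A P_1| + |P_1 B| \le \dots \le \sum_{i=1}^{n} |P_{i-1} P_i|
        \text{ for any polygonal path } A = P_0, P_1, \dots, P_n = B,

    \mathrm{length}(c) = \sup_P \sum_i |P_{i-1} P_i| \ge |AB|
        \text{ for any rectifiable curve } c \text{ from } A \text{ to } B.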
This is weaker than proofs you can do with calculus -- it doesn't prove uniqueness, and it completely ignores non-rectifiable curves, for instance. But I suspect that's what ams6110 is remembering.
> 2. Arc length is defined as the limit of the lengths of polygonal approximations to a curve. From (B), we know our approximations would approach the arc length from below.
You call this "geometry"? Since when does geometry involve a function limiting and bounding process? We didn't even know what a limit was until calculus...
Well maybe I'm mistaken. It struck me that this is one of the proofs we discussed in class. I took calculus in high school also, maybe we discussed it there. However I've forgotten most of the geometry, and calculus I ever learned, having never had a practical professional need to use any of it. I'm not sure whether that reinforces or contradicts your original point on this subthread.
Variational minimization is one approach. The details of the proof would come down to what sort of space you're working in, though. Euclidean spaces are a lot simpler to prove this in than more complicated spaces with different notions of intrinsic and extrinsic curvature. For example, the shortest path on a sphere is still a "straight line" in the intrinsic sense (a geodesic), but it's extrinsically curved and also might not be unique. On the other hand, the proof in a one-dimensional Euclidean space sort of comes from the definition of the metric over the space.
Since the notion of a curved path (maybe aside from circular arcs) is not discussed in basic geometry, the problem would be impossible to even state there.
If you take as given the notion of arc length as the least upper bound on the lengths of piecewise-linear paths between the two points, then the triangle inequality of basic geometry suffices. This seems no more hand-wavy a proof than something about "light-travelling-at-constant-speed" analogies.
I don't know what does or doesn't get into what, but the argument for the shortest path from A to B being the direct straight-line path is conceptually simple, like so: let f(t) be a curve which passes from A to B. We can split f(t) into coordinate functions x(t), y(t), …, and we can choose a coordinate system such that A and B differ only on their x coordinate, with B having the larger x coordinate.
The total length of the curve described by f is the integral of |df/dt| dt, which is to say, the integral of sqrt((dx/dt)^2 + (dy/dt)^2 + ...) dt. This integrand is at least as large as dx/dt dt. But the integral of this latter integrand is the difference of B's and A's x coordinates, which is to say, the length of the direct straight-line path from A to B.
Thus, every path from A to B is at least as long as the direct straight-line path. [In fact, paying more careful attention, we have by this same argument the slightly stronger result that every other path is strictly longer than the straight-line path]
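In symbols, the whole argument is just:

    \mathrm{length}(f) = \int_a^b \sqrt{\Big(\frac{dx}{dt}\Big)^2 + \Big(\frac{dy}{dt}\Big)^2 + \cdots}\,dt
        \;\ge\; \int_a^b \frac{dx}{dt}\,dt \;=\; x_B - x_A,

which is the length of the straight-line segment from A to B, given the choice of coordinates above.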
Now, without some context, this is not a good proof because you don't state your assumptions/axioms up front. Did you already prove that sqrt(x^2 + y^2) >= x? Did you already prove that this particular integral exists? That you are allowed to choose a coordinate system so that A and B are both on the x axis?
If you take all of the above for granted (or "obvious"), you may as well say that the straight line being shortest is obvious.
Every proof is like this [except in the particular context of 100% formalized proof systems agreed upon in all details beforehand, but that is a niche context, not the default for mathematical discussion or thought].
I spelt out the parts of the argument that might be unfamiliar. I'm sure anyone here can readily convince themselves that sqrt(x^2 + y^2) >= x [x^2 is increasing for positive inputs, thus so is its inverse function, so sqrt(x^2 + y^2) >= sqrt(x^2) = |x| >= x], and that you can choose a coordinate system so that A and B are both on the x axis [take A to be the origin; take the vector from A to B to give the positive x axis, and other axes accordingly perpendicular to this one], and so on.
The integral business can be phrased however one likes, formally or informally, to account for the idea of a rectifiable curve as one whose length is given by treating it as built of many component straight-line pieces. The conceptual argument is the same, regardless of any particular choice of technical framework.
And if you furthermore aren't sure that, say, x^2 is increasing on positive inputs, we can explore that as well. If you furthermore aren't sold on modus ponens, we can explore that as well. Everything is infinitely explorable (and malleable!). But I noted only what here seemed directly relevant to the curiosity at hand.
I think it was exactly the right level of rigor for a HN comment. Gets the important idea across, doesn't get bogged down in the details that would be easy enough to fill in.
I'm not totally sure; I think it depends on what exactly you classify as calculus of variations, and what assumptions you're willing to make along the way. For what it's worth, I don't think I know how to prove the most general case possible. But given that I encountered this in physics rather than in math, I was satisfied with proving the physical analog, which would be the claim that the path of light in Euclidean space is always linear.

Assuming we're talking about light allowed me to make 3 useful assumptions: (1) that the speed of the particle doing the traveling is constant, which comes in quite handy in the proof [1], (2) that there is only 1 independent variable (i.e., time) rather than 3 space coordinates, and (3) that our functions are all sufficiently smooth and such.

The nice thing about this proof is it doesn't need the Euler-Lagrange equation that everybody uses... it doesn't need calculus of variations at all. Every step is an elementary calculus step. But the downside is the thought process that actually leads to the derivation is the one you'd only really get after studying a bit of calculus of variations, so you wouldn't come up with it without having that foundation (even though it's not required). (Yes, you can try to remove time from this proof and still keep it rigorous, but then it'd require a stronger background than just freshman calculus in order to be understood.)
[1] The reason I was satisfied with this is that it's obvious that the speed of the particle is irrelevant to the length of the path so long as the medium is uniform, so once I proved it for light of constant speed, I was done. It's not necessarily satisfying for a mathematician though.
This is a pictorial description of the proof in the sibling comment. It does not use calculus of variations, just the definition of the path length of a (differentiable, parametrized) curve. If you want to call your parametrization a "light ray", that's fine, but it muddies the waters for me.
Just spell out what we mean by path, length of a path, and use calculus of a single variable. No need to introduce 'time', 'constant speed' or the like.
BTW, a truly variational proof is in the Wikipedia article [0].
"Learns enough to do this"... in what sense? If you compare this to a puzzle, are you claiming they have all the pieces, or are you claiming they actually would be able to put the pieces together if you asked them to? The proof steps are tricky, it's not just a straightforward evaluation of some expression.
Also -- are you really claiming every STEM student at your university learns calculus of variations as a freshman? Because that's the only way I've actually seen others prove this. While doing it with plain calculus isn't impossible, it requires a similar kind of thought process to the one you get after learning calculus of variations, so I'm really skeptical that what you're saying is the case, unless you're thinking of a different proof or something.
Sample bias: I feel that so many subtleties are lost because people don't generally realize that their observations/anecdotes/experiences are biased.
If I see a steak on my plate, I can't conclude that there is no world hunger.
If I see a snowball, I can't conclude that there is no global warming.
If I've never experienced racial bias against me, I can't conclude that it does not happen to other people.
> Sample bias: I feel that so many subtleties are lost because people don't generally realize that their observations/anecdotes/experiences are biased.
This one is so odd to me. There's plenty of logical errors and shortcuts the brain takes that I had to learn to consciously think around, but the idea that my experience isn't necessarily representative of the entire world was something that was blindingly obvious when I was literally a child. I'm still taken aback for a second when I see people say things like "most of the people I know do X" or even "everyone in my family has Y", without giving any thought to whether their sample even approaches representativeness.
You have to realize that a lot of people understand sample bias. Those people (myself included) simply don't care about global warming, global hunger, racism, etc.
I think we should make it more widely understood how bad we humans are at science. Our brains are full of what we call cognitive biases that prevent us from processing data correctly; our sensors are especially bad, and even the ones we rely on most (eyes and ears) are very easily cheated without us even being able to tell. Our storage unit (memory) is awful, we forget a lot, and time is our enemy.
Scientific method. The great majority don't understand even this fundamental part of science, and thus they cannot utilize it themselves; many do not trust it, they don't understand the epistemological problems it addresses, and they don't understand what terms like 'study', 'theory', etc. really mean.
This is easily the most important topic that needs to be more widely known. It's hard to progress in any other topic if you don't understand the basic methods for filtering good information from the many mistakes, misinterpretations, scams and other bad data. The need for education on the very basics of the scientific method has even made it into the recent popular press as concerns about "fake news".
Democracy and modern civilization itself require at least some understanding of the scientific method. Sagan's warning in "The Demon-Haunted World"[1] was frighteningly prescient:
I have a foreboding of an America in my children's or grandchildren's time --
when the United States is a service and information economy; when nearly all
the manufacturing industries have slipped away to other countries; when awesome
technological powers are in the hands of a very few, and no one representing
the public interest can even grasp the issues; when the people have lost the
ability to set their own agendas or knowledgeably question those in authority;
when, clutching our crystals and nervously consulting our horoscopes, our
critical faculties in decline, unable to distinguish between what feels good
and what's true, we slide, almost without noticing, back into superstition
and darkness...
Until the public has a "baloney detection kit"[2] (and uses it regularly), trying to teach any other topic is probably a waste of time[3].
“The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”
In general, I wish more people understood some elementary facts about global warming. For example: because we observe more warming at the poles, in winter, and at night, and also a cooling of the stratosphere, the cause must be in the Earth's atmosphere; it cannot be the Sun.
In mathematics, I wish more people knew about typed lambda calculus and its connection to mathematical logic. I think the fact that most mathematicians use predicate logic, while theorem provers use e.g. calculus of constructions is one of the big obstacles for more wide usage of automated theorem provers. But it's just a language barrier.
In economics, I wish more people knew about the approach (I don't know its name, unfortunately) to calculating equilibrium prices that is described in J.M. Blatt's book Dynamic Economic Systems, which is based on input-output production matrices. It's a far superior approach to the classical supply-demand analysis that dominates the textbooks, because it requires far fewer free parameters and assumptions.
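I haven't read Blatt, so take this as a guess at the flavor rather than his actual method: the standard input-output (Leontief-style) price equation p = A^T p + v, i.e. the price of each good equals the cost of its inputs plus value added per unit, solved with completely made-up numbers:

    # Equilibrium prices from an input-output price equation:
    #   p = A^T p + v   =>   p = (I - A^T)^{-1} v
    # A[i, j] = units of good i needed to produce one unit of good j (invented numbers).
    import numpy as np

    A = np.array([
        [0.1, 0.3],   # good 0 used as an input
        [0.2, 0.1],   # good 1 used as an input
    ])
    v = np.array([1.0, 2.0])   # value added (e.g. wages) per unit of each good

    p = np.linalg.solve(np.eye(2) - A.T, v)
    print(p)   # prices consistent with the production structure

The only inputs are observable production coefficients and value added per unit, which is where the "fewer free parameters" point comes from.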
"Emergence": Most everything in science (from Chemistry to Consciousness) is an emergent phenomenon and most people are plain ignorant of the concept and meaning of that word, although I did see it appear a lot in 2016, in scientific articles. The only thing (or things) in science that are not emergent are the laws of Quantum Mechanics and Relativity (as a simplification), and there are more and more physicists that believe even both of those "rule sets" probably are emergent from some more fundamental and simpler set of "rules", that give rise to both QM, Gravity, Spacetime.
My problem with emergence is that it makes you think in one conceptual direction, and not the other.
To explain, emergence as stated in the parent is a name for how a bunch of molecules could interact to make something you might call a football. This is a phenomenon at the "higher" everyday physical level emerging from the phenomena of the "lower" chemical level.
What I think this story misses is the importance of the "even higher" phenomena which resulted in a football being created, those of the human cultural level which caused the football to be manufactured and, for example, placed on some grass, or a desk, to be studied.
This isn't a purely sociological or philosophical point either, when you're looking at brain macro-activity for example, it's only partially explicable by referencing the micro-activity of, say, individual neurons, to get a better understanding you need to refer to activity outside of the brain that caused certain individual neurons to be acting like that. These higher level phenomena can be simple things like the movement in the neck that caused a change in stimulation, and also more wooly concepts like cultures which provide certain aesthetics or behaviours.
Emergence is important, but I feel it's too often considered as a standalone phenomenon, rather than as one side of an irreducible causal cycle.
Emergence is a broad term for when things with definable, reproducible, novel, and predictable behaviors (which can be studied, quantified, and have math rules written about them) arise spontaneously from the lower rule-sets they are built on. It's like when you can identify a separate and distinct set of behaviors in the causality chain; but yes, the word 'causality' encompasses 'emergence' in a very vague way, so vague that it's indistinguishable from 'time' itself.
While I agree with Eliezer that "emergence did it" is no more of an explanation than "god did it", I'm not sure that this is the meaning people are trying to convey when they say e.g. "intelligence is an emergent phenomenon". What the statement means to me is that intelligence is a computationally dense phenomenon. This doesn't rule out the possibility of understanding it from a lower level, but it does rule out _simplifying_ it to the point where it can be understood without performing large computations.
In my mind, the concepts of irreducible complexity and emergence are inherently linked through the concept of abstraction. An emergent phenomenon is a computationally dense phenomenon for which much of the computation can be abstracted away, at the cost of giving up a low-level understanding. In a sense, emergence is the antithesis of irreducible complexity. In another sense, emergence is a sparse chain of islands in a sea of irreducible complexity.
Emergence as a concept only serves to eliminate the need for a designer. It doesn't actually explain anything to someone who already accepts the possibility of complexity without design. But that doesn't mean it's a useless concept, just that it is sometimes misused (if the LW post's premise is true).
Right, Emergence is a general term like Entropy, and I chose that comparison on purpose because Emergence is more like the negative, or inverse, of Entropy. Emergence is when you get order, complexity, and non-randomness out of pure randomness plus time plus rules.
The best way to explain emergence is with an example, imo. The classic example is to look at Conway's Game of Life and ask yourself what God must have 'designed' the 'lifeforms' that appear in it. Where are their behaviors encoded into reality to make them become real (i.e. happen)? The answer is that they are not specifically designed but instead 'emerge'. Not that your snarky 4-word joke of a post deserved a response though. This is for others, not you.
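And here is a minimal sketch for anyone who wants to watch it happen; the rules in step() are the only thing that is "designed", yet stable and oscillating patterns show up on their own:

    # Conway's Game of Life: the rules say nothing about "blinkers" or "gliders",
    # yet those structures emerge and behave predictably.
    from collections import Counter

    def step(live):
        """live: set of (x, y) cells that are alive; returns the next generation."""
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A "blinker": three cells in a row that oscillate with period 2.
    cells = {(0, 1), (1, 1), (2, 1)}
    for _ in range(4):
        print(sorted(cells))
        cells = step(cells)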
> The only thing (or things) in science that are not emergent are the laws of Quantum Mechanics and Relativity (as a simplification)
By "relativity" I'm assuming you're referring to SR or GR here. I always thought of relativity as an emergent phenomenon due to the large scales we are working with. What makes you say it is not?
There are currently no good theories about why SR and GR hold true. They appear to be as fundamental as QM, but are totally independent of QM. QM always obeys SR/GR in terms of how spacetime behaves, and how mass affects things, etc, but there is no theory that says Relativity is true 'because of' QM (or emerging from QM), although that is a goal that is being searched for. It would be nice if either QM caused GR or vice versa, but no one has a clue how it could be true yet.
> QM always obeys SR/GR in terms of how spacetime behaves, and how mass affects things, etc.
We still haven't experimentally verified how gravity works on atomic scales, though, have we? I consider it completely plausible that it works differently than GR postulates, because to me (non-physicist here) GR seems to be a "large numbers" phenomenon and assumes space possesses certain properties that it may very well not possess on small scales (e.g. any kind of smoothness or even continuity).
We have verified (experimentally) that individual particles, whose quantum properties can be individually measured, do obey the rules of SR/GR. That is, space is curved for them in the presence of mass, time slows down based on velocity, etc. Everything about QM that involves spacetime has to be considered using the Relativity definitions of spacetime: non-relativistic classical calculations will always give the wrong answers, and relativistic ones are always right. Gravity only 'appears' to be a 'large numbers' problem because its effect is so weak compared to EMF, as far as is proven. But my opinion is it's a 50/50 chance that in the end SR/GR actually is emergent from something more fundamental rather than "that's just how it is."
Wow, did you read this somewhere in an article that goes into more interesting depth? 2000 kWh/day is like $200 a day where I live, that's crazy! Now add in what's needed for the animals I eat. And that's just for the oxygen we consume?
One of the more interesting things I've learned in quite some time.
EDIT: You know, this is something so relatable to the layman that it should be a widely used propaganda talking point.
And if you think about it more, if this number is correct, it shows in a very understandable way the magnitude of the solar energy the Earth receives: 100% sustainable energy is not only possible but should be incredibly easy once we develop the proper technology.
I wonder, could global transmission possibly be an answer to the lack of storage problem?
I always used to worry more about 'running out of oxygen' than CO2 emissions as I became aware of environmental science and the growing discussion about climate change. I think there are two main reasons scientists do not worry about it as much as originally seemed intuitive (to me):
First, the raw amount of oxygen in the atmosphere is large enough that fluctuations go unnoticed. To decrease the oxygen concentration by even 5%, you would have to release proportionally orders of magnitude more CO2; the CO2 increase becomes catastrophic long before the drop in oxygen is even noticeable.
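Rough numbers, assuming approximately one molecule of CO2 released per molecule of O2 consumed by combustion (the exact ratio depends on the fuel):

    o2_ppm  = 209_000    # O2 is ~20.9% of the atmosphere
    co2_ppm = 400        # CO2 is ~0.04%

    o2_consumed = 0.05 * o2_ppm          # burn enough carbon to remove 5% of the O2...
    new_co2     = co2_ppm + o2_consumed  # ...and roughly that much CO2 is added

    print(new_co2 / co2_ppm)   # CO2 rises ~27x for a barely noticeable 5% drop in O2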
There's also the geology of it: rocks absorb and release oxygen over millions of years and add hysteresis to atmospheric composition. The amount of atmospheric gases trapped in the Earth's crust is significant and resists long-term changes.
Technically put, "any two optimization algorithms are equivalent when their performance is averaged across all possible problems"
More intuitively paraphrased, "if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems."
I forget if either or any of these are actually proven, or if they're the "folkloric" intuitions from more technical papers.
More boldly overstated, there's no such thing as a general optimisation, all optimisations are relative to a subset of possible inputs.
In extremis, there exists a situation where Bogosort is preferable to Quicksort.
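A toy version of the averaged-over-all-problems statement, small enough to check by brute force (the query orders are arbitrary; "performance" here is just how many evaluations it takes to first hit the global maximum):

    from itertools import product

    def queries_to_find_max(f, order):
        """Evaluations needed before a fixed, non-repeating query order hits the max of f."""
        best = max(f)
        for n, x in enumerate(order, start=1):
            if f[x] == best:
                return n

    order_a = [0, 1, 2, 3]   # left-to-right scan
    order_b = [3, 1, 0, 2]   # some other arbitrary order

    all_functions = list(product([0, 1], repeat=4))   # every f: {0,1,2,3} -> {0,1}

    avg_a = sum(queries_to_find_max(f, order_a) for f in all_functions) / len(all_functions)
    avg_b = sum(queries_to_find_max(f, order_b) for f in all_functions) / len(all_functions)

    print(avg_a, avg_b)   # identical averages: over all problems, neither order wins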
I think this is provably false, probably, since you can compose a pathological case where you have a no-op that adds an O(n) (or more) step to the runtime and an optimization step that purely removes that no-op.
There's optimizations which are tradeoffs against reducing the valid input space (or optimization for the common input cases), and then there's optimizations that eliminate unnecessary operations or combine things cleverly without degrading the worst-case scenario.
I'm not immediately convinced by your no-op example, because I can conceive of situations that need a little extra time to pass between actions, such as sorting elements whose values aren't easily determinable and are best estimated n clock ticks after the previous action.
This is of course a million miles away from the kind of sorting problems you get in Computer Science 101, but this is my point, the traditional sorting problems happen in a very well defined context, as soon as that changes, you might find any algorithm being optimal. Including Bogosort and, for example, quicksort with some extra no-ops.
I like your point, but for the reason above I still think the No Free Lunch theorem is valuable, else I wouldn't have had this insight in the first place. At least, I consider it an insight, YMMV.
Sure, it's certainly valuable to realize that you rarely get an optimization in one case that's not pessimal in another, and it is well-removed from the constrained environments that people work through for demoing algorithms.
I was just thinking in terms of more abstract optimization (e.g. platform-agnostic ones that just strictly reduce work to be done) versus platform-specific optimizations/workarounds/bin-packing (like nops to prevent your pipelines or cache lines from being pathological).
It has been proven, but only for the useless statement that an algorithm that does better on some case does worse on some other case (that you may not care about). Originally it was for search, using the observation that if you happen to look in the right place first, you win. That almost made sense, but then it got extended to general optimization, algorithms, and machine learning.
There are in fact algorithms that do better than others with respect to realistic distributions. And that contradicts the strong interpretations of the no-free-lunch theorem bandied about in search, algorithms, and machine learning.
A friend of mine once thought up a parallel universe where, by pure chance, bogosort always correctly sorts every single array, boggling the minds of everybody in that universe.
A lot of people on HN consider themselves logical because they know how to work with Boolean logic and they know how to identify some Boolean fallacies (slippery slope, correlation does not imply causation, etc.).
Of course, this is absolutely insufficient for almost any real-world scenario; Boolean logic does not equip one to make probabilistic inferences about the state of the world, which is the best we can do for the vast majority of everyday propositions.
The reality is that "slippery slope" is a very powerful heuristic, correlation correlates with causation, etc. People here often mistakenly think probabilistic arguments are Boolean (e.g. thinking Occam's razor tells you to believe only the most a priori likely hypothesis) or mistakenly think that Boolean arguments can be interpreted probabilistically (e.g. "Correlation doesn't imply causation, so I'm not going to treat this demonstrated correlation as evidence for causation").
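A toy Bayes calculation with invented numbers, just to show the shape of the probabilistic (rather than Boolean) reading of "a demonstrated correlation is evidence for causation":

    prior_causal = 0.10              # invented prior belief in a causal link
    p_corr_given_causal     = 0.90   # a real causal link usually produces a correlation
    p_corr_given_not_causal = 0.30   # confounders or chance can also produce one

    p_corr = (p_corr_given_causal * prior_causal
              + p_corr_given_not_causal * (1 - prior_causal))
    posterior_causal = p_corr_given_causal * prior_causal / p_corr

    print(prior_causal, posterior_causal)   # 0.10 -> 0.25: not proof, but real evidence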
The concept of covariance, which describes the way different observers' measurements of the same phenomenon are related.
More generally the concept that the relationships between observers are a fundamental aspect of the systems they measure. In special relativity for example this is the Poincaré group which relates observers in different inertial reference frames.
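Concretely, in 1+1 dimensions the Poincaré-group element relating two inertial observers moving at relative speed v along x is the familiar Lorentz boost:

    x' = \gamma\,(x - v t), \qquad t' = \gamma\,\Big(t - \frac{v x}{c^2}\Big), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}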
Another example is entropy in statistical mechanics (although I'm being weird by casting it in this light): entropy depends on the knowledge and sensitivity of the observer to the microstates of the system they are measuring. The relation between observers here is considerably more complicated than a group, but it is there, and it is a fundamental aspect of the subject.
I believe this concept is fundamental to every science but the "softer" you get, the harder it is to model. It is there implicitly whenever we make an observation.
I'd appreciate a more general appreciation of how little can be accurately predicted. Any time I hear sports commentators predict winners I internally shake my head. Why do so few people have the ability to admit to themselves that most things are just unpredictable?
> Science—that is, reliable methods for obtaining knowledge
I really dislike this definition of science. Science is about making accurate predictions about the future (even if those future predictions concern the accuracy of past events). Predictivity is the defining feature of science, because without it, science may as well be any other subject. The Wikipedia definition is much better, because it incorporates the utility of knowledge but emphasizes the key role of prediction:
> Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.
I disagree about the emphasis on prediction. Science involves predictions, but it does not involve only predictions.
Chaos theory, for example, shows that physical, deterministic systems can be rigorously understood while prediction is (approximately) impossible. Likewise, you can make predictions without understanding, with sufficiently advanced machine learning (interpretability vs. flexibility).
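A tiny standard illustration of the first point: the logistic map is a rule we understand completely, yet two nearly identical starting points become unrecognizably different within a few dozen steps.

    # Deterministic, fully understood, and still unpredictable in practice:
    # the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
    r = 4.0
    x1, x2 = 0.200000, 0.200001   # two initial conditions differing by 1e-6

    for _ in range(30):
        x1 = r * x1 * (1 - x1)
        x2 = r * x2 * (1 - x2)

    print(abs(x1 - x2))   # after ~30 steps the trajectories bear no resemblance to each other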
Conservation of energy, not just as an abstract concept, but in that every time you fly in a plane, drive in a car, turn on the air conditioner, etc, you are using energy, which pollutes, and, while we use fossil fuels, contributes to the greenhouse effect.
Most people seem to think their contribution doesn't count or is somehow less than everyone else's.
Also, curiosity and appreciating the beauty of nature. These were two of my main motivations to get my PhD in physics. To me they are fundamental to science.
From statistics: Simpson's Paradox.
Maybe a little more obscure than what some others have mentioned, but it is the idea that there can be patterns in data that appear in separate groups, but then vanish or reverse when those groups are combined. It can be a useful framework.
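A quick illustration with numbers patterned on the classic kidney-stone treatment example (A looks better within each group, B looks better after pooling):

    groups = {
        #               A: (successes, trials)   B: (successes, trials)
        "small stones": ((81, 87),               (234, 270)),
        "large stones": ((192, 263),             (55, 80)),
    }

    totals = {"A": [0, 0], "B": [0, 0]}
    for name, ((sa, na), (sb, nb)) in groups.items():
        print(f"{name}: A = {sa/na:.0%}, B = {sb/nb:.0%}")
        totals["A"][0] += sa; totals["A"][1] += na
        totals["B"][0] += sb; totals["B"][1] += nb

    print(f"pooled:       A = {totals['A'][0]/totals['A'][1]:.0%}, "
          f"B = {totals['B'][0]/totals['B'][1]:.0%}")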
I'd pick Juarez Cirino's book "Radical Criminology". It reeks of ultra-idealist Marxism, but I think the observations it makes on the workings of the criminal system in capitalist societies are simply enlightening. He draws on ideas from Foucault, Marx, Pasukanis and many others to form a concise and understandable take on modern criminal and penal systems.
Even in Portuguese the book is unfortunately pretty hard to find. I'm actually thinking about translating it myself; I've already asked the publisher if I can make it easily available online (I'm thinking something like a GitHub repo so people can help translate and fix errors), though I'm not sure they'll answer me.
Send me a mail at erik@14th.info and I'll let you know if I get something done.
What sometimes looks like a slippery slope argument is actually just the first person to notice someone shifting the Overton window against them.
I'm not saying slippery slope arguments aren't hideously overused, but it's deviously hard to explain to people when their own minds are being used against them, and not knowing what the Overton window is makes that even harder.
Error bars. Whenever I see a plot without them, I think someone is trying to pull a fast one. Some scientific (or quasi-scientific) fields use them much more than others.
A quick-and-dirty physicists' method of generating an error bar, when dealing with the number of events in an interval, is to assume that Poisson counting statistics apply and take the square root of the number of events.
Example: Two months ago we sold 400 units. Last month we sold 410 units. Should we celebrate?
Square root of 400 is 20. The change in sales is within the margin of error. Celebration would be premature.
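The same thing in code, including the uncertainty on the difference itself (a sketch; a real analysis would do a proper significance test):

    import math

    last_month, this_month = 400, 410

    err_last = math.sqrt(last_month)                 # ~20
    err_this = math.sqrt(this_month)                 # ~20.2
    err_diff = math.sqrt(last_month + this_month)    # uncertainty on the difference, ~28.5

    change = this_month - last_month                 # 10
    print(f"change = {change} +/- {err_diff:.1f}")   # well within one sigma; don't celebrate yet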
Users flagged the previous copy of this comment because inciting race wars is off-topic and not welcome on Hacker News. We've asked you not to do this before, and we have to ban accounts that continue like this. Please stop.