It changes conservation of momentum from an empirical relation we have historically observed to something that is derivable from a different empirical relation (translation symmetry) that we have historically observed. It's still ultimately an observed empirical relation.
I meant to say that STEM college graduates ought to be able to solve problems such as this one. That would include less-trivial instances, like the light-ray example I gave in another comment where you have to find the path that light travels when it crosses between media with different velocities. Unfortunately I made the mistake of posing the most trivial instance of this kind of problem as an example (Euclidean vector space), which resulted in many people missing the point I was trying to make. Sorry about that. The point wasn't solving this specific problem, the point was this kind of problem, and my example was overly simplified and therefore not representative of what I was trying to convey.
With that aside, note that this is still not a basic geometry problem, since you need to be able to discuss the lengths of arbitrary smooth curves, not just lines/angles/circular arcs.
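For concreteness, here is that light-ray problem as a one-variable minimization (a sketch with made-up numbers; scipy assumed available). Fermat's principle says light takes the fastest path, and the optimum recovers Snell's law:

    import math
    from scipy.optimize import minimize_scalar

    # Hypothetical setup: light travels from (0, 1) in medium 1 (speed v1)
    # to (2, -1) in medium 2 (speed v2), crossing the boundary y = 0 at (x, 0).
    v1, v2 = 1.0, 0.5
    a, b, d = 1.0, 1.0, 2.0  # heights above/below the boundary, horizontal span

    def travel_time(x):
        return math.hypot(x, a) / v1 + math.hypot(d - x, b) / v2

    x = minimize_scalar(travel_time, bounds=(0, d), method="bounded").x

    # At the optimum, sin(theta1)/v1 == sin(theta2)/v2 (Snell's law).
    print(x / math.hypot(x, a) / v1, (d - x) / math.hypot(d - x, b) / v2)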
How's it being a smart aleck? A geodesic is a very easy concept to understand. Not only that, but it's ridiculous to dumb down information when the whole point of the conversation was to understand truths. If this question were about history, and you said that Columbus discovered America, it would be a bit ridiculous for you to criticize me for calling you out on it, right? A straight line being the shortest distance between two points is about as true as Columbus discovering America. You have to add a bunch of qualifiers for either to be remotely true.
> We're not talking general relativity here, we're just talking about plain Euclidean geometry.
See, this is the thing: you don't even need to understand general relativity to understand this. Airplane pilots understand it; they fly great circles because of this concept, which has absolutely nothing to do with general relativity.
> or maybe everyone except you
Or maybe anyone with a math degree, four or five physics classes, or an aerospace degree -- plus airplane pilots and optical engineers. It's really not that esoteric.
Looks like we found another person who doesn't know how to do it :) look it up! In pre-college geometry this isn't something you prove, it's something you postulate and take for granted.
Then I'm gonna call B.S. on this until you can show me the proof somewhere. Every single time I've searched this the only rigorous proof I've found has been using calculus (often calculus of variations).
1. Prove the triangle inequality, & extend via induction to show that [A] a straight line is the shortest polygonal path, and that [B] adding a point to a polygonal path can only make it longer.
2. Arc length is defined as the limit of the lengths of polygonal approximations to a curve. From [B], we know our approximations approach the arc length from below.
So if a curve A->B has an arc length, there's a polygonal path A->B which is no longer than the curve. And from [A] we know there's no polygonal path shorter than the straight line.
This is weaker than proofs you can do with calculus -- it doesn't prove uniqueness, and it completely ignores non-rectifiable curves, for instance. But I suspect that's what ams6110 is remembering.
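A quick numeric illustration of step 2, if it helps (a sketch in Python; the quarter circle is just an arbitrary example curve):

    import math

    # Inscribed polygonal approximations to a quarter circle of radius 1,
    # from A = (1, 0) to B = (0, 1). Their lengths increase toward the arc
    # length pi/2, and every one of them is at least |AB| = sqrt(2).
    def poly_length(n):
        pts = [(math.cos(k * math.pi / (2 * n)), math.sin(k * math.pi / (2 * n)))
               for k in range(n + 1)]
        return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

    for n in (2, 8, 32, 128):
        print(n, poly_length(n))      # approaches pi/2 ~ 1.5708 from below
    print(math.dist((1, 0), (0, 1)))  # sqrt(2) ~ 1.4142, the straight line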
You call this "geometry"? Since when does geometry involve a function limiting and bounding process? We didn't even know what a limit was until calculus...
If you take as given the notion of arc length as the least upper bound on the lengths of piecewise-linear paths between the two points, then the triangle inequality of basic geometry suffices. This seems no more hand-wavy a proof than something about "light-travelling-at-constant-speed" analogies.
>> It's still not a basic geometry problem
seems we're in agreement?
The total length of the curve described by f is the integral of |df/dt| dt, which is to say, the integral of sqrt((dx/dt)^2 + (dy/dt)^2 + ...) dt. This integrand is at least as large as dx/dt. But the integral of dx/dt dt is the difference of B's and A's x coordinates, which is to say, the length of the direct straight-line path from A to B.
Thus, every path from A to B is at least as long as the direct straight-line path. [In fact, paying more careful attention, we have by this same argument the slightly stronger result that every other path is strictly longer than the straight-line path]
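In symbols, the whole argument is one line (with coordinates chosen, as arranged below, so that A and B both lie on the x axis):

    \mathrm{len}(f) = \int_a^b \sqrt{x'(t)^2 + y'(t)^2 + \cdots}\,dt \;\ge\; \int_a^b x'(t)\,dt = x(b) - x(a) = |AB|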
If you take all of the above for granted (or "obvious"), you may as well say that the straight line being shortest is obvious.
I spelt out the parts of the argument that might be unfamiliar. I'm sure anyone here can readily convince themselves that sqrt(x^2 + y^2) >= x [x^2 is increasing for positive inputs, thus so is its inverse function, so sqrt(x^2 + y^2) >= sqrt(x^2) = |x| >= x], and that you can choose a coordinate system so that A and B are both on the x axis [take A to be the origin; take the vector from A to B to give the positive x axis, and other axes accordingly perpendicular to this one], and so on.
The integral business can be phrased however one likes, formally or informally, to account for the idea of a rectifiable curve as one whose length is given by treating it as built of many component straight-line pieces. The conceptual argument is the same, regardless of any particular choice of technical framework.
And if you furthermore aren't sure that, say, x^2 is increasing on positive inputs, we can explore that as well. If you furthermore aren't sold on modus ponens, we can explore that as well. Everything is infinitely explorable (and malleable!). But I noted only what here seemed directly relevant to the curiosity at hand.
 The reason I was satisfied with this is that it's obvious that the speed of the particle is irrelevant to the length of the path so long as the medium is uniform, so once I proved it for light of constant speed, I was done. It's not necessarily satisfying for a mathematician though.
Just spell out what we mean by path, length of a path, and use calculus of a single variable. No need to introduce 'time', 'constant speed' or the like.
BTW, a truly variational proof is in the Wikipedia article.
Well, for starters, it was also posted an hour before that one...
But it's not the same. The sibling comment performs a change of basis. I don't.
Also -- are you really claiming every STEM student at your university learns calculus of variations as a freshman? Because that's the only way I've actually seen others prove this. While doing it with plain calculus isn't impossible, it requires the kind of thought process you develop after learning calculus of variations, so I'm really skeptical that what you're saying is the case, unless you're thinking of a different proof or something.
If I see a steak on my plate, I can't conclude that there is no world hunger.
If I see a snowball, I can't conclude that there is no global warming.
If I've never experienced racial bias against me, I can't conclude that it does not happen to other people.
Edit: for clarity.
This one is so odd to me. There are plenty of logical errors and shortcuts the brain takes that I had to learn to consciously think around, but the idea that my experience isn't necessarily representative of the entire world was something that was blindingly obvious when I was literally a child. I'm still taken aback for a second when I see people say things like "most of the people I know do X" or even "everyone in my family has Y" without giving any thought to whether their sample even approaches representativeness.
We call a lot of science concepts "not intuitive" because of the limitations of our brains, and very few people, at the cost of a lot of study (and of being highly stimulated in their first years of life), can really get a grasp of them. Quantum mechanics, of course, but just see how people reacted to the Monty Hall problem: https://priceonomics.com/the-time-everyone-corrected-the-wor...
Other examples at https://en.wikipedia.org/wiki/Counterintuitive#Counterintuit...
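For Monty Hall in particular, a brute-force simulation settles the argument faster than intuition does (a quick Python sketch):

    import random

    def trial(switch):
        doors = [0, 0, 1]                 # exactly one door hides the car
        random.shuffle(doors)
        pick = random.randrange(3)
        # Host opens a goat door that isn't the player's pick.
        opened = next(i for i in range(3) if i != pick and doors[i] == 0)
        if switch:
            pick = next(i for i in range(3) if i not in (pick, opened))
        return doors[pick]

    n = 100_000
    print(sum(trial(True) for _ in range(n)) / n)   # ~2/3 by switching
    print(sum(trial(False) for _ in range(n)) / n)  # ~1/3 by staying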
Sure, we are better at science than all other species on Earth, but in absolute terms, we humans are very badly designed for it. Yet we try our best.
Hah. I can't tell if you are joking, because it's true.
Democracy and modern civilization itself require at least some understanding of the scientific method. Sagan's warning in "The Demon-Haunted World" was frighteningly prescient:
I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness...
You may fool all the people some of the time; you can even fool some of the people all the time; but you can’t fool all of the people all the time.
“The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”
— Richard Feynman
In mathematics, I wish more people knew about typed lambda calculus and its connection to mathematical logic. I think the fact that most mathematicians use predicate logic, while theorem provers use e.g. calculus of constructions is one of the big obstacles for more wide usage of automated theorem provers. But it's just a language barrier.
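A tiny illustration of the propositions-as-types idea, sketched with Python type hints (a real prover's typechecker would enforce totality, which Python of course doesn't):

    from typing import Tuple, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    # Under Curry-Howard, "A and B" is the pair type, and any total
    # function of this type is a proof that A and B implies B and A.
    def and_commutes(p: Tuple[A, B]) -> Tuple[B, A]:
        a, b = p
        return (b, a)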
In economics, I wish more people knew about the approach to calculating equilibrium prices (I don't know its name, unfortunately) that is described in J.M. Blatt's book Dynamic Economic Systems, and is based on input-output production matrices. It's a far superior approach to the classical supply-demand analysis that dominates the textbooks, because it requires far fewer free parameters and assumptions.
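If I remember the approach correctly, the core calculation is just a linear solve (a sketch with a made-up three-sector economy; I'm assuming the standard Leontief-style price equation p = A^T p + v):

    import numpy as np

    # Hypothetical input-output coefficients:
    # A[i, j] = units of good i consumed to produce one unit of good j.
    A = np.array([[0.1, 0.3, 0.2],
                  [0.2, 0.1, 0.3],
                  [0.3, 0.2, 0.1]])
    v = np.array([1.0, 0.5, 2.0])  # value added (e.g. wages) per unit of output

    # Unit cost of each good must equal its price: p = A^T p + v.
    p = np.linalg.solve(np.eye(3) - A.T, v)
    print(p)  # equilibrium prices, no demand curves required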
To explain, emergence as stated in the parent is a name for how a bunch of molecules could interact to make something you might call a football. This is a phenomenon at the "higher" everyday physical level emerging from the phenomena of the "lower" chemical level.
What I think this story misses is the importance of the "even higher" phenomena which resulted in a football being created, those of the human cultural level which caused the football to be manufactured and, for example, placed on some grass, or a desk, to be studied.
This isn't a purely sociological or philosophical point either. When you're looking at brain macro-activity, for example, it's only partially explicable by referencing the micro-activity of, say, individual neurons; to get a better understanding you need to refer to activity outside the brain that caused certain individual neurons to act like that. These higher-level phenomena can be simple things like the movement in the neck that caused a change in stimulation, and also woollier concepts like cultures, which provide certain aesthetics or behaviours.
Emergence is important, but I feel it's too often considered as a standalone phenomenon, rather than as one side of an irreducible causal cycle.
In my mind, the concepts of irreducible complexity and emergence are inherently linked through the concept of abstraction. An emergent phenomenon is a computationally dense phenomenon for which much of the computation can be abstracted away, at the cost of giving up a low-level understanding. In a sense, emergence is the antithesis of irreducible complexity. In another sense, emergence is a sparse chain of islands in a sea of irreducible complexity.
By "relativity" I'm assuming you're referring to SR or GR here. I always thought of relativity as an emergent phenomenon due to the large scales we are working with. What makes you say it is not?
We still haven't experimentally verified how gravity works on atomic scales, though, have we? I consider it completely plausible that it works differently than GR postulates, because to me (non-physicist here) GR seems to be a "large numbers" phenomenon and assumes space possesses certain properties that it may very well not possess on small scales (e.g. any kind of smoothness or even continuity).
That earth's forest area decreases by 13 000 000 hectares per year.
And that we add about 75 000 000 people every year.
One of the more interesting things I've learned in quite some time.
EDIT: You know, this is something that is so relatable to the layman, it should be a widely used propaganda talking point.
I wonder, could global transmission possibly be an answer to the lack of storage problem?
First, the raw amount of oxygen in the atmosphere is large enough that fluctuations can go unnoticed. Decreasing the oxygen percentage by even 5% would mean releasing vastly more CO2, proportionally, than the atmosphere currently holds, so the CO2 problem would dwarf the oxygen loss long before it became noticeable.
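Rough numbers (assuming combustion consumes O2 and produces CO2 roughly one-for-one):

    o2 = 0.209     # O2 share of the atmosphere
    co2 = 0.0004   # CO2 share (~400 ppm)

    o2_drop = 0.05 * o2   # a 5% relative decrease in O2
    print(o2_drop / co2)  # ~26: CO2 would multiply ~26-fold before
                          # the O2 drop even reached 5%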
There's also the geology of it: rocks absorb and release oxygen over millions of years and add hysteresis to atmospheric composition. The amount of atmospheric gases trapped in the Earth's crust is significant and resists long-term changes.
Technically put, "any two optimization algorithms are equivalent when their performance is averaged across all possible problems"
More intuitively paraphrased, "if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems."
I forget whether either of these is actually proven, or if they're "folkloric" intuitions from more technical papers.
More boldly overstated, there's no such thing as a general optimisation, all optimisations are relative to a subset of possible inputs.
In extremis, there exists a situation where Bogosort is preferable to Quicksort.
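A degenerate illustration (a sketch; the trick is a problem distribution where every input arrives already sorted, so Bogosort's up-front sortedness check wins while naive first-pivot Quicksort hits its worst case):

    import random

    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def bogosort_steps(xs):
        steps = 0
        while True:
            steps += len(xs)        # one O(n) sortedness check
            if is_sorted(xs):
                return steps
            random.shuffle(xs)

    def quicksort_steps(xs):
        # Naive first-element pivot: O(n^2) comparisons on sorted input.
        if len(xs) <= 1:
            return len(xs)
        pivot, rest = xs[0], xs[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return len(xs) + quicksort_steps(left) + quicksort_steps(right)

    xs = list(range(500))
    print(bogosort_steps(xs), quicksort_steps(xs))  # ~500 vs ~125,000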
There's optimizations which are tradeoffs against reducing the valid input space (or optimization for the common input cases), and then there's optimizations that eliminate unnecessary operations or combine things cleverly without degrading the worst-case scenario.
This is of course a million miles away from the kind of sorting problems you get in Computer Science 101, but that is my point: the traditional sorting problems happen in a very well-defined context, and as soon as that changes, you might find any algorithm being optimal -- including Bogosort and, for example, Quicksort with some extra no-ops.
I like your point, but for the reason above I still think the No Free Lunch theorem is valuable, else I wouldn't have had this insight in the first place. At least, I consider it an insight, YMMV.
I was just thinking in terms of more abstract optimization (e.g. platform-agnostic ones that just strictly reduce work to be done) versus platform-specific optimizations/workarounds/bin-packing (like nops to prevent your pipelines or cache lines from being pathological).
It has been proven, but only as the fairly useless statement that an algorithm that does better on some case does worse on some other case (one that you may not care about). Originally it was for search, using the fact that if you happen to look in the right place first, you win. This almost made sense, but then it got extended to general optimization, algorithms, and machine learning.
There are in fact algorithms that do better than others with respect to realistic distributions. And that contradicts the strong interpretations of the no-free lunch theorem bandied about in search, algorithms, and machine learning.
Of course, this is absolutely insufficient for almost any real-world scenario; Boolean logic does not equip one to make probabilistic inferences about the state of the world, which is the best we can do for the vast majority of everyday propositions.
The reality is that "slippery slope" is a very powerful heuristic, correlation correlates with causation, etc. People here often mistakenly think probabilistic arguments are Boolean (e.g. thinking Occam's razor tells you to believe only the most a priori likely hypothesis) or mistakenly think that Boolean arguments can be interpreted probabilistically (e.g. "Correlation doesn't imply causation, so I'm not going to treat this demonstrated correlation as evidence for causation").
People have a very bad understanding of rate-vs-level problems: a shrinking deficit (a rate) still means growing debt (the level), for example.
This doesn't even get to the problem of compounding yet, which is related.
Another pet topic of mine is Bayesian logic. Doctors and psychologists, at least, should definitely be familiar with the idea.
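The canonical base-rate example, with hypothetical numbers (a rare disease and a good-but-imperfect test):

    prevalence = 0.01    # 1% of patients have the disease
    sensitivity = 0.99   # P(positive | disease)
    specificity = 0.95   # P(negative | healthy)

    # Bayes' theorem: P(disease | positive)
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    print(sensitivity * prevalence / p_pos)  # ~0.17, far lower than most guess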
This is covered in all medical schools because it's part of the licensing exam:
It's about 1/4 of the way down on page 28.
Not sure about psychologists, but I'm guessing they are exposed to it too.
The big difference IMO is learning it vs. understanding/internalizing/recognizing it when it is applicable.
More generally, the concept that the relationships between observers are a fundamental aspect of the systems they measure. In special relativity, for example, this is the Poincaré group, which relates observers in different inertial reference frames.
Another example is entropy in statistical mechanics (although I'm being weird by casting it in this light): entropy depends on the knowledge and sensitivity of the observer to the microstates of the system they are measuring. The relation between two observers here is considerably more complicated than a group, but it is there, and it is a fundamental aspect of the subject.
I believe this concept is fundamental to every science but the "softer" you get, the harder it is to model. It is there implicitly whenever we make an observation.
I really dislike this definition of science. Science is about making accurate predictions (even if those predictions concern what we will yet learn about past events). Predictivity is the defining feature of science, because without it, science may as well be any other subject. The Wikipedia definition is much better, because it incorporates the utility of knowledge but emphasizes the key role of prediction:
> Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.
Chaos theory, for example, shows that physical, deterministic systems can be rigorously understood while prediction is (approximately) impossible. Likewise, you can make predictions without understanding, with sufficiently advanced machine learning (interpretability vs. flexibility).
Most people seem to think their contribution doesn't count or is somehow less than everyone else's.
Also, curiosity and appreciating the beauty of nature. These were two of my main motivations to get my PhD in physics. To me they are fundamental to science.
Send me an email at firstname.lastname@example.org and I'll let you know if I get something done.
What sometimes looks like a slippery slope argument is actually just the first person to notice someone shifting the Overton window against them.
I'm not saying slippery slope arguments aren't hideously overused, but it's devilishly hard to explain to people when their own minds are being used against them, and not knowing what the Overton window is makes that even harder.
Quick-and-dirty physicists' method of generating an error bar when dealing with number of events in an interval is to assume that Poisson counting statistics apply, and take the square root of the number of events.
Example: Two months ago we sold 400 units. Last month we sold 410 units. Should we celebrate?
Square root of 400 is 20. The change in sales is within the margin of error. Celebration would be premature.
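Same numbers, spelled out:

    import math

    last_month, this_month = 400, 410
    sigma = math.sqrt(last_month)             # Poisson error bar: sqrt(400) = 20
    print((this_month - last_month) / sigma)  # 0.5 sigma: well within noise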
Processes and measurements aren't ideal. Learn to live with it.
"Lumens" is so overused and so often used wrongly.
"Indeed, the expectation of reciprocity can both reduce and even undermine altruism."
Surely expectation of reciprocity would make the reciprocal loop stronger, not weaker.