"Maybe people intuitively figured out what was up (one of the parameters of the Drake Equation must be much lower than our estimate) but stopped there and didn’t bother explaining the formal probability argument."
There's no "intuitively" about it. We know from the data available that the probability of machine-building general intelligence of the kind that is unique to humans on Earth is fantastically unlikely to arise. It depends on a confluence of completely unrelated selective pressures: one on tool-making, one on social cooperation, and one on continuous mate competition/selection. There is good evidence that all of those forces are important to specifically human--not dolphin or bird or whatever--intelligence on Earth, and without any one of them we'd still be fairly handy monkey-creatures banging rocks around.
As well as a theoretical understanding, we have empirical data. We can ask, "If something is at all probable, how many times will it have evolved on Earth?"
Eyes, for example, have evolved independently multiple times, as evidenced by differences in retinal biochemistry. Wings and fins and legs have also evolved many times across an incredible diversity of species. The lesson: traits that are easy to evolve have evolved repeatedly in Earth's history.
Specifically human, machine-building intelligence has evolved exactly once. This is strong evidence that it has an enormously low probability.
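The count-of-origins argument can be made roughly quantitative. Here's a minimal sketch using Beta-Binomial conjugacy, where the number of independent "opportunities" (lineages) is a purely illustrative assumption, as are the origin counts:

```python
def posterior_mean(origins, opportunities, a=1.0, b=1.0):
    """Posterior mean of the per-opportunity probability that a trait
    evolves, under a uniform Beta(1,1) prior, given the observed number
    of independent origins (Beta-Binomial conjugacy)."""
    return (a + origins) / (a + b + opportunities)

# N is illustrative only: suppose ~1000 independent lineages each
# offered one "opportunity" for the trait to arise.
N = 1000
p_eyes = posterior_mean(40, N)  # eyes: dozens of independent origins
p_mind = posterior_mean(1, N)   # machine-building intelligence: exactly one
```

Even with these made-up numbers the single-origin estimate comes out an order of magnitude smaller, and it's really an upper bound, since we're conditioning on observers existing at all.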
Once you've realized that one of the parameters in the Drake equation is incredibly small, the Silent Universe is no longer surprising. And both empirically and theoretically, the probability of evolving specifically human, machine-building intelligence is incredibly small.
However, I have never met anyone enamoured of Fermi's Paradox who gives any credence to this, which is why no attempt to resolve the paradox will ever have any effect on the discussion. This is in general the case with so-called paradoxes: no matter how many times they are clearly and simply resolved, people who want there to be paradoxes will pretend the resolution doesn't exist, or will deliberately misrepresent it as a non-solution. Or they will present some other non-solution as more compelling, even when the solution presented makes those non-solutions unnecessary.
This story is a textbook example of bad reporting.
The headline and opening paragraph combine two completely different categories of error--"wrong" and "late"--so that it is possible to say "most Americans".
The story then goes on to talk exclusively about "wrong" as if it were the dominant category. "Late" is never mentioned again.
The abstract of the ($60) report talks about errors only, but reading between the lines "late" is subsumed under the author's notion of "error".
So this story tells us nothing about the actual rate of misdiagnosis, but it leaves unwary readers with the probably false impression that "most Americans will experience misdiagnosis".
This is not to say that misdiagnosis is not a serious problem. But then, so is really bad reporting.
I agree on the phrasing issue. Both of these "Types" seem to require Bohr's classical observer, but as soon as you assume that observers are classical you've swept the big question under the carpet, which is, "Why is there a classical world at all?" Decoherence and similar approaches at least try to address this question, and they don't fit at all well with their two-Type scheme.
Or to put it another way: observers are intrinsic to reality as well as the systems under observation, and any attempt to treat them separately will fail. But this is uninteresting, because any interpretation is going to have to acknowledge this at some level (as Bohr correctly pointed out quite a long time ago.)
If you've got Many Worlds at the back of your mind there, rest assured (!) that the lead author considers it to be a Type-I interpretation. See http://arxiv.org/pdf/1509.04711v1.pdf
The creation of science depended on a large number of random factors, from Judaic monotheism (or something like it) to the disasters of the Reformation and Counter-Reformation, and even (plausibly) the English Civil War and its aftermath. Those random accidents--including the founding of universities in the late Middle Ages--created a set of conditions where people with the brains to create science were given access to institutions that let them think and investigate, at a time when they had both the social freedom and the technological capacity to publish their work, as well as the freedom to engage in institutional innovation and create things like the Royal Society. The Society's founding should be considered the final act in the birth of modern science: once it existed, it would have been extremely hard not to get something like science going.
So on this view, the reason why science happened here and not there was the same reason why hominids with the capacity for general, tool-using, representational intelligence and language happened in Africa and not the Americas: such developments depend on a confluence of multiple unlikely factors and as such are very unlikely to happen at all, much less multiple times. If science hadn't been created in Western Europe in the 1600's it might never have happened. It only looks inevitable because it did.
OP asked about shuffling multiple times to intentionally make the deck less random. AFAIK, the only way to do this deliberately is with the riffle, and it only works from a sorted deck.
A deck can indeed end up more ordered after one or more shuffles than it was before. But if the shuffle has any randomness to it, this is coincidence, and it will only happen as often as it is likely, which is not often. Just as you have some probability of flipping 10 heads in a row, you can accidentally organize a deck. But the chance of flipping 10 heads in a row is (as you know already) slightly lower than one in a thousand. The chance of just happening to organize a randomized deck after one or more shuffles is "a lot" lower than that.
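For concreteness, the two numbers being compared (assuming a fair coin and a standard 52-card deck):

```python
from math import factorial

p_heads = 0.5 ** 10           # 10 heads in a row: ~0.00098, just under 1/1000
p_sorted = 1 / factorial(52)  # a uniformly random deck landing exactly sorted: ~1.2e-68
```

So "a lot lower" is an understatement of about 65 orders of magnitude.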
If you want to be precise, it doesn't make sense to talk about _one_ shuffled or randomized deck.
Randomness, strictly speaking, only makes sense for distributions, not for particular instances.
If you have any probability distribution over permutations, then shuffling doesn't make it less random (i.e., doesn't decrease entropy), no matter how bad your shuffle is, as long as the shuffle does not depend on the current state of the cards.
A random shuffle might, by chance, return a specific deck into a fully sorted state.
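That entropy claim can be checked numerically on a toy 3-card deck: a state-independent random shuffle acts as a convex combination of permutation matrices (doubly stochastic), so it can never decrease the Shannon entropy of the distribution over orderings. A minimal sketch, with a deliberately terrible shuffle invented for illustration:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a distribution over deck orderings."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def apply_shuffle(deck, s):
    """Rearrange the deck: position i receives the card from position s[i]."""
    return tuple(deck[i] for i in s)

def shuffle_dist(dist, shuffle):
    """Push a distribution over decks through a state-independent random shuffle."""
    out = {}
    for deck, p in dist.items():
        for s, q in shuffle.items():
            nd = apply_shuffle(deck, s)
            out[nd] = out.get(nd, 0.0) + p * q
    return out

# A terrible shuffle: 90% of the time do nothing, 10% swap the top two cards.
bad = {(0, 1, 2): 0.9, (1, 0, 2): 0.1}

dist = {(0, 1, 2): 1.0}  # start fully sorted: entropy 0
for _ in range(5):
    before = entropy(dist)
    dist = shuffle_dist(dist, bad)
    assert entropy(dist) >= before - 1e-12  # entropy never decreases
```

Even this awful shuffle only ever pushes the distribution toward (a subset of) uniform; it cannot concentrate it.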
OP was deleted because it was getting downvoted. Apparently the state is a normal, expected outcome from randomization, and it was a dumb question to ask.
That's a very odd observation, because religious people as a population show only very slight deviations from non-religious people. It is extremely difficult to tell by observation whether someone is religious, beyond church/temple/mosque attendance, which is equivalent to asking them.
Does he reference any data that would give his suggestion non-negligible plausibility?
It's a mediocre article. It glosses a bunch of history adequately and then points out a completely different anomaly. By "completely different" I mean "has all the signatures of an instrumental or analysis artefact." It's intermittent (a huge red flag) and, while the article doesn't say so (an additional red flag), close to the threshold of observation.
There are completely mundane explanations (upper atmosphere models slightly wrong, unaccounted-for EM effects) so while there may be a fundamental cause (gravity is doing something exciting) the odds are that it's a boring effect, just like the superluminal neutrino observations.
I didn't find the writing misleading at all. It didn't overstate the likelihood that there is actually new physics to be found in these anomalies and gave a specific example where an anomaly was explained by known physics that was merely unaccounted for.
The article definitely did not state or suggest that it wasn't a boring effect. It was just enumerating surprising mismatches between gravitational theory and observed reality.
Discovering that the Pioneer probes were off due to the recoil from their anisotropic thermal radiation was not 'new physics', but it was still interesting, in that it was missed in the initial attempt at modeling. And that's fine. It's still interesting and made for good reading.
Anyway, I liked it. I don't think it was mediocre at all.
Their results may be significant to VCs, but likely aren't to anyone else.
They get about a 4% increase in good outcomes, from 23% to 27%, for a 1 sigma increase in expert positive response. This is significant at the 5% level.
It's enough to say that "Experts do slightly better than chance in IP-heavy fields", but about 3/4 of those businesses will still fail, and that remains true regardless of expert evaluation.
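As a sanity check on that significance claim -- with an assumed sample size, since the actual n isn't quoted here -- a pooled two-proportion z-test:

```python
import math

def two_prop_ztest(p1, n1, p2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate."""
    x1, x2 = p1 * n1, p2 * n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# n = 1000 per group is an assumption for illustration only.
z, p = two_prop_ztest(0.23, 1000, 0.27, 1000)
```

At around a thousand ventures per group the 23%-to-27% gap does clear the 5% bar; with much smaller samples it would not.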
If you're a big investor this could make a difference to your bottom line. If you're a small investor or entrepreneur, it is pretty much irrelevant.
We find it because that's what we're looking for, or inventing. Here's a comparable question: "How is it that we can find an unfathomably small sub-set of possible symbols--a mere 26!--that are capable of encoding any idea whatsoever?"
The answer is: we are humans, doing human things within the scope of human capabilities. If there are ideas that are inexpressible by us, we can't possibly know about them. If there is physics profoundly beyond our ken (what lies behind the quantum veil, for example) we simply don't know about it.
Mathematics is a natural language (as physicists use it) to describe nature to ourselves. The fact of the knowing subject, and the activity of the knowing subject, cannot be left out without leaving a central mystery, which always amounts to "Why does the knowing subject do what they do?" (like restrict math to Smolin's four key categories of number, geometry, algebra and logic). If you imbue some mystical subject-free "mathematics" with these properties, rather than the activity of the knowing subject with them, they will remain mysterious.
> How is it that we can find an unfathomably small sub-set of possible symbols--a mere 26!--that are capable of encoding any idea whatsoever?
That's not an equivalent question. Any repertoire of N distinguishable symbols for N>1 is essentially equivalent.
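A quick way to see the equivalence: any alphabet with at least two symbols can carry any message, because you can always round-trip through the two-symbol case. A sketch:

```python
def to_binary(text):
    """Encode a string into the 2-symbol alphabet {'0', '1'} via UTF-8 bytes."""
    return ''.join(f'{byte:08b}' for byte in text.encode('utf-8'))

def from_binary(bits):
    """Decode the 2-symbol encoding back to the original string."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode('utf-8')

msg = "any idea whatsoever"
assert from_binary(to_binary(msg)) == msg  # N=2 already suffices
```

Nothing about the number 26 is special; it just trades alphabet size against message length.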
But there's no reason a priori to believe that the laws of physics should be modellable with mathematics at all, let alone that we should be able to figure out what those mathematics are, let alone that they should turn out to be simple enough that the model (or at least a very significant chunk of it) can fit in a single human brain.
Consider dreaming: when you are in a dream state you are living in essentially a solipsistic world where science doesn't work. There's no inherent reason why that could not be the totality of your existence. It's just an accident of biology that you wake up occasionally and get to experience the "real" world, which we consider "real" because it seems to behave according to mathematical laws. The existence of such experience is not a given.
Reality is continuous. Human categories--like cancer--only have sharp edges because we draw them with an act of selective attention. The edge of our attention is discontinuous. Nothing else (that doesn't involve quantum mechanics or integer counting of attentionally-isolated objects) is.
"Cancer" is not a simple thing. Two people with "breast cancer" may have very similar or almost completely different diseases. As others here have pointed out, the magnitude and frequency of wins and losses matter even though trades are binary win/lose (which they can be because we've created an entire category of imaginary objects called dollars that can be counted).
Bayes' rule is applicable to any area of significant uncertainty, including win/loss magnitude. It is universal.
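To illustrate that universality with a toy example (all numbers invented): updating a belief about whether a trading strategy has an edge, from a sequence of win/loss outcomes, is a direct Bayes'-rule computation. Magnitudes would enter the same way, through the likelihoods.

```python
def bayes_update(prior, likelihood, observation):
    """One Bayes'-rule step over a discrete set of hypotheses.
    prior: {hypothesis: P(h)}; likelihood: {h: function obs -> P(obs | h)}."""
    posterior = {h: prior[h] * likelihood[h](observation) for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Invented hypotheses: an "edge" strategy wins 55% of trades, "no_edge" wins 50%.
likelihood = {
    "edge": lambda won: 0.55 if won else 0.45,
    "no_edge": lambda won: 0.50,
}
belief = {"edge": 0.5, "no_edge": 0.5}
for won in [True, True, False, True]:  # observed trade outcomes
    belief = bayes_update(belief, likelihood, won)
```

Four trades barely move the needle with hypotheses this close, which is itself the Bayesian point: small edges need a lot of evidence.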