> but here's a few more arguably transformational consequences:
You are confusing important happenings with transformational changes (i.e. paradigm shifts).
Nothing you listed is a transformational change to the world order. The Soviet Union existed before WW2 and was a dominant force before WW2. Communist China rose within the (American/Anglo-American/Western/international) world order.
Probably the most important geopolitical event of the 20th century was the collapse of the Soviet Union. But it wasn't transformational. It didn't cause a paradigm shift. The world order remained.
> but it seems inarguable that things changed quite significantly
Change happened within a Western world order. It wasn't transformational to the world order.
Lots of history happened after WW2. What didn't happen was a transformational change to the world order. Man landed on the moon; it didn't change the world order.
I think it’s unfair to call Dyson a non-expert. Climate science was not his speciality, and he was humble about his relationship to it and quick to point out that there are people more familiar with the current state of the field than he was, but he was involved in important early climate studies. From Wikipedia:
> Around 1979 Dyson worked with the Institute for Energy Analysis on climate studies. This group, under Alvin Weinberg's direction, pioneered multidisciplinary climate studies, including a strong biology group. Also during the 1970s, Dyson worked on climate studies conducted by the JASON defense advisory group.
There have been lots of advances in modelling complex systems, but it’s notoriously difficult to find models that work beyond a short timeframe. Dumb, simple models often work better than cutting-edge, complicated, much more realistic ones simply because there are fewer variables to get wrong. And every time you make an advance, the assumed accuracy of the model’s long-term predictions needs to be reset: if model A predicted one year accurately, and you improved it so that model A.1 predicted the next year accurately, you don’t have a model that accurately predicted two years. There is still only one unknown year that was actually predicted, and A.1 was just as accurate or inaccurate as model A at predicting that unknown year, whether or not it took the previous year into account.
We’ve seen how easy it is for sophisticated and complex models of chaotic systems like the stock market to fail despite people’s faith in them. His point seems as prudent now as back then; I think some systems are a lot more complicated than we’d like, and that we should have enough humility to acknowledge that and to be constantly testing our assumptions against empirical data.
He didn’t dismiss the value of models in helping us understand chaotic systems; he was just wary of their long-term predictions. And that doesn’t necessarily mean they’re wrong, only that they should not be taken as gospel for anything beyond a fairly short time frame.
I'm not sure why so many people consider "refactoring" to be a synonym for variable renaming. Refactoring is a non-functional change that makes the code base more manageable [1]. Renaming is a kind of refactoring, but not a terribly important one.
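For instance (a made-up snippet, nothing from any particular code base), extracting a function is a refactoring that changes the shape of the code without changing its behavior, and it does far more for manageability than a rename:

```c
/* Before: the discount rule is buried inline and would be duplicated
 * wherever a total is computed. */
double total_before(double price, int quantity) {
    double total = price * quantity;
    if (quantity >= 10)
        total *= 0.9;          /* bulk discount */
    return total;
}

/* After an "extract function" refactoring: behavior is unchanged,
 * but the rule now has a name and a single home. */
static double apply_bulk_discount(double total, int quantity) {
    return quantity >= 10 ? total * 0.9 : total;
}

double total_after(double price, int quantity) {
    return apply_bulk_discount(price * quantity, quantity);
}
```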
> Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.
This is just another incarnation of "AI is the thing we haven't done." He's parroting Chomsky's disdain for statistical models and John Searle's fundamental misunderstanding of AI. For the former, Norvig has a fair rundown of Chomsky's complaints (http://norvig.com/chomsky.html).
> bears no resemblance to what a neuroscientist would call intelligent
TensorFlow gets results. The neuroscientist can claim it's a P-zombie, but they need to point to some criteria for accepting something as intelligent. Otherwise we're just moving the goalposts.
Note also the misleading title. Supercavitation technology has been around for a long time; the Chinese have solved some of the technical issues around scaling it up to a full-size submarine and steering it. There is still no solution to range and many related issues, and it is unlikely to be put into service any time soon.
It's true that I like to be technically correct. I don't consider this a problem in itself. In fact, it can be quite important for programmers and mathematicians to be technically correct, as this bug in Cap'n Proto illustrates quite nicely.
However, this is not the only reason I'm pointing out that this is not a compiler bug. The other reason is that there are certain implications:
1. The compiler developers are likely to declare that this is not their problem.
2. The problem may exist for other compilers as well: even a compiler that complies with the standard may do this. Even worse, it may only show up in a new version of the compiler, or in specific situations.
I am not saying that this is a good situation and that programmers should simply be more careful: everybody makes mistakes.
The root of the problem lies in the specification. Ideally, there would be no undefined behavior. From an optimizer's point of view it is very sensible to assume that the programmer will not invoke undefined behavior and to apply optimizations based on this principle. People surely love a compiler that produces fast code, so it may not be desirable to totally eradicate undefined behavior (and give up this kind of optimization).
The most practical remedy I see is a debug flag that crashes on, or at least somehow reports, undefined behavior. Indeed, GCC has done this. So ultimately, we seem to agree that compilers should change to improve this situation.
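To make that concrete, here is a minimal, made-up example (nothing to do with the Cap'n Proto bug itself) of undefined behavior that an optimizer may exploit, and that such a flag, e.g. -fsanitize=undefined in GCC and Clang, catches at runtime:

```c
#include <limits.h>
#include <stdio.h>

/* Signed integer overflow is undefined behavior in C, so an optimizer
 * is allowed to assume it never happens and fold this check into a
 * constant 1, even though on real hardware the addition would wrap
 * and the comparison would be false. */
int increment_is_larger(int x) {
    return x + 1 > x;   /* UB when x == INT_MAX */
}

int main(void) {
    /* Built with optimizations this typically prints 1; built with
     * -fsanitize=undefined the overflow is reported at runtime. */
    printf("%d\n", increment_is_larger(INT_MAX));
    return 0;
}
```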
This is called a binomial process. We want to estimate the true proportion of exits for a given city. A city has x exits out of n trials, so one estimate of this proportion is just x/n. However, if you have a bunch of cities like this list, you're going to have cities that by random chance end up with a fraction close to 1. That doesn't mean that startups there are guaranteed to succeed. It means that if you flip a coin 5 times, and replicate that 1000 times, you're going to have some runs of 5 heads and some runs of 5 tails. If you kept going, you would find the proportion approaching 1/2 (for a coin) in all cases.
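If you want to see that coin-flip effect for yourself, a quick simulation (a sketch with a fair coin, nothing to do with the actual startup data) looks like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Flip a fair coin 5 times, repeat that 1000 times, and count how
 * often pure chance hands you a "perfect" 5/5 record. */
int main(void) {
    srand((unsigned)time(NULL));
    int perfect_runs = 0;
    for (int rep = 0; rep < 1000; rep++) {
        int heads = 0;
        for (int flip = 0; flip < 5; flip++)
            heads += rand() % 2;
        if (heads == 5)
            perfect_runs++;
    }
    /* Expect roughly 1000 / 2^5, i.e. about 31 runs of all heads,
     * even though the true proportion is exactly 1/2. */
    printf("runs of 5 straight heads: %d\n", perfect_runs);
    return 0;
}
```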
So the problem is how you compare a city with 5/6 exits, like Branford, CT USA, with one at 143/208, like Mountain View. Is Branford that much better because 5/6 > 143/208? Mostly, all you know is that the error in your estimate is much larger for Branford than for Mountain View, because your value of n is 6 vs 208. You can't say with statistical confidence that Branford is better.
So one trick to penalize the small-n locations is to do some smoothing. Laplace smoothing adds 1 for every outcome: 1 success and 1 failure, so we add 1 to x and 2 to n. That also means that nothing gets pinned at exactly 0.0 or 1.0. The odds in Saint Petersburg, Russia aren't really 0.0 just because they were 0/11. There is some chance you could succeed there, so smoothing gives you a better estimate of "unseen events".
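In code, the smoothing itself is trivial (the function name here is just for illustration):

```c
#include <stdio.h>

/* Laplace ("add-one") smoothing: one phantom success and one phantom
 * failure, so no city is pinned at exactly 0.0 or 1.0. */
static double smoothed_rate(double exits, double startups) {
    return (exits + 1.0) / (startups + 2.0);
}

int main(void) {
    /* Saint Petersburg: 0/11 becomes 1/13, about 0.077, instead of 0.0. */
    printf("%.3f\n", smoothed_rate(0, 11));
    return 0;
}
```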
The next thing you want to do is look at confidence intervals, rather than the point estimate of x/n or even (x+1)/(n+2). There are a number of formulas you can use; I used one built into R, a statistical modelling language. This gives you a lower and upper bound on the true proportion. If the interval is exact, then 95% of the time it will contain this true, unknown proportion.
The bounds on my smoothed counts are:
                          x+1   n+2   lower   (x+1)/(n+2)   upper
Mountain View, CA USA     144   209   0.621      0.689      0.751
Branford, CT USA            6     7   0.421      0.857      0.996
Los Angeles, CA USA        56   180   0.244      0.311      0.384
So the true proportion for Mountain View is somewhere between 0.621 and 0.751, while Branford, CT is between 0.421 and 0.996. Since these intervals overlap, we can't really say one is better than the other. Also consider LA, which has a range of 0.244 to 0.384. Since 0.384 < 0.421, we could say that LA has a worse exit ratio than either Branford or Mountain View, with 95% confidence.
To sort, it is often good to be conservative and use the lower bound. I used a 95% interval, which is good for saying Branford is better than LA, but might be a bit wide for sorting. You could use a 90% or even 80% interval for that, if desired.
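If you want to reproduce something similar without R, here is a self-contained sketch using the Wilson score approximation. This is not necessarily the formula R used for the table above (its bounds come out slightly narrower than the exact ones), but it illustrates the idea and gives you a lower bound you could sort on:

```c
#include <math.h>
#include <stdio.h>

/* 95% Wilson score interval for a binomial proportion x/n.
 * An approximation; an exact interval gives slightly wider bounds. */
static void wilson95(double x, double n, double *lo, double *hi) {
    const double z = 1.96;                 /* ~97.5th normal percentile */
    double p      = x / n;
    double denom  = 1.0 + z * z / n;
    double center = p + z * z / (2.0 * n);
    double margin = z * sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n));
    *lo = (center - margin) / denom;
    *hi = (center + margin) / denom;
}

int main(void) {
    /* Smoothed counts (x+1, n+2) from the table above. */
    struct { const char *city; double x, n; } rows[] = {
        { "Mountain View, CA USA", 144, 209 },
        { "Branford, CT USA",        6,   7 },
        { "Los Angeles, CA USA",    56, 180 },
    };
    for (int i = 0; i < 3; i++) {
        double lo, hi;
        wilson95(rows[i].x, rows[i].n, &lo, &hi);
        /* Sorting by lo is the conservative ranking described above. */
        printf("%-24s %.3f  [%.3f, %.3f]\n",
               rows[i].city, rows[i].x / rows[i].n, lo, hi);
    }
    return 0;
}
```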
It is really crucial to take into account what you don't know when comparing fractions based on different values of n.