Of course there was scientific opposition to Galileo. Scientific ideas get hammered out through debate and disagreement. New ideas in science often take decades or even centuries to fully develop and reach broad acceptance.
That is why it is important that new ideas can be discussed freely, which wasn't the case in Galileo's time.
They are now, but the uninformed reader can't assume that's been static throughout history. There will have been some point at which it became fully true, some point at which it wasn't true at all, and some continuous function of time describing the extent to which it was true in between. Can you pin down either of the latter two?
> They are now, but the uninformed reader can't assume that that's static throughout history.
Scientific ideas are not discussed freely. They are driven by funding, which is disproportionately awarded to novel results. Non-novel results and failures to reproduce are largely ignored and underfunded.
How many people alive today could come up with Dijkstra's shortest path algorithm when faced with a problem that calls for it? Probably hundreds of thousands, if not more. But, Dijkstra described the problem and its solution at a time when computer science was an obscure field of study. His contribution - among others - helped pave the way for the modern computerized world.
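For a sense of scale, the whole thing fits in a dozen or so lines of Python today (a minimal sketch using heapq, not Dijkstra's original formulation; the adjacency-list format is just an assumption for illustration):

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping node -> list of (neighbor, edge_weight) pairs
        # returns a dict of shortest distances from source to each reachable node
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue  # stale heap entry; a shorter path was already found
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, 'a')
    # -> {'a': 0, 'b': 1, 'c': 3}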
If someone today solves an obscure problem of similar difficulty to Dijkstra's graph search, they won't be called great anything. It is the impact that matters, not the difficulty of the problem or the IQ of the inventor. Great scientists, philosophers and artists had the foresight, luck, and ability to develop something that subsequently had a massive impact on the world.
A point in the article that I agree with is that great philosophers shouldn't be treated as flawless geniuses. Great people of the past made massively impactful contributions, but that doesn't make them the ultimate authority on their field. Answers to today's problems aren't going to be found by dissecting footnotes of Aristotle, Turing or Einstein, but in the work of the people making contributions today that will shape the future.
If nothing else, he was a huge influence on the development of the three major monotheistic religions.
He also was Aristotle's teacher, and Aristotle was the philosopher who more or less created the entire scientific project (via people like Galen and Avicenna).
It's almost impossible to imagine what the world would be like today without them.
I think Plato's most significant work is his Theory of Forms -- he essentially articulated the object-oriented approach to looking at the world. Plato's abstract, idealistic "forms" are a lot like classes, whereas individual "objects" instantiate those classes.
Aristotle took this framework and developed it further with the idea of inheritance. He classified things into tree structures (with great success) across many fields including biology, zoology, geology, psychology, politics, physics, and so on.
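To push the analogy a little further (a loose sketch, not a claim about what Plato or Aristotle actually meant): the form is the class, particular things are instances, and the genus/species classification looks a lot like inheritance:

    # A loose illustration of the analogy, nothing more: the Form is the class,
    # particulars are instances, and genus/species resembles inheritance.
    class Animal:                  # genus
        def __init__(self, name):
            self.name = name

    class Bird(Animal):            # species inheriting from the genus
        def move(self):
            return self.name + " flies"

    class Fish(Animal):
        def move(self):
            return self.name + " swims"

    sparrow = Bird("sparrow")      # a particular "participating in" the Form
    print(sparrow.move())          # -> sparrow flies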
Um, calling Shadows and Forms "a lot like" Object-Orientation in Computer Science really belittles much of Information Theory developed in the last hundred years.
If anything, I would consider it an early version of "Signs and Signifiers".
The site isn't making any asynchronous requests, so there really is no "real time" aspect to this. If you look at the bottom of the page, it says "Showing the latest hot searches in All Regions".
So, these are just the top X searches from some time period, presumably after some processing and filtering (NSFW results eliminated, capitalization and spelling corrected, maybe some categories omitted, etc.).
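In other words, the pipeline behind it is probably something mundane like this (purely a guess at what that processing looks like; the function and parameter names are made up):

    # Hypothetical post-processing of raw (query, count) pairs into a "hot" list.
    # Every step here is speculation about what the filtering might involve.
    def top_searches(raw_counts, blocklist, top_x=20):
        cleaned = {}
        for query, count in raw_counts:
            q = query.strip().lower()        # normalize capitalization/whitespace
            if q in blocklist:               # drop NSFW or omitted categories
                continue
            cleaned[q] = cleaned.get(q, 0) + count
        ranked = sorted(cleaned.items(), key=lambda kv: kv[1], reverse=True)
        return [q for q, _ in ranked[:top_x]]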
From the perspective of theoretical computer science, the interviewer was correct. I'll try to sketch out why.
------------------
1. In the initial probing for the upper bound, there is no point in growing faster than 2x each time
If we double the probe each time, we'll find the upper bound in O(log N) time. Then, we'll need O(log N) additional time to find the real answer. That makes the entire algorithm O(log N).
Suppose instead that in the initial probing we grow the bound faster, say by squaring it each time. We'll find the upper bound faster, but we'll still need additional O(log N) time to find the real answer. So we didn't really make the algorithm (asymptotically) faster - it is still O(log N).
(I glossed over some details in the explanation, but even if you work out the math exactly - which is not that hard to do - the conclusion holds.)
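For concreteness, here is a minimal sketch of the doubling version; the is_le(n) oracle (does the secret lie at or below n?) is my own stand-in for whatever interface the question assumed:

    def find_secret(is_le):
        # is_le(n) answers "is the secret number <= n?"
        # Phase 1: double an upper bound until we overshoot -- O(log N) probes.
        hi = 1
        while not is_le(hi):
            hi *= 2
        # Phase 2: binary search in (hi//2, hi] -- another O(log N) probes.
        lo = hi // 2
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if is_le(mid):
                hi = mid
            else:
                lo = mid
        # invariant: is_le(hi) holds and is_le(lo) doesn't, so hi is the secret
        return hi

    # find_secret(lambda n: 1234 <= n)  ->  1234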
------------------
2. There is no point starting from a number greater than 1
There is no way to pick a "good" starting number - should it be 1,000? 1,000,000,000? 10^100? (10^100)^100? You might as well start from 1. That way, you guarantee that you'll find small numbers fast and large numbers still in asymptotically optimal time.
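One way to see the trade-off is to count oracle calls instead of reasoning abstractly; a quick sketch (the 10^9 starting value below is an arbitrary choice, purely for illustration):

    # Count oracle calls for a doubling-plus-bisection search that begins
    # its probing at `start`. A large start buys a constant factor on huge
    # secrets and loses badly on small ones; asymptotically nothing changes.
    def count_calls(secret, start=1):
        calls = 0
        def is_le(n):
            nonlocal calls
            calls += 1
            return secret <= n
        hi = start
        while not is_le(hi):                 # doubling phase
            hi *= 2
        lo = hi // 2 if hi > start else 0    # if the very first probe succeeded,
        while lo + 1 < hi:                   # we know nothing below it
            mid = (lo + hi) // 2
            if is_le(mid):
                hi = mid
            else:
                lo = mid
        return calls

    # count_calls(7)                    -> 6
    # count_calls(7, start=10**9)       -> ~31
    # count_calls(10**12)               -> 80
    # count_calls(10**12, start=10**9)  -> ~50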
------------------
As someone with a fairly strong theoretical computer science background, I can see the intended meaning behind the interviewer's question and answer. But it is a theoretical question. There are definitely a lot of highly valuable software developers out there who couldn't answer it.
Given that this is a theoretical question that you are attempting to fit into an actual algorithm to ostensibly solve a real problem, doesn't the actual performance of the algorithm matter? I can see that within the context of theoretical computer science there may not be a difference between the speed of the two algorithms, since both are "asymptotically optimal" (a term I'm not sure I entirely grok), yet for virtually any finite set of random numbers the algorithm I provided will perform substantially faster.
I suppose I'm at a loss for words, since as a seasoned software developer without a deep background in theoretical computer science, it seems I am being told to disregard my intuition based on experience in favor of a theoretical model that ignores actual problem sets. Am I missing something?
AFAICT, the parent and the interviewer are right: both methods run in roughly the same time, because the binary search takes orders of magnitude longer than the upper-bound-finding part. I might have made one or more mistakes, though.
If I have time later today I'll extend your snippet, but my inclination is that the differences in speed become apparent only with very large numbers. I still suspect my algorithm is about 40-50% faster if you start with a googol.
It seems that the problem involves two parts:
1) To find an upper bound
2) Then repeatedly halve the remaining region until the number is found.
The first observation I have is that, given that the secret number s is already chosen, the first step can be completed arbitrarily quickly: one could use a function that rises arbitrarily fast. Imagine, for example, the function taking k to the Ackermann function A(2,2,k). That rises so fast it's incomprehensible, but one could easily produce a function that rises faster still (the algorithm that picks s first!). The real problem, though, is how fast a fixed algorithm is for a random s. If s is truly chosen at random from the positive integers, this leads to trouble. Fix your putative algorithm, and suppose for the moment that it starts at 0 (it is not going to matter where it starts). What is the probability that the kth number your function spits out is less than a random integer? 100%. After all, how many integers are greater than any given integer?
Therefore, no growth function is any better on average at finding the upper bound than any other, and step 1) is an insoluble problem. The problem should have been specified in some other way in order for it to make sense.
However, the second step takes log_2(n) time in the worst case (and still O(log n) in general), where n is the upper bound --the output of step 1-- which means that the time to complete the algorithm is (time of step 1 to find n) + O(log n). IF the problem made sense and step 1) were soluble, then how fast step 1 is would be counterbalanced by the degree to which it tends to overshoot the secret number --because overshooting by k has a penalty of roughly log(k) extra operations.
Is there a framework in which step 1) makes sense? It would make sense if there were a given probability distribution on the integers (a function on the positive integers whose values sum to 1 -- for example, f(n) = 6/(n^2 * pi^2)). A probability distribution gives you a way of answering the question: "what proportion of the positive integers are greater than k?"
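Under that example distribution the question does become answerable; here's a small sketch of the tail computation (the truncation at 10^6 terms is just an arbitrary cutoff for illustration):

    import math

    # P(secret = n) = 6 / (pi^2 * n^2); these probabilities sum to 1 because
    # the sum of 1/n^2 over all positive n is pi^2 / 6.
    def tail_probability(k, terms=10**6):
        # proportion of the probability mass sitting above k (truncated sum)
        return sum(6.0 / (math.pi**2 * n * n) for n in range(k + 1, k + 1 + terms))

    # tail_probability(10)   -> ~0.058  (about 6% of secrets exceed 10)
    # tail_probability(1000) -> ~0.0006
    # For large k the tail behaves like 6/(pi^2 * k), so under this distribution
    # a doubling search finds its upper bound quickly on average.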
My intuition says that they should almost always run in about the same time if you don't know anything about the distribution of guesses. Your algorithm has the advantage that it will find an upper bound for the number much faster than the interviewer's algorithm. HOWEVER, having found that upper bound, your algorithm will probably be much farther away from the actual number than the interviewer's, so you have to do that much more narrowing down. What it comes down to is being lucky enough that your initial guess is close to the number; otherwise, you're still talking logarithmic time to find either an upper bound or (in your case) a lower bound for the number.
This was my conclusion, that although the two approaches had the same average big O time, my intuition said it was somewhat faster to start from some number larger than 0.
In retrospect, I might have mentioned this and then given the interviewer's preferred answer in code, just to keep things simpler.
On the contrary, we overeat precisely because we listen to our bodies.
Evolution tuned us to survive in a world where food is scarce and periods of starvation are frequent. On the rare occasion that you have excess food, you eat it and build up fat reserves. The saved-up fat will help you survive the period of starvation that undoubtedly follows.
So why isn't there a decline in appetite as the fat reserves grow? It wouldn't matter if adding more fat had no side effects, but there seems to be a widespread belief that fat causes all sorts of health problems. At the very least there is decreased mobility (more likely to be eaten by lions), earlier mortality (e.g. heart attacks), etc.
I wonder if there are any studies correlating obesity levels with reproductive success, and with the success of the offspring produced. That is what should end up controlling the genes and behaviour behind all this.
That may be true of actors and directors. But for software? I can't imagine.
Look at the presenter's experience:
> Over a six month period, I lead the project to rewrite a top 100 website using a new software stack. Doing so, we used HAProxy, Varnish, Nginx, PHP-FPM, Symfony2, Syslog-ng, Redis and MySQL to create a platform that handles 100 million page views per day and has room to grow.
There are tons of companies out there eager to hire someone with that experience.