Your program is slow so you profile it, and find out that f() takes 99% of the time. So you work a lot to optimize f(), and re-profiling shows that now f() takes 98% of the time.
Doesn't seem that impressive after all the work you've put into optimizing f(), but your program is actually twice as fast :)
For example, say your function f() takes 100% of the time. You then make it twice as fast. You look at the percentage, and, surprise, it still is taking 100% of the time!
It's understandable though in the context of the sequence of actions you might be going through as a tinkerer (as opposed to a scientist). You start with a list of functions and the % of time they take. Improving the top one will have the biggest effect, so this is a good thing to look at when deciding what to do. Then you go back to the same list afterwards.
I can see this (and lots of variations of this) happening
Or, a computer scientist would say, the right one. The actual speed doesn't matter as much as the complexity.
In reality, it does, in the end; but it's also important to consider the type of optimization you're doing, and not just stop at being impressed with twice as fast.
For example, a primitively optimized function that's twice as fast for input size 100 would still be twice as fast for input size 100,000, but a function optimized to be an order less complex could be twice as fast for size 100, and a thousand or so times as fast for size 100,000.
The latter optimization would be significantly better, objectively, and thus, it would make a lot of sense to look at the first program and be unimpressed with a mere linear doubling in speed.
Except, as an engineer would say, in practice the complexity doesn't matter as much as constant factors and the time and cost of design, implementation, and execution.
In all cases where you can fit n in the total memory capacity of the earth, Batcher's network is vastly superior. Constants matter a whole lot more than you might think.
I'm not arguing against revising algorithms to look for better complexity classes, but if you rewrite your algorithm in a smaller complexity class you had better be sure that your constants are of reasonable size.
Would you rather call a slow function, a fast function, or skip it altogether?
(A coupon only saves you money if you would buy the item in the first place.)
Sorry, what's the context? Admin / account and login areas don't care / matter much, because they want it :p they'll wait...
1) Create a benchmark (some code that performs the task that I want to optimise), and measure the absolute time it needs.
2) Use a profiler to see where most of the time is spent. Optimise that code.
3) Run the benchmark to see if your optimisations are effective.
Or, in other words: the percentage tells you where to focus your effort. The absolute time tells you how successful your optimisation is.
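A minimal sketch of steps 1 and 3 of that workflow, assuming Python and a hypothetical task() function standing in for the code under test (step 2 would be cProfile or perf):

```python
import time

def task():
    # Hypothetical workload standing in for the code you want to optimise.
    return sum(i * i for i in range(100_000))

def benchmark(fn, repeats=5):
    """Return the best absolute wall-clock time over several runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

baseline = benchmark(task)
# ... optimise the hot spot the profiler pointed at, then re-run ...
print(f"absolute time: {baseline:.4f}s")
```

Taking the minimum of several runs is the usual trick for reducing scheduling noise; it's the absolute number you compare before and after, not the profiler's percentages.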
For example, when profiling a long-running service, or a kernel, there's no "absolute time it needs"; you need other proxies for running time, such as the CPU usage percentage, but even that is not noise-free.
So if you just run "perf" on it, you need to be aware of this fallacy.
static char buffer[1024*1024*2];
In other words, when a project gets handed down from above to launch in, say, 3 months, there's no way in hell you can get the servers requisitioned, approved, and installed in that time. It became standard practice for each team to slightly over-request server capacity with each project and throw the excess hosts into a rainy-day pool, immediately available and repurposable as required.
I know of a Social Security Administration acquisition back in the days of 100 MHz processors. The bureaucracy took so long with this, by the time the order could be put out to suppliers, 100 MHz processors were no longer available, so SSA ended up with a bunch of workstations that were 166 MHz processors down-clocked to 100 MHz. (Otherwise, they would have had to start the whole process over again.)
orig_time = 100
orig_not_f = 0.01 * orig_time = 1
new_f = 0.98 * new_total
new_total = new_f + orig_not_f
new_total = 0.98 * new_total + orig_not_f
new_total - 0.98 * new_total = orig_not_f
0.02 * new_total = 1
new_total = 1 / 0.02 = 50
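The same arithmetic, checked as a quick script (a sketch; the 99%/98% figures come from the comment above):

```python
# f() originally takes 99% of a 100-unit run; everything else takes 1 unit.
orig_total = 100.0
orig_not_f = 0.01 * orig_total          # 1 unit spent outside f()

# After optimising, f() accounts for 98% of the NEW total:
#   new_total = 0.98 * new_total + orig_not_f
new_total = orig_not_f / (1 - 0.98)     # solve for new_total

print(new_total)  # ~50: the program really is twice as fast
```

So a one-percentage-point drop in the profiler's report corresponds to a 2x speedup, which is the whole point of the fallacy.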
(Oh, how we laughed.)
[In case anyone reading this thinks the above might be amusing if only they knew what mathematical objects were actually being referred to: (1) No, probably not. (2) Abelian group; Zorn's lemma; line bundle. For the last one, you need to make it the first Stiefel-Whitney class if you're working over the real numbers rather than the complex numbers.]
Mine is like umbrella (the u).
[EDITED to add: Er, except that the first vowel in my "umbrella" isn't the same as I think you're putting in yours. In IPA, my umbrella is /ʌmbrɛlə/ and my banana is /bənɑːnə/, unless I've made mistakes there.]
Natural uranium is ~1% U235; bombs need 90+% U235. So when you've enriched it from 1% to 2% it doesn't seem like you've made a lot of progress towards 90%.
If instead of enriching U235 you think of it as eliminating U238, though, then you've done half of the work.
That's the wrong way to think of it though. The right way to measure progress is in terms of Separative Work Units: https://en.wikipedia.org/wiki/Separative_work_units
If you start with 10000 kg of natural (0.7%) uranium and you want to separate it into 45 kg of highly-enriched (90% U235) uranium and 9955 kg of depleted (0.3% U235) uranium, then you will have to do 8800 kg of "Separative work units".
On the other hand, separating that same fuel into 3000 kg of partially-enriched (1.4% U235) uranium and 7000 kg of partially-depleted (0.4% U235) uranium only takes 1790 kg of "Separative work units", even though the increased concentration of U235 means that "half the U238 has been eliminated".
Isotope enrichment is an area where, to borrow a line from software engineering, the first 90% takes 90% of the time, and the last 10% takes the other 90% of the time.
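For the curious, the quoted figures can be roughly checked with the standard separative-work value function V(x) = (2x-1)·ln(x/(1-x)). This is a sketch, not a reference implementation; the results land in the same ballpark as the 8800 and 1790 figures above (small differences come down to rounding of the assays):

```python
import math

def V(x):
    """Standard separative-work value function for assay x."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu(feed_kg, xf, product_kg, xp, tails_kg, xw):
    """Separative work (kg SWU) to split feed into product and tails."""
    return product_kg * V(xp) + tails_kg * V(xw) - feed_kg * V(xf)

# 10000 kg natural (0.7%) -> 45 kg HEU (90%) + 9955 kg tails (0.3%)
heu = swu(10000, 0.007, 45, 0.90, 9955, 0.003)

# 10000 kg natural -> 3000 kg at 1.4% + 7000 kg at 0.4%
leu = swu(10000, 0.007, 3000, 0.014, 7000, 0.004)

print(round(heu), round(leu))  # roughly 8700 and 1870
```

The striking part is how little of the HEU total comes from the product term: almost all the work goes into pushing the huge tails stream down to 0.3%.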
You mean 40 kg and 9960 kg (so the sum is 10000 kg)? <insert "Were you a Putnam Fellow?" joke here>
Actually I meant 45 kg and 9955 kg. I adjusted the numbers several times looking for values which would come out with round-ish quantities but %U235 values in the right ranges for HEU and DU.
So that's why the last 20% takes 80% of the time.
Of course, you can add all the water in the universe and they'll still not be 100% water. The water-percent increment just gets smaller and smaller, the more water you add.
This "potato paradox" illustrates the same effect, but in the other direction, where a small relative decrease yields a large absolute decrease.
You won't have a potato anymore. But you won't have water anymore either.
You'll have a black hole.
We don't know if black holes have hair, so the answer to your question is "We don't know".
Or another way - given you pour water in at any constant rate forever, given any p% you want to dilute to, there will be a point in time where that dilution level is exceeded, and will remain so for every point in time after that.
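A sketch of that limit: with a fixed amount of solids and water poured in at a constant rate, the water fraction w/(w+s) eventually crosses any threshold p < 1 and never drops back:

```python
solids = 1.0   # fixed non-water mass
p = 0.999      # any target dilution short of 100%

water, t = 0.0, 0
while water / (water + solids) < p:
    water += 1.0   # constant pour rate: one unit per step
    t += 1

print(t)  # first step at which the mixture exceeds p water; it stays above forever
```

Since water/(water+solids) is strictly increasing in water and tends to 1, the loop is guaranteed to terminate for any p < 1.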
At this point, you have 1lb of solids and 199lbs of water, or 99.5%. Then you *cough* round off, and there you are, 100%!
In other words, it's another way of phrasing that it takes twice as much evidence to be 99% sure as it is to be 98% sure. Or that it's twice as hard to have 99% uptime than 98%.
Not twice as much evidence. Evidence needs to be measured logarithmically. (Otherwise you'd say it takes twice as much evidence to be 67% sure as 50% sure (2:1 versus 1:1), but the second takes no evidence at all for a binary proposition.)
It takes twice as much evidence to be 99% sure as 91% sure. 98% to 99% is 17 decibels to 20 decibels, which is less than 20% more evidence.
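A quick sketch of that decibel arithmetic, with evidence measured as 10·log10 of the odds:

```python
import math

def evidence_db(p):
    """Evidence for a binary proposition, in decibels: 10*log10(odds)."""
    return 10 * math.log10(p / (1 - p))

print(round(evidence_db(0.50)))  # 0 dB: even odds, no evidence at all
print(round(evidence_db(0.91)))  # ~10 dB
print(round(evidence_db(0.98)))  # ~17 dB
print(round(evidence_db(0.99)))  # ~20 dB: twice the evidence of 91%
```

So 98% to 99% is only about 3 dB more evidence, while 91% to 99% genuinely doubles it.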
Earth has developed an insatiable appetite for Martian potatoes with their tasty pulp. Yuki runs a successful manufacturing plant on Mars which synthesizes a product called 'Martian Instaspuds'; instant potatoes being more cost effective to ship. Now these red potatoes (it's Mars after all) are processed in lots of 100kg mass which consist of 1kg pulp to 99kg water, 1p:99w. For 'Instaspuds' this ratio must be reduced to 9p:1w. How much water must be boiled-off in solar powered kilns on the Martian equatorial surface to achieve this ratio?
1) posit the pulp mass remains constant at 1kg
2) x = end product mass; 9/10 x = 1kg; x = 10/9 kg;
1/9 kg = final water mass in resulting mixture
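Carrying the derivation above through to the quantity actually asked for (a sketch of the same arithmetic):

```python
# Start: 100 kg lots, 1 kg pulp : 99 kg water.
pulp = 1.0
water = 99.0

# Target ratio 9 pulp : 1 water, with the pulp mass held constant.
final_mass = pulp / (9 / 10)        # pulp is 9/10 of the end product: 10/9 kg total
final_water = final_mass - pulp     # 1/9 kg of water remains
boiled_off = water - final_water

print(boiled_off)  # ~98.89 kg of water must be boiled off
```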
You have 100 lbs of Martian potatoes, which are 1/100 solids by weight. You let them dehydrate until they're 1/50 solids. How much do they weigh now?
Now the answer is staring you right in the face.
I think it is the fact that they used potatoes that makes it counterintuitive. Had it instead been a glass of water that had 1% of dissolved salt in it, it would have been very straightforward.
If you phrased the question as "You have N pounds of potatoes", or with a specific number other than 100, it would come across as less unintuitive. As you read, you see "100 lbs", and "99%", so percents and potato components are both out of 100. So then you see 98%, which is 98/100...
This seems like the kind of thing that could be tested with a study. Ask a few hundred people each version of the problem (ideally filtering out anyone who has seen the problem before), and see how many get each version right.
(Potato paradox problem paradox: You ask 100 people to solve the potato paradox. 99% of them get the answer wrong. You drop people who answered incorrectly until you have a group where 98% got the answer wrong. How many people are left?)
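The meta-paradox has the same structure as the original. A sketch of its arithmetic, assuming exactly one of the 100 answered correctly:

```python
correct = 1.0     # 1% of 100 people got it right
wrong = 99.0      # 99% got it wrong

# Drop wrong answerers until only 98% of the remaining group is wrong.
# The correct answerers must then make up 2% of the group:
remaining = correct / 0.02

print(remaining)  # ~50 people remain: you had to drop 50, all wrong answerers
```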
Here's an example from his book (Gerd Gigerenzer, Reckoning With Risk):
With the potatoes, you imagine a crumbled potato. You have in mind how slow a potato would lose water and how much weight it would retain, which is of course the wrong way to look at the problem.
A fresh lake gets infested with algae and the amount of algae doubles every day. The algae covers the whole lake in 10 days. How many days did it take the algae to cover half the lake?
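For completeness, the doubling arithmetic as a quick sketch (the answer is day 9, not day 5):

```python
full_day = 10

# Coverage fraction on each day, working backwards from full coverage.
coverage = {d: 2.0 ** (d - full_day) for d in range(1, full_day + 1)}

half_day = next(d for d, c in coverage.items() if c == 0.5)
print(half_day)  # 9: one doubling before the lake is full
```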
That is, restate the question with a different end water percentage and the answer is immediately obvious:
> You have 100 lbs of Martian potatoes, which are 99 percent water by weight. You let them dehydrate until they're 50 percent water. How much do they weigh now?
And, of course, it's pretty easy to get "2 pounds", but your brain is pretty fixed on the numbers all clustered together in the other example.
A statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory.
0.99 * 100 - (0.99-x)(100 - y) = y
This assumes that you are starting with 100 pounds of potatoes at 99% water weight.
Here's a WolframAlpha link:
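The same equation is also a few lines of Python. Here y is the pounds of water lost, and the 99% to 98% case corresponds to x = 0.01 in the equation above:

```python
# (99 - y) / (100 - y) = 0.98  ->  solve for y, the water evaporated
final_fraction = 0.98
initial_water = 99.0
initial_mass = 100.0

# 99 - y = 0.98 * (100 - y)  =>  y * (1 - 0.98) = 99 - 98
y = (initial_water - final_fraction * initial_mass) / (1 - final_fraction)

print(y)  # ~50 pounds of water must evaporate, leaving 50 lbs of potatoes
```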
So as stated it is a trick question, the kind of parlour game found in old books of puzzles.
Finally, it seems to be a common failure of education to let people go through their 'mathematical' training and leave with the impression that 'percent' is some kind of dimension, when in fact it is shorthand for a ratio: y is x percent of z. If z is not specified, then you don't know what y is, regardless of how much effort you put into discussing x.
So I suspect that in real life there are not many occasions when the paradox appears surprising.
Then m(p) = 1/(1-p) . The "unintuitive" aspect is that we want to think of this function as linear when it isn't.
By Taylor's theorem, how bad a linear approximation to this function will be depends on its higher-order derivatives. We can get an idea of this by looking at its second derivative:
m''(p) = 2/(1-p)^3. If you plot this graph, you'll see that it really starts to blow up past 0.8, so the nonlinearities start dominating.
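A quick numeric sketch of how m(p) = 1/(1-p) blows up (total mass per unit of dry mass, as a function of water fraction p), alongside the second derivative:

```python
def m(p):
    """Total mass per unit of dry mass, given water fraction p."""
    return 1 / (1 - p)

def m2(p):
    """Second derivative 2/(1-p)^3 -- how fast linearity breaks down."""
    return 2 / (1 - p) ** 3

for p in (0.50, 0.80, 0.90, 0.98, 0.99):
    print(p, m(p), m2(p))
# m(0.98) ~ 50 and m(0.99) ~ 100: the last percentage point doubles the mass.
```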
If you said you had a pool that was 99% water, that changed to 98% water, the massive weight drop would be much less surprising.
In fact, it's not even the same thing. A 100-gallon pool (for very small people) that is 99% full has 99 gallons. 98% full has 98 gallons...
a seemingly absurd or self-contradictory statement or proposition that when investigated or explained may prove to be well founded or true.
"in a paradox, he has discovered that stepping back from his job has increased the rewards he gleans from it"
A veridical paradox produces a result that appears absurd but is demonstrated to be true nevertheless.
Amazing what philosophers will get up to.
Took me ten seconds to get it. Takes most people (IME) a lot longer than me to get similar things.
Maybe "no number sense" is actually "other people's minds don't usually work like mine"?
Particularly with the second form of the question in the article - anyone who's ever had a potato knows that it doesn't shrink to half its weight overnight. Poor situation selection and confusing wording do not make a paradox.
Quine (who never went by QuineQuine, more's the pity) had an answer to this problem: Differentiate between veridical and falsidical paradoxes.
A veridical paradox is a paradox of the linked type: It appears absurd, but there is no problem in the logic, so it doesn't lead to a contradiction. Therefore, the conclusion is valid.
A falsidical paradox is similar to the Liar Paradox: It appears absurd and has a hole in the logic, such that there is no conclusion (it contradicts itself, and is also called an antinomy, as in the Liar Paradox) or the conclusion is invalid (as in one of the many fake proofs that 1 = 0). In Quine's words:
> In a falsidical paradox there is always a fallacy in the argument, but the proposition purportedly established has furthermore to seem absurd and to be indeed false.
Naming things paradoxes attracts this sort of individual, so it's a kind of marketing ploy for an idea: it won't just get it talked about, it'll get it talked about forever, because each generation of paradox-mongers will take it up anew and discuss it and analyze it and do absolutely anything except acknowledge that the specific flaw in the reasoning that leads to the appearance of an impossible conclusion being true was exposed generations ago.
For example, consider the supposed paradox of the evening star and morning star, which I will first state in a way that makes it clear there is no paradox, then restate in the traditional way.
"The Evening Star is Venus seen in the evening sky."
"The Morning Star is Venus seen in the morning sky."
"Transitivity implies therefore that the Evening Star is the Morning Star, since Venus seen in the evening sky is Venus seen in the morning sky."
This is obviously stupid: no one would make this claim. The traditional formulation actually depends on a falsehood, or at least on radically incomplete statements:
"The Evening Star is the planet Venus."
"The Morning Star is the planet Venus."
"Transitivity implies therefore that the Evening Star is the Morning Star, since the planet Venus is the planet Venus."
At this point, you can spend millions of words explaining why the statements identifying lights in the sky viewed at a particular time of day with a ball of rock orbiting the sun are problematic. It will be pointless: paradox-mongers will simply not let their nice toy be demolished.
Most traditional paradoxes have straightforward resolutions (frequently involving inserting a knowing subject into them, like the person who observes Venus) but none of them will ever be "solved" because of this purely psychological resistance to the possibility of their solution on the part of a very vocal sub-population.
One useful trick is to never let a paradox be stated in the traditional way. The first step in any discussion should be to restate the paradoxical situation as completely as possible, usually by introducing the perspective of particular individuals, as I've done above. Traditional paradoxes almost all depend on very specific ways of stating them for their psychological effect, and breaking out of that ritual pattern of restatement often makes them look simply stupid. One can then ask what is missing in the ritual statement of the paradox that makes it not seem stupid.
In the case of the Potato Paradox, the restatements we've seen here, which introduce the ratio 1:99 as the way to think about the problem, are a good example of this. Since the answer is not intuitive, the correct way to introduce the problem is to make it intuitive, not to blurt out a non-intuitive answer and expect the now-confused listener to catch up. That's just bad pedagogy.
(there's also a followup, at http://spikedmath.com/335.html )