Potato paradox (wikipedia.org)
559 points by Panoramix 859 days ago | hide | past | web | 132 comments | favorite



This is not that uncommon when optimizing code.

Your program is slow so you profile it, and find out that f() takes 99% of the time. So you work a lot to optimize f(), and re-profiling shows that now f() takes 98% of the time.

Doesn't seem that impressive after all the work you've put into optimizing f(), but your program is actually twice as fast :)
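A quick sanity check of that arithmetic (a sketch; the time spent outside f() is assumed fixed at 1 second):

```python
# Sanity-check the claim: if f() drops from 99% to 98% of total runtime
# while the rest of the program is untouched, the program as a whole
# is twice as fast.

def total_time(rest_seconds: float, f_fraction: float) -> float:
    """Total runtime when the non-f() part takes `rest_seconds`
    and f() accounts for `f_fraction` of the total."""
    # total = f + rest, with f = f_fraction * total
    # => total * (1 - f_fraction) = rest
    return rest_seconds / (1.0 - f_fraction)

before = total_time(rest_seconds=1.0, f_fraction=0.99)  # 100 s
after = total_time(rest_seconds=1.0, f_fraction=0.98)   # 50 s
print(before, after, before / after)
```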


This just shows that the percentage of total time a function takes is really not the thing you should be looking at when optimizing code, if what you want to know is how much faster your code really is.

For example, say your function f() takes 100% of the time. You then make it twice as fast. You look at the percentage, and, surprise, it still is taking 100% of the time!


Naturally, if you can look at a program which is now twice as fast and be unimpressed you are looking at the wrong metric.

It's understandable though in the context of the sequence of actions you might be going through as a tinkerer (as opposed to a scientist). You start with a list of functions and the % of time they take. Improving the top one will have the biggest effect, so this is a good thing to look at when deciding what to do. Then you go back to the same list afterwards.

I can see this (and lots of variations of this) happening


> Naturally, if you can look at a program which is now twice as fast and be unimpressed you are looking at the wrong metric.

Or, a computer scientist would say, the right one. The actual speed doesn't matter as much as the complexity.

In reality, it does, in the end; but it's also important to consider the type of optimization you're doing, and not just stop at being impressed with twice as fast.

For example, a primitively optimized function that's twice as fast for input size 100 would still be twice as fast for input size 100,000, but a function optimized to be an order less complex could be twice as fast for size 100, and a thousand or so times as fast for size 100,000.

The latter optimization would be significantly better, objectively, and thus, it would make a lot of sense to look at the first program and be unimpressed with a mere linear doubling in speed.


> Or, a computer scientist would say, the right one. The actual speed doesn't matter as much as the complexity.

Except, as an engineer would say, in practice the complexity doesn't matter as much as constant factors and the time and cost of design, implementation, and execution.


Neither are strictly correct in isolation. Of course the complexity matters, and of course the coefficient and reality factors matter. Often focusing on a core algorithm that's far too complex will save you time, money, and grief down the road. Engineering is, of course, knowing how to make the tradeoffs; but it certainly also involves knowing when one of those smart tradeoffs is time invested in getting your algorithmic complexity down.


There is a sorting network called AKS which has O(log(n)) depth. Batcher's Odd-Even Mergesort has a depth of O(log^2(n)). Which is better?

In all cases where you can fit n in the total memory capacity of the earth, Batcher's network is vastly superior. Constants matter a whole lot more than you might think.

I'm not arguing against revising algorithms to look for better complexity classes, but if you rewrite your algorithm in a smaller complexity class you had better be sure that your constants are of reasonable size.
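A rough numeric sketch of why the constants win here. The AKS depth constant below is a placeholder for illustration (the real constant is commonly quoted in the thousands), not the actual value:

```python
import math

# Illustrative only: depth of Batcher's odd-even mergesort is roughly
# (log2 n)^2 / 2, while AKS is c * log2 n for some constant c. The
# value of c here is an assumption for illustration, not the genuine
# AKS constant.
C_AKS = 1000.0

def batcher_depth(n) -> float:
    return math.log2(n) ** 2 / 2

def aks_depth(n) -> float:
    return C_AKS * math.log2(n)

# AKS only wins once log2(n) > 2 * C_AKS, i.e. n > 2**2000 here --
# vastly more items than atoms in the observable universe (~2**266).
crossover_bits = 2 * C_AKS
print(crossover_bits)  # n needs a ~2000-bit size before AKS wins
```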


For sure. But that's a far more nuanced argument than "twice as fast." :)


It's like grocery shopping and coupons, however. If you shouldn't be calling the function, then even a 99% reduction is not useful in that context.

Would you rather call a slow function, a fast function, or skip it altogether?

(A coupon only saves you money if you would buy the item in the first place.)


And then no matter what, if you're broke you will go to all lengths, jumping through hoops and calculating tiny savings, even if it's a waste of time... Because you're broke :p

Sorry, what's the context? Admin / account and login areas don't care / matter much, because they want it :p they'll wait...


I've experienced admin/internal UIs that were so slow and inefficient they brought down the consumer site... and yes, I'm talking algorithmically. Processing data in triple-nested loops because it was "quick and easy", until the data grew beyond a relatively small threshold (something like 10k records was where it started to break).


This is an interesting observation, and confirms the way I optimise code:

1) Create a benchmark (some code that performs the task that I want to optimise), and measure the absolute time it needs.

2) Use a profiler to see where most of the time is spent. Optimise that code.

3) Run the benchmark to see if your optimisations are effective.

Or, in other words: the percentage tells you where to focus your effort. The absolute time tells you how successful your optimisation is.
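A minimal sketch of steps 1 and 3 (the workload and repeat count are stand-ins for whatever you actually care about):

```python
import time

# Step 1/3: measure the absolute wall-clock time of the task before
# and after an optimisation. `task` here is a placeholder workload.

def task():
    return sum(i * i for i in range(100_000))

def benchmark(fn, repeats: int = 5) -> float:
    """Return the best-of-N absolute time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

baseline = benchmark(task)
print(f"baseline: {baseline:.6f} s")
# Step 2 (finding where the time goes) is the profiler's job, e.g.:
#   python -m cProfile -s cumulative your_script.py
```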


This is the obvious strategy, but it's not always applicable.

For example, when profiling a long-running service or a kernel, there's no single "absolute time it needs"; you need other proxies for running time, such as CPU usage percentage, but even that is not noise-free.

So if you just run "perf" on it, you need to be aware of this fallacy.


I am pretty much a novice in program optimization, but doesn't `perf` sample clock cycles? So in effect CPU usage percentage doesn't matter as much since it's not using wall clock time?


Without knowing your code, I can think of a performance hack to get the running time of f() down to less than 10% of the time.


Me too, move all the code from f() to g() and never call f(). I got the performance of f() down to 0%!


What is this?


Slowing down the rest of the program? :)


Well yeah, just add a wait loop somewhere else


I thought they were speed-up loops...

http://thedailywtf.com/articles/The-Speedup-Loop


Hah, this is great! It reminds me of the classic

  static char buffer[1024*1024*2];
story from a few years back.

http://www.dodgycoder.net/2012/02/coding-tricks-of-game-deve...


From CToGD:

In other words, when a project gets handed down from above to launch in, say, 3 months, there's no way in hell you can get the servers requisitioned, approved, and installed in that time. It became standard practice for each team to slightly over-request server capacity with each project and throw the excess hosts into a rainy-day pool, immediately available and repurposable as required.

I know of a Social Security Administration acquisition back in the days of 100 MHz processors. The bureaucracy took so long with this, by the time the order could be put out to suppliers, 100 MHz processors were no longer available, so SSA ended up with a bunch of workstations that were 166 MHz processors down-clocked to 100 MHz. (Otherwise, they would have had to start the whole process over again.)


That is pure genius. Some very forward thinking there.


Or make 10 copies of f and call one at random


If you understand Amdahl's law, it's not surprising at all!


But Gustafson's law has a lot more meaning. Who cares if it takes some fixed time to do something if you can parallelize and get 2x, 4x, 8x, 1000x, etc. as much done in the same time?


[deleted]


Let's say your program takes 100 seconds, of which 99% is spent in f(). If you then optimize f() such that only 98% of the total time is spent in f, then:

    orig_time = 100
    orig_not_f = 0.01 * orig_time = 1
    new_f = 0.98 * new_total
    new_total = new_f + orig_not_f
    new_total = 0.98 * new_total + orig_not_f
    new_total - 0.98 * new_total = orig_not_f
    0.02 * new_total = 1
    new_total = 1 / 0.02 = 50
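The same answer falls out of a quick fixed-point iteration, if you'd rather not do the algebra:

```python
# Numeric check of the derivation above: solve
#   new_total = 0.98 * new_total + orig_not_f
# by fixed-point iteration instead of rearranging terms.
orig_not_f = 0.01 * 100  # 1 second spent outside f()

new_total = 100.0  # any starting guess works; 0.98 is a contraction
for _ in range(2000):
    new_total = 0.98 * new_total + orig_not_f

print(new_total)  # converges to 50.0
```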


beautiful



One of the people I went to university with had a little (very short) mental catalogue of "chromatic mathematical fruit jokes". There are exactly two famous ones. "What's purple and commutes?" "An abelian grape." And: "What's yellow and equivalent to the axiom of choice?" "Zorn's lemon." He invented another, which requires more esoteric knowledge: "What's green and determined up to isomorphism by its first Chern class?" "A lime bundle." I don't remember whether anyone found a credible fourth example.

(Oh, how we laughed.)

[In case anyone reading this thinks the above might be amusing if only they knew what mathematical objects were actually being referred to: (1) No, probably not. (2) Abelian group; Zorn's lemma; line bundle. For the last one, you need to make it the first Stiefel-Whitney class if you're working over the real numbers rather than the complex numbers.]


I vaguely remember a joke about limit of the supremum (lim sup) involving lime soup. In fact, I brought some lime soup into math class one day in undergrad. It tasted awful. And I can't remember the joke.


You don't need to. This is hilarious.


"What's yellow, normed and complete? A Bananach Space"


Nice! Except it doesn't quite work for me because the vowel sounds don't match. (For me the second and third "a"s in "banana" are like the "a" in "arm", whereas the two in "Banach" are like the "a" in "at", and those are quite different sounds. Other people may differ -- and indeed I may be mispronouncing "Banach".)


Your banana is like arm?

Mine is like umbrella (the u).

Buh-nah-nuh.


Whoops. Mine's the same. Two schwas and one a-as-in-arm.

[EDITED to add: Er, except that the first vowel in my "umbrella" isn't the same as I think you're putting in yours. In IPA, my umbrella is /ʌmbrɛlə/ and my banana is /bənɑːnə/, unless I've made mistakes there.]


As an American, I'll never understand how the English can indiscriminately throw /ɑː/'s into perfectly normal words, yet when faced with a foreign import that actually asks for it, willfully refuse.


Maybe I made more than one mistake in my description (I definitely made one: the third "a" in "banana" is of course a schwa, not the vowel in "arm") -- but the "a"s I put in "Banach" are (modulo incompetence) pretty much exactly the ones you'll find by clicking the "listen" link after Banach's name at the start of his Wikipedia page. So I'm not sure what you mean about "a foreign import that actually asks for it". And I just checked the etymology of "banana", and the likely Wolof original has exactly one /ɑː/ in it, in the same place as I put one. But, see above, I misdescribed how I say "banana" so I may have caused confusion.


The Ham Sandwich Theorem may have the best companion image/caption combo I've ever seen on Wikipedia:

https://en.wikipedia.org/wiki/Ham_sandwich_theorem#/media/Fi...




I guess mine is sort of an anti-food-related math concept, then! https://en.wikipedia.org/wiki/No_free_lunch_theorem





I want the bigger half!


"announced the acceptance by the journal Theoretical Computer Science of a more efficient algorithm for pancake sorting than the one proposed by Bill Gates and Christos Papadimitriou."

Bill Gates!


This is not an implausible basis with which to start a new list on Wikipedia. It is original research, but it will likely be let in.


"research" is a generous term :) I thought about it, but the qualification to be on my list is pretty subjective. I suppose though, it could be generalized to "ideas inspired by food in STEM fields".


The Squeeze Theorem in calculus [1] is also known as the "Sandwich Theorem".

[1] https://en.wikipedia.org/wiki/Squeeze_theorem


Just don't discuss this one at the dinner table:

https://en.wikipedia.org/wiki/Hairy_ball_theorem


You should bump it up to five; the ham sandwich theorem references a "pancake theorem" involving the bisection of two layers instead of the ham sandwich's three.


A much more important example of this than "martian potatoes" is uranium enrichment.

Natural uranium is ~1% U235; bombs need 90+% U235. So when you've enriched it from 1% to 2% it doesn't seem like you've made a lot of progress towards 90.

If instead of enriching U235 you think of it as eliminating U238, though, then you've done half of the work.


> If instead of enriching U235 you think of it as eliminating U238, though, then you've done half of the work.

That's the wrong way to think of it though. The right way to measure progress is in terms of Separative Work Units: https://en.wikipedia.org/wiki/Separative_work_units

If you start with 10000 kg of natural (0.7%) uranium and you want to separate it into 45 kg of highly-enriched (90% U235) uranium and 9955 kg of depleted (0.3% U235) uranium, then you will have to do 8800 kg of "Separative work units".

On the other hand, separating that same fuel into 3000 kg of partially-enriched (1.4% U235) uranium and 7000 kg of partially-depleted (0.4% U235) uranium only takes 1790 kg of "Separative work units", even though the increased concentration of U235 means that "half the U238 has been eliminated".

Isotope enrichment is an area where, to borrow a line from software engineering, the first 90% takes 90% of the time, and the last 10% takes the other 90% of the time.
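For the curious, the figures above can be approximated with the standard SWU value function. This is a sketch: the tails assay is derived from mass balance rather than copied from the comment, so the totals only land near the quoted 8800 and 1790 kg SWU figures:

```python
import math

# Standard separative-work calculation. `value` is the usual value
# function V(x) = (2x - 1) ln(x / (1 - x)); the tails assay follows
# from a U235 mass balance on feed, product, and tails.

def value(x: float) -> float:
    return (2 * x - 1) * math.log(x / (1 - x))

def swu(feed_kg, feed_assay, product_kg, product_assay):
    tails_kg = feed_kg - product_kg
    # mass balance on U235 fixes the tails assay
    tails_assay = (feed_kg * feed_assay
                   - product_kg * product_assay) / tails_kg
    return (product_kg * value(product_assay)
            + tails_kg * value(tails_assay)
            - feed_kg * value(feed_assay))

heu = swu(10000, 0.007, 45, 0.90)     # ~8800 kg SWU
leu = swu(10000, 0.007, 3000, 0.014)  # ~1870 kg SWU by this formula
print(round(heu), round(leu))
```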


45 kg … and 9960 kg

You mean 40 kg and 9960 kg (so the sum is 10000 kg)? <insert "Were you a Putnam Fellow?" joke here>


blushes

Actually I meant 45 kg and 9955 kg. I adjusted the numbers several times looking for values which would come out with round-ish quantities but %U235 values in the right ranges for HEU and DU.


Annnnd just spent an hour reading that delightful Colin Percival Putnam thread. Thanks for the reminder on that. Man, people really loved to hate on Tarsnap because of his perceived arrogance.


  >> people really loved to hate on Tarsnap
On the contrary, Colin is highly regarded on HN both for his technical ability, and his success in business (despite breaking from conventional wisdom in how to be successful in business).


Thanks for pointing this out. I've vaguely heard before that some nations (like North Korea) have difficulty building nuclear bombs because it's hard to enrich Uranium. This really quantifies that difficulty for me.


It is worth mentioning that eliminating the first half of the U238 is still easier than eliminating the second half.


As much as our brains are supposedly designed to understand logarithmic scales, this percentage problem never really feels intuitive to me. If you have 1 part U235 and 99 parts U238, getting to 2% means doing 50.5% (50/99) of the work needed to get to 100% enriched. If all you need for weapons-grade uranium is 90% enrichment, you need to eliminate 98.889 of the 99 units of U238 (1/1.1111 = 0.90). So there is no real significant difference in your progress: you are now 50.56% of the way there.


> then you've done half of the work.

So that's why the last 20% takes 80% of the time.


Another angle on this problem: How much water must you add to the potatoes to make them 100% water?

Of course, you can add all the water in the universe and they'll still not be 100% water. The water-percent increment just gets smaller and smaller, the more water you add.

This "potato paradox" illustrates the same effect, but in the other direction, where a small relative decrease yields a large absolute decrease.
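In numbers, with 1 lb of potato solids:

```python
# The water fraction approaches but never reaches 100%: with 1 lb of
# potato solids, adding w lbs of water to the original 100 lbs gives
# a water fraction of (99 + w) / (100 + w).

def water_fraction(added_lbs: float) -> float:
    return (99 + added_lbs) / (100 + added_lbs)

for w in (0, 100, 10_000, 1_000_000):
    print(w, water_fraction(w))
# 0.99, 0.995, ~0.9999, ~0.999999 -- ever closer, never 1.0
```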


But does the water remember the potato? :-)


It doesn't matter. I remember the potato, and that's enough for me.


No more potato? Now I'm waiting for someone to invent the "Latvian Deprivation Paradox".


If you add water to the potato infinitely (hypothetically speaking), does the water percentage approach 100%? And also, does the solid percentage approach 0%?


No.

You won't have a potato anymore. But you won't have water anymore either.

You'll have a black hole.


Is potato-blackhole distinguishable from same mass water black-hole?


> Is potato-blackhole distinguishable from same mass water black-hole?

https://en.wikipedia.org/wiki/No-hair_theorem

We don't know if black holes have hair, so the answer to your question is "We don't know".

https://en.wikipedia.org/wiki/Black_hole_information_paradox


Yes. More precisely, given any p > 0, you can choose a w such that adding w% water dilutes it to p% potato.

Or another way - given you pour water in at any constant rate forever, given any p% you want to dilute to, there will be a point in time where that dilution level is exceeded, and will remain so for every point in time after that.


If the potato is sufficiently finely divided, and the volume of water is large enough, eventually all the potato will dissolve completely in the water, and there will be no solids left.


Simple: 100lbs of water.

At this point, you have 1lb of solids and 199lbs of water, or 99.5%. Then you *cough* round off *cough*, and there you are, 100%!


The solution is much more intuitive if you use odds ratios instead of percentage probabilities. You go from a 99:1 ratio to a 98:2 (or 49:1) ratio.

In other words, it's another way of phrasing the fact that it takes twice as much evidence to be 99% sure as it does to be 98% sure. Or that it's twice as hard to have 99% uptime as 98%.


> it takes twice as much evidence to be 99% sure as it is to be 98% sure.

Not twice as much evidence. Evidence needs to be measured logarithmically. (Otherwise you'd say it takes twice as much evidence to be 67% sure as 50% sure (2:1 versus 1:1), but the second takes no evidence at all for a binary proposition.)

It takes twice as much evidence to be 99% sure as 91% sure. 98% to 99% is 17 decibels to 20 decibels, which is less than 20% more evidence.
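The decibel figures are easy to reproduce:

```python
import math

# Evidence measured logarithmically, in decibels of odds:
#   dB = 10 * log10(p / (1 - p))

def evidence_db(p: float) -> float:
    return 10 * math.log10(p / (1 - p))

print(round(evidence_db(0.91), 1))  # ~10 dB
print(round(evidence_db(0.98), 1))  # ~16.9 dB
print(round(evidence_db(0.99), 1))  # ~20 dB
# 99% is twice the evidence of 91%, but under 20% more than 98%.
```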


This paradox also helps illustrate why you might want to avoid percentages or 0-1 decimal probabilities in statistics - in some cases, the compression at the end of the range can mask very important phenomena. (0.01% and 1% look almost the same as percentages or decimals, but can have very different implications.) This is particularly important if you're doing anything at the tails of the distribution, like thinking about how to increase extreme values (is increasing the proportion of extreme values from 0.01% to 0.10% extremely important or utterly trivial?)


Good observations. Look at how much easier it would be to solve this version of the Martian potatoes problem which avoids percentages altogether:

Earth has developed an insatiable appetite for Martian potatoes with their tasty pulp. Yuki runs a successful manufacturing plant on Mars which synthesizes a product called 'Martian Instaspuds', instant potatoes being more cost effective to ship. Now these red potatoes (it's Mars, after all) are processed in lots of 100kg mass which consist of 1kg pulp to 99kg water, 1p:99w. For 'Instaspuds' this ratio must be brought to 9p:1w. How much water must be boiled off in solar-powered kilns on the Martian equatorial surface to achieve this ratio?

1) posit the pulp mass remains constant at 1kg

2) x = end product mass; 9/10 x = 1kg; x = 10/9 kg; 1/9 kg = final water mass in resulting mixture

3) subtract
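The three steps, in code:

```python
# The three steps above: pulp stays at 1 kg, and the end product must
# be 9 parts pulp to 1 part water.
pulp = 1.0                       # kg, constant (step 1)
start_water = 99.0               # kg
end_total = pulp / (9 / 10)      # step 2: pulp is 9/10 of the product
end_water = end_total - pulp     # 1/9 kg of water remains
boiled_off = start_water - end_water  # step 3: subtract
print(boiled_off)  # ~98.89 kg
```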


Yeah, I phrased that poorly. It takes as much evidence to go from 50% to 67% as it does to go from 98% to 99%


Yes, I would word the paradox like:

You have 100 lbs of Martian potatoes, which are 1/100 potato by weight. You let them dehydrate until they're 1/50 potato. How much do they weigh now?

Now the answer is staring you right in the face.


Cool concept. Took me a minute to think it through before I could make sense of it.

I think it is the fact that they used potatoes that makes it counterintuitive. Had it instead been a glass of water with 1% dissolved salt in it, it would have been very straightforward.


The thing the article doesn't point out is why it seems unintuitive.

If you phrased the question as "You have N pounds of potatoes", or with a specific number other than 100, it would come across as less unintuitive. As you read, you see "100 lbs", and "99%", so percents and potato components are both out of 100. So then you see 98%, which is 98/100...


Although it's somewhat true, if you had started with 14 pounds of potatoes instead, I still think most people at first glance would expect the final weight to be around 14, not half of it.


I agree, but I suspect fewer people would make the mistake. Partly because more people would actually do the math, and partly because there's no immediately attractive wrong answer involving "98/100" and "100 lbs". There's still a somewhat attractive wrong answer of multiplying 98/100 by 14 lbs, but there's one less reason to make that mistake.

This seems like the kind of thing that could be tested with a study. Ask a few hundred people each version of the problem (ideally filtering out anyone who has seen the problem before), and see how many get each version right.

(Potato paradox problem paradox: You ask 100 people to solve the potato paradox. 99% of them get the answer wrong. You drop people who answered incorrectly until you have a group where 98% got the answer wrong. How many people are left?)


Gerd Gigerenzer asks roughly similar questions of a wide range of people - mostly medical professionals. It's scary how wrong people are, especially considering that this is information they're using to medicate you or operate on you.

Here's an example from his book:

http://imgur.com/zO4zkl4

Gerd Gigerenzer Reckoning With Risk.


q: 100 people are seated in a room. 99% of them are engineers and 1% managers. How many engineers should leave the room to make it 98% engineers and 2% managers? a: 50


It's better to leave out the "and 2% managers" part to make it a little more complicated ;)


If you phrase it that way, the question becomes easy to answer, because you start looking at the engineers, subtract one, look back at the managers and instantly realize that many more would have to go.

With the potatoes, you imagine a shrivelled potato. You have in mind how slowly a potato loses water and how much weight it would retain, which is of course the wrong way to look at the problem.


Here is another variation.

A fresh lake gets infested with algae and the amount of algae doubles every day. The algae covers the whole lake in 10 days. How many days did it take the algae to cover half the lake?


9


I think it's interesting that just the percentage stated makes it hard to comprehend intuitively.

That is, restate the question with a different end water percentage and the answer is immediately obvious:

> You have 100 lbs of Martian potatoes, which are 99 percent water by weight. You let them dehydrate until they're 50 percent water. How much do they weigh now?

And, of course, it's pretty easy to get "2 pounds", but your brain is pretty fixed on the numbers all clustered together in the other example.


I don't think a simple algebra problem should be called a paradox.


I think it's called a paradox (rightly), because it's not immediately intuitive for most people.


Agreed.

> par·a·dox A statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory.


I agree. This seems to me an abuse of the term "paradox". Lots of math results are surprising to people who've never worked through them. I think "paradox" should ideally be reserved for things that have two logically incompatible logical solutions.


This strikes me as similar to the non-intuitiveness of compounding interest (and related brain teasers like "bacteria doubling population every day" and "pieces of gold doubling on a chess board").


Another intuitive way is to think in terms of the proportionality between water and potato matter. The weight of the potato matter remains constant, while the amount of water can change, which in our case means going down. To make the matter proportionally twice as big compared to the water, one needs to divide the water by two.


Not entirely. You need to divide the total by 2, not the water. Dividing the water by 2 would give 49.5 water vs 1 potato matter, for a total of 50.5. The potato matter content would only be 1.98%.


This equation models the amount of weight lost for various percentages of change:

0.99 * 100 - (0.99-x)(100 - y) = y

This assumes that you are starting with 100 pounds of potatoes at 99% water weight. Here's a WolframAlpha link: http://www.wolframalpha.com/input/?i=0.99+*+100+-+%280.99-x%...
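A quick check of the relation for the classic case: a drop of x = 0.01 (99% to 98% water) should correspond to y = 50 lbs lost.

```python
# Check the relation 0.99 * 100 - (0.99 - x)(100 - y) = y, which says
# the water lost must equal the total weight lost (solids are constant).

def weight_lost_consistent(x: float, y: float) -> bool:
    lhs = 0.99 * 100 - (0.99 - x) * (100 - y)
    return abs(lhs - y) < 1e-9

print(weight_lost_consistent(0.01, 50.0))  # True
```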


This is only a paradox because the framing of the question misleads people, partly by mentioning potatoes, into thinking that they are discussing something one might actually do in the way described. In reality, if you did this with real physical objects, the step that says "You let them dehydrate until they're 98 percent water" hides a process in which one of the operations would involve picking up the potatoes. At that point you would notice that they were much lighter than before.

So as stated it is a trick question, the kind of parlour game found in old books of puzzles.

Finally, it seems to be a common failure of education that people can go through their 'mathematical' training and come away with the impression that 'percent' is some kind of dimension, when in fact it is shorthand for a ratio: y is x percent of z. If z is not specified then you don't know what y is, regardless of how much effort you put into discussing x.

So I suspect that in real life there are not many occasions when the paradox appears surprising.


A better, or more shocking, way to understand how the numbers interplay is to reframe it like this: imagine a mystical vegetable that is 99.9% water and weighs 100 lbs. If I dehydrated it so that it now has 99% water, what would its weight be? Yes, it's 10 lbs.


This is bs and you're all witches.


Let p be the percentage of water and m the total mass.

Then m(p) = 1/(1-p), taking the solid mass to be 1. The "unintuitive" aspect is that we want to think of this function as linear when it isn't.

By Taylor's theorem, how bad a linear approximation to this function will be is governed by its higher-order derivatives. We can get an idea of this by looking at its second derivative:

m''(p) = 2/(1-p)^3. If you plot this graph, you'll see that it really starts to blow up past 0.8, so the nonlinearities start dominating.
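Tabulating m and m'' makes the blow-up concrete:

```python
# m(p) = 1/(1 - p) with 1 lb of solids: total mass as a function of
# the water fraction p, and its second derivative m''(p) = 2/(1-p)^3.

def m(p: float) -> float:
    return 1 / (1 - p)

def m2(p: float) -> float:
    return 2 / (1 - p) ** 3

for p in (0.5, 0.8, 0.9, 0.98, 0.99):
    print(p, m(p), m2(p))
# m'' grows from 16 at p=0.5 to ~2,000,000 at p=0.99: the linear
# approximation falls apart exactly where the paradox lives.
```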


This is only confusing because of the potatoes.

If you said you had a pool that was 99% water, that changed to 98% water, the massive weight drop would be much less surprising.


How is that clearer?? :-)

In fact, it's not even the same thing. A 100-gallon pool (for very small people) that is 99% full has 99 gallons. 98% full has 98 gallons...


99% water by volume to 98% water by volume. Or, another way: sediment increasing from 1% to 2% would make sense if half the water evaporated.


Not 99% full, 99% water. Presumably the pool contains a solution of some sort.


Hmm, how about: You have a 1 liter bottle of 99% alcohol. How much of the alcohol should you evaporate to be left with 98% alcohol?


This is related to the base rate fallacy, the maths of which is frequently misunderstood and can lead to very real consequences:

http://metaist.com/blog/2010/02/disease-screening-base-rate-...


Very good. I also like the birthday paradox. https://en.wikipedia.org/wiki/Birthday_problem


This isn't a paradox at all. It's a slightly non-intuitive result.


Since you're arguing terminology, I'll just quote the second dictionary definition:

a seemingly absurd or self-contradictory statement or proposition that when investigated or explained may prove to be well founded or true. "in a paradox, he has discovered that stepping back from his job has increased the rewards he gleans from it"



Took me an embarrassingly long time to figure this out. The key issue I'd latched on to was the idea that they've lost 99 - 98 = 1% of their water content, which is false.


Makes mathematical sense, yet not quite common sense.


But how is that unintuitive?


I'd call it: "The Potato Algebra Problem"


Any more of these interesting puzzles?


Looks like I found my new diet.


There's a guy (or maybe more than this guy) that has done just that:

http://www.20potatoesaday.com/


Interesting


Why is this a "paradox"? Not sure what this applies to either. It's not even counterintuitive.


As stated in the article, this is an example of a veridical paradox under Quine's classification of paradoxes. Which, a quick bit of link-following reveals, is this:

A veridical paradox produces a result that appears absurd but is demonstrated to be true nevertheless.

Amazing what philosophers will get up to.


The point is that it doesn't even appear absurd unless you have absolutely no number sense.


Typical Mind Fallacy.

Took me ten seconds to get it. Takes most people (IME) a lot longer than me to get similar things.

Maybe "no number sense" is actually "other people's minds don't usually work like mine"?


If you think that, you have more number sense than many people. Perhaps more than most :)


Part of it is also the ambiguous way the question is phrased. amolgupta above mentions the same thing, but using humans (engineers and managers), and the answer is much more obvious in that context.

Particularly with the second form of the question in the article - anyone who's ever had a potato knows that it doesn't shrink to half its weight overnight. Poor situation selection and confusing wording do not make a paradox.


Your comment is just you trying to tell everyone you are smart, but it's worse, because you are trying to act like you are so smart that you don't even realize that's what you're doing.


There is a certain type of mind that delights in paradoxes, to the extent that even after an explanation has been clearly produced, the person will continue to insist there is a paradox.

Naming things paradoxes attracts this sort of individual, so it's a kind of marketing ploy for an idea: it won't just get it talked about, it'll get it talked about forever, because each generation of paradox-mongers will take it up anew and discuss it and analyze it and do absolutely anything except acknowledge that the specific flaw in the reasoning that leads to the appearance of an impossible conclusion being true was exposed generations ago.

For example, consider the supposed paradox of the evening star and morning star, which I will first state in a way that makes it clear there is no paradox, then restate in the traditional way.

"The Evening Star is Venus seen in the evening sky." "The Morning Star is Venus seen in the morning sky." "Transitivity implies therefore that the Evening Star is the Morning Star, since Venus seen in the evening sky is Venus seen in the morning sky."

This is obviously stupid: no one would make this claim. The traditional formulation actually depends on a falsehood, or at least on radically incomplete statements:

"The Evening Star is the planet Venus." "The Morning Star is the planet Venus." "Transitivity implies therefore that the Evening Star is the Morning Star, since the planet Venus is the planet Venus."

At this point, you can spend millions of words explaining why the statements identifying lights in the sky viewed at a particular time of day with a ball of rock orbiting the sun are problematic. It will be pointless: paradox-mongers will simply not let their nice toy be demolished.

Most traditional paradoxes have straightforward resolutions (frequently involving inserting a knowing subject into them, like the person who observes Venus) but none of them will ever be "solved" because of this purely psychological resistance to the possibility of their solution on the part of a very vocal sub-population.

One useful trick is to never let a paradox be stated in the traditional way. The first step in any discussion should be to restate the paradoxical situation as completely as possible, usually by introducing the perspective of particular individuals, as I've done above. Traditional paradoxes almost all depend on very specific ways of stating them for their psychological effect, and breaking out of that ritual pattern of restatement often makes them look simply stupid. One can then ask what is missing in the ritual statement of the paradox that makes it not seem stupid.

In the case of the Potato Paradox, the restatements we've seen here, which introduce the ratio 1:99 as the way to think about the problem, are a good example of this. Since the answer is not intuitive, the correct way to introduce the problem is to make it intuitive, not to blurt out a non-intuitive answer and expect the now-confused listener to catch up. That's just bad pedagogy.


The Potato Paradox seems to build on a misleading set-up and people's inability to picture logarithmic scales. The Venus Paradox seems more like something a freshman says who has heard about Boolean algebra for the first time and now applies it to human language all the time.


related: http://spikedmath.com/334.html

(there's also a followup, at http://spikedmath.com/335.html )


https://math.dartmouth.edu/~matc/Readers/HowManyAngels/Wayso...

Quine (who never went by QuineQuine, more's the pity) had an answer to this problem: Differentiate between veridical and falsidical paradoxes.

A veridical paradox is a paradox of the linked type: It appears absurd, but there is no problem in the logic, so it doesn't lead to a contradiction. Therefore, the conclusion is valid.

A falsidical paradox is similar to the Liar Paradox: it appears absurd and has a hole in the logic, such that either there is no conclusion at all (it contradicts itself, in which case it is also called an antinomy, as with the Liar Paradox) or the conclusion is invalid (as in any of the many fake proofs that 1 = 0). In Quine's words:

> In a falsidical paradox there is always a fallacy in the argument, but the proposition purportedly established has furthermore to seem absurd and to be indeed false.


That's not interesting at all. How do I downvote submissions?



