Ask HN: FizzBuzz for mathematicians ...
30 points by ColinWright on July 14, 2012 | 83 comments
We have enough mathematicians here to ask this: What would you suggest as an equivalent of FizzBuzz for mathematicians?

Some suggest proving sqrt(2) is irrational, but many technical types who aren't really mathematicians can do that because they've read enough popular math books. Similarly proving that there are infinitely many primes.

So, mathematicians, what would you suggest?

And for the non-mathematicians, what would you like to have explained, that you think every mathematician probably knows?

Added in edit: I thought this was a great discussion with some interesting suggestions, but I see it's been flagged off the front page.

http://hnrankings.info/4244266/

So that's that, another interesting discussion killed. Oh well, never mind.




>many technical types who aren't really mathematicians can do that because they've read enough popular math books.

Well, fizzbuzz doesn't exactly require a master's degree either. Proving sqrt(2) is irrational requires at least some understanding of how math problems are approached (unless you do it completely from memory, which is also possible with fizzbuzz), so I would say it's the equivalent.


Yeah, the point of fizzbuzz is that it's an extremely simple test that demonstrates basic competence, nothing more.

I have a CS degree and took a bunch of math courses and did lots of proofs, but now (several years later) I'd struggle to write any of the suggested proofs.


I would say that failing to solve fizzbuzz demonstrates incompetence (excepting perhaps people new to the interviewing process). In other words, it weeds out but doesn't indicate competence.


I agree that these could provide only a basic filter, like FizzBuzz; I could solve most of the problems suggested here when I was in high school.


Literally any proof is on the level of FizzBuzz. There is a wide gap between the level of competence it takes to prove anything and not understanding how proofs work: once you know any higher-level math, you know proofs. That's even a slightly higher level of competence than FizzBuzz; anyone can do FizzBuzz if they know even a modicum of programming. You certainly don't have to be a programmer to do FizzBuzz, and you don't have to be a mathematician to prove that sqrt(2) is irrational.


Derive the quadratic formula.

It's just about as simple as FizzBuzz, and requires primarily symbolic manipulation. Anyone from any branch of mathematics (including applied math and statistics) should be able to do it in their sleep.
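
For anyone who wants to refresh their memory, a sketch of the usual completing-the-square route (assuming a != 0):

    ax^2 + bx + c = 0
    x^2 + (b/a)x = -c/a
    x^2 + (b/a)x + (b/(2a))^2 = (b/(2a))^2 - c/a
    (x + b/(2a))^2 = (b^2 - 4ac) / (4a^2)
    x + b/(2a) = ± sqrt(b^2 - 4ac) / (2a)
    x = (-b ± sqrt(b^2 - 4ac)) / (2a)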


But so should a high-school student...


Given how important computers are these days, the same could be said of FizzBuzz.


I suppose I will grant that with regard to an ideal curriculum. Even with actual present-day curricula, though, the high-school algebra summer course I took many years back had us produce the derivation.


Other recent suggestions include:

* Show that two irrationals can add up to a rational.

* Find three whole numbers in arithmetic progression whose product is a prime.

* Find a prime that is one less than a square. Now find another.


> * Find three whole numbers in arithmetic progression whose product is a prime.

I don't think there's an agreed upon definition of what a "whole number" is. Then again, if you're just looking for how someone reacts to such a question, I suppose it's fine, if a bit silly.


Interesting - can you give me two plausible definitions of "whole number"?


Wikipedia has three possible definitions: http://en.wikipedia.org/wiki/Whole_number

All integers, integers >= 0, integers > 0


The following usage is actually fairly common:

    Natural numbers: strictly positive integers
    Whole numbers: non-negative integers


I'm intrigued to learn that some people have the convention that "-3" is not a whole number.


I'm much more of an applied mathematician these days, but IIRC, the usage is especially prevalent in 19th century (and earlier) number theory, most often by authors who take the "natural numbers" to be {1,2,3,...} and therefore require a name for the set {0,1,2,3,...}.

I have definitely come across the usage in more modern settings as well, however.


According to both Wikipedia and Mathworld, "whole number" is sometimes used for:

• The positive integers

• The non-negative integers

• The integers


For 1, suppose that x is irrational and p is rational, and let q = p - x. Either q is rational or irrational. If q were rational, then p - q = x would be rational, contradicting the assumption, so q must be irrational. Since x + q = p, two irrational numbers can sum to a rational number.

For 2, how about -3, -1 and 1.

For 3, 3 is the only example. One less than a square is n^2 - 1 = (n+1)(n-1). For n = 2 this is 3 (= 3*1), which is prime. For n > 2 both factors are greater than 1, so the result is composite.


1. Why does every thread about FizzBuzz-esque problems drive people to prove that they can solve problems that are, by definition, minimum-competency?

2. The first is far more quickly solved by definition of the additive inverse, anyway.


> For 3, 3 is the only example.

Or 7, or 31, or 127.


8, 32, and 128 are powers of two, but they're odd powers. None of them are squares.


Ah, very correct! I misread the initial question.


I'm not sure what you mean by the second point. Three numbers whose product is prime? Did you mean sum?

Edit: Nevermind - I just figured these out. Makes me feel a bit dumb now. That's what I get for not doing math for a few years. I really liked the last one - got me thinking.


For the middle one, would you accept the arithmetic progression 1, 1, 1?

I know that 1 is not considered to be a prime now, but it was considered to be a prime by most mathematicians up through the 19th century, and Lehmer included it in his lists of primes as late as 1956.


It's a defensible point of view, but I'd then ask for another.


1 1 2


Not in arithmetic progression.


s/whole number/integer/


I have been using this when interviewing people with math degrees:

Two players play a game with a single six-sided die. The player that starts can only win by rolling a 1. If he or she doesn't win, the other player gets to roll; he or she can only win by rolling a 6. The game continues until one player wins. What's the probability the first player wins (eventually)?


Let me see if I can work through this.

Player 1 wins based on the following series:

    Chance to get a 1 (turn 1): 1/6
    Chance to be allowed to roll: previous chance to be allowed to roll
      * 5/6 (chance player 1 didn't roll a 1)
      * 5/6 (chance player 2 didn't roll a 6)
    Chance to get a 1 (turns 2+): chance to be allowed to roll * chance to get a 1

So... summation( 1/6 * (25/36)^(n-1) )

I don't know what that comes out to be.


It's actually 1/6 * Sum_{n>=0} (5/6)^(2n) = 1/6 * 1/(1 - 25/36) = 6/11. The first move provides a small advantage, as intuition dictates.
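
If anyone wants to sanity-check the 6/11 figure numerically, here is a minimal Monte Carlo sketch in Python (the function name and trial count are arbitrary choices of mine):

    import random

    def first_player_wins():
        # Player 1 needs a 1, player 2 needs a 6; alternate rolls until someone wins.
        while True:
            if random.randint(1, 6) == 1:
                return True
            if random.randint(1, 6) == 6:
                return False

    trials = 1_000_000
    wins = sum(first_player_wins() for _ in range(trials))
    print(wins / trials)  # should come out near 6/11 = 0.5454...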


I like this one. It can be brute-forced easily enough, but there are also a couple much more elegant approaches that I would expect a good mathematician to produce.


As a mathematician, I'd be interested to know why you ask that question of people with math degrees.

What information does that give to you as an interviewer?


It tells me if they can at least minimally apply their knowledge and reason about a problem. It's a filter question. If a person with a college degree in math cannot solve this, what's the likelihood they will be able to solve an actual real problem I need them to work on?


I think I'm at least as competent at math as one would need to be at programming to solve fizzbuzz, and I have no clue how to prove either of those things.

Fizzbuzz isn't trying to separate the professional programmers from the pretenders, it's trying to identify people who have no capacity for programming whatsoever. Fizzbuzz only requires basic knowledge of the syntax of any programming language and basic problem-solving skills. I think a better equivalent would be something like "integrate x*e^x". Or maybe even some college-level algebra.
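
For reference, that integral is a single step of integration by parts:

    ∫ x e^x dx = x e^x - ∫ e^x dx = (x - 1) e^x + C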


The idea behind the problem seems to be to separate the mathematicians from the engineers, which the problems seem to do reasonably well.

However, since both problems are fairly trivial proof-by-contradictions, I would like every computer scientist to be able to solve them as well.

That said, my masters degree is in programming languages, so I may be exposed to a bit more math than the average software developer. I do not consider myself competent enough to be a mathematician, though.


I think proving the sqrt(2) is irrational is great. That was the very first question on my first college math homework assignment. It's a great way to weed out people who aren't even close to the level they should be at math-wise. Don't worry that others can do it. FizzBuzz can also be cargo-culted. FizzBuzz tells you nothing about how good someone is, but can tell you a lot about how bad they are.

I would also suggest this: Prove exp(pi*i)+1=0.
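
A sketch of one standard route, if you take the power series for exp, cos and sin as given:

    e^(i*t) = sum_{n>=0} (i*t)^n / n!
            = (1 - t^2/2! + t^4/4! - ...) + i*(t - t^3/3! + t^5/5! - ...)
            = cos(t) + i*sin(t)

    Setting t = pi: e^(i*pi) = cos(pi) + i*sin(pi) = -1, so e^(i*pi) + 1 = 0.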


Is it possible to prove that 0.99... = 1? If yes, how?


This one is just too much geek knowledge I think. E.g. I have read about it a bunch of times and I completely suck at math.

Something less "cool" would be more apt imo (fizzbuzz itself is rather uninteresting).


It's a pretty easy proof. I'm on mobile so I can't elaborate a lot, but just look at how you construct the fraction corresponding to a repeating decimal. In this case, it's 9 divided by 9. That's 1.

Another proof (requires more knowledge) is based on the structure of the real numbers. Let a and b be real numbers with a ≠ b. Then there are infinitely many numbers between them. It's a pretty obvious statement.

Now, try to find any number between 0.99... and 1. You can't. So, if there isn't any number between 0.99... and 1, they are equal.
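
There's also the familiar algebraic manipulation, taking for granted that a repeating decimal can be multiplied by 10 digit-wise:

    x = 0.999...
    10x = 9.999...
    10x - x = 9
    9x = 9, so x = 1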


After reading some of the comments, it seems as though this has a tendency to turn into another one of those interviews where the interviewer gets to feel superior if you can't remember shit you learned 20 years ago verbatim.


Explain and demonstrate the use of Taylor series. Quite a bit easier than many of the other suggestions below, and will weed out total idiots. Kind of like asking a supposed programmer to explain a "for" loop.
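
For instance, just asking someone to produce the expansion of e^x (or sin x) around 0 would already do the job:

    e^x = sum_{n>=0} x^n / n! = 1 + x + x^2/2! + x^3/3! + ...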


Except that not every mathematician will actually have done Taylor Series. Algebraists, logicians, topologists and graph theorists, for example, may very well not have done analysis to that level.


Even logicians took freshman calculus.

This is actually more reasonable than expecting an analyst to produce basic group theory proofs.


For reference, I never took "freshman calculus" and never did Taylor Series at university, and I have a PhD in discrete math. I do know about TS, but it was extra-curricular.


In all fairness, I never took "freshman calculus" either; that said, most undergraduate math degrees require either real or complex analysis, and at the graduate programs with which I am familiar you would have a very difficult time passing the qualifying exams if you weren't comfortable working with Taylor series.


In which country?


I've gone and checked. They were mentioned, effectively with "when you need them they will be pretty much obvious" - and they were. They never formed a topic in their own right, they just "fell out" of the actual material.

This was in Australia in the early 80s, and to this day I think of TSs as sort of obvious. But I did Pure Maths, and I think TSs are more emphasised in applied where you use them for approximations, etc.


I guess I'm now intrigued as to where your "sort of obvious" threshold lies.

Which aspects of your undergraduate real analysis course did you not find obvious (if any)?


It's way too long ago to have any measure of that now. I did my 4 year bachelor's degree from 1980 to 1982 inclusive, taking nearly 4.5 years worth of courses in that time. I did Analysis I, Analysis II, Complex Analysis and Mathematical Methods all in my first chronological year, and Taylor Series were never actually a subject of discussion. My recollection is that they were defined, and it was just pretty obvious that they worked when everything was "nice", and that things could and would go wrong if things were in any way "odd".

But this was 32 years ago, and while I've used calculus in one form or another constantly, I've never had to do anything "interesting" with Taylor Series. Now looking back I'm hard pressed to say what was easy and what wasn't, what was obvious and what wasn't.

It would actually be interesting to go be an undergraduate and see what is obvious now with the years under my belt, and what needs to be thought about hard.


I suspect the fact my degree was almost as long ago, and I certainly _can_ remember the real analysis topics that were non-obvious, means there were fewer of them for you than for me :-)


Or that your course was harder than mine, or that I was struggling with more advanced courses at the same time, and therefore had a different perspective on it all.


At least here in Europe, a course in basic real analysis is mandatory _in the first semester_ for just about every program which includes some math. I'd even be surprised to see somebody with a math degree who hasn't at least done a little bit of complex analysis at some point…


Jesus.

I really, really, really want to know an example of that.

(Philosophical logicians do not score as examples of this).


What is the goal? If it's to test the mathematical ability of someone who you want to hire to code it might make sense to ask some probability questions.


Prove that the inverse of any element in a group G is unique.

Edit: If this is too easy, prove that the map g -> g^2 on G is a homomorphism iff G is abelian.

Note: I am not a mathematician.
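
For what it's worth, a sketch of the uniqueness argument using only the group axioms: suppose h and h' are both inverses of g, so gh = hg = e and gh' = h'g = e. Then

    h = he = h(gh') = (hg)h' = eh' = h'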


Maybe I'm just a sucker for group theory but that strikes me as too trivial for any mathematician.

In that same vein, I'd suggest proving that a group G is abelian if and only if (ab)^2 = a^2.b^2, where a, b are in G.

It's not as straightforward as your question in the sense that proving it requires a little idea that you have to come up with.


That seems too easy.

    (ab)^2 = abab
    a^2 b^2 = aabb

ab = ba iff G is commutative, so for an Abelian group we can substitute for the middle bit

aabb = a(ab)b = a(ba)b = abab

Which won't hold if G is not commutative.

QED

I recognize that FizzBuzz is supposed to be easy, but it's supposed to recognize programmers with basic competence; I am not a mathematician. (But maybe I underestimate myself or overestimate some of those with advanced degrees in mathematics?)


In fairness, if you handed this in for homework in a sophomore algebra course, you likely wouldn't get credit (you've definitely proven the "only if"; the "if" is a bit murky). However, it's not too much of a stretch to clean it up into a proper proof.


In what way did the

aabb = a(ab)b = a(ba)b = abab

fail to prove the if?

edit: I did fail to re-state it, but figured it was obvious in the not-really-formal-proof setting. If that's all you meant by "a bit murky" then nevermind.


You have very clearly established the following:

    ab = ba --> (ab)^2 = a^2b^2
It is less obvious that you have proven that:

    (ab)^2 = a^2b^2 --> ab = ba
If I were grading a sophomore algebra class, I would expect to see something along the lines of:

    Suppose (ab)^2 = a^2b^2.
    Re-associating gives us a(ba)b = a(ab)b;
    multiplying on the left and right by the
    inverses of a and b gives the result.
In any domain outside of a sophomore algebra class, I happily accept much briefer and more hand-wavy proofs.


Ah, when I read your response I flipped the order of the problem around in recalling it, so had the if and only-if backwards. Yes, I was handwavy there but it seemed clear enough for the setting (which you seem to be granting anyway) - just wanted to be sure I wasn't misunderstanding something. Thanks :)


I don't really see this as being any less trivial than proving inverses are unique. Both require writing down the statements, and a couple applications of the group axioms.


I like my phrasing a little bit more because it at least requires knowledge of the definition of a group hom. Fizzbuzz also requires little more than an understanding of for loops and if statements, so I'd say they're kind of similar.


On the other hand, I know a couple extremely good analysts who would stare blankly at you until you reminded them of the definition of a homomorphism =)

To be clear, however: I think these are pretty good analogues of FizzBuzz.


Wow, I actually edited my comment without reading yours and came up with the same problem.


Gotta love HN :)


Perhaps it shouldn't be solving one problem, but rather explaining a basic concept in a field most mathematicians know, and why it's important. Since there's no one right answer, it would also show their experience and so forth; most people probably know what the concepts are, but the more experienced will be able to talk about them for quite a while. I dunno, Cauchy sequences, maybe. Or groups.


Well, depends on your definition of »mathematician« – I'd probably go for a question about the very basics of algebra. Something like: How many elements does the symmetric group S_3 have? (Optionally: Write down its Cayley table.) Is it abelian?

Not exactly relevant to practice, and utterly trivial, but I suppose nobody comfortable with formal math should have any problem solving it on the spot.
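
For the record, the expected answer: S_3 has 3! = 6 elements and is not abelian. Composing right to left, for example,

    (1 2)(1 3) = (1 3 2)   but   (1 3)(1 2) = (1 2 3)

so those two transpositions don't commute.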


In what sense are they to be equivalent?

In any case I'm going to suggest this:

Q. Give two different proofs of the fundamental theorem of algebra.


The Monty Hall problem? Like other suggestions, too many people might be able to recreate it from memory.


Or apply Bayes' theorem to a real world problem http://yudkowsky.net/rational/bayes/
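
For instance, a toy version of the classic medical-test calculation in Python, with made-up numbers (purely illustrative):

    # Hypothetical numbers, purely for illustration.
    prior = 0.01            # P(condition)
    sensitivity = 0.80      # P(positive test | condition)
    false_positive = 0.096  # P(positive test | no condition)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive   # Bayes' theorem
    print(posterior)  # about 0.078, i.e. ~8% despite the positive test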


To take as much from FizzBuzz as possible - how about "What is the sum of all integers between 1 and 1000 that are divisible by 3 or 5?"

It's simple to figure out with pen and paper but the goal is to just confirm a basic level of competence.
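
A quick sketch in Python, taking "between 1 and 1000" as inclusive of both endpoints (the helper name is my own):

    total = sum(n for n in range(1, 1001) if n % 3 == 0 or n % 5 == 0)
    print(total)  # 234168

    # The pen-and-paper version, by inclusion-exclusion:
    def sum_multiples(k, n):
        m = n // k
        return k * m * (m + 1) // 2

    print(sum_multiples(3, 1000) + sum_multiples(5, 1000) - sum_multiples(15, 1000))  # same answer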


Scarcely requires math, seems only to require arithmetic.


Yea - I wasn't entirely sure what you meant by the "FizzBuzz" for math point. You were looking for something with more of an abstract focus?

You can have the person then turn it into a general case: what is the sum of the numbers from 1 to N that are divisible by any of x1...xk? Even that's pretty simple though.


Another very basic question, maybe analogous to FizzBuzz in that regard, could involve some real analysis, e.g.: Let f: [0,1]->[0,1] be a continuous function. Prove that there exists some x in [0,1] such that f(x) = x.
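
The expected answer is a short intermediate value theorem argument; a sketch:

    Let g(x) = f(x) - x. Then g(0) = f(0) >= 0 and g(1) = f(1) - 1 <= 0.
    Since g is continuous, the intermediate value theorem gives an x in [0,1]
    with g(x) = 0, i.e. f(x) = x.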


Compare e^Pi and Pi^e?

EDIT: OK, that one is more for an engineer. How about: find a bijection between R and P(N)?
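
For the first one, a standard sketch: ln(x)/x is strictly decreasing for x > e (its derivative is (1 - ln x)/x^2), and e < pi, so

    ln(e)/e > ln(pi)/pi  =>  pi*ln(e) > e*ln(pi)  =>  e^pi > pi^e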


Interesting. Can you produce a bijection between R and P(N)? It might be harder than you think.


I can (actually between P(N) and [0,1), but it is equivalent). Not sure if I should post it here though. Just email me if you want my answer (email is in my profile).


You have email.


My suggestion: Prove the infinitude of prime numbers.


Hmm. Did you see that I already mentioned that in the question?


urghh. my bad. didn't see it :(



