
Great Mathematicians on Math Competitions and "Genius" (2010) - jimsojim
http://lesswrong.com/lw/2v1/great_mathematicians_on_math_competitions_and/
======
delazeur
As a former "successful" mathlete (never did IMO or Putnam, but consistently
made the top five in state competitions), I agree that competitions won't help
you develop mathematical maturity, but it is interesting how they are
viewed in the wider world. It's been a number of years since my math
tournament days, but people are always very impressed if I mention that I won
math competitions in middle and high school. It is odd to me, because I don't
really care about my math medals anymore and I've since accomplished things
that I value a lot more, but there you have it.

Also, even though competitions won't help you develop as a mathematician, I
still think it was a good experience for me to get out of school for a day and
hang out with a bunch of other math nerds. That part of it was a lot more
valuable than the competition itself.

~~~
squeaky-clean
> Also, even though competitions won't help you develop as a mathematician, I
> still think it was a good experience for me to get out of school for a day
> and hang out with a bunch of other math nerds. That part of it was a lot
> more valuable than the competition itself.

I haven't done math competitions, but I feel the same way about programming
competitions. I don't think I'm a better programmer because of the
competitions. But I do think spending 4 hours, 2-3 days a week in a room with
other programmers who were also there voluntarily made me much better. It's
not like I stopped focusing on my own projects or coursework and did this
instead. It was just an additional 12 hours a week of practice and
socializing.

Some problems were hilariously artificial too. I still remember one, where
10-year-old Suzie had a lemonade stand and had been keeping track of her
profits each day. You needed to find the 3 consecutive days in which she had
made the most total profit. The catch? It needed to run in less than 1 second
and the largest case could (which means it will) be n=9999999. (I may be off
by an order of magnitude).

Little Suzie had been running her lemonade stand for well over 27,000 years
apparently, but would throw a tantrum if it took more than 1 second to compute
her answer.
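
(For the record, the intended trick was presumably a single O(n) sliding-window
pass, something like this sketch -- the problem details are from my memory:)

```python
def best_three_day_window(profits):
    """Max total profit over any 3 consecutive days, in O(n) time."""
    if len(profits) < 3:
        raise ValueError("need at least 3 days of data")
    window = sum(profits[:3])  # total for the first 3-day window
    best = window
    for i in range(3, len(profits)):
        # Slide the window one day: add the new day, drop the oldest.
        window += profits[i] - profits[i - 3]
        best = max(best, window)
    return best
```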

~~~
tinalumfoil
> It needed to run in less than 1 second

I've always wondered how these competitions measure the program's runtime
consistently. I guess the easiest way would be to specify the CPU the program
will run on and use the bash `time` builtin, but it would be inconvenient for
participants to obtain that CPU, and controlling for the cache might be
difficult (maybe a kernel module that clears the cache before the program
runs, makes sure the program runs only on cores dedicated to it, and is
consistent enough with memory accesses that L3 is comparable).

On the other hand, they could just count instructions executed, which would be
completely deterministic, but might lead to "optimizations" that don't make
sense, like using `rep stos` to zero memory instead of loops. Using a
higher-level bytecode would have the same problems. They could also give
different weights to each instruction and possibly each memory access, but
even then students would be ignoring important real-world cache or pipeline
effects.

But maybe I'm just overthinking this. This type of problem would be better
suited for GPUs anyway.

~~~
ludamad
The time limits are overkill unless you chose the wrong algorithm.

~~~
ufo
In fact, you can often get a good ballpark estimate of what big-O runtime your
algorithm needs to have just from the time limit. This can work as a hint on
how to solve the problem.
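
For instance, with roughly 10^8 simple operations per second as a (very
hand-wavy, language- and judge-dependent) budget:

```python
import math

def feasible_complexities(n, time_limit_s=1.0, ops_per_sec=1e8):
    """Which big-O classes fit in the time limit, assuming ~1e8 simple
    operations per second (a rough, language-dependent constant)."""
    budget = time_limit_s * ops_per_sec
    candidates = {
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n * n,
    }
    return [name for name, ops in candidates.items() if ops <= budget]

# With n around 10 million and a 1-second limit, only a linear scan fits --
# itself a strong hint toward a sliding-window or prefix-sum solution.
```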

------
ivan_gammel
I'm from that world of math olympiads. Never reached the IMO, only top places
in regional competitions (math, physics, programming), and I know a lot of
people who've been there and who never participated. Some won medals,
graduated with MSc degrees, and gave up on science. Some were not so good in
competitions, but became very smart scientists and great math teachers.
There's no rule, and I don't think this system is just a selection of the
best that discourages children who did not succeed from choosing math.

It's just much easier and more natural to see the beauty of mathematics while
spending enough time on learning, solving problems, and engaging in
competitions. Oh yes, I still see it despite years at MIPT with some real
math. There's a lot of fun in it. For example, we had "mathematical battles,"
where two teams had to present and defend their solutions and win points from
a jury. It's also a very special and friendly environment, where you can meet
people, connect with universities, and build a social network that will serve
you your whole life. I still have a lot of friends from the summer math
schools I attended in the early 90s. For many it's also a social lift,
egalitarian by its nature: the distinction between rich and poor is almost
invisible (not like on the street or in a schoolyard), and it allows many
children from small towns across the country to enter top universities and
build successful careers, not necessarily in science.

I will never blame this system for presenting math "in the wrong way". It doesn't
have to show the world of grown-ups. And, by the way, we never heard the word
"genius" (except applied to Pushkin or Einstein).

------
lpolovets
Like a few commenters on this thread, I participated in a lot of math contests
in high school. Never made the IMO but placed at/near the top in regional and
statewide competitions.

On the one hand, my experience mirrors some of what the article talks about: I
learned very quickly that the things professional mathematicians work on are
very different from math contest problems. (I went to college intending to
major in math, but switched to CS as soon as I took a semester of abstract
algebra.)

On the other hand, the article seems to imply that many great mathematicians
look down on math competitions for not giving an accurate portrayal of math as
a career path. I don't see why that is an issue. My high school was a math
magnet school, and 100s of students participated in monthly contests like
California Math League. Almost every participant that I talked to in those
days did math contests because they were fun, or because they were an
interesting challenge. I never met anyone who said "I want to be a
mathematician, and contests are clearly the first step on that road."

For me, math contests are like high school sports or drama or anything else.
They appeal to certain subgroups of kids, they're fun and hopefully
educational/useful in some way, and they don't have to be more than that.

~~~
ianai
There's definitely an argument to be made that disgruntled, great
mathematicians act as a barrier to entry. Being exclusionary hardly attracts
anybody to an already difficult field. Besides, who's to say that whatever
they describe as "the one prescribed path" must be what works for everybody? I
think the will and drive to study mathematics is less cookie cutter than that.

------
lacker
I was pretty good at math competitions (2x putnam fellow) but never that great
at being a mathematician. But I still think math competitions are great
experience. Winning at any sort of competition teaches you how to be
persistent, how to work hard, how to recover from setbacks mentally, how to
maintain focus for a long period of time, and how to gear up for critical
moments where you need to perform.

For example, when I first went to the math olympiad summer program, I had
trouble focusing on a single math problem that I had no clue how to solve for
three hours straight. It's hard! The training program basically forces you to
do that over and over, so I ended up learning a lot about how to focus for
large chunks of time and do useful things to attack a problem I didn't
initially know how to solve.

I went into computer stuff instead of math stuff after college, and there's a
lot of stuff I never used again. Algebraic topology, all the geometry theorems
they don't teach you in high school, you name it. But the ability to work
really hard on a single technical problem until you nail it, that's been
constantly useful. Especially in startups.

~~~
lpolovets
100% agree that learning focus and persistence was a very valuable result of
doing math competitions.

Completely random: If you're who I think you are, I still remember seeing your
name on the list of perfect AHSME scores in 1998ish. I think we met briefly at
an ACM competition in '03 (we played Mafia for a while in a big group, and
were briefly introduced by Po-Shen Loh who was one of my ACM teammates.)

~~~
lacker
Yeah that sounds like me! ;-) Although by '03 I was in grad school; ACM stuff
was probably '01 or '02.

~~~
lpolovets
You're right! It was '02, in Honolulu.

------
abetusk
For me, this applies equally to programming competitions and white boarding
algorithms in interviews.

~~~
gautamdivgi
I think some folks overdo the whiteboarding. But I believe it is needed: maybe
one or two problems on the whiteboard, with the rest of the interview focusing
on design & software engineering (metrics, resiliency, etc.).

I've seen folks who have a lot of _coding skill_ on their resume fumble simple
whiteboard problems. I have also fumbled simple whiteboard problems when I
interviewed (it was for a s/w engg position, but I had spent the past several
years in architecture and away from any real code, so it was expected).

The point is, whiteboarding is OK if you have a well-defined problem solvable
in 45 minutes and if it is just geared to assess your familiarity with code. I
don't think it's a reasonable expectation to come up with new approximation
algorithms for NP-complete problems and solve and prove them on a whiteboard
in 45 minutes.

~~~
onion2k
_I've seen folks who have a lot of coding skill on their resume fumble simple
white board problems._

Solving a coding problem on a whiteboard tests your ability to solve coding
problems _on a whiteboard_. That's a bias. It makes people who get nervous
standing up and being the centre of attention less likely to pass the test. If
coding on a whiteboard is a part of the job then fair enough, but if it isn't
then you're introducing something to the interview that filters people out
based on something other than their ability to do the job - and that means
you're not necessarily recruiting the best person. I believe that's a good
reason not to use whiteboard tests very often.

~~~
gautamdivgi
To be fair, we don't look for syntactic correctness in your solution. You miss
a semicolon here and there - that's cool. You invent your own method/function
to abstract out things like creating threads or communicating between
processes - that's fine (in fact, we provide examples of these and say feel
free to use something like this).

What we are interested in is algorithmic correctness. I think for someone who
develops for a profession, writing an algorithm on a whiteboard shouldn't
really be a big deal. Agree on the nervousness... I don't know a good way
around it, though... We normally do interviews over the phone using collabedit
so the candidate can sit in their own comfort zone. I also make it a point to
mute my phone and not talk unless asked to.

~~~
p4wnc6
Doesn't matter. White-board-as-IDE can throw you off so much that you can't
think right about the big picture idea, especially if talking in front of
people you just met in a stressful interview. It's nothing at all like
explaining an algorithm to a peer after you've been working there and feel
comfortable, etc.

Whiteboard-as-IDE is just bad, all the time.

~~~
gautamdivgi
What would you use to ascertain good coding skill? It is impractical to
provide someone with a problem set and have them come back after a week. To be
honest, that's the approach I'd really love to take.

~~~
p4wnc6
Why not sit with them at a computer, let them set up their own preferred
working environment, let them have an interactive shell prompt within which to
execute snippets of code while they tinker and develop the solution, etc.?

I don't understand your thinking -- it seems like you picture it as a
dichotomy between asking trivia questions which must be on a whiteboard, vs.
assigning an extensive college homework problem set -- both of which seem like
terrible ways of assessing on the job skill to me.

The questions you would ask at the whiteboard are probably fine questions.
It's the way you allow them to be solved that's the problem.

For example, if someone asked me to write some code in Python that computes
the median of a stream of numbers, I would probably do something using
itertools-based generators, and/or something using the heapq library for a
heap.

I do not have the APIs of these standard modules memorized. I absolutely could
not write down their usage on a whiteboard. It wouldn't just be minor syntax
issues. It would be so much of needing to look up which function argument goes
where, which thing has no return value but mutates the underlying data type,
etc., that it would just totally and completely prevent me from being able to
fluidly solve the problem or explain what I'm doing. The whiteboard nature of
the discussion would be a total hindrance, alien to the experience of actual
day-to-day programming.

And I've used both heapq and itertools for many years, time and again, in
easily many thousands of lines of code each -- and I _still_ always need to
look up some documentation, paste some snippet about itertools.starmap or
itertools.cycle into IPython, test it on some small toy data, poke around with
the output to verify I am thinking of the usage correctly, and then go back
over to my code editor and write the code now that I've verified by poking
around what it is that I need to do.

That's just how development works. It does not ever work by starting with a
blank editor screen and then writing code from top to bottom in a
straightforward manner. It doesn't even happen by writing some then just going
back in the same source file and revising.

100% of the time, you also have a browser with Stack Overflow open, google
open, API documentation open, and you also have some sandbox environment for
rapidly either pasting code into an interpreter and playing with it, or
rapidly doing a compile workflow and running some code, possibly in a
debugger, to see what's going on.

I do not understand why you wouldn't replicate that same kind of situation
when you're testing someone. What you want to know is whether they can
efficiently tinker around with the problem, use their base knowledge of the
relevant algorithm and data structure to get most of the way there, and then
efficiently use other tools on the web or in a shell or whatever to smooth out
the little odd bits that they don't have instantaneous recall or a
photographic memory of.

In fact, if they do solve some algorithm question start to finish, it just
means they have crammed for that kind of thing, spent a lot of time memorizing
that kind of trivia, and practicing. That's not actually very related to on-
the-job skill at all. By observing them complete it start to finish, you're
not getting a signal that they are a good developer (nor a bad one) -- only
that they are currently overfitted to this one kind of interview trivia
problem. You do not know if their skill will generalize outside to all the
other odds and ends tasks that pop up as you're working, or as you face
something you don't have 100% memory recall over.

Anyway, the point is you can still ask development and algorithm questions,
but you should offer the candidate a comfortable programming environment that
is a perfect replica of the environment they will use on the job, with their
own chosen editor, access to a browser, same kind of time constraints,
comfortable seating, privacy, quiet, etc.

And you should care mostly about seeing the process at work, how they verify
correctness, how they document and explain what they are doing. If you're
asking problems where _mere correctness_ is itself some kind of super rare
occurrence, like some kind of esoteric graph theory problem or something,
you're just wasting everyone's time.
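
(For reference, the two-heap version of that streaming-median problem looks
roughly like this -- a sketch, and exactly the kind of thing where I'd still
double-check the heapq calls against the docs. heapq only provides a min-heap,
hence the negation trick for the lower half:)

```python
import heapq

class RunningMedian:
    """Median of a stream via two heaps: a max-heap (negated values) for
    the lower half and a min-heap for the upper half."""

    def __init__(self):
        self.lo = []  # negated values: lower half, largest at -lo[0]
        self.hi = []  # upper half, smallest at hi[0]

    def add(self, x):
        if self.lo and x > -self.lo[0]:
            heapq.heappush(self.hi, x)
        else:
            heapq.heappush(self.lo, -x)
        # Rebalance so len(lo) == len(hi) or len(lo) == len(hi) + 1.
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```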

~~~
gautamdivgi
My reply is about 2 days late. But thank you for the feedback... I am
genuinely trying to improve the process since I've been at the receiving end
of it at one time as well.

I'd definitely like to run something like this but I'd need folks to install a
good screen sharing tool (join.me, webmeeting or some such thing...). But I'll
definitely be open to asking the candidate's willingness to do so. That way
they can get working code in an environment they are comfortable in...

We do most interviews remotely and offer a remote work setup as well. So it's
not always practical to physically have the person code in front of me.

~~~
p4wnc6
One of my most enjoyable experiences as a candidate was when a company shared
login information with me for SSH-ing into a temporary virtual machine they
had spun up on Amazon EC2 solely for the interview. They asked me what editor
I'd like present, and separately made sure any environment details were taken
care of ahead of time.

Then I was able to simply log in with my shell here at home, and the screen
was shared with the interviewers. The whole interview took place in console
Emacs, with the interviewer pasting in a question, me poking around and asking
clarifying questions, then switching over to IPython, tinkering, and going
back and writing code incrementally.

I think all of the modern front-end services that do this kind of thing are
pretty terrible, like Coderpad, HackerRank, TripleByte, or more UI-focused
screensharing tools. Heck, I'd even opt for just a Google Hangout if we had to
do it by UI screen sharing.

I think the low tech route of SSH is vastly superior.

------
Animats
George Pólya claimed that the British emphasis on puzzle-solving had set
British mathematics back a hundred years. He and Hardy tried to get rid of
Cambridge's emphasis on the Tripos exam and the whole "Senior Wrangler"
thing.[1] They didn't entirely succeed.

[1]
[https://en.wikipedia.org/wiki/Senior_Wrangler_(University_of...](https://en.wikipedia.org/wiki/Senior_Wrangler_\(University_of_Cambridge\))

------
CurtMonash
When I got to Harvard, probably the best Putnam/puzzle-type solvers there
(among the students) were Don Coppersmith and Angelos Tsiromokos. Don went on
to do very important work in cryptology. Angelos went on to leave mathematics;
his next gig was as a translator for the Common Market. (He was probably
better at word games/puzzles -- Scrabble, crosswords, and so on -- in English
than I was, even though it was his third language.)

Ofer Gabber and Ron (Ran) Donagi also did very well on a semi-formal Putnam,
and did so at very young ages. They went on to decent math careers.

I also took the Putnam at very young ages, but never cracked the top 100. I
went on to leave mathematics.

Nat Kuhn was perhaps the best of the undergrads then. He went on to be a
psychiatrist.

Andy Gleason was perhaps the best at that kind of thing among the faculty.
Wonderfully nice guy, and my de jure thesis advisor, which was a bit awkward
because he never got a PhD himself and didn't quite understand my stresses; I
didn't realize the no-PhD part until after the fact, when I saw his resume in
connection with his election as president of the American Mathematical
Society.

~~~
tamana
Elder Harvard math professors who have PhDs don't really empathize either,
because they are at the very high end of the talent spectrum, and they came up
in a time when massive breakthroughs were ripe for the picking.

------
mnemonicsloth
Nowadays the word 'coach' usually applies to athletics, but it dates to the
1830s, when Cambridge University started awarding math degrees by competitive
examination. A coach was someone who gave you a straight, smooth ride to your
degree, just like a horse-drawn coach on one of the new paved roads that began
appearing in England around that time. So coaching was hip.

The high scorers on the exam were a Who's Who of British science in the 1800s.
In 1854, for example, the second highest scorer was James Clerk Maxwell, the
greatest physicist of the century, who gave humankind its first look at a
fundamental law of nature. The guy who beat Maxwell became a coach and spent
the rest of his life teaching people how to do well on the exam.

------
sesm
Note, however, that Grigori Perelman was very good at mathematical olympiads.

~~~
papapra
And Terence Tao, who is quoted, was also extremely good at olympiads. As I see
it, the olympiads are a good, cost-effective way of identifying talented kids;
they are in a way just IQ tests.

~~~
kkylin
I think the point being made is that the converse need not be true: kids who
are _not_ good at these sports need not feel discouraged from pursuing
mathematics.

~~~
tamana
But what does it mean when the people preaching this claim are all Olympiad
winners?

The one math genius I know who despised Olympiads, ended up leaving academia
over a famous but wrong proof.

~~~
kkylin
I know plenty of very good mathematicians who did not participate seriously in
this kind of thing. (Disclosure: I work as a mathematician at a research
university.) I'm not sure why the only ones quoted here are mainly ex-winners.
Indeed, while these competitions may have the positive effect of putting like
minds together, it's possible that they have the negative effect of
discouraging those w/o the aptitude for this particular kind of competitive
sport (which is what it is), or who do not have access to the kind of coaching
and practice that successful competitors often have.

------
imocuagau
Keep in mind you can get the Fields Medal only if you are under 40 years old.
That's more likely when going into academic math straight away, without first
taking the relatively lucrative industry jobs that most of the IMO contestants
I know have.

------
shas3
We also need to hear from an important segment of the math population: those
who burned out or dropped out for reasons related to math perceptions
engendered by the math-competition culture. We all know people who did great
at math or science competitions in high school but just disappeared from the
scene after that. One may say that there are other underlying causes that led
them not to live up to some hypothetical promise, but I strongly feel that
success and failure at math competitions can be a cause in itself. A lot of
space is dedicated to how math olympiad triple-gold medalists went on to
become great mathematicians. We also hear about those like Grothendieck and
Hardy who weren't big in the competition circuit.

But missing are the stories of those who didn't make it big in spite of great
competition performance, and those who fell out of math because of failing at
math competitions.

In India, for example, competition math is _everything_ at the high school
level. This is because competitive exams like the famed IIT JEE, etc. are
essentially variations on the competitive math theme. A few serious math
enthusiasts do take up broader math-specific exams for math institutes, but
those numbers are minuscule. The worst affected, in my experience, are the
talented and the enthusiastic who were discouraged and/or dropped out
altogether after failing to optimize their skills and learning for
competitions and similar exams.

------
rafaquintanilha
This seems like a more general portrayal of how inaccurately tests demonstrate
domain knowledge. It is true for math olympiads, but also true for a wide
range of subjects. Nonetheless, they are still relevant for providing a _hint_
of what a person might know.

------
analog31
I participated in my state's math competition, but never made it to the
finals. Yet it didn't discourage me from attending college and majoring in
math.

I also participated in the music competition, called "solo and ensemble
festival." Like the math competitions, music competitions are an artificial
environment -- one student in front of a judge, rarely any audience. But in
some sense they are "real world" because they mimic the auditions that are
very much a real part of a music career, e.g., for getting music scholarships
and entry into most orchestras. I never got that far.

------
zasz
Qualitatively, it's the same problem that a lot of people have in grad school.
Doing homework from a textbook isn't the same as the uncertainty of research.

------
paulpauper
Success at mathematics requires staring at it until you can understand it,
however long that takes.

~~~
kafkaesq
Well, no -- it's not enough to just _stare_ at the problem / thing.

What you have to do, effectively, is _become at one with its true nature_.
Which in general is much more difficult than simply staring at it.

~~~
jonnybgood
> become at one with its true nature

That's very vague.

~~~
kafkaesq
It is. Which is why it's so difficult.

------
lintiness
pardon the shitpost, but this is a compilation of some of the most profound
and useful quotes i've ever seen. the difference between superficial
achievement and real contribution is profound, and most of our systems are
designed to reward and reinforce the former at the expense of the latter.

"They’ve done all things, often beautiful things in a context that was already
set out before them, which they had no inclination to disturb. Without being
aware of it, they’ve remained prisoners of those invisible and despotic
circles which delimit the universe of a certain milieu in a given era."

~~~
wrsh07
But it's very hard to design a feedback loop with a short enough cycle to
properly reward the latter.

It makes me think of the line about raising children, that it's better to say,
"I recognize that you worked really hard on that, it looks wonderful!" than
"Good job! You're so smart!" [1]

because one captures the _reason_ why they did a good job. By calling that
out, you can perhaps reinforce behavior with a longer view.

[1]: [http://www.theatlantic.com/education/archive/2015/06/the-s-w...](http://www.theatlantic.com/education/archive/2015/06/the-s-word/397205/)

~~~
empath75
Yeah, my parents and all my teachers told me I was 'gifted' from 2nd grade,
and I really feel like it held me back, overall. Yes, I picked stuff up
quickly, but I was also lazy as hell, and given no real incentive to learn how
to not be lazy until it was too late to turn around my disastrous grades.

I have a son now, and I'm going to try to avoid calling him smart or gifted.
Or at least not telling him that he's smarter than the other kids.

~~~
delazeur
I had a similar experience: that kind of reinforcement encouraged me to try to
do the least work possible to get the same results as others in order to prove
how gifted I was, instead of working hard to go above and beyond. It was a
rude awakening when I got to the point in life when I wasn't competing against
my peers to complete set tasks, but rather competing against them to provide
the most valuable contributions to a research group or company.

However, I also believe that this kind of explanation can be harmful because
it puts the blame for my laziness on others. Even though the research supports
the idea that this effect occurred, ultimately I got past it by focusing on my
own agency.

