Also, even though competitions won't help you develop as a mathematician, I still think it was a good experience for me to get out of school for a day and hang out with a bunch of other math nerds. That part of it was a lot more valuable than the competition itself.
I haven't done math competitions, but I feel the same way about programming competitions. I don't think I'm a better programmer because of the competitions. But I do think spending 4 hours, 2-3 days a week in a room with other programmers who were also there voluntarily made me much better. It's not like I stopped focusing on my own projects or coursework and did this instead. It was just an additional 12 hours a week of practice and socializing.
Some problems were hilariously artificial too. I still remember one where 10-year-old Suzie had a lemonade stand and had been keeping track of her profits each day. You needed to find the 3 consecutive days on which she had made the most total profit. The catch? It needed to run in less than 1 second, and the largest case could be (which means it would be) n=9,999,999. (I may be off by an order of magnitude.)
Little Suzie had been running her lemonade for well over 27,000 years apparently, but would throw a tantrum if it took more than 1 second to compute her answer.
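If I'm remembering the shape of the problem right, the intended trick was just a linear-time sliding window instead of re-summing every triple of days. A rough sketch in Python (the function name and return convention are mine, not the contest's):

```python
def best_three_days(profits):
    """Return (start_index, total) for the 3 consecutive days
    with the highest total profit, in O(n) time."""
    if len(profits) < 3:
        raise ValueError("need at least 3 days of data")
    # Seed the window with the first 3 days.
    window = sum(profits[:3])
    best, best_start = window, 0
    # Slide: add the day entering the window, drop the day leaving it.
    for i in range(3, len(profits)):
        window += profits[i] - profits[i - 3]
        if window > best:
            best, best_start = window, i - 2
    return best_start, best
```

Even at n = 9,999,999 this is a single pass, which is presumably how you beat the 1-second limit.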
While you may be correct with respect to actual day-to-day programming, I'm 100% sure that programming competitions make you a much stronger candidate in typical software developer interviews.
I've never competed in programming competitions, but I've seen the types of questions they ask, and many if not most of the interviews I've experienced recently use the exact same kinds of questions. And in some cases, even the exact same questions!
Create a function that returns true or false if _.
Followed by far more hand-holding than I was expecting, but most people got the idea fairly quickly. The only person who failed wanted to add a lot of print statements for no reason, and then didn't respond to offers of help.
IMO, good interview questions are such that most people pass. I mean, you're bringing them in, so asking something you'd have to google is a waste of time. The goal should be to minimize false negatives while still screening out some people.
The classic "what is an Object, what is an Interface" is just filler that tells you little. And very technical questions have ridiculous false-negative rates. So just softball an easy code problem, ask about background and fit, and call it a day.
PS: Remember you're interviewing people for a reason. When a good fit backs out you lose not just time but potentially a great coworker.
That said, you're right. Just this week I interviewed someone with almost 30 years of experience -- some of it dev, some architect, some management -- for a pure developer role, and gave him "Write a function(n) that takes an integer and gives you the nth number in the Fibonacci sequence." He got very angry and said he didn't know the algorithm, so I drew it on the board and explained it (a red flag for a former "scrum master" not to know what the Fibonacci sequence was), and he told me it was arbitrary, academic, and a waste of time. I asked him to do it anyway, since the point was to see his style and approach, and he got mad and left. Most junior devs solve this problem in ten to fifteen minutes, and I consider it more of an interactive ice breaker to get us both standing and talking than a stumper.
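For anyone wondering what the junior devs are writing in those ten to fifteen minutes: the iterative version is only a few lines. A sketch in Python (0-indexing is my assumption here; an interviewer would need to pin that convention down):

```python
def fib(n):
    """Return the nth Fibonacci number, 0-indexed:
    fib(0) = 0, fib(1) = 1, fib(2) = 1, ..."""
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    # Each step advances the pair (F(i), F(i+1)) to (F(i+1), F(i+2)).
    for _ in range(n):
        a, b = b, a + b
    return a
```

The naive recursive version is the other common answer, and whether the candidate notices its exponential blowup is usually the more interesting part of the conversation.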
Junior devs were more recently in school, which is the last time such knowledge is needed. If you have actual technical challenges relevant to your work, why not take that hint and shift the interview to something that might possibly come up on the job and that isn't trivial for a junior dev?
Instead of finding out what that candidate was good at, you just found out which academic trivia he doesn't remember, and that he has a low tolerance for irrelevant bullshit.
Look, someone looking for their first coding job is surely willing to walk you through CS101 assignments. It's a fine icebreaker if someone has no industry experience.
Someone more senior should be trying to have an entirely different conversation with you. Do you understand and value what they bring to the table that a junior dev doesn't? Will you adapt to take advantage of their strengths, or will you insist on following your process just because it's your process? Is there space for them to make a contribution or are you just looking for someone who will code up what you ask for?
Interviewing is a mutual search for fit. There wasn't one. Maybe that's because you weeded out someone who couldn't complete a freshman homework assignment with help. Maybe not.
Is it, now?
In many of these sessions you are unequivocally dinged for having a less-than-perfect answer by the time the bell rings. Or if there is some fuzzy acceptance standard in the back of their minds somewhere -- they certainly won't condescend to tell you what it is.
Instead, what you usually get is: "Uhh, hi. Here's a Google Doc. Can you type a fully working implementation X for me while I boredly watch? BTW I don't normally program in the language you're programming in and so probably shouldn't be doing this session with you anyway, as that will only work against you. But then again, it's not like I care -- I'm just doing this because they told me to."
I've always wondered how these competitions measure the program's runtime consistently. I guess the easiest way would be to specify the CPU the program will run on and use the bash `time` builtin, but it would be inconvenient for participants to obtain that CPU, and controlling for the cache might be difficult (maybe a kernel module that clears the cache before each run, plus pinning the program to dedicated cores and hoping its memory accesses are consistent enough that L3 behavior is comparable).
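For the wall-clock approach, the judging step could be as trivial as this sketch (everything here is hypothetical -- `sleep 0.2` stands in for running the contestant's program on the largest input):

```shell
# Hypothetical judge step: time one run of a submission and
# compare against a 1-second limit. %N (nanoseconds) is a GNU
# date extension, so this assumes a Linux judging box.
start=$(date +%s%N)
sleep 0.2   # stand-in for: ./solution < biggest_case.txt > out.txt
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
if [ "$elapsed_ms" -le 1000 ]; then
    echo "PASS (${elapsed_ms} ms)"
else
    echo "FAIL (${elapsed_ms} ms)"
fi
```

Which, of course, is exactly the setup whose noise (cache state, scheduling, other processes) the parent comment is worrying about.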
On the other hand they could just count instructions run, which would be completely deterministic, but might lead to some "optimizations" that don't make sense like using `rep movs` to zero memory instead of loops. Using a higher-level bytecode would have the same problems. They could also give different values to each instruction and possibly memory access, but even then students would be ignoring important real-world cache or pipeline optimizations.
But maybe I'm just overthinking this. This type of problem would be better suited for GPUs anyway.
I'm probably misremembering that specific problem, but they were typically very easy problems with a huge n and some small 'gotcha' in the scenario that allowed an algorithmic or dynamic-programming trick. Not realistic at all, but they were lots of fun. Not all of the problems were like that, but those were the hardest to get.
I originally participated to hang out with other nerds, but wound up getting a few state competition medals in the computer science test. The multiple-topic thing really helped cement my love for computer science/engineering. That was the most valuable part for me. I don't even know where my medals are anymore.
Personally as a kid I was often "sent" to these competitions but never bothered to prepare for them at all. I stupidly assumed it was a pure IQ play. Now I regret not taking it seriously. Makes me have even more respect for people that do.
I think this is the biggest disconnect between perception and reality of intellectual competitions (debate, math, chess, whatever). In reality, they require just as much practice to succeed as any other sport.
It's just much easier and more natural to see the beauty of mathematics while spending enough time on learning, solving problems, and engaging in competitions. Oh yes, I'm still seeing it despite years in MIPT with some real math. There's a lot of fun in it. For example, we had "mathematical battles," where two teams had to present and defend their solutions and earn points from a jury. It's also a very special and friendly environment, where you can meet people, connect to universities, and build a social network that will serve you for the whole of your life. I still have a lot of friends from the summer math schools I attended in the early 90s. For many it's also a social lift, egalitarian by its nature, where the distinction between rich and poor is almost invisible (not like on the street or in a schoolyard); it allows many children from small towns across the country to enter top universities and build successful careers, not necessarily in science.
I will never blame this system for presenting math "in the wrong way." It doesn't have to show the world of grown-ups. And, by the way, we never heard the word "genius" (except applied to Pushkin or Einstein).
On the one hand, my experience mirrors some of what the article talks about: I learned very quickly that the things professional mathematicians work on are very different from math contest problems. (I went to college intending to major in math, but switched to CS as soon as I took a semester of abstract algebra.)
On the other hand, the article seems to imply that many great mathematicians look down on math competitions for not giving an accurate portrayal of math as a career path. I don't see why that is an issue. My high school was a math magnet school, and 100s of students participated in monthly contests like California Math League. Almost every participant that I talked to in those days did math contests because they were fun, or because they were an interesting challenge. I never met anyone who said "I want to be a mathematician, and contests are clearly the first step on that road."
For me, math contests are like high school sports or drama or anything else. They appeal to certain subgroups of kids, they're fun and hopefully educational/useful in some way, and they don't have to be more than that.
For example, when I first went to the math olympiad summer program, I had trouble focusing on a single math problem that I had no clue how to solve for three hours straight. It's hard! The training program basically forces you to do that over and over, so I ended up learning a lot of how to focus for large chunks of time and do useful things to attack a problem that I didn't initially know how to solve.
I went into computer stuff instead of math stuff after college, and there's a lot of stuff I never used again. Algebraic topology, all the geometry theorems they don't teach you in high school, you name it. But the ability to work really hard on a single technical problem until you nail it, that's been constantly useful. Especially in startups.
Completely random: If you're who I think you are, I still remember seeing your name on the list of perfect AHSME scores in 1998ish. I think we met briefly at an ACM competition in '03 (we played Mafia for a while in a big group, and were briefly introduced by Po-Shen Loh who was one of my ACM teammates.)
I've seen folks with a lot of coding skill on their resume fumble simple whiteboard problems. I have fumbled simple whiteboard problems myself when interviewing (it was for a software engineering position, but I had spent the past several years in architecture and away from any real code -- so it was expected).
The point is whiteboarding is OK if you have a well-defined problem solvable in 45 minutes and it is just geared to assess familiarity with code. I don't think it's a reasonable expectation to come up with new approximation algorithms for NP-complete problems and solve+prove them on a whiteboard in 45 minutes.
Solving a coding problem on a whiteboard tests your ability to solve coding problems on a whiteboard. That's a bias. It makes people who get nervous standing up and being the centre of attention less likely to pass the test. If coding on a whiteboard is a part of the job then fair enough, but if it isn't then you're introducing something to the interview that filters people out based on something other than their ability to do the job - and that means you're not necessarily recruiting the best person. I believe that's a good reason not to use whiteboard tests very often.
While it's true that work samples are substantially better at evaluating candidates than informal interviews, they have their own downsides. For example, I have heard many people balk at multi-hour homework assignments as part of the interview process as too much of a time commitment to one company. In the end, any screening technique will be flawed. That doesn't mean that we shouldn't use them.
What we are interested in is algorithmic correctness. I think for someone who develops for a profession writing an algorithm on a white board shouldn't really be a big deal. Agree on the nervousness... I don't know a good way around it though... We normally do interviews on the phone using collabedit so the candidate can sit in their own comfort zone. I also make it a point to mute my phone and not to talk unless asked to.
A whiteboard session has the candidate doing three things at once:

* Presenting the solution.
* Determining the solution.
* Presenting themselves.
It's not really an accurate measure of how well they work day-to-day, because none of us show up to work and are given 15 minutes to present a solution to a problem we've not studied in years.
You're basically testing peoples' ability to improvise a solution while discussing it with two or three strangers. It's not surprising that there's a high failure rate in that.
Whiteboard-as-IDE is just bad, all the time.
I don't understand your thinking -- it seems like you picture it as a dichotomy between asking trivia questions which must be on a whiteboard, vs. assigning an extensive college homework problem set -- both of which seem like terrible ways of assessing on the job skill to me.
The questions you would ask at the whiteboard are probably fine questions. It's the way you allow them to be solved that's the problem.
For example, if someone asked me to write some code in Python that computes the median of a stream of numbers, I would probably do something using itertools-based generators, and/or something using the heapq library for a heap.
I do not have the APIs of these standard modules memorized. I absolutely could not write down their usage on a whiteboard. It wouldn't just be minor syntax issues. It would be so much of needing to look up which function argument goes where, which thing has no return value but mutates the underlying data type, etc., that it would just totally and completely prevent me from being able to fluidly solve the problem or explain what I'm doing. The whiteboard nature of the discussion would be a total hindrance, alien to the experience of actual day-to-day programming.
And I've used both heapq and itertools for many years, time and again, in easily many thousands of lines of code each -- and I still always need to look up some documentation, paste some snippet about itertools.starmap or itertools.cycle into IPython, test it on some small toy data, poke around with the output to verify I am thinking of the usage correctly, and then go back over to my code editor and write the code now that I've verified by poking around what it is that I need to do.
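For what it's worth, here's roughly the two-heap version I'd end up with -- written, as I said, with the heapq docs open in another window. This is purely my own illustration, not anything any interviewer asked for:

```python
import heapq

class RunningMedian:
    """Median of a stream via two heaps: a max-heap (stored as
    negated values) for the lower half, a min-heap for the upper."""

    def __init__(self):
        self.lo = []  # negated values; -lo[0] is the largest low element
        self.hi = []  # hi[0] is the smallest high element

    def add(self, x):
        # Push into the low half, then migrate its max to the high half
        # so every element of lo is <= every element of hi.
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Keep lo at least as large as hi (ties broken toward lo).
        if len(self.lo) < len(self.hi):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```

Each `add` is O(log n), which is the whole point of the exercise -- but note how much of the code is bookkeeping around heapq's min-heap-only, negate-for-max-heap conventions, which is exactly the kind of detail I'd be verifying in IPython rather than reciting at a whiteboard.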
That's just how development works. It does not ever work by starting with a blank editor screen and then writing code from top to bottom in a straightforward manner. It doesn't even happen by writing some then just going back in the same source file and revising.
100% of the time, you also have a browser with Stack Overflow open, google open, API documentation open, and you also have some sandbox environment for rapidly either pasting code into an interpreter and playing with it, or rapidly doing a compile workflow and running some code, possibly in a debugger, to see what's going on.
I do not understand why you wouldn't replicate that same kind of situation when you're testing someone. What you want to know is whether they can efficiently tinker around with the problem, use their base knowledge of the relevant algorithm and data structure to get most of the way there, and then efficiently use other tools on the web or in a shell or whatever to smooth out the little odd bits that they don't have instantaneous recall or a photographic memory of.
In fact, if they do solve some algorithm question start to finish, it just means they have crammed for that kind of thing, spent a lot of time memorizing that kind of trivia, and practicing. That's not actually very related to on-the-job skill at all. By observing them complete it start to finish, you're not getting a signal that they are a good developer (nor a bad one) -- only that they are currently overfitted to this one kind of interview trivia problem. You do not know if their skill will generalize outside to all the other odds and ends tasks that pop up as you're working, or as you face something you don't have 100% memory recall over.
Anyway, the point is you can still ask development and algorithm questions, but you should offer the candidate a comfortable programming environment that is a perfect replica of the environment they will use on the job, with their own chosen editor, access to a browser, same kind of time constraints, comfortable seating, privacy, quiet, etc.
And you should care mostly about seeing the process at work, how they verify correctness, how they document and explain what they are doing. If you're asking problems where mere correctness is itself some kind of super rare occurrence, like some kind of esoteric graph theory problem or something, you're just wasting everyone's time.
I'd definitely like to run something like this but I'd need folks to install a good screen sharing tool (join.me, webmeeting or some such thing...). But I'll definitely be open to asking the candidate's willingness to do so. That way they can get working code in an environment they are comfortable in...
We do most interviews remotely and offer a remote work setup as well, so it's not always practical to physically have the person code in front of me.
Then I was able to simply log in with my shell here at home, and the screen was shared with the interviewers. The whole interview took place in console Emacs, with the interviewer pasting in a question, me poking around and asking clarifying questions, then switching over to IPython, tinkering, and going back and writing code incrementally.
I think all of the modern front-end services that do this kind of thing are pretty terrible, like Coderpad, HackerRank, TripleByte, or more UI-focused screensharing tools. Heck, I'd even opt for just a Google Hangout if we had to do it by UI screen sharing.
I think the low tech route of SSH is vastly superior.
Hey, that's great. But the thing is, you never know what you're going to get.
Some interviewers absolutely do insist on 100% syntactical correctness (along with optimal performance on some made up combinatorial problem) -- even though they aren't giving you a shell or IDE to run your code iteratively. Sometimes they won't even give you a decent text editor -- though it may sound ridiculous, it's become very common, of late, for interviewers to ask you to just type directly into a Google Doc -- with variable-width fonts, autocapitalization and other helpful features enabled by default -- even at places where you'd think they really, really ought to know better.
> I also make it a point to mute my phone and not to talk unless asked to.
Again, it sounds like you're hip as to the basics of how these sessions should be run, and that's great.
Unfortunately, it's not generally so, out there. Quite a few interviewers seem oblivious to the basics of phone etiquette (using speakerphones with an obvious echo behind them, for example). Or just aren't particularly communicative for one reason or another. And sometimes it turns out the person you're talking to doesn't really know the language you're coding in -- so you have to burn precious minutes explaining the basics of the language to them, along with the solution you're presenting.
That's the fun part about these sessions. You just never know what you're going to get!
But let's face it, this skill is trivial to an otherwise intelligent person, and it's not the reason whiteboard coding is done at the interviews. It is to assess one's problem solving or even specific coding skills. Unfortunately, in a nearly QM way, observation here affects the outcome.
I did partake in CS competitions at the regional level, and to me they are less stressful than whiteboard tests. There you just have a console or a sheet of paper and a few hours to hash it over. No 3 pairs of eyes staring at your back. Guess it's the same for many others: the thing that turns reading a figurative newspaper chess column into a chessboxing tournament. One might be good at chess and OK at boxing, but not necessarily at the same time.
(and no, unfortunately I don't see a good way to fix this)
To me this seems like a basic requirement like reading and writing - is it really such a hard skill it's worth it to filter for it? I would think it's fairly easy to learn this skill by attending meetings and watching others if somehow one is unfamiliar with this technique? Or is my expectation level of what people generally can do way off mark?
I'm not looking for leaps of insight though (I explain the insight required), and we're both in front of the IDE and can search the web to clarify simple questions. It's more communicating a problem and the outline of a solution, and seeing if someone is able to understand what you say and is fluent in turning ideas into code in their chosen language.
I personally despise gotcha questions that rely on you either having seen the problem before, or getting lucky enough to spot the insight in a pressure situation.
From the article:
From Terence Tao:
> professional mathematics may be quite different from the reality. In elementary school I had the vague idea that professional mathematicians spent their time computing digits of pi, for instance, or perhaps devising and then solving Math Olympiad style problems.
In real life it's the same. You don't cook up interview coding problems and solve them all day. You have real-world work to do, and often that requires a degree of productivity, not knowledge. This is all the more true given how cheap and easy access to knowledge has become because of the web.
From GH Hardy:
> it is useless to ask a youth of twenty-two to perform original research under examination conditions; the examination necessarily degenerates into a kind of game, and instruction for it into initiation into a series of stunts and tricks.
Notice how closely this matches with interviews which mandate people to demonstrate expertise in trivia. Or quickly state the Big-Oh complexity of some sorting algorithm.
From Andrew Wiles:
> Real mathematical theorems will require the same stamina whether you measure the effort in months or in years [...]
Almost any real measure of algorithmic expertise is seeing how good a person is at coming up with a new algorithm for a novel problem. What exactly is your knowledge of 100 sorting algorithms worth, when it can be retrieved with a Google search that takes a few milliseconds?
Interview algorithm gurus, to me, are no better than those smart alecks who used to show up at school having memorized multiplication tables and then present that as some kind of mathematical ability.
In the tech industry, however, here and there you'll find companies that know what they're doing: they actually put a lot of thought into picking reasonable problems to solve, and present candidates with reasonable conditions for solving them. They're clear in stating both the problem and what they expect, and the interviewers are reasonably personable and have great communication skills.
But quite often, it's a total shit show: problems are often poorly stated (and sometimes ridiculously complex), combined, importantly, with a poor or completely absent statement of what is really expected from the candidate (as in: do they want a perfect working solution, as running code? Or does it suffice to just outline the general idea, perhaps with pseudocode? Quite often this is never stated up front), along with gratuitously taxing and sometimes downright annoying conditions in which to tackle this allegedly crucially important problem. Among my favorites: whiteboards with barely usable markers and erasers, or their electronic equivalent -- Google Docs, or other ridiculously unusable coding "platforms"; voice-only sessions of nearly any kind, but especially those where the interviewer clearly has limited communication skills; and of course sessions where the interviewer doesn't know the language you're coding in very well, so you have to constantly pause to explain the basic facets of said language along with the solution itself.
Ofer Gabber and Ron (Ran) Donagi also did very well on a semi-formal Putnam, and did so at very young ages. They went on to decent math careers.
I also took the Putnam at very young ages, but never cracked the top 100. I went on to leave mathematics.
Nat Kuhn was perhaps the best of the undergrads then. He went on to be a psychiatrist.
Andy Gleason was perhaps the best at that kind of thing among the faculty. Wonderfully nice guy, and my de jure thesis advisor, which was a bit awkward because he never got a PhD himself and didn't quite understand my stresses; I didn't realize the no-PhD part until after the fact, when I saw his resume in connection with his election as president of the American Mathematical Society.
The high scorers on the exam were a Who's Who of British science in the 1800s. In 1854, for example, the second highest scorer was James Clerk Maxwell, the greatest physicist of the century, who gave humankind its first look at a fundamental law of nature. The guy who beat Maxwell became a coach and spent the rest of his life teaching people how to do well on the exam.
The one math genius I know who despised Olympiads, ended up leaving academia over a famous but wrong proof.
But missing are the stories of those who didn't make it big in spite of great competition performance, and those who fell out of math because of failing at math competitions.
In India, for example, competition math is everything at the high school level. This is because competitive exams like the famed IIT JEE, etc. are essentially variations on the competitive math theme. A few serious math enthusiasts do take up broader math-specific exams for math institutes, but those numbers are minuscule. The worst affected in my experience, are the talented and the enthusiastic who were discouraged and/or dropped out altogether because of failing at optimizing their skills and learning for competitions and similar exams.
I also participated in the music competition, called "solo and ensemble festival." Like the math competitions, music competitions are an artificial environment -- one student in front of a judge, rarely any audience. But in some sense they are "real world" because they mimic the auditions that are very much a real part of a music career, e.g., for getting music scholarships and entry into most orchestras. I never got that far.
What you have to do, effectively, is become at one with its true nature. Which in general is much more difficult than simply staring at it.
That's very vague.
"They’ve done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era."
> "Though such a competition may have its raison d'être, I think those younger people who are seriously interested in mathematics will lose nothing by ignoring it."
And that's what it comes down to. Are these competitions fundamentally necessary to someone's development as a mathematician? Shimura seems to be saying something like, "Eh. Not really. If you don't like them, you don't need them."
Maybe we need alternative avenues for maths nerds to meet each other.
It makes me think of the line about raising children: that it's better to say, "I recognize that you worked really hard on that, it looks wonderful!" than "Good job! You're so smart!" because the former captures the reason why they did a good job. By calling that out, you can perhaps reinforce the behavior with a longer view.
I have a son now, and I'm going to try to avoid calling him smart or gifted. Or at least not telling him that he's smarter than the other kids.
However, I also believe that this kind of explanation can be harmful because it puts the blame for my laziness on others. Even though the research supports the idea that this effect occurred, ultimately I got past it by focusing on my own agency.
No one cares about your competition results after you start publishing papers, so not really. If you publish shit papers, it doesn't matter how well you did in competitions; you will never get a good job in academia (or anywhere else, for that matter, unless you learn a useful skill like programming).
It was 2008 and Army surgeon Christian Macedonia had been told there was a high-level opening for a doctor who wanted to change the military's approach to battlefield brain injuries. When Macedonia arrived for the interview, he found himself face to face with Adm. Michael Mullen, chairman of the Joint Chiefs of Staff.
"And he looks at me and he goes, 'Who are you and what are you doing in my office?' " Macedonia says.
Macedonia explained he was there about the job. Mullen replied that he had decided he didn't need a doctor on his staff. "And I said, 'Sir, I'm going to disagree with you,' " Macedonia recalls.
Macedonia, a lieutenant colonel, told the admiral that if he really wanted to do something about brain injuries, he did need a doctor. What's more, he needed one with combat experience, strong scientific credentials and a high-level security clearance. "I said, 'Sir, you really only have one person and that's me.' "
Mullen smiled. He had been looking for someone he might have to rein in, but would never have to push. "And Macedonia fit that model for me perfectly," he says. "He's very outspoken, very straightforward. We talk about out-of-the-box thinkers; he just lives outside the box."