Also just to add - many REALLY good engineers have some form of anxiety disorder, and the standard practice for an interview is to put them in a pressure cooker situation.
"Oh my god, production is down! Quick, someone tell me, from memory, the solution to the N Queens problem!!"
(raspy voice) "Wanna play a game? You have 45 minutes to parse this html in assembly using only half a keyboard. Otherwise... you will not be able to feed your family. From us, at least."
I would absolutely watch SAW: The Software Interview
Don't forget about interrupting the candidate every 5 minutes to ask about the color of a blue moon, effectively preventing them from "getting in the zone".
I've been in whiteboard coding interviews and in situations where "production is down" and they are very different kinds of pressure and expectations.
And I’m a good test taker and interviewer.
It's often fun to read these "oh, I can say a lot about this" posts, though.
Source: I interview lots of candidates for SWE positions.
Even if the problem is fully specified, asking clarifying questions is considered a good signal in any interview I've ever conducted.
Unless you deviate from the script the [incompetent] interviewer expects, in which case you are toast.
I've pointed out to interviewees that user input in one problem can result in circular references. I point out, as a hint, that thing A can refer to thing B, then thing B can refer back to thing A, and you'd wind up with an infinite loop in one part of the code. How can we detect this? So then these 3.8+ GPA Comp Sci grads tell me to write a conditional that detects a 2-element loop. Not an algorithm that can detect a cycle, just a conditional that only detects the 2-element cycle. Then I have to ask, well, what about a 3-element cycle? To the credit of most of the applicants, they then try to incoherently describe an algorithm involving some kind of hash table, but never give anything implementable. One applicant didn't even realize there can be n-element cycles, and proceeded to handwave away graph algorithms entirely.
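For illustration, here is a sketch of the kind of implementable answer being asked for (my own example, not the interviewer's canonical solution): detecting a reference cycle of any length with a visited set during a depth-first walk. Here `graph` is an assumed representation mapping each item to the items it refers to.

```python
# Hypothetical sketch: detect a reference cycle of any length by
# walking successors and remembering nodes on the current path.
# `graph` maps each item to a list of items it refers to.

def has_cycle(graph):
    visited = set()   # nodes fully explored, known to be cycle-free
    on_path = set()   # nodes on the current DFS path

    def visit(node):
        if node in on_path:   # back to a node on this path: a cycle
            return True
        if node in visited:
            return False
        on_path.add(node)
        for nxt in graph.get(node, ()):
            if visit(nxt):
                return True
        on_path.remove(node)
        visited.add(node)
        return False

    return any(visit(n) for n in graph)

# A -> B -> A is the 2-element cycle; the same code finds n-element cycles.
print(has_cycle({"A": ["B"], "B": ["A"]}))            # True
print(has_cycle({"A": ["B"], "B": ["C"], "C": []}))   # False
```

The point of the hash-table hint in the interview is exactly the `visited`/`on_path` sets above: constant-time membership checks make the walk linear in the size of the graph.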
These same applicants usually have a hard time writing their own recursive algorithm. Would you want to trust your startup coding to people who don't habitually think at least one step ahead? Come on! These things used to be covered in the first algorithms class! This should be Freshman Year stuff!
For many front-end developers, I don't think graph algorithms matter. I suspect they don't matter for machine learning (though I know little about that field). Would you pass up knowledgeable people in these fields?
This is why you need multiple people to ask a lot of different questions and judge people by what they can do, not what they can't do. And try not to entirely forget about hindsight bias.
When I took Comp Sci, we received some very general, broadly applicable tools. What I'd expect from my Comp Sci classmates would be to look at such a problem for a couple of seconds, then say, OK, you can do [X]. We were educated with a toolset that allowed us to do that. If you think of cycle detection as some particular, obscure thing you'd never have to do, let me say this. 1) If you think of it like that, and you further tell me it would take you a couple of days, that immediately tells me you don't have a particular, very broadly applicable toolset. 2) There are contexts where you have to do the kind of systemic thinking where cycles are something you have to account for.
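As one concrete example of a broadly applicable tool in this space (my choice of example, not necessarily the specific toolset the author has in mind): Floyd's tortoise-and-hare technique detects a cycle in any singly-linked structure in constant space. The `next_of` dict here is an assumed stand-in for "each node's single successor".

```python
# Hypothetical sketch of Floyd's tortoise-and-hare cycle detection.
# `next_of` maps each node to its single successor (or None at the end).

def floyd_has_cycle(next_of, start):
    slow = fast = start
    while fast is not None and next_of.get(fast) is not None:
        slow = next_of[slow]                # advance one step
        fast = next_of.get(next_of[fast])   # advance two steps
        if slow == fast:                    # pointers meet only inside a cycle
            return True
    return False

chain = {"A": "B", "B": "C", "C": None}
loop  = {"A": "B", "B": "C", "C": "A"}
print(floyd_has_cycle(chain, "A"))  # False
print(floyd_has_cycle(loop, "A"))   # True
```

The appeal of this tool is that it generalizes: any deterministic "follow the next pointer" process can be checked for looping this way without storing the whole history.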
There have been fields where knowledge has been lost. The British Navy figured out how to stop scurvy, then lost the ability to do so. The Japanese figured out how to stop beriberi, a deficiency disease, then failed to propagate the knowledge. I am starting to wonder if academia in the Bay Area and California can effectively compete with startups and big tech companies for the kind of Comp Sci expertise needed to teach. Maybe only the very top institutions can do this?
Since people can learn from each other, it's hard to say what people really need to know the day they join the team.
The Brits and the Japanese had no idea of the underlying mechanisms; in our case, society does have the underlying comp sci knowledge, archived in libraries. If an entire generation of computer science grads just fluffs graph theory, it's not like the knowledge will be lost, but it's definitely a step in the wrong direction.
However, it was not graph theory which was lost. It's a body of engineering knowledge for how to do applied graph theory; how to recognize you have to deal with graph theory, then quickly clobber the problem with several broadly applicable tools. Why isn't this being passed down from one generation of profs and TAs to the next?
> Since people can learn from each other, it's hard to say what people really need to know the day they join the team.
It's one thing to know a high level overview about something, then to be able to go and bone up on a specific area. It's another thing if someone doesn't know enough to recognize the thing without prompting and clearly has no practice solving problems of that general category, whatsoever.
EDIT: Isn't anyone curious about what these tools are?
Um, not when you are making readers feel incompetent for not having them.
I have interviewed quite a bit because I work as a contractor. Most companies I have interviewed with have no idea what to actually do during an interview. They just copy stuff that they have read about on the Internet. Oh, Google does puzzles, well...
Interviewing is broken.
If you do something, whatever it is, you might as well put an effort to do it well.
In this case it means studying. If you don’t want to, state it upfront.
And the comments:
It is highly unlikely that you already know, or god forbid, remember, everything you need to know in order to be a productive contributor to the task at hand.
It is also highly unlikely that the technical interview is correlated, even remotely, with the task at hand.
You study and perform given the above.
If the employer likes your effort, attitude and skill, they’ll hire you.
If they don’t, the problem solved itself. You did your best and they are not a good match for you.
IMHO, the biggest challenge is to decide what aspects of oneself to focus on. Depending on the context, I could present myself as a technical know-it-all, a process-oriented team-builder, a collaborative problem-solver, an engineer focused on delivering business value, or something else. All of these (I hope) would be accurate representations of myself, albeit incomplete by themselves. Since I don't have the time to showcase all of these facets of myself, I pick and choose based on what I think the organization's interview process is selecting for.
Though I was recently rejected after an all-day onsite interview, so maybe I'm taking the wrong approach; or perhaps I misread the company's hiring objectives; or maybe it wasn't the right fit.
BTW, I hate trivia-style tech interviews, and I'd be hesitant to work for any company that utilizes them—not because I wouldn't want to go through the process personally (I actually kind of enjoy them), but because they're optimizing for a set of skills that is very much out of line with the requirements of 99% of software development teams. Our industry needs more holistic thinkers; most of us face many more people challenges than technical ones.
There are some things I do in programming that are such habits that I have a hard time explaining them.
If someone gave you an English test (assuming you are a native English speaker), and gave you a sentence to correct:
I love that antique green big wonderful car that is always parked at the end of the street.
It probably would sound wrong to you but could you explain why? (http://www.gingersoftware.com/content/grammar-rules/adjectiv...)
That is true only to the extent that the interview is related to the position... Whereas most interviews in our industry are akin to a hazing ritual, and it stands to reason that the "Google interview" got its name from that; it is also religiously practiced at other frat companies like FB and stereotypical startups with young founders.
W.r.t. the ritual aspect, it is thus natural that at Google/FB you first have to pass the interview, and only after that do you get to the project assignment stage (not sure about the specific details at FB, as I failed there, while at Google the offered projects were pretty crappy, as was the offer itself).
You go into the interview, having not studied. Your twin studied and ended up being more impressive during the interview. Out of the two of you, who will get the job?
A good developer doesn't optimise for a specific problem or even a group of problems; they optimise for the meta-problem, which is how to quickly find the solution they need no matter what the problem happens to be. No one can possibly know everything, and these gotcha-type questions say more about the interviewer than the interviewee.
The studying that would have led to a success:
* Memorizing a significant amount of the interview's programming language. Since there is no reference material, you really need to know it. No standard library reference to help you out.
* Solving enough algorithms and data structures problems to be able to minimize the time needed to identify and implement them. There is a time constraint, so blanking out or slowly deriving them is a recipe for rejection.
I'm doing both of the above, and I know I'll have success this time around, but it was jarring the first time.
Honestly, I'm OK with this now, because I am good at studying and learning. I just wasn't expecting it initially, because I wasn't going in for a position at Google or something. The company advertised the position as needing much less experience than that. So I was really surprised (and under-prepared) when I was given the whole day coding interview process.
Even more irrelevant with today's access to Google, StackOverflow and the like. I've done my share of learning by heart pages of proofs for some obscure quantum physics model, regurgitating them on the day of the test, and then forgetting about it. If the company is looking for an obedient monkey, well, I'll pass.
> Honestly, I'm OK with this now, because I am good at studying and learning.
But are you any good at analyzing a combination of problems never encountered before?
I haven't won a Nobel Prize yet; otherwise, most of my work is done by first examining what I know vs. what I don't, and then using the appropriate tools (search, reference -> think -> implement/test -> discuss, as necessary) to accomplish the next steps.
Regardless of what I think and what I know, or even of my abilities to solve novel problems, the industry has decided upon its entrance exams.
Many times the problems vary on the surface, but the core concepts do not, and so the iterations and combinations thereof are solvable by polling previously encountered experience (study & practice).
It, uh, probably does help to know the language you're being interviewed for, although a good programmer in any language can almost certainly start being productive in another one within a month. A good hiring process knows this, too.
I could write programs in the target language on a computer, easily, but on a whiteboard I encountered a few halts where I would have used the reference docs. When I asked a doc reference question, I immediately could tell from facial expression alone that my interviewer was docking me points.
Was that experience ridiculous? I know it changed my willingness to interview more until I've mastered more material. I'm not going to burn time and money to be grilled for several hours and not be able to nail it.
I'm employed full time but it's time for advancement, so I've been studying a lot.
> For instance, there is a domain of cognitive science called “expert-novice studies.” Two of its leading figures are Herbert A. Simon, the Nobel Prize winner, and Jill Larkin, who has co-authored articles on this subject with Simon. Their studies provide an insight into the paradox that you can successfully look something up only if you already know quite a lot about the subject. In these studies, an expert is characteristically a specialist who knows a lot about a field—say a chess master or a physicist—whereas a novice knows very little. Because the expert already knows a great deal, you might suppose that she would learn very little when she looked something up. By contrast, you might think that the novice, who has so much to learn, ought to gain a still greater quantity of new information from consulting a dictionary or encyclopedia or the Internet. But, on the contrary, it’s the expert who learns more that is new, and learns it much faster than the novice. It’s extremely hard for a novice to learn very much in a reasonable time by looking things up.
The linked paper is interesting and elaborates on this phenomenon far beyond that one quote.
A CompSci degree/MSc/PhD is just a piece of paper that a lot of other people have.
Employment history is just a paragraph on your resume that you could have embellished to make yourself look better.
This is why companies do these "Dr. House"-style 4-6 hour, multi-stage interviews: because they can't evaluate people any other way.
For instance, we use MongoDB at my place of work, but we do not take advantage of all the features Mongo has to offer at one time. If I were to interview at another company that also uses Mongo, it might be to my benefit to dive a little deeper and look for features I may be unfamiliar with on a day-to-day basis.
This can be applied to all tools and languages.