Hacker News
Lessons from 3,000 technical interviews (interviewing.io)
692 points by leeny on Dec 28, 2016 | 311 comments

The author draws a hard distinction between Udacity/Coursera MOOCs (good) and traditional master's degrees (bad). I'll interject that with Georgia Tech's Online Master's in Computer Science program [0], which is delivered via Udacity and insanely cheap [1], you can get the best of both! (Their "Computability, Complexity and Algorithms" class is one of the top Udacity courses cited in the article.)

Keep in mind that a traditional degree program does have a huge advantage over a strict MOOC: accountability. It sounds good to say that anybody can go push themselves through one of these courses. Try pushing yourself through ten, and actually writing all the papers and implementing all the code, while working full time and having a family. That grade looming at the end of the semester really does wonders for your motivation. Plus you can get help from live professors and TAs, and the Piazza forums for OMSCS are full of smart, curious students who love talking about the subject at hand. There's a richness to the degree experience that I don't think you get with scattered classes.

(Obvious disclaimer: I'm a current OMSCS student)

[0] http://omscs.gatech.edu [1] https://www.omscs.gatech.edu/program-info/cost-payment-sched...

$6,600 [1] for a master's from a top 10 computer science program [2] is an absolute steal! My master's program costs that per semester.

[1] https://www.omscs.gatech.edu/prospective-students/faq

[2] http://grad-schools.usnews.rankingsandreviews.com/best-gradu...

These numbers still look very high from a European point of view. It seems it makes a huge difference whether a society deeply cares about educating the population, or cares just superficially.

For example, in Berlin you spend less than 300€ per semester, and a CS master's takes 10 semesters of regular time. Add two semesters to be realistic, and you end up with 3,600€, which is roughly $3,800.

Oh, and it contains a full time ticket for public transport (which would otherwise cost 970€/year, i.e. 485€/semester). In other words: University education is cheaper than regular public transport, even though it contains a full time ticket.

Oh, and if your parents don't have that money, you can get half of the university costs + half of the living costs + half of the rental costs from the state. [1]

And note that Germany is far from the best in Europe regarding education, in universities as well as in all other types of schools. It is regularly and heavily criticized for cutting educational spending more than is good for the country. [2] However, after reading statements like the parent comment, I suspect it is still pretty good.

[1] More precisely, you get a debt called "BAföG", from which you have to pay back only ~50% after finishing - either in rates or all at once.

[2] For example, this forces universities into projects financed by third parties (i.e. companies), which adds a strong bias to the research direction and even more so to the results. Even worse, if this research touches business internals (which is easy for any company to claim), results end up only partly published, or not published at all. To be fair, the latter is more a problem of the law than of third-party projects. There should be a law demanding that everything fully or partly paid for with public money be Open Access as well as Open Source.

A society that cared deeply about educating the population wouldn't confuse schooling for education.

Yes there's some education conferred along with the schooling, but credentialism is a huge part of it and even more so in Germany than most countries.

> wouldn't confuse schooling for education

I agree that there are not-yet-mainstream concepts for schools/universities/etc. that should be covered by public money as well, at least partly. Currently, this exploration happens entirely in the private sector, which is simply inadequate (read: too small and too slow) for society to move its educational system forward. A society should actively invest in improving its education the same way it invests in science, and that investment is clearly lacking in Germany and many other countries. We advance the educational topics, but not the educational system itself.

But: Investing in classic schools and universities is still better than not investing heavily in education at all, which is the only real-world alternative I've seen so far. (And I would be glad to be introduced to a real-world third alternative.)

> Yes there's some education conferred along with the schooling

Some? Maybe I was just lucky, but at school and especially at university I got a very good foundation and always felt well prepared to educate myself later on (through books, websites, technical manuals, and so on). Contrary to conventional wisdom, I learned more about critical thinking and judging sources in school/university than anywhere else. Not in all lectures, but in enough lectures that I would otherwise have missed. Without that initial foundation, educating myself later on would have been much harder.

Okay, I can introduce you to three real world alternatives I've seen in my own life—self study, direct mentorship and work experience. I'll share one example of each from different fields.

To be clear, I don't think there's a single one-size-fits-all solution for education. One of the core problems I see in formalized schooling is that by its nature it pushes large numbers of people through the same curricula. This may have been good in the early industrial era, but in today's world most job-related skills that can be commoditized are either outsourced or automated. Non-job-related skills are also of great value, though it's not clear that it's best for people to build them in a factory-line style either.

Self-study: I put over a thousand hours into foreign language classes while growing up and got pretty bad results. It's an ancient discipline and curricula have had centuries to adapt, but it's just not well-suited to formalized schooling. I've met literally thousands of people with advanced degrees in English language study who don't speak that well. I've also met a lot of foreigners who graduated with degrees in Chinese who don't really speak or read comfortably. Though I've hired people for positions in which English language skills were important, I've never even considered looking at their related credentials rather than evaluating their results. Foreign languages are very learnable through self-directed study. This is even true for one's native tongue—most really good writers have gotten there through voracious reading and practicing their craft, not generally through advanced degrees.

Work experience: Another discipline in which I've seen schooling fall down is sales. It's a core business skill, but those I've met who have excelled at it have come from a variety of backgrounds, not necessarily business schools. Almost invariably, the people who really know how to sell have gotten that way through work experience, either for themselves or on commission for someone else.

The third alternative, that of direct mentorship, is probably the most powerful I've encountered. Especially in music, athletics or other extremely competitive fields, there's nearly always a mentor behind the top performer, and often there is a series of several mentors over different stages of the learning process.

Now at this point, I suspect you're thinking about the fact that there are two types of educational goals—getting really good at something and getting to minimum level in all the core skills. Though my three examples were related to the first goal, schooling often fails in the second goal as well. It can succeed, but there are still a lot of people who do what must be done to get the credential they want and little else. On the other hand, it's exceedingly rare to meet someone who reads broadly and doesn't end up with at least a decent education.

Curious reasoning about society. To me, American society seems to care about education more, precisely because its members are willing to pay for it themselves. In your sentence, you juxtapose 'society' and 'population', but those are exactly the same thing.

Maybe you meant 'government' instead?

As someone who moved from Australia to the US, I don't think America cares about education for all; only education for the rich elite. Universities should not cost as much as buying a house. The country literally puts a student in debt for life. That's nuts.

But it's subsidized; so people who aren't getting an education are paying for those who do.

> But it's subsidized; so people who aren't getting an education are paying for those who do.

This is just another way of saying that the society cares about this issue!

To use an analogy: When a society cares about parenthood and children, non-parents pay some share to parents. This is how financial solidarity works. How else should that work? By telling parents that they do a great job, that you appreciate what they do for society, but not giving a single cent to them? That would be hypocritical, not solidary.

So why can't it be voluntary, via donations to universities?

> By telling parents that they do a great job, that you appreciate what they do for society, but not giving a single cent to them? That would be hypocritical, not solidary.

It's not hypocritical if you compliment someone without paying them money.

> So why can't it be voluntary, via donations to universities?

You could say that about any tax-funded expense?

For some things like courts and police it's required to stop violence and fraud; i.e. to maintain the rules of the game.

But, yes, I am in favor of a small, limited government.

Education is perhaps more important for preventing crime and violence than police and courts are. Remember, police and courts just deal with the crime; they don't prevent it like having a proper education and employment does.

Why wouldn't you see an elected government's policy as voluntary, ultimately?

Why would you see the decisions of the rich elite few allowed to be part of government as reflective of the population?

No, only some portion of the population (possibly not even the majority--see the U.S. 2016 election) decides for everybody including those who are against it.

One could very much argue that having educated people benefits society, and therefore everyone, more than it costs society. Also, the educated pay more taxes individually, since they usually earn more.

But you'd have to argue it, it isn't self-evident that more education is simply better. Look at the number of PhDs who can't secure post-docs let alone tenure track. They may feel some personal satisfaction from the letters after their names, but it's very questionable that "society" benefits from churning them out. Once you educate people to the level of basic literacy and numeracy, say age 16, it's diminishing returns after that.

Yeah, it's sort of like American corporate subsidies: Not every taxpayer owns a petrochemical company. But every taxpayer pays for those that do.

Subsidized - yes. But I don't agree with the second sentence: generally, the more highly educated get higher salaries afterwards and pay far more taxes. These taxes are then used to provide education to the next generation (and also for lots of social programs that target the low/no-income population).

If education were always profitable like that, governments wouldn't subsidize it; they wouldn't need to. Heck, people would be fighting to give student loans.

The reality is lots of people study subjects that don't result in higher salaries but political correctness insists that people be able to study the arts easily, so governments "have" to subsidize education.

Well, no, the cost of education is ridiculous. Not many people get out of undergrad without any loans, and especially if you want to go on to get higher education (yes, even law degrees which pay nicely after you graduate), you'll be stuck with a mountain of debt that you won't be paying off any time soon.

I know someone in particular who went on to get a law degree (and was steadily employed in her field from her time of graduation) but was only managing to pay in the double digits toward her loan's principal on a monthly basis -- the rest went toward interest.

The benefits are ridiculous. Not the costs. Where I live it costs $28,000 to get a BS. I took out loans for every penny. Before college my salary was just under $18,000. After college I was making $48,000. I got my MS and PhD for free, fully funded by my work while in graduate school. I even made a decent salary, $18,000 during the year and $30,000 during summer internships.

My first post-grad school salary was $112,000.

My education was an investment. It paid off.

Yes, but so are the trade schools/apprenticeships attended by people who don't go to university, so what's your point? Your education is subsidized either way, unless you attend neither, in which case the individual is probably not going to pay for anything at all.

> individual is probably not going to pay for anything at all.


When the government pays for health care (like what happens in most european countries) it means that people that are not sick are paying for people that are sick. Why should they, right?

Good question. For saving someone from dying it may be justified to forcefully extract money from others.

But definitely not for education.

'forcefully extract money'

I hate that sort of terminology; money only exists for you to trade because you are a member of a society that collectively decided it was a good idea.

Welcome to the social state, where the state believes the sustainable development of the whole society can only be accomplished by raising everyone's standard of living.

Something to consider is that many people in CS grad school pay no tuition at all via grad assistantships, either research or teaching.

You live in the wrong country.

I'm studying a 3 year coursework masters that allows people with no background in CS (just had to do an additional semester of courses) to participate and it aims to give a pretty well rounded CS education. Not a common thing, apparently.

I completely agree, the motivation and environment that it gives you, the reinforcement that you're actually doing something official and serious as opposed to doing something that very few people outside of the software industry take seriously, the structure it gives you, the fellow students you meet and befriend, it's all huge. Additionally, looking at the Udacity/Coursera material, a lot of it is shallow, poorly taught, and would not have taught me nearly as well as my master's program has taught me.

Could you please elaborate? It looks like this program is only for people who already have a CS-related degree.

What courses are you taking?

Other master's programs have entry routes for this as well. I know that Delft TU has an 'introduction' year which covers everything you need for the two master's years. It does require a technical (of sorts) bachelor's, and it has a low success rate because it packs all the difficult subjects into one year instead of spreading them over multiple years as in the regular CS bachelor's, skipping any form of specialization courses. Not completely undoable, but difficult nonetheless.

I imagine there are multiple universities around the globe offering the same sort of educational entry.

While the article showed it doesn't matter at interviews, we (at my company) do require technical master's degrees, though they need not be in CS. We found the kind of individual thinking, combined with analytics and methodology, to be far better in those who completed a master's degree than in those without. Especially self-reliance, which is important if you do not have groups of 10+ devs working together.

Which program are you doing?

Is this UChicago's program? I did a masters there without a CS background, and it was amazing. Incredibly hard, but amazing. The beauty of it is that they ramp you up on the mathematics and programming if required with a few additional classes. They also let you test out of either if you already know it. Doing that masters was one of the best things I've ever done.

I am curious what program you are taking as well.

> Keep in mind that a traditional degree program does have a huge advantage over a strict MOOC: accountability.

I wonder if this is a contributing factor to the success of those who completed MOOCs. There was zero accountability, but they STILL managed to complete the coursework and assignments.

While I agree that MOOCs lack face time, I was a terrible traditional student and excel when given the opportunity to structure my time as I see fit.

Accountability is overrated.

I'm in the same camp. I've started studying at a university a few times now, and I keep failing because other opportunities come up that are more important than passing the course.

I need to be able to self-study at my own pace, according to what my free time allows. I'm also not that interested in formal tests and exams. Or even course projects.

I want to learn what I need for whatever project I'm currently working on or going to work on next.

You are speaking my mind here :)

Jumping in as another OMSCS student to second your post. The accountability really makes a difference in finding the balance between work/life and the schooling (which can be nontrivial, considering the courses are certainly rigorous).

Same here. There's much more at stake and it helps push you. Grades and money are extrinsic motivators, but they get you from point A to point B.

I very much want to enroll in the OMSCS; I know I could succeed at it, but the school requires at least a Bachelor's degree to attend (they don't count work experience in software development, so my 25+ years mean nothing).

I found out about the OMSCS after I started Udacity's "Self-Driving Car Engineer" nanodegree, which I am currently taking. My plan (hopefully) is, after I complete the nanodegree, to take some kind of online BS program (maybe accelerated if I can find one), then hop over to the OMSCS (hopefully it or something similar is still running). I mainly want to do it to prove to myself I can do it; I believe that I can.

I made some early life choices that have led me to where I am, and while they haven't hindered my job or career prospects, I have always wanted to do what I should have done in the first place.

Western Governors University (non-profit, accredited, affordable) offers a few different B.S. programs that you could look at to use as a buffer. I received my undergraduate degree from WGU and highly recommend it. It's self-paced and there are ways to fast-track the degree in less than the typical 4 years.

Hey - thanks for this recommendation - I will definitely look into it!

OMSCS looks pretty intriguing. Were there any other similar programs that came up while you were initially looking into it?

"Whether passing an algorithmic technical phone screen means you’re a great engineer is another matter entirely and hopefully the subject of a future post."

This sentence, plus the inverse correlation between experience and "interview performance" shown there, strongly suggests those interviews are biased toward the platform itself rather than toward real technical interviews.

From the data, it looks like the questions asked through that service cover the kind of material you learn in university; after many years of not using it, that knowledge fades away.

This is reinforced by MOOCs being the 101 of the subject they're dealing with. It would be interesting to see if there are trivia questions from 101 courses.

The most obvious bias is in the clickbait title. Those 3K interviews are in a specific platform, meaning they're done in a specific way.

So after checking their results it seems that interviews done using that service benefit people with fresh university or 101 lessons knowledge.

What worries me more is the lack of improvement and perhaps the moral superiority of ending the article with a "these findings have done nothing to change interviewing.io’s core mission". It feels like the entire statistics game shown there was to feed back what they already knew.

Yes, because we all know that your ability to implement a red-black tree from memory has a direct correlation with your ability to implement some random business logic or CRUD app. Oh wait, they don't.

I used to think along these lines. Then I started doing 10+ interviews a month and realized a very clear reality: basic CS knowledge and problem-solving skills are far more important to me and my team than knowing how to slap together some semblance of a working CRUD system.

I ask "algorithmic" questions, normally expressed as a legitimate business case (invent a real world problem, solution is implement some algorithm or use specific data structure). My warm up question typically is a simplistic "find the subset in a given collection that matches this specific criteria" (with a subtle implication of "do it efficiently"). The average coder should be able to solve this type of thing, on their own, in about 10 minutes max, 15 with some feedback on improvements.
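A hypothetical sketch of that kind of warm-up in Python (the function and data names are my own invention, not the interviewer's actual question):

```python
def find_matching_subset(records, is_match):
    """Return the items satisfying the predicate: one O(n) pass."""
    return [r for r in records if is_match(r)]

# Hypothetical usage: users with spend below a threshold.
users = [{"name": "a", "spend": 5}, {"name": "b", "spend": 42}]
cheap = find_matching_subset(users, lambda u: u["spend"] < 10)
```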

Yet, 80% of my candidates take nearly 45 minutes and cannot deliver a workable solution without massive handholding, and I don't even get to my higher order, "real questions". The scenario of a coder who can't solve my warm-up being let loose on code I actively maintain makes my stomach churn.

Until I see the average bar for problem solving go up, I'm going to keep asking basic CS questions in my coding interviews. The job is to solve complex, typically ambiguous problems. Coding is one of the tools - and I want peers who understand the theory behind using those tools.

(I should note, I tend not to pay attention to credentials on a resume. I care more about ability to do the job than past history, though if a candidate has a master's in some field of CS, I might delve into it a bit out of curiosity... they are an expert after all.)

"I hate anything that asks me to design on the spot. That's asking to demonstrate a skill rarely required on the job in a high-stress environment, where it is difficult for a candidate to accurately prove their abilities. I think it's fundamentally an unfair thing to request of a candidate." Scott Meyers

I'll take this guy's word over yours, sorry. Your supposedly simple use case is given in a very different environment than a typical day on the job.

At no point did I suggest I ask anyone to design anything - it's a simple question with a simple solution (if they know how to program).

My opinion, of course, but a "good programmer" can think through the solution entirely in their head without hands touching keyboard. Code is the byproduct of the person's thoughts. I'm looking for people who think like programmers. If that bar is too high, I'm rather disappointed with the state of the programming community.

Some of us don't "slap together some semblance of a working CRUD system", but put a bit of thought into it in order to make a maintainable app that works as it should and is scalable when necessary. That requires plenty of knowledge, but algorithmic CS stuff isn't high on that list.

That requires plenty of knowledge, but algorithmic CS stuff isn't high on that list.

I'm not sure I agree – in my experience there is at least a direct correlation between 'knows about scalability, performance and maintainability' and 'knows about data structures and algorithms'. Maybe they aren't going to be able to tell you about the performance edge-cases of different sorts or whatever, but I'd probably still expect to see at least awareness of a difference existing.

>The average coder should be able to solve this type of thing, on their own, in about 10 minutes max, 15 with some feedback on improvements.

>Yet, 80% of my candidates take nearly 45 minutes and cannot deliver a workable solution without massive handholding, and I don't even get to my higher order, "real questions".

You need to ask yourself why you believe the "average coder" should be able to solve that because clearly your beliefs are not founded in reality.

This is what I cannot understand about interviewers who are constantly frustrated with the population's skillset: you obviously have higher skill standards than the average. That is fine. Just accept that you will have fewer hires, because none of you are capable of fixing the entire population's skill levels.

"You need to ask yourself why you believe the "average coder" should be able to solve that because clearly your beliefs are not founded in reality."

Oh, stop.

The problem he describes is trivial, and something that you'll encounter as an entry-level web developer on a regular basis. If you can't solve it, you're absolutely not up to the job. In fact, I'll go further: if you literally cannot find an efficient way to filter a list of stuff based on a criteria, you're not even a programmer yet. It doesn't matter if you've "written" a dozen toy webapps by stringing together NPM modules -- not knowing these basic things makes you a danger to any team that hires you.

You can't judge the quality of a test exclusively by the number of people who fail it. If you resume screen for "has written code before" and 80% of your applicants fail that test, is your standard set too high?

(In case you're wondering, that's not a hypothetical example.)

> If you resume screen for "has written code before"

I wish that was the resume screen criteria at all the places that ignored my applications.

The more I read these threads, the more I think resume filtering is part of the problem. It makes some sense: if the bad applicants have to apply to hundreds of jobs to get hired, they have likely learned how to game the resume screen.

Resume filtering is terrible - I tend not to be involved in the process since usually recruiters or hiring managers do this. However, when someone stops by my desk and asks "should we schedule a phone screen with this candidate who lists HTML, CSS, and jQuery, as programming languages" for a senior web developer position, I say no.

A huge problem is people list technologies as keywords on their resumes and rarely indicate they know what those words even mean. This makes screening next to impossible at scale (think 1000's of resumes coming in for a single position).

I know this isn't always the case, but how does one convey competency in a given language or toolset while fitting a resume on a single sheet? Especially if it's for a senior/lead or higher position?

There's a lot that's broken with the hiring process. One of the toughest things as a potential candidate is properly tailoring a resume to fit the bill without "gaming" the hiring process. There is no standard; only methods that work better than others in most situations.

My personal approach is to present myself as an individual with a large amount of skill and experience working in a specific programming discipline (e.g. web applications). I present a short list of the core tech I have experience with, ones I'm comfortable answering interview questions in. I do not list every tech I've used - that list would be half a page long and do no one any good.

I describe the projects I've worked on, what the problems challenges were, and how my work helped solve them. The format I use tends to work itself out as a simple narrative outline, and I tailor every resume to be the most relevant to the position I'm applying to, and I submit a cover letter specific to the job. This isn't "gaming" the process, this is doing your homework on the company/position and selling yourself as a viable candidate. If I'm weak in a desired skill, I call it out in the letter and demonstrate my past experience as an example of how I can learn new technologies quickly (something vital for any programmer, IMHO).

That said, after a certain level of experience, relying on your resume to get you in the door isn't going to get you the job you want in most cases. You want to meet people directly and apply via recommendations or referrals. Direct contact with peers working at the company you want to apply to matters a lot. Even communication with a recruiter is better than blindly submitting to a job post.

Is he talking about this problem (with a filter at the end)?


Because I have never done that. That's different than just running through a list and picking out items that meet a criteria.

Not that I can see. The OP said:

"find the subset in a given collection that matches this specific criteria"

So basically, a loop through a single table. That's as simple as it gets. You can make the problem more complicated, of course (e.g. "write a method to find the minimum and maximum ages of the male users"), but it's still pretty simple stuff.

A slightly less trivial "algorithm" question that should be equally easy for any decent programmer: I give you a document of English words. Write a function that counts the words and returns the top ten by frequency. Now solve the same problem when it's not a single document but a stream of words of unspecified length. Don't run out of memory.
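A sketch of both variants in Python (my own rendering of the problem; `collections.Counter` does the heavy lifting, and note the counter still grows with the number of distinct words, which is the catch picked up later in the thread):

```python
from collections import Counter

def top_ten_words(words):
    """Count word frequencies; return the ten most common as
    (word, count) pairs. Counter.most_common(n) uses a heap."""
    return Counter(words).most_common(10)

def top_ten_streaming(word_stream):
    """Same answer for a stream: consume it lazily so the full word
    list is never materialized (distinct words still take memory)."""
    counts = Counter()
    for w in word_stream:
        counts[w] += 1
    return counts.most_common(10)
```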

Depending on the wording it looks like his go to intro problem is actually easier than Fizz-Buzz as it is just filtering. For example say it was "Given a set return a subset where the values are less than 10." (such as set.Where(i => i < 10))

I'm hard pressed to imagine how this couldn't be done efficiently. Do you have an example which correctly solves the problem but takes too long?

The only thing I can think of is something like the output rewriting an array on update over and over.

Do you have an example which correctly solves the problem but takes too long?


  var AGESORT = rows.bubblesort("AGE ASCENDING")
  var AGESORT1 = rows.bubblesort("AGE DESCENDING")
  var MALE = "None"
  var MALE1 = "None"

  for row in AGESORT
    if row.isMale
      MALE = row

  for row in AGESORT1
    if row.isMale
      MALE1 = row

  print "Min age: " + MALE1.age
  print "Max age: " + MALE.age

Stuff like this is usually the result of thoughts like "Ok, I'll break this problem into two, then combine the pieces. First I'll find the minimum age, which is easy once I sort the rows. Done. Ok, now I can see I can slightly alter that code so that it finds the max age! Now I just put the pieces of code together and do a little code organization to put like with like."
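For contrast, here is a single-pass sketch with no sorting (field names are assumed for illustration), roughly what an interviewer presumably hopes to see:

```python
def male_age_range(rows):
    """Return (min_age, max_age) over male rows in one O(n) pass."""
    youngest = oldest = None
    for row in rows:
        if not row["is_male"]:
            continue
        age = row["age"]
        if youngest is None or age < youngest:
            youngest = age
        if oldest is None or age > oldest:
            oldest = age
    return youngest, oldest
```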

Good point.

Which problem? The first one? Don't underestimate the badness of the average interviewee.

For the "find the min and max of a set" in particular, a lot of folks start out with terrible solutions.

Well, either, I suppose. For the subset problem, as you say, a filter criterion can be applied while looping through the collection, and that should be it. Now if the question involved multiple criteria, I could see the solution diverging rapidly. If the filter can be expressed as a rank, we could sort the collection first and then binary search through it to find the boundaries. I suppose I'm just trying to imagine some way an interviewee may turn that into two loops, or three loops... more?
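The sort-then-binary-search idea can be sketched with Python's `bisect` (assuming the criterion is a threshold on a sortable key; this is my illustration, not the commenter's code):

```python
import bisect

def values_below(sorted_values, cutoff):
    """Given an ascending list, return the prefix strictly below
    cutoff via one O(log n) binary search for the boundary."""
    return sorted_values[:bisect.bisect_left(sorted_values, cutoff)]

data = sorted([12, 3, 9, 20, 7])   # [3, 7, 9, 12, 20]
small = values_below(data, 10)     # [3, 7, 9]
```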

I would like to see a solution to your problem with bounded memory.

In particular, the case where I want the top 3 words, you don't know the length of the stream, and you get random permutations of the same 4 words until I stop emitting them (where I will end by emitting 3 to break the tie).

That's not to say your problem isn't interesting -- just that while specifically constructing a problem as an example, you created one that's unbounded in required memory (in both storage per value and potentially number of values to store) and then demanded a solution that doesn't run out of memory.

I think most interview questions are similar nonsense.

"I think most interview questions are similar nonsense."

It's probably a good idea to be careful with your words when you admit that you don't know the answer to a question.

First off: your example (random permutations of the same four words) doesn't require much memory at all. So if you think it does, you're wrong. You might overflow your counters, but that's a different problem.

A stream of random gibberish is certainly more challenging. But the cardinality of the English language isn't infinite (the OED has about 230k words, and that's with a lot of words that nobody ever uses), so even a naive solution doesn't require "unbounded memory", as long as you take the problem statement seriously and don't do something ridiculous. That would be good enough to pass an interview.

But OK, let's say you do have a stream of random latin-encoded gibberish. What then? The problem statement is that you have to determine the top-10 words (or in this case "tokens") by frequency. The cardinality of the set is infinite, but the probability of duplication per token is small, and the output set is tiny. Do you really think you need unbounded storage?

In any case, even if you think a problem is "nonsense", it's probably true that the interviewer has thought about it more than you have. The part that frustrates you is highly likely to be the bit worth probing. A bad candidate will bomb out immediately; a decent candidate will provide a solid, if not perfect solution; a great candidate will solve the problem, see the broader theoretical aspects, and investigate those as well.

Do you really think you need unbounded storage?

This is an example of a trap that interviewers run into when they try to arbitrarily reword questions.

While I could be mistaken, an exact solution to the top-k problem requires O(N) space, where N is the number of distinct items. I can trivially think of a stream of tokens that would defeat any reasonable computer in both available memory and general "storage" (ex: 1 quintillion distinct words, then the next 10 words are duplicates of existing words, then end of stream). Since you asked for an algorithm and not a heuristic, it's clear you are looking for an exact answer. So the answer is "Yes, I would need unbounded storage for the new question as asked". Since this is an interview, I'd expect that you'd want me to give you the technically correct and accurate answer.

I've tripped up enough interviewers who have tried to slightly reword questions, but ended up changing their meaning.

Well, "algorithm" is just a way of saying "procedure", so a heuristic applies (IMO). I would accept a reasonable heuristic without much argument because the problem of random text is much harder...but honestly, if you got to this part, you've already passed the interview. A total flameout on this question is not even realizing that the storage scales with vocabulary size, instead of sequence length.

An exact solution to an unbounded sequence of purely random text probably does require unbounded memory (I say 'probably' only to hedge my bets here). But depending on the definition of "random", you can put pretty tight bounds on it and only be off by a little. I haven't done the math, but my gut says that a bloom filter, followed by incrementing counts only for positive hits from the filter, would scale well. There may be simpler approaches that make use of word length and the size of the latin alphabet (e.g. "all tokens of size N have 1/C(26,N) probability of colliding if letters are chosen from a uniform distribution, therefore...")

But again, if we actually had this conversation in an interview, there'd be no danger of not passing. Unless you were a jerk or something.
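To sketch the bloom-filter idea for the gibberish case (all sizes and hash choices here are arbitrary assumptions, not a tuned design): let every word pass through the filter on first sight, and only spend a counter on words the filter claims to have seen before. Singleton gibberish then never consumes a counter, at the cost of undercounting each word's first occurrence and a small false-positive rate.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: m bits, k deterministic hash positions per item."""

    def __init__(self, m=4096, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k independent positions from salted md5 digests.
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def count_probable_repeats(stream):
    """Only words the filter has (probably) seen before get a counter."""
    seen = BloomFilter()
    counts = {}
    for word in stream:
        if word in seen:
            counts[word] = counts.get(word, 0) + 1
        else:
            seen.add(word)
    return counts
```

A word occurring t times ends up with a count near t - 1, which leaves the ranking of heavy hitters intact, while one-off tokens only cost bits in the fixed-size filter.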

If your algorithm overflows counters and produces incorrect output as a result, then you don't have a correct solution.

There's no way to solve the unbounded stream case correctly with finite counters as you admit.

So you didn't actually present a bounded memory solution to the problem.

The number of bytes in a machine number is not the limiting factor here. You could use arbitrary precision numbers (with some constant factor of additional storage), and solve the precision problem.

Being pedantic about this point doesn't get you closer to a correct answer. Read dsp1234's responses. They are correct.

They're not the limiting factor in practice, but consider the limit behavior of the algorithm, i.e. how it behaves on an arbitrary-length stream: a fixed-length counter can overflow, and if you use arbitrary precision, it's no longer bounded memory. So again, you've failed to supply a memory-bounded algorithm that handles the arbitrary case (but won't admit it).

Again, I'm aware of how to solve it in practice and that the problem as stated is not what you meant, I was pointing out that as stated, it didn't have a memory bounded solution.

And that it's common for people to self-assuredly misdefine interview problems, then not understand when the interviewee points out their implicit assumption.

In the given case, it's actually the "English words" limitation that saves the question. An exact answer to top-k problems is O(N) in storage space, where N is the number of distinct items. So since N is ~200k, it's not a terrible problem to deal with. When N is 3, as in your given case, it's not really a problem. The general case, where the stream is unbounded in the number of distinct items, is much harder: say the stream is 1 quintillion non-repeating words, then starts repeating. This requires an approximation algorithm, but that is not likely the solution that was looked for, since the person specifically said "slightly less trivial", and the streaming approximation algorithm is more complex.

Though it could still be saved by flipping the question back onto the interviewer's phrasing. They said "don't run out of memory", not "don't run out of disk space". So you could also solve this problem by, for example, writing the data into a database, then using SQL GROUP BY and top-X functions to solve it once the stream has ended. But as an interviewer, I probably wouldn't be amused.

Hah! Hell, I'd even give credit to the SQL answer, if it came with the rest. ;-)

The only thing I'd add is that you can probably come up with a pretty simple heuristic or three to solve the problem for the case of truly random gibberish, without the need for elaborate data structures. But yes, this would be the answer of a great candidate.

You can't have finite counters per word, else you can't disambiguate between a counter-overflowing run of a single word followed by the pattern I described, and just the pattern I described, even though the difference affects the answer.

Your solution is incorrect because it fails to handle arbitrary length streams as specified -- as do the other glib answers.

My point was the specification didn't seem to align with the intended problem, and that interviewers tend to make such mistakes on the description, and then get mad when you ask them to clarify if we're talking practice or theory.

Similarly, you don't account for arbitrary proper nouns, which could theoretically be unboundedly introduced to an arbitrary sized English corpus. I guess you can debate which of these are "English" (itself a proper noun!), but interviewers often treat you as dumb if you don't divine their intent in such underspecified situations.

You can't have finite counters per word

You only need one counter per word.

a quick sketch of the basic algorithm is:

  1) Create an empty map from string to number, COUNTS
  2) Read a word from the stream into WORD
  3) If WORD exists in COUNTS, increment the number for that entry
  4) If WORD does not exist in COUNTS, add WORD to COUNTS with a number of 1
  5) If not at end of stream, go to 2
  6) Traverse COUNTS, keeping a running list of the 10 highest entries, into TOPCOUNTS (note that this and step 4 can often be combined to keep a running top 10)
  7) Print TOPCOUNTS
This algorithm is O(N) space where N is the number of distinct items, as I mentioned previously.
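For what it's worth, that sketch is only a few lines in Python (a Counter for steps 1-5 and a heap for the final selection; `top_words` is just an illustrative name):

```python
import heapq
from collections import Counter

def top_words(stream, k=10):
    """O(N) space in the number of distinct words: one counter per word."""
    counts = Counter(stream)                                        # steps 1-5
    return heapq.nlargest(k, counts.items(), key=lambda kv: kv[1])  # steps 6-7

stream = "the cat sat on the mat the cat".split()
print(top_words(stream, k=2))  # [('the', 3), ('cat', 2)]
```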

Again, you're assuming count has a bound size per word, which isn't true for an unbounded stream.

Step 3 of your algorithm either requires unbounded counters (i.e., a count can use an arbitrary amount of memory) or can overflow on arbitrary-length streams of words (and hence has cases where it produces the wrong output).

Hash tables FTW!

That's more advanced than the question I typically ask. Though I might use something like this if I'm interviewing someone with many years of experience (10+).

As I explained above, my question is a warm-up, meant to break the ice and calm nerves. But it's surprising to me how effective even a simple list traversal is at identifying weaknesses in a candidate's programming ability.

Depending on the platform, there are at least 2 ways to do that :-)

E.g. - "functional" is simply something like:

    subset = filter( predicate, superset )
... but on a "procedural" platform, it's a bit more challenging, starting with how much space to allocate for the result, or whether to mutate the input, and other swamp-wading.

But, yeah, I can see how it would get people talking, or else send them into "deer in the headlights" mode.
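In Python, for instance, the two styles look like this (a toy predicate, just to make the contrast concrete):

```python
def is_even(n):
    return n % 2 == 0

superset = [1, 2, 3, 4, 5, 6]

# "Functional": allocation and iteration are handled for you.
subset = list(filter(is_even, superset))

# "Procedural": you decide whether to build a new list or mutate the input.
subset2 = []
for n in superset:
    if is_even(n):
        subset2.append(n)

print(subset)   # [2, 4, 6]
print(subset2)  # [2, 4, 6]
```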

Just out of curiosity: how would you feel if I answered this question by using an existing function [1] from the stdlib? Would you consider that a (good) sign of knowing the tools, or would you prefer a fresh implementation?

[1] https://docs.python.org/2/library/itertools.html#itertools.c...

I see it as a positive if a candidate uses an existing solution to the problem. That's exactly what a good engineer would do in the real world.

But then I take away the library/method and re-state the problem. Because the point is to see if they can understand the problem, and devise a solution.

Cool. Thanks for your comments.

Heh, at my previous employer the only coding question we asked during interviews was: "Find the largest element in an array of integers." Some of the ways people found to not solve that simple problem were amazing.

Now, I'm generally against asking hard-core CS questions in a live interview... but a simple bozo filter is probably a good idea.
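For anyone keeping score, the expected shape of an answer is a single scan with one running value; the part worth probing is what the candidate does with an empty input (a sketch):

```python
def largest(xs):
    """Return the largest element of xs, rejecting empty input explicitly."""
    if not xs:
        raise ValueError("empty array has no largest element")
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

print(largest([3, -1, 7, 7, 2]))  # 7
```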

It really does surprise me how a simple programming question like this stumps the average interviewee. If they need a bit of guidance to get started, I never hold it against the candidate, but if they need massive hand-holding to come to a solution, or if they can't explain how their solution works and how it could be improved, then I know where they stand programming-wise.

It is a low bar skills-wise, and it tells you a lot within a very short amount of time.

You interpreted what he said as "average applying coder", but he probably meant "average employed coder".

The people that weren't successful during the interview today? They are going to have more interviews tomorrow and next week too. Doesn't mean they are representative of the average coder (most of whom are not being interviewed, because they have jobs already).

The population is skilled enough; it's just that, of those who are left, most are not (so it takes more time to fill positions).

I think the key to the questions asked at an interview is "relevance". How relevant are the interview questions to the actual job I am applying for? The offhand warm-up questions are fine, but in my personal experience of 18 interviews, I have not been asked a single question that I would consider relevant to the work I did/will do. Trees? Yeah. Design this and that? Yeah. Puzzles? You name it. Java specifics and countless Big O and Sigma questions. What about refactoring? Nope. Designing specifically for webapps? Nope. OK, optimizing legacy code? Nope. Database structure and query optimization? Nope. Data migrations? Nope. Multithreaded support for this piece of code? Nope.

The thing is, I have an arsenal of useful information and skills that I have learned over the years that will not be asked about in my next interview. I have personally been developing a list of interesting issues that I face on a day-to-day basis. I share it with as many people as I can, some of whom participate actively in the interviewing process. Many do not agree with me, and I do not hold it against them. I have started small and hope to make a change.

> implement a red-black tree from memory

Can we quit using this ridiculous example? I'd be willing to bet that less than 1 in 100,000 technical interviews have asked someone to implement an r-b tree considering a full implementation is 100s of lines of code.

Thanks for writing this Aline. As a recruiter for almost 20 years, I wish I had access to all my data and then the time to compile it, and anecdotally I'd expect the finding about MOOCs would be similar.

The most selective of my hiring clients over the years tended to stress intellectual curiosity as a leading criterion and factor in their hiring decisions, as they felt that trait had led to better outcomes (good hires) over the years. MOOCs are still a relatively recent development and new option for the intellectually curious, but it's not much different than asking someone about the books on their reading list.

Unfortunately, demonstrating intellectual curiosity often takes up personal time, so someone with heavy personal time obligations and a non-challenging day job is at a significant disadvantage. One could assume that those who have the time to take MOOCs also have time to study the types of interview questions likely favored by the types of companies represented in this study.

Thanks for continuing to share your data for the benefit of others.

"Intellectual curiosity" was my primary attribute for many years, particularly when I was young and didn't really know any better. It makes sense, doesn't it? Someone who is interested in learning for learning's sake will be a better developer!

Except I'm convinced it's not really true. It's something that is horribly subjective and really self-selective. It's funny, intellectually curious people often have exactly the same interests as whomever is doing the interview. I find that nearly everyone loves to learn, you just have to find the thing they're interested in learning about.

The signal that I have found to be a great indicator of success on my teams isn't about curiosity at all. It's about attention to detail. In the world of scatter-brained developers who never seem to really follow through on anything, it's those guys that are the real unicorns.

Our interview process is now designed to bubble that to the top. Vague programming problems with poorly defined requirements provide a platform by which we can see how someone digs into problems. I'll ask for them to send me a couple of things after the interview, it's a really good signal when they pull out their phone and add it to a to-do list.

Those guys may not always be the "smartest" or the most interesting, but man when you're going to spend months working down a really large project they get stuff done.

You had me until, "it's a really good signal when they pull out their phone and add it to a to-do list."

Because it's just one of many possible courses of action for staying organized. For all you know the guy added it to his to-do list on his way to the car, or maybe he employs some other mechanism. While you could argue that an interview is a showboating environment where one is expected to signal certain desirable traits, I personally resent that these attributes seem to manifest in recruiters as specific actions that reinforce their perceptions of a good candidate -- if someone is unconventional, then how is looking for specific signals going to tell you anything useful about them?

It's a signal. It's not a binary yes/no, there are many observations we're making throughout the interview. This is just one, and the to-do list isn't the only correct answer. Asking me to email those details to them is another possible right answer. Writing a note in a notebook is fine.

The key thing to remember here is that we're asking this very much in the middle of the interview, and I just want to see that they have some sort of system in place for remembering these things.

In my experience, most of the time, simply going "sure, I'll do that" and depending on memory to actually get it done is a negative signal. Maybe they are particularly gifted in terms of memory, but in my experience that is rarely the case.

In hiring we're willing to accept false-negatives, but never false-positives. You are correct, I may let a perfectly good candidate walk, but I'm willing to accept that as the cost of hiring a bad one is just too high.

I sincerely think the downsides of a false positive are vastly overrated.

IMHO the real problem is the strict barrier between people who are part of the company and those who aren't. Imagine if we had a more permeable membrane, with different departments hiring and firing as they see fit (a faster flow than this structured hiring). This would test candidates on working in the real environment rather than the whole magic and mysticism of made-up work scenarios in interviews.

In fact I wish we had a more libertarian system inside companies with more free markets rather than all the central planning which happens now

I would not have written that down, but I can assure you that I would remember to send you whatever you asked: an interview setting is very special, and I tend to get hyper-focused. I wouldn't leave there wondering the typical "man, that was intense - wonder how it went??" and then forget to send you the things you asked for!

Unless your request was very specific and strange ("send me the completed k3-45 and t9807 forms"), in which case I'd have to jot /that/ down.

Should you not rather evaluate whether your request was fulfilled?

So the requests are often a bit strange (relevant to the interview, though). Usually it's asking for a citation for some claim made during the interview. I also make a point of asking in the middle of the interview, so there's lots of time to forget.

Again I do want to stress: This is a small detail as part of a larger interview. Screwing this one thing up isn't going to sink you, but we combine it with a bunch of other signals and observations to make decisions. I've been doing this for several years, and I'm very happy with the results.

> It's about attention to detail. In the world of scatter brained developers who never seem to really follow through on anything, it's those guys that are the real unicorns.

I've done a few hundred technical interviews over the years and this is absolutely true. It's these guys who make production ready systems possible more than anybody else.

As it happens, I have a poor temporal memory, so the most important tool I have is my notebook: I always carry it and jot down everything that gets said, particularly as my previous role involved coordinating and managing teams and projects.

> Unfortunately, demonstrating intellectual curiosity often takes up personal time, so someone with heavy personal time obligations and a non-challenging day job is at a significant disadvantage.

This is true, and one of the things that tends to stack disadvantages against people who are older in the industry. There's a good correlation between free time and age in some ways, as people get to mid-career they tend to marry, have kids, and so on. Maximum free time comes during college and immediately after, for many people.

Valuing intellectual curiosity also tends to mean that this curiosity is valued over direct experience, which is what the older more time-strapped people have to offer relative to younger people who are doing the MOOCs.

It'd be nice if we had on hand a lemma that stated that intellectual curiosity, once "proven" as a property of one phase of a person's life, will remain as a property of that person in all later phases, even if the indicator signal for it disappears. Then we could just look for people who look like they once were intellectually curious (perhaps in college) and then hire them on the basis that they'll be able to pick it right back up if given the time and the motivation.

Of course, nobody has ever done any experimentation to prove such a hypothesis so that it could be used as a lemma.

I often wish there was some sort of prediction market of I/O-psychology hypotheses that you could just throw a random prediction onto. The stakes on the most contentious results would then prompt the investors to leverage their bets with research organizations into grant-funding to study those questions.

It would be nice, but I think we'd be hard-pressed to ever agree on what intellectual curiosity even is. It's one of those "I know it when I see it" things. I could evaluate whether a person has it (to my standard) but would someone else accept my call, 5 years later, for their company?

A proxy might be "did you study something non-trivial, without your work forcing you to, in last 1-2 years?"

I start a lot of things like that (currently trying to learn Rust). A lot of the time other things take over in my spare time and I never actually get anywhere. Having a job where you can put consistent practice into new technologies makes a big difference. A week at work is probably worth more than a month of my spare time.

I see this pattern time and time again in both my recruiting work and in my resume business. Someone gets a rather non-challenging job that pays market rate or better, starts a family, pays little attention to any tech outside their day job, and suddenly finds that their marketability outside the current job has suffered greatly.

As developers get older, they need to be conscious of the relevance of their work in regards to current trends, and if the day job is largely irrelevant they may need to find access to projects (often unpaid) that will serve to impress new employers.

I don't think that the value of experience has truly diminished, but I think what is now classified as 'experience' may have shifted a bit.

I'm sure this happens, yes. But, that's not what's going on here, IMHO.

Asking a person to traverse and modify a linked-list (a very common problem) or to implement a B-tree are not 'tech outside their job'

Most of these interviews are more like math problems than anything a practical developer would encounter.

Shit, if these interviews were "What do you think of node?" or "What is Go's concurrency model?" I think experienced people would do fine.

> Asking a person to traverse and modify a linked-list (a very common problem) or to implement a B-tree are not 'tech outside their job'

No. No one in their daily jobs implements a linked list or B-Tree. There are predefined libraries in the languages or a user created library that people use. Do you really think every person in the company has his/her own version of a linked list in the same code base? This is a fallacy perpetuated time and again. (I'm not against these type of questions, but they don't represent someone's true ability)

Building a product, app, or project requires application of this knowledge. E.g., I need to design a distributed data store: should I use a hash map? What about collisions? Should collisions be resolved by chaining (linked list) or another map? What kind of data am I storing? Can that be exploited to make this more efficient? What about synchronization of data across nodes? Etc.

Many of these interviews completely overlook someone's design ability and harp on some straightforward (and some obscure) topics, which IMO have little to no correlation to someone's ability as a software engineer.
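The chaining design mentioned above is easy to sketch once you've actually reasoned about it; a toy version (fixed bucket count, no resizing) might look like:

```python
class ChainedMap:
    """Toy hash map resolving collisions by chaining (a list of pairs per bucket)."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))       # collision or new key: chain it

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

With only 2 buckets, every insert collides with something, and lookups still work; that's the whole point of chaining.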

this, so this.

Why is everyone stuck on getting someone to reinvent the wheel as against getting him to use the said wheel and get things done.

I find the processes in a large company rather robotic (note: we're a startup, so my views might be biased).

>Why is everyone stuck on getting someone to reinvent the wheel as against getting him to use the said wheel and get things done.

A person who can only use a pre-made wheel is a technician (regardless of how ostentatious their job title may be).

A person who can invent the wheel on demand can also usually invent any other type of rolling mechanism one might encounter a need for and that makes that person a full-fledged engineer.

No value judgement of either person is implied; it's simply a statement of capability. Which type of person is needed depends on the company and the position but it is becoming increasingly clear that the two roles in software are separate, just as they are in other engineering disciplines.

> No one in their daily jobs implements a linked list or B-Tree.

Counter case: HFT. Those guys really do implement this stuff. Again and again.

Which is why it's an outlier.

Asking a person to traverse and modify a linked-list (a very common problem) or to implement a B-tree are not 'tech outside their job'

Yes, it is.

More to the point, it's built-in bias toward recent college graduates. Once someone's actually been out in the real world programming for several years, any space in their brain's metaphorical working set that was ever dedicated to remembering how to do this stuff has long since been paged out in favor of the actual day-to-day knowledge they need to do their job. Implementing basic data structures and algorithms is not part of what they need to do their job; that stuff's already provided by the platforms they use.

So when you get to the interview, if you just put someone on the spot and ask them to do one of these things, you're inherently biasing toward an inexperienced recent college graduate who will have this stuff close to the top of their head (on account of having just done it in school and not yet swapped it out of working memory through time on the job).

Which is in turn why there are now books you study -- as an already-qualified programmer -- in order to freshen your memory and ability to quickly regurgitate problems that have nothing whatsoever to do with your day-to-day work, in order to be able to perform them on command in interviews.

Right, as a metaphor, I'd almost compare it to grading a novelist or writer's grammatical knowledge of parts of speech and sentence structure rather than their ability to write a book or compelling article.

I'll be honest, traversing a linked list and implementing a B-tree (assuming someone reminds you what a B-tree is) are not particularly difficult problems. If you have trouble with those, I'm going to really wonder whether your programming skills are very good to begin with.

Those are bad examples, but real interview questions tend to be "implement from scratch, live and without reference to any materials, these things you've forgotten about because you could take them for granted for such a long time".

Also these days the trendy interview problem is longest common subsequence, which ends up testing "how recently did you review dynamic-programming techniques", not "can you code".
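For reference, LCS is the canonical two-dimensional DP table; trivial if you reviewed it last week, a wall if you didn't:

```python
def lcs_length(a, b):
    """dp[i][j] = length of the LCS of a[:i] and b[:j]; O(len(a) * len(b)) time and space."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```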

> I'll be honest, traversing a linked-list and implemented a B-tree (assuming someone reminds you what a B-tree is) are not particularly difficult problems

I'll be honest, this is indistinguishable from "I am good at X and I'm great therefore X is a good measure of ability".

Maybe worth rephrasing

They're not particularly difficult. You usually take the algorithm definition and implement it, as you did at university or in the Coursera algorithms course. Fun, and you might learn a thing or two about the language you're using.

If the point of the exercise is a TRIVIA question, leave the interview. On the other hand, if the point of the exercise is not to get the right solution but see how you think and how you use basic programming blocks, then you're in the right place.

I've used the linked list question. The goal is not for me to check whether it is "not difficult" for you, because I might not be able to teach you what a linked list is in 5 minutes while you're nervous in a technical interview, but to work on technical problems and see the following:

- Can you speak your mind so your peers know what are you doing?

- Will you ask for help?

- Will you accept help and feedback?

- Can we have a discussion about something that has many "right answers"?

- Do you know the basic building blocks of the language you decided to use? <-- This plus javascript plus references is very funny.

You refer to the interviewers as 'peers', but that's very much not the relationship that exists in an interview. The context is different, therefore the behaviour is different.

What you're describing is a confidence bias filter, dressed as an objective test.

Individually, with minor refreshing-study, perhaps no.

But in practice it's merely one of hundreds of things that might get instantly sprung on you as you stand before a whiteboard. In addition, developers with actual jobs can't stay in a continuous cycle of "exam cram" to the same extent that unemployed recent graduates can.

Traversing a linked list is far simpler than implementing a B-tree.

The problem is that it's possible to run a B-tree implementation and see if the unit tests pass, and non-technical interviewers can fairly easily compare the official solution to what the candidate is talking about.
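On the traversal side, it really is a handful of lines, which is part of why it works as a warm-up (a sketch with a minimal node type):

```python
class Node:
    """Minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def traverse(head):
    """Yield each value in the list, front to back."""
    node = head
    while node is not None:
        yield node.value
        node = node.next

head = Node(1, Node(2, Node(3)))
print(list(traverse(head)))  # [1, 2, 3]
```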

I've never used Go and don't really plan to unless I have to. Nothing wrong with it, it's just not my area of expertise and I'm busy with other toys, although I'm sure it's a very nice language. If I went up against a non-technical or technically weak interviewer (aka almost all of them), I could probably BS my way through based on what I do know about concurrency in general. Although, after decades of experience, the reason I'm not still programming in 6502/Z80 assembler is that I'm good at that whole "BS my way through till I'm an actual expert" thing, so maybe I'm talking myself into agreeing for non-obvious reasons!

BSing skill is underrated. If you can't BS your way through something that you don't entirely understand (because you're lazy), then when you're at the cutting edge, developing or debugging, you'll have trouble BSing about stuff that no one on the planet understands. Not all the world's problems have a Stack Overflow question or a script reader at a support hotline, and how you deal with that gut check says a lot.

Actually BSing in tech makes you look very poor if you don't come out with a convincing answer.

(I listened to a training video given by a Marketing Cloud consultancy at my work. Someone asked "what is FTP?" after seeing that as an option to upload files. Lots of waffle, including "the cloud", and no actual answer. The person asking the question was no wiser, and my opinion of the consultant went way down.)

I think my words may have been clumsy. My reference to "tech outside their job" was meant more about having relevant skills to a current job market (marketability), not the interview itself.

That's a problem with... "people looking for shiny things". There are many shiny things, and they lose their shine quite fast.

It is true that many managers, recruiting agencies (almost 100%), and HR, will look for the keywords of the technologies the company is using. So if they use the new Angular 4, they will look for that.

But there are companies that look for the basics: do you know javascript? Have you worked with SPAs? And these are the ones that IMHO deserve your time to apply.

Because today's trends will be stale tomorrow. Companies looking for people who can adapt say a lot about what kind of workplace you will find there.

One thing that annoys me at present is NoSQL experience (or similar).

I know enough about relational databases and NoSQL to know that unless you have a specific use case, relational is likely the better choice. If you are not scaling beyond one server and want to view your data from different angles, it's almost certainly a better choice. As such, I haven't really got a great deal of NoSQL experience. I will build a better application than someone who does resume-driven development, but I won't get past many HR drones.

Interestingly, one of my 'no' signals is people relying on a DB to do the work for them. Don't know how to index this? Shove it in the DB and query it back.

If you don't have a relational query in mind a relational DB is a bad fit, or at least premature optimization.
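As a trivial sketch of what I mean by indexing it yourself (made-up data and field names, just for illustration): if all you ever do is key lookups, an in-memory index beats round-tripping through a database.

```python
# Records as they might arrive from a file or API (hypothetical data).
records = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": "Grace"},
]

# "Indexing it yourself": build an in-memory index keyed on the one
# field you actually query by, instead of shoving it all into a DB.
by_id = {r["id"]: r for r in records}

# O(1) lookup, no query language, no second system to tune.
print(by_id[2]["name"])  # Grace
```

The point is not that this replaces a relational DB in general; it's that when the access pattern is this simple, reaching for one is premature.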

I am not quite sure what you mean by your comment, as "relying on a DB to do the work for them" sounds like a good thing to me. It's usually way faster than doing stuff in the application code, and the declarative nature of SQL usually means fewer bugs.

> I am not quite sure what you mean with your comment, as "relying on a DB to do the work for them"

Maybe "to do the thinking for them" would be more accurate.

> sound like a good thing to me.

You did what they do: you assumed, based on your usual workloads, that a DB (with the overhead of using another, non-compile-time-checked language with poor integration and type mismatches) is the solution to the problem I'm working on.

What I expect them to do is ask questions. How many values are there? Are they complex types or something that would map to tables well? What are the querying requirements? Are there any relational elements to the data?

> Its usually way faster than doing stuff in the application code

If shoving things across two sets of pipes to a general-purpose app not tuned to your workload is faster, you're writing your main app incorrectly.

> the declarative nature of SQL usually means less bugs.

A declarative language usually means fewer bugs. But adding a second, mismatched language usually means more. Especially when it comes with a whole new system that takes experts to properly tweak.

I don't think curiosity diminishes with age, even if the ability to explore that curiosity may. You can still talk to someone and find out if they are interested in experimental technologies, what they choose to tinker with in the limited discretionary time they have, and so on. You can ask about their opinion on new technical developments and whether they follow any trade publications or forums and the coolest things they've found from those places in the last few months.

Since for programmers at least, such information is not hard-separated from the course of their normal work, they will usually have the opportunity, if interested, to be at least partially informed about such things and have some sense of the overarching zeitgeist. For instance, a C# developer would be fully justified in reading about the changes slated for the next iteration of C# during his/her day job, especially as that iteration neared and achieved release.

On top of this, professional engagement and awareness IS something that deserves out-of-work nurture time, even when you have a family or other demanding non-work obligations. You don't have to have an impressive free time project like a hand-built CPU, but expecting someone to put in a few hours a month to tinker, learn, or explore new things related to their field is not unreasonably demanding IMO, even for experienced/busy professionals.

Obviously "participated in a MOOC [Massive Open Online Course, btw, for those curious]" may indicate curiosity (though it may also indicate someone trying to short-circuit their lack of skill by gaining Yet Another Credential to ride off), but it's far from the only signal.

The best employee is an experienced person who has retained their intellectual curiosity, even if they don't have the time to fully exploit it.

> I don't think curiosity diminishes with age, even if the ability to explore that curiosity may.

Anecdotally I'd assume that intellectual curiosity often increases with age, although that might be attributed to the availability of learning materials now compared to years ago. Some people here on HN would likely be shocked as to how many people in the industry consume almost zero content outside of what is necessary for their jobs.

> Some people here on HN would likely be shocked as to how many people in the industry consume almost zero content outside of what is necessary for their jobs.

Seriously. People significantly underestimate how much better the average HN user is than the average developer.

The median software developer sees programming as just a job and spends absolutely no time on developing their experience outside of work.

> I don't think curiosity diminishes with age, even if the ability to explore that curiosity may

I think it definitely does, depending on the person. Many, many attributes of the person change over time. That's what growth and evolution is about!

Some people take a career path where they become expert at a narrow topic. As they master their field, I think they lose a lot of curiosity because there's simply less to be curious about unless they're on the forefront of research and not just a practitioner.

Other people simply develop other life priorities.

It's 100% an individual thing.

Outside of research and for fields outside of compsci, how else can applicants demonstrate "intellectual curiosity", and can you just expand a bit on what that means?

Thank you, and thanks Aline!

To me, intellectual curiosity is basically "learning for the sake of learning". In other words, picking up a skill or researching a topic that interests you without having a real need to apply said skill/knowledge professionally or personally.

Often the research may be tangentially related to their work, but it doesn't have to be in order to be 'credited' as intellectual curiosity.

Outside of CS, demonstrating curiosity can be almost anything. A non-CS person learning a technical topic is actually a decent example. Non-programmers that buy Arduino and do some home automation project.

It can be almost anything I'd think. Many people don't dedicate much personal time to learning. Demonstrating that learning can be challenging.

I have clients that require interviewees to do a presentation on anything at all during the hiring process (both non-tech and tech employees). Some have demonstrated game strategies they've studied.

I don't quite understand why recruiters ask about the books on your reading list.

What you do at your job during the day is usually imposed on the employee. Few employees get to choose the languages, tools, etc. that their company uses, so their interests aren't always relevant to their day jobs.

Your reading list on the other hand is entirely up to you. It's likely an indicator of something to the person asking the question. People who don't like math (or aren't skilled in math) aren't reading advanced math textbooks for pleasure.

I don't understand that either. I read papers, not books.

I am perplexed why anyone would think that interview performances has any interesting statistical relevance. Much more interesting would be how successful the candidate was after receiving a job at the company.

For two reasons:

1. From the perspective of a job seeker, the interview is what gets you the job, so it's in your interests as a job seeker to learn how to look better as a candidate based on data about how interviewers judge candidates.

2. The purpose of an interview is to be a predictor for job performance, so while this article doesn't address this part of the question, you want interviews to predict job performance, and how to make interviews do that is interesting.

The conclusion is a bit obvious: if you ask people algorithms questions, the people who do the best will be people who spent a lot of time rote learning algorithms material. This means schools which place a lot of emphasis on the theoretical core or people who've taken online algorithms courses.

I'd be much more interested to see performance on other questions e.g. Google's typical curveball for new grads is an architecture question like "How would you implement YouTube?".

These are not curveballs though. Nowadays it's just another category of interview questions: System Design. There are tons of resources to prepare for this type of question, just like with the Algorithms & Data Structures problems.

You're right.

It is weird that so much effort is spent on interviewing and so much effort is spent on performance reviews. But AFAIK, hardly any organization tries to use performance evaluations of current employees to inform its hiring decisions.

I've honestly stopped working for places that interview based on whiteboards and brain teasers. I think it leads to a rather arbitrary selection of candidates and pretty terrible teams.

That is great to hear. I have personally turned down opportunities that ask for these high stress ridiculous tests in front of a bunch of strangers.

To get a job at a company you have to be given the opportunity to interview, and to succeed at the interview. If you only evaluate performance at the job level, you don't evaluate performance of anybody who failed the interview. This is one of the fundamental challenges in hiring - you have a biased sample when trying to determine how effective the hiring process is at screening candidates.

Because interviewing skill is how you get a job.

If I am looking for a job, I care a whole lot about what would make me better at getting jobs.

Interviewing skill is one way you get a job. After you've been doing this for long enough, and have a reputation, you tend to just get jobs by referrals.

I decided in my last change of positions to try interviewing for jobs like most people without referrals, since I had several other referral positions open.

And I was horrible. I was ridiculously bad, and couldn't get a job at jobs that paid less, and were for less skill. It was quite hilarious.

Thankfully, all of the referral jobs were begging for me, and that was fine. There was no real loss, but it was a bit concerning that if I didn't have an extensive connection network that I would be in a significantly worse role.

Which parts of the interviews were you really bad at? How would you change the interviews you had so they don't reject qualified candidates?

I've been interviewing a lot of developers lately, and while I'm fairly happy with my process (I've managed to avoid bad hires so far), I always have to wonder if I'm turning down people I shouldn't.

The technical question part. Because I work on so many different things at the edge of innovation, I have a wide array of experience for a short period of time, and then I move on.

When I interview for jobs, I open up to any of the possible things in my background, of which it will take me perhaps 2 weeks to get back up to speed. But I am not going to take that 2 weeks for each job I interview for.

I have trained my brain to learn things, and discard them keeping only the current tasks in memory. The rest I know how to obtain again.

This means that I get asked questions I know that I've done repeatedly in the past, but can only vaguely explain without a refresher.

To the interviewer, this means I am underqualified at best, and at worst, lying about my past skills.

A lovely example of this during the process: I was asked a fairly basic SQL question that I failed to explain when I had just landed from my second flight of the day. I didn't get the presales role working with the database (non-referral job).

However, I did get offered a job working on development for that database, as well as product management for the database.

So I couldn't get a presales role, one I've done in my past, because I couldn't answer a fairly basic SQL question. I knew exactly how to answer it (the Venn diagram of SQL joins, a half-second Google away), but in an interview, I blanked on the answer.

That is indicative of my experience with non-referral positions. I've given training on SQL joins a good 30 times in my life, and couldn't explain a join in an interview.
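For anyone curious what "the Venn diagram of SQL joins" refers to, here's a minimal sketch using an in-memory SQLite database (table names and data entirely made up, not the actual interview question): an inner join keeps only matched rows, a left join keeps every row from the left table.

```python
import sqlite3

# Toy schema: customers and their orders (hypothetical names/data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0);
""")

# INNER JOIN: only customers who have at least one order (the overlap).
inner = conn.execute("""
    SELECT c.name, o.total FROM customers c
    JOIN orders o ON o.customer_id = c.id
    ORDER BY c.name, o.total
""").fetchall()

# LEFT JOIN: every customer, with NULL where no matching order exists.
left = conn.execute("""
    SELECT c.name, o.total FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    ORDER BY c.name, o.total
""").fetchall()

print(inner)  # [('Ada', 25.0), ('Ada', 40.0)]
print(left)   # [('Ada', 25.0), ('Ada', 40.0), ('Grace', None)]
```

Trivial on a good day; apparently not so trivial after two flights.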

Just the way it goes for me, thankfully it's not actually impactful in my life.

"I hate anything that asks me to design on the spot. That's asking to demonstrate a skill rarely required on the job in a high-stress environment, where it is difficult for a candidate to accurately prove their abilities. I think it's fundamentally an unfair thing to request of a candidate." Scott Meyers

This is an important data point surveys like this ignore.

Most hiring managers would hope that interview performance is at least partially correlated to subsequent job performance. And once experience is gained as an interviewer and team manager, it is not an unreasonable hope.

The problem is that experience isn't very useful without feedback. And managers and interviewers get so little feedback about how well they do as interviewers I would imagine they don't improve much.

" interview performances has any interesting statistical relevance"

I'm perplexed as to how people don't think it has any relevance at all.

The entirety of every industry is based on 'interview performance' as being one fairly important factor of future success.

Interviews are obviously not perfect predictors, but they are pretty strongly correlated with outcomes.

Someone who does very well in a technical interview is more likely to be a 'good engineer' than someone who fails miserably - though I totally accept that is not always the case.

There's no doubt we could use a little more science to our approach, however.

That, I'm afraid, would be too rigorous for this blog.

Interesting article! Some minor statistical pet peeves:

1. Setting non-significant bars to 0 seems fishy. Leaving them and putting confidence intervals on everything would let them speak for themselves.

2. Calling something effect size is ambiguous. That's like saying you measured distance in units (and the wiki article on effect size linked makes clear there are a billion measures of effect size).

I'm guessing their measure of effect size were the beta coefficients in a multiple regression?

Not minor at all. It's very easy to mislead with statistics, intentionally or not, and that's why rigor is needed. No details are given for the analysis, not even basic things like the definition of what "top university" or "top company job" means.

It's just amazing to see how many positive comments a post like this gets without even the hint of methodology.

Interesting bit on the MS degree. I followed the link, and I'm not quite as surprised that the correlation is poor, or even negative, given the way the data was collected and analyzed.

Absolutely agree that some MS degrees are pretty much less rigorous cash cows by now, that allow students to skip the fundamentals such as data structures, operating systems, and compilers.

However, many CS MS degrees actually do require this as a background, to the point where some programs have emerged to prepare non-CS majors for MS degrees, kind of like those post-bac premed programs. It's hard to believe that those MS degrees, which require a decent GPA in those core courses, along with high GRE scores (sorry, but we are talking about interviewing skill, which may be more related to exam taking ability than job performance), wouldn't result in a similar profile to people with CS degrees from top schools.

This is fully acknowledged in the text of the article referenced in a link, but unless people follow it, I do think the message may be a bit misleading.

That's an aside, though. The value may very well be in the prep for these degrees (ie., the post-bac CS coursework required for admissions to a reputable MS program). If you can get that through online courses (udacity or coursera) through genuinely rigorous self-study? Yeah, that might do it, for far less money. I've audited a few of them, and they're the real deal, that's the real coursework there.

I got a CS master's, and even though it was a worthwhile experience to me, I never expected to necessarily make more or be chosen over another candidate purely based on education (assuming all other things being equal).

I felt like my knowledge definitely deepened during the process, with graduate-level data structures, architecture, operating systems, and networking; the second pass on some of these areas helped, despite my getting mostly A's from a generally-considered-challenging but now well-known program. I took the opportunity to craft my program of study more than I would have been able to as an undergrad (both due to lack of knowledge and because undergraduates are given a lot less leeway) to include electronics and automation courses. It also gave me the opportunity to work with and mentor other students, which was rewarding and I think has benefited me in and out of the workplace.

I wouldn't argue for doing it purely for career reasons, and it's not something to do just because you've been in school so long you don't know how to do anything else, but I found it worthwhile; you just need to consider what you want out of it.

> Absolutely agree that some MS degrees are pretty much less rigorous cash cows by now, that allow students to skip the fundamentals such as data structures, operating systems, and compilers.

At what point do we not consider operating systems and compilers "fundamental"? What percentage of CS/programming jobs require deep knowledge in these arenas?

Absolutely I think they still should be considered fundamental.

By definition, these are the subjects that allow you to keep going when your abstractions leak[0].

To say a job "requires" them is pedantic, but what's the point of calling something a CS degree if it is not at least signaling an understanding of these fundamentals.

[0] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

That's a really complicated topic, of course. CS is in an interesting spot, in terms of how people perceive it and what students expect from it.

On one extreme, people see CS as a kind of trade school housed in a research university, teaching only what is useful on the job. Some people see CS as professional degree, kind of like law - you must teach theory for students to understand the field, but in the end, you're producing practicing lawyers, not abstract thinkers. And then, at the other end of the distribution, you find people who see CS as an academic field, a branch of scholarship, where the question "what percentage of programmers need to understand compilers" would be akin to asking "what percentage of math majors who become actuaries use real analysis?".

I did general engineering in undergrad, and software engineering (not CS) for my master's, so I'm somewhat of an outsider to a "computer science" degree. But if you have 6 whole years to turn someone into a "computer scientist," I would expect that, in ADDITION to compilers and operating systems, they'd also have a healthy dose of physics, chemistry, and especially math. I mean, for crying out loud, the degree has the word "science" in it.

My freshman year physics class had us building computer models of the solar system from Kepler's laws, constructing vocoders by manipulating raw sine waves, and discovering the equations that governed op-amp circuits through experimentation and a little deep thinking.

At my last job, one of my first tasks was to take raw data from truck engines' computers (a data point every 300ms) and turn point measurements like "instantaneous velocity" into answers like "how far has the truck traveled over the last week?" Trivial. By combining that with engine speed and fuel usage, you could also answer things like "what gear is the truck in?", "has the engine performance declined recently?", and "what percentage of the time has the driver idled in the last month?", then roll all that data up into an efficient database, build a super-flexible API around it, and create a series of web apps to display the useful stuff that managers wanted.

Having a deep understanding of the physical relationships between measurements, understanding what the data can and can't tell you, being able to spot potential problems in a proposed project (for instance, something that relies on an impossibly accurate GPS in a subtle way) -- these are all things that computer scientists absolutely should be able to do.

Yes, they should understand their computers from the electrons on up -- operating system included -- but they should understand the world that the computers interact with, as well.
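The velocity-to-distance step above boils down to numerical integration over the fixed sampling interval. A toy sketch (entirely made-up numbers and function names, not the actual system) using the trapezoidal rule:

```python
# Samples arrive every 300 ms, per the description above.
DT = 0.3  # seconds between samples

def distance_traveled(velocities_mps):
    """Approximate distance (meters) from instantaneous velocity samples
    (m/s) via trapezoidal integration: average each adjacent pair of
    samples and multiply by the sampling interval."""
    return sum(
        (v0 + v1) / 2.0 * DT
        for v0, v1 in zip(velocities_mps, velocities_mps[1:])
    )

# A constant 10 m/s over 10 samples spans 9 intervals of 0.3 s each,
# so the truck covers roughly 9 * 3 m = 27 m.
print(distance_traveled([10.0] * 10))
```

Understanding what that approximation can and can't tell you (sampling error, missed data points, sensor noise) is exactly the kind of physical reasoning the comment is arguing for.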

from Latin fundamentum "a foundation, ground-work; support; beginning." Whether these subjects are fundamental is totally detached from whether they are useful in a certain percentage of industry jobs.

I agree that knowing fundamentals is not that essential for a large number of programming jobs. They're still fundamentals though.

It should be optional or advanced-level. If you are a web developer, for example, OS and compilers are totally not important for getting along and having a healthy career.

I disagree, compilers are relevant to web development because of all the compiling and transpiling that a single source file goes through in large application development.

If you are interested in _writing_ compilers then sure. If I just want to use those compilers, then I don't really need to understand them just use them as specified.

Now you'll say that I'll make a mess if I don't know what the compiler does. That's what best practices are for if, say, I pick TypeScript. Also, optimization is the compiler's responsibility.

What about the way she collected and analyzed the data caused the correlation to falsely be poor or negative?

Nothing! The author explicitly states that the data and analysis were not granular enough to make distinctions between the different kinds of MS degrees.

This leaves open the possibility that some categories of MS degrees might correlate positively, even though MS degrees overall correlate negatively.


>>>No surprises here. I’ve ranted quite a bit about the disutility of master’s degrees, so I won’t belabor the point.

is literally all she says about MS degrees. Wow, the bias of this site is incredible.



Is what she says about MS degrees in another post. No data or analysis is shown, just opinion.

I'm curious what would happen if they looked at, as separate categories, people who mastered out of a doctoral program, people who did the 5-year BS/MS, and people who obtained a doctoral degree as well as a master's.

That could be useful. The article mentioned that there was no distinction made between CS master's degrees and Info Systems degrees.

My guess is that a pure MS would correlate positively for CS programs that require a background in more advanced math and algorithms. Info sys degrees typically don't.

I'm not knocking those degrees, keep in mind we're measuring performance on a technical interview, which often leans very heavily toward algorithms, trees, sometimes graphs, and occasionally numerical analysis or computing.

If the MS degree data includes a lot of programs that don't include this background, then this would not be a surprising result.

Agreed completely. I would suspect that BS/MS and Ph.D. interviewees would be very distinct from a straight MS, but this was a missed opportunity to look at that, even if it is just interview performance.

Um, did you actually look at the article? There was no data shown between interview performance and MS degree status. The graph you saw was interview performance vs. years of experience. This isn't surprising since a person with 20 years experience will generally interview for a much more challenging job than someone with 0 years.

Yes, I read the article. Under the section "Master’s degree & years of experience", the author writes:

"No surprises here. I’ve ranted quite a bit about the disutility of master’s degrees, so I won’t belabor the point."

And provides a link to another post the author has written. Look for ("disutility of masters degrees"). That page will contain more of the content we are discussing.

This is what I was referring to when I wrote:

"This is fully acknowledged in the text of the article referenced in a link, but unless people follow it, I do think the message may be a bit misleading."

I would agree that no data is provided here, the author simply states MS degrees are an indicator of poor technical performance "in my experience" and goes on to list reasons why this might be the case.

I'm sure there's a reasonable case to be made that an MS in CS isn't worth the time, effort, and money, even from a top school. But I don't think that grouping these degrees with MS degrees in Info Sys (among others) and then reporting experiences without that distinction provides much useful insight into the topic.

Interviewing is a funny thing.

I remember when I graduated from a "Top School" and interviewed at "hot startups" from the valley. I aced a lot of the interviews - why? Because I had just taken classes on LinkedLists, Binary Trees, HashMaps, etc... So when they asked me to whiteboard a "shortest path algorithm" it was just rehashing what I did in school.
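(For reference, the kind of whiteboard staple being described, sketched here as unweighted shortest path via breadth-first search, with made-up node names; weighted variants like Dijkstra's are the other common flavor.)

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Shortest path in an unweighted graph via BFS.
    graph: dict mapping node -> list of neighbors. Returns a path
    (list of nodes) or None if goal is unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical graph: 'a' reaches 'd' via either 'b' or 'c'.
g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path(g, "a", "d"))  # ['a', 'b', 'd']
```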

Years later, looking back, I fail to see the relevance in most of the technical questions. In fact, if I had to do those questions over again today I would probably fail miserably. Yet, I have been in the industry for a while now and have worked with countless more technologies and have accomplished far more than my younger self.

Just because someone performs well in a technical interview doesn't mean they will do a good job. That is the data that really matters. I've interviewed hundreds of candidates as a hiring manager for some big startups, and from my experience technical interviews are not a great indicator of success.

I'm saying this coming from someone who has gone to a "Top School" and done multiple Coursera/Udacity/etc classes.

Yes, someone might be able to whiteboard a random forest or write a merge sort, but do they know how to engineer a system? Can the candidate:

> Communicate well with others in a group?

> Solve unique technical problems?

> Research and learn new technologies effectively?

> Understand how to push back to product owners if there's scope creep?


These are all things that are not really analyzed in many technical interviews.

As I'm reading this analysis all I can think of is that it is pretty useless - if not dangerous for the industry.

What I've found is that it is critically important that someone knows how to code at some basic level. But their ability to code and explain algorithms on the fly, while probably relevant in academia/research, is such a minor part of the day-to-day of a programmer - At least from my experience.

That has been my experience, too. I remember deriving the optimal solution for an interview question that was a variant of a radix sort and impressing the interviewer for one of the hot startups at the time. That was partly the result of my ability to recognize the similarities between the problems but even more a consequence of having recently studied radix sorts. Twenty years later, I'm a much better developer than I was then and have shipped many projects with significant degrees of complexity, but I couldn't answer that interview question or ones like it as well today as I did then.

The takeaway from this is that those who do best are those with:

- the wealthiest/most financially supportive parents/relatives

- upbringings that are conducive to academic success

- the most free time

as those are the ones who, by a large margin, attend top schools, work at top companies, and have time to spend on self-learning. Another data point of confirmation of a well-studied idea.

Assortative mating: http://www.economist.com/news/briefing/21640316-children-ric...

Few poor at rich schools even all these years later: https://www.nytimes.com/2014/08/26/education/despite-promise...

Why people care about elite schools: https://medium.com/@spencer_th0mas/why-america-cares-about-e...

I did not see in the data set where the interviewee's family wealth or upbringing were mentioned. Would you mind providing a link?

"Having attended an elite school" is a proxy for that. Info available at 2nd link.

I see. It doesn't seem like the data used by the linked article provides any evidence for or against your hypothesis.

The data here is perfectly compatible with a contradictory hypothesis: upbringing does not matter and top schools/employers are good at selecting future high performers.

I'm not endorsing either hypothesis, but the data here supports the above just as much as your hypothesis (which is to say, not much).

It's not that good of a proxy though. There are still tons of poor students at elite universities.

From the article:

> In 2006, at the 82 schools rated "most competitive" by Barron's Profiles of American Colleges, 14 percent of American undergraduates came from the poorer half of the nation

That doesn't sound like 'tons'.

14 percent is still thousands of students. Equating elite universities with privileged upbringings is stereotyping. Like most stereotypes, it's grounded in truth but also unfair.

I worked really hard to be able to get into an elite university and to get the scholarships to pay for it. When people automatically assume that my education means my family is wealthy, it discounts the enormous amount of work which I (and other low-income students) did to get there.

You are confirming the point that was made!! It is harder for "poor folks" to get in. It is easier for wealthy kids to get in.

You should state ".. and I had to work my way in there - my folks aren't wealthy!" as a bi-sentence every time you mention your education. I bet most people immediately would understand what it implied: This is an achiever!

No I'm not.

Even if fewer poor students get in, that definitely does not mean you can make the reverse assumption: that everyone who attended an elite university is wealthy. For all we know, these results are dominated by poor students who worked hard enough to get into an elite university (which, by the way, wouldn't be a crazy assumption: my CS classes had a lot more income diversity than, for example, political science ones).

Great post. If you have a worse background, you have to fight your way up, whereas those with more family resources just have to not screw up badly and they'll probably end up alright.

Hypothetical question: If I could show that people who complete Coursera/Udacity courses tend to be poorer than average, would that be a "data point of disconfirmation of a well-studied idea"?

"The student population tends to be young, [already] well educated, and employed"

"The MOOC Phenomenon: Who Takes Massive Open Online Courses and Why?"


Not to harp on the "technical interviews are disconnected from actual work!" angle too much, but I'm reminded of a comment from a thread about the creator of Homebrew failing a Google interview. Someone pointed out that it goes to show that it's possible to create widely-used software without an intimate knowledge of CS. I wonder if that's a disconcerting fact for some employers to grapple with.

To be fair, the creator of a bit of software that is widely used may not necessarily be a good fit for Google.

For many reasons.

I find some creators put a lot of ego into their products, not that they are egoists so much, but doing something on your own can be emotionally challenging. I don't mean 'ego' in a negative way - just that when you are on your own you have to make so, so many decisions. APIs, compatibility, etc. etc. you're often a little overwhelmed, you often have to 'just do it' and 'go with your gut' in many areas. Which may or may not be a good practice/attitude for a Google employee.

They might have a stronger sense of the 'outcome' (i.e. what it does) than the capacity for the underlying code. Maybe it 'does the job' but is poorly written? Which again, might not work in Google environment.

... and working on a very large team is night-and-day from working on your own.

I think we all want to 'feel it is unfair' that such an important contributor somehow 'can't get a job at Google' ... maybe he's worthy, maybe not, but I can totally grasp that someone mightn't be a good contributor at Google.

Maybe it's easier to see it from a non-technical standpoint: it's often very difficult for founders of their own companies to get along in a place like Google. I think that's easier for us to grasp. The same thing can apply in the technical domain.

Made a similar comment above, but that was entirely true for myself. I have worked on amazing things as we all have in life, petabyte databases, and just the most incredible setups.

But when it comes to an interview, I cannot get a job several levels below my own (without referrals).

With referrals, I am offered the moon and the stars, complete pick out of a tremendous number of amazing jobs.

But any random interview, and I totally bomb it. I wonder how many out there are like that as well. It feels rather strange from my perspective, but maybe it's fairly normal?

You cannot get a job at a top company without a referral or a recruiter looking you up. Any such company is totally swamped by the number of resumes submitted, and the chance of getting an interview is practically zero.

This is not even close to being true. People get hired into top companies all the time.

Google's specific interview standards select for academic rigor, not practical software skills.

Look at how many failed or poorly designed products and libs Google has released.

I've always said there seems to be a weird relationship in tech between "the industry" and academia. On one hand, there's a growing disconnect between the everyday API gluing-frameworks galore-trudging through someone's code instead of rolling your own nature of day to day work vs. the clean algorithms of school. On the other, there's a youthful, even immature, vibe of all-nighters and casualness for the sake of casualness and Silicon Valley perks.

Maybe this is what happens when an industry is built around worshipping whiz kids. If CS in SV was about venerating NASA/aerospace code, less flashy but very very stable, well-engineered stuff that lives are dependent upon, it'd look a lot more like EE and other more traditional engineering fields.

Until recently I worked at a startup as a Machine Learning Engineer/Data Scientist. There I got some experience interviewing people and looking at their resumes. In my experience, which is very limited compared to this post, people who put a MOOC on their resume are usually less qualified than people who don't.

There is nothing wrong with MOOCs, but they are almost always beginner-level. If you put them on your resume, it kind of implies you don't have a lot of experience beyond that. Putting the Coursera Machine Learning course on your resume would be the equivalent of putting Java 101 on your resume as a Software Engineer.

I would recommend that anyone put projects on their CV instead. Even if you don't have a lot of work experience, just put side projects and school projects on there.

That's absurd. Everything in a master's degree is "beginner" when you've worked 10 years in a field. When I took my MSc in computer science, there was hardly any "big data" or deep learning around. That you would actually fail me for taking a course in this field is utterly beyond me.

I would definitely not reject you because of that and I took MOOCs myself in the past.

I think the point here is folks taking MOOCs for enrichment, not as a gateway to get a job. So, for instance, someone taking the ML course just for knowledge, not to try to get in the door as a data scientist.

Yeah, like I said, there is nothing wrong with MOOCs, but I would not put them on my resume as a candidate nor would I value them as a positive signal from an employer's point of view when I see them on a resume.

I don't think we can assume that, unless the participants were specifically asked this.


Interesting and surprising, especially the experience thing. I think I am a significantly better engineer than earlier in my career, so I assumed experience would count for a fair bit. Then again I have inherited projects from experienced guys who make crap high level architecture decisions and the code is way more difficult to work with than it ought to be.

But then this article seems to be measuring interview performance, not actual ability on the job. So is any of it actually relevant at all?

Most of the things that make us better during our career have nothing to do with programming faster: If anything, they can make us program slower, and do worse in interviews.

For instance, an interview I took last year came with a premise that I found downright bonkers. It was based on code the company had in production, but it stopped being a good problem to look at years before. I was having trouble coming up with the right tradeoffs for the implementation because all my experience was telling me that the entire approach was misguided in the first place, so the problem should not be solved. I passed, but it was a far rougher performance than I would have liked.

There's also how being far from college makes the least important knowledge gained fade away, and the least important thing I studied was memorizing algorithms. I write new algorithms at work sometimes, and I implement off the shelf ones too, but I don't have to recall them off the top of my head. Nobody has to implement distributed consensus algorithms under time pressure, or write HyperLogLog from memory.

So ultimately, there's what's easy to measure, and then there's what is valuable and important. We go with what is easy, and those are the things that are taught in college. Understanding the right level of testing or designing a system for observability is far more valuable in the long run: it's crazy how much downtime at well-known companies comes from people not learning those things in college. But since we are bad at measuring those things, and kids right out of school don't know them, we don't interview for that.

And sadly this is why we all end up hiring by network so much: We can't tell if someone is good in a day, and we can't really ask people to dedicate two weeks to work with us in a probationary period if they have real jobs, but we sure can recall quality former coworkers and ask them to join in.

Most pedagogy, over time, gravitates toward what's easy to measure, not what's valuable to learn. So we've been very lucky in hiring by network, even though we kinda feel terrible about it. On the other hand, we're hiring more women now too as a result, not least because women are now much more conscientious than men about managing their networks of other female developers.

Hey, author here. You're absolutely right. We hope that with time and data, the interview process can get better at predicting on-the-job performance -- one of the things we're really interested in is seeing how predictive different interview questions can be. To do that, we're also working on collecting data around what happens after interviewing.io users find jobs.

What are some metrics that you expect to use to measure on-the-job performance?

I am conducting interviews at the moment. I would say experience gives you a much better chance of getting to interview but once you are there other factors can be more important, like motivation and communication abilities.

I guess it might be because with experience your expectations also rise. You may easily be qualified for a starting programming position somewhere, but now, when you're more experienced, you'll aim higher, right?

If someone works at a firm with bad processes / poor or no code review, this can actually make their experience a detriment as they continue to use and reinforce bad habits.

I wonder if "took courses" could be a stand-in for "prepared heavily". It seems like people with all the other attributes might think they didn't need to study. People without them might think they did and took courses to "catch up". In my experience, preparation is the key driver of performance in these types of interviews.

It seems reasonable that a person who took a MOOC might have prepared in other ways as well while people who didn't probably didn't prepare much at all (since watching a few Algo lectures seems the most accessible refresher.)

Top school is probably serving as a proxy for intelligence in this analysis, a well-known predictor of both interview and actual job performance.

Top school is also serving as a proxy for wealth and privilege. Good grades are also a proxy for wealth. Kids who have rich parents get private tutoring.

The privilege to value education and to spend time studying, yes. Almost all resources can be found online nowadays. If you've ever been to an SAT tutoring class, you know that the only benefit the kids have is the benefit of being forced to take practice tests.

A top school is a good signal for how much time someone spent studying in high school, except for affirmative action students who get into top schools with much worse scores and GPA.

> Almost all resources can be found online nowadays.

While this is true, it highlights a major blind spot in attitudes with regard to the reality of socioeconomic status. If you're poor, yes, you theoretically have the ability to utilize those resources, but what good are they if you don't have an environment conducive to study?

Lower socioeconomic status is correlated with lower stages of Maslow's Hierarchy of Needs. Self-actualization i.e reaching one's potential is more dependent on the satisfaction of basic and psychological needs, which are generally more accessible to those of privilege.

It's not just about resources being available, it's resources being available and making sure people have the opportunity to engage in those resources.

or, you know, legacy admissions. The way you wrote your post, you'd think the rich are inherently more virtuous and deserving

> except for affirmative action students who get into top schools with much worse scores and GPA.

This dismissive "quota tokens" attitude really irks me. It's one thing getting in, it's a total different ballgame surviving and coming out the other end with a decent GPA. I've seen people of all socioeconomic backgrounds fail and some from remarkable poor backgrounds do exceptionally well.

Getting in is harder than surviving or even excelling. The median grade at Harvard is an A and the graduation rate is 97.5%.

> A top school is a good signal for how much time someone spent studying in high school, except for affirmative action students who get into top schools with much worse scores and GPA.

For the truly competitive schools, the affirmative action students still need very high scores and GPA, they just tend to be given a little more of a pass on the extracurricular activities and essays in the application. I'd argue that for the most part, students of any demographic have to do some pretty crazy things to get the attention of top schools. The probabilities of an affirmative action student being granted admittance are just much higher than for an equivalent non-affirmative action student, but that doesn't mean that affirmative action students can get away with having poorer grades and scores. I'd argue they just have to be less well-rounded.

> A top school is a good signal for how much time someone spent studying in high school

Assuming all high schools are the same.

And the classes the student took. And the environment a student grew up in. And how much support the student received. And any one of about a million other things.

There are virtually an infinite number of variables that contribute to university admission and it's extremely reductive to suggest that it's purely a function of the amount of time a student spent studying in high school.

Wealth also correlates with intelligence. Intelligence is mostly determined by genetics (studies show ~70% to 80%).

> Wealth also correlates with intelligence.

Given that many environmental conditions to which exposure (in many cases, particularly early childhood exposure) demonstrably adversely impacts intelligence are more likely to be avoided with wealth (that is, both inversely correlated and with a clear causative mechanism for that correlation), this isn't at all surprising.

> Wealth also correlates with intelligence

Can you cite a source for this?

I can see how it might follow from the assumption that intelligent people are more likely to succeed in their careers but keeping the GP in context, it should also be noted that not all intelligent people succeed because of various factors.

Generally this is incorrect. http://www.investopedia.com/financial-edge/0912/how-intellig...

But there does seem to be a correlation between IQ and income. https://thesocietypages.org/socimages/2008/02/06/correlation...

This latter correlation could be explained by the wealth of the parents, as IQ may be a cultural artifact and income is correlated with education.

Unless I missed it in the article, the data is all about passing the interview, not actually seeing whether any of these things correlate with employees working out over 1-, 3-, or 5-year time spans.

With this data you're just biasing towards people who interview well, which I don't think you actually care about.

Well, I mean, I guess you do if you're a recruiter (if you're a moral recruiter you care about both), but not really if you're an employer.

I think you are seeing the effect of people who have decided for themselves to pursue lifelong learning. The Udacity/Coursera thing just clusters these people in a way that you notice them in the stats. But remember that statistics do lie. You need to dig into the reality behind the numbers, and question whether you are measuring all the right indicators.

My experience comes from several decades developing software and from time to time, hiring people. The people that worked out best, either as colleagues or hires, always seemed to be learning new things and were ahead of the curve trying out new techniques or tools before they became popular.

If you understand how a tool/technique becomes popular as the mass of software developers wrestle with new problems and finally find a way to master them, then it makes sense that constant learning makes some people stand out of the crowd. They happen to be the first ones to learn the new tool/technique and if they do not introduce it to their development team, then when management does make the decision to introduce it, the folks who know how to drive it have a chance to excel and appear to be rocket scientists.

Searched the article and the comments here for "Pluralsight", with zero hits. So what makes Udacity/Coursera preferable? TL;DR: I'm asking because Pluralsight was a significant contributor to my landing my latest role after redundancies.

The long version: I recently landed a role after some time off, having changed from mainly back end Php/Coldfusion to C# in the last year. I was able to make the switch in my last role. For me, moving to C# was a big transition; as well as guidance from a (fantastic) mentor, I used Pluralsight to learn C#, asp.net and DDD - e.g. from Jon Skeet, Scott Allen and Julie Lerman, to mention but a few.

Being completely burnt-out on the old stacks, I was set on making my next role a C# one. I've come to love what Microsoft are doing with Core, open sourcing etc, as well as the strictly typed C# language and ability to use NCrunch with live unit tests. So I signed up for a year after relinquishing my corp subscription, kept doing their courses, and found the training material highly accessible with great quality content. Each interview was a learning process, when I didn't know something from a test, I'd go and study it so that I'd be better prepared for the next role. One of these was the study of data structures and basic computer algorithms, where I was lacking. I might not have had years of experience, but the experience I had was mostly best practice.

During my search, I typically got great feedback on the fact that I was doing Pluralsight courses, and it was a significant factor in being hired for the new role - it showed cultural fit, in addition to passing their tech tests (which happened to involve structures). My company had interviewed a lot of candidates, struggling to find the right talent. Just possessing technical skills is one thing, having the right attitude towards learning is another.

At any rate, I'll keep using Pluralsight to raise my proficiency in my new stack - even as an old timer, I am having a newfound level of enthusiasm towards my whole profession which I haven't felt since I coded in assembly on the good old Amigas. I would be interested in knowing why Coursera / Udacity might be better or more accepted in the marketplace though.

It's rather shocking how much effect Udacity/Coursera had on interview performance - more than graduating from a top school or being employed at a top company:

"...only 3 attributes emerged as statistically significant: top school, top company, and classes on Udacity/Coursera."

Interviewing is a specific skill in itself that most people have to develop independently, so it makes sense that studying other things independently would correlate strongly with interview performance.

> It's rather shocking how much effect Udacity/Coursera had on interview performance

Having done Udacity and also taking a couple CS classes at Cornell, this doesn't surprise me at all. People who take Udacity classes are doing so voluntarily on their own time, so they are going to generally be smarter and of higher socioeconomic status.

If you look at people who take CS classes over the summer at college when they don't have to be and when there are no student loans, you're going to get a similar population.

I don't buy the high socioeconomic status bit. MOOCs are hugely popular in countries like India where education is extremely competitive. I help run an international science camp for late-teenage kids. More and more we're seeing applicants who list the MOOCs that they've taken, and most often they're not from 'developed' countries.

If anything the opposite is true. MOOCs have enabled vast swathes of economically challenged people to learn from high quality video material, whereas before they were limited to textbooks.

The key is that these kids are absolutely determined. And similarly if you really want that job at one of the Big Four, you might also consider soaking up as much prep material as you could.

Some schools, and now some bootcamps and online courses, include a separate unit on technical interviews, covering common types of problems and how to approach them and come off well.

So this is unsurprising. The thing people should really be thinking about is why "how to pass a tech interview" is considered a separate and unrelated skill from "how to code", given how many people claim their interview processes are supposed to separate people who can code from people who can't.

(spoiler: it's because those interview processes don't accomplish their stated goal)

Why is it shocking?

"I need to pass an interview where I'll be asked about specific technologies/algorithms/etc. I will take a free online course to memorize/rehearse that information before my interviews."

In other words, those who prepare beforehand have more favorable outcomes than those who don't.

It's not just 'impact', it's 'correlation' (or both).

Universities don't teach programming, so it's not surprising you don't get applied skills out of the box with a degree.

'Good courses' are focused directly on teaching specific programming skills.

I think they are a good idea because learning programming by 'osmosis' (as you go along) can result in 'not knowing' a lot of key things, even when they are right in front of your face the whole time.

Most importantly - people who are going to take the time to specifically learn new skills, are displaying the kind of conscientiousness that you want in your talent - learning the actual skills is another benefit of that.

> Universities don't teach programming, so it's not surprising you don't get applied skills out of the box with a degree.

I disagree with this statement. Some universities teach programming, some not so much, some only need a Java 101 to graduate.

There is a huge variance. We'd need a statistical analysis accounting for the quality of education given.

I view it as evidence that their interviews are easily gamed. You cram in some algorithms knowledge and off you go.

It would be interesting to see if all 3 predictors of performance held up in a simultaneous regression. Most likely, the predictors are highly intercorrelated. If so, then only one or none would be significantly related to interview performance in a multiple regression analysis.
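To make the intuition concrete, here's a small synthetic sketch (made-up data, not the article's dataset) of how two intercorrelated predictors that each look strong on their own share their explanatory power once entered jointly:

```python
import numpy as np

# Two "predictors" (say, top school and top company) that both proxy
# the same latent factor look strong individually, but their shared
# variance gets split between them in a joint (multiple) regression.
rng = np.random.default_rng(0)
n = 5000
latent = rng.normal(size=n)                      # unobserved common factor
top_school = latent + 0.5 * rng.normal(size=n)
top_company = latent + 0.5 * rng.normal(size=n)
score = latent + rng.normal(size=n)              # interview performance

def simple_slope(x, y):
    """OLS slope of y on a single centered predictor x."""
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

# Individually, each predictor has a sizable slope...
b_school = simple_slope(top_school, score)
b_company = simple_slope(top_company, score)

# ...but jointly, the coefficients shrink because the predictors overlap.
X = np.column_stack([np.ones(n), top_school, top_company])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

print(b_school, b_company)   # each roughly 0.8
print(coef[1], coef[2])      # each roughly 0.44
```

With this setup the theory predicts simple slopes of 1/1.25 = 0.8 and joint coefficients of 1/2.25 ≈ 0.44, which is exactly the dilution the parent comment describes: neither predictor may stay "significant" once the other is in the model.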

The master's in CS can be useful if:

1. You have an undergrad degree in liberal arts
2. You pay as little tuition as possible
3. You take no time off and continue to work FT

These apply to me -- my undergrad was in English, I paid 6k total (27% of the 21k total cost) and went to school at night over 4 years while my career continued to progress.

Most of the people in my program couldn't write a FOR loop if their life depended on it; they viewed it (incorrectly) as a jobs program, while the school needed the $$ to keep the dept afloat, so I'm not surprised they fared poorly in technical interviews.

But that doesn't mean the degree isn't useful. If you're already a programmer, it helps get your foot in the door at many places. HR managers/recruiters feel more confident forwarding on your résumé, they can't parse your GitHub repos.

The degree is icing on the cake, it's not going to magically turn you into the Cinderella of Programming if you have no real-world experience. I got my master's with a QA and a paralegal and today? They're still a QA and a paralegal.

That being said, timed technical interviews are almost universally asinine, IMHO. When in real life do you have 10 minutes to figure out a problem? Or are prevented from Googling the answer? The measure of successful programmers is how efficient and professional they are in problem solving, not how much useless information they can keep in their head.

Things I've never had to do in 'real' life:

- Split a linked list given a pivot value
- Reverse a string or a red/black tree
- Write my own implementation of breadth-first search

etc etc
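To illustrate the first one: the expected whiteboard answer is something like this stable partition of a singly linked list around a pivot (a sketch in Python, with made-up names, just to show the flavor of what gets asked):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def split_by_pivot(head, pivot):
    """Stable partition of a singly linked list: nodes with value < pivot
    go to one list, nodes with value >= pivot to the other, keeping the
    original relative order within each."""
    less = less_tail = Node(None)   # dummy heads make appending uniform
    geq = geq_tail = Node(None)
    while head:
        if head.value < pivot:
            less_tail.next = head
            less_tail = head
        else:
            geq_tail.next = head
            geq_tail = head
        head = head.next
    less_tail.next = None           # cut stale links into the old list
    geq_tail.next = None
    return less.next, geq.next

# e.g. 3 -> 1 -> 4 -> 1 -> 5 with pivot 3 splits into 1 -> 1 and 3 -> 4 -> 5
```

Perfectly reasonable as a teaching exercise; the question is whether producing it under a 45-minute clock tells you anything about shipping features.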

Personally I'd rather see take-home assignments that roughly approximate the type of work you'd do, which in my career has been churning out new features or applications. Does knowing the time-complexity of radix sort vs heap sort really have a material impact on your effectiveness as a programmer? No.

> The measure of successful programmers is how efficient and professional they are in problem solving

They are not asking you to split a linked list given a pivot value, or reverse a string without using a built-in function, because that's what you're going to be doing on the job. They're doing it so they can see how efficient you are when given a new problem. That's why companies have interviewers sign NDAs: so their interview questions leak as little as possible and it's not just problem memorization.

> so it's not just problem memorization.

How is it not?

Grasping the fundamentals of algorithms and data structures: important.

Memorizing the best greedy algorithm for traversing a linked list: not important.

Any super specific answer to a problem is likely not worth committing to memory because 1) it'll probably change as platforms evolve 2) Googling specifics lets you save space in your brain for things that actually matter.

It should be noted that these technical interviews are biased to a particular style, so the data only really is of relevance for these types of interviews.

On the master's front, I went down a slightly unusual path. I enrolled in a master's program in music technology at NYU [1]. I already had a master's in engineering from Princeton [2], but after time away from the software world, I wanted to retool for a return to engineering, but with a focus on applications that actually mattered to me.

It turned out to be a very expensive, but very fulfilling decision, and it paved a route for a very successful past four years.

Compared to my first master's, it was less theoretical and much more project-based. In that sense, it was fantastic preparation for career work, because every semester, I had to conceptualize and ship 4-5 different projects in all sorts of subject areas. The value of that shouldn't be underestimated. It also directly led me to cofounding a startup that had a brief lifetime, but effectively converted me to a full-stack engineer.

Today, I don't use much of the subject matter I learned in my day-to-day, but I draw on the creativity, problem-solving skills, and work patterns every day.

My Princeton program was great too, but I thought I'd share about the NYU program, as that was the more outside-the-box choice. There's something special to be said for a master's degree when it's interdisciplinary and lets you focus on the intersection of engineering skills and subject-matter expertise.

[1] http://steinhardt.nyu.edu/music/technology

[2] http://ee.princeton.edu/graduate/meng-program

yes, this is absolutely startling:

For people who attended top schools, completing Udacity or Coursera courses didn’t appear to matter. (...) Moreover, interviewees who attended top schools performed significantly worse than interviewees who had not attended top schools but HAD taken a Udacity or Coursera course.

A possible explanation might be that people going through a regular degree typically spread themselves thin over many subjects (digital electronics, compiler design, OS theory, networking, etc.), while MOOC folks focus sharply on exactly the things interviews test (i.e. popular algorithms). It's like interval training for one specific purpose vs. a long regimen for fully rounded health. The problem here is not the academic system but how we measure performance in interviews. I highly doubt the results would be the same if interviewers started asking questions from all these different subjects instead of just cute algorithm puzzles.

We really need a further correlation between people who pass the interviews and job performance a year later. I do a lot of interviewing at my current job and we have found no strong correlation at all between CS skills and actual ability to "get things done".

We toned down the CS type questions since they tend to take too long. We still ask a few basic tree and string manipulation questions to weed out the people who have no idea how to program and get insight into how the person thinks.

I still feel at the end of the day we could flip a coin on accepting an interview candidate once they have shown basic competency and have the same results.

I have been telling candidates that a public GitHub repo with a nice commit history carries much more weight with me than a CS degree, since we have been burned so many times before.
