Here are two thoughts on tech interviews. One controversial and one (hopefully) not.
1. It's good that companies like Google are able to reassess their hiring policies and learn what works and what doesn't. It's easy to criticize companies for doing things that don't make sense or that we know to be bad, but often these things looked like good ideas at the outset, and you need some amount of trying new things (many of which won't work out and may have negative consequences) in order to improve.
2. The way hiring works in tech isn't really that bad. It doesn't really measure what matters for job performance but tech pays well and so many people want to do it that you need barriers to entry. The real losers of the hiring process are the companies that waste lots of money on inefficient hiring practices. If you compare the effort required to get a high paying tech job (really just doing practice problems for a few weeks/months) to any other high paying, professional field we have it much better.
I mostly agree with both of your points, but I also think "we have it much better" depends on which "we" you're talking about. Our practices are better for people trying to break in and worse for seasoned professionals. It's more time consuming (and less well-paid) to become a nurse, but once you have done so, you don't have to do it again every time you switch jobs.
My wife is a long-time hospital nurse, and she will be the first to tell you that hiring a nurse today is a much more tedious and time-consuming task than it was back in the day. Entry-level educational requirements are higher now, for one, and the whole hiring process at her hospital, which used to be relatively quick and straightforward, has grown cumbersome. Not surprisingly, this leads to an ongoing shortage of nurses, with significant job-hopping from one hospital floor to another, from one facility to another, or from one organization to another. And surprisingly (or maybe not), the "new and improved" hiring process hasn't necessarily improved the overall candidate pool, and in at least some cases has caused a noticeable decline.
As to "you don't have to do it again every time you switch jobs", nurses are often required to go through continuing education just to keep their licenses current. But making the jump from one type of nursing specialization to another, or even upgrading from an RN to a BSN (often required today if you want to keep your job - probably the exact same job that you've already been doing for ages), can be quite grueling - almost like starting over. In one recent case, a well-respected fellow nurse upgraded her education only to be told that even that was no longer considered sufficient. She was already doing management-level work but didn't yet have the official title or the pay, and was told that she needed to go up an additional level in education in order to get them, so she just bailed - decades of experience walked out the door for no good reason.
Continuing education is different from, and in my opinion far more valuable than, preparing for tech interviews. I may still complain (because it's time-consuming), but I would feel much, much better about needing to learn some actual new stuff to move up a level. Instead, we go back and refresh old stuff that we don't remember very well because it hardly ever comes up.
1. RNs have people's lives in their hands
2. CPAs have a fiduciary responsibility to their clients, especially in tax services
I think it is generally accepted that licenses don't signify "quality".
What are you hoping to gain with a license? Some badge that somebody can show to a prospective employer, so that they don't quiz you on capabilities?
I think we can agree that the field and knowledge required to perform as an RN don't change that often, and the same goes for CPAs. The world of software, however, is changing constantly. How could a licensing process ever stay up to date? Sure, we could just test CS fundamentals and give people a stamp -- essentially what a degree is supposed to do. But as a software developer who has interviewed many people and been subjected to many interviews, I know it's not a perfect process; I've come across plenty of people who exaggerated their abilities on their resumés. One of the great things about software is GitHub, Bitbucket, etc. Why? Because they sometimes provide a window into the work of a developer before they make it on-site. I'd be happy if the industry as a whole just accepted this as the resumé and left the in-person interview as a non-technical one, save for maybe something like FizzBuzz.
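For anyone unfamiliar with the reference, FizzBuzz is about as small as a coding screen gets - the point is only to confirm a candidate can write any code at all. A minimal Python sketch:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```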
From my perspective, licensing is just an okay process even for the fields that use it. Without an interval of re-testing, how can we ensure that service providers can still deliver? I have friends who work at medical centers with doctors who have been there for decades, seem to have forgotten a thing or two, and yet refuse to accept an answer to a symptom or problem that is fresh in a recent grad's mind - sometimes with fatal consequences. It's sad.
On the other hand, as a nurse my wife can relay stories going back to her earliest working days where more experienced medical personnel often had to step in to prevent a newly minted doctor from harming or potentially even killing a patient. I remember that giving oxygen was often the issue here, where the new doctors seemed to think that giving more oxygen was always better, but the nurses knew that giving too much oxygen can put a patient into crisis. (I forget why.) She hasn't mentioned this particular issue in ages now, though, so I guess these days medical schools are doing a better job here.
According to Stack Overflow's 2016 Developer Survey [1], "56% of developers in fact do not have a college degree in computer science or related fields"
Admittedly, the data is a few years old (and I didn't want to pull the same data from the 2018 survey), but such a requirement would put many people out of a job. I also haven't seen any concrete data that would support any correlation between the efficacy of a software engineer and whether they had obtained a college degree, so this seems like a poor metric.
I share your incredulity regarding the lack of any sort of "occupational license" for the industry, though I remain unconvinced that instituting one would have a positive effect.
Except that the degree itself may not be worth much in practical terms. For example, as a working student (paid computer nerd by day, student by night) I quickly came to the conclusion that much of what I was being taught in the CS program just wasn't going to be useful out in the real world. At one point I was taught the detailed ins and outs of two's complement arithmetic, but that's something I've never used in practice, and I've only even seen it referenced maybe once every decade or so, if that. (See the following link for a very recent drive-by encounter with this stuff: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p090...) Meanwhile, pretty much anything that's been of any real value to me on the job I've taught myself on a just-in-time basis.
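For context on the two's complement material being dismissed here, the whole idea fits in a few lines: a negative integer is encoded by inverting the bits and adding one. A quick Python sketch (the 8-bit width is just an illustrative choice):

```python
def twos_complement(value, bits=8):
    """Return the unsigned bit pattern that encodes `value` in two's complement."""
    mask = (1 << bits) - 1   # e.g. 0xFF for 8 bits
    return value & mask

# Negation is "invert the bits, then add one":
assert twos_complement(-1) == 0b11111111            # -1 encodes as all ones (255)
assert twos_complement(-5) == ((~5) + 1) & 0xFF     # both paths give 251
```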
As more recent examples, a few years back I took all three of the original free MOOCs from Stanford. In the ML class, Andrew Ng at one point said something like "If you've ever taken calculus then you will recognize this particular equation as being a derivative; otherwise don't worry about it because that's not really important." Later he said something similar about linear algebra and matrix multiplication. And indeed I had taken both calculus and linear algebra back in the day so I recognized what I was looking at, but it just wasn't important within the context of that advanced class.
I also took the Database MOOC, and during that the instructor taught quite a bit of formal database theory. (I forget the proper name for it.) But despite having worked with databases for decades now I've only seen such theory referenced maybe twice, and that just in passing, so it's hardly the "critical, need-to-know" stuff that the instructor insisted that it was.
Meanwhile, in the AI MOOC almost everything taught there was new to me, despite my having taken several AI classes back in the day; so basically zero overlap between then and now. (I did recognize some of the statistical stuff, though, from my statistics class way back when.) But will any of the new stuff ever be of any practical use to me? Probably not. Interesting certainly, but probably not at all practical.
You could do per-state licensing as happens in other professions.
So perhaps, California would have the highest standards with Google/Facebook employees pushing them to do so to create barriers to entry, but Mississippi would end up with the bare minimum.
I wouldn't find it terrible if Software Engineering were licensed, in the same way that law is---with the board consisting of prominent practicing members of the existing population, and all current members grandfathered in so it only affects upcoming generations.
You know, if they went down the licensing path then they would probably have to pretty much ban Windows for any serious professional use, since it's not really "fit for purpose" and never has been. (Minimal acceptable standards would probably be some type of hard-core Unix-like OS.) The same might be true for a lot of the open source stuff which is currently so popular.
I am very happy to take a month of brushing up on algorithms and so on every few years as I want a new job as opposed to an additional several years of school.
I'm sure I felt like this when I was jazzed at how easy it felt to get my first good job with just a BS. After a few times through the job-switch wringer, at this point I'm rather bored of it and would much prefer to have a widely respected professional accreditation to point to instead of all the repeated pointless work. But I don't think the extra-school way (like in my nursing school example) is the only way to achieve that; what I would personally like to see is an apprenticeship / craftsman model. I think a PE requires both a master's degree and an apprenticeship. Maybe we could do the latter without the former?
But this is idle wishing: the current system likely has too much inertia for real change at this point.
Do you have a family or other obligations that make taking a month off untenable after you leave your 20s, or whenever you settle down? It seems kind of absurd to take a month off every few years just to practice for a new job, whether or not the skills you're practicing are relevant to said job's performance.
To me it doesn’t seem any more absurd than front loading 3 or more years of expensive professional school as do many other professions like lawyers, architects, and doctors.
Just pretend you went to law school and have to make loan repayments each month. Put that money in a bank account and you will have more than enough savings to take a month off work every few years.
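The arithmetic behind this works out comfortably. A rough sketch using the standard amortization formula; the $150k principal, 10-year term, and 6% rate are illustrative assumptions, not figures from the comment:

```python
# Hypothetical law-school loan: $150k over 10 years at 6% APR.
principal = 150_000
annual_rate = 0.06
years = 10

r = annual_rate / 12          # monthly rate
n = years * 12                # number of payments

# Standard amortized monthly payment: P * r / (1 - (1 + r)^-n)
monthly_payment = principal * r / (1 - (1 + r) ** -n)

# Bank that amount instead for the 3 years between job switches:
saved = monthly_payment * 36

print(f"monthly payment: ${monthly_payment:,.0f}")
print(f"saved over 3 years: ${saved:,.0f}")
```

Even under these made-up numbers, the hypothetical payment is around $1,700/month, so three years of "pretend repayments" comfortably covers a month or two without income.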
Median wage for programmers is nearly double the US median household income and has few barriers to entry. If the only downside is having to take a month off work every few years, I’m not complaining.
The model I envy isn't lawyers, it's PEs. Their professional degree isn't exorbitantly priced and the education is very useful for the career rather than busy work (by the accounts of those I know). Once the credential is earned, it is widely respected.
Edit to add: In general, I don't think our system has to be worse than all others for it to be worthwhile to consider what might be better.
The model I envy is actuaries. The exams are rigorous, but leave it open how you wish to prepare for them. I considered this about 15 years ago when I was leaving grad school in engineering. To get through the first few exams, it's pretty clear that you need to understand vector calculus, linear algebra, and differential equations at the level you'd expect from the general undergraduate sequence for most STEM majors. I read that the most common undergrad major among exam-takers is math, but you're free to take the test if you've majored in something else, and I think you're welcome to self-study as much as you please. You just have to be able to pass the exams.
It's a rigorous exam, but to me this is so preferable to a 5-hour interview on differential equations, stats, and linear algebra every time you interview. And that's pretty much what we do in software; it's the equivalent of asking a senior actuary to do an integration by parts or a complex derivative at the whiteboard during an interview.
What I also like about this approach is that we'd avoid the situation where a cartel like the law schools can put a $150k, 3-year degree between talented people and the profession they'd like to enter.
Another thing is that tech interviews are very capricious - you can end up with a totally unqualified interviewer, and there's really no quality control on the questions or how they are administered. A more formal exam could have a lot of benefits, though I do agree that a required long, expensive degree program would do more harm than good.
EDIT: There is something that causes me to question some of what I wrote above - I have grown wary of many of the people who want to regulate software as a profession. I'm still stinging from my collisions with the test-driven development crowd over the last decade. They really were saying that merely questioning the value of the methodology should render you un-hireable. Imagine if they got control over formal licensing. I do shudder at the thought. When I think of that possibility, the Wild West approach, unsavory as it may be, seems more appealing.
I love this comment, both pre- and post-edit. It's definitely a tough problem, wariness is warranted, and there are worse models out there, but the problems are also real and worth discussing.
Sure, but doesn’t it require 4 years of school + 4 years working under a licensed PE, then state exams?
It is a very different set of trade offs than what we have in CS, where companies are more than willing to take a chance on bright people with non traditional backgrounds, including people that never went to college or dropped out. As a lifelong self-learner, I’ve certainly appreciated the lack of “credentialism” at tech companies.
The one part of the PE thing that I would like to see borrowed is the exams. If all companies are just running everyone through the same kinds of general CS problems anyway, might as well save everyone time and improve consistency by making a standardized test.
I don't mind the standardized exams part, but the part I really like is the apprenticeship. Elsewhere in this thread I suggested that maybe we could pick that part up without going all in on the schooling side.
A month off? I completed an MSCS (including a thesis) while working full time with two young children at home (my daughter was born in between my first and second year). I can't imagine educational/licensing requirements that would be more difficult than that; you can continue your education without dropping out entirely.
Everyone in the medical profession needs continuing education (CE) credits. Not much different from certs that support folks need to obtain if they work government/ DoD positions.
CE classes for nearly all professions are a joke compared to the shit software engineers are expected to go through to rehearse for interviews.
As for government work, I'm not sure where you got that idea from. I did government work for 14 years and never needed any certifications. If I had, the company would have paid for it and it would have been on company time.
Imagine CE credits for things like distributed systems and machine learning as you move along in your career: wouldn't that be more useful than brushing up on that Trie algorithm you'll never implement?
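For readers who haven't seen the trie being alluded to, it's a prefix tree: each path from the root spells out a word character by character. A minimal Python sketch of the version interviews tend to ask for:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        """Walk down the tree, creating nodes as needed, and mark the end."""
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):
        """Return True only if `word` was inserted (not just a prefix)."""
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word
```

Which is exactly the kind of thing that's easy to memorize for an interview and then never write again on the job.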
Sure. So get that executive masters or Coursera certificate, and then spend an hour convincing an employer that you actually learned something in your course.
It would be great if the continuing education most of us actually enjoy doing were a valuable thing to do. As it is, if you want to switch jobs, it's much more valuable to study old things that help with whiteboard interviews than to spend time learning new things.
I come from academia and can confirm your #2. If you want to get a professor job, it's all about letters of recommendation, publication in prestigious journals, talks at prestigious conferences, etc. Programming interviews are extremely fair and meritocratic by comparison.
How is academia unfair in that respect? "letters of recommendation, publication in prestigious journals, talks at prestigious conferences" are a reflection of your academic abilities in your field, no?
Sure, there is some noise; papers get rejected where they should not have been, other papers that should have been rejected do get in, etc. However, by the time you're applying for a professorship, your abilities should clearly shine through that noise.
It's who-you-know, not what-you-know. I should mention, by convention, you do not get to see the letters of recommendation yourself, so you're at the writers' mercy. (At least in my field, mathematics; others might be different.) Papers and conference talks become harder to do if you're no longer affiliated. Your abilities will shine through if you're Albert Einstein, but the 99.99% of "rank and file" professors don't need to be Einstein.
I'm not intending this as an indictment against academia. They're faced with a hard problem: many more candidates than job openings. They couldn't possibly do whiteboard interviews like in software, just because of sheer logistics. I don't know how academia could do it differently. Software can do it better because there's a LOT more money in software.
Who you know is very important for academics. The professors I know are always on the lookout for good collaborations and to discuss new problems. You can only be successful in academia if you have good ideas, can articulate them clearly, and know what other people care about.
Being smart and knowing your areas are only some of the requirements for being a professor. Having just those two is insufficient.
>You imply that strong reference letters, strong publications and talks at top conferences are obtained unfairly. Why do you think so?
There's no implication that the letters are obtained unfairly, but that doesn't necessarily mean that the resulting outcomes are fair. The point is that if you find yourself in a situation where you can't get hold of these things and yet you are, in fact, really good, there's nothing much you can do about it short of waiting a few more years and hoping to build up a better professional reputation. In contrast, you can just show up at a coding interview, demonstrate that you can in fact code (at whatever level is required), and land a job.
Case in point: I moved from academia to coding by doing this (without any relevant qualifications or having worked professionally as a software engineer). There is no equivalent progression for someone who wants to move to academia from coding.
The equivalent to acing a code interview for an academic is submitting a terrific paper. Papers are typically submitted to open calls without any referrals. If the paper has an impact, the rest of it follows.
Prestigious invited talks don't come out of the blue, but usually as a result of consistent work that people care about.
If a researcher can't get their papers published, in an activity where that is a key metric of success and the key communication medium, what's the basis of the 'really good' assessment?
Look at the list of keynote speakers at any major conference you understand. Are those the wrong speakers? Did they get this privilege unfairly?
I finally got one of the papers I'm proud of published! It was in review at a major journal for over a year, and then was soft-rejected for being too specialized (nice comments from referees though). Then it went to another journal, and was most of the way through the review process when the entire board quit to start an open-access journal. So we withdrew the paper and submitted to the open-access journal. They finally accepted with revisions last fall, and so I think it'll get published soon. Only three years from start to finish, where "start" refers to posting the finished paper to the arXiv!
Every job application process has its own kinds of luck and failure modes. The time scales of academia lead to very different distortions than those in software engineering.
Congratulations! A three-year ride is an outlier. I hope your arXiv version got some citations to keep you going emotionally and professionally in the meantime :)
It's quite absurd to suggest either that (i) the review process is entirely fair or (ii) that it is possible to land an academic position merely by publishing papers. (If so, where are all the professors without PhDs these days?) I don't claim that the process by which hiring decisions are made in academia is notably unfair by the standards of life in general or in comparison to many other career paths. But when talking to another (ex) academic, you must know that you can't get away with those kinds of simplifications.
I don't understand your rhetorical questions. Clearly, yes, conferences choose bad speakers sometimes.
>The equivalent to acing a code interview for an academic is submitting a terrific paper.
You can get a coding job just by acing a coding interview (and having basic social/professional skills). So this part of your comment did appear to suggest that merely publishing papers can be sufficient to obtain an academic position.
What possible reason would I have to interview someone if they had neither relevant experience, nor a degree? Should I just ask random people I see on the street to come in for an interview?
If they can write a good covering email, have a few github repos, and some other indications that they might be smart/capable. I know from experience that it is easy to get interviews on that basis.
If you're talking about people who have no experience in the sense that they've never written a line of code, then no, obviously not. But professional experience and relevant qualifications are not necessary.
As with any job application, you need to have something going for you. It just doesn't have to be any of those two things.
Ehh, I guess, but lacking relevant qualifications and experience doesn't even make it difficult to get a coding interview. It is no doubt the case that most people being interviewed have relevant qualifications and experience, but that's not because there's all that much of a barrier to people who don't. There is certainly far less of a barrier than there is in academia.
"It doesn't really measure what matters for job performance but tech pays well and so many people want to do it that you need barriers to entry. The real losers of the hiring process are the companies that waste lots of money on inefficient hiring practices."
The other losers are people who have the skills and aptitude that are actually needed, but lack the thing the interview measures. Yet other losers are the people forced to work alongside those who ace interviews and get rewarded but are not actually good at the actual work.
The side note is not just that hiring doesn't really measure what matters for job performance. It is also that companies themselves don't seriously care about it or spend time thinking about it. That affects not just hiring, but also promotions and other rewards, which in turn causes additional problems and unhealthy workplaces (excessive crunch being one, the huge amount of socialization expected at some companies another, ridiculous technological decisions too, and so on and so forth).
The focus on hiring is not about assessing job performance. This is absolutely clear to me after spending a bit of time as a contract worker with an exemplary record and a page full of recommendations from full-time employees at many levels, who still has to pass through the same hiring gatekeeping process. It's now required by HR that everyone be interviewed in the same manner. Performance means nothing in the interview - and the interview has absolutely nothing to do with the job; it's just a bunch of algorithm questions. It's incredibly frustrating and disheartening to know how much they are skewing towards people who can pass the test rather than people who can excel at the job.
I think that "everyone is interviewed in the same manner" thing can eventually evolve into a better hiring process - because it will over time produce data that can be learned from.
But, I also think that it will end up split into different kinds of interviews for different positions. The older I am the more I believe that there is no single "good in tech" aptitude and that such notion is nonsense. Rather, different tech jobs require very different amounts of abstract thinking, logic, patience, memory, aesthetic feel, detailed knowledge of some sub-area, ability to communicate, self control, punctuality and so on and so forth.
You can be quite crappy at assembler and heavily talented at enterprise Java, for example. You can be good at smaller projects and fail completely in a large-codebase, many-people situation.
> I think that "everyone is interviewed in the same manner" thing can eventually evolve into better hiring process - because it will over time produce data that can be learned from.
I agree also in theory, but as you said there are vast differences in speciality. Some programmers are far more product focused and have an eye for design and communication, and some only do well when they're deep in the weeds of some obscure perf optimization. The latter gets hired quicker, while the former can often do a lot of useful things for an organization.
This has been my experience too, and I’ve thought long and hard about how we’ve gotten to hiring practices that reward 6-week boot campers more than seasoned pros who do outstanding work. I’ve come to the same conclusion, that it’s got to be because of bloated HR. It’s harder for me to pass interviews today than 5 years ago, despite being less junior and more than capable of doing an A+ job.
> If you compare the effort required to get a high paying tech job (really just doing practice problems for a few weeks/months) to any other high paying, professional field we have it much better.
So here's the selfish-as-a-hiring-manager side of not liking the current style of interview questions: I don't want my bar to be so low that studying for a few weeks or months is all it takes.
One or two medium-level problems to make sure they're not full of shit about being able to write code at all, sure, but the rest is gonna be a lot more targeted to the role.
If you are a great programmer, then within two weeks of working with me you will be great at my company - in fact when asked how you are doing I will be shocked when I realize that you have only been here 2 weeks instead of 5 years.
This can only be true if your codebases are very small (e.g., at a rinse-and-repeat site mill).
For anything non-trivial, as your code grows the amount of time required to understand it will grow too -- even if you're using "standard" frameworks, etc. -- because the business logic driving your design decisions is learned with experience. For example, we have a guy at our large company who's a terrible programmer but we keep him on the team (albeit somewhat quarantined) because he's been there forever and has enough institutional knowledge to be valuable.
As your systems grow even more, there will not be off-the-shelf solutions. You'll have to sit down and do some real-life architecting and that won't be learned in 2 weeks. Even if you could perfectly document all your systems and design intentions (you can't), well documented decisions still take time to be read and "soak in".
Exactly. Add interaction with highly complex custom hardware and you're looking at more like 2 years rather than 2 weeks to get to the point where anyone can claim expert level knowledge.
I've been on both sides of this with large code bases, and I disagree. A good programmer can get into even very messy code bases quickly. Even in the worst case of ugly code there is a structure/architecture trying to keep things understandable.
Or maybe it is easier in large code bases because the worst programmers fail? I don't know, I haven't worked with many tiny code bases.
I've been at a new company for a few months, and still find the codebase confusing/frustrating. At my previous job, I was a tech lead/architect. Here I feel like a dummy. Talent can't necessarily overcome poor documentation.
So what are you saying here? You only hire great programmers? And they all shock you in the same way? Or you hire some mediocre ones too who don't shock you that way? And how do you interview them?
We hire a mix. The good ones (there are a lot of good ones) are quickly making useful progress. The bad ones sound good and seem to be asking the right questions, but never really make progress. My boss said he knew in a week if someone was good, because that is how long it took for the team to say "he knows what he is doing, we rarely check on him", while for the bad ones it was "he is making progress but isn't ready to be left alone". Only after a couple months are the other programmers willing to admit that the guy who seems like he knows how to code can't write code.
I generally agree with this - I've had pretty solid impressions of capability after a few weeks that have largely been proven out.
The challenge I'm still facing is crafting interview questions that most effectively match those two-week impressions.
My issue with purely sticking to coding questions is that I've seen a handful of people study enough to nail those and then be completely uncreative or unmotivated beyond that. Specificity seems to be a key thing in other types of questions, to at least weed out the hopeless. You built this thing? You had that experience? Tell me the details. Tell me the details of those details. Etc.
One of the things making it hard is that the true standouts are rare enough that I've only hired a couple of them, so it's hard to find what's meaningfully shared between them that's identifiable in a single day.
I tend to disagree with this close-minded judgement, if only because I’ve seen employees who I’d written off after a few weeks become the go-to brain after 6 months on the job. They exhibit a quality that can turn things around. Maybe you haven’t worked with enough really good programmers.
I asked because the relevance was not evident to me, in a discussion of hiring processes, unless it was directly related to his own hiring process in some way.
It's not the interviews. It's what it takes to even get an interview. Doctors have 4 years of med school, then 3-7 years of residency, then probably a fellowship if you're in a competitive specialty. Lawyers get 3 years of law school and then probably only get a job at a good firm if they went to a top law school. Then they work terrible hours. Finance jobs are very hard to get unless you have the right background. Other engineers might have an easier interviewing process but they don't get paid as well (plus they need a real engineering degree).
Petroleum Engineers (US median - 129,990USD) and Computer Engineers (111,730 USD) are some of the highest paid engineers there are. And quite a few are in the same realm as Software Engineers (100,690 USD):
And when you consider the skewing of those salaries due to the majority of software engineers working in LA (and surrounding SoCal areas), SF, Austin, Denver and New York vs petroleum engineers working throughout the interior US, the gap is much larger.
Software engineers are paid well, but there are definitely other engineering fields beating them.
"Software engineer" covers a huge range, from an $80K/yr senior engineer at Epic in Ohio to a college hire at FAANG getting $200k total comp in year 1 and $300k/yr in year 5.
Which is why I used the national median, and also pointed out the skewing. There are a ton of SWEs in SF making $140-190k (and more with experience and title increases), but you're unlikely to make anywhere near that in Indianapolis, Cleveland, Mobile, etc.
Petroleum engineers, on the other hand, make $100k+ in pretty much any region.
Well being a petroleum engineer does limit you to regions that have oil and gas (or I guess you could work at a refinery too), but I see your point that there are some engineers who are compensated similarly to software engineers. I am not sure what their hiring process is like though.
I recently interviewed at a well-known software company, was turned down, and have very mixed feelings about the entire situation. I was tasked with a total of 7 coding problems, 5 of which took place in an onsite interview that lasted around 5 hours. The problems were fun, the interviewers were great, and I'd ultimately rate the experience as the best hiring process I've been a part of. I personally think that these problems were exactly the type that tech companies need to be using as they were focused on creativity and your thought process as opposed to some "genius detector" brain teaser. I passed each problem with flying colors, except for the final one.
The point of this final exercise was for me to talk through my thought process. It was a debugging problem, and I eventually succeeded, but not without fumbling around for 15 minutes or so. The solution was a stereotypical example of being "right in front of you", but the nature of this exercise threw me off. When I look back, the analogy that makes the most sense to me is the boxing glove wall from the TV show, Wipeout [1]. It's a simple problem of getting from point A to point B with the twist of getting punched in the face, or the stomach, or the leg, at seemingly arbitrary intervals. I can see this as a good simulation for the reality of an engineering job (being faced with unplanned interruptions and having to bounce back from them) but, at the same time, I've never had to verbally talk someone through every keystroke and mouse movement that I make while debugging. I was frustrated with the exercise and with my inability to immediately diagnose the issue. The interviewer was amazing at getting me back into the right mindset after noticing I had fallen off the wagon each time, which showed me just how good of a manager they were. However, the more I realized how terrible I was doing, the worse I performed. I left with the unmistakable feeling of dread having set up camp in my stomach. Despite arriving at the solution, I knew I had blown that exercise.
In the end, I was turned down. The official reason was that another candidate had accepted an offer for the position, but the hiring manager also told me that my programming skills were perceived as sub-par due to the final exercise. They said I'd be a great fit culturally, and that their team enjoyed the conversations we had, but my Python skills just weren't good enough. Being someone that really identifies with their abilities as a programmer, with Python being my language of choice, this verdict has kept me awake at night. I'll never know if the actual reason was due to the other candidate accepting or due to me being my own worst enemy during the final exercise.
The tech hiring process is one of the only times I've gotten 100% of the problems correct yet still failed. Half of me says that's bad, but another part of me understands in situations that require context like mine above. It's not the manager's job to babysit me when I psych myself out, or when I'm having a bad day. Engineers need to be reliable and stable enough to work efficiently on their own. Had I been alone, I would have found the solution much quicker. But you're almost never alone in a real work environment. Some of these exercises are measuring much more than just algorithmic competency.
I wouldn't worry about it. It is their loss.
I would imagine that if Google did a randomized controlled trial among a pool of qualified resumes (say, after they pass a phone screen or a small coding challenge), they would find that job performance is not correlated with technical interview performance. They already found that GPA and where you went to school don't matter. In fact, Bock thinks it's only behavioral traits that determine performance. So here's a challenge to Google or Facebook: ask the behavioral questions, ask some technical ones, score only on the behavioral ones, and hire. In one year, see what the "performance" levels are. I'm guessing they already ran the opposite test, where they accepted based on technical performance and ignored the behavioral pieces, which is why Bock now says that behavioral traits are the most important.
My hypothesis is that the bar is still too high and that if you have a basic technical baseline you'd still perform well regardless of your white-boarding abilities. Google could run this experiment and in fact they might already have enough data to answer it with a causal analysis.
The key issue is how do you define "performance" both in the interview and in your yearly review... nobody has a clue how to actually measure performance.
Yes, a time-consuming disaster, the tech interview has become. Same thing just happened to me yesterday. I'd be willing to challenge the interviewer to an impartial programming competition.
> The real losers of the hiring process are the companies that waste lots of money on inefficient hiring practices.
In this case though, how is 'lots of money' wasted? It seems that Google hired good candidates despite some of the questions being bad. I think the point here is that Google is looking to increase the efficiency in interviewing, but they're already at a high efficiency level anyway so they continue to be successful in attracting and keeping talent.
Well sure, there are a lot of good candidates. Most bad candidates don't even apply.
What if instead of reading resumes/interviews/whatever else they did they just put all the names into a hat and made offers to everyone selected? This is your baseline, does your process beat this? If not, you could save a lot of time/effort using the hat selection method. (note that it is possible to be worse than the hat method)
You know, there are a lot of cool things I've learned that I wouldn't have learned if reposts were strictly controlled.
I would even hypothesize that an ideal web community in the vein of Reddit or HN is not completely devoid of reposts. There must be some ideal percentage that makes the site most useful/successful.
I don't know where on the number line those figures land, but I'm sharing something that crosses my mind when I see a repost. HN seems to have a decently low number, IMO.
Surfacing old news on Hacker News to build karma points is too frequently a successful play. All you have to do is slap a (2010) on it and all is forgiven.
I'm somewhat baffled by the article. I started at Google in 2004 and it wasn't using brainteasers like "A man pushed his car to a hotel and lost his fortune" or "Why are manhole covers round" even then. It was mostly challenging algorithm and programming problems. I think there's a lot of urban legend in these brainteaser questions.
I was interviewed in 2007 and got an entire phone screen full of them. Thankfully I had gone through "How Would You Move Mount Fuji?" and already knew all but one of the questions the interviewer gave me.
People tend to forget the historical context of a policy. Google was small in the early 2000s, but it was one of the hottest companies in the valley, so it could afford to be picky. It was solving really tough computer science problems at massive scale, so it needed to find talented true geeks, those who live and breathe math, engineering, and computer science. The so-called brainteasers were usually thinly disguised, well-designed math or algorithm questions. There were also few interview-prep books back then, so those who could solve the teasers were likely creative geeks.
Fast forward to today: we have a huge amount of prep material and prep schools, so it becomes hard to tell who's truly talented and who just has a great memory. Google doubled its workforce in the past three years, from 35,000 people or so to 70,000. You just can't expect everyone to be passionate about and talented at math or engineering. Besides, Google does not need too many people of that kind either.
However, if a small company is tackling a hard problem that does require tons of math, computer science, or pure ingenuity, hell ya, brainteasers can be of great help. They don't even need brainteasers; they can just directly ask math problems.
>...However, if a small company is tackling a hard problem that does require tons of math, computer science, or pure ingenuity, hell ya, brainteasers can be of great help. They don't even need brainteasers; they can just directly ask math problems.
Yes, if the job requires tons of math, it makes sense to ask math questions; it doesn't necessarily follow that people who are good at brain teasers will be good at the job. According to the article, here is what Google found with regard to asking brain teasers:
>...“We found that brainteasers are a complete waste of time,” Laszlo Bock, senior vice president of people operations at Google, told the New York Times. “They don’t predict anything. They serve primarily to make the interviewer feel smart.”
Aren't a lot of so called 'algorithmic problems' similar to brain teasers? If you have seen it before, you'll know it. But otherwise, there is not much chance of you cracking it without an 'Aha' moment, which may or may not come during the one hour allocated for an interview.
Weird tricks in bit manipulation, linked lists, array, hash problems etc. are all standard in interviewing and are still used, even at Google, at least 2 years ago, when I last interviewed there.
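To give a flavor of the "know it or you don't" category, here's one example of my own (not a question the parent claims any company asks): the classic single-expression test for whether an integer is a power of two.

```python
def is_power_of_two(n: int) -> bool:
    # A positive power of two has exactly one set bit, so clearing the
    # lowest set bit with n & (n - 1) leaves zero only in that case.
    return n > 0 and n & (n - 1) == 0
```

If you've seen the `n & (n - 1)` trick before, it's a one-liner; if you haven't, no amount of general skill guarantees the "Aha" arrives within the hour allocated for the interview.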
This works, though, because Google, and the industry in general, has a policy of rejecting candidates rather than accepting them; hiring is very risk-averse. Candidates switch jobs frequently at the beginning of their career, so there is a rotating pool of good candidates the companies can pick from.
Yes they are, and like you, I consider them exactly the same. Examples I used to ask during interviews were "traverse a matrix in inverse diagonals", "traverse a matrix in a circular way", and the typical "first duplicated number" (that one is on CodeFights and CodeWars).
Those are problems that you either know the "trick" or you don't.
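For the curious, here is a plausible sketch of the "first duplicated number" question mentioned above (my own Python rendering; the sites linked may phrase it differently):

```python
def first_duplicate(nums):
    # Scan left to right, remembering values seen so far; return the
    # first value that shows up a second time, or None if all distinct.
    seen = set()
    for x in nums:
        if x in seen:
            return x
        seen.add(x)
    return None
```

The "trick", such as it is, is just knowing that a set gives constant-time membership tests; a candidate who hasn't seen it may grope toward a quadratic nested loop instead.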
The main problem with the majority of them is not the problem itself. Originally they are meant to probe a candidate's problem-solving process; however, as they are passed among interviewers, they degrade into a binary "solved or not solved" check, because that is the path of least effort.
I once created a problem that I love to ask in my interviews. It deals with using binary search to get the total number of elements in a list that is only available through a "broken" API. A lot of people I interviewed told me that they loved the question; however, when some colleagues started adopting it, I realized that they were basically expecting the specific answer they knew, when the real value of the exercise is to work with interviewees to "solve a problem together".
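I don't know the parent's exact API, but the shape of the exercise is presumably something like this (a sketch under my own assumptions: the only thing the "broken" API can do is tell you whether an index is valid):

```python
def count_items(has_index):
    # has_index(i) -> True if index i exists in the hidden list.
    # Grow an upper bound by doubling, then binary-search the boundary
    # between the last valid index and the first invalid one.
    if not has_index(0):
        return 0
    hi = 1
    while has_index(hi):
        hi *= 2
    lo = hi // 2        # invariant: has_index(lo) True, has_index(hi) False
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if has_index(mid):
            lo = mid
        else:
            hi = mid
    return lo + 1       # lo is the last valid index
```

The parent's point stands either way: the interesting part is watching a candidate discover the doubling-plus-bisection idea, not whether they reproduce this exact code.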
I think this is also because no one really teaches you how to interview. Giving multiple interviews doesn't, at least IMHO, make you knowledgeable enough to be on the other side.
It's kind of expected that at a certain level in your career you just know how to do it. For other skills, like management, you at least have peers who are doing the work day in and day out whom you can learn from. Interviews happen behind closed doors, usually, though I have had some where one person was shadowing the interviewer.
Also, if you conduct a terrible interview, it's not really going to impact you. There is no performance report that'll reflect negatively on you. There will be another candidate with whom you'll have a rapport, or someone else will interview someone else.
As you said in your anecdote about the binary search problem, not everyone is best at being an interviewer.
But aside from having a dedicated interviewer at big companies, or outsourcing interviews at startups, I don't really see a solution. And outsourcing, whether internal or external, comes with its own set of problems.
> I think this is also because no one really teaches you how to interview. Giving multiple interviews doesn't, at least IMHO, make you knowledgeable enough to be on the other side.
This is something that I have ameliorated in my teams in the following way: when an engineer is going to start interviewing people, he goes through the following process:
0. Before starting, we provide some guidance on what we look for in interviews and what we ask (a 1-hour meeting between the new wannabe interviewer and the manager).
1. First he shadows an interview done by a Tech Lead or Sr. Engineer (those are the ones that generally do the interviews)
2. Then he does two interviews, where he asks the questions but is shadowed by a Tech Lead or Sr. Engineer. He gets feedback about his interviews, is asked to explain his own feedback, and we tune it to the expectations of the team.
3. Finally he starts interviewing people on his own.
This has worked well enough for me, especially once we had a shared Google Doc with a) the list of questions to ask (along with notes on what to look for and how to guide candidates) and b) a "competency matrix" (like http://sijinjoseph.com/programmer-competency-matrix/ ) tailored to what we were looking for.
Finally, one of the things I always emphasize to the people who interview for my team is to remember how they felt during interviews, and to be aware that as the interviewer you ALWAYS have the upper hand. I hate being interviewed; I hate the feeling of it, and the fact that you have 60-90 minutes to demonstrate that you know whatever the company has chosen to ask you about, with no regard for all the other stuff you know that would be useful for the position but that they don't ask. And the nerves.
With interviews, you are essentially trying to predict the job performance of a candidate when they are at your company, so why not make it as close to that as possible?
For the face-to-face, tell them to bring their laptop (or provide one), then explain a real, actually hard problem that you or one of your colleagues solved, one that took roughly 2 hours. Leave the room and let them try to solve it at their own pace, without the pressure of people staring at them while they code. Come back, ask them to explain their solution, and quiz them on different parts.
This has been by far the best test I've given that predicted real on job performance.
To be a true scientist, you shouldn't necessarily do what seems obvious. For all I know, maybe whiteboarding is a better indicator of performance than 1 hour of coding on a laptop. How well does it compare? I bet that's a closely guarded secret.
If you ever interview with me, be aware that I'm only allowed to ask from a pre-selected list of this type of question. PLEASE PLEASE PLEASE spend time thinking about your situations the day before! I expect you to come with about 5 scenarios in mind that you can then twist into answers to the questions I ask. Come up with some situations in advance; you don't want to have to think of them on the spot. (In practice you will still have to adapt them on the spot, but once you get past the first question you will relax and be able to think on your feet better; for some questions a new situation will then occur to you in the moment.)
I know you had to work with a difficult person in the past; remember something that happened. Don't make that person look too bad: you don't want to come across as bitter, so select a different situation if that might happen.
I know you had to make a decision without enough information. I know you had to do something controversial.
When I'm interviewing I'm only allowed to write down and consider The Situation/Task, your Action, and the Result. (This is called the STAR system, there are other variations)
If you can make sure I clearly understand each that is to your benefit, though I will ask clarification questions to get that.
I think I would prefer a role-play / what-if scenario for the reason that I simply don't care about revisiting certain types of work experiences. As a result I am likely to not remember details and even more likely to be unenthusiastic in the retelling. Interviewers should also consider that human memory is fundamentally malleable so any recollection will be at least a partial fabrication.
In role-play, candidates can more easily lie about what they would do; in fact they can't help it, because they know what they want to do, even though in the heat of the moment they would do something else. Thus role-play is not very useful.
Research says that it is fairly easy to tell if someone is fabricating their situation (I've never caught someone, but I've only interviewed a couple of people, so my sample size is not significant). Thus, as an interviewer, I know you are telling me how you actually reacted to a real situation while under stress.
You have been in many situations over your life. Don't bring up ones you don't want to talk about.
Role-play is useful in that the candidate is telling you how they would act in a situation, which is how they think people should behave professionally. If they are making that up, then they would likely fabricate past situations as well.
I don't agree that it is easy to tell when someone is fabricating, if that someone is a proficient liar then it will be impossible, and that is the exact type of person you want to exclude from further consideration.
I'm not sure why I struggle to answer these questions. Maybe I could be better at telling stories.
On the other hand, I feel like it could be more to do with my ability to talk about myself. For example, when asked about my major achievements and challenges, I think of scenarios where we overcame problems as a team. I think more in "we" than "I". I feel like many of these questions are looking for answers in the form of "I did all these amazing things".
Further to that, maybe I just don't rate myself as highly as I should. In the past, interviewers who spent the time to go deep have said something along the lines of "Wow, you've done all this. Why didn't you tell me earlier?" I had one person say "It's like getting blood out of a stone." So maybe I need to be more self-promotional. Gotta back yourself, right?
Maybe I can add some context. A bullshitting interviewee loves the "we did this" type of answer since it deflects the question from him. So a good interviewer will dig in and ask "that's great, so what was your specific role in resolving that situation?" The interviewer isn't interviewing your team, he's interviewing you so he needs to know about your specific skillset, responses to situations, outcomes, etc.
We vs I is an interesting question. If the company really values teams, then saying we and pointing out your place on the team is better than successfully convincing them that you did it alone.
Beware of this, though. Some companies will say they value teams when in reality they do not. Others value it so highly that they hide it, lest a candidate who doesn't work well with teams, but needs the job badly enough, hide that fact from them.
Agreed. I feel like the best thing to do is say we overcame these challenges as a team and this is the role I played. And these were my personal challenges.
Hopefully there's a different bar for different roles.
If you're someone who's been a technical manager for 5 years, and I'm asking you about, say, a time you had to give (or receive, for that matter) negative feedback, then if you've never had to do that I might question the breadth of your experience and readiness a bit.
I think the key here is to be very specific about the type of situation you're looking for, and to ask a bunch of them to get a broad look at the person's experience.
It is hard, but it's very common and pretty easy to prepare for. I don't think it's unreasonable to expect candidates to have stuff like this ready to go.
> "describe a time they solved a difficult problem."
I actually like this type of question, either as the interviewer or the interviewee. At least it allows me to demonstrate the thought process I put into a problem I actually encountered, instead of talking about stuff I've never needed to do, like a lot of the algorithmic material.
I like them as an interviewee, but as an interviewer I find that they can be very challenging to grade in a consistent way. The upside to "invert a binary tree" type questions is that it's a lot easier to put together a rubric (although the downsides are myriad).
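For anyone who hasn't met it, the "invert a binary tree" exercise itself is tiny, which is exactly why a rubric is feasible for it; a minimal sketch:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(root):
    # Recursively swap each node's children; returns the mutated root.
    if root is not None:
        root.left, root.right = invert(root.right), invert(root.left)
    return root
```

The rubric can check concrete things (handles the empty tree, recurses on both sides, doesn't lose a subtree), whereas "tell me about a difficult problem" has no such checklist.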
As an interviewer I also like to follow the thought process. I have had candidates who could solve a lot of questions pretty quickly but as soon as the question deviated a little from the regular path they were completely lost and had no ability to adapt. I am really happy with stories that follow a path like "Tried A, didn't work because of X, then tried B, didn't work because of Y, then tried C and things came together".
It is really amazing, when you get into it, how many people with excellent-looking CV and experience just can't code their way out of a wet paper bag.
They don't know what a cache is or how to tell when you've run out of it; can't use pointers; can't write a loop; don't understand object lifetime, or that objects have one; or think a null pointer is, or should be, the same as an empty string. (This last is the most amazing, because it seems to be official policy at Google!)
The elementary coding and debugging exercises that are so annoying are necessary because most candidates, by far, can't begin to do them -- MS:CS, real-time OS thesis project, and N years experience notwithstanding. What have these people been doing?
I have seen brainteasers used well, but only as a jumping-off point for a conversation that tends well away from the puzzle.
I used to use a design problem (no coding) that bright undergrads solved in two minutes, BSes in five, MSes in 20, and PhDs either never, or as fast as an undergrad. Everyone who solved it used identically the same hand gesture when describing the solution.
ITA Software used to publish great programming puzzles that you were invited to solve and send with a CV. (Google bought them out and eventually took down the puzzles, back in 2010, but they might still be on the Wayback Machine.) To learn how your solution compared to theirs, you had to send it! It might not work so well anymore, since everything gets posted nowadays.
Riddles fell out of favor, but now we're faced with the next disastrous trend, which I just wasted an hour on yesterday: coderpad.io.
Imagine a binary pass/fail test that determines your future, next to a suitcase full of cash, clock ticking, and another developer watching over your shoulder, also feeling smarter because they already know the answer. An answer they researched in solitude at a leisurely pace.
Conclusion: "We can't find any qualified people, there must be a shortage."
I much prefer coderpad or HackerRank, live with the interviewers, over whiteboarding. I find talking through a first approach, improvements, debugging, etc. is very useful -- both as an interviewer and candidate.
The timed HackerRank and such sucks though. Way too stressful. And easily subverted. They enforce fullscreen but you can just use another computer to look things up.
I wonder if, comparatively, other fields have as many "fakes" as I've seen try to get jobs in mine. I'd say 1/20 candidates I've interviewed aren't imbeciles.
Better than coding on a whiteboard is hardly a compliment.
Maybe I didn't write clearly enough, but the reason you are attracting so many "fakes" is that you are testing for "nerves of steel" rather than coding ability, and mistaking a lack of the former for a lack of the latter.
We don't ask anything hard. For JS candidates: given [1, 2, 3, 'a'], return an array without the letter. Debug an AJAX call (the URI protocol has an obvious typo). For React candidates: implement an event handler for a button that increments and shows a value (all prewritten; just add this.setState({value: this.state.value + 1})).
We also guide them and give hints and encourage interaction, or if they need to think we leave them alone.
If that requires too strong of nerves, uh, oh well I guess. I don't know how else to judge their technical ability and soft skills.
The businessinsider.com article linked by the posted qz.com article has some hilariously bad answers to the leaked Google interview questions. Most of them distill down to "here's how we would make a clever attempt to avoid answering that question by answering an easier one", and some are followed by "here's a smarter answer, submitted by a reader infuriated by our published response".
If Google really wanted a fair and objective hiring process, it would be easy. They don't. The coding interview is one of the most subjective processes I've seen in hiring. The interviewer reviews the resume and then gives a vague problem that has many answers, only one of which they are looking for. Then they either let you flounder or straight up tell you what they want. At the end, the code is erased and the interviewer gives subjective feedback. If they wanted a good process, it would be double-blind, consistent, and adjusted for efficacy. The stats that they do share show the opposite: Google's top performers were massively over-represented among those with the lowest interview scores. Steve Yegge himself interviewed 6 times before getting an offer.

One question you might ask is whether Google even wants top performers. I don't think so, honestly, and they say as much with their focus on culture. They want people who will go along to get along and not cause conflict by disagreeing or doing things their own way. This type of cohesion is much more important to Google than individual performance. The idea is that the interviewer is the expert and you are the Padawan trying to learn by exploration. Be humble and take hints. If you disagree with an interviewer, you're done. If you do something they don't understand, don't try to explain; just undo it. The interviewers are randomly chosen among volunteers, and the ones who volunteer the most were shown to have the worst performance in assessment.

So the lesson here is that it's a social game. You don't need to learn every algorithm; just know the basics of any programming language, and I'd say even prefer things like for loops to functional maps, because most likely they won't follow. The social game, however, is an unlimited liability. You need to make the interviewer feel like the expert in their problem, and feel like they taught you something profound.
This is not your performance; you are just the magician’s assistant, acting out their will.
The biggest problem with hiring, IMO, is finding an effective filter.
My employer now works with contracting firms where about 80% of the candidates they send us are excellent and we hire. The interview is almost a formality. When we reject a candidate, it's usually because the candidate is a bad fit for the job. (Edit) The rejects are always very talented and would succeed in different assignments.
When we hire for employees through a recruiter, the process is much harder. Most of the candidates recruiters bring us are, to put it mildly, horrible and incompetent. (Edit) The recruiter doesn't really filter for competency.
The medical profession has an industry-wide certification process. I wish we had the same, because it would be easier to filter out the morons.
Most of the time, when a company needs to develop software, it doesn't make sense to hire all developers as on-site, full-time, permanent positions.
Some people are only hired for the project and go away when the project is done. Those people are hired through a contracting firm; and the contracting firm finds them employment when the project is complete.
For us, it's easier to "hire" through the contracting firm because they usually worked with each other in past projects. A recruiter has very little direct experience with a candidate.
Ironically, things would be easier if recruiters always sent us "formality" interviews and the contractors needed tighter screening. This is because it's easier to hire a contractor for 3 months and choose to renew the contract.
I was gonna say, I haven't heard anyone talk about brain teaser interviews in years. Now everyone just complains that whiteboarding is not a good measurement instead.
Google interviewers were never supposed to ask brainteaser like questions, even before the company "admitted" they don't work.
The title is disingenuous, and implies that Google switched tactics after finding out that these questions were not useful, whereas in reality Google never started doing it because they found that these questions were not useful.
I don't think these questions are uncommon in the industry. I've seen brain teaser questions from Microsoft, for example. I think when people write about them they just attach the examples to something emblematic for marketing, and then it becomes common lore.
Google and lots of companies continued to use brain teasers even after they realized it made no sense. Of course they're happy to say this now, after using them for so long. What I've realized is that sometimes these are barriers meant to keep certain people out while giving a sense of meritocracy and justifying discrimination. It's like when looking for an apartment, some landlords will only check some people's backgrounds and not others. The same is true for credit card companies offering some people much lower APRs than others for no reason other than prejudice. We could go on and on with examples, but that's just a few.
In the end, every company becomes average because there are only so many people they can hire. This has nothing to do with brain teasers per se.
I’ve worked in tech for over 10 years and I have to say the places I did the best work are interviews I somewhat bombed. Not a surprise!
I'm unclear whether "brainteasers" are synonymous with "Fermi problems", or whether they are a disjoint set, or a superset. From some of the descriptions in this thread, it seems like some "brainteasers" are clearly not Fermi problems.
I would be interested in knowing if anyone is hiring based on ability to do Fermi problems. I see these as not being about knowing a "trick" but about knowing how to utilize your own knowledge, which is a much more general capability.
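To make the distinction concrete, here is the skeleton of a Fermi estimate, using the "people in the air" question that appears elsewhere in this thread. Every input is an invented round number, not a sourced fact; the skill being probed is the decomposition, not the figures.

```python
# Fermi estimate: roughly how many people are airborne at any moment?
# All three inputs are assumptions, deliberately round.
flights_per_day = 100_000   # assumed worldwide commercial flights per day
avg_flight_hours = 2        # assumed average flight duration in hours
avg_passengers = 100        # assumed average passengers per flight

# Total airborne flight-hours per day, spread over 24 hours, gives the
# number of flights aloft at a typical instant.
flights_in_air = flights_per_day * avg_flight_hours / 24
people_in_air = flights_in_air * avg_passengers
print(f"~{people_in_air:,.0f} people")  # order of magnitude: about a million
```

There is no "trick" to know in advance; any sensible decomposition into guessable factors lands within an order of magnitude of any other.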
At last, they do agree that brainteasers are not a measure of how well you can code. In my _opinion_, code and communication skills should be enough, irrespective of whether you can figure out how to measure 4 gallons using a 5-gallon and a 3-gallon jug.
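(Incidentally, the jug puzzle is mechanical once you model it as a search over fill states; a quick sketch, taking the 5- and 3-gallon capacities from the comment above as inputs:)

```python
from collections import deque

def solve_jugs(target, cap_a=5, cap_b=3):
    # BFS over (a, b) fill states; returns the shortest sequence of
    # states from empty jugs to a state holding `target` gallons.
    start = (0, 0)
    prev = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if target in state:
            path = []
            while state is not None:   # walk back to the start state
                path.append(state)
                state = prev[state]
            return path[::-1]
        a, b = state
        ab = min(a, cap_b - b)   # amount pourable from A into B
        ba = min(b, cap_a - a)   # amount pourable from B into A
        for nxt in ((cap_a, b), (a, cap_b),              # fill A, fill B
                    (0, b), (a, 0),                      # empty A, empty B
                    (a - ab, b + ab), (a + ba, b - ba)): # pour A->B, B->A
            if nxt not in prev:
                prev[nxt] = state
                queue.append(nxt)
    return None  # target unreachable with these capacities
```

Which rather supports the point: a computer finds the six-pour answer instantly, so knowing it says little about a candidate beyond having seen the puzzle (or the right Die Hard movie) before.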
One they asked me in 2010 was "How many people are in the air right now?"
I've conducted loads of interviews at Google since 2012 and none have included a question like this from myself or any other hiring committee members. IMO the questions about what they do for fun are way better at determining if they'll fit and do well.
Yup, I left MSFT in 2005, and sometime before that they packed all the hiring managers in a room and told us: "brainteasers: quit doing them, they are not an indicator of anything useful." The practice didn't stop immediately, but it certainly fell out of favor shortly after amongst my crowd.
I got asked the water jug brainteaser in an MSFT interview, as luck would have it I had recently watched Die Hard With a Vengeance so I knew the answer :)
Google changes the way people interact with the internet ->
Google changes the way people think about how other people think ->
Google uses its own metrics of intelligence to determine intelligence WHILE Google is actively changing what intelligence is (or at least what people think it is)