> Being a good programmer has a surprisingly small role in passing programming interviews.
And that just says it all, doesn't it? I agree that interviews should test candidates on certain basic skills, including (time/space) complexity analysis. But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm (as I was asked to do by an interviewer yesterday)? What does the candidate's ability to code a palindrome checker that can handle punctuation and spaces tell you about their ability to deliver robust, maintainable code?
I don't have the answer, but I just don't see how the questions typically asked in programming interviews give you a good picture of the candidate's actual programming ability. I much prefer "homework" projects, even if they involve me working "for free", because I feel like they ask for actual programming skills rather than the "guess the algorithm" lottery of phone screens and whiteboard coding.
I personally also prefer take-home projects over being grilled on my ability to solve obscure algorithms problems under pressure.
However, this second route also comes with a number of issues, the most annoying of which, in my experience, is the amount of time each project demands from the candidate.
At least in the traditional technical interview, interviewers and candidates tend to be roughly equally invested in the interview process in terms of time spent, so interviewers have to value candidates' time because failure to do so will end up wasting their own time as well. In take-home projects, this balance in time investment completely breaks down, and thus there's very little left to discourage interviewers from issuing ridiculously time-consuming projects.
I've done a few of these in the past, mostly to get some practical experience with a new framework and ecosystem I'm not too familiar with yet, but I think in the future I'll likely just politely decline any project that looks unreasonably time-consuming, or at least try asking for certain adjustments to the spec first.
I've gotten a take-home test that took all Saturday. I finally finished it, happy with myself and with my work, and emailed it in. The response was "thanks, are you ready for the next exercise?" I was like... um... I really need to get on with my weekend. This was deferred to the big boss, who said "It's OK, just do it whenever you have time next"... OK, so that was my next Saturday. The worst part is that after turning that in, and I am fairly certain both were done really well, I never heard a word back from them.
I was frustrated as hell, but you know, that's all part of the job. Sometimes you waste a little time while you look for work. Look at other industries such as acting: actors could easily spend a day going to auditions, waiting in lines, preparing for characters, and never get a callback. My dad is in construction, and for him to make an appraisal he has to study the house and waste a ton of time doing crazy calculations, mapping out exactly how much area it takes and how much siding it will need, considering the weird shapes around the chimney and how many hours it will take to cover it. You want to give yourself some padding, but then he most likely won't get the job, because the owner will weigh it against other "free" appraisals and take the lowest bidder. If he goes too low, he's risking working extra days for free or earning very little on the job. So doing all that free work is pretty much part of his job.
As a hiring manager I try to waste as little time as possible, but at the same time I feel like a lot of programmers are spoiled, quick to complain about their time, and pretty much expect to just get hired after a 2-hour interview. This is someone you're hiring potentially for years (and any mistakes are hard to fix later), so yes, time will be wasted on both sides to try to ensure a good fit.
As somebody who regularly employs developers, and being a developer myself, may I suggest you put your Saturdays' effort into an open source GitHub repo or something similar? Sure, this takes time too, but afterwards you can just point your prospective employer to your superbly styled and documented code on GitHub. Both parties win: you don't have to spend time on stupid exercises anymore, and they can get a very decent impression of your coding skills.
I was thinking about posting some of my solutions to past problems on GitHub. The problem is that a new employer has no way of knowing it was me alone who produced the solution, or how long it took, so I couldn't blame them for not just accepting my public repo as proof of my ability.
For example, I had one interview where I struggled to complete a WPF test project simply because it was coded in a way I've never done before, using controls I've never used, and the test was estimated to take 10 minutes. Meanwhile, on my laptop in my bag, I had a complex WPF solution I'd been working on in my free time for months as a possible product to market and sell, which more than showed my competence in the platform. The people in my interview didn't even want to see it, on the pretext that they had no way of knowing it was my code.
It's just that with the programming tasks that you bring home, the hiring manager isn't wasting that time with you. With programming tasks that you do during the interview, they are at least wasting the time with you.
Or, like when I interviewed once at a very large broadcasting org: they put you in a meeting room with a laptop (Windows 7, which I had never used before, a bad keyboard, a bad editor, no internet or reference material), give you some half-arsed questions, and piss off for 30 minutes without you being able to ask what they actually want. Then 10 minutes later someone else comes in and asks you what the hell you're doing in that meeting room. You have no way to contact your interview partner, no idea where he went, they hassle you for about 10 minutes about it, and when your interviewer comes back you haven't been able to even get into the task, let alone write some code...
Waste of time also, especially if you factor in a 2 hour commute each way.
+this ... it's exactly why, when I do "favors" for people, my requirement is that they're in the chair next to me... even if they aren't doing anything and are bored as sin... so they understand the time/effort that it takes.
It's generally inappropriate for someone to ask you to do more than a day of work for a take-home hiring exercise.
Simply because it is quite difficult to explain the concept in writing.
They might have a different point of view as to the quality of your work. Or they may have taken other factors in consideration (your portfolio, communication style, etc).
But not contacting you with any kind of a status afterward is definitely shabby, on their part.
And asking you to do a second assignment without taking a moment to evaluate your first (and apparently without bothering to tell you in advance that not one, but two or more "quick" assignments would need to be done), all the more so.
You can also recycle components across coding exercises. For instance, in a project with a front-end visual component, it's no longer just the bare-bones functionality: you've already implemented fonts, networking layers, and data models, and co-opted some design patterns from one company's UX person (but changed the assets) during a previous interview.
> there's very little left to discourage interviewers from issuing ridiculously time-consuming projects
There's also no disincentive for interviewees to spend an unreasonable amount of time on the project. So the test is biased against employed people and/or people with kids.
This can be easily countered though. Send out the assignment at a predetermined, convenient time and require it be returned an hour or two later.
> Send out the assignment at a predetermined, convenient time and require it be returned an hour or two later.
Except that these places very frequently tend to either (1) misstate the problem in some major or minor way, or (2) wildly underestimate the time required to produce a professional quality, bug-free, bulletproof-tested solution. Which can be easily countered by having one of their own team members sit down and take the test first. But of course, none of these places ever do that.
(Well, not "none." I'm being hyperbolic. I just mean that, in general, they probably estimate the round-trip time on these programming quizzes the way they do micro-projects on their own jobs -- as in, "Oh, I can do that in an hour" -- but in real life, it often takes 2x-3x longer).
"Pull down this data from the Instagram API and create a tag cloud. Should only take a couple of hours."
Except working out how to register and authenticate with Instagram's API took me over two hours, then after faffing about with it I realized I only had some sandboxed version that returned metadata and not the actual data I was looking for.
The task would probably only take a couple of hours if the whole environment were already set up, but the setup was the problem.
I have never used the Instagram API and had to look up what a tag cloud is just now. I can immediately say this will take more than "a couple of hours": whenever I work with a new API it takes time to set up the environment, digest how it works, and research the appropriate APIs I need to use. I am not surprised by your experience. It actually makes me angry that someone thinks a reasonable person could do that in two hours with the background I have described. Yes, two hours if the environment is already set up to make calls to the API and perhaps you have some familiarity with it -- so I can see why the person who asked the question would think "I could do it in two hours, on my MacBook where the environment is already set up and where I know exactly what API is required and what the data format looks like in the response".
This seems to happen on around 50% of the take home tests that I have been given. Funnily enough I only take the time to do 50% of them.
HackerRank is even worse. They have strange ways of wording the questions. I have to Google around to work out how to use their input and output (I work with databases all day long, not reading and writing to STDIN/STDOUT). There's no step-through debugger (which would make a lot of sense for the algorithmic type of questions they ask). Cut and paste only works in some browsers.
No one expects you to set up a database and connect to it in that sort of timeframe - yet I would probably manage it better as I do it regularly.
My personal favorite was when, after I had apparently survived a 4-hour interviewing + whiteboarding stint, and thought I'd be able to head out onto the street (as it was already quite late in the day)...
...a math PhD told me "hey, I got one more for ya..." and proceeded to give me a mis-articulated mathematical search problem which, going by his own statement of the problem, ended up having an empty class as its "solution".
> Which can be easily countered by having one of their own team members sit down and take the test first. But of course, none of these places ever do that.
At my employer we send out homework exercises, and I personally did the backend developer exercise before we sent it to anyone. I did this specifically to test how long it took. (For the frontend exercise, we didn't have anyone skilled enough on staff to do it, which is why we were hiring a frontend dev).
Did you do the test blind? i.e. Did someone give you the problem without you knowing/hearing it before? If you wrote the problem, or even heard it before you had to solve it, you had a big leg up on someone who's never heard it.
Fair points. FWIW, we've asked people how long the problem took and they all said it took a few hours, so I think it's the right scope. It really is a fairly easy piece of work.
Actually, I think this whole fear of candidates pervasively lying about time-to-completion is something of a red herring.
(1) It may seem counterintuitive to some, but my own general policy is, when it comes to little stuff ("how long did it take you to do X"), you just have to trust people, to a certain extent. Sure, some people may blatantly or grossly lie. But most likely these people will reveal their slipperiness in other ways, very very quickly.
Meanwhile -- and I think people are quick to overlook this -- playing the "policeman" role in every transaction with the candidate brings substantial negatives. By definition, it's adversarial. And generally there are (nearly always) non-adversarial ways to get the same information about the candidate ("Are they basically honest?") you're looking for. They take creativity (and an ability to read emotions and pick up on other signals), but they're there.
(2) Much bigger -- really, there's no need to sweat about the time to completion at all. Just look at the quality of the code.
It all just comes down to the fact that everything is interconnected: Good people generally turn out good stuff in reasonable amounts of time. When you're looking at good code, there is, I find, an intrinsic aura of ease and comfort which shines through it -- such that you just can't imagine it took them very long to produce it. Everything just flows -- just like it does when you talk to them.
Mediocre (and dishonest) people, on the other hand... never produce good stuff in virtually any amount of time. Sure, they can take the whole weekend to polish off their code... but it will still look bad, or at best, "Meh".
There might be some false positives (or outright frauds) with this approach, but I suspect very few. And those that do slip through are easy to spot by other means (such as asking them to talk about their solution, for even a couple of seconds).
I think the concern is that the homework task could be too large, and because people have an incentive to appear competent, they might lie about that to make themselves look good. That means that we're giving too hard a task but we'll never find out.
As to how likely it is that people will lie, given that we've hired several people who did this homework and they've proven to be as competent as we believed, there's at least some anecdotal evidence that some people have not lied.
You can't ask a candidate how long it took; they have no incentive to be truthful here. Either they admit it took all night and you think poorly of them, or they tell you it took a few hours, whether or not it actually did.
As others have pointed out, your interviewees have little to no incentive to give you an accurate time estimate.
Beyond that, I think "a few hours" is a bit too much to ask for, especially since you are presumably following up with an in person interview centered around the assignment. That's a big time commitment, and the commitment on the assignment especially is very asymmetric.
I'm not a big fan of assignments for this reason. They are just so asymmetric. I've been given interview homework that was supposed to take "about an hour" and it was more like 4 to 5 hours in reality. It felt like an unreasonable request on my time. I did the assignment, and did in fact tell them it took considerably longer than their estimate. I was invited for an on-site after all that, but declined for other reasons.
It took a few hours and "it really was an easy piece of work"? How considerate of you. Who was it an easy piece of work for? Someone with unlimited free time? An unemployed person? What a joke. Any problem is easy when you yourself contrive it. Here's an idea: how about you pay someone for the 3 hours of work you are giving them. Says a lot about you.
I would prefer that we pay people to do this, but unfortunately that's not my call. That said, no one seems to complain so bitterly about the all day interviews that places like Google do (and insist you fly out for in person).
Isn't asking people to fly out for 6-8 hours of interviews (which effectively takes 3 days minimum out of your life) a much greater burden than spending 2-3 hours on some coding which you can do at a time of your choosing? (Followed by 2-3 hours of video chat interviews spread across multiple days, no flying required).
Also, just to clarify, the homework problem is _not_ something related to our business. The solutions are of no business value to the company.
Google is hardly the only company to do in-person all-day interviews. I've had several in my career, none of them with Google. IIRC, they were with Shopzilla, Grant Street Group, and Livetext (not exactly Google-famous companies). I wasn't thrilled about flying out for 6 hours of interviews, but at the time I thought it was reasonable for them to ask for this. I'm not sure I'd do it again unless I was incredibly excited about the position in question, but I'm older and more jaded now.
As to how much we pay ... Our pay is very good, and all but one of the positions for which we've had the homework requirement have been telecommuting as well. It's actually a pretty desirable place to work if you like good pay, telecommuting, working at a small company, a low pressure environment (we've been profitable for many years), and various other perks (training budget, flexible hours, blah blah blah).
The last time I was on the hiring side of the process, I had what I considered a pretty easy assignment. It was for a full-stack JS developer (Node). The assignment was to read in an XML file (preferably via an input stream) and write it out to a JSON file with a predetermined object structure.
There were no other constraints beyond "bonus points for stream-based input" and "bonus points for test cases"... Only a couple of people actually delivered a "working" solution (one that ran and output anything), none of which had the correct output (one was close enough), and none had any tests.
It was truly something that should take a "skilled" developer a couple of hours, and not something that should be entirely alien. To say the least, I was really disappointed with the results. I did the project myself in about 2 hours, with test cases and 100% code coverage, before I even gave it out; it was as trivial a challenge as I could come up with for a real-world problem.
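For a sense of scale, the core of that kind of transform really is only a handful of lines. Here's a rough sketch of the idea (in Python rather than Node, and using a made-up generic element-to-dict mapping, since I'm not reproducing the assignment's actual target object structure):

    import json
    import sys
    import xml.etree.ElementTree as ET

    def element_to_dict(elem):
        """Recursively convert an XML element into a plain dict of attributes, text, and children."""
        node = dict(elem.attrib)
        children = list(elem)
        if children:
            for child in children:
                node.setdefault(child.tag, []).append(element_to_dict(child))
        elif elem.text and elem.text.strip():
            node["text"] = elem.text.strip()
        return node

    if __name__ == "__main__":
        root = ET.parse(sys.stdin).getroot()  # read the XML document from stdin
        json.dump({root.tag: element_to_dict(root)}, sys.stdout, indent=2)

The remaining effort is mapping to the specified structure and writing the tests, which is exactly where the submissions fell short.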
Why should I have to pay the couple dozen people for their 3 hours, when none delivered a correct solution?
In the end, the person with the closest to correct solution, was the one with the least experience... that person got the job.
I actually do what you suggest. I pick problems I've never done before and try to solve them in less than a third of the time I allot to the interviewee, to account for interview time pressure. Even then there are probably biases, because I pick problems that I can solve, so I periodically review candidate success rates on problems and change them out if they are too simple or too complex.
> Send out the assignment at a predetermined, convenient time and require it be returned an hour or two later.
This honestly sounds like a great idea to me, except maybe with a slightly longer time allowance to remove some of the pressure. Definitely hoping more interviewers will start to adopt this method for take-home interviews.
But this method also hinges on the interviewer's ability to design projects that can be completed in a reasonable amount of time and still give good insight into a candidate's skills. I think it's safe to say this will be a difficult task for most interviewers.
A day or two should honestly be fine. Have them talk about the solution after turning it in. It's a lot easier and more interesting to talk about code you just wrote than it is to make someone whiteboard something on the spot.
It doesn't need to be a time trial. If you're impressed with the code and hire the candidate, worst case is you get someone who takes a little more time but writes great code.
The time requirement doesn't just set a limit for the candidate though, it also serves as a limit to consider for the interviewer when speccing out the project.
Having a time limit of a day or two is fine if the actual project should reasonably take a couple of hours, but if the project actually takes more than a whole work day to complete, then that's a different story (in my humble opinion, anything that would take more than a couple of hours is an unreasonable demand on the candidate's time unless you offer some kind of compensation).
> There's also no disincentive for interviewees to spend an unreasonable amount of time on the project.
I don't see any problem with this, as an interviewer. The take-home is supposed to be an example of the work the candidate does, they should take however long to do it. I want to see the best-case scenario of the code they write (given the problem at hand, etc, of course). The entire idea is removing the time pressure.
Right, but workaday programmers with two kids and a full-time job won't be willing or able to spend, say, more than two hours on something like this.
> I don't see any problem with this, as an interviewer.
The issue for the interviewer is missing out on good potential hires because your selection process is biased against people with little free time.
I have a family and I'm doing part-time study in the evenings. If I'm looking for a new job, then I can probably find time for 1 exercise a fortnight. If one company tells me they have a "4 hour assignment" and the other a "1 hour", then I'm far more likely to do the 1 hour exercise and pass on the long one.
And if I do the 1-hour test, I'd expect to be assessed accordingly. If you're comparing one person's output after 1 hour with another's who actually spent 5 hours on it, then you will be more inclined to hire the person who spent longer on the project, even though that's not really going to correlate with on-the-job performance.
Take-home projects can be a double-edged sword... for a given project that is estimated to take 4 hours, I will typically spend about 16 hours on it, working straight through the night, chugging coffee and/or beer. I work by banging out an ugly PoC and then refining it drastically over several iterations. My final versions are award-worthy, but the early ones are really bad and sloppy. I am the type that is great at simplifying, but bad at coming up with the initial statement.
Ignoring the time investment for a moment: That is not a bad approach in my experience. Iterating allows you to learn from the versions before and guide your improvements. As long as the first version is at least good enough to do the job AND good enough to be improved upon (often the harder part) it is absolutely fine.
The only problem with her/his approach is that she/he said it took 16 hours where they would've liked to spend only 4 and be done (as 4 hours was what was expected). 4 hours may be unrealistic for what was delivered in the end, but it would make me feel at least slightly awkward putting 16 hours into a 4-hour assignment, as it indicates my performance is not where it should be (I'm off by 4x, which would be a lot to me) or my priorities are not at all aligned with the company's regarding this assignment.
Don't get me wrong, everyone takes whatever time she/he needs, and comparing results and time spent is hard unless you only look at "does it work", which in many cases does not do the work justice (and may not even be the most important metric in the long run). For that reason take-home assignments may be better than the almost comical interviews often described on HN, but they have flaws that make them far from ideal as well!
I think it is hard to judge the true performance of a potential employee in a company team without actually having the candidate be part of the team (and even then it'll take a good amount of time before someone settles in). Some folks may not be the best programmers but are good catalysts in a team, smoothing relations between other colleagues and increasing team output overall. Or they might have a habit of happily taking up tasks that are wildly unpopular and thereby, even if they are not the most performant, solving problems colleagues or perhaps a faster candidate wouldn't have solved. I could go on about this but I think it's clear what I mean.
There is a lot more to a role as programmer than just programming and that is often completely neglected in these discussions.
> [...] without actually having the candidate be part of the team (and even then it'll take a good amount of time before someone settles in)
Some companies are trying to answer this issue by signing a potential employee on for a 2 week "trial," where they hopefully get paid. The trouble there is figuring out how long a trial really needs to be to get a good idea of how that person works and fits in with the team -- too little and it's still a crap shoot, too much and you've already essentially hired them.
In the meantime, test trial runs only work for developers currently out of a job; how do I skip my current gig for 2 weeks to go sit at a potential new employer's office? I certainly have no safety in quitting to go do it since I may get dropped after that two weeks, and there's only so much vacation you can take before you run out of personal time.
I remember hearing about take-home projects that amounted to free work for a company rather than a test of the programmer's skills, which is pretty smarmy. Anything can be abused or misused. Anyhow, the disincentive would probably be a decrease in applications if they start piling on the homework. Unless you're offering some amazing compensation and perks, or you're hiring for a project that could make a person's career, fewer people will bother when there are so many other companies out there hiring.
Maybe the least worst answer is to have people who have been through the hoops before, and can empathize with candidates, running the hiring processes.
Yes. I refused to do an exercise because it was quite obviously from someone's todo list, was a rather large endeavor, and didn't demand any particular skill aside from trying to find a sensible way to handle the many special cases (think parsing linux network configuration files and a pile of environment-specific behaviors).
I read it and told them (a) I had no interest in a firm that behaved that way, and (b) I had no interest in a firm that didn't understand what configuration management tools were for.
Ironically enough, Triplebyte's own take-home projects were some of the worst I've ever had, and did a horrible job of respecting the candidate's time.
When I went through their take-home interview process, there were 4 projects to choose from, with only one having anything remotely to do with my area of expertise (it was a multiplayer game, and I was looking to work as a web front-end/full-stack developer). For all their talk about how you should select a practical project to talk about, because it correlates better with the ability to get real work done, it's really rather ironic just how utterly academic and impractical the projects they offered were (that game was literally the most practical one on the list).
Now they tell you that you're expected to spend at most 3 hours on the project. This might be true for some of the other more academic projects on the list if you had any expertise in the respective areas, but it definitely wasn't true for the multiplayer game project I chose, which had a non-trivial front-end and back-end component, for which testing alone could easily take 3 hours (Or maybe I'm just bad, which is definitely possible, but I've asked many of my more experienced peers how long they think a project like this might take for them, and the lowest estimate I got was a whole work day of 8 hours).
Now they also tell you that it's OK if you don't finish the project. There was also a second part to the interview where they might ask for extensions to the original project and give us more time to work on it, so I assumed that if we didn't finish, they'd ask us to finish the original spec along with some extensions for the second interview. Since I had already finished the front-end component for the game, and had worked on the project for well over the expected 3 hours, I decided to call it a day and work on preparing for some of the other interviews I had that week.
However, that it was OK to not finish the project for the first interview might have been an outright lie. I won't ever know for sure because I was rejected after the first interview for not finishing the back-end as well as being unable to resolve some performance issues the interviewer pointed out (which at no point in the interview did he even ask for me to fix, so I assumed fixing that performance issue would also be a part of the extension). Incidentally, I went back to the project a while after the interview and resolved the performance issue in 5 minutes flat.
The whole process just left a rather bitter taste in my mouth, which was all the more disappointing because I went in with great hopes after reading all their great blog posts on HN. For anyone else considering Triplebyte, I'd highly recommend going with their traditional interview route until someone at Triplebyte can confirm the process has changed for the better. At least with that route, if you get rejected, you'd have wasted less time in the process.
> Now they also tell you that it's OK if you don't finish the project.
Yeah, they always say that -- but it's never really true.
They should just be honest and say "If you don't finish the project in time -- then don't feel bad, but perhaps the test isn't right for you, at this time. Feel free to apply again in 6 months."
I've actually been through two interviews where I wasn't expected to finish the project, and I ended up getting the jobs. I think the difference there was it wasn't set up that it's "OK if you don't finish", but that "these requirements were crafted such that we don't expect you to finish".
"These requirements were crafted such that we don't expect you to finish" is exactly how we explain our coding tests to candidates, and it seems to work really well. We actually find that many candidates will spend their own time finishing the project _after_ the interview. Programmers do like a challenge, after all, and nothing says "challenge" like "we don't expect you to finish this".
I don't understand this. No company is so special that I would be throwing away hundreds of hours in pointless meaningless work every few months just to join them! Unless I'm in need of a job again.
And what's the big deal even if I join them after six months? I'd be working to maintain some code, fix bugs, and maybe occasionally do a big important project.
It's not like they are sending Neil Armstrong to the moon all over again and I'd get to be a part of that history.
Me neither, it is like some companies like to feel special.
I was invited twice for Google interviews (really invited, by their HR, not me applying to them), and on both occasions I failed the process on their stupid questions.
I started replying to their HR: if I am so good as to be worth inviting, but in their eyes unable to devise, over a phone interview, an inode search algorithm for unlimited hard disk sizes with a specific set of hardware and search-time constraints, then could they please just stop inviting me!?
That was the last time I heard from them and I don't care a bit about it.
I am sorry that you had a bad experience with us. Evaluation is a really complicated thing. The bar that we use for evaluating the take-home project is to treat it as real work, e.g. would a teammate feel good if you were working with them on this task and you came back after half a day with this? Because we can't see process, all we can do on the take-home track is judge whether the finished result is professional-level programming. We do indeed pass people who do not complete the project on the 1st call, but they need to be on a good path (where we think they can finish by the 2nd call).
I do not know who you are (and would of course not post details here), but from what you say, it sounds like we did not think that you were on track to finish, and had some concerns about the design that you selected (we track whether a number of milestones in the game have been reached). It is totally possible that we were wrong. We much prefer to see a working front-end/back-end combination that has missing features than just a front-end or just a back-end.
Again, I apologize for your bad experience. I hope we can make the take-home interview better in the future with some tweaks.
Just curious: are you making sure your interviewers actually first try to code the project in 3 hours before judging candidates?
I mean, it seems to me that most of my colleagues (and I myself) are always very optimistic with respect to "how long it will take". So if you are judging someone based on your expectations without having gone through it yourself, it can lead to a perception gap.
This is a good question, because I find that my colleagues at previous companies spent a lot of time thinking up "good" interview questions and left it at that.
None of them actually tried solving the questions within the constraints that a candidate is put under.
I think that you almost have the right idea, but 3 hours is too long. I believe a programmer can demonstrate his ability to perform the basics in 1 hour or less. This respects the candidate's time and it also encourages the test's designers to select the most trivial project possible that shows the basic skills they're looking for. That's really what's important.
Larger projects take up more time and introduce a larger possibility that some matters of opinion or taste will impact the candidate's performance. I don't believe differences in taste are important as long as the candidate demonstrates that he is able to comply with a defined style guide.
The actual test is going to depend on the position at hand, but one of my favorites is a very simple program that asks the candidate to use GitHub's API to display the list of public repositories under a user-inputted username. That's it. I tell them they can use any language they want.
This simple test gives all the information we need about the basics:
a) the candidate is able to go online, provision himself an API key, find docs, and reference those docs to see an external vendor's API format
b) the candidate is able to use that information to craft a program that successfully interacts with the vendor's endpoint
c) the candidate is able to present the information in a concise, desirable manner.
d) the candidate is able to do all of this with a simple 1 paragraph description of the project.
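To give an idea of the expected scale, a bare-bones pass at this might look something like the sketch below (Python, picked arbitrarily since any language is allowed; it uses GitHub's public /users/<username>/repos REST endpoint and leaves out the error handling and presentation polish a stronger submission would add):

    import json
    import sys
    import urllib.request

    def list_public_repos(username):
        """Return the names of a user's public repositories via GitHub's REST API."""
        url = "https://api.github.com/users/%s/repos" % username
        with urllib.request.urlopen(url) as response:
            repos = json.load(response)
        return [repo["name"] for repo in repos]

    if __name__ == "__main__":
        user = sys.argv[1] if len(sys.argv) > 1 else input("GitHub username: ")
        for name in list_public_repos(user):
            print(name)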
The choices the candidate makes in the process of completing this simple task tell you a lot about his process, style, habits, and preferences, even though the project is very minimal in its actual requirements.
I've had people give me web apps, command-line apps, and GUI apps that accomplish this same goal. Many candidates would go above the requested specifications and many candidates would reply the same day they were given the test, which to me was an excellent signal that they felt their time was respected and that we were doing a good job of engaging them and making them interested in working for us.
As I stated, this test is not appropriate for all positions, but I think most tests should be modeled after those principles. Give the candidate room to express himself and demonstrate relevant practical knowledge.
I hope you'll consider a minimalist project like this over something like "design a multiplayer game ... in 3 hours".
Evaluation is only a very small part of my complaint. The choice of projects that were offered, and the scope of the projects are what really need to be revisited (finishing the project wouldn't have been a problem for me if there existed a project that was relevant to my work that could be completed in 3 hours).
In terms of project choices, to me it seems like you guys erred on the side of choosing projects developers would find interesting and technically challenging over projects that are practical and accurately represent the kind of work most developers will actually be hired to do. I'm not going to post the details here for obvious reasons, but out of the four projects offered, only the multiplayer game was even remotely relevant to front-end/back-end or even application development in general, which are areas that probably account for the vast majority of development work available from startups. I realize there's a balance to be struck here, but in my humble opinion, as a recruiting firm, you should be erring on the other, more practical side in terms of project choice. I applied to Triplebyte to find a job, not to fulfill my intellectual curiosity (I can do that better on my own time, without needing someone to assign projects to me).
In terms of project scope, I'm not really qualified to comment on the other choices, because they were way outside my area of expertise, but the multiplayer game definitely didn't feel like a 3-hour project. As suggested in another reply, I really hope you guys can actually give the project a try yourselves and see what level of completion can reasonably be expected from three hours of work on something like that. Take the times reported by candidates who have successfully completed the project with a grain of salt, because people tend to understate the level of effort they spent in order to look more efficient (no matter how much you tell them you don't care). Here's another possible idea for making projects that take reasonable amounts of time to complete: just take your traditional interview questions and extend them a bit with extra features, and simply expect better polish, architecture, test coverage, and overall code quality during the code review.
Anyways, my experience with Triplebyte's take-home interviews definitely didn't leave a great impression, but I still recommend you guys to my friends and colleagues because I do want to support what you guys are trying to do. Hopefully you can take some time to revisit some of the issues people have mentioned and make the necessary improvements. I'm happily employed now, but I'd love to give Triplebyte another try the next time I'm looking for work. =)
Anecdotally, I had a really positive experience writing the HTTP server with TripleByte. I use interview projects to learn new skills and domains; doing so aligns my interests such that even if it doesn't go well, I'm better for trying. My project review went reasonably well: we caught a bug, fixed it, and tested. I turned down round two due to taking another offer, but genuinely felt like these guys cared about my progress and experience.
I'm now in a position where I'm interviewing and helping shape my organization's hiring practices. We've debated all the different approaches, some people like projects, some like algorithms, and some don't want to do either to get the job. At the end of the day, I really just want data on a candidate's ability so that I can say Yes.
> I use interview projects to learn new skills and domains
There probably lies the disconnect. For me, interview projects should assess how well I could perform in the position I'm applying to. And thus, if nothing else, interview projects should be relevant and practical.
It seems to me that Triplebyte's project choices were made based on how interesting developers might find them, and sheer technical challenge. Some might appreciate this, but personally, I'd rather learn new skills and challenge myself on my own terms.
> At least in the traditional technical interview, interviewers and candidates tend to be roughly equally invested in the interview process in terms of time spent
Not really true unless you go to your interviews completely unprepared :-)
As the interviewer, I'm not going to show up unprepared either. Typically, having an interview means learning about the candidate and what would be interesting to talk to them about, learning what team they might be a good fit for and what they need, and syncing with the other interviewers to make sure we're on the same page. And then afterwards, we'll meet up to discuss how the interviewee did and whether or not we should hire them.
All in all, it's at least an hour of prep time needed, and that's only counting my time, not the time of the other interviewers, managers and recruiters.
Since the article seems to be about startup interviews an hour of prep time from the employer's side actually seems fairly low. I'd expect closer to a full day. Isn't hiring one of the most important strategic decisions for a startup?
I got one that was just: "write a web application". That's literally all they gave me for parameters. I decided not to be a dick and send them 5 lines of code returning HTTP 200, so I set off and started a side project I'd been wanting to do for a while. Lo and behold, I wasn't even close to done after the weekend was over, so I emailed them and said "sorry, but I need more time if you want the application I'm working on". I got an email inviting me in for an in-person interview after I talked to an engineer for about 20 minutes, so problem solved, I guess.
not to mention you're asking them to code a (potentially) complex take home project, that could take over an hour, for free.
A friend of mine told me her company doesn't use online programming tests (like HackerRank) because she doesn't think legit programmers would bother with positions that required them.
Having taken a couple of these online automated tests, I don't think I'd take them again. Problems which I'm sure I had written correctly, handling edge cases and testing in the browser, I submit only to get something like a 6%. Programming simple things in front of people in an interview, I do fine.
> not to mention you're asking them to code a (potentially) complex take home project, that could take over an hour, for free.
An hour, I'm fine with. It's less than what I'd schedule for an interview, and far less than I'd schedule for an in person interview (which might include a flight out). On the other side of it, though, I'd be concerned about cheating. It wouldn't be too hard to hire someone to take the test for me, I'd imagine.
Ah. I just provide all my answers in an obscure language they won't know, but has lots of "nerd-cred", so they feel uncomfortable questioning it. Like OCaml, Erlang, or Haskell.
Then in the rare case they question it, which usually goes something like, "Can you do it in Java? We don't use whatever language it is you're using there." I respond with, "Oh, will I be spending a lot of time at this job coming up on-the-fly with somebody's PhD thesis from the 60's?" To which the answer is always, "No." Which finally ends the conversation with me saying, "Well then you asked me to solve an irrelevant problem, so I'm happily providing you an irrelevant answer."
I have a fair bit of contempt for the pitiful state of technical hiring and assessment process out there. I think that comes mostly from my opinion that hiring is probably the most important thing any company or manager does, and that if any one of these companies/interviewers put even 10% the amount of effort into learning about educational/occupational assessment, psychology, and neurology that they do into obsessing over the dubious idiosyncrasies of the latest framework-rehash-of-the-month, then these terrible interview practices wouldn't endure very long, and we'd all be spared the indignity of the farce on both sides.
> Which finally ends the conversation with me saying, "Well then you asked me to solve an irrelevant problem, so I'm happily providing you an irrelevant answer."
I've seen a colleague like that, training hard at competitive programming every day. Yet he wrote this kind of C++ code:
    ressource AClass::CopyACriticalRessource()
    {
        {
            std::lock_guard<std::mutex> l(mutex_);  // presumably guarding some mutex_ member
        } // "Optimization" for the lock: the guard is destroyed here, at the end of the empty block...
        return ressource_;  // ...so the member is copied with no lock held at all
    }
Having a good understanding of every aspect of your architecture, taking your time instead of rushing for a blazing fast (O(1)) but hacky solution, always keeping in mind that someone must have already solved your problem... these are valuable skills that can't be covered in a 1-hour interview in front of a whiteboard. I would love to see someone googling a solution in front of me during an interview, trying to UNDERSTAND it, checking its complexity, comparing it with other search hits, adapting it to their problem, and asking for my review.
The best technical interview, to me, is homework with enough time for a normal human being: something that requires tricky algorithms to solve in a fancy way, a good architecture, and solid computer-science knowledge in general. Afterwards, a one-hour code review/interview with a debriefing on all the choices.
Whiteboard coding doesn't necessarily have to involve guessing the algorithm - any whiteboard questions I've asked avoided doing so.
On the flip side, my experience with take home projects is that they are much more of a hazing/pressure cooker ritual - at the least, the time pressure of a Google/FB/etc. interview lasts only a half hour. Most will expect candidates to spend a huge amount of time, which many won't have if they're actively job searching, or are busy people in general - I prefer to pour that extra time into open source work, since at the least it benefits others. One company assigned a project to me once that suspiciously seemed like implementing the company's whole business model.
It doesn't take long to figure out if someone has the right skills oftentimes if you ask the right questions - candidates shouldn't be penalized for companies' ineptitude in assessing interviewees.
> I prefer to pour that extra time into open source work, since at the least it benefits others
As someone who pushed his company to adopt a "take home" assignment for our interview process, it should be perfectly reasonable to reply with "here's my commit history on a project relevant to what you're hiring for."
I much prefer giving (and taking) take home assignments because it lets the interviewer see _what someone will actually produce on the job_. If you can do that without jumping through our specific hoops, great. If not, that's why we have the assignment.
It also at least somewhat ameliorates the pressure cooker of an interview, which many people cope with poorly. It can be hard enough to communicate clearly when all eyes are on you, let alone get your thoughts together and solve a logic problem. If that's actually analogous to your work environment, well...
I don't think that is really a problem. We hire people to do a good job, and most of the time the kick-ass programmer isn't the best person for the job. At least for a smaller company, we need someone who can understand business needs, communicate well, and Add Value. That is a far more complex job than implementing red-black trees.
Consider this:
We need to find the most shared URLs on Facebook in the last 24 hours.
I can perfectly see why a lousy coder might achieve this objective better than a kick-ass one. A coder with average skills quickly figured out that BuzzSumo has a public webpage with that content, which can very easily be scraped using PhantomJS. Job got done.
Another great coder suggested to me that I should buy the $10K-per-month Facebook firehose. He is not wrong, and that is a good solution too, but he failed to see that we are building a POC and not a full-featured product.
Companies' needs are complex; many times, if you can pay a great salary, just having a filter for IQ is good enough. But in most cases I think it is far better to look at the individual and judge.
"But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm"
No, but it may be a good question anyway. A good interviewer probes you about things you might encounter on the job to find the point where you cannot recite things, and watches how you handle that.
Having said that, I don't understand the focus on coding interviews I read about on HN. I don't know whether it is a cultural difference between the USA and Europe, whether I just haven't noticed how you are supposed to prepare for interviews, or whether I am too smart to need to bother, but when I apply for a job, I look up what the company does and try to figure out what its culture is, but do not prepare for the technical side of things. An interview isn't a one-sided affair of you being quizzed; it is two parties figuring out whether they fit together.
I helped a group within my organization with their hiring process recently, and we had pretty good success with assigning a short "take-home" exercise, vs. trying to haze candidates with a programming problem over a Google Hangouts interview. A problem focusing on a small part of what that group does, but scoped to be doable in 1-2 hours of work.
"Hey here's a fizz buzz question, if you feel confident answering it now go for it, otherwise we'd be just as happy for you to do it in a take-home fashion?"
Homework projects don't work in some cases, when developers come back with refined copy/pasted code or code written by another programmer. A better way is to go through the previous code that the developer has written, or have them go through code written by other programmers, to gauge their ability to understand different code patterns.
> A better way is to go through the previous code that the developer has written, or have them go through code written by other programmers, to gauge their ability to understand different code patterns.
Doesn't this have exactly the same problems as homework projects? You can still copy and paste or plagiarize.
Every week HN has a topic on the front page about how inaccurate and unfair job interviews are. They're always going to be inaccurate and unfair. Interviews must use proxies for candidate quality, and there will always be false positives and false negatives.
The most practical thing is to work with it as it is.
> But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm (as I was asked to do by an interviewer yesterday)?
This is actually a trivially easy question that gets to the heart of whether you understand the point of moving averages or not (that you can update a sum by subtracting out the value leaving the window and adding in the value entering the window).
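For what it's worth, the whole trick fits in a few lines. A rough illustrative sketch (plain Python; the deque just holds the current window so the departing value can be subtracted, giving O(1) per update and O(k) space for window size k):

    from collections import deque

    class MovingAverage:
        """Moving window average: O(1) per update, O(k) space for window size k."""

        def __init__(self, k):
            self.k = k
            self.window = deque()
            self.total = 0.0

        def add(self, value):
            # add the value entering the window, subtract the one leaving it
            self.window.append(value)
            self.total += value
            if len(self.window) > self.k:
                self.total -= self.window.popleft()
            return self.total / len(self.window)

If you can explain why the popleft() is there, you can answer the complexity question immediately.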
"Programming ability" isn't one dimensional. Whether or not the above question is useful depend entirely on what skills are important in a candidate.
It's not so much specifically about the moving average problem. I would want to see that a candidate can reason about the performance of some code / algorithm. I would not expect them to be able to recite the performance for specific algorithms from memory, however.
It wasn't. I was literally asked, "What is the time complexity of the moving window average algorithm over an array?" and when I asked for clarification, I could hear an edge of... I guess frustration in my interviewer's voice.
Granted, by this time, we'd been through a couple of other problems, and time was running short, but I still think it was pretty unprofessional of the interviewer to let frustration or any other sort of negative emotion show during the interview. That, more than anything else, contributed to my own frustration and perception of unfairness in the entire interview process.
Right, but "recite" was editorializing. That implies that the interviewer expected you to produce the answer from memory, as opposed to thinking about it. It's an easy question if you're familiar with moving window averages and know what the interviewer intends. If it was asked apropos of nothing, a request for context seems reasonable, though. It sounds like you probably had a bad interviewer. There seems to be no shortage of software interviewers lacking in "people skills."
Wouldn't the question about the moving average be a good one though? It should be quite obvious that you can calculate a moving average without remembering more than the current average and the window size. Given that, time and space complexity should be obvious. I think it would be a fine question to see if the interviewee has any idea what complexity means.
Or was it a more complicated moving average case (exponential etc) where the algorithm was given and you were asked to determine the complexity?
Figuring out how to efficiently calculate a moving average seems like a good question for basic maths skills to me.
About complexity theory: there is already a lot of discussion about its relevance in the comments. My point was meant more along the lines that, if you value a basic understanding of complexity theory, the question asked of the GP seems reasonable.
Coding interviews are very stressful, and churn through a lot of really awesome potential hires. I imagine there are tons of false negatives, but it's nearly impossible for a terrible programmer to get through a gauntlet of programming interviews. However, I agree. They don't give a full picture of a developer's abilities.
As a developer, I prefer take-home projects. As an interviewer, I prefer a few coding interviews, followed by a take-home project.
I have not tried it.
But in my next business venture, I plan to actually do peer programming with the candidate.
This will allow members of my team (or myself) to get to know the candidate, evaluate the "wave lengths", and see how effective he/she is at finding patterns on the internet/in books -- rather than thinking things up.
It also demonstrates to the candidate commitment on our side, and it naturally forces us to find problems of a proper size/effort (as we are spending the effort too).
We would not do it for all applicants, though -- only for ones that pass the basic screening/competency process (there are no trick questions or exercises there).
> I have not tried it. But in my next business venture, I plan to actually do peer programming with the candidate.
As a candidate, I can highly recommend peer programming. One of the greatest interview experiences I had was interviewing with a local shop where the employee and I reimplemented a Set class using TDD and pair programming. The employee sat at the keyboard, so I didn't have to deal with not knowing keyboard shortcuts or the unfamiliar operating system (OSX), but he was very careful to only write the code that I asked him to write, and to let me make my own mistakes.
It was one of the most enjoyable, and, dare I say it, relaxing interview experiences I've had. Though I didn't get an offer, I wouldn't hesitate to recommend that company to any of my peers who're looking to make a change. Also, like you said, it was a second tier screen; after an initial screen consisting of a more traditional phone interview where I had to write some Javascript.
Yeah. You can save a lot of your time, and the candidates' time with a simple coding interview - even FizzBuzz will knock out half of the candidates. Then pair up and/or give a take-home, then discuss the solution in a one-on-one session.
As an interviewer, I only look at the resume and the published open source projects the candidate has done.
Everything else is just a random impression, nothing serious enough to bother with.
The questions are for something else.
I like this approach the most. Evaluate code that they've already written, maybe make sure that they can solve a similar problem to make sure they actually understand what they did and didn't pay someone else to do it, and then a personality fit. Chances are if they really wanted to be hired to learn how to program they're aware of the importance of contributing to open source.
This is frustrating for the many engineers who have no open source work. You're shutting out a massive amount of talent that has only worked on proprietary software.
> I imagine there are tons of false negatives, but it's nearly impossible for a terrible programmer to get through a gauntlet of programming interviews.
Depends on what you mean by "terrible", I suppose. Yes, coding interviews do a good job at screening out the bozos who just can't program, period. The ones who don't understand the difference between a for loop and a while loop, or the ones who can't handle boolean logic.
But I've found that even once you get past the outright bozos, there are quite a few programmers who can program quick one-off things, but have no sense of design or maintainability. They can deliver functionality, but deliver in a way that piles on technical debt and damages the long term health of the codebase. I think the traditional technical interview format ironically encourages this sort of behavior, by encouraging applicants to focus on narrowly solving the problem at hand, as quickly as possible, both in terms of machine time and programmer time, even if that means the code is an unmaintainable mess in the long run.
Put another way, think back to the last time you had to do any sort of whiteboard coding, as part of an interview. Are you proud of the code that you wrote? Would that code pass code review at your current position? If so, then congratulations. You're a better programmer than I. The code I've written on whiteboards has been pretty uniformly terrible. Sure, it met the correct Big-O complexity requirements, and it was correct, insofar as it produced the correct output, given correct input. But there was no error handling. Variable names were single letters. The functionality wasn't broken up into logical functions because writing additional function headers takes more time, and my handwriting is messy enough when I'm not rushing. All in all, it's code that you'd see in a prototype, or a programming contest entry, not a robust system that's usable by customers.
Lately, I've seen more and more such code being produced by new graduates not only in coding interviews, but also as part of day-to-day programming. There's an incipient attitude of, "Well this code would pass in an interview, so it's production ready." I find it deeply troubling, and my concern is that programming interviews are setting up incentives by which this sort of code becomes, if not normal, then certainly more accepted than it was in the past.
This is why I advocate so heavily for take-home projects. When a candidate submits a take-home project, you can be assured that they had enough time to design and code the assignment in a maintainable way. You can see whether they added unit tests. You can see whether they split the code logically into objects and functions, or whether they smushed everything into a 500-line main(). I accept that take-home assignments aren't as scalable, either from the interviewee side or the interviewer side, but I do worry about the long term effects on norms that programming interviews are having.
Maybe, but at the moment I've inherited a bit of a mess of an application at my new job. I think it takes more skill to work out what it's doing and replace the crap code with something simpler, more efficient, more maintainable, and easier to read.
My favorite sorts of interviews are ones where you're expected, either by pairing with an employee or on your own, to fix a bug in an open-source project you use. It has all of the benefits of a take-home project over whiteboarding, but the end result is that you have something tangible to show for your efforts even if you don't get the job.
"Being a good programmer has a surprisingly small role in passing programming interviews." "And that just says it all, doesn't it?"
An interview is for both the prospective employer and the potential employee to meet and find out about each other. If you as a candidate notice something you don't like during the interview (including the interview itself), that is already valuable information for you, since you can draw conclusions from it (for example, about the kind of people you'll find past the interview process, as they were all more or less filtered in by it). So you can always stop, say "thank you", and walk away without wasting more time. You'll allow yourself to stay longer only when the interview's quality warrants it.
Here's an idea (perhaps naive): why not offer all of the methods and let candidates opt in to the ones they prefer? E.g.
- normal interview is mandatory (you need something to base your ideas on)
- portfolio is optional but when chosen it weighs heavily
- take home project is optional but when chosen it weighs heavily
- etc.
By 'weighs heavily' I mean that it strongly influences the criteria that those methods are good at evaluating.
The downside is that interviews become less standardised, which is a trade-off worth considering.
Unfortunately, this is much the same as virtually any broad examination process. I'm primarily thinking of schooling up to age 16. There's a lot of work to be done in ironing out one-size-fits-all testing in a lot of areas, primarily those that "require" a human to do the processing of the data.
When I was interviewing in the past, I usually refused any take-home projects; if you don't want to spend 30 minutes talking to me while watching me code, it tells me something about your priorities.
However whiteboarding is also terrible.
I prefer getting a real-world, work-related scenario problem and solving it TOGETHER with the interviewer: if I'm stuck, unlike with a HackerRank or Codility challenge, there is a good chance they will let me know and hint in the right direction, and they will also see how I communicate, how I think, and how I respond to hints. An online challenge can easily be done by a candidate sitting next to two more senior coder friends who help him or her along the way. Unless it is proctored, you can never know. I've had people whispering answers to someone during a phone screen, and people typing my question into Google and reading me the first result (I google it at the same time).
I think that giving a HackerRank or Codility take-home problem, unless it's an entry-level job, will simply drive away experienced people.
If someone can pass your take-home test easily, you will still need to phone screen them before you fly them over in most cases, and if they are that good, they will probably prefer companies that don't waste their time and jump straight to the real-person phone screen.
However, some people are more nervous when someone is watching over their shoulder, so here is what I would do: I would ask the candidate what they prefer:
1. A work-related, hands-on assignment with plenty of time (e.g. it should take 30 minutes but you give an hour), with the ability to search online and to compile and use an IDE, just like in a real-world scenario
2. A Skype screen with a lot of small questions on a topic they feel they really know (no googling allowed), plus a relatively small coding challenge (something you can code in 15 minutes)
3. A work-related scenario with a real person, where there is no single best answer, but rather a balance of trade-offs; more open-ended, but 100% involving coding (just like most phone screens, but more work-related and not just puzzles)
4. The standard puzzle challenge, but alone - you have 30 minutes to solve the problem once you see it (either googling is allowed, or the test is proctored to make sure you don't and you get more time)
5. The classic - a Cracking the Coding Interview kind of question, but with a real person and code sharing (for crying out loud, not Google Docs; at least have your candidates use something that offers syntax highlighting and easier indentation) - some might still choose this
If you let your candidates choose what is best for them, you've already made an amazing impression, and you might have far fewer false negatives.
I think this is one of the most inane things to be asked during an interview. Personally, I've never found myself in a situation where I truly needed to choose between a vector/map/list/hashmap. Or had to find the O(x^n) and replace it with O(x^2).
Obviously it depends on the application, but many jobs are simply maintenance coding: find bug, fix bug, test fix. Oftentimes it makes absolutely no difference whether you use a list or a vector, or else you'll get the paradoxical "vector-is-always-faster" result because of locality of reference.
In my (admittedly limited) experience, most of the effort is spent simply making it work, not being bogged down because you used a map instead of a hashmap, or didn't know about some esoteric, bleeding-edge probabilistic data structure.
I've seen this plenty of times. I've worked both on a trading platform and on a large website, and both times encountered many performance issues that were solved with a more appropriate algorithm or data structure. I've even seen this with a list as small as 10 items - an O(n^3) algorithm was making multiple network calls on each pass; changing it to O(n) alone made a huge improvement in speed.
But I think it's more telling if a programmer knows how to profile their program, find the performance bottlenecks, recognize them for what they are, and then fix them appropriately, than if they can recall a specific optimization for a specific use case on demand.
Most of my work on an ecommerce platform doesn't need much attention to algorithmic complexity, but everyone on my team still curses the guy who wrote an O(n^4) algorithm in our checkout pipeline (discounts, promos, shipping, tax, etc). More than a couple items in your cart and you couldn't checkout because the thread would spin forever. I want to work with a team of people who can recognize these things immediately, even if it's not an absolute requirement for the job.
That has a lot more to do with mechanical sympathy, and awareness of when you've exchanged cleverness for complexity disguised as cleverness, than it does with knowing the big-O of operations on a datastructure.
What you want the person to identify is that they've made your simple iterative checkout process into multiple unbounded tree traversals with no circuit breaker.
Knowing that searching the tree is O(log(n)) isn't very helpful when your problem is an inability to identify that you've made (n) an unnecessarily huge problem space.
Most likely you think about it, though. When you're coding, and you have a triple loop, do you think, "Oh, this is O(n^3). Is n going to be too big here?"
It may be something that's so intuitively obvious to you, that you don't even think about it. So you naturally use the hashmap, where someone else might try a list and then start doing a lookup in a loop. Then while that particular instance might not break things, it'll slow things down, so overall the application feels sluggish instead of snappy.
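To make the contrast concrete, here is a minimal sketch (hypothetical names; Java used purely for illustration) of the lookup-in-a-loop pattern versus reaching for a hash-based structure:

    import java.util.*;

    class LookupDemo {
        // Quadratic-ish: for each id we do a linear scan of the whole list
        static List<String> slowIntersect(List<String> ids, List<String> known) {
            List<String> hits = new ArrayList<>();
            for (String id : ids) {
                if (known.contains(id)) {   // linear scan, every single iteration
                    hits.add(id);
                }
            }
            return hits;
        }

        // Linear: build the set once, then each lookup is constant time on average
        static List<String> fastIntersect(List<String> ids, List<String> known) {
            Set<String> knownSet = new HashSet<>(known);
            List<String> hits = new ArrayList<>();
            for (String id : ids) {
                if (knownSet.contains(id)) {
                    hits.add(id);
                }
            }
            return hits;
        }
    }

With a few dozen items nobody notices the difference; with a few hundred thousand, the first version is exactly the kind of thing that makes the whole application feel sluggish.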
I've never found myself in a situation where I truly needed to choose between a vector/map/list/hashmap. Or had to find the O(x^n) and replace it with O(x^2)
This entirely depends on the kind of product you are working on. When you get to a large scale with any programming project, optimizing computational resources will cut costs, and can often add value to the customer as well.
This is special pleading, though, right? It being relevant to "any" large scale project really just means that projects can get big enough for it to matter, but how many jobs involve this family of software? How many interviews for positions directly related? Very few, I would guess.
I find myself having to think about efficiency at least a couple times a week. I'm working on database implementation, and in the query processor we have to consider all the time how to evaluate various things efficiently.
I would say it starts becoming important for any application that has multiple concurrent users. If it's a single user running it on some device and there's no shared resource (I.E. back end), then chances are it's not going to be an issue.
"Or had to find the O(x^n) and replace it with O(x^2)"
The other thing that really seals it for me as an inferior interview question is that you don't need to have a clue what O(x^n) is to wrap some code in a simple time call, see that the code you think ought to run in microseconds is running in seconds, notice the stupid nested loops by visual inspection, and fix it. Self-taught programmers may not be able to say "O of exx to the enn", but that doesn't stop them from fixing it.
So... seriously, what good is the question anyhow?
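A minimal sketch of that "simple time call", with slowOperation() as a made-up stand-in for whatever you suspect is slow:

    class TimingDemo {
        public static void main(String[] args) {
            long start = System.nanoTime();
            slowOperation();                                      // the code under suspicion
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("took " + elapsedMs + " ms");
        }

        // Hypothetical stand-in for the real work; a deliberately dumb nested loop
        static void slowOperation() {
            long sum = 0;
            for (int i = 0; i < 10_000; i++) {
                for (int j = 0; j < 10_000; j++) {
                    sum += (long) i * j;
                }
            }
            System.out.println(sum);   // keep the work from being optimized away
        }
    }

If the printout says seconds where you expected microseconds, the nested loops are usually not hard to spot.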
I would recommend going through some undergrad CS data structures and algorithms lectures to any self-taught programmer. My ability to read code improved dramatically. And the big-O concept, once you wrap your head around it, is an intuitive way to think about speed. Also, once you've timed your code and found the slow bits, you need to know how to speed them up; not all speed-ups are as simple as un-nesting loops.
Same here. Last week I was sent a Codility test, tried the demo, and failed miserably.
Then I noticed the test was expressing constraints using time/space complexity, concepts I had been completely unaware of for my previous 15+ years in the profession.
So now I am reading about algorithm theory, face-palming at the realization that I have reinvented the wheel many times during my career instead of just reusing an algorithm some PhD has already researched.
What if it isn't stupid code? What if it is straightforward and simple and looks correct, but is just slow because it has to traverse a list instead of using a hash table or some other data structure? You can't get there by intermediate steps; you have to rip out the code that uses it and rewrite it with a hashmap. You can only do that if you know the space complexity and when a hash and a list is appropriate.
"You can only do that if you know the space complexity and when a hash and a list is appropriate."
You seem to be confusing "knows how to say 'oh of enn squared'" with "the loop gets slow when I iterate over a lot of things". One is generally a product of education, the other, merely experience.
The idea that a thing can only be learned in a classroom is perhaps in the top ten most pernicious ideas in the modern world, and probably one of the more surprising ones to show up in that list. You do not need special courses to discover that your code runs slowly, and as I've seen often enough, having had those special courses does not confer immunity against writing slow code.
Now, I am also a believer that formal education has its place, and if you are going to get a formal education in computer science, big-O analysis absolutely must be part of it or you are literally missing out on an entire rather important sub-discipline. But the idea that it's some sort of touchstone between Good and Bad programmers is just ludicrous nonsense. Slow code is slow. There are abundant tools that can be used to figure out why. If you can't work out why your O(n^3) loop is running slowly after a couple of years of practicing the art, you don't need formal education, you need a different job.
I've found that employees who are able to discuss the benefits of certain data structures and their associated time complexity are generally able to solve problems quicker than those who struggle to discuss these fundamentals. That said, the thing that matters most to me when hiring programmers is proof that they can write decent code.
This is a very under-appreciated point. If you profile a program on non-pathological input, the profiler won't tell you what's going to explode later on when your program hits a rare case that you hadn't expected. Theoretical upper bounds don't have this problem.
The most fundamental tool of the trade is a profiler. The tool which is used in reality to find performance problems, unlike BigO, which is used in theory to find performance problems. Does BigO help? Sure. Is it the silver bullet people seem to think it is? No.
The tool which is used in reality to find performance problems, unlike BigO, which is used in theory to find performance problems.
Sorry, but this is just wrong.
I can guarantee you that BigO is very often used not "in theory", but in practice, to find real-world performance problems. And not having "situational awareness" of certain commonly occurring complexity profiles can be a significant source for performance headaches and technical debt.
One trivial-seeming, but frequently occurring example: not knowing when to use hash maps.
In fact, many people solve performance problems (including both those where order-of-magnitude performance really is the most important factor, and various other kinds) not by using a profiler but by, you know, understanding the code and thinking about it. Profilers, while they can tell you a lot about certain kinds of performance issues, are still generally quite limited in what they can tell you.
Is it the silver bullet people seem to think it is? No.
Of course not, and I've never heard anyone saying that it was, either.
It's not a silver bullet, it's only a model of the problem. If your profiler tells you some code is slow, you can model why it must be slow by using big-O. In fact, it's the standard way of explaining such things. Without it, you must spend your time babbling about special cases.
It's like, yeah, you don't "technically" need to know any 2+ syllable words to be a programmer, but you're really not helping yourself by avoiding them.
Big-O took about 15 minutes to teach. Sure, it went on a bit because a rigorous class will introduce proofs, but you are absolutely right - it's a trivial concept, and often flawed in practice (a linked list is supposed to be faster for random inserts/deletions according to big-O, but in practice it is almost always slower).
Honestly, I consider an instinct for complexity analysis the most important thing I learned in school, and the thing that I've gotten the most use out of. I don't know what case you're making here: are you saying that high-level architecture is so hard that choosing a map or a hashmap should be a coinflip, or the one you see first? Having had some criteria for making the choice makes my life a lot better when everybody is panicking because something is running like shit and no one understands why.
To me it was a bunch of rote memorization, just like a biology course. I have never - never - needed to know how bubblesort/heapsort/mergesort actually work, except to appease interviewers.
I'm not saying I'm pro writing-inefficient-code, but if you want to talk big-O during an interview, I'm going to roll my eyes about as much as if you asked me who the 19th president was.
I ask such questions at the end of an interview, but mostly to gauge the candidate's sanity, reaction, and thinking process - a wrong answer would do; rolling eyes would not :)
Oh shit, my code is O(1), and looking at it in the profiler tells me that it's not a bottleneck (it says < 5ms). Yet this single function call somehow takes 78ms in wall time when it should be at worst 1ms (and even that is too much). The reason? JVM classloading/JITing. This is quite annoying when you have short-lived applications.
That's fine, as long as you _never_ need to trust said dev to do anything complex. It's fine to have mediocre developers perform mediocre tasks, but if you want more from them someday you may be in trouble.
That's not really fair, people don't grow unless they're challenged.
Some people really do never advance past a certain point, but a lot of people write someone off as mediocre when they're just still in the process of gaining skill. Also, they only assess ONCE, which is pretty bad if we're trying to establish how good you are forever and always.
That's true, but how do you discern between someone who has potential and has not yet been challenged, and someone who simply doesn't care? In my experience, the best will challenge themselves to learn new things on their own.
To me, complex and complex-ITY are entirely different matters. I want a smart programmer who can figure out really complex bugs (something you can't figure out from Google/Wikipedia), not someone who memorized the big-O performance tables of 8 different data structures (something you CAN look up on Wikipedia).
Exactly. I once identified memory leaks in a managed runtime implementation, worked around them, and showed that third party networking libraries from the hardware vendor were causing irrecoverable crashes but I can't spout off runtime complexity of algos. The former saved a multi million dollar contract and my company's reputation, the latter is something I look up when I need it.
Optimally, I would like both. I agree that rote memorization is not a very useful skill, but I'd like to know that they can grasp concepts, and I find it hard to believe that there are a bunch of skilled devs out there with great potential who don't or can't understand algorithmic complexity. Interviewing is hard though; I'm not trying to say that any one question or metric should necessarily rule someone out.
Here's how you should reply: "Sorry, I don't have those complexities memorized. When I really need to look them up (which is nearly never), I refer to bigocheatsheet.com."
This can happen even on small systems. I once replaced an O(n^4) algorithm with an O(n^2 log n) one on an embedded target, which made a minutes-long user-facing process take seconds. The catch is that the original used a good algorithm but made an implementation error, which I caught in profiling. So complexity analysis is good, but the only way to get better at something is to measure it.
You should have an intuitive idea around complexity and when it makes sense to optimise.
I don't think, though, that you need to know off the top of your head in an interview that, for example, fastsortx is n log n in the best case. You should be able to reason your way through why it is faster than some other sort, though.
What the heck is fastsortx? This is another problem I have. I study all these algorithm books then when I get to the interview they ask something that either isn't in the books or they've come up with some nickname for it and expect me to know it.
That's how I failed my Google interview - they said they expect programmers for any position to have good knowledge of algorithms, yeah, well... :) Frankly I don't like trivia-style interviews.
I think it's easy enough to ask for this skill if the job would require the interviewee to apply this skill.
If your job is mostly frontend, yeah, you probably won't need to worry about this problem. But if you're hiring somebody to work on graphics? You better be doing complexity estimates in your sleep.
Exactly. Performance issues happen. Most of them can be solved by re-indexing a database or adding caching. If I really need speed at the algorithm level I can look it up.
That guy in Google who screwed up Android thought just like you. Now, as the number of text messages stored on your phone grows, the entire system slows down. Such a pity that cretin was not screened out on an interview!
If you're a programmer, you like to solve puzzles, so think of passing the interview as just another puzzle! I mean, yeah, it shouldn't be, but here we are.
If you're a programmer, you like to solve puzzles,
No -- we like solving problems. Especially those that have a legitimate context (i.e. are explicitly linked to some actual, real business or social problem), and for which our skills are truly relevant and needed (specifically, for which no one has a readily available answer at the moment).
But made-up "puzzles"? For which the asker already knows the answer, so they sit back and watch us dance?
Homework projects could work well if the hiring requirements are small, but won't work well when a company has to hire say 25 engineers a quarter, which we had to. At that point, the process becomes too long, and it's easy to lose good candidates to a long process.
This method of interviewing has been around forever, and is going to be around for the foreseeable future. Nobody loves it, including the interviewers, but there just isn't a better way to do it at any sort of scale. Especially when there are much bigger problems to solve when you're running a business.
It's best to take the bull by the horns. I run http://InterviewKickstart.com, which is a bootcamp for preparing for such technical interviews. We do almost exactly what is in the blog post. It works. Spectacularly.
Simply because it is quite difficult to explain the concept in writing. What does it mean to have an intense bootcamp just to prepare for interviews? What's the method? etc.
I want to take the time to talk to everyone who is interested in the course. Because the concept is new, people have all sorts of questions. Can't possibly address all of them in writing. I don't have a team of salespeople and haven't spent a penny on advertising.
If you still prefer email, please feel free to send one. It's on the site. It may just take longer to do back and forth.
Or possibly because it's a new concept, which takes more text to describe. More text than what can be included above the fold, and more than what most people read attentively these days.
After all, it's an 8-week intensive course, mostly for CS grads, that grills you hard, and it is not cheap. Hence, as a consumer, I'd much prefer to talk with someone. Not to mention, all educational institutions have an enrollment process that requires you to talk to a human.
At some point, when it becomes more common and well-accepted, we will condense it, but it feels a little too early to do so.
I had it there at one point. But the concept is difficult to understand and hence it takes a bit to understand the pricing. I didn't want random discussions on pricing flying online, by people who hadn't taken the time to understand what the course was, and what the upside is.
Pricing is also nuanced based on whether you're an experienced engineer or a student, and whether you're taking the course remotely or on-site. Plus, there are recruiting firms who have access to our pipeline and who return a significant portion of our fees to you directly (we don't take a cut).
And those who are super curious, can always google it :-) In fact, most people who call have already googled for it before calling.
Rest assured, we're a real business, running classes every week. Batch after batch. Those who work hard, are getting their work rewarded.
> candidates who have worked at a top company or studied at a top school go on to pass interviews at a 30% higher rate than programmers who don’t have these credentials (for a given level of performance on our credential-blind screen).
Welcome to Silicon Valley meritocracy.
And it's much worse for founders seeking investment, where there are no hard skills to test at all. It's almost purely about being the same class as the investor.
Which is why you get only upper class people funding upper class people, which then hire upper class people. The 99% only makes it in because there aren't actually very many qualified people among the "elite".
Problem is, there just isn't enough time to evaluate everyone who applies.
In my last job, I was a Director of Engineering at Box. Every job post we put up had hundreds of applicants (thanks to job boards that let candidates apply to jobs like adding items to a shopping cart). What do you think we, as hiring managers, are going to do at that point? We'll have to start forming biases. And if we have to start forming one, it's better to start with good schools and good companies. (I'm sure VCs have a more severe version of this problem.)
The problem gets worse when you're hiring at scale and you want to hire before the holiday season nears, because if you miss the season, the company is doomed. At that point, there is almost panic. A resume with brand names on it naturally gets higher preference.
I dunno - part of leading an eng group is taking heat to do the right thing. I've been in the growth phase a few times now, and been under tremendous pressure to rapidly build a team. Taking the time to find the right people is absolutely key - better to hold off than to get the wrong people in. Maybe it's because I've seen complete dipsh?t Stanford and MIT grads, or maybe it's because I didn't go to a marquee school, but I put a big line in the sand on that one...
Taking the time to find the right people is absolutely the key, but the unwritten part of the rule is not to do that at the cost of the business. Better to hire engineers who are good enough than to miss the holiday season.
The DS/algos process doesn't necessarily find the wrong people. It's just the fastest way to find engineers who are good enough.
In some sense, they almost secretly WANT you to succeed, by "standardizing" the process.
By forming biases toward applicants from "good" schools/companies, wouldn't you wind up losing out on plenty of potential hires who could actually be better?
I don't go to a top school, but I've spoken with students in similar degree programs who don't do nearly as much as I do outside of class to learn. In some cases, my breadth (and sometimes depth) of skills and knowledge surpasses what those students have and know.
It would seem unfair to give them a pass simply because they had the chance to go to a "better" school.
Of course you will miss some good candidates and it's unfair to them.
But as soham explained, it's a trade-off. You might miss the best candidate, but you will probably find the second- or third-best candidate while spending significantly less time.
It's totally unfair, yes, but it's obvious why it's happening and there's not much that could be done to change it. So while the candidate is losing on this one, the company is (probably) winning.
Yes, you do. But do you have a choice? The key part in soham's comment is "at scale." Obviously the optimal strategy, from the perspective of finding the single greatest candidate, is to interview everyone and throw out no resumes. But there is an additional time constraint. So you start throwing out resumes.
I don't know what other industry you have experience in, but this is fantastic compared to the rest of the world. In "soft skill" jobs, I'd bet the house that credentials, prestige, and "reputation" end up accounting for a lot more than a 30% higher acceptance rate.
Should we improve it further? Absolutely, but to pretend that this isn't better than other industries is silly.
I've worked for LA and NY companies (among others) and never seen anything like the elitism that exists in Silicon Valley.
Silicon Valley is mostly funded by a few elite institutions, so it shouldn't be a surprise that they fund elite VCs, which then fund elite founders (and hire elite employees).
The funding sources in LA and NY are much larger and more diverse, so the elitism is far more diluted. It's a market opportunity that SV investors are so biased. Crowdfunding with equity might totally upset the applecart at some point.
Which is why you get only upper class people funding upper class people
In my experience it's even narrower than that. In many cases, there is a pre-existing relationship between the investor/founders.
I once met (in a restaurant, because we had kids the same age that were making eyes at each other!) a serial entrepreneur. When we got to know each other he confided that his investors were friends from school (some Ivy league school, don't remember which) that would give him money for some idea, he'd start the company then they'd find a buyer. He'd done this 3-4 times already, and he was about 40.
I know this example is one data point, but I've run across it in other situations, too.
We flat-out ignore credentials in hiring decisions.
If an elite school is intended to signal competence and brilliance in someone, then those qualities should shine through in a candidate without us having to know where they attended college.
Simply stated: we're hiring you, not your certificate.
Doesn't that support the fact that it is a meritocracy? Is it not reasonable to expect that top companies and top schools are more likely to employ people who have more applicable talent and skill?
At the end of the day, going to a top university or working at an impressive company is always going to be a huge and relevant signal. It's difficult to see a problem with that.
>>Is it not reasonable to expect that top companies and top schools are more likely to employ people who have more applicable talent and skill?
No. It's more like a club.
Join prestigious institution X, and then you shall enjoy lifelong benefits: employment, a higher-than-average salary, bonuses, stock, opportunities to travel, etc. Even if the person is actually the worst possible employee, or is barely productive, merely having X on your resume guarantees you lifelong privilege.
To see how bad it gets, come look at how it works in India. There are people who join the IITs (Indian Institutes of Technology), a chain of colleges that is supposed to be Ivy League. Who says so? They themselves, because saying anything otherwise means putting your own career in danger. There are coaching institutes that train you just to pass the entrance exam. It doesn't matter what you go on to do there; in fact, from then on you may do nothing with your life at all. The whole purpose of getting into those colleges is to enjoy the lifelong privilege of having access to alumni who will ensure you a good career regardless of your performance.
The day you remove real metrics of merit and put in artificial flags, people will do no real work and just try to gather as many flags as they can.
Right, I'm shocked that everyone can't see that it's a club, to fund people who went to your school over others. I bet the original guy who posted that was in the club :-)
You're assuming that the interview is measuring skill correctly in a few hours, but the candidate's several hundred hours of course project work and exams at the forefront of his life over several years are irrelevant. This seems like a big assumption. Why do you think your whiteboard regime measures skill better than a well-designed set of CS classes?
It's different because other fields haven't built up an entire mythos about how it's way more meritocratic than everyone else.
Tech prides itself on being more objective, more rational than other fields, but in reality it is no different.
In other fields the effects of class and network are openly acknowledged; in tech, to even address the issue you first have to punch through the mythos.
In other fields the open acknowledgment of these issues has resulted in some action to de-bias the system (see: blind auditions for orchestras, residency matching for doctors). These efforts are imperfect, but nonetheless still way further along than anything we have.
No, in reality it is substantially more meritocratic. In most professional/white-collar industries, not having a college degree would wreck your career. The fact that it only merely disadvantages your career in Silicon Valley is not evidence SV is not meritocratic.
> And it's much worse for founders seeking investment, where there are no hard skills to test at all. It's almost purely about being the same class as the investor.
yeah I figured this was how things happened :/
But this is just the reality: when you let people feel free to choose and make their own decisions, they are going to find safety in numbers and in people similar to them.
This explains the disproportionate lack of African Americans and Latino Americans in tech, and the "Bamboo Ceiling" that many Asian Americans experience in the corporate and academic world, where Ivy League schools cap the number of Asian American applicants. Jewish Americans were also capped and barred from attending Ivy League schools a hundred years ago, but not anymore, so this probably means that change will happen eventually (even if it took a fucking century for that racist-ass mentality to change).
On the positive side, no one can stop you from making lots of money using the internet. And if you have something that really takes off, those same investors will line up at your door.
Right, but it's still discouraging to be forever branded a "2nd tier engineer/human/etc" because of where I went to school (unless I get into a good grad school).
This is very specifically the case in Silicon Valley, the insular center of the world, and not the case elsewhere. Most companies I've interviewed at couldn't care less about where I went to school as long as I could build their stuff.
Naw - places that hire like that are myopic, shitty, and doomed, so you don't want to work there anyway. If you're a good coder and not a complete dick, you can build a nice career for yourself.
Yup, no doubt. There are lots of good areas for tech that aren't elitist like this, but compensation-wise I don't think any can touch the top 3 (Seattle, NYC, SFO). That's just what I've heard ¯\_(ツ)_/¯, I don't claim to know much.
Another tip which I give: Interviewers vary widely in how much they care about whether your syntax is accurate, whether you handle invalid inputs, and whether you write unit tests. It's really useful to ask the interviewer whether they want you to worry about those things.
If you handle invalid inputs for an interviewer who doesn't care about that, they're going to be a little annoyed by you going more slowly than needed. If you don't handle invalid inputs for an interviewer who does care, then they'll think you're careless.
This happened to a friend. He confirmed that pseudocode would be acceptable, but then as he was writing it out the interviewer got on him about not terminating lines with semicolons (I suppose the pseudocode looked C-ish). So yeah I'd say make this clear.
I interviewed quite a bit last year (on the hiring side). I was really surprised by the variation in pseudocode written by the candidates. Most wrote something JavaScript-like, a few stuck to mostly proper Java or C. But then one dumped a giant web of crazy on the board (but still made his point) and one wrote something that looked suspiciously like COBOL - still not sure if he was trolling me.
My pseudocode used to be a sort of relaxed Haskell, because it's closer to how I think about a solution... but some interviewers rejected it as not resembling any kind of code, so now I use something imperative and Pythonesque, which hasn't gotten complaints so far. The sad thing was that in some cases the Haskell "pseudocode", unlike the Python, would have actually compiled and solved the problem quickly (within a factor of ~4 of C), and it took me about a minute to write.
Unfortunately I think Haskell is disproportionately well-suited to these kind of toy problems, so being able to answer interview questions in Haskell doesn't tell the interviewers much except that you think yourself especially clever.
I wrote some Ruby in an interview. It was so terse, I had to explain to the interviewer (who favored Java) what the code did, and why it was linear instead of O(n^2). That was actually kind of fun.
If your code is so terse that it's not very understandable (such that its complexity isn't clear), surely that's a legitimate red flag?
If it's idiomatic Ruby (which some, like me, are not familiar with), I think it would not be a red flag if they could explain the details of what the syntactic sugar represents, and why its runtime is what it is.
My shipping boxes hold an even number of widgets, but I "have to" sell odd quantities, and those need expensive mil-spec styrofoam peanuts added to fill the hole. Here, have an array of possible shipment sizes. Given that array, if it's shipping an odd number of widgets I wanna add an additional half-widget shipping charge.
newshipping = oldshipping.select{|i| i % 2 == 1 }.map{|i| i + 0.5 }
My ridiculous fictional writing about shipping widgets is way more confusing than the idea that you can select and then chain right into a map.
This probably looks really weird to a Java guy, but it's not really all that mysterious. I wonder what that looks like in Java.
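For what it's worth, a rough Java equivalent using streams might look something like this (a sketch with assumed names and types -- the Ruby above doesn't say whether the quantities are integers):

    import java.util.List;
    import java.util.stream.Collectors;

    class ShippingDemo {
        // Same idea as the Ruby one-liner: keep the odd shipment sizes and add the half-widget surcharge
        static List<Double> adjust(List<Integer> oldShipping) {
            return oldShipping.stream()
                    .filter(i -> i % 2 == 1)   // select the odd quantities
                    .map(i -> i + 0.5)         // add the half-widget charge
                    .collect(Collectors.toList());
        }
    }

Calling adjust on [3, 4, 7] would give [3.5, 7.5] -- more ceremony, but the same select-then-map chain.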
I've had people interview claiming to know X and then not code in X correctly. So... that's a red flag.
We allow interviewees to pick their strongest language. But if you end up picking something that doesn't exist, well, you aren't earning yourself any points.
> But if you end up picking something that doesn't exist, well, you aren't earning yourself any points.
I don't know about your personal interviews, but I'd find this reasoning slightly strange if I were being asked to write computer code on a whiteboard. I'd find it much less strange if I were actually handed a laptop to write a functioning program on.
Expecting perfectly correct code on a whiteboard seems to me to be a slight abuse of the medium. Whiteboards and chalkboards specifically exist to sketch things out in an adhoc fashion, often in a collaborative and easy-to-edit way.
I don't think he meant perfect code. But I've had candidates claim their main language is Java, but were unable to write a proper for loop or know basic data types like arrays or ArrayLists. I've met such people with PhDs and impressive CVs.
Interview enough people and you'll encounter some that are very convincing until you dig down into details. So you have to dig into details.
To "not code in X correctly" is ambiguous, but I assume/hope the parent poster means that someone makes fundamental, non-syntax errors in their code - in C++ this would be something like returning a pointer to an object that's on the local stack.
If you're trying to filter for people who can be productive in a particular language, as opposed to anyone else, that's what you need to look for.
If you let the candidate pick their strongest language and they still make fundamental errors, you know they're not going to be immediately productive in any language.
I wouldn't pass, then, since I live in the post-2000 world and am used to letting the IDE handle the nitty-gritty details while I focus on the actual meat of creating software.
I've had this problem as well. I go back and forth between Obj-C, Python, JavaScript, MATLAB, etc. so much, without spending a significant amount of time on any one language, that I often feel intellectually deficient because I don't know the nitty-gritty details of any of them. Curious to see what others think - is this something I should stop and focus on? Or in today's development environment, is it considered acceptable to have to occasionally look up language nuances in any given situation?
For example, I couldn't tell you off the top of my head how to test for null in Python. I'd assume it'd be if(obj), but after a quick Google search it seems like if(obj is not None) would be the correct answer.
I used to be in a very similar situation, but I was convinced otherwise by this article [1]. The fact of the matter is that, yes, it's easy to become familiar with a variety of programming languages, but I think you actually learn a lot more when you double down on a language (platform, ecosystem, etc.) for a long period of time.
Quoting from the article:
> Leaky abstractions mean that we live with a hockey stick learning curve: you can learn 90% of what you use day by day with a week of learning. But the other 10% might take you a couple of years catching up. That's where the really experienced programmers will shine over the people who say "whatever you want me to do, I can just pick up the book and learn how to do it." If you're building a team, it's OK to have a lot of less experienced programmers cranking out big blocks of code using the abstract tools, but the team is not going to work if you don't have some really experienced members to do the really hard stuff.
In areas that I'm just learning or dabbling in (for me, Objective-C), I look things up or reach out to experts. But there are areas where I want to be the expert that others reach out to.
http://exercism.io has helped me to write more idiomatic code (submit a thing, get comments, refactor, and also comment on other people's code). Maybe it'd help you for the languages they have examples for?
I don't think it matters. No one wants to see imports/includes on a whiteboard, and they don't care much whether you remember if the method on some object was called len(), size(), count(), or length.
I don't get why. When I'm organizing my thoughts in code, I normally write something Python-ish, but not really any real language. Stopping to think about the correct syntax does not make me solve the problem any faster or any better, and since I am writing on paper or a board, I am going to have to rewrite everything anyhow. Maybe I'll even have an IDE to do most of that meaningless effort for me. It's like lazy syntax evaluation; don't do it until you have to.
Lately, when I'm stuck, I've been writing a comment as a placeholder for real code, stating the problem in simple English. Usually the act of putting it into words really guides the code. It'll usually look something like:
# the problem is that our query only matches rows where the ID from foo table equals the ID from bar table, but we want rows from foo table that match the first part of our query regardless
This also makes it easy to ask for help, since now you've turned your "it no workie" into a question which you could ask another person on your team or in e.g. IRC for help with. They might then have additional questions, but I've found more often than not that simply getting a few minutes with someone else is enough for them to bring not-your-entrenched-perspective to the problem and hand you the (sometimes super obvious) solution in short order.
I've had interviewers look at an unweighted keyword digest from my resume, apparently without reading said resume (which clearly states my current skill focus on the top, which has evolved quite substantially over time). And then start "grilling" me on a language that appeared on a job description from 10+ years ago.
Take out any of that old stuff. It's not necessary. Your resume should fit on one page, two at absolute most, and only include things that you would expect to be grilled on. If you are annoyed about being tested on something on your resume, take it out.
It's interesting that different companies will want different things on a resume. This is why no two jobs I've applied for get the same resume. If they want lots of experience in a lot of different things, sometimes they DO want the laundry list of acronyms (make sure you know what they all stand for). You might not even get through the first selection if they use XSLT heavily and you didn't think it was relevant that you had worked with it before on a project.
I've also had interviewers rip the other pages out of my resume in front of me, but everyone is different. At the end of the day, don't feel too bad about not getting an offer. A lot of it is luck.
I still think that in general, people who can't be bothered to read important documents, and instead just eyeball them for keywords (and start shooting off questions accordingly) -- aren't my cup of tea to work with, anyway.
I understand your point, it drives me crazy in interviews as well. But by the same token when I'm interviewing, I want to be able to grok someone's resume as quickly as possible - I don't want to see stuff that they themselves don't think is relevant.
There's no need to be "grilled" on something you did 10+ years ago (unless it's a requirement of the job, of course). Perhaps a "have you used Pascal since leaving ...?" would suffice.
A resume should reflect your current skills and abilities. You should feel free to leave in old positions, and the interviewer can ask about it if they want, but you should not leave in technical things you don't want to be asked about.
It depends on what "not code in X correctly" means. If they missed a few syntactical things, it's fine. If they're obviously still "thinking in a different language", then no. For example, if you ask them to loop over a list of items in Python, they shouldn't write:
for i in range(len(items)):
    do_stuff_to(items[i])
I ask questions about the language to determine if they know the language. Correct syntax isn't going to show me you understand prototypal inheritance.
This is kind of the point, right? Most places I've interviewed are far more interested in your communication skills, logic and thought process than writing perfect code on a whiteboard.
Many of my friends have failed to see that this is actually the reason they have you write code on a whiteboard.
I constantly hear that interviewers are really interested in seeing how you think and communicate. Yet in my experience, if you solve the problem exactly the way the interviewer is looking for, in a reasonable amount of time, you pass 9/10 times. If you don't, you fail.
You should be able to carefully review a few lines of code for syntax errors. This is important because incorrect syntax might be ambiguous as to what it could mean.
You should not be expected to write syntax-error-free code on your first pass while solving a problem, without machine assistance.
It sort of makes sense. If someone knows a language well, they shouldn't have much trouble writing it syntactically correctly on a whiteboard. Especially in languages which have simpler syntax, like Ruby vs eg Scala.
That's far less true if you use several languages on a regular basis. .size .length .count, which one is used in _? Does it use () after it?
Interviews are often based more on what the interviewer knows than the project / resume.
Now what happens when someone asks about a language that you have not used in 3 years? Well it gets fuzzy. Ramping back up on an old language might take a few hours, but that’s meaningless in terms of a job.
In his "Programming pearls" book, John Bentley stated that he always first writes non-trivial algorithms in pseudocode, and only then transforms them to the destination language.
The point is, it's much easier to focus on the idea of the algorithm when writing it down in pseudocode, without having to worry about c / c++ details that obfuscate the idea.
I wouldn't call Ruby's syntax simple. Elegant, yes, but not simple. I'd consider myself a pretty seasoned rubyist, but my IDE catches syntax errors for me all the time.
Good point; I've encountered this. Some candidates unfamiliar with the process may not even realize the interviewer wants them to ask that. I didn't know when I started out, and used to think a good interviewer would specify what they want; that may not be true, although it would be a nice thing to remind candidates that they can ask for clarification not just about the question, but about testing and such.
I know some interviewers may be interested in helping, but it's important to note that assholes exist, especially at larger companies. There can be head games and assumptions made where they needn't have been. Even when I've been hired, it can feel like if I'd done it again I might not have been. Try your best, but don't be too upset if it goes badly either. Similar questions often come up too; it's actually amazing how many questions there are about linked lists and trees.
I also think that it's not obvious that the interviewer is doing the wrong thing here. The claim "good programmers should always think to guard against invalid input" isn't ridiculous on the face of it: maybe checking for valid input is a sign that they're careful and methodical, and of course you want to hire careful and methodical people!
Or the other way around: I can imagine someone thinking "this person spent ages on checking for invalid input; I bet their code is always bloated and ugly".
The problem is that programmers do this one way or the other based on personal preference, not because of actual differences in ability. Once you know that, it makes less sense to care one way or the other.
There's a correct place for every class of input validation. The point is that you don't want multiple levels of input validation for the same thing. Most of what passes for "defensive coding" is superfluous. For example, if you are passing a pointer into a function, you don't need to reflexively null-check it inside every helper. However, null checks at the right boundary are important.
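One way to picture that idea, as a minimal Java sketch with made-up names (the comment is about C-style pointers, but the same principle applies to null references):

    import java.util.Objects;

    class UserService {
        // Validate once, at the public boundary, where a bad value can actually arrive...
        public void registerUser(String email) {
            Objects.requireNonNull(email, "email must not be null");
            normalizeAndStore(email);
        }

        // ...so internal helpers can assume a valid argument instead of re-checking it.
        private void normalizeAndStore(String email) {
            String normalized = email.trim().toLowerCase();
            System.out.println("storing " + normalized);   // stand-in for real persistence
        }
    }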
You can always preempt the whiteboard issue by bringing a laptop along.
"Hey, I'm a lot more comfortable writing code on a keyboard and with an IDE. Lets program this together in a text editor instead of a whiteboard".
As a candidate, I feel that it is fair to stand up for this as well. If the interviewer wants pseudocode, I'm happy to whiteboard it. But I've been whiteboarding before and had the interviewer say, "that code wouldn't compile, you're missing a bracket." So I said, "If you want code that compiles, bring in a laptop and I'd be happy to put it into Visual Studio [it was a .NET position] and make it syntactically correct; but if I'm whiteboarding, it's going to be pseudocode."
Some places I interviewed did mostly whiteboarding, but had a problem that required producing working code. One place had a standard desktop setup set aside for me - with the most popular IDEs pre-installed. Another had me bring my laptop.
What was their response to this? I think it's perfectly valid to state this, and it may even shift the balance of the interviewer/interviewee dynamic, but I can also see other people viewing it as obtuse. Then again, pointing out a missing bracket (in a nonconstructive manner) on a whiteboard is pretty obtuse too...
It was the last interview of the day, and I was tired and had already decided I was almost certainly going to decline any offer, if given. So, honestly, I probably said it with a little bit of an edge, and that was inappropriate on my part.
Nevertheless, the interviewer was gracious and replied, "Fair enough" and stopped nitpicking my brackets and semicolons.
I got asked to find the intersection of two squares for a Django coding job. I pointed out that it had absolutely nothing to do with Django (after getting a solution).
I was told they thought I might be difficult to work with.
Cool, since you've opened <preferred text editor> and you have a dev environment: write a compiler/parser in BNF for the editing commands your editor supports. Assume non-standard encodings are possible for the key presses. Here are the examples for vim/emacs. It should work with both.
Indeed, having the candidate use their laptop gives the interviewer a valuable signal, too: whether the candidate has a coding environment they are comfortable and fluent in. Are they stumbling on their editor, or is it an extension of their mind?
When I interview, I usually ask for pseudocode first, then have the candidate write it out in actual code. Think of it like writing a short essay: outline first, then actually write the essay with proper grammar to the best of your ability. There will be typos and grammatical mistakes, which I don't really care about, but I do want to see your style and how you use the language to express what you want it to do.
Great point. I tend to be the person who realizes that interviews are stressful, and I'm not going to hold it against you if you miss sanitizing your input or similar issues, but I will call you out on it with a question like "well, what if you get X?", and at least verify that you know it's an issue and then let you add the code to account for it.
And the most humorous interviewers are those that stare at you and slowly answer "uh, mmm, whatever seems fair..." when you ask about these reasonable questions.
For a while, we had a non-typical interview strategy: A take-home project. We would give the candidate a week or so to work on a smallish project, the requirements of which we would specify. After they completed the project, we would do a group walkthrough with them.
We've hired five engineers over the last three years. For the first two, we did the take-home project. But, then I started to wonder a bit about if it was reasonable to ask programmers to work a weekend on a project. There were a bunch of persuasive comments in HN threads on the subject saying it was unfair -- a job seeker would have to spend an incredible amount of time on each application. And one candidate that I really liked aborted the interview process once I told him about the take-home test.
So I changed the process to something much more typical, with live, in-person coding exercises. We hired three more engineers under this system.
So, how did they compare? Well, the engineers hired when we were doing take home projects have worked out INCREDIBLY well. They are independent and very resourceful. They are excellent.
The engineers hired under the more typical system have not done well at all. We had to let go of two of them, after months of coaching, and the third isn't doing that well.
Random chance plays a huge role here, I'm sure. Maybe we just got lucky with the take-home project engineers. But personally, I think it makes a lot more sense to have the interview process match the work. /shrug
Right there with you; we do take-home tests. It's all about the expectations. We outline up front what we're looking for (logical code separation, FP-inspired, sound code), and what would be nice to see (testing, etc.).
Mostly, we want to see how well the developer can explain their decisions throughout the process. Maybe they made a shitty decision. If they know it, and explain why it was shitty, that's a pass.
When I joined the company, the exercise took me about 3-4 hours. I don't think that's a ton to ask, especially if you make the onsite interview less intensive (which we do).
I instituted take home tests within my group. We're up front that the interviewee should not spend more than 4 hours on the project and that we don't care if it works.
I am more interested in whether or not I can read the person's code, follow their logic, and if they think about logging, unit testing, etc.
Coding skill, while important, is a very small part of whether or not I am interested in a candidate. Soft skills are a much stronger part of the equation IMO. I couldn't care less if a person can write a recursive function or if they don't know the performance difference between different ways of doing things.
Take-home projects are just a more complete and accurate representation of the development experience someone would have on the job. You have much more data about them than a 30-60 minute live coding session.
Though my current company doesn't do them, I was hired for a previous position from a take-home project and generally support them. As an engineer, I'd rather put more effort into a smaller number of take-home projects than a larger number of live coding sessions...
I really wish that at some point during my CS education I would have realized how typical programming interviews worked and just how impossible they are for me. None of my internships had this sort of stuff and after a long string of failures interviewing after graduating, I can openly admit that being able to solve algorithm stuff just isn't in my blood.
It doesn't matter how many books I read or questions I practice, I just can't work these kinds of problems. If I had an inkling of what it was like beforehand, I would have switched majors or dropped out. Half a decade of hard work, tens of thousands for tuition and a useless degree at the end.
I don't know whether to laugh, cry or jump off a cliff. Maybe all three.
One thing that helped me was to actually implement things that made use of the "algorithm stuff." Not just practicing in front of a whiteboard, but real executable code!
In particular, anything involving a Tree was never very intuitive until I tried implementing a script to search for files by name. Suddenly both the recursive and the iterative approaches made sense. I understood the trade-offs because they applied directly to my work. That opened the door for more complex algorithms, like a Huffman Coder (which I wrote in C, as part of a CS class).
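For what it's worth, here is a minimal Python sketch of the kind of file-search script described above, with both a recursive and an iterative traversal; the function names and structure are just illustrative, not from the original comment.

    import os

    def find_recursive(root, name):
        """Depth-first search that leans on the call stack."""
        matches = []
        try:
            entries = list(os.scandir(root))
        except OSError:
            return matches  # unreadable directory: skip it
        for entry in entries:
            if entry.name == name:
                matches.append(entry.path)
            if entry.is_dir(follow_symlinks=False):
                matches.extend(find_recursive(entry.path, name))
        return matches

    def find_iterative(root, name):
        """The same traversal with an explicit stack instead of recursion."""
        matches = []
        stack = [root]
        while stack:
            current = stack.pop()
            try:
                entries = list(os.scandir(current))
            except OSError:
                continue
            for entry in entries:
                if entry.name == name:
                    matches.append(entry.path)
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
        return matches

The trade-off is exactly the one that shows up in interviews: the recursive version is shorter but can hit the recursion limit on very deep directory trees, while the iterative version trades that for a little explicit bookkeeping.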
The other thing that helps me in interviews is to just talk through everything I am thinking. It feels like I'm stating the obvious over and over, but a good interviewer will be able to follow your thought patterns and help you along when you get stuck. Also it just helps to hear yourself say things out loud sometimes. If you just stand there silently decoding a problem in your head, the interviewer can't help you and has no idea how you'd approach a real problem on the job.
> actually implement things that made use of the "algorithm stuff."
Part of the problem is that the overlap between "concepts used in technical interviews" and "practical things I would actually implement" is very small. So to take your approach, I'd have to go out of my way to implement something that, in the end, would have no utility to me.
Granted, there are developers out there who do take on really technical projects for fun. It seems like every language has at least one implementation of the GNU `coreutils`, which I'm sure requires at least some algorithm expertise. And there are, of course, jobs where the technical is really important (systems-level, research divisions, etc).
But for the most part, the types of interview questions they ask in no way correspond to the actual on-the-job requirements. I find it strange how common it is to interview web developers (or, engineers whose job will effectively be web development) as if they are C programmers.
> The other thing that helps me in interviews is to just talk through everything I am thinking. ... If you just stand there silently decoding a problem in your head, the interviewer can't help you and has no idea how you'd approach a real problem on the job.
This.
Most important thing for me as an interviewer is for you to talk out loud about what you're doing and what you're thinking.
Just let it flow - all of it.
Programming is often a solitary thing - we go through a lot in our own heads when writing code; e.g. say I ask you to write some code to iterate through some website log files and build a site-map from the URL paths in the log entries. I bet that your mind - even as just a reader of this comment - has already started working out some of the steps to do this ... "load the files, iterate through line by line, some sort of map or DB to store the paths, some way to handle duplicates" etc. You might even already have some questions lined up about number of files, frequency of running, shell scripts vs "real" code etc. Great - but I need to know that you thought about that stuff, and I won't unless you talk about it out loud.
Standing silently at the board doesn't give me, the interviewer, much to go on. Even if you get stuck and/or make a mess of it and don't have a working solution at the end, having talked out loud about what you're doing and why you're doing it (and why you're NOT doing something else, etc.) might be enough on its own to get you past that interview despite your solution not working (everyone has off days, but show me your thoughts!)
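A rough Python sketch of the log-file/site-map exercise mentioned above, purely for illustration; the log format and the regex are assumptions, not part of the original comment.

    import re

    # Assumes roughly Apache/nginx-style access-log lines, e.g.
    #   1.2.3.4 - - [10/Oct/2016:13:55:36] "GET /blog/2016/post.html HTTP/1.1" 200 2326
    REQUEST_RE = re.compile(r'"(?:GET|POST|HEAD) (\S+)')

    def build_sitemap(log_paths):
        """Return a nested dict of URL path segments, deduplicating URLs."""
        seen = set()
        tree = {}
        for log_path in log_paths:
            with open(log_path, encoding="utf-8", errors="replace") as f:
                for line in f:
                    match = REQUEST_RE.search(line)
                    if not match:
                        continue
                    url = match.group(1).split("?")[0]  # drop query strings
                    if url in seen:                     # handle duplicates
                        continue
                    seen.add(url)
                    node = tree
                    for segment in (s for s in url.strip("/").split("/") if s):
                        node = node.setdefault(segment, {})
        return tree

Talking through even this much out loud (file handling, duplicates, what to do with query strings) is exactly the signal the interviewer is after.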
I am sorry that CS education has not worked out for you so far. I want to emphasize the "so far" part of that. I hope that that's hyperbole at the end of your post. If not, I have to encourage you to reach out to friends and family, and maybe take a break from interviews. You can get in touch with me (email in my profile).
There is a huge amount of randomness in interviews. I usually say this as a bad thing, but in your situation now, it can actually be a good thing. Interviewers look for a WIDE variety of traits. Most ask algorithmic questions. But if you do enough interviews, you will find a company that values the skills that you have (I am assuming here that you can program productively). Smaller companies particularly have more variance in what they are looking for. I encourage you to just treat this as a numbers game, and get applications out to as many small and medium companies as you can.
This sucks. But it can and does get better. After you have a few years of experience under your belt, companies will look at you in a very different light.
EDIT:
Also want to add that we made Triplebyte to help people like you. We'd love to have you apply.
There are a lot of people who leverage their CS degrees for non programming jobs - anything from QA, to Product Management, to Sales Engineering. Please don't discount the value of your degree so easily! It's not a waste - you will find a way to leverage it in a related field.
There is a real need for people who understand programming, even if they aren't heads-down, programming geniuses themselves.
Calm down, buddy. There are a lot of places where you can apply your CS and software dev learning without having to have an encyclopedic knowledge of algorithms. I would even go so far as to say that those jobs are in the minority.
Are you committed to working as a professional programmer? If not, a rapidly growing area where lots of help is needed is the intersection of computer science and law. Lots of bad laws exist on security and privacy, because the people who drafted them don't have a CS background.
If you do want to work as a professional programmer, try just typing through solutions and understanding them. After lots of exposure, you'll start picking up patterns. Dynamic programming and greedy algorithms are taught so quickly that I'm surprised students develop any intuition for them at all! Feel free to get in touch if you're struggling. Best of luck.
There are jobs for CS that care about other traits.
System design, user interface design, HCI, software engineering (methodologies, management, architecture), infrastructure, etc that don't lean as heavily on the algorithmic side of CS as they do other aspects.
I know that while interviewing where I work, a candidate's attitude and enthusiasm for programming are much more important to me than their ability to solve some riddle in half an hour. It's not the solution I'm looking for, it's how they get there.
There are jobs where you don't have to answer this crap. You'll probably enjoy working in those types of places anyway. Have you tried companies that use software as a tool instead of being software companies? e.g. banks
> This situation is not ideal. Preparing for interviews is work, and forcing programmers to learn skills other than building great software wastes everyone’s time. Companies should improve their interview processes to be less biased by academic CS, memorized facts, and rehearsed interview processes. This is what we’re doing at Triplebyte.
Thank you! This is a good write up and just like it concludes it's far from ideal.
I'd love to see more interviews based on real-world type things like maybe code a project, come in and explain the architecture, reasons for your data structures, performance questions, etc. Shows how you code and communicate and even better: work with others and maybe walk someone through extending your project or something similar.
Honestly though my biggest issue with interviews is the lack of response with a negative result. For instance one of my last interviews I spent literally months with the company interviewing on and off on the phone and in person. I never heard a SINGLE negative thing from anyone, always answered every question correctly, shot the shit with many of them and everything seemed perfect. Even the team lead asked me not to go after something else because he wanted me. Then I was ultimately declined, with the only reason given being "lack of experience". But I had over 12 years of experience, all of the interviewers told me I went above and beyond, said they agreed and liked the solutions I came up with, that I talked through them well, etc etc. I was never able to get anything else out of that.
If something is wrong with a candidate and they don't fit, that's perfectly fine. But please give them accurate and detailed feedback where possible. This leaves me with absolutely nothing helpful, and instead of possibly improving on something I'm left thinking they made a mistake or everyone simply lied to me during the entire process. I was even exchanging texts and emails with the lead right up until I was given the negative result, and then nothing.
> Honestly though my biggest issue with interviews is the lack of response with a negative result.
Exactly mine too. However:
> If something is wrong with a candidate and they don't fit, that's perfectly fine. But please give them accurate and detailed feedback where possible.
Detailed feedback is hardly ever possible. Not only because of a fear of litigation (which is a big factor too) but also because the hiring decision is a matter of balancing so many different points.
> Detailed feedback is hardly ever possible. Not only because of a fear of litigation (which is a big factor too) but also because the hiring decision is a matter of balancing so many different points.
You're right but if a candidate is going to spend hours or days working with your company I think it's the least they deserve. If there is a legitimate reason for not hiring them I'd like to think litigation would be rare.
I mean, when you work with a sales guy to buy something and ultimately don't buy, you usually tell them why, especially if you've been working with them for days. The inverse is true as well if you decide not to sell a product to someone. It just seems weird to me that a person can spend so much time with a company and possibly not even get a good learning experience out of it. When you're left with the impression that everything went as smoothly as possible and then you're declined without any useful data, how do you know how to improve, if at all? Hell, maybe someone was just better than you, or they decided they needed something else for the job; telling the person could save them so much trouble.
If you want someone to spend hours or days trying to join your company I feel like you should be able to give them feedback. I don't like the trend of big companies not giving anything. Interviewing is a two way street but there is just a weird stigma or legal worry preventing feedback.
Maybe? I like to consider myself pretty valuable (ego++) :D
But you're right. I wonder what the statistics are for interviewing: are more people interviewing while they don't have a job, or while they have a job and are looking? I feel like it has to be the former, like you were suggesting, but I haven't seen data either way (I'm not even sure about the logistics of collecting that in a good, representative way).
I totally agree with this. Companies should give (constructive) feedback when they say no. They don't, partly out of a fear of being sued, but also because they often don't really know why they reject people. The default state at most companies is rejection. If no one really liked you during the interviews, at most companies this will result in a rejection.
We're doing this differently at Triplebyte. We give everyone we don't work with (who does our final interview) a several hundred word personal email, with an explanation and advice on how we think they can improve.
I think another reason is a fear of starting an endless argument with the candidate, when the candidate believes they were actually right; the company has already made the decision, so arguing is rather pointless. Not sure how likely that is to happen, though.
True I could EASILY see that happening. I think the important point is to provide the feedback then stop communication (unless it's like a simple thank you or maybe some clarification you could give them or something). Arguing back and forth with an employer who already made the hiring decision isn't going to help things but it's hard not to be defensive if you think someone is wrong.
Not to excuse your case, but a company I work for just got funded and posted a job ad on AngelList; it's been a week and there are 50+ applications sitting in the mailbox. We produce and sell a physical product, so one of the founders keeps the operations going, the other does biz dev, and I do tech. This year alone we need to hire about 10 more people. It's overwhelming. Especially so when hiring the first key employees for roles you don't have experience with.
I've interviewed for a lot of YC companies and companies that frequently post in the "who's hiring" thread, and the programming interviews they give are absolutely horrendous. I've had programming tests where companies look at my resume and go "so you are very experienced in Ruby? Great, solve these algorithms in C++ for us." I've actually had someone give me an ACM-ICPC world finals question.
I don't have a problem with pushing someone's limits and seeing where they stumble, but a lot of these interviews seem intentionally designed to screw you over, with the abstract way the questions are asked and the brain teasers they give you.
Hiring talent is a skill most founders do not have because they haven't done it before. It seems like the most egregious oversight ever for an investor to make: "oh, you have a really good core team of 3 people who you've been friends with for years? Here's a million dollars, go expand your team with 10 new people."
I remember the best interview I had was at a company offering open source software. For the coding component of the interview they gave applicants a task (they cherry-picked one of the easier ones) from their JIRA backlog and told applicants they had two weeks to come up with a patch. It didn't matter if the applicant could fully solve the bug/feature, since it's not particularly fair to expect the applicant to be familiar with the ins and outs and gotchas of the system they are working with, but what did matter is that they A) submitted something, and B) could talk their way through their thought process and how they arrived at their current solution.
This sort of process really clicked with me. Obviously it's not as straightforward for some organisations that may offer proprietary software, but the process could certainly be adapted for a lot of organisations, in my opinion.
This is interesting because without knowing how their system is architected or designed, it allows you to state assumptions and you can use that time to assert knowledge of best practices of a given framework. Sort of an implicit test for depth though.
Edit: I missed where you said open source company. Though I still think this thought for a closed source company that uses open source pieces (like a web framework) could be useful.
What if designers had to go through a similar interview process? Here are some colors, please arrange them in palette groups that are color coordinated for a given visual effect? Why is red font on blue background bad, please justify? That would simply be hilarious.
> Here are some colors, please arrange them in palette groups that are color coordinated for a given visual effect?
"Please arrange these colors in complementary, analogous, and triadic color schemes". This is color theory 101 -- something every visual designer should now.
> Why is red font on blue background bad, please justify?
Again, a valid question. It all depends on the brightness/saturation of the colors, and how much contrast is between them.
Edit: Obviously these are ridiculous and unnecessary interview questions because frequently a designer provides a portfolio of work that demonstrates their skill. This may not be possible in a programming interview if the candidate has not been working on open source code or side projects.
Many (most?) times you provide samples for a software engineering job, they don't get looked at or do much to help your cause. Having samples seems to help get an interview, but after that it's as if the samples were never even provided. I link to many samples right from my resume so interviewers don't have to depend on HR or whoever passing on extra files. I've often asked the interviewers, "just curious, what did you think of my samples?" Almost without exception they sheepishly say something like "sorry, didn't have time..." Which is probably true, but geez, you'd think a portfolio would matter more and merit a required look.
On a related note, I love the story about the Homebrew guy getting rejected by Apple because he couldn't invert a tree or some such thing. Even if it was sensationalized (or plain false for that matter), we aren't even surprised by headlines like that anymore given the current interview climate.
When interviewing, I'm interested in seeing the process a candidate takes to get a result, not just the result. I like looking at code samples to get a feel for code quality and software engineering practices (everything from variable naming and program structure to Git commit messages).
Real-time coding during the interview shows me how good a candidate is at collaborating when solving a problem. One good indicator is asking follow-up questions to make sure requirements are clear. Other things to look for are, e.g. TDD, which can help solve a problem with predictable correctness and pace. I have interviewed quite a few candidates who throw a bunch of code into the editor and start pseudo-randomly tweaking the code to try to make it work -- this is not a recipe for successful problem solving. Some aspects of code quality can be evaluated during the interview as well, which probably explains why many interviewers don't bother looking at code samples.
It's important to ask for a solution that can be easily deduced -- trick questions are bad for both parties, because the candidate is often stumped, and the interviewer, who knows the trick and so finds the question simple, concludes the candidate is bad. Many classical algorithmic questions are "tricky" in this way.
The questions also have to be "real world". Like you mentioned, you can be a successful engineer and not know the specifics of inverting a tree. If you can correctly recognize the need for the algorithm and implement it from pseudocode, you have solved your problem, which is what engineers are paid to do.
I have a portfolio of past work (I made games that are still available, I even bring an iPad with them playable on them). I still get asked a ton of technical questions, and I'm lucky if I can even show them my past projects, because they never trust that I was the actual programmer on these projects (even though in the credits for a couple of them it says 'Lead Programmer: [cableshaft]').
It's a totally broken perception, and it needs to end. We're one of the few professions that are routinely asked to prove our abilities in every single interview we attend. It wastes so much time on both ends and there needs to be a new way.
Well really, they don't usually even let me get to the point of showing my work where I can show my name in the credits. My point is they could have checked that ahead of time and seen my name in the credits if they wanted to verify that I actually did what I said I did in my resume, and they don't.
For the handful of people I've had to interview, I always checked the interviewee's portfolio ahead of time if it was available.
I've had interviews with some notable startups that gave a take-home design test, along the lines of: here is problem X, now design a solution and show your process. I did it once in my naive young years when I really wanted to work for a specific company. Now, I'm fine with telling them I deserve to be paid for a day's work if we're going that route.
You jest, but my family members in the restaurant industry ask similarly technical questions of a potential chef. Stuff they may not need to know on a day-to-day basis, but it's the technical knowledge of their profession.
I discovered their interviews and mine are largely the same. Technical questions, some time-management questions, and some soft stuff to make sure they'd fit in the team.
The typical Silicon Valley style interview equivalent for a chef would be to have said chef participate in Chopped. Plenty of professional chefs do terribly in Chopped, because the required skills have little to do with what you need, day to day, in a kitchen. However, it's easy to judge. What is missing is the correlation between a hard test and actual on-the-job performance.
> "That’s exactly the point. These are concepts that are far more common in interviews than they are in production web programming."
The list includes things like Big-O analysis. While formal analysis is certainly not a day-to-day occurrence in most programming, knowing the runtime complexity of the code you are writing is almost always important.
While I generally don't care for most algorithmic problems, I am honestly concerned when someone can't tell me that something runs in O(n^2) and could probably be optimized.
EDIT: I should note, I'm not necessarily looking for them to use the precise terms here, but they should be able to clearly articulate the idea.
So, I see where you are coming from (I actually love academic CS). But the VAST majority of the programming work out there does not require any Big-O analysis. It just does not. It's used as a tool in interviews to (essentially) look for rigor. The problem is that this harms people who are rigorous as hell in low-level details of JS and V8 (something I'd posit is actually more useful to many more companies), but never studied academic CS. There's nothing wrong with valuing the skill of complexity analysis. But there's a mismatch in how common it is to look for this.
> But the VAST majority of the programming work out there does not require any Big-O analysis.
Your point is simultaneously valid and irrelevant. The vast majority of programming doesn't involve any Big-O analysis. But if you can't do Big-O analysis, there are problems where you will be stuck. Your code will be running slowly and you won't know why, and all the micro-optimizations in the world can't make a O(n^2) algorithm run faster than an O(n) algorithm on even a moderately large data set.
To make an analogy with driving: the vast majority of driving doesn't involve parallel parking. But you still need to know parallel parking to pass a driving test.
You don't need to know how to prove a big-O to optimize an algorithm. Anyone who can think about what their code is doing will have at least some sense of how much it is doing.
Honestly, I don't think calculating the O(f(n)) is ever worth it outside of academia. Intuition is only slightly worse, and it is just so much time-cheaper.
> Honestly, I don't think calculating the O(f(n)) is ever worth it outside of academia. Intuition is only slightly worse, and it is just so much time-cheaper.
Sure. I've never been asked for a formal mathematical proof of the Big-O complexity of my algorithms in an interview. And, as an interviewer, I've never asked. Intuition is exactly what interviews are looking for. If an algorithm has a nested for loop, and the inner loop is traversing the full data set, you should be able to say, "Oh, yeah, that looks like O(n^2) time complexity." If an algorithm is storing every element of a data-set in memory, you should be able to say, "That's O(n) space complexity."
Big-O notation, in practice, is a handy shorthand for talking about various classes of algorithms, and ranking them in order of how much time/space they take.
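To make the intuition concrete, here is a small hypothetical Python example of the two patterns mentioned above: a nested loop that is O(n^2) in time, and a single pass that is O(n) in time but O(n) in extra space.

    def has_duplicate_quadratic(items):
        # Nested loops over the same data: roughly n * n comparisons,
        # so O(n^2) time, O(1) extra space.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_linear(items):
        # One pass with a set: O(n) time, but O(n) extra space,
        # since in the worst case every element gets stored.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False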
> But if you can't do Big-O analysis, there are problems where you will be stuck. Your code will be running slowly and you won't know why, and all the micro-optimizations in the world can't make a O(n^2) algorithm run faster than an O(n) algorithm on even a moderately large data set.
That's actually incorrect. Not knowing Big O does not prove that you do not know how to optimize an algorithm.
However, it does mean that you don't speak the common language of computer science, that would allow you to easily communicate the effect of your optimizations to other programmers.
The whole point of the argument is the difference between what's on the test and reality, though. And simply "being on the test" doesn't make it valuable.
Needing to parallel park on the driving test has little relation to you being a good driver. In reality, your actual ability to park is hardly relevant, because you could avoid those parking spaces, or even half-ass it by driving in forward, or whatever.
> But if you can't do Big-O analysis, there are problems where you will be stuck. Your code will be running slowly and you won't know why, and all the micro-optimizations in the world can't make a O(n^2) algorithm run faster than an O(n) algorithm on even a moderately large data set.
This may be a result of the domains I have worked in, but I have never encountered a situation where I could change the order-of-growth on an algorithm. In fact, most algorithms I have worked with have been defined by non-computational performance.
I don't know formal complexity analysis, and personally think the resolution of Big-O is far too coarse for practical analysis anyway. I can still look at any piece of code and give you approximate polynomials for its runtime, memory requirements, and other computational features.
> Your code will be running slowly and you won't know why
Ever hear the expression "tools, not rules"? Using big-O analysis while coding is almost the textbook definition of what makes someone a bad developer.
That's of course not true if you're developing some foundational tool meant for other developers like Redis or whatever, but for the average developer doing complexity analysis at work is probably a red flag that they're prematurely optimizing their code, using company time to work on a side project, or otherwise doing something that's not contributing to the success of the business.
I have more than once in my professional career run into programmers who tested on small inputs and assumed the timing would scale at least close to linearly.
Fair point. But is it not much more common to encounter a programmer who cannot break a complex system into loosely coupled components? Or who does not use consistent naming conventions? Or who is smart but lazy? I'd rate all of these as equally important things to try to evaluate in an interview.
I am a strong believer in looking for strength in an interview. Someone really rocking complexity analysis is a strong positive (shows that they are smart and pedantic in a good way). But so does clean, smart code and great loose coupling.
Oh, I'd never disqualify an interviewee purely for not knowing this stuff, but it is useful to know.
[edit]
Also, I care far less about whether they can solve some problem on paper about asymptotic complexity than whether they have some sense of what it means. Someone with little formal training who discovered that the classic Python "add to the end of a string" pattern is N^2, and built a working knowledge of N^2 vs. N from that, is better than someone who memorized a bunch of math but can't apply it because it went in the "math box"[1]
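The string example the parent mentions is easy to show in a few lines; this is a generic illustration, not code from the comment.

    def build_string_quadratic(parts):
        # Each concatenation can copy the whole string built so far,
        # so total work grows roughly quadratically with len(parts).
        result = ""
        for part in parts:
            result = result + part
        return result

    def build_string_linear(parts):
        # join allocates the final string once: linear in the total length.
        return "".join(parts)

(CPython sometimes optimizes in-place string concatenation, but the general lesson about repeated copying is the one worth internalizing.)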
There are a lot of situations where a "worse" algorithm will be significantly faster than another algorithm that's faster in theory, due to memory locality. In practice, it is very hard to know beforehand what parts of your program will scale and what parts won't.
This is categorically untrue if you are talking about large numbers and asymptotic complexity. It is true that algorithms that only do a constant factor more operations may behave that way, but O(n) will always beat O(n^2) eventually, and in nearly all real-world cases at a fairly small n (1,000 L1 cache accesses take longer than one main-memory access, so n around 1000 is already enough to cancel out any locality advantage).
> But the VAST majority of the programming work out there does not require any Big-O analysis. It just does not.
I just don't agree with this. Maybe it's true for people doing strictly front end web development (i.e. pure HTML and CSS), but basic algorithm analysis comes up all the time when writing any kind of real code.
I think your attitude is actually part of the reason software sucks so bad nowadays. People act like efficiency doesn't matter at all and Big-O is useless, and then turn around and act surprised when browsing a website causes Firefox to use 800 MB of RAM, or their top-of-the-line server only handles 50 connections a second. There's a connection there.
I'd have to disagree. I think the interview process desperately needs an injection of pragmatism. If the actual job never requires Big-O analysis, then asking it during the interview is a waste of time. I've never had to do Big-O analysis in the real world, but I have had to fix N+1's. Ask about that.
Maybe I'm missing something but isn't N+1 the difference between O(1) and O(n)?
Edit: Anyone want to explain how I'm wrong rather than just downvoting?
Edit 2:
Understanding an N+1 problem is equivalent to understanding the difference between O(1), i.e. fetching all the data with a constant number of queries, and O(n), i.e. the number of queries scaling linearly with the number of elements.
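A minimal, self-contained illustration of an N+1 query pattern versus a single JOIN, using Python's sqlite3 and a made-up author/book schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT,
                           author_id INTEGER REFERENCES author(id));
        INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
        INSERT INTO book VALUES (1, 'A', 1), (2, 'B', 1), (3, 'C', 2);
    """)

    # N+1: one query for the books, then one extra query per book.
    books = conn.execute("SELECT title, author_id FROM book").fetchall()
    for title, author_id in books:
        (author_name,) = conn.execute(
            "SELECT name FROM author WHERE id = ?", (author_id,)
        ).fetchone()
        print(title, author_name)

    # Constant number of queries: a single JOIN fetches the same data.
    rows = conn.execute("""
        SELECT book.title, author.name
        FROM book JOIN author ON author.id = book.author_id
    """).fetchall()
    for title, author_name in rows:
        print(title, author_name)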
I strongly agree here. I wouldn't want someone to implement the "get it done version" of a request, where I am expecting a rushed implementation to be O(n^2), then somehow this dev produces an unmaintainable mess that performs in O(n^2n+api.Google.com*n).
It seems like there is a large middle area of concepts in between low-level algorithms/data structures and high-level system architecture that are left out of many of these interview prep guides:
* Principles and patterns of object-oriented (or functional) design
* Relational (or NoSQL or analytics) database design
* Unit, integration and system testing
* Logging, profiling and debugging
* Source control (e.g. branch/PR/merge flows)
* Deployment and devops
Do these subjects really not come up in some programming interviews?
From my experience, if they do come up, it's only because I was asking them what they used. I don't think I've ever been asked about any of these topics, with the exception of database design.
It's a shame, since these topics could make for a rich conversational interview that probably has more to do with the actual job than whiteboarding an algorithm. Just taking source control as an example, a hypothetical interview question could be something like:
"You've been assigned to implement feature X in our product. Assume that the specs are clear and you have a pretty good idea what code you are going to write. Also, assume that your teammates Joe and Jane have been assigned features Y and Z respectively with roughly the same due date as your feature. Ideally, describe how you would make your changes, test them and coordinate with your teammates to get all three features merged into the master source code."
Obviously a lot of details are missing, but they can be fleshed out in conversation (during which the interviewer should also describe the current process used on the team the candidate is being hired for). This would give both parties some insight into experience and expectations.
What is expected from something like "model a parking garage in code?"
Something like a garage class that has properties like numberOfCars, maxAmountofCars, maxHeight and methods for insertCar(), removeCar(), isGarageFull() ?
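Probably something along those lines, yes. Here is a hedged Python sketch of what an interviewer might accept (all names here are invented for illustration):

    class GarageFull(Exception):
        pass

    class ParkingGarage:
        def __init__(self, capacity, max_height_m):
            self.capacity = capacity
            self.max_height_m = max_height_m
            self.parked = {}                      # licence plate -> spot number
            self.free_spots = list(range(capacity))

        def is_full(self):
            return not self.free_spots

        def insert_car(self, plate, height_m):
            if height_m > self.max_height_m:
                raise ValueError("vehicle too tall for this garage")
            if self.is_full():
                raise GarageFull()
            spot = self.free_spots.pop()
            self.parked[plate] = spot
            return spot

        def remove_car(self, plate):
            spot = self.parked.pop(plate)         # KeyError if the car isn't here
            self.free_spots.append(spot)
            return spot

The interesting part of the exercise is usually not the class itself but the follow-up questions: multiple levels, reserved spots, pricing, concurrency, and so on.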
When did the job interview become a quiz show? I've spent plenty of time on both sides of the interview table. Sure, I've asked problem-solving questions. It was never to play stump-the-candidate or to see if they could come up with some CS proof on the fly. It was to examine their approach to problem solving, and the way they interacted (back-and-forth questions). The right answer never entered into it, and if the interview question has a "right answer" then it is probably a lousy interview question.
I knew a college recruiter at Large Semiconductor Maker. She had been around the company many years, starting out as a mask designer. She knew nothing about engineering, but had worked with engineers on a daily basis for 10 years. She was great as a recruiter for new-grad engineers. One of her favorite questions was: "My back yard is 60 feet wide, and I want a brick fence across it. How many bricks do I need?" You would be surprised how many people never said another word, worked out a numerical answer, and told her the number. boggle. The point of the question is to find out how good the candidate is at uncovering and understanding the customer's wishes and what the customer's vision is of the desired end result.
Somewhere along the line there has evolved a group of people who never learned how to interview job candidates for problem-solving oriented jobs. A quiz-show lottery is a lousy way of finding out if someone can flesh out under-defined problem statements, replace a stupid problem statement with a better one, and creatively explore the dark corners of a solution space.
> If you’re interested in what we’re doing, we’d love you to check out our process.
Well, I say this as someone who was apparently blacklisted on TripleByte despite ostensibly getting a perfect result on the programming tasks: getting an interview would have been a start. Or at least getting told what happened and why, instead of my requesting a pre-screen phone interview over and over, without result, until I "got the message".
I understand the TripleByte team is perceiving problems that exist in the programmer hiring process, including the fact that interviews are not necessarily designed to gauge a candidate's qualities as a programmer. I also understand that you probably can't change the status quo in that space without first establishing yourself firmly in the existing field. But my own experiences make me skeptical about how transparent TB really wants to be.
How I did it for a Microsoft interview: cram for a weekend with a good algorithms text book. I do UI development where 99% of the work is figuring out the UI framework but the study session definitely got me into the right mindset for interviews.
It really makes the whole thing feel like school right? Every time I'm on the job hunt I have to study all the stuff I never actually use while I'm working.
I still think you can learn valuable things and become a better coder this way. Preparing for a Google interview was eye-opening for me, I learnt a ton of things that I immediately started applying day-to-day.
I take issue with that recommendation. You should use whatever language you feel most comfortable with. If it's C, use C. If it's Java, use Java. You don't have the luxury of an IDE or anything like that, so you need to have enough of the language in your head to write a program without looking something up.
I haven't used C professionally in a long, long time, but I naturally gravitate to it when doing technical interviews.
One of the reasons, I think, is that the language itself is "small" enough you can actually hold most of it in your head.
That, and for certain problems it gives you the opportunity to demonstrate understanding of things like pointer arithmetic that you wouldn't have if you used Java. I remember an interviewer at a Java shop being impressed a few years ago when I used C and pointers to reverse a string in place (I wasn't very comfortable with Java back then).
All that said, I'm getting old and cranky and whiteboard interviews are starting to get really annoying.
Another great post from Triplebyte, but I am confused about their model. Why would candidates want to apply to Triplebyte if they still have to go through the companies' full interview process on top of the Triplebyte process?
If you apply to Triplebyte you don't go through the full interview process at the companies we introduce you to. You skip the technical phone screens (most companies do 1 or 2 hour long phone screens before bringing candidates onsite) and go straight to on-sites.
Where we can really save time is in the matching of candidates to companies. Interviewing is tiring, and we find candidates often stop talking to companies they were initially excited about because they're exhausted from interviewing (usually after 4 on-sites) and just want to accept an offer.
We wrote before about how much hiring preferences vary across YC companies (http://blog.triplebyte.com/who-y-combinator-companies-want). By using data we get from the Triplebyte interview, we can send you to the companies where you have the strongest likelihood of passing the technical onsite. The result is getting the best set of offers to choose from, rather than picking from what's available before interview burnout sets in.
I don't really see how you are relevant if I still have to go through a technical interview - it still means the interview process takes way too much time and I am better off finding a way around it.
Usually the only way to circumvent the technical phone screens at a company is if you're coming in through a strong referral. That's great if you already have a strong network, not so much if you don't have personal connections or a resume with the credentials recruiters are trained to look for.
Just focusing on interviewing time, if you're talking to 3 or more companies that's approximately 3 hours of technical phone screens (usually repeating similar problems). With Triplebyte, you interview for 2.5 hours and save 3 (or more as you talk with more companies).
I think the main draw is that Triplebyte will get you past the initial screening mechanisms. This is particularly useful if you can write good code but have no formal education or otherwise cannot put together a resume.
Triplebyte is also great interview practice that happens over the net. You can do a pretty intense (2.5 hour?) interview without missing a day of work if you are on the east coast.
You can schedule the interview by first taking a simple quiz for like 15 minutes and then selecting from a calendar, you don't need to negotiate with a person, go back and forth, etc.
As a junior in university looking for internships this summer, I can attest that going through the programming interview process is a pretty foreign process compared to traditional interviews. I just stumbled through my first programming interview last week.
I believe in the future point 3 will be especially helpful. The hardest parts for me were trying to figure out how appropriate it was for me to be rambling as I coded (something I'm not used to doing at all), and trying to understand what was and wasn't appropriate to ask the interviewers about my code.
Time to brush up on breadth-first search and hash tables!
What I've realized is that interviews are like anything else in life. People will tell you to expect x, y, and z and your own experiences will look like a, b, and c. Interviewing, like a lot of other processes, is something that can be "hacked".
It gets better! After a few interviews you start to get the hang of things and can begin to read the situation and understand what's in your best interest to do. Some interviewers like to test your knowledge of C.S. curriculum like you just mentioned, others prefer a friendly person who isn't afraid to ask questions and be honest about what you can and cannot do, others prefer both.
I definitely will be far more prepared for my next programming interview from the experience I gained going through the process once already. It's just sad that my first interview process had to be with a higher-profile company that would have been a ticket into Silicon Valley. Those interviews don't come easy in the first place. Haven't heard back from anyone else yet, but I trust they'll come along.
You'll look back at it in a year or so and laugh. Personally my resume and interviews were absolutely atrocious (I thought an online resume builder was a good idea..).
I guess it depends: if you are going for a job that REQUIRES these techniques then yes, it is important, but for web application development - even sophisticated web application development - it's not needed.
You should know a number of things on this list if you do back-end webapp development, particularly for large/hairy enterprise stuff. So maybe you are referring to front-end only.
Almost all of those are handled by a standard library, so why bother.
When an issue arises, you look for a book/website and fix the problem.
Source: Doing backend (and some frontend) web stuff for the last 10 years.
When I interview candidates I sit with them for half an hour or so to get to know them. Then I give them a purposely broken, poorly written piece of code which I tell them to pull apart. This proves incredibly effective: even if they miss some of the more obvious errors I can at least point them toward that area and then see if they can spot the problem on their own. There are about 100 different things to talk about, so it really gives me an idea of the level they are at, and also the type of programmer they are: passionate, lazy, smart, meticulous, inexperienced, confident, etc.
Then if I feel they are worth a second interview, I get them back to sit with me and my team for the day to see how they fit in with the team. Then all being well I offer the job.
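For illustration only, here is the kind of deliberately flawed Python snippet such an exercise might use (a hypothetical example, not the actual code the commenter hands out); the comments mark the planted problems.

    def average_order_value(orders, discounts=[]):   # mutable default argument
        total = 0
        for i in range(1, len(orders)):              # off-by-one: silently skips orders[0]
            total += orders[i]["amount"]
        for d in discounts:
            total -= d
        try:
            return total / len(orders)
        except Exception:                            # swallows ZeroDivisionError (and everything else)
            pass                                     # implicitly returns None

Even a short function like this gives a candidate naming, error handling, edge cases, and API design to talk about.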
But they're conducted as if they're pass/fail -- quite often before any real (two-way) discussion can take place.
Imagine if prospective dates gave you a 4-hour take-home test before making eye contact. That's actually the way many employers like to start off the conversation with candidates these days.
Isn't it already? I feel like all of the interview questions come down to things you learned in a Data Structures or Algorithms course. I suppose the design questions aren't necessarily taught in school but presumably you would learn that as you program more.
It's my belief that the best experienced programmers don't have to go through programming interviews. Even if you don't actively make an effort to network, you have a network of people who know you're good enough to outright hire without the whole rigmarole.
Companies who aren't finding people like this are missing out on many of the best.
> Even if you don't actively make an effort to network, you have a network of people who know you're good enough to outright hire without the whole rigmarole.
Not sure why you think this is the case. It is really easy to end up with a worthless network (I managed), and many larger companies insist on forcing every applicant through the HR hiring funnel for compliance reasons.
It's been my whole career. Yeah, I've had to go in and do the grip'n'grin interview and talk to people for a couple hours to make sure I'm not a martian or something. But every job has always come about because someone at the company either knew I was competent (from working with me in the past) or asked someone I knew who told them I was.
I've never had a job that involved whiteboarding or any sort of coding test.
Alternatively, we could come up with a better system for interviewing. Instead of concentrating on being able to remember algorithms and write them out in a completely non-normal way (whiteboard), we could instead give them a very small project to do, then have them come in and explain it, walk someone through extending it, and/or work through a problem together. You'd get real experience seeing how they write their code, work with others, and communicate.
Drilling someone on tree traversal or various Big-O exercises doesn't exactly come up in the real world. Yeah, performance and understanding data structures are important, but that's why you give them something real, see what kind of data structure they come up with and why, and go from there.
Well, at least in my opinion. I've interviewed a lot of people and have been interviewed myself. I unfortunately haven't been able to gather the data to see how effective my method is, BUT I like it a hell of a lot better.
While true, in the end I would expect it to take less time to do a small project and then come in to review and work with it than the typical 1-2 days of interviews. Not sure if it would actually happen that way in practice, I admit.
I'd like to try it and attempt to gather data in either case.
We used to have something called the "recitation class" one hour a week for solving homework problems out loud. I was under the impression these were making a comeback as classes "flipped" to watching the lectures outside of class. People would skip recitation sometimes because they weren't learning new material like in the lectures or doing their problem sets, which took enough time already. If these were made more mandatory somehow, they'd be like interviews.
I'm sorry, but could someone explain who Triplebyte is and why they are so "precious"? This blog post, in my opinion, is symptomatic of this bizarre Silicon Valley interview culture. This whole "how to ace the programming interview" bullshit is really disturbing. Are you looking for people who are good at test taking or people who can produce good product? Yes, CS fundamentals are important - data structures/algorithms - but I'm much more interested in whether a candidate understands the problem space and can reason about it. I want to know about systems they've designed in the recent past and why they made the choices they made. Being able to code up Kruskal's spanning tree algorithm perfectly on a whiteboard while being timed is a neat parlor trick, but if you don't understand the bigger holistic picture of systems I don't think it means that much.
As far as these take home assignments go, I find this a disturbing trend as well. Especially egregious is telling someone there is a time limit on your working for free. If you're going to pay me for my time awesome lets talk, otherwise lets maybe look at some code I've already written.
This industry seems to get more up its own ass every day.
> We help programmers find great jobs at Y Combinator startups. No resumes, just show us you can code.
>
> Our Company
>
> We believe hiring should be about what you can do, not what you say you can do. Our mission is to build the world's best technical hiring process. We don't care where you went to school or which companies you've worked at. We only care if you can code. If you can, we'll do everything we can to find you the best startup to work at.
That mission statement plus the blog post means that they are recruiters for financially unstable companies (aka startups). Makes the blog post that much harder to believe since startups are beggars not choosers when it comes to hiring.
The practice section doesn't mention anything other than the book. Are there any other resources that people use to prep for an interview? Looking for something that tests algorithms and data structures more than solving tricky problems.
I think Cracking the Coding Interview is very popular if you're specifically targeting interview questions. There's also the accompanying website www.careercup.com
I personally also like to do problems on HackerRank, SPOJ, or Code Jam, but they're probably overkill (especially Code Jam), as they're much harder than what you should expect in an interview. Still, I find that having solved harder questions helps my nerves, because the interview questions then seem much simpler, so it might help you too.
I use a combination of Coursera and Udacity to learn the algorithms, and Hacker Rank to find places to implement them.
I actually just cruised through this course and found it super useful. Edit: By cruised I don't mean did it easily but instead just watched the videos, because I was cramming whenever I could fit time in.
My go-to resource for practicing is http://leetcode.com. There are a lot of questions ranging from easy to hard and an online judge to check your solution by running test cases. If I remember right, there's also a discussions section. They hit a lot of classic interview questions, which helps you prepare for the questions that wrap those classic problems in obscurity.
For a refresher on algorithms and data structures, I also like the Harvard CS50 videos up on YouTube. They walk through sorting algorithms and cover the bases of various data structures.
You can choose your languages and it's all wrapped in a nice leveling-up game style. You pass and rank up based on your solutions to predefined community problems, and have a test-driven approach enforced in the editor.
I've been interviewing in the past month as I need to find a new role and it is just crazy.
It is SO random: a lot of useless questions, and small startups with a long hiring process that's harder than the big 4's.
Just one example: I've received an offer from one of the big 4 after going through their process. I was lucky in the questions - things I had studied.
I also applied to around 40 startups / small companies. I made it through to the final on-site interview at only 5 of them. Lots of whiteboarding, silly technical questions that don't map to day-to-day work, etc.
I really think that passing the process at a company like Facebook while not passing at other companies working in much less complex environments says a lot.
Another thing that annoyed me a lot was that in some companies, when I froze on a problem and hit a dead end, instead of trying to help me or give constructive advice they would just keep adding pressure. It's craaazy. You're at a whiteboard, in the position of someone being judged on an absurd on-the-fly problem, and the guy is trying to talk you down instead of helping.
Perhaps the smaller/scrappier startups are trying to have more rigorous processes because small startups don't have as many resources to train new hires as 'big 4' companies do, so they really need engineers who can do it all themselves, quickly and under pressure. They also may be more risk-averse when it comes to false positives, because if your entire engineering team is 5-10 people, you're much less likely to be able to afford to get it wrong and hire someone who won't end up working out.
It is widely believed that if you are able to do deep engineering you should also be able to get the basics right. It takes a lot of time to evaluate a real-world project, and most applicants would not agree to do one anyway, so basic CS is the easy approximation.
What makes implementing qsort/bsearch/etc "the basics"?
It seems rather arbitrary, and it mostly measures how well you are able to recite from CS books.
The weird thing is that some of the "basics" aren't even particularly basic. Finding cycles in a linked list was an open research problem for a while. One can argue that interviewees need to know about the Tortoise and Hare algorithm, but it's crazy to think that someone who hasn't seen it should be able to come up with it on the spot.
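For reference, Floyd's tortoise-and-hare check itself is only a few lines once you know it; a minimal Python sketch:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def has_cycle(head):
        """Floyd's tortoise and hare: O(n) time, O(1) extra space."""
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next            # one step
            fast = fast.next.next       # two steps
            if slow is fast:            # the pointers can only meet inside a cycle
                return True
        return False

The hard part, as the parent says, is inventing the two-pointer idea on the spot, not typing it out.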
They are the stuff you should understand anyway if you build software and are easy enough to figure out in an interview session if you have any analytical skill.
On the other hand, many so-called real world questions measure only how well you are able to recite from API documentation of interviewer's favourite framework or at best the language specification.
A large number of bad things influence interview decisions (credentials, targeted practice, how well you know the specific algorithms that come up again and again in interviews). I hope that more programmers getting better at interviewing skills will help move companies toward measuring actual programming skill.
My credential represents hundreds of hours of programming projects over several years. For that reason alone it is a much better signal than an interview will ever be.
It also establishes depth and breadth of familiarity with a variety of fundamental topics demonstrated through exams and large programming projects: program design, networking, operating systems, security/cryptography, team software engineering practices, algorithmic techniques and analysis, etc.
You fundamentally cannot assess this as well as my alma mater can because my alma mater has 4 years of me working under realistic conditions (laptop, internet, colleagues, deadlines on the order of weeks) and you have, what, 5 hours of me standing at a whiteboard?
Credentials are seriously underrated.
CS programs are not created equal. If you find that candidates from a particular school aren't necessarily competent, then you should value candidates from better, harder schools. If you find that candidates from well-respected schools aren't necessarily competent, then you should respect them less (US News isn't always right) and respect other schools more.
You're not being compared to people who lack hundreds of hours of programming projects. You're being compared to people who have that same experience, either at a worse school, or on their own. Credentials correlate with being good, absolutely. But many, many more people lack credentials than have them. This means that there is a large number of great programmers who don't have them (I think a larger number than who do). There are also bad people with good credentials. Strong filtering on credentials harms companies who miss good programmers, and harms programmers who can't get jobs. Using credentials as one factor among many in a screening step, however, makes good sense (although we don't do this at Triplebyte because we want to force ourselves to get as good as possible at directly measuring skill).
>You're not being compared to people who lack hundreds of hours of programming projects.
I don't think that is quite right. Interviewers claim coding interviews are necessary to weed out the large fraction of our industry that cannot write code at all. If they cannot code, then they did not successfully complete a good undergraduate program's worth of coding projects (they cheated, rode on the coattails of group members, were graded way too easily, or something). Otherwise they can code.
>Strong filtering on credentials harms companies who miss good programmers, and harms programmers who can't get jobs.
On the other hand, strong filtering on whiteboarding unnecessarily filters out people who don't do well under that kind of pressure but may be great programmers given a computer and a more realistic deadline. They also privilege people who have optimized well for whiteboard interviews but may be destructive in the longer term.
Which is why the optimal solution seriously considers both factors. (I'm arguing with you because you called credentials influencing hiring a "bad" thing).
Credentials and prior experience are used in all companies to establish your title, level, salary and position - things that actually matter when you start working.
The coding interview is just a baseline that everyone hired is expected to exceed. In most companies it does not really matter how well you did in the coding interview - the only thing that matters is that you passed it. After that, coding interview results are not really considered when choosing level (other than for junior engineers). Nobody expects a senior engineer to be much faster at coding a simple loop, but they sure should be able to code it.
On the other hand, design and behavioral interview results often have much more influence on the final salary and level.
Your experience doesn't directly translate to what is relevant for the company you are applying to, though. You may _think_ it does based on their general description, but it may not. One of the questions we give has you design a data model to store certain information and then query it out. It's not complex at all, and the candidate is in full control of the design. You'd be surprised how many people with 15 years of experience in senior-level positions are unable to query out the information using a data model they designed themselves.
I guess that's possible, but I think you might also be surprised how much better people are at SQL when they have access to references and the ability to develop iteratively with feedback from a real computer.
Obviously I'm biased because I'm building a company around it (interviewing.io), but I don't think biases around credentialing are going to go away until interviewing is anonymous.
The other stuff is harder to crack. I'd really like to see interviews that are more tightly anchored to real work/layer complexity by building on themselves, etc etc.
A candidate has an impressive STEM-field educational background - say, Bachelor's, Master's, or Ph.D. degrees - has peer-reviewed publications of original research in the STEM fields, including in computer science, has taught computer science at world-famous US research universities, has created original, fast algorithms in computer science, has written successful software for a wide variety of applications in a wide variety of programming languages, and then somehow needs to learn some additional, special lessons on "how to pass a programming interview"?
Is such an interview run by the Queen in Alice in Wonderland, or by someone well qualified in computing? Why the heck the need for special lessons to do what the candidate has been doing successfully for years?
Because passing a programming interview is not what the candidate has been doing for years.
Interviews are artificial, stressful situations which in many cases do not resemble what you will actually be doing at your job.
Like well-known blogger Steve Yegge once commented about Google's interview process, sometimes it gets so bizarre that two interviewers in your queue wouldn't have hired each other! Focusing on the details one interviewer likes will make you lose points with the other. He calls this phenomenon "the interview anti-loop".
I guess being well-drilled about common questions and tricks helps counterbalance some of the pitfalls of the interview process.
So, right, programming interviews are from Alice in Wonderland and not about programming.
Gee, I'm glad I'm programming, for my own startup. Wait while I ask my founder, CEO if my programming is good -- got an answer back right away: my programming is fine!
> So, right, programming interviews are from Alice in Wonderland and not about programming.
I don't understand your irony. No-one is saying that. We're saying that programming interviews are often not ONLY about programming, and unfortunately the parts NOT about programming tend to overshadow the parts that are. Therefore, interviewing requires preparation.
Are you bitter that programming interviews are like this? If so, fine. So am I.
Are you saying that programming interviews are NOT like this and do not require preparation? You are mistaken. It's a fact of the world, whether we like it or not.
Or are you saying you lucked out and didn't have to go through this hell? If so, congrats! It's known to happen. But still, it pays to be prepared for the average case of difficult, stressful interviews.
I'm saying that programming interviews have descended into tea-leaf reading. People with obviously high qualifications are being rejected for no good reason. The situation was not always so.
Apparently the people doing the interviews are more interested in being nasty than in hiring people to get more work done. For this situation to hold, there has to be not much demand and a big supply. So the process is free to descend into totally silly games. It's the Queen in Alice in Wonderland and "Off with their heads".
Academia is not industry programming. So the candidate had highly specialized knowledge and skills, big whoop. Guess how much of my original academic research I use? None. If that candidate wants to use those skills they should go into research.
> The good news is that interviewing is a skill that can be learned.
When I hear folks complain about programming interviews, I point to that.
The month I spend gearing up for coding interviews usually guarantees me a job that offers, at minimum, a $10k raise. I consider that a very good use of my time.
There is a problem (job interviews are selecting incompetent people).
Oh, but there is a solution to work around the problem of job interviews being unable to select good developers: let's avoid investing in being a good developer and just get good at interviews instead.
Problem solved.
Brilliant!
Aren't interviews then kind of de facto selecting for scammers, by giving them an unfair advantage?
Isn't that causing a credibility problem for the profession, and hence for the value of our earnings?
Growth is shrinking, recession is coming. Will they keep people whose value is uncertain when the time comes to get rid of the fat?
> There is a problem (job interviews are selecting incompetent people).
Of the developers you hired, how many nailed the interview process, then went on to be classified as a bad developer?
In my experience hiring candidates, the typical software interviews that I have been a part of tend to produce very few false positives, but they do produce false negatives.
"They eat large sprawling problems for breakfast, but they balk at 45-min algorithm challenges."
Care to guess what we're looking for in screens and interviews?
Some of our best performers interviewed poorly. Likewise, we've had interview aces not pan out. Rather than expecting the world to become interview clones (and making our hiring decisions even more difficult), we're learning how to be better interpreters of people to get to the answer of our real question -- is this person an engineer we'd like to have on our team?
It's still a work-in-progress and we're nowhere near perfect, but we're simply not going to outsource our decision-making to the status quo of technical interviewing in 2016.
Given that none of the most popular dynamic languages managed to implement a good hash table, despite having dozens of halfway-experienced programmers working on them over many years, and that most of those programmers would not be able to write a proper binary search, these questions are certainly too hard for a jobseeker.
Those languages still survived with improper implementations for decades. So will TripleByte.
A good programmer must be able to survive mediocre colleagues and terrible managers. How to check for that in an interview?
While there are several resources to practice the use of algorithms and data structures, I find there aren't as many good resources to prep for the 'system design' interview.
If one wants to switch from a completely different domain - like investment-bank tech or embedded development - there is no way one can have the prior knowledge to tackle the design of a Google Docs-style system.
It seems like something that can only be gained through on-the-job experience.
Is there no hope for newbies?
If you can't talk about design implications quantitatively, and don't have a rigorous understanding of how to build data structures - understanding, not memorization - you can't get mad that I make $100,000 more than you do and work half as much.
If you're a career programmer, and you've never bothered to hone these skills, don't be surprised when you can't easily find work in 10 years. The cheaper guy or girl who comes with less risk will beat you.
My solution has always been pretty simple -- I do poorly in exam interviews, and usually I don't get the job. Regurgitating Algorithms 301 -- nope, not for me. They'll end up with someone who's great at regurgitating. They're happy and I'm happy.
I can explain very clearly how I would go about solving a problem, maybe whiteboard it, flesh out a design, ask them some key questions to show I'm a thinker and not just a follower. That kind of interview usually goes well for me.
As a result, I usually end up with jobs where I have a lot of creative latitude, where I can think up new product ideas and prototype them out, where I can solve problems my own way.
I wish I could pass those Google tests, though. It would be nice to have it all. But some of us just have limitations, I guess. Luckily, it hasn't held me back from having an enjoyable and relatively well compensated career.
Probably today the thing is to have a couple of apps on the Android or Apple appstore, a Github page with some interesting toys and experiments, some open source contributions, and of course the old-fashioned networking that is how many of us still get our best jobs and most successful business relationships.
At my startup, I've had some success hiring mobile devs through "audition programming" on Google Hangout.
I create a "real-world-lite" task like "connect to this simple JSON API I built and implement a recursive product category browser on top of it". I've done this task myself already with a timer and am confident that it will take about an hour to implement. Then I ask the candidate to share their screen and implement it in Xcode while I watch. As they develop, I can get a sense for how they attack problems (quick and dirty, slow and methodical, stack overflow copying, etc), and afterwards I can ask questions about their thought process.
If they did well in the first one, we block out a second one for another hour, and a third for another hour after that, each one testing different skills.
This avoids the time imbalance inherent in take-home projects, because I'm spending just as much time as they are. And it avoids the painful "implement a red-black tree" whiteboard questions by focusing on real-world work in their own dev environment. It also means I have a decent sense of their skills before I ever invite them to an on-site interview.
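For concreteness, here is a rough sketch of the kind of recursive category traversal that task calls for, assuming the JSON has already been parsed into a simple tree type; the type and method names here are illustrative, not from the comment above:

    import java.util.List;

    // Hypothetical parsed form of the category JSON: a name plus child categories.
    record Category(String name, List<Category> children) {}

    class CategoryBrowser {
        // Print the category tree, indenting each level to show nesting.
        static void print(Category category, int depth) {
            System.out.println("  ".repeat(depth) + category.name());
            for (Category child : category.children()) {
                print(child, depth + 1);
            }
        }
    }

The real exercise would presumably also fetch and deserialize the interviewer's JSON API, which is where most of the hour likely goes.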
It very much depends on the gig, but I would add another item to the list: Show a basic understanding of UX design. Companies need people who can mediate between designers and programmers. While demonstrating an understanding of Photoshop and Illustrator says nothing about your skills as a programmer, it could be the thing that makes you stand out from the crowd.
We've not seen very much focus on design in interviews (very few companies talk about this when telling us why they like/dislike candidates). We do see a lot of focus on interest in what the company does. I think that pitching design skills as a passion for making the company's product better is the best way to go.
But "an understanding of Photoshop and Illustrator" is to "a basic understanding of UX design" as "an understanding of vi or emacs" is to "a basic understanding of distributed application architecture"
The only reason I mentioned Photoshop and Illustrator is because they're easy things to put on a resume. "Knowledge of design principles" is a little more amorphous.
To pass a sane interview with sane people, focus your mind on the "how to be a good programmer" question - programming as a cooperative human endeavor.
Of course, a bit of craziness may be involved and they'll ask ridiculous little or big problems, but someone with a reasonable amount of experience can answer those.
Or it may be that a lot of craziness is involved: things veer into bit-twiddling assembly, top management steps in unannounced to shoot random questions, suddenly syntax or whether you are "server side oriented" or whatever matters a lot - "we want the absolute best programmers on the market and we cement their loyalty by paying well under market rates..." etc.
Now, the more craziness appears, the less you'll actually want the job. But markets being what they are, you may need the job. In the end, there are no easy answers. Keeping calm is probably the main advice.
While not strictly programming, I just finished interviews number four and five with a company about an hour ago: SQL and data modeling. First, one non-technical phone interview, one technical phone interview, and one online take-home test. Today, two different sessions (the second one primarily technical) and lunch.
Whiteboarding a simple data model of a real-world scenario was surprisingly difficult even though the interviewers were very cordial. The exercise was used to gauge my question-asking skills (the interviewer was acting as a subject matter expert) as much as my data modeling and SQL. It was kind of fun, as I used it as an opportunity to expand my ability to problem-solve in stressful environments.
The company says:
"We help programmers find great jobs at Y Combinator startups."
What's so great in Y Combinator startups? I mean why do they narrow only to such startups? Wouldn't it be better to just say they help with finding jobs in startups?
I had an interview some years ago that demanded a difficult CS problem be solved, and besides coding, the offer stated that you should happily accept helping with the IT needs of the marketing guys: installing their antivirus, email...
I had a bunch of interviews in my life and every one had its own "special sauce"...
My first one was with a Java company and the hiring manager wanted me to draw UML diagrams, which was the only thing that I learned in the software engineering course in university.
Another one was about programming a game. Nothing much, but it needed a few 2D transformations I knew nothing about, so I failed miserably.
Most jobs I got were just "talks" about what I can do.
What's the least stressful way to pass a programming interview?
Not to have one at all!
This obviously doesn't apply to people just starting out, but I've found the easiest way to get a job is to have worked with someone at the company in a similar role. Many of the issues that interviews are designed to highlight (attitude, flexibility, stick-to-it-ivness, culture fit) simply are non issues if you have someone on the inside who has experience with you.
If you have an unbounded abundance of good candidates, it is a different story than when you are a new startup fighting for talent.
At highly targeted companies such as Google, Facebook et al, I'm sure that if they have a dry spell of good candidates in a given month (can't think of a reason why), then they revert to things like: "We don't care if you don't get the 'trick' immediately, we'll give you hints" and "we just want to see how you think and how you code" and "just talk through the problem" and "you should not learn specific problems, and if you see one you know, just tell your interviewer" or "Cracking the Coding Interview-type questions are banned".
But the reality is that people who apply to Google (or Apple or Amazon or Facebook or Microsoft...) are very smart and want to work there very much, so while they could probably do well without preparation, they take no chances, because they will be competing with all of this year's new Stanford / MIT / CMU graduates for a limited number of positions. I have a friend who has a master's degree from a target school, and it took him 4 attempts to get into Google.
He is smart, and he probably did well in the interviews, but you are being compared to others, so until he went and worked on those pretty useless skills - practicing writing fast on a whiteboard, getting interview books and practicing tricky problems, doing a lot of online-judge problems - he didn't get in. Why? Because if you have two candidates, both smart, one of whom has practiced whiteboard coding on all of the problem sets on geeksforgeeks / careercup / glassdoor and one who hasn't, then even if both are presented with a new problem, chances are it will be a variant of one of those "usual suspects". E.g., after I solved the famous water trapping problem (a tough one if you don't get hints), the idea for the largest water container problem just pops to mind, and if you know that you can find the only non-duplicate number in a list with O(1) memory, then the problem of finding whether an unsorted list of numbers forms an arithmetic series with just O(1) memory is practically the same trick.
So think of two developers, both are awesome, both know CS and both are fast coders.
One practiced whiteboard coding and knows the XOR trick for duplicate numbers; their code for such a question will be written in 1 minute:
    public int findDupe(int[] nums) {
        int dup = 0;
        for (int num : nums) {
            dup ^= num;
        }
        return dup;
    }
The other guy, who hasn't seen these kinds of problems, will probably use a hashmap and twice as many lines for the same problem.
Both are O(N) time and O(1) memory, but the hashmap guy might accidentally say it's O(N) memory (a common mistake for frequency maps).
Bottom line, both are good candidates, and the only reason the first one thought of the XOR solution in 1 minute without a hint is that they either saw it before (it's not that rare) or they're a real genius (statistically less likely, but still possible).
If you don't have enough good candidates, you might have the time and energy to really see which one of these will perform better at work using work related questions other than tricks like this.
But if you have unlimited good candidates coming in, the one that will solve it in 2 minutes will be the one that will stand out from the crowd, there is simply no other good way to filter out so many people. I'm sure they have tons of false negatives. (and probably also a few false positives, but I doubt it's too many)
So the system is broken, but also SATs and GREs are broken. Popular schools, popular jobs, will have to put filtering systems that are not only directly related to the ability to do the job. Someone at Google is simply writing CRUD apps for a living all day, I'm sure. But I'm sure his interview tested him on a much harder set of problems.
>But the hashmap guy might accidentally say it's O(N) memory (common mistake for frequency maps)
Wait why is it O(1) memory for the frequency map? As you keep adding elements doesn't the hashmap have to resize to prevent too many hash collisions?
> finding if an unsorted list of numbers is an arithmetic series with just O(1) memory
Is the strategy to solve this to first find the common difference `d` with one pass through the array (by finding the largest and smallest) and then sweeping through one more time xor-ing each element a[i] with a[i] + d and checking if the result is equal to the (minimum) xor (maximum + d)?
> >But the hashmap guy might accidentally say it's O(N) memory (common mistake for frequency maps)
> Wait why is it O(1) memory for the frequency map? As you keep adding elements doesn't the hashmap have to resize to prevent too many hash collisions?
Presumably because you'll have a constant number of keys (I'm not sure what the exact problem he's referring to is).
The problem was to find the only non duplicate number in a list.
I'm not sure how you would have a constant number of keys. I mean, I guess if you consider a worst-case hash table with a bucket for each integer, you would have 2^32 keys (which is technically O(1) space, since the size remains fixed regardless of list length).
But using Big-O in this case is clearly disingenuous, since the space allocated is far, far more than for the XOR solution.
My bad, I meant a variant of the question that uses chars and not numbers. With numbers the hashmap will still be constant memory, just like you said, but it's a big constant (2^32) - still O(1) memory. This is because the input is ints, and each key or value can only be one of ~2^32 numbers. And since we only count the number, we don't need a map; we can just use a set (a map with the boolean value true if the element is in the set): we add to the set and remove when we see the element again, and the only element left is our non-duplicate item. The max we can have is sizeof(int)/2 - 1 duplicate numbers plus 1 non-duplicate, so memory can't be more than a constant.
In the char variant, the worst-case number of keys in your hashmap is the total number of chars in Unicode. You can't have more than that no matter how big the input is.
So both cases are a very, very big constant, but still a constant.
Time complexity, however, is theoretically unbounded, as you can have any size of input - but again, this is limited by the max size of an array, so TECHNICALLY the time is bounded by the max Java array length, at most ~2^32, as well.
But I would not risk saying that in an interview: O(1) memory will pass, while saying O(1) time because the size will never be longer than the max array length sounds more risky.
If it's an int though, it's O(1) time and O(1) memory if you really want to be technical. So are all interview questions involving an array of integers, I guess :)
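For what it's worth, a minimal sketch of the set-based approach described above (toggle membership so paired elements cancel), assuming the char variant of the problem; the class and method names are illustrative:

    import java.util.HashSet;
    import java.util.Set;

    class SingleOut {
        // Returns the only char that appears an odd number of times (assumes one exists).
        static char findNonDuplicate(char[] chars) {
            Set<Character> seen = new HashSet<>();
            for (char c : chars) {
                if (!seen.remove(c)) {  // remove() returns false if c wasn't in the set
                    seen.add(c);        // first sighting: add it; paired sightings cancel out
                }
            }
            return seen.iterator().next();  // the lone unpaired element
        }
    }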
And what was the intended solution for finding if an unsorted list of numbers is an arithmetic series in constant memory? You say it's the same xor trick, but I don't see how it's applicable. Do you xor all the values with all the values shifted over by the common difference?
"arithmetic series" and "unsorted list" are mutually exclusive statements. Unless you mean something like "the sorted version is an arithmetic series".
This only works if the numbers are in a known range (say sequential from 1 to 100), and you XOR in the index (plus 1) as well. Then each number is XOR'd two times, except the duplicate, which is XOR'd 3 times (and thus remains at the end). The fact that the code is wrong shows why this question is a very bad interview question.
EDIT
The given code works to find the only non-duplicate item in a list (perhaps that was what was intended)
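If it helps, here is one standard way to realize the duplicate-finding variant with XOR, assuming the array contains every value 1..n exactly once plus one extra copy of some value (a slightly different setup from the one sketched in the comment above; the name is illustrative):

    // Find the one duplicated value in an array holding 1..n plus one repeat.
    public static int findDuplicateInRange(int[] nums) {
        int n = nums.length - 1;   // the array has n + 1 elements
        int acc = 0;
        for (int num : nums) {
            acc ^= num;            // XOR in every element
        }
        for (int i = 1; i <= n; i++) {
            acc ^= i;              // XOR in every expected value once
        }
        return acc;                // everything cancels except the duplicate
    }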
Yes, my wife was waiting on me for lunch so I made tons of typos. Yes, I meant find the only non-duplicate number, and I meant O(1) memory, O(N) time complexity.
Recruiting is F*&^%$
No matter what you do to the interview process
Hiring managers are looking for @#$#%
and they may be #%@%@ and
their company may be $@%@
and their team dynamics are ##%#
Candidates are looking for $@#!@
and they may be $@@$ and their
personal situation is $$@@ and
their salary expectation is ##%@@
This is a very good piece of advice; it matches my past job-seeking experience really well. I think I got most of my jobs largely thanks to the enthusiasm I had for the stuff the companies were doing. It was genuine, but I'm sure it could be pretended as well. Everything else is spot on, very honest stuff.
I think it's also important to mention that you should come in with some good questions of your own that you've prepared for them. A lot of times the interviewer uses this to see how much homework you've done on both the company and the job you're applying for.
I'm pretty sure I could never get an engineering job at companies that interview like this. I've shipped tons of real world products over the last 20 years, founded companies, and just reading about what it is like to interview scares the shit out of me. I'm a firm believer in proving real world skills, not silly/irrelevant high pressure short time domain stuff.
Safe to say, we don't interview with whiteboard hazing. We do a phone screen mainly for personality and to see if the candidate deeply knows about a project they recently worked on. We dig in a bit there and look for the spark of passion. Then, if we are moving forward, we give a take-home project in a private git repo that is relevant to the role and tell them to spend 4-8 hours on it depending on their availability and ask them to time it and be honest. They choose the scope of work they want to accomplish. If the code looks reasonable we have a video conference where we do a code review and dive deep and ask "why" a lot.
We have been really surprised at the difference in quality of these take home projects. Some people can barely get started and struggle to produce anything. Others build full on, useful, applications.
Obviously we are screening for people that are self-starters, so being able to choose scope and regulate and make larger decisions is important to us.
An example project for a full-stack C# developer:
Feel free to search around and work on the challenges as if you were on the job. Let me know when you think you can have this stuff done by. Please do this on your own time, with your own equipment and tools, and not your employer's. We don't want any lawsuits or IP ownership questions. Also, for the code, you can retain copyright if it's something you'd like to publish on GitHub, etc.
Clever Code
Can you send me a code snippet in the language of your choice of something you have done that solves a hard problem in an elegant way?
For instance, here's one of my snippets from a few years back. It solves the complex problem of transactional optimistic concurrency control using ~30 lines of code. The usage of lambda expressions in C# and generics makes it succinct and expressive.
Production Backend Question
Let's say I have a very popular consumer facing service with 10 load balanced front end web servers running the latest asp.net mvc and averaging 100 simultaneous requests each, appropriately sized ms-sql servers, and a heavily used memcached cluster of 4. Everything is running great until we do some capacity planning and double the number of web servers to 20 and experience a 25% increase in simultaneous requests. Suddenly load shoots up on the ms-sql tier with far more transactions per web request than typical, overwhelming a ms-sql cluster that should have handled twice as much web traffic. Web server requests start timing out and throwing 500 errors to the clients, and the system practically comes to a halt. There were no code changes. Describe how you would troubleshoot this and what you think might be a few likely causes.
Failure
Tell me about a project you have worked on that failed. What was your role? Why did it fail?
The Challenge
This is meant to be a practical exercise, and is representative of the types of challenges we face. We want to see if you can make reasonable product choices under time pressure as well as write code. You get a lot of ownership here. Please use any frameworks/services/etc you want. We suggest using something you know very well. Feel free to search Google, go to the library, call a friend, or whatever you need for research. Please write all the code/copy/etc yourself. The output can run on whatever OS or PaaS/SaaS provider(s) you want, but it should be something we can also easily run. Please try to limit this to 4-8 hours of your time.
Create a web site with the following characteristics:
Make the hackathon/minimal/proof-of-concept version of some popular consumer web app that you think could use some love. Craigslist, eBay, reddit, whatever...?
Site must be responsive, gracefully scaling down to smartphone resolutions and up to full screen desktop displays. It does not need to be beautiful.
Supports all the way down to IE8, with appropriate feature degradation as needed.
Lean toward implementing things in-browser rather than in back end code.
Bonus points if you do a Show HN and your implementation makes it to page 1 on hacker news.
tl;dr -- current programming interview format should be drastically changed.
I have interviewed 100+ candidates so far. I'm doing interviews less and less recently, simply because I found those coding/design questions less meaningful for judging how good a candidate is.
With websites like leetcode.com/lintcode.com that collect interview questions and provide an online judge, all you need is to put in enough time practicing. Ten years ago we used "reverse a linked-list"; nowadays we may use questions like "topo-sorting". The latter is significantly harder than the former - but if the candidate has seen the solution beforehand, it's actually easier to implement.
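As a point of reference, the "reverse a linked-list" warm-up mentioned above is roughly this much code once you've seen it; the Node type here is a hypothetical minimal one, not from the comment:

    class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    class ListUtil {
        // Iteratively reverse a singly linked list and return the new head.
        static Node reverse(Node head) {
            Node prev = null;
            while (head != null) {
                Node next = head.next;  // remember the rest of the list
                head.next = prev;       // point the current node backwards
                prev = head;
                head = next;
            }
            return prev;
        }
    }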
I look forward to seeing more articles that find ways to game the interviewing process. Perhaps showing interviewers how easily their hiring process could be gamed will 'inspire' improvements in interviewing.
Good article.
Anyway, I am also disappointed with many interview processes.
Sometimes I think there are courses or books about how to hire that spread whatever particular concept or practice happens to be common in a given period.
"Line up offers" only works if you already have a job, I suppose. If you're unemployed, you generally have to take the first offer you get, or you lose your unemployment benefits.
"You need to be able to write a BFS cold, and you need to understand how a hash table is implemented."
Great advice! The list provided in this blog post is an excellent description of what you should know, cold, before you go into an interview. The reason you need to know them "cold" is that you (probably) won't be simply asked to code up mergesort. Instead, you'll be presented with a problem that can be reduced to mergesort. You need to know it cold so that you can reason more abstractly with it.
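For anyone calibrating what "cold" means here, a BFS is about this much code; this is a minimal sketch over an adjacency-list graph in Java (to match the snippet elsewhere in the thread), with illustrative names:

    import java.util.*;

    class Graphs {
        // Breadth-first search from start; returns every reachable node.
        static Set<Integer> bfs(Map<Integer, List<Integer>> adjacency, int start) {
            Set<Integer> visited = new HashSet<>();
            Deque<Integer> queue = new ArrayDeque<>();
            visited.add(start);
            queue.add(start);
            while (!queue.isEmpty()) {
                int node = queue.poll();
                for (int next : adjacency.getOrDefault(node, List.of())) {
                    if (visited.add(next)) {  // add() returns false if already visited
                        queue.add(next);
                    }
                }
            }
            return visited;
        }
    }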
While this is great advice, it also demonstrates why people eventually develop interview fatigue over a career. I'm not talking about fatigue from your third interview this week; I mean fatigue that sets in over decades.
See, a year ago, just before I interviewed at Google, I could have done all this "cold". I could code up a BFS, mergesort, find the shortest path between two nodes, print all permutations of a set, and so forth. Cold. And you know, I think in many knowledge-intensive fields, most practitioners are required to do stuff like this cold. But I probably wouldn't be able to do it all cold now. I could figure it out, but not in 45 minutes at a whiteboard, and certainly not in time to reason abstractly. I wouldn't be able to do this with partial differential equations or Shakespeare's plays, two other subjects I was highly prepared for exams in a couple of decades ago.
See, people in other professions have to do this, but they do it once. Actuaries need to know vector calc and linear algebra, cold, to take their exams. But they don't have to remember how to integrate by parts when they are interviewing for a Sr Actuary position 15 years later. Physicians need to know organic chemistry, cold, at some point in their lives. But an experienced anesthesiologist isn't expected to answer whiteboard questions about undergraduate oChem.
I don't have an easy solution, since I actually do completely understand why tech employers rely on these exams. But I do think they take a serious toll on the field, and are a major contributing factor to attrition (as well as aversion among people who never go into the field at all). We, as developers, really do have to re-load complicated undergraduate coursework into exam ready memory over, and over, and over.
I'll finish the way I always do: if you interview like this, that is your choice, and you should feel free to do so - I really mean this. But why then do these employers act mystified that there is a "shortage" of developers? It seems to me that aversion and/or attrition is a very natural outcome of the way we do things in software. "No thanks, I'll do something else" seems like a very reasonable response to an industry that hires like this.
At the risk of repeating myself what I've said elsewhere on the page:
This method of interviewing has been around for a long time, and it is going to be around for the foreseeable future. Nobody loves it, including the interviewers, but there just isn't a better way to do it at any sort of scale - especially when there are much bigger problems to solve when you're running a business. And a lot of other reasons.
It's best to take the bull by the horns. I run http://InterviewKickstart.com, which is a bootcamp for preparing for such technical interviews. We do almost exactly what is in the blog post. It works. Spectacularly well.
You've copy-pasted the exact same text several times in this thread already. Your worry about repeating yourself is justified. This seems like blatant advertising to me.
If you're given a take-home project and you take a lot of time on it, that's usually a signal that we shouldn't hire you. We're not using you for free work, and if you think your take-home project is assigned in that vein, then you're probably not qualified for the job. It's great personal work ethic when you tell us you worked super hard and spent a week on it, but we give you the expected time so you can filter yourself out. Also, shame on you, because it makes us feel shitty to have to reject you after that.
Well, technical interview questions are why I've given up on pursuing a programming career path, and I was shocked that employers were still giving out technical interview questions for a product manager role; it didn't matter that I was a software dev years ago. The rote method still trumps real-world experience building real commercial software that people will pay you for, according to a lot of interviewers... and I've heard about a few managers who hired neophiles based on their 'algorithmic' performance on a whiteboard, watched them build shit on Node.js, panic when it turned out to be far more work than necessary, and pull the plug on their CMS (reinventing wheels) because PHP is 'slow'... what the fuck did I just hear, you want to rebuild a static CMS website in Node.js because you think it's going to give you wings?
well at least that has been my experience so far trying to get a job and I'm coming up dry every time. I technically have no work experience because I've been holed up writing a big data mining SaaS tool for a few years and since I was self employed it seems to mean jack all for credentials.
I don't know I'm in a bit of a jam. Starting a complex SaaS product from scratch that thousands of people have used is simply useless against a fucking sorting algorithm that will be used heavily on the actual product.
Like I feel like I'm living in a bizarro world sometimes...I have all this experience and knowledge in this one area, building shit and getting people to pay for it, and it's going to waste as I'm half heartedly applying for jobs I know I will not be able to pass the second round of interviews when the technical algorithm questions begin....I'm sure if I wanted to learn more about the different variety of sorting algorithm I would've fucking consulted stackoverflow already....come on man I just wanna solve real world problems with real world product experience not write fucking code on a whiteboard. I'd be happy to architect out an entire stack powering your product in to the future on a white board but fuck man if you want help on your sorting algorithm just google stackoverflow.
You come off as a technophobe. As an employer, how am I supposed to quickly judge your independent work if you hate algorithms and new tech? I'm not going to pore through a repo; I don't get paid for that or care, to be honest.
If you're talking about PHP being better than JavaScript in a startup interview, you're not going to be hired most likely. Startups typically use new technology, and you clearly don't.
Is it weird that I find this entire blogpost eerily similar to PUA strategies?
I think it ultimately boils down to confidence. To use the dating analogy, nothing drops the proverbial panties (or boxers, if you're into that sort of thing) faster than having confidence. Being physically attractive (i.e. fit and in shape) doesn't hurt either.
Take my analysis with a grain of salt, though. I happen to be hilariously bad at interviewing, and don't get me started on dating.
And your advice is eerily similar to the non-advice people give guys who aren't naturally good with women. "Just be yourself" or "Just be confident."
Those statements aren't advice; they are platitudes. And some people - even if those who are naturally good at it don't understand this - need actionable advice about exactly what to do.
Ummm, this should be titled 'how to pass an interview', period. Most of this advice applies well outside of programming and reads like any of 1000 self-help books of the past many decades. Sure there are a few specifics to coding but all the themes are as old as the hills.
The root comment wasn't off-topic, but it wasn't helpful either, because of the snarky second sentence. It's typical for such a comment to nudge the thread in the wrong direction.
I think they really downplay how much panic/anxiety these kinds of closed rooms with no ventilation, full of solemn-looking people judging your every move, can induce.
I could never calm down and think, but the interviewers wanted to test how you would code on a whiteboard under duress.
Have you ever been waterboarded? Putting waterboarding on the same level as whiteboard coding interviews is minimizing the horrific experience of those who have been waterboarded.
Not many people have been waterboarded, and most were terrorists, so it's not the same as a holocaust joke. In a world where we're not allowed to laugh at anything connected to some kind of injustice, then yes, that wasn't funny.
Putting waterboarding on the same level as whiteboard coding interviews is minimizing the horrific experience of those who have been waterboarded.
I guess it is, if you read it literally. But I don't think people mean it literally when they joke about it, nor are they making a serious comparison between the two experiences.
OK sure, but this conversation is figurative, not literal. It would be horribly wrong to literally put whiteboarding on the same level as waterboarding. I don't see any problem with using figurative language and expressive metaphors to help convey a particular emotion felt about something that is common to the lives of those participating in the conversation.
It's good to see how people handle stress. You can weed out a lot of crybabies by analyzing their performance under pressure, regardless of whether they produce the "right answer".
Social performance stress is not technical performance stress. You can weed out a lot of undesirable work environments by seeing which potential employers confuse them.
Is this seriously what interviews are trying to discover? In what practical situation would this skill manifest - a meteor is about to hit the Earth, and you need to write some algorithm to stop it RIGHT NOW?
Or is it "we need to know how you handle the stress of critical bugs that are existential threats to our client relationships", which shouldn't exist in the first place if the company is run well? If yes then I guess you know what I'd say next...
Maybe - but I've also seen folks who ace that sort of thing be the worst engineers for getting real work done "under pressure", and fiddle around with reinventing the wheel and academic debates about trivial details too.
Thankfully I'm not a boss. But when I go to an interview and they put me under pressure, I don't automatically discount them because they put me in an uncomfortable situation. I assume they're gauging my mettle.
Rote memorization of a guide or following a "script" is not the same as qualifying for a position.
A sufficiently experienced and knowledgeable person will be doing things similar to what's in the guide because they've learned to do so through experience, not because they read it in a silly guide.
People seeking out the guides are typically the same ones who don't have the experience to back up what the guide tells them to do.
I'd guess very few people use every one of these things on a day-to-day basis and in my experience the bigger companies actually encourage you to brush up on algorithms before interviews with links to resources you can use. If you're really hopeless it's not like two weeks of study are going to teach you everything you need to know.
I hated this article. A good technical interview reveals an aptitude for programming or a lack of same, and can distinguish a true aptitude from an ability to fake it. I've been interviewing programmers for a very long time and I'm pretty good at avoiding "false positive" results with a few straightforward questions.
If you have aptitude and talent, brush up on your algorithms and try to have fun with the interview. If you don't, then I'm sorry, but maybe you'd be happier doing something else.
So, you may be right. Perhaps you're great at telling good programmers from bad. But almost no one does the analysis to really know if this is true (false negatives are the huge unknown). And most interviewers are not as good as you. The consistency between interviewers at the same company is low (we measure this). The consistency between different companies is low. We (interviewers) can do better than this!
I make no claims about false negative results. But I know that people that I've recommended hiring have basic analytical skills, can correctly code a Boolean predicate with a few connectives in it, and aren't confused by linked data structures. It's not a high bar that I set, and that's kind of my point: ask at least some low-bar questions! You can debate all day about how well candidates have done with the hard questions, but it's pretty easy to know what to do with those who can't cope with something like "do two (x,y,width,height) rectangles intersect or not?".
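As a rough illustration of the kind of low-bar predicate being described (axis-aligned rectangles given as (x, y, width, height)); the class and method names are mine, not the commenter's:

    class Geometry {
        // Two axis-aligned rectangles overlap unless one lies entirely
        // to the left of, or entirely above, the other.
        static boolean intersects(int x1, int y1, int w1, int h1,
                                  int x2, int y2, int w2, int h2) {
            return x1 < x2 + w2 && x2 < x1 + w1
                && y1 < y2 + h2 && y2 < y1 + h1;
        }
    }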
Care to mention these questions? Most companies are suffering HARD from false positives. I'm sure a lot of people would love to know what's working for you.
Well, I'm not going to spill my current repertoire, but in short it's all about "programming in the small". Can you construct a precise Boolean predicate to test for a well-defined condition? Do you "get" pointers and recursion? Find sample problems in your own work.
Not having any does not necessarily mean you are good at avoiding them. It may mean you are lucky, or the false positives weren't there to be had in the first place, or some other reason is keeping false positives out of your company.