How to Pass a Programming Interview (triplebyte.com)
1020 points by runesoerensen on Mar 8, 2016 | 552 comments



    Being a good programmer has a surprisingly small role in passing programming 
    interviews.
And that just says it all, doesn't it? I agree that interviews should test candidates on certain basic skills, including (time/space) complexity analysis. But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm (as I was asked to do by an interviewer yesterday)? What does the candidate's ability to code a palindrome checker that can handle punctuation and spaces tell you about their ability to deliver robust, maintainable code?
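For reference, the palindrome exercise amounts to roughly this (a minimal Python sketch; the exact spec in that interview may well have differed):

  def is_palindrome(text):
      """True if text reads the same forwards and backwards,
      ignoring punctuation, spaces, and letter case."""
      cleaned = [c.lower() for c in text if c.isalnum()]
      return cleaned == cleaned[::-1]

  # e.g. is_palindrome("A man, a plan, a canal: Panama!") -> True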

I don't have the answer, but I just don't see how the questions typically asked in programming interviews give you a good picture of the candidate's actual programming ability. I much prefer "homework" projects, even if they involve me working "for free", because I feel like they ask for actual programming skills rather than the "guess the algorithm" lottery of phone screens and whiteboard coding.


I personally also prefer take-home projects over being grilled on my ability to solve obscure algorithms problems under pressure.

However, this second route also comes with a number of issues, the most annoying of which, in my experience, is the amount of time each interview requires from the candidate.

At least in the traditional technical interview, interviewers and candidates tend to be roughly equally invested in the interview process in terms of time spent, so interviewers have to value candidates' time because failure to do so will end up wasting their own time as well. In take-home projects, this balance in time investment completely breaks down, and thus there's very little left to discourage interviewers from issuing ridiculously time-consuming projects.

I've done a few of these in the past, mostly to get some practical experience with a new framework and ecosystem I'm not too familiar with yet, but in the future I think I'll likely just politely decline any project that looks unreasonably time-consuming, or at least try asking for certain adjustments to the spec first.


I once got a take-home test that took all Saturday. I finally finished it, happy with myself and my work, and emailed it in. The response was "Thanks, are you ready for the next exercise?" I was like... um... I really need to get on with my weekend. This got deferred to the big boss, who said "It's OK, just do it whenever you have time next." So that was my next Saturday. The worst part is that after turning that in, and I'm fairly certain both were done really well, I never heard a word back from them.

I was frustrated as hell, but you know, that's all part of the job. Sometimes you waste a little time while you look for work. Look at other industries, like acting: actors can easily spend a day going to auditions, waiting in lines, preparing for characters, and never get a callback. My dad is in construction, and to make an appraisal he has to study the house and burn a ton of time on crazy calculations, mapping out exactly how much area there is and how much siding it will need, considering the weird shapes around the chimney and how many hours it will take to cover it. You want to give yourself some padding, but then he most likely won't get the job because the owner will weigh it against other "free" appraisals and take the lowest bidder. If he goes too low, he's risking working extra days for free or earning very little on the job. So doing all that free work is pretty much part of his job.

As a hiring manager, I try to waste as little time as possible, but at the same time I feel like a lot of programmers are spoiled, quick to complain about their time, and pretty much expect to just get hired after a two-hour interview. This is someone you're potentially hiring for years (and any mistakes are hard to fix later), so yes, some time will be wasted on both sides to try to ensure a good fit.


As somebody who regularly employs developers, and being a developer myself, may I suggest you put your Saturday efforts into an open source GitHub repo or something similar? Sure, this takes time too, but afterwards you can just point your prospective employer to your superbly styled and documented code on GitHub. Both parties win: you don't have to spend time on stupid exercises anymore, and they can get a very decent impression of your coding skills.

[Edit: corrected typo]


I was thinking about posting some of my solutions to past problems on GitHub. The problem is that a new employer has no way of knowing it was me alone who produced the solution, or how long it took, so I couldn't blame them for not just accepting my public repo as proof of my ability.

For example, I had one interview where I struggled to complete a WPF test project simply because it was coded in a way I'd never worked before, using controls I'd never used, and the test was estimated to take 10 minutes. Meanwhile, in my bag on my laptop, I had a complex WPF solution I'd been working on in my free time for months as a possible product to market and sell, which more than showed my competence with the platform. The people in my interview didn't even want to see it, on the pretext that they had no way of knowing it was my code.


It's just that with the programming tasks that you bring home, the hiring manager isn't wasting that time with you. With programming tasks that you do during the interview, they are at least wasting the time with you.


Or, like when I interviewed once at a very large broadcasting org: they put you in a meeting room with a laptop (Windows 7, which I had never used before, bad keyboard, bad editor, no internet or reference), give you some half-arsed questions, and piss off for 30 minutes without you being able to ask what they actually want. Then 10 minutes later someone else comes in and asks you what the hell you're doing in that meeting room; you have no way to contact your interview partner, no idea where he went; they hassle you for about 10 minutes about it, and when your interviewer comes back you haven't been able to even get into the task, let alone write some code...

Waste of time also, especially if you factor in a 2 hour commute each way.


+this ... it's exactly why, when I do "favors" for people, my requirement is that they're in the chair next to me... even if they aren't doing anything and are bored as sin... so they understand the time/effort that it takes.

It's generally inappropriate for someone to ask you to do more than a day of work for a take-home hiring exercise.


Simply because it is quite difficult to explain the concept in writing.

They might have a different point of view as to the quality of your work. Or they may have taken other factors into consideration (your portfolio, communication style, etc.).

But not contacting you with any kind of a status afterward is definitely shabby, on their part.

And asking you to do a second assignment without taking a moment to evaluate your first (and apparently without bothering to tell you in advance that not one, but two or more "quick" assignments would need to be done), all the more so.


Add them to your portfolio and showcase them. Not a waste.


You can also recycle components of coding exercises. For instance, in a project with a front-end visual component, it's no longer just bare-bones functionality: you've implemented fonts, networking layers, and data models, and co-opted some design patterns from one company's UX person (but changed the assets) during that interview.


> there's very little left to discourage interviewers from issuing ridiculously time-consuming projects

There's also no disincentive for interviewees to spend an unreasonable amount of time on the project. So the test is biased against employed people and/or people with kids.

This can be easily countered though. Send out the assignment at a predetermined, convenient time and require it be returned an hour or two later.


> Send out the assignment at a predetermined, convenient time and require it be returned an hour or two later.

Except that these places very frequently tend to either (1) misstate the problem in some major or minor way, or (2) wildly underestimate the time required to produce a professional quality, bug-free, bulletproof-tested solution. Which can be easily countered by having one of their own team members sit down and take the test first. But of course, none of these places ever do that.

(Well, not "none." I'm being hyperbolic. I just mean that in general, they probably estimate the round-trip time on these programming quizzes the way they do micro-projects on their own jobs -- as in, "Oh, I can do that in an hour" -- but in real life, it often takes 2x-3x longer).


Yes, I got one recently:

Pull down this data from the Instagram API and create a tag cloud. Should only take a couple of hours.

Except working out how to register and authenticate with Instagram's API took me over two hours, then after faffing about with it I realized I only had access to some sandboxed version that returned metadata and not the actual data I was looking for.

The task would probably only take a couple of hours if the whole environment were already set up, but the setup was the problem.


I have never used the Instagram API and had to look up what a tag cloud is just now. I can immediately say this will take more than 'a couple of hours': whenever I work with a new API, it takes time to set up the environment, digest how it works, and research the appropriate calls I need to use. I am not surprised by your experience. It actually makes me angry that someone thinks a reasonable person could do that in two hours with the background I have described. Yes, two hours if the environment is already set up to make calls to the API and perhaps you have some familiarity with it -- so I can see why the person who asked the question would think 'I could do it in two hours, on my MacBook, where the environment is already set up and where I know exactly what API is required and what the data format in the response looks like.'


This seems to happen on around 50% of the take home tests that I have been given. Funnily enough I only take the time to do 50% of them.

HackerRank is even worse. They have strange ways of wording the questions. I have to Google around to work out how to use their input and output (I work with databases all day long, not reading and writing to STDIN/STDOUT). There's no step-through debugger (which would make a lot of sense for the algorithmic-type questions they ask). Cut and paste only works in some browsers.

No one expects you to set up a database and connect to it in that sort of timeframe - yet I would probably manage it better as I do it regularly.


> This seems to happen on around 50% of the take home tests that I have been given.

That was my rough estimate for the prevalence of this craziness, too.


My personal favorite was when, after I had apparently survived a 4-hour interviewing + whiteboarding stint and thought I'd be able to head out onto the street (as it was already quite late in the day)...

...a math PhD told me "hey, I got one more for ya..." and proceeded to give me a mis-articulated mathematical search problem which, going by his own statement of the problem, ended up having as its "solution" an empty class.


> Which can be easily countered by having one of their own team members sit down and take the test first. But of course, none of these places ever do that.

At my employer we send out homework exercises, and I personally did the backend developer exercise before we sent it to anyone. I did this specifically to test how long it took. (For the frontend exercise, we didn't have anyone skilled enough on staff to do it, which is why we were hiring a frontend dev).


Did you do the test blind? i.e. Did someone give you the problem without you knowing/hearing it before? If you wrote the problem, or even heard it before you had to solve it, you had a big leg up on someone who's never heard it.


Fair points. FWIW, we've asked people how long the problem took and they all said it took a few hours, so I think it's the right scope. It really is a fairly easy piece of work.


Actually, I think this whole fear of candidates pervasively lying about time-to-completion is something of a red herring.

(1) It may seem counterintuitive to some, but my own general policy is, when it comes to little stuff ("how long did it take you to do X"), you just have to trust people, to a certain extent. Sure, some people may blatantly or grossly lie. But most likely these people will reveal their slipperiness in other ways, very very quickly.

Meanwhile -- and I think people are quick to overlook this -- playing the "policeman" role in every transaction with the candidate brings substantial negatives. By definition, it's adversarial. And generally there are (nearly always) non-adversarial ways to get the same information about the candidate ("Are they basically honest?") you're looking for. They take creativity (and an ability to read emotions and pick up on other signals), but they're there.

(2) Much bigger -- really, there's no need to sweat about the time to completion at all. Just look at the quality of the code.

It all just comes down to the fact that everything is interconnected: Good people generally turn out good stuff in reasonable amounts of time. When you're looking at good code, there is, I find, an intrinsic aura of ease and comfort which shines through it -- such that you just can't imagine it took them very long to produce it. Everything just flows -- just like it does when you talk to them.

Mediocre (and dishonest) people, on the other hand... never produce good stuff in virtually any amount of time. Sure, they can take the whole weekend to polish off their code... but it will still look bad, or at best, "Meh".

There might be some false positives (or outright frauds) with this approach, but I suspect very few. And those that do slip through are easy to spot by other means (such as asking them to talk about their solution, even for a couple of seconds).


I think the concern is that the homework task could be too large, and because people have an incentive to appear competent, they might lie about that to avoid looking bad. That would mean we're giving too hard a task but we'll never find out.

As to how likely it is that people will lie, given that we've hired several people who did this homework and they've proven to be as competent as we believed, there's at least some anecdotal evidence that some people have not lied.


You can't ask a candidate how long it took; they have no incentive to tell the truth here. If they admit it took all night, you'll think poorly of them, so they'll say it took a few hours whether or not it really did.

You have to ask someone with no skin in the game.


As others have pointed out, your interviewees have little to no incentive to give you an accurate time estimate.

Beyond that, I think "a few hours" is a bit too much to ask for, especially since you are presumably following up with an in person interview centered around the assignment. That's a big time commitment, and the commitment on the assignment especially is very asymmetric.

I'm not a big fan of assignments for this reason. They are just so asymmetric. I've been given interview homework that was supposed to take "about an hour" and it was more like 4 to 5 in reality. It felt like an unreasonable time request. I did the assignment, and did in fact tell them it took considerably longer than their estimate. I was invited for an on-site after all that, but declined for other reasons.


You are begging for people to lie to you.


If that's the way you think people are... then those are the kind of people you'll attract.

It's how the universe works, basically.


It took a few hours and "it really was an easy piece of work"? How considerate of you. Who was it an easy piece of work for? Someone with unlimited free time? An unemployed person? What a joke. Any problem is easy when you yourself contrive it. Here's an idea: how about you pay someone for the 3 hours of work you are giving them. Says a lot about you.


I would prefer that we pay people to do this, but unfortunately that's not my call. That said, no one seems to complain so bitterly about the all day interviews that places like Google do (and insist you fly out for in person).

Isn't asking people to fly out for 6-8 hours of interviews (which effectively takes 3 days minimum out of your life) a much greater burden than spending 2-3 hours on some coding which you can do at a time of your choosing? (Followed by 2-3 hours of video chat interviews spread across multiple days, no flying required).

Also, just to clarify, the homework problem is _not_ something related to our business. The solutions are of no business value to the company.


Are you paying as much as Google? Is your job going to give the same prestige as having "worked for Google" on your CV?

I am not saying that you are wrong, but applying to Google does sound like it will have many benefits above your average company.


Google is hardly the only company to do in-person all-day interviews. I've had several in my career, none of them with Google. IIRC, they were with Shopzilla, Grant Street Group, and Livetext (not exactly Google-famous companies). I wasn't thrilled about flying out for 6 hours of interviews, but at the time I thought it was reasonable for them to ask for this. I'm not sure I'd do it again unless I was incredibly excited about the position in question, but I'm older and more jaded now.

As to how much we pay ... Our pay is very good, and all but one of the positions for which we've had the homework requirement have been telecommuting as well. It's actually a pretty desirable place to work if you like good pay, telecommuting, working at a small company, a low pressure environment (we've been profitable for many years), and various other perks (training budget, flexible hours, blah blah blah).


And the above poster would seem to have an eminently sensible interviewing style, himself:

http://blog.urth.org/2016/03/08/tech-interviewer-theory/


The last time I was on the hiring side of the process... I had what I considered a pretty easy assignment... it was for a full-stack JS developer (Node). The assignment was to read in an XML file (preferably via an input stream) and write it out to a JSON file with a predetermined object structure.

There were no other limitations beyond "bonus points for stream-based input" and "bonus points for test cases"... Only a couple of people actually delivered a "working" solution (one that ran and output anything), none of which had the correct output (one was close enough), and none had any tests.

It was truly something that should take a "skilled" developer a couple of hours, and not something that should be entirely alien. To say the least, I was really disappointed with the results. I did the project myself in about 2 hours, with test cases and 100% code coverage, before I even gave it out; it was as trivial a challenge as I could come up with for a real-world problem.
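For a sense of scale, a converter of that shape can be quite small. The sketch below is mine, not the actual assignment: it assumes made-up element names (record, id, name), and it's in Python using only the standard library to illustrate the idea, even though the original exercise was for Node.

  import json
  import sys
  import xml.etree.ElementTree as ET

  def records_to_json(xml_path, json_path):
      """Stream-read an XML file and write its records out as JSON."""
      records = []
      # iterparse streams the document instead of loading it all at once
      for _, elem in ET.iterparse(xml_path, events=("end",)):
          if elem.tag == "record":              # hypothetical element name
              records.append({
                  "id": elem.get("id"),
                  "name": elem.findtext("name"),
              })
              elem.clear()                      # free memory as we go
      with open(json_path, "w") as out:
          json.dump({"records": records}, out, indent=2)

  if __name__ == "__main__":
      records_to_json(sys.argv[1], sys.argv[2])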

Why should I have to pay the couple dozen people for their 3 hours, when none delivered a correct solution?

In the end, the person with the closest to correct solution, was the one with the least experience... that person got the job.


Well you're one of the brave and true, then.

And I had someone on my previous team bring up this very point at a meeting, too ;)


I actually do what you suggest. I pick problems I've never done before and try to solve them in less than a third of the time I allot to the interviewee, to account for interview time pressure. Even then there are probably biases, because I pick problems that I can solve, so I periodically review candidate success rates on each problem and swap problems out if they turn out to be too simple or too complex.


> Send out the assignment at a predetermined, convenient time and require it be returned an hour or two later.

This honestly sounds like a great idea to me, except maybe with a slightly longer time allowance to remove some of the pressure. Definitely hoping more interviewers will start to adopt this method for take-home interviews.

But this method also hinges on the interviewer's ability to design projects that can be completed in a reasonable amount of time and still give good insight into a candidate's skills. I think it's safe to say this will be a difficult task for most interviewers.


A day or two should honestly be fine. Have them talk about the solution after turning it in. It's a lot easier and more interesting to talk about code you just wrote than it is to make someone whiteboard something on the spot.

It doesn't need to be a time trial. If you're impressed with the code and hire the candidate, worst case is you get someone who takes a little more time but writes great code.


The time requirement doesn't just set a limit for the candidate though, it also serves as a limit to consider for the interviewer when speccing out the project.

Having a time limit of a day or two is fine if the actual project should reasonably take a couple of hours, but if the project actually takes more than a whole work day to complete, then that's a different story (in my humble opinion, anything that would take more than a couple of hours is an unreasonable demand on the candidate's time unless you offer some kind of compensation).


> There's also no disincentive for interviewees to spend an unreasonable amount of time on the project.

I don't see any problem with this, as an interviewer. The take-home is supposed to be an example of the work the candidate does; they should take however long they need to do it. I want to see the best-case scenario of the code they write (given the problem at hand, etc., of course). The entire idea is removing the time pressure.


Right, but workaday programmers with two kids and a full-time job won't be willing or won't be able to spend, say, more than two hours on something like this.


> I don't see any problem with this, as an interviewer.

The issue for the interviewer is missing out on good potential hires because your selection process is biased against people with little free time.

I have a family and I'm doing part-time study in the evenings. If I'm looking for a new job, then I can probably find time for 1 exercise a fortnight. If one company tells me they have a "4 hour assignment" and the other a "1 hour", then I'm far more likely to do the 1 hour exercise and pass on the long one.

And if I do the 1-hour test, I'd expect to be assessed accordingly. If you're comparing one person's output after 1 hour with that of another who actually spent 5 hours on it, then you will be more inclined to hire the person that spent longer on the project, even though that's not really going to correlate with on-the-job performance.


Take-home projects can be a double-edged sword... for a given project that is estimated to take 4 hours, I will typically spend about 16 hours on it, working straight through the night, chugging coffee and/or beer. I work by banging out an ugly PoC and then refining it drastically over several iterations. My final versions are award-worthy, but the early ones are really bad and sloppy. I am the type that is great at simplifying, but bad at coming up with the initial statement.


Ignoring the time investment for a moment: That is not a bad approach in my experience. Iterating allows you to learn from the versions before and guide your improvements. As long as the first version is at least good enough to do the job AND good enough to be improved upon (often the harder part) it is absolutely fine.


The only problem with her/his approach is that it took 16 hours where she/he would've liked to spend only 4 and be done (as 4 hours was what was expected). Four hours may be unrealistic for what was delivered in the end, but it would make me feel at least slightly awkward to put 16 hours into a 4-hour assignment, since it indicates either that my performance is not where it should be (being off by 4x would be a lot to me) or that my priorities are not at all aligned with the company's regarding this assignment.

Don't get me wrong, everyone takes whatever time she/he needs, and comparing results against time spent is hard unless you only look at "does it work", which in many cases does not do the work justice (and may not even be the most important metric in the long run). For that reason take-home assignments may be better than the almost comical interviews often described on HN, but they have their own flaws that make them far from ideal as well!

I think it is hard to judge the true performance of a potential employee in a company team without actually having the candidate be part of the team (and even then it'll take a good amount of time before someone settles in). Some folks may not be the best programmers but are good catalysts in a team, smoothing relations between colleagues and increasing team output overall. Or they might have a habit of happily taking up tasks that are wildly unpopular and thereby, even if they are not the most performant, solving problems colleagues or perhaps a faster candidate wouldn't have solved. I could go on about this, but I think it's clear what I mean.

There is a lot more to a role as programmer than just programming and that is often completely neglected in these discussions.


> [...] without actually having the candidate be part of the team (and even then it'll take a good amount time before someone settles in)

Some companies are trying to answer this issue by signing a potential employee on for a two-week "trial," where they hopefully get paid. The trouble there is figuring out how long a trial really needs to be to get a good idea of how that person works and fits in with the team -- too little and it's still a crap shoot, too much and you've already essentially hired them.

In the meantime, trial runs only work for developers currently out of a job; how do I skip my current gig for 2 weeks to go sit at a potential new employer's office? I certainly have no safety in quitting to go do it, since I may get dropped after those two weeks, and there's only so much vacation you can take before you run out of personal time.


I remember hearing about take-home projects that amounted to free work for a company rather than a test of the programmer's skills, which is pretty smarmy. Anything can be abused or misused. Anyhow, the disincentive would probably be a decrease in applications if they start piling on the homework. Unless you're offering some amazing compensation and perks, or you're hiring for a project that could make a person's career, fewer people will bother when there are so many other companies out there hiring.

Maybe the least worst answer is to have people who have been through the hoops before, and can empathize with candidates, running the hiring processes.


Empathy isn't enough, in my opinion. There are problems with most every alternative:

1. Show-off quizzes are of limited relevance.

2. Take home work is a huge stressor for people with limited amounts of outside time.

3. Open-source contributions are a) too restrictive as a filter, and b) favor people with lots of time, just like #2.

4. Personal connections narrow the pool that you have, promote nepotism, and tend to exclude people who are already at a disadvantage.

5. Looking at previous jobs screws over lots of junior people and just means you're depending on the last interviewer's shitty decision.


Yes. I refused to do an exercise because it was quite obviously from someone's todo list, was a rather large endeavor, and didn't demand any particular skill aside from trying to find a sensible way to handle the many special cases (think parsing Linux network configuration files and a pile of environment-specific behaviors).

I read it and told them (a) I had no interest in a firm that behaved that way, and (b) I had no interest in a firm that didn't understand what configuration management tools were for.


Empathy might be in short supply and the interviewer might simply try to replicate the gauntlet that they themselves had to run in order to get hired.


Ironically enough, Triplebyte's own take-home projects were some of the worst I've ever had, and did a horrible job of respecting the candidate's time.

When I went through their take-home interview process, there were 4 projects to choose from, with only one having anything remotely to do with my area of expertise (it was a multiplayer game, and I was looking to work as a web front-end/full-stack developer). For all their talk about how you should select a practical project to talk about because it correlates better with the ability to get real work done, it's really rather ironic just how utterly academic and impractical the projects they offered were (that game was literally the most practical one on the list).

Now they tell you that you're expected to spend at most 3 hours on the project. This might be true for some of the other more academic projects on the list if you had any expertise in the respective areas, but it definitely wasn't true for the multiplayer game project I chose, which had a non-trivial front-end and back-end component, for which testing alone could easily take 3 hours (Or maybe I'm just bad, which is definitely possible, but I've asked many of my more experienced peers how long they think a project like this might take for them, and the lowest estimate I got was a whole work day of 8 hours).

Now they also tell you that it's OK if you don't finish the project. There was also a second part to the interview where they might ask for extensions to the original project and give us more time to work on it, so I assumed that if we didn't finish, they'd ask us to finish the original spec along with some extensions for the second interview. Since I had already finished the front-end component for the game, and had worked on the project for well over the expected 3 hours, I decided to call it a day and work on preparing for some of the other interviews I had that week.

However, that it was OK to not finish the project for the first interview might have been an outright lie. I won't ever know for sure because I was rejected after the first interview for not finishing the back-end as well as being unable to resolve some performance issues the interviewer pointed out (which at no point in the interview did he even ask for me to fix, so I assumed fixing that performance issue would also be a part of the extension). Incidentally, I went back to the project a while after the interview and resolved the performance issue in 5 minutes flat.

The whole process just left a rather bitter taste in my mouth, which was all the more disappointing because I went in with great hopes after reading all their great blog posts on HN. For anyone else considering Triplebyte, I'd highly recommend going with their traditional interview route until someone at Triplebyte can confirm the process has changed for the better. At least with that route, if you get rejected, you'd have wasted less time in the process.


> Now they also tell you that it's OK if you don't finish the project.

Yeah, they always say that -- but it's never really true.

They should just be honest and say "If you don't finish the project in time -- then don't feel bad, but perhaps the test isn't right for you, at this time. Feel free to apply again in 6 months."


I've actually been through two interviews where I wasn't expected to finish the project, and I ended up getting the jobs. I think the difference there was it wasn't set up that it's "OK if you don't finish", but that "these requirements were crafted such that we don't expect you to finish".


"These requirements were crafted such that we don't expect you to finish" is exactly how we explain our coding tests to candidates, and it seems to work really well. We actually find that many candidates will spend their own time finishing the project _after_ the interview. Programmers do like a challenge, after all, and nothing says "challenge" like "we don't expect you to finish this".


>>Feel free to apply again in 6 months.

I don't understand this. No company is so special that I would throw away hundreds of hours on pointless, meaningless work every few months just to join them! Unless I'm in need of a job again.

And what's the big deal even if I join them after six months? I'd be working to maintain some code, fix bugs, and maybe occasionally do a big important project.

It's not like they are sending Neil Armstrong to the moon all over again and I'd want to be a part of that history.


Me neither; it's like some companies just like to feel special.

I've been invited twice to Google interviews -- I mean really invited, by their HR, not me applying to them. On both occasions I failed the process on their stupid questions.

I started replying to their HR: if I am good enough to be invited, but in their eyes unable to devise an inode search algorithm for unlimited hard disk sizes with a specific set of hardware and search-time constraints over a phone interview, then could they please just stop inviting me!?

That was the last time I heard from them and I don't care a bit about it.


Devise an inode search algorithm for unlimited hard disk sizes with a specific set of hardware and search-time constraints?

Curious -- was this problem reasonably related to the kind of work you'd be doing in the role you were applying for?

Or did they just want to find out if, you know.... you had that "spark"?


It was a position related to compiler development.


Weird... what were they thinking?

But thanks. Another data point added to what others have been saying about their hiring process.


I am sorry that you had a bad experience with us. Evaluation is a really complicated thing. The bar that we use for evaluating the take-home project is to treat it as real work, e.g. would a teammate feel good if you were working with them on this task and you came back after half a day with this. Because we can't see process, all we can do on the take-home track is judge whether the finished result is professional-level programming. We do indeed pass people who do not complete the project on the 1st call, but they need to be on a good path (where we think they can finish by the 2nd call).

I do not know who you are (and would of course not post details here), but from what you say, it sounds like we did not think that you were on track to finish, and had some concerns about the design that you selected (we track whether a number of milestones in the game have been reached). It is totally possible that we were wrong. We much prefer to see a working front-end / back-end combination that has missing features than just a front-end or just a back-end.

Again, I apologize for your bad experience. I hope we can make the take-home interview better in the future with some tweaks.


Just curious: are you making sure your interviewers actually try to code the project in 3 hours themselves first, before judging candidates? I mean, it seems to me that most of my colleagues (and myself) are always very optimistic with respect to "how long it will take". So if you are judging someone based on your expectation without having gone through it yourself, it can lead to a perception gap.


This is a good question, because I find that my colleagues at previous companies spent a lot of time thinking up 'good' interview questions and left it at that.

None of them actually tried solving them within the constraints that a candidate is put under.


I think that you almost have the right idea, but 3 hours is too long. I believe a programmer can demonstrate his ability to perform the basics in 1 hour or less. This respects the candidate's time and it also encourages the test's designers to select the most trivial project possible that shows the basic skills they're looking for. That's really what's important.

Larger projects take up more time and introduce a larger possibility that some matters of opinion or taste will impact the candidate's performance. I don't believe differences in taste are important as long as the candidate demonstrates that he is able to comply with a defined style guide.

The actual test is going to depend on the position at hand, but one of my favorites is a very simple program that asks the candidate to use GitHub's API to display the list of public repositories under a user-inputted username. That's it. I tell them they can use any language they want.

This simple test gives all the information we need about the basics:

a) the candidate is able to go online, provision himself an API key, find docs, and reference those docs to see an external vendor's API format

b) the candidate is able to use that information to craft a program that successfully interacts with the vendor's endpoint

c) the candidate is able to present the information in a concise, desirable manner.

d) the candidate is able to do all of this with a simple 1 paragraph description of the project.

The choices the candidate makes in the process of completing this simple task tell you a lot about his process, style, habits, and preferences, even though the project is very minimal in its actual requirements.
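For reference, a complete answer can be tiny. Below is a minimal, unauthenticated command-line sketch in Python (candidates could of course use any language) against GitHub's public GET /users/{username}/repos endpoint; the output format and the decision to skip pagination are my own choices, not part of the original exercise.

  import json
  import sys
  import urllib.request

  def list_public_repos(username):
      """Print the public repositories of a GitHub user."""
      url = "https://api.github.com/users/%s/repos" % username
      req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
      with urllib.request.urlopen(req) as resp:
          repos = json.load(resp)
      # Note: only the first page of results is shown; pagination is omitted here.
      for repo in repos:
          print("%s: %s" % (repo["name"], repo["html_url"]))

  if __name__ == "__main__":
      list_public_repos(sys.argv[1] if len(sys.argv) > 1 else input("GitHub username: "))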

I've had people give me web apps, command-line apps, and GUI apps that accomplish this same goal. Many candidates would go above the requested specifications and many candidates would reply the same day they were given the test, which to me was an excellent signal that they felt their time was respected and that we were doing a good job of engaging them and making them interested in working for us.

As I stated, this test is not appropriate for all positions, but I think most tests should be modeled after those principles. Give the candidate room to express himself and demonstrate relevant practical knowledge.

I hope you'll consider a minimalist project like this over something like "design a multiplayer game ... in 3 hours".


As a wannabe junior rails developer, I really like the idea of a technical assessment like this. It seems very practical, yet it's not overly complex.

Have you considered requiring a thought process journal as well?


Evaluation is only a very small part of my complaint. The choice of projects that were offered, and the scope of the projects are what really need to be revisited (finishing the project wouldn't have been a problem for me if there existed a project that was relevant to my work that could be completed in 3 hours).

In terms of project choices, to me, it seems like you guys erred on the side of choosing projects developers would find interesting and challenging technically over projects that are practical and accurately represent the kinds of work most developers will actually be hired to do. I'm not going to post the details here for obvious reasons, but out of the four projects offered, only the multiplayer game was even remotely relevant to front-end/back-end or even application development in general, which are areas that probably account for the vast majority of development work available from startups. I realize there's a balance to be struck here, but in my humble opinion, as a recruiting firm, you should be erring on the other, more practical side in terms of project choice. I applied to Triplebyte to find a job, not to fulfill my intellectual curiosity (I can do that better on my own time without needing someone to assign projects to me).

In terms of project scope, I'm not really qualified to comment on the other choices, because they were way outside my area of expertise, but the multiplayer game definitely didn't feel like a 3-hour project. As suggested in another reply, I really hope you guys can actually give the project a try yourselves and see what level of completion can reasonably be expected from three hours of work on something like that. Take whatever times are reported by candidates who have successfully completed the project with a grain of salt, because people will have a tendency to understate the level of effort they spent to make themselves look more efficient (no matter how much you tell them you don't care). Here's another possible idea for making projects that take reasonable amounts of time to complete: just take your traditional interview questions and slightly extend them with extra features, and simply expect better polish, architecture, test coverage, and overall code quality during the code review.

Anyways, my experience with Triplebyte's take-home interviews definitely didn't leave a great impression, but I still recommend you guys to my friends and colleagues because I do want to support what you guys are trying to do. Hopefully you can take some time to revisit some of the issues people have mentioned and make the necessary improvements. I'm happily employed now, but I'd love to give Triplebyte another try the next time I'm looking for work. =)


This guy has a great beard.


Anecdotally, I had a really positive experience writing the HTTP server with TripleByte. I use interview projects to learn new skills and domains; doing so aligns my interests such that even if it doesn't go well, I'm better off for trying. My project review went reasonably well -- we caught a bug, fixed it, and tested. I turned down round two due to taking another offer, but genuinely felt like these guys cared about my progress and experience.

I'm now in a position where I'm interviewing and helping shape my organization's hiring practices. We've debated all the different approaches, some people like projects, some like algorithms, and some don't want to do either to get the job. At the end of the day, I really just want data on a candidate's ability so that I can say Yes.


> I use interview projects to learn new skills and domains

There probably lies the disconnect. For me, interview projects should assess how well I could perform in the position I'm applying to. And thus, if nothing else, interview projects should be relevant and practical.

It seems to me that Triplebyte's project choices were made based on how interesting developers might find them, and sheer technical challenge. Some might appreciate this, but personally, I'd rather learn new skills and challenge myself on my own terms.


I've written 4 HTTP servers in 20+ years (Perl, Java, shell, JS). I don't want to write a new one, because I will develop no new skills.


> At least in the traditional technical interview, interviewers and candidates tend to be roughly equally invested in the interview process in terms of time spent

Not really true unless you go to your interviews completely unprepared :-)


As the interviewer, I'm not going to show up unprepared either. Typically, having an interview means learning about the candidate and what would be interesting to talk to them about, learning what team they might be a good fit for and what they need, and syncing with the other interviewers to make sure we're on the same page. And then afterwards, we'll meet up to discuss how the interviewee did and whether or not we should hire them.

All in all, it's at least an hour of prep time needed, and that's only counting my time, not the time of the other interviewers, managers and recruiters.


Since the article seems to be about startup interviews an hour of prep time from the employer's side actually seems fairly low. I'd expect closer to a full day. Isn't hiring one of the most important strategic decisions for a startup?


not to mention you're asking them to code a (potentially) complex take home project, that could take over an hour, for free.

A friend of mine told me her company doesn't use online programming tests (like HackerRank) because she doesn't think legit programmers would bother with positions that required them.

Having taken a couple of these online automated tests, I don't think I'd take them again. Problems which I'm sure I had written correctly, dealing with edge cases and testing in the browser -- I'd submit them only to get something like a 6%. Programming simple things in front of people in an interview, I do fine.


> not to mention you're asking them to code a (potentially) complex take home project, that could take over an hour, for free.

An hour, I'm fine with. It's less than what I'd schedule for an interview, and far less than I'd schedule for an in person interview (which might include a flight out). On the other side of it, though, I'd be concerned about cheating. It wouldn't be too hard to hire someone to take the test for me, I'd imagine.


I got one that was just: "write a web application". That's literally all they gave me for parameters. I decided not to be a dick and send them 5 lines of code returning HTTP 200, so I set off and started a side project I'd been wanting to do for a while. Lo and behold, I wasn't even close to done after the weekend was over, so I emailed them and said "sorry, but I need more time if you want the application I'm working on". I got an email inviting me in for an in-person interview after I talked to an engineer for about 20 minutes, so problem solved, I guess.


Don't worry, it should only take an hour or two; an afternoon at most!


Ah. I just provide all my answers in an obscure language they won't know but that has lots of "nerd cred", so they feel uncomfortable questioning it. Like OCaml, Erlang, or Haskell.

Then in the rare case they question it, which usually goes something like, "Can you do it in Java? We don't use whatever language it is you're using there," I respond with, "Oh, will I be spending a lot of time at this job coming up with somebody's PhD thesis from the '60s on the fly?" To which the answer is always, "No." Which finally ends the conversation with me saying, "Well then you asked me to solve an irrelevant problem, so I'm happily providing you an irrelevant answer."

I have a fair bit of contempt for the pitiful state of the technical hiring and assessment processes out there. I think that comes mostly from my opinion that hiring is probably the most important thing any company or manager does, and that if any one of these companies/interviewers put even 10% of the effort into learning about educational/occupational assessment, psychology, and neurology that they put into obsessing over the dubious idiosyncrasies of the latest framework-rehash-of-the-month, then these terrible interview practices wouldn't endure very long, and we'd all be spared the indignity of the farce on both sides.


> Which finally ends the conversation with me saying, "Well then you asked me to solve an irrelevant problem, so I'm happily providing you an irrelevant answer."

So how many of those companies made an offer?


3 out of 5 where it's come down to that kind of bizarro pissing contest.


I've seen exactly such a colleague, training hard at competitive programming every day. Yet he wrote this kind of C++ code:

  ressource AClass::CopyACriticalRessource()
  {     
     {
        std::lock l;
     } // Optimization for the lock.
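     // NB: the lock guard is destroyed at the brace above, so the read of ressource_ below runs with no lock held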
     
     return ressource_;
  }
Having a good understanding of every aspect of your architecture, taking your time instead of rushing for a blazing-fast (O(1)) but hacky solution, always keeping in mind that someone must have already solved your problem... these are valuable skills that can't be covered during a 1-hour interview in front of a whiteboard. I would like to see someone Google a solution in front of me during an interview, try to UNDERSTAND it, check its complexity, compare it with other search hits, adapt it to their problem, and ask for my review.

The best technical interview would be a homework assignment with enough time for a normal human being, requiring tricky algorithms to solve it in a fancy way, good architecture, and great computer-science knowledge in general. Afterwards, a one-hour code review/interview with a debriefing on all the choices.


Whiteboard coding doesn't necessarily have to involve guessing the algorithm - any whiteboard questions I've asked avoided doing so.

On the flip side, my experience with take home projects is that they are much more of a hazing/pressure cooker ritual - at the least, the time pressure of a Google/FB/etc. interview lasts only a half hour. Most will expect candidates to spend a huge amount of time, which many won't have if they're actively job searching, or are busy people in general - I prefer to pour that extra time into open source work, since at the least it benefits others. One company assigned a project to me once that suspiciously seemed like implementing the company's whole business model.

Oftentimes it doesn't take long to figure out if someone has the right skills if you ask the right questions -- candidates shouldn't be penalized for companies' ineptitude in assessing interviewees.


> I prefer to pour that extra time into open source work, since at the least it benefits others

As someone who pushed his company to adopt a "take home" assignment for our interview process, it should be perfectly reasonable to reply with "here's my commit history on a project relevant to what you're hiring for."

I much prefer giving (and taking) take home assignments because it lets the interviewer see _what someone will actually produce on the job_. If you can do that without jumping through our specific hoops, great. If not, that's why we have the assignment.

It also at least somewhat ameliorates the pressure cooker of an interview, which many people cope with poorly. It can be hard enough to communicate clearly when all eyes are on you, let alone get your thoughts together and solve a logic problem. If that's actually analogous to your work environment, well...


I don't think that is really a problem. We hire people to do a good job, and most of the time a kick-ass programmer isn't the best person for the job. At least at a smaller company, we need someone who can understand business needs, can communicate well, and can add value. This is a far more complex job than implementing red-black trees.

Consider this: We need to find the most shared URLs on Facebook in last 24 hours.

I can perfectly see why a lousy coder might achieve this objective better than a kick-ass one. A coder with average skills quickly figured out that Buzz-sumo has a public webpage with that content which can very easily be scraped using phantomjs. Job got done.

Another great coder suggested to me that I should buy the $10K-per-month Facebook firehose. He is not wrong, and that is a good solution too, but he failed to see that we are building a POC and not a full-featured product.

Companies' needs are complex; many times, if you can pay a great salary, just having a filter for IQ is good enough. But in most cases I think it is far better to look at an individual and judge.


Would the Buzz-sumo programmer be expected to investigate their terms of service too? (Often scraping is prohibited.)


I am just giving an example. For a POC it does not matter much.


"But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm"

No, but it may be a good question anyway. A good interviewer probes you about things you might encounter on the job to find the point where you cannot recite things, and watches how you handle that.

Having said that, I don't understand the focus on coding interviews I read about on HN. I don't know whether it is a cultural difference between the USA and Europe, whether I just haven't noticed how you are supposed to prepare for interviews, or whether I am too smart to need to bother, but when I apply for a job, I look up what the company does and try to figure out what its culture is, but I do not prepare for the technical side of things. An interview isn't a one-sided affair of you being quizzed; it is two parties figuring out whether they fit together.


I have come to believe it is part of an industry-wide negging style to keep people in their place.


> I much prefer "homework" projects

I helped a group within my organization with their hiring process recently, and we had pretty good success with assigning a short "take-home" exercise, vs. trying to haze them with a programming problem over a Google Hangouts interview. A problem focusing on a small part of what that group does, but scoped to be doable with 1-2 hours of work.


I guess it works well for people who are unemployed with nothing else to do, no family, and lots of time.

For everyone else, this sucks!


Interviewing does take time. Most people will be able to spare an hour to find a new job.


So you are saying you don't spend any time preparing for your interviews :-)


Homework projects don't work in some cases, when developers come back with refined copy/pasted code or code written by another programmer. A better way is to go through previous code that the developer has written, or have them go through code written by other programmers, to gauge their ability to understand different code patterns.


> A better way is to go through previous code that the developer has written, or have them go through code written by other programmers, to gauge their ability to understand different code patterns

Doesn't this have exactly the same problems as homework projects? You can still copy and paste or plagiarize.


Why don't companies offer it as an option?

"Hey here's a fizz buzz question, if you feel confident answering it now go for it, otherwise we'd be just as happy for you to do it in a take-home fashion?"


Every week HN has a topic on the front page about how inaccurate and unfair job interviews are. They're always going to be inaccurate and unfair. Interviews must use proxies for candidate quality, and there will always be false positives and false negatives.

The most practical thing is to work with it as it is.


> But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm (as I was asked to do by an interviewer yesterday)?

This is actually a trivially easy question that gets to the heart of whether you understand the point of moving averages or not (that you can update a sum by subtracting out the value leaving the window and adding in the value entering the window).
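A minimal sketch of that constant-time-per-update idea (assuming the input has at least window-many values):

  def moving_averages(values, window):
      """Yield the moving average over a fixed-size window in O(n) total time."""
      window_sum = sum(values[:window])          # O(window) once, up front
      yield window_sum / window
      for i in range(window, len(values)):
          # O(1) per step: add the value entering the window,
          # subtract the value leaving it.
          window_sum += values[i] - values[i - window]
          yield window_sum / window

  # e.g. list(moving_averages([1, 2, 3, 4, 5], 3)) -> [2.0, 3.0, 4.0]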

"Programming ability" isn't one dimensional. Whether or not the above question is useful depend entirely on what skills are important in a candidate.


It's not so much specifically about the moving average problem. I would want to see that a candidate can reason about the performance of some code / algorithm. I would not expect them to be able to recite the performance for specific algorithms from memory, however.


I agree, but I suspect "recite" was editorializing by the parent.


It wasn't. I was literally asked, "What is the time complexity of the moving window average algorithm over an array?" and when I asked for clarification, I could hear an edge of... I guess frustration in my interviewer's voice.

Granted, by this time, we'd been through a couple of other problems, and time was running short, but I still think it was pretty unprofessional of the interviewer to let frustration or any other sort of negative emotion show during the interview. That, more than anything else, contributed to my own frustration and perception of unfairness in the entire interview process.


Right, but "recite" was editorializing. That implies that the interviewer expected you to produce the answer from memory, as opposed to thinking about it. It's an easy question if you're familiar with moving window averages and know what the interviewer intends. If it was asked apropos of nothing, a request for context seems reasonable, though. It sounds like you probably had a bad interviewer. There seems to be no shortage of software interviewers lacking in "people skills."


Wouldn't the question about the moving average be a good one though? It should be quite obvious that you can calculate a moving average without remembering more than the current average and the window size. Given that, time and space complexity should be obvious. I think it would be a fine question to see if the interviewee has any idea what complexity means.

Or was it a more complicated moving average case (exponential etc) where the algorithm was given and you were asked to determine the complexity?


How many jobs will it be relevant to?


Which part?

Figuring out how to efficiently calculate a moving average seems like a good question for basic maths skills to me.

About complexity theory, there is already a lot of discussion about its relevance in the comments. My point was meant more along the lines that, if you value a basic understanding of complexity theory, the question asked of the GP seems reasonable.


Coding interviews are very stressful, and churn through a lot of really awesome potential hires. I imagine there are tons of false negatives, but it's nearly impossible for a terrible programmer to get through a gauntlet of programming interviews. However, I agree. They don't give a full picture of a developer's abilities.

As a developer, I prefer take-home projects. As an interviewer, I prefer a few coding interviews, followed by a take-home project.


I have not tried it. But in my next business venture, I plan to actually do pair programming with the candidate.

This will allow members of my team (or myself) to get to know the candidate, evaluate the 'wavelengths', and see how effective he/she is at finding patterns on the internet and in books -- rather than thinking things up.

It also demonstrates to the candidate commitment on our side, and it naturally forces us to find problems of a proper size/effort (as we are spending the effort too).

We would not do it for all applicants, though -- only for ones that pass the basic screening / competency process (there are no trick questions or exercises there).


    I have not tried it. But in my next business venture, I plan to actually do 
    pair programming with the candidate.
As a candidate, I can highly recommend pair programming. One of the greatest interview experiences I had was interviewing with a local shop where the employee and I reimplemented a Set class using TDD and pair programming. The employee sat at the keyboard, so I didn't have to deal with not knowing keyboard shortcuts or the unfamiliar operating system (OS X), but he was very careful to only write the code that I asked him to write, and to let me make my own mistakes.

It was one of the most enjoyable, and, dare I say it, relaxing interview experiences I've had. Though I didn't get an offer, I wouldn't hesitate to recommend that company to any of my peers who're looking to make a change. Also, like you said, it was a second-tier screen, after an initial screen consisting of a more traditional phone interview where I had to write some JavaScript.


Yeah. You can save a lot of your time, and the candidates' time with a simple coding interview - even FizzBuzz will knock out half of the candidates. Then pair up and/or give a take-home, then discuss the solution in a one-on-one session.
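
(For reference, the whole FizzBuzz exercise is just something like this sketch in Python:)

    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)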


As an interviewer, I only look at the resume and the published open source projects the candidate has done. Everything else is just a random impression, nothing serious enough to bother with. The questions are for something else.


I like this approach the most. Evaluate code that they've already written, maybe have them solve a similar problem to confirm they actually understand what they did (and didn't pay someone else to do it), and then check for personality fit. Chances are, if they really wanted to be hired to learn how to program, they're aware of the importance of contributing to open source.


This is frustrating for the many engineers who have no open source work. You're shutting out a massive amount of talent that has only worked on proprietary software.


    I imagine there are tons of false negatives, but it's nearly impossible for 
    a terrible programmer to get through a gauntlet of programming interviews.
Depends on what you mean by "terrible", I suppose. Yes, coding interviews do a good job at screening out the bozos who just can't program, period. The ones who don't understand the difference between a for loop and a while loop, or the ones who can't handle boolean logic.

But I've found that even once you get past the outright bozos, there are quite a few programmers who can program quick one-off things, but have no sense of design or maintainability. They can deliver functionality, but deliver in a way that piles on technical debt and damages the long term health of the codebase. I think the traditional technical interview format ironically encourages this sort of behavior, by encouraging applicants to focus on narrowly solving the problem at hand, as quickly as possible, both in terms of machine time and programmer time, even if that means the code is an unmaintainable mess in the long run.

Put another way, think back to the last time you had to do any sort of whiteboard coding, as part of an interview. Are you proud of the code that you wrote? Would that code pass code review at your current position? If so, then congratulations. You're a better programmer than I. The code I've written on whiteboards has been pretty uniformly terrible. Sure, it met the correct Big-O complexity requirements, and it was correct, insofar as it produced the correct output, given correct input. But there was no error handling. Variable names were single letters. The functionality wasn't broken up into logical functions because writing additional function headers takes more time, and my handwriting is messy enough when I'm not rushing. All in all, it's code that you'd see in a prototype, or a programming contest entry, not a robust system that's usable by customers.

Lately, I've seen more and more such code being produced by new graduates not only in coding interviews, but also as part of day-to-day programming. There's an incipient attitude of, "Well this code would pass in an interview, so it's production ready." I find it deeply troubling, and my concern is that programming interviews are setting up incentives by which this sort of code becomes, if not normal, then certainly more accepted than it was in the past.

This is why I advocate so heavily for take-home projects. When a candidate submits a take-home project, you can be assured that they had enough time to design and code the assignment in a maintainable way. You can see whether they added unit tests. You can see whether they split the code logically into objects and functions, or whether they smushed everything into a 500-line main(). I accept that take-home assignments aren't as scalable, either from the interviewee side or the interviewer side, but I do worry about the long term effects on norms that programming interviews are having.

EDIT: grammar


The problem is that most companies want the take home in addition to algorithm interviews, keeping the worst characteristics of both approaches.


I have a theory: I think the coding skills of a programmer are a function of the number of programs he's written - from scratch.


Maybe, but at the moment I have inherited a bit of a mess of an application in my new job. I think it takes more skill to work out what it is doing and replace the crap code with something simpler and more efficient, more maintainable and easier to read.


My favorite sorts of interviews are ones where you're expected, either by pairing with an employee or on your own, to fix a bug in an open-source project you use. It has all of the benefits of a take-home project over whiteboarding, but the end result is you have something tangible to show for your efforts even if you don't get the job.


This. Also, any interview where the candidate pairs with an employee lets both of them see how they like working together.


This is a general problem:

  > Being a good president has a surprisingly small role in
  > being elected president.


"Being a good programmer has a surprisingly small role in passing programming interviews." "And that just says it all, doesn't it?"

An interview is for both the prospective employer and the potential employee to meet and find out about each other. If you as a candidate notice something you don't like during the interview (including the interview itself), that already is valuable information for you, and you can draw conclusions from it (like what kind of people you'll find past the interview, since they were all more or less filtered in by it). So you can always stop, say "thank you", and walk away without wasting more time. You'll allow yourself to stay longer only when the interview's quality warrants it.


Here's an idea (perhaps naive): why not combine all the methods and offer them to programmers as options? E.g.:

- the normal interview is mandatory (you need something to base your ideas on)
- a portfolio is optional, but when chosen it weighs heavily
- a take-home project is optional, but when chosen it weighs heavily
- etc.

By 'weighs heavily' I mean that it counts strongly toward the criteria those methods are good at assessing.

The downside of this is that interviews become less standardised, which is a trade-off worth considering.


Unfortunately, this is much the same as virtually any broad examination process. I'm primarily thinking of schooling up to 16. There's a lot of work to be done in ironing out one-size-fits-all testing in a lot of areas, primarily those that "require" a human to do the processing of the data.


When I was interviewing in the past, I usually refused take-home projects; if you don't want to spend 30 minutes talking to me while watching me code, it tells me something about your priorities.

However whiteboarding is also terrible.

I prefer getting a real-world / work-related scenario problem, and solving it TOGETHER with the interviewer: if I'm stuck, unlike with a HackerRank or Codility challenge, there is a good chance they will let me know and hint in the right direction, and they will also see how I communicate, how I think, and how I respond to hints. An online challenge can easily be done by the candidate sitting next to two more senior coder friends who help him/her along the way. Unless it is proctored, you can never know. I've had people being whispered answers by someone during a phone screen, and people typing my question into Google and reading me the first result (I googled it at the same time).

I think that giving a HackerRank or Codility take-home problem, unless this is an entry-level job, will simply drive away experienced people.

If someone can pass your take home test easily, you will still need to phone screen them before you fly them over in most cases, and if they are that good, they will probably prefer companies that don't waste their time and jump straight to the real person phone screen.

However, some people are more nervous when someone is watching over their shoulder, so here is what I would do: I would ask the candidate what they prefer:

1. A work-related, hands-on assignment with plenty of time (e.g. it should take 30 minutes but you give an hour), with the ability to search online, compile, and use an IDE, just like in a real-world scenario

2. A Skype screen with a lot of small questions on a topic they really feel they know (no Googling allowed), and a relatively small coding challenge (something you can code in 15 minutes)

3. A work-related scenario with a real person, where there is no single best answer to the question, but rather a balance of trade-offs; more open-ended, but it 100% involves coding (just like most phone screens, but more work-related and not just puzzles)

4. The standard puzzle challenge, but alone - you have 30 minutes to solve the problem once you see it (either Googling is allowed, or the test is proctored to make sure you don't and you get more time)

5. The classic - a Cracking the Coding Interview kind of question, but with a real person and code sharing (for crying out loud, not Google Docs; at least have your candidates use something that offers syntax highlighting and easier indentation) - some might still choose this

If you let your candidate choose what is best for them, you've already made an amazing impression, and you might have far fewer false negatives.

Just my opinion


> including (time/space) complexity analysis.

I think this is one of the most inane things to be asked during an interview. Personally, I've never found myself in a situation where I truly needed to choose between a vector/map/list/hashmap. Or had to find the O(x^n) and replace it with O(x^2).

Obviously it depends on the application, but many jobs are simply maintenance coding: find bug, fix bug, test fix. Often times it makes absolutely no difference whether you use a list or a vector, or else you'll get the paradoxical "vector-is-always-faster" because of locality of reference.

In my (admittedly limited) experience, most of the effort is spent simply making it work, not being bogged down because you used a map instead of a hashmap, or didn't know about some esoteric, bleeding-edge probabilistic data structure.


I've seen this plenty of times. I've worked both on a trading platform and a large website, and both times encountered many performance issues that were solved with a more appropriate algorithm or data structure. I've even seen this with a list as small as 10 items - an O(n^3) algorithm was making multiple network calls each time; changing it to O(n) alone made a huge improvement in speed.


This does come up a lot.

But I think it's more telling if a programmer knows how to profile their program, find the performance bottlenecks, recognize them for what they are, and then fix them appropriately, than if they can recall a specific optimization for a specific use case on demand.


Most of my work on an ecommerce platform doesn't need much attention to algorithmic complexity, but everyone on my team still curses the guy who wrote an O(n^4) algorithm in our checkout pipeline (discounts, promos, shipping, tax, etc.). More than a couple of items in your cart and you couldn't check out because the thread would spin forever. I want to work with a team of people who can recognize these things immediately, even if it's not an absolute requirement for the job.


That has a lot more to do with mechanical sympathy, and awareness of when you've exchanged cleverness for complexity disguised as cleverness, than it does with knowing the big-O of operations on a data structure.

What you want the person to identify is that they've made your simple iterative checkout process into multiple unbounded tree traversals with no circuit breaker.

Knowing that searching the tree is O(log(n)) isn't very helpful when your problem is an inability to identify that you've made (n) an unnecessarily huge problem space.


Most likely you think about it, though. When you're coding, and you have a triple loop, do you think, "Oh, this is O(n^3). Is n going to be too big here?"

It may be something that's so intuitively obvious to you, that you don't even think about it. So you naturally use the hashmap, where someone else might try a list and then start doing a lookup in a loop. Then while that particular instance might not break things, it'll slow things down, so overall the application feels sluggish instead of snappy.
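
A contrived Python sketch of that pattern (the names are made up; the only real difference is the container used for the membership check):

    # Find items whose owner is in a selected group.

    def owned_by_selected_slow(items, selected_ids):
        # "in" on a list is a linear scan, so this is O(len(items) * len(selected_ids))
        wanted = list(selected_ids)
        return [item for item in items if item["owner_id"] in wanted]

    def owned_by_selected_fast(items, selected_ids):
        # Same logic, but set membership is O(1) on average,
        # so this is roughly O(len(items) + len(selected_ids))
        wanted = set(selected_ids)
        return [item for item in items if item["owner_id"] in wanted]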


I've never ever thought that to myself. Then again my code never has triple loops.


So do you never call anybody else's functions, or do you always verify that they don't do any loops in those calls?


> I've never found myself in a situation where I truly needed to choose between a vector/map/list/hashmap. Or had to find the O(x^n) and replace it with O(x^2)

This entirely depends on the kind of product you are working on. When you get to a large scale with any programming project, optimizing computational resources will cut costs, and can often add value to the customer as well.


This is special pleading, though, right? It being relevant to "any" large scale project really just means that projects can get big enough for it to matter, but how many jobs involve this family of software? How many interviews for positions directly related? Very few, I would guess.


I find myself having to think about efficiency at least a couple times a week. I'm working on database implementation, and in the query processor we have to consider all the time how to evaluate various things efficiently.


Taking this at face value, were any of the techniques you use in these tasks directly addressed during your interview there?


I would say it starts becoming important for any application that has multiple concurrent users. If it's a single user running it on some device and there's no shared resource (i.e. a back end), then chances are it's not going to be an issue.


"Or had to find the O(x^n) and replace it with O(x^2)"

The other thing that really seals the deal for me on this being an inferior interview question is that you don't need to have a clue what O(x^n) is to wrap some code in a simple time call, see that the code you think ought to run in microseconds is running in seconds, notice the stupid nested loops by visual inspection, and fix it. Self-taught programmers may not be able to say "O of exx to the enn", but that doesn't stop them from fixing it.
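
That "simple time call" can literally be a few lines; a rough Python sketch (timed() is just a made-up helper name):

    import time

    def timed(fn, *args, **kwargs):
        # Crude wall-clock check: enough to notice that something you expected
        # to take microseconds is taking seconds.
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__} took {time.perf_counter() - start:.3f}s")
        return result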

So... seriously, what good is the question anyhow?


I would recommend going through some undergrad CS data structures and algorithms lectures to any self-taught programmer. My process of reading code improved dramatically. And the big O concept is, once you wrap your head around it, an intuitive way to think about speed. Also, once you've timed your code and found the slow bits, you need to know how to speed it up; not all speed-ups are as simple as un-nesting loops.


Same here. Last week I was sent a Codility test, tried the demo, and failed miserably.

Then I noticed the test was expressing constraints using time/space complexity, concepts I was completely unaware of for my previous 15+ years in the profession.

So now I am reading about algorithm theory, face-palming at the realization that I have reinvented the wheel many times during my career instead of just reusing an algorithm someone already researched in a PhD.


What if it isn't stupid code? What if it is straightforward and simple and looks correct, but is just slow because it has to traverse a list instead of using a hash table or some other data structure? You can't get there by intermediate steps; you have to rip out the code that uses it and rewrite it with a hashmap. You can only do that if you know the space complexity and when a hash or a list is appropriate.


"You can only do that if you know the space complexity and when a hash and a list is appropriate."

You seem to be confusing "knows how to say 'oh of enn squared'" with "the loop gets slow when I iterate over a lot of things". One is generally a product of education, the other, merely experience.

The idea that a thing can only be learned in a classroom is perhaps in the top ten most pernicious ideas in the modern world, and probably one of the more surprising ones to show up in that list. You do not need special courses to discover that your code runs slowly, and as I've seen often enough, having had those special courses does not confer immunity against writing slow code.

Now, I am also a believer that formal education has its place, and if you are going to get a formal education in computer science, big-O analysis absolutely must be part of it or you are literally missing out on an entire rather important sub-discipline. But the idea that it's some sort of touchstone between Good and Bad programmers is just ludicrous nonsense. Slow code is slow. There are abundant tools that can be used to figure out why. If you can't work out why your O(n^3) loop is running slowly after a couple of years of practicing the art, you don't need formal education, you need a different job.


I've found that employees who are able to discuss the benefits of certain data structures and their associated time complexity are generally able to solve problems quicker than those who struggle to discuss these fundamentals. That said, the thing that matters most to me when hiring programmers is proof that they can write decent code.


My thoughts exactly; a bit surprised to see it on HN.


Are you going to profile everything for the entire nearly infinite range of possible input values, including all the pathological cases?


This is a very under-appreciated point. If you profile a program on non-pathological input, the profiler won't tell you what's going to explode later on when your program hits a rare case that you hadn't expected. Theoretical upper bounds don't have this problem.


It demonstrates that you have a passing familiarity with probably _the_ most fundamental tool of the trade.


The most fundamental tool of the trade is a profiler. The tool which is used in reality to find performance problems, unlike BigO, which is used in theory to find performance problems. Does BigO help? Sure. Is it the silver bullet people seem to think it is? No.


> The tool which is used in reality to find performance problems, unlike BigO, which is used in theory to find performance problems.

Sorry, but this is just wrong.

I can guarantee you that BigO is very often used not "in theory", but in practice, to find real-world performance problems. And not having "situational awareness" of certain commonly occurring complexity profiles can be a significant source for performance headaches and technical debt.

One trivial-seeming, but frequently occurring example: not knowing when to use hash maps.

In fact, many people solve performance problems (including both those where order-of-magnitude performance really is the most important factor, and various other kinds) not by using a profiler but by, you know, understanding the code and thinking about it. Profilers, while they can tell you a lot about certain kinds of performance issues, are still generally quite limited in what they can tell you.

> Is it the silver bullet people seem to think it is? No.

Of course not, and I've never heard anyone saying that it was, either.


It's not a silver bullet, it's only a model of the problem. If your profiler tells you some code is slow, you can model why it must be slow by using big-O. In fact, it's the standard way of explaining such things. Without it, you must spend your time babbling about special cases.

It's like, yeah, you don't "technically" need to know any 2+ syllable words to be a programmer, but you're really not helping yourself by avoiding them.


Big O took about 15 minutes to teach. Sure, it went on and on a bit because a rigorous class will introduce proofs, but you are absolutely right - it's a trivial concept, and often flawed in practice (a linked list is supposed to be faster for random inserts/deletions according to big O, but in practice it's almost always slower).


Honestly, I consider an instinct for complexity analysis the most important thing I learned in school, and the thing that I've gotten the most use out of. I don't know what case you're making here: are you saying that high-level architecture is so hard that choosing a map or a hashmap should be a coinflip, or the one you see first? Having had some criteria for making the choice makes my life a lot better when everybody is panicking because something is running like shit and no one understands why.


To me it was a bunch of rote memorization, just like a biology course. I never - never - have needed to know how bubblesort/heapsort/mergesort actually work, except to appease interviewers.

I'm not saying I'm pro writing-inefficient-code, but if you want to talk big-O during an interview, I'm going to roll my eyes about as much as if you asked me who the 19th president was.


I ask such questions at the end of an interview, but mostly to see the candidate's sanity/reaction or thinking process - a wrong answer would do; rolling eyes would not :)


Oh shit, my code is O(1), and looking at it in the profiler tells me that it's not a bottleneck (it says < 5ms). Yet this single function call somehow takes 78ms in wall time when it should at worst be 1ms (and even that is too much). The reason? JVM classloading/JITing. This is quite annoying when you have short-lived applications.


That's fine, as long as you _never_ need to trust said dev to do anything complex. It's fine to have mediocre developers perform mediocre tasks, but if you want more from them someday you may be in trouble.


That's not really fair; people don't grow unless they're challenged.

Some people really do never advance past a certain point, but a lot of people write someone off as mediocre when they're just still in the process of gaining skill. Also, they only assess ONCE, which is pretty bad if we're trying to establish how good you are forever and always.


That's true, but how do you discern between someone who has potential and has not yet been challenged, and someone who simply doesn't care? In my experience, the best will challenge themselves to learn new things on their own.


To me, complex and complex-ITY are entirely different matters. I want a smart programmer who can figure out really complex bugs (something you can't figure out from Google/Wikipedia), not someone who memorized the big-O performance tables of 8 different data structures (something you CAN look up on Wikipedia).


Exactly. I once identified memory leaks in a managed runtime implementation, worked around them, and showed that third-party networking libraries from the hardware vendor were causing irrecoverable crashes, but I can't spout off the runtime complexity of algorithms. The former saved a multi-million-dollar contract and my company's reputation; the latter is something I look up when I need it.


Optimally, I would like both. I agree that rote memorization is not a very useful skill, but I'd like to know that they can grasp concepts, and I find it hard to believe that there are a bunch of skilled devs out there with great potential who don't or can't understand algorithmic complexity. Interviewing is hard though; I'm not trying to say that any one question or metric should necessarily rule someone out.


I am sure the RubyGems authors thought the same way until somebody sped the process up dramatically because they could do space-time analysis[0].

[0]: https://news.ycombinator.com/item?id=9195847


Here's how you should reply: "Sorry, I don't have those complexities memorized. When I really need to look them up (which is nearly never), I refer to bigocheatsheet.com."


This can happen even on small systems. I once replaced an O(n^4) algorithm with an O(n^2 * ln(n)) one on an embedded target, which made a minutes-long user-facing process take seconds. The catch is that the original used a good algorithm, but made an implementation error which I caught in profiling. So complexity analysis is good, but the only way to get better at something is to measure it.


You should have an intuitive idea around complexity and when it makes sense to optimise.

I don't think, though, that in an interview situation you need to know off the top of your head that, for example, fastsortx is n log n in the best case. You should be able to reason your way through why it is faster than some other sort, though.


What the heck is fastsortx? This is another problem I have. I study all these algorithm books then when I get to the interview they ask something that either isn't in the books or they've come up with some nickname for it and expect me to know it.


It's probably a placeholder. Same as writing <sort algorithm>.


Even Google doesn't know...


That's how I failed my Google interview - they said they expect programmers for any position to have good knowledge of algorithms, yeah, well... :) Frankly I don't like trivia-style interviews.


I think it's easy enough to ask for this skill if the job would require the interviewee to apply this skill.

If your job is mostly frontend, yeah, you probably won't need to worry about this problem. But if you're hiring somebody to work on graphics? You better be doing complexity estimates in your sleep.


Exactly. Performance issues happen. Most of them can be solved by re-indexing a database or adding caching. If I really need speed at the algorithm level I can look it up.


Because in most cases n is a very low number. If you only have 10 elements the algorithm doesn't matter.


That guy in Google who screwed up Android thought just like you. Now, as the number of text messages stored on your phone grows, the entire system slows down. Such a pity that cretin was not screened out on an interview!


If you're a programmer, you like to solve puzzles, so think of passing the interview as just another puzzle! I mean, yeah, it shouldn't be, but here we are.


> If you're a programmer, you like to solve puzzles,

No -- we like solving problems. Especially those that have a legitimate context (i.e. are explicitly linked to some actual, real, business or social problem). And for which our skills are truly relevant and needed (specifically, for which no one has a readily available answer at the moment).

But made-up "puzzles"? For which the asker already knows the answer, so they sit back and watch us dance?

Not so much.


Homework projects could work well if the hiring requirements are small, but won't work well when a company has to hire say 25 engineers a quarter, which we had to. At that point, the process becomes too long, and it's easy to lose good candidates to a long process.

This method of interviewing has been around forever, and is going to be around for the foreseeable future. Nobody loves it, including the interviewers, but there just isn't a better way to do it at any sort of scale. Especially when there are much bigger problems to solve when you're running a business.

It's best to take the bull by the horns. I run http://InterviewKickstart.com, which is a bootcamp for preparing for such technical interviews. We do almost exactly what is in the blog post. It works. Spectacularly.


I was about to sign up, but then it asked for my phone number. Why do you need this information?


Simply because it is quite difficult to explain the concept in writing. What does it mean to have an intense bootcamp just to prepare for interviews? What's the method? etc.

I want to take the time to talk to everyone who is interested in the course. Because the concept is new, people have all sorts of questions. Can't possibly address all of them in writing. I don't have a team of salespeople and haven't spent a penny on advertising.

If you still prefer email, please feel free to send one. It's on the site. It may just take longer to do back and forth.

Thanks for considering!


> Simply because it is quite difficult to explain the concept in writing.

Then maybe it's not such a good concept.


Or possibly because it's a new concept, which takes more text to describe. More text than what can be included above the fold, and more than what most people read attentively these days.

After all, it's an 8-week intensive course, mostly for CS grads, that grills you hard, and is not cheap. Hence, as a consumer, I'd highly prefer to talk with someone. Not to mention, all educational institutes have an enrollment process that needs you to talk to a human.

At some point, when it becomes more common and well-accepted, we will condense it, but it feels a little too early to do so.


not showing the tuition somewhere up-front (or even in the faq) is another red flag...


I had it there at one point. But the concept is difficult to understand and hence it takes a bit to understand the pricing. I didn't want random discussions on pricing flying online, by people who hadn't taken the time to understand what the course was, and what the upside is.

Pricing is also nuanced based on whether you're an experienced engineer or a student, and whether you're taking the course remotely or on-site. Plus, there are recruiting firms who have access to our pipeline, who return a significant portion of our fees to you directly (we don't take a cut).

And those who are super curious, can always google it :-) In fact, most people who call have already googled for it before calling.

Rest assured, we're a real business, running classes every week. Batch after batch. Those who work hard are getting their work rewarded.


> candidates who have worked at a top company or studied at a top school go on to pass interviews at a 30% higher rate than programmers who don’t have these credentials (for a given level of performance on our credential-blind screen).

Welcome to Silicon Valley meritocracy.

And it's much worse for founders seeking investment, where there are no hard skills to test at all. It's almost purely about being the same class as the investor.

Which is why you get only upper class people funding upper class people, which then hire upper class people. The 99% only makes it in because there aren't actually very many qualified people among the "elite".

From http://paulgraham.com/colleges.html

> We'd interview people from MIT or Harvard or Stanford and sometimes find ourselves thinking: they must be smarter than they seem.


Problem is, there just isn't enough time to evaluate everyone who applies.

In my last job, I was a Director of Engineering at Box. Every job post we put up had hundreds of applicants (thanks to job boards which let candidates apply to jobs like adding items to a shopping cart). What do you think we, as hiring managers, are going to do at that point? We'll have to start forming biases. And if we have to start forming one, it's better to start with good schools and good companies. (I'm sure VCs have a more severe problem.)

Problem gets worse when you're hiring at scale, and you want to hire before the holiday season nears, because if you miss the season, the company is doomed. At that point, there is almost panic. A resume with brand names on it, naturally gets higher preference.


I dunno - part of leading an eng group is taking heat to do the right thing. I've been in the growth phase a few times now, been under tremendous pressure to rapidly build a team. Taking the time to find the right people is absolutely key - better to hold off than to get the wrong people in. Maybe it's because I've seen complete dipsh?t Stanford and MIT grads, or maybe it's because I didn't go to a marquee school, but I put a big line in the sand on that one...


Taking the time to find the right people is absolutely key, but the unwritten part of the rule is not to do that at the cost of the business. Better to hire engineers who are good enough than to miss the holiday season.

The DS/algos process doesn't necessarily find the wrong people. It's just the fastest way to find engineers who are good enough.

In some sense, they almost secretly WANT you to succeed, by "standardizing" the process.


By forming biases in favor of applicants from "good" schools/companies, wouldn't you wind up losing out on plenty of potential hires who could actually be better?

I don't go to a top school, but I've spoken with students in similar degree programs who don't do nearly as much as me outside of class to learn. In some cases, my breadth (and sometimes depth) of skills and knowledge surpasses what those students have and know.

It would seem unfair to give them a pass simply because they had the chance to go to a "better" school.

Not to judge, but just an honest opinion.


Of course you will miss some good candidates and it's unfair to them.

But as soham explained, it's a trade off. You might miss the best candidate, but you will probably find the second or third best candidate spending significantly less time.

It's totally unfair, yes, but it's obvious why it's happening and there's not much that could be done to change it. So while the candidate is losing on this one, the company is (probably) winning.


Yes, you do. But do you have a choice? The key part in soham's comment is "at scale." Obviously the optimal strategy from the perspective of finding the single greatest candidate is to interview everyone and throw out no resumes. But there is an additional time constraint. So you start throwing out resumes.


I don't know what other industry you have experience in, but this is fantastic compared to the rest of the world. In 'soft skill' jobs, I'd bet the house that credentials, prestige, and 'reputation' end up doing a lot more than a 30% higher acceptance rate.

Should we improve it further? Absolutely, but to pretend that this isn't better than other industries is silly.


I've worked for LA and NY companies (among others) and never seen anything like the elitism that exists in Silicon Valley.

Silicon Valley is mostly funded by a few elite institutions, so it shouldn't be a surprise that they fund elite VCs, which then fund elite founders (and hire elite employees).

The funding sources in LA and NY are much larger and more diverse, so the elitism is far more diluted. It's a market opportunity that SV investors are so biased. Crowdfunding with equity might totally upset the applecart at some point.


You've clearly never worked in finance...


Are you talking about tech startups or other firms?


> Which is why you get only upper class people funding upper class people

In my experience it's even narrower than that. In many cases, there is a pre-existing relationship between the investor/founders.

I once met (in a restaurant, because we had kids the same age who were making eyes at each other!) a serial entrepreneur. When we got to know each other, he confided that his investors were friends from school (some Ivy League school, I don't remember which) who would give him money for some idea; he'd start the company, then they'd find a buyer. He'd done this 3-4 times already, and he was about 40.

I know this example is one data point, but I've run across it in other situations, too.


We flat-out ignore credentials in hiring decisions.

If an elite school is intended to signal competence and brilliance in someone, then those qualities should shine through in a candidate without us having to know where they attended college.

Simply stated: we're hiring you, not your certificate.


Doesn't that support the fact that it is a meritocracy? Is it not reasonable to expect that top companies and top schools are more likely to employ people who have more applicable talent and skill?

At the end of the day, going to a top university or working at an impressive company is always going to be a huge and relevant signal. It's difficult to see a problem with that.


>>Is it not reasonable to expect that top companies and top schools are more likely to employ people who have more applicable talent and skill?

No. It's more like a club.

Join this prestigious institution X, and then you shall enjoy lifelong benefits of employment, higher-than-average salary, bonuses, stock, opportunities to travel, etc. Even if the person is actually the worst possible employee, or is barely productive. Merely having X on your resume guarantees you lifelong privilege.

To see how bad it can get, come see how it is in India. There are people who join the IITs (Indian Institutes of Technology), a sort of chain of colleges that is supposed to be the Ivy League. Who says so? They themselves, because saying anything otherwise means putting your own career in danger. There are coaching institutes that train you just to pass the entrance exam. It doesn't matter what you go and do there; in fact, from there on you may do nothing in your life at all. The whole purpose of getting into those colleges is to enjoy the lifelong privilege of having access to alumni who will ensure you a good career regardless of your performance.

The day you remove the real metrics of merit and put in artificial flags, people will do no real work and will just try to gather as many flags as they can.


Right, I'm shocked that everyone can't see that it's a club, to fund people who went to your school over others. I bet the original guy who posted that was in the club :-)


No, because

> for a given level of performance on our credential-blind screen


While we're in the business of doubting things that measure performance, why do you trust the credential-blind screen?


Gotta say, this is incredibly discouraging for someone like me.


On the positive side, no one can stop you from making lots of money using the internet. And if you have something that really takes off, those same investors will line up at your door.


Right, but it's still discouraging to be forever branded a "2nd tier engineer/human/etc" because of where I went to school (unless I get into a good grad school).


5 years into your career no-one normal cares where you went to school.


This is very specifically the case in Silicon Valley, the insular center of the world, and not the case elsewhere. Most companies I've interviewed at couldn't care less about where I went to school as long as I could build their stuff.


It should be encouraging because you're being told the truth and being spoken to like an adult for once in your life.


You know, fair point. Far too many people spread the meme that school doesn't matter.


Naw - places that hire like that are myopic, shitty, and doomed, so you don't want to work there anyway. If you're a good coder and not a complete dick, you can build a nice career for yourself.


Yup, no doubt. There are lots of good areas for tech that aren't elitist like this, but compensation wise I don't think any can touch the top 3 (Seattle, NYC, SFO). That's just what I've heard ¯\_(ツ)_/¯, I don't claim to know much.


You're assuming that the interview is measuring skill correctly in a few hours, but the candidate's several hundred hours of course project work and exams at the forefront of his life over several years are irrelevant. This seems like a big assumption. Why do you think your whiteboard regime measures skill better than a well-designed set of CS classes?


This is different than other fields how? It's the same old story since the beginning :\


It's different because other fields haven't built up an entire mythos about how it's way more meritocratic than everyone else.

Tech prides itself in being more objective, more rational than other fields but in reality is no different.

In other fields the effects of class and network are openly acknowledged; in tech, to even address the issue you first have to punch through the mythos.

In other fields the open acknowledgment of these issues has resulted in some action to de-bias the system (see: blind auditions for orchestras, residency matching for doctors). These efforts are imperfect, but nonetheless still way further along than anything we have.


No, in reality it is substantially more meritocratic. In most professional/white-collar industries, not having a college degree would wreck your career. The fact that it only merely disadvantages your career in Silicon Valley is not evidence SV is not meritocratic.

Don't let the perfect be the enemy of the good.


> And it's much worse for founders seeking investment, where there are no hard skills to test at all. It's almost purely about being the same class as the investor.

yeah I figured this was how things happened :/

but this is just the reality when you let people feel free to choose and make their own decisions: they are going to find safety in numbers and in people similar to them.

This explains the disproportionate lack of African Americans and Latino Americans in tech, and the 'Bamboo Ceiling' that many Asian Americans experience in the corporate and academic world, where Ivy League schools cap the number of Asian American applicants. Jewish Americans were also capped and barred from attending the Ivy League a hundred years ago, but not anymore, so this probably means that change will happen soon (even if it took a fucking century for that racist-ass mentality to change).


Another tip which I give: Interviewers vary widely in how much they care about whether your syntax is accurate, whether you handle invalid inputs, and whether you write unit tests. It's really useful to ask the interviewer whether they want you to worry about those things.

If you handle invalid inputs for an interviewer who doesn't care about that, they're going to be a little annoyed by you going more slowly than needed. If you don't handle invalid inputs for an interviewer who does care, then they'll think you're careless.


Is there any reason an interviewer should care about syntax, etc?

When I interview, I ask for pseudocode - I don't really care what language the interviewee uses, I do care that they can get their point across.


This happened to a friend. He confirmed that pseudocode would be acceptable, but then as he was writing it out the interviewer got on him about not terminating lines with semicolons (I suppose the pseudocode looked C-ish). So yeah I'd say make this clear.


Ack. I'm not sure how I'd react to that.

I interviewed quite a bit last year (on the hiring side). I was really surprised by the variation in pseudocode written by the candidates. Most wrote something JavaScript-like, a few stuck to mostly proper Java or C. But then one dumped a giant web of crazy on the board (but still made his point) and one wrote something that looked suspiciously like COBOL - still not sure if he was trolling me.


"and one wrote something that looked suspiciously like COBOL - still not sure if he was trolling me."

That's brilliant. Now I have to learn myself some COBOL just for that.


You will need to write a lot before you even start solving the problem.


My pseudocode used to be a sort of relaxed Haskell, because it's closer to how I think about a solution... but some interviewers rejected it as not resembling any kind of code, so now I use something imperative and Pythonesque, which hasn't gotten complaints so far. The sad thing was that in some cases the Haskell "pseudocode", unlike the Python, would have actually compiled and solved the problem quickly (within a factor of ~4 of C), and it took me about a minute to write.

Unfortunately I think Haskell is disproportionately well-suited to these kind of toy problems, so being able to answer interview questions in Haskell doesn't tell the interviewers much except that you think yourself especially clever.


> Ack. I'm not sure how I'd react to that.

Thank them for their time, leave, move on to next company.


I always write a weird mashup between python and C for some reason haha


I wrote some Ruby in an interview. It was so terse, I had to explain to the interviewer (who favored Java) what the code did, and why it was linear instead of O(n^2). That was actually kind of fun.


If your code is sufficiently terse that it's not very understandable (such that the complexity isn't very understandable) surely that's a realistic red flag?


If it's idiomatic Ruby (which some, like me, are not familiar with), I think it would not be a red flag if they could explain the details of what the syntactic sugar represents, and why its runtime is what it is.


My shipping boxes hold an even number of widgets, but I "have to" sell odd quantities and those need expensive mil spec styrofoam peanuts added to fill the hole. Here, have an array of possible shipment sizes. Given that array, if its shipping an odd number of widgets I wanna add an additional half widget shipping charge.

newshipping = oldshipping.select{|i| i % 2 == 1 }.map{|i| i + 0.5 }

My ridiculous fictional writing about shipping widgets is way more confusing than the idea that you can select and then chain right into a map.

This probably looks really weird to a java guy but its not really all that mysterious. I wonder what that looks like in Java.


In Java it would look like this:

List<Double> newshipping = oldshipping.stream().filter(i -> i % 2 == 1).map(i -> i + 0.5d).collect(Collectors.toList());

a bit more verbose but the essential chaining idea is there...


The above, only if you want to discard (filter) orders with an even number of things.

Also I'm not sure about the use of floating point here...



"Shame! You're not terminating your pseudo-code with semicolons!!!"

I'd respond to that by drawing one huge semi-colon that spanned all 20 lines of pseudo-code.


I've had people interview claiming to know X and then not code in X correctly. So... that's a red flag.

We allow interviewees to pick their strongest language. But if you end up picking something that doesn't exist, well, you aren't earning yourself any points.


> But if you end up picking something that doesn't exist, well, you aren't earning yourself any points.

I don't know about your personal interviews, but I'd find this reasoning slightly strange if I were being asked to write computer code on a whiteboard. I'd find it much less strange if I were actually handed a laptop to write a functioning program on.

Expecting perfectly correct code on a whiteboard seems to me to be a slight abuse of the medium. Whiteboards and chalkboards specifically exist to sketch things out in an adhoc fashion, often in a collaborative and easy-to-edit way.


I don't think he meant perfect code. But I've had candidates claim their main language is Java, but were unable to write a proper for loop or know basic data types like arrays or ArrayLists. I've met such people with PhDs and impressive CVs.

Interview enough people and you'll encounter some that are very convincing until you dig down into details. So you have to dig into details.


To "not code in X correctly" is ambiguous, but I assume/hope the parent poster means that someone makes fundamental, non-syntax errors in their code - in C++ this would be something like returning a pointer to an object that's on the local stack.

If you're trying to filter for people who can be productive in a particular language, as opposed to anyone else, that's what you need to look for.

If you let the candidate pick their strongest language and they still make fundamental errors, you know they're not going to be immediately productive in any language.


I wouldn't pass, then, since I live in a post-2000 world and am used to letting the IDE handle the nitty-gritty details while I focus on the actual meat of creating software.


I've had this problem as well. I go back and forth between Obj-c, python, javascript, matlab etc. so much without spending a significant amount of time on any one language that I often feel intellectually deficient because I don't know the nitty-gritty details of any of them. Curious to see what others think - is this something I should stop and focus on? Or in today's development environment is it considered acceptable to have to occasionally lookup language nuances in any given situation?

For example, I couldn't tell you off the top of my head how to test for null in Python. I'd assume it'd be if(obj), but after a quick Google search it seems like if(obj is not None) would be the correct answer.
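
(They really aren't interchangeable, for what it's worth; a quick illustration:)

    items = []

    if items:                # truthiness: False for None, but also for 0, "", [] and {}
        print("non-empty")   # does not run for an empty list

    if items is not None:    # identity check: only cares whether the object is None
        print("not None")    # runs: the empty list is falsy, but it isn't None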


I used to be in a very similar situation, but I was convinced otherwise by this article [1]. The fact of the matter is that, yes, it's easy to become familiar with a variety of programming languages, but I think you actually learn a lot more when you double down on a language (platform, ecosystem, etc.) for a long period of time.

Quoting from the article:

> Leaky abstractions mean that we live with a hockey stick learning curve: you can learn 90% of what you use day by day with a week of learning. But the other 10% might take you a couple of years catching up. That's where the really experienced programmers will shine over the people who say "whatever you want me to do, I can just pick up the book and learn how to do it." If you're building a team, it's OK to have a lot of less experienced programmers cranking out big blocks of code using the abstract tools, but the team is not going to work if you don't have some really experienced members to do the really hard stuff.

In areas that I'm just learning or dabbling in (for me, Objective-C), I look things up or reach out to experts. But there are areas where I want to be the expert that others reach out to.

[1] http://www.joelonsoftware.com/articles/LordPalmerston.html


http://exercism.io has helped me to write more idiomatic code (submit a thing, get comments, refactor, and also comment on other people's code). Maybe it'd help you for the languages they have examples for?


Thanks, I'll have to try that out.


I would measure how good or bad your strategy is by your market success or happiness. Do whatever makes you happy and employable.

Personally, I love the idea of being a generalist. But at the end of the day, you gotta code and code good, specialist or not.


I don't think it matters. No one wants to see imports/includes on a blackboard, and they don't care much if you remember whether the method on some object was called len(), size(), count(), or length.


I don't get why. When I'm organizing my thoughts in code, I normally write something Pythonish, but not really any real language. Stopping to think about the correct syntax does not make me solve the problem any faster or any better, and since I am writing on paper or a board, I am going to have to rewrite everything anyhow. Maybe I'll even have an IDE to do most of that meaningless effort for me. It's like lazy syntax evaluation: don't do it until you have to.


Lately, when I'm stuck, I've been writing a comment as a placeholder for real code, stating the problem in simple English. Going through the act of putting it into words really guides the code. It'll usually look something like:

# the problem is that our query only matches rows where the ID from foo table equals the ID from bar table, but we want rows from foo table that match the first part of our query regardless

This also makes it easy to ask for help, since now you've turned your "it no workie" into a question which you could ask another person on your team or in e.g. IRC for help with. They might then have additional questions, but I've found more often than not that simply getting a few minutes with someone else is enough for them to bring not-your-entrenched-perspective to the problem and hand you the (sometimes super obvious) solution in short order.

TL;DR https://en.m.wikipedia.org/wiki/Rubber_duck_debugging is great


I've had interviewers look at an unweighted keyword digest from my resume, apparently without reading said resume (which clearly states my current skill focus on the top, which has evolved quite substantially over time). And then start "grilling" me on a language that appeared on a job description from 10+ years ago.


Take out any of that old stuff. It's not necessary. Your resume should fit on one page, two at absolute most, and only include things that you would expect to be grilled on. If you are annoyed about being tested on something on your resume, take it out.


It's interesting that different companies will want different things on a resume. This is why no two jobs I've applied for get the same resume. If they want lots of experience in a lot of different things, sometimes they DO want the laundry list of acronyms (make sure you know what they all stand for). You might not even get through the first selection if they use XSLT heavily and you didn't think it was relevant that you had worked with it before on a project.

I've also had interviewers rip the other pages out of my resume in front of me, but everyone is different. At the end of the day, don't feel too bad about not getting an offer. A lot of it is luck.


> I've also had interviewers rip the other pages out of my resume in front of me,

Really? That's incredibly rude.


I see what you're getting at.

I still think that in general, people who can't be bothered to read important documents, and instead just eyeball them for keywords (and start shooting off questions accordingly) -- aren't my cup of tea to work with, anyway.


I understand your point, it drives me crazy in interviews as well. But by the same token when I'm interviewing, I want to be able to grok someone's resume as quickly as possible - I don't want to see stuff that they themselves don't think is relevant.


There's no need to be "grilled" on something you did 10+ years ago (unless it's a requirement of the job, of course). Perhaps a "have you used Pascal since leaving ...?" would suffice.


A resume should reflect your current skills and abilities. You should feel free to leave in old positions, and the interviewer can ask about it if they want, but you should not leave in technical things you don't want to be asked about.


It depends on what "not code in X correctly" means. If they missed a few syntactical things, it's fine. If they're obviously still "thinking in a different language", then no. For example, if you ask them to loop over a list of items in Python, they shouldn't write:

    for i in range(len(items)):
        do_stuff_to(items[i])
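
For contrast, a quick sketch of the idiomatic version (reusing the items / do_stuff_to names from the snippet above):

    # Iterate directly over the elements; no index bookkeeping needed.
    for item in items:
        do_stuff_to(item)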


Well, even though it's not idiomatic, it still works. A good follow-up question is whether the candidate is familiar with iterators.
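
For example, a candidate who is comfortable with iterators might point out that the for loop is essentially sugar over the iterator protocol; a rough sketch of the equivalence:

    # Roughly what a for loop does under the hood.
    it = iter(items)
    while True:
        try:
            item = next(it)
        except StopIteration:
            break
        do_stuff_to(item)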


Right, the point is that such an answer would reveal a more superficial understanding of the language than they might have earlier implied.


I ask questions about the language to determine if they know the language. Correct syntax isn't going to show me you understand prototypal inheritance.


>> I do care that they can get their point across

This is kind of the point, right? Most places I've interviewed are far more interested in your communication skills, logic and thought process than writing perfect code on a whiteboard.

Many of my friends have failed to see that this is actually the reason they have you write code on a whiteboard.


I constantly hear that interviewers are really interested in seeing how you think and communicate. Yet in my experience, if you solve the problem exactly the way the interviewer is looking for, in a reasonable amount of time, you pass 9/10 times. If you don't, you fail.


Friends who are candidates, or friends who are hirers? I see that problem on the hirer side.


You should be able to carefully review a few lines of code for syntax errors. This matters because incorrect syntax can be ambiguous about what it's meant to do.

You should not be expected to write syntax-error-free code on your first pass while solving a problem, without machine assistance.


It sort of makes sense. If someone knows a language well, they shouldn't have much trouble writing it syntactically correctly on a whiteboard, especially in languages with simpler syntax, like Ruby vs., e.g., Scala.


That's far less true if you use several languages on a regular basis. .size, .length, .count: which one is used in _? Does it take () after it?

Interviews are often based more on what the interviewer knows than on the project / resume.

Now what happens when someone asks about a language you haven't used in 3 years? Well, it gets fuzzy. Ramping back up on an old language might take a few hours, but that's meaningless in terms of a job.


In his "Programming pearls" book, John Bentley stated that he always first writes non-trivial algorithms in pseudocode, and only then transforms them to the destination language.

The point is, it's much easier to focus on the idea of the algorithm when writing it down in pseudocode, without having to worry about c / c++ details that obfuscate the idea.
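
As a small illustration of that workflow (binary search is just my pick of example here; the sketch is mine, not Bentley's), the outline can start life as comments and then be translated:

    # Pseudocode first:
    #   keep a lo/hi range over the sorted items
    #   while the range is non-empty:
    #       compare the middle element to the target
    #       narrow the range to the half that can still contain it
    # ...then the Python translation:
    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # not found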


I think this is less of an issue when you code in Python, where the code already reads somewhat like pseudocode.


I wouldn't call Ruby's syntax simple. Elegant, yes, but not simple. I'd consider myself a pretty seasoned rubyist, but my IDE catches syntax errors for me all the time.


Agreed. I know Python well and Perl and Java before that. Ruby code still looks unusual to me.


Except when you're used to working in an IDE that generates a lot of the syntactic cruft automatically.


Good point, I've encountered this. Some candidates unfamiliar with the process may not even realize the interviewer wants them to ask. I didn't know when I started out; I used to think a good interviewer would specify what they want, but that may not be true. It would be nice to remind candidates that they can ask for clarifications, not just about the question but about testing and such.

I know some interviewers may be interested in helping, but it's important to note that assholes exist, especially at larger companies. There can be head games and assumptions made where they needn't have been. Even when I've been hired, it can feel like if I'd done it again I might not have been. Try your best, but don't be too upset if it goes badly either. Similar questions often come up, too; it's actually amazing how many questions there are about linked lists and trees.


I also think that it's not obvious that the interviewer is doing the wrong thing here. The claim "good programmers should always think to guard against invalid input" isn't ridiculous on the face of it: maybe checking for valid input is a sign that they're careful and methodical, and of course you want to hire careful and methodical people!

Or the other way around: I can imagine someone thinking "this person spent ages on checking for invalid input; I bet their code is always bloated and ugly".

The problem is that programmers do this one way or the other based on personal preference, not because of actual differences in ability. Once you know that, it makes less sense to care one way or the other.


If you think my code's checking carefully for the validity of the input makes it bloated and ugly, you just failed my interview.


There's a correct place for every class of input validation. The point is that you don't want multiple levels of validation for the same thing. Most of what passes for "defensive coding" is superfluous. For example, if you're passing a pointer into a function, you don't need to reflexively null-check it at every level; null checks are important, but they belong at the place where the pointer can actually first be null.
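
A small Python sketch of that split (the function names here are made up purely for illustration): validate once at the boundary, and let internal helpers assume their input is already valid instead of re-checking at every level:

    def handle_request(raw_payload):
        # Boundary: the one place this input gets validated.
        if raw_payload is None or "user_id" not in raw_payload:
            raise ValueError("payload must include a user_id")
        return _process(raw_payload["user_id"])

    def _process(user_id):
        # Internal helper: assumes the caller already validated the input,
        # so no reflexive second round of None/shape checks here.
        return {"user_id": user_id, "status": "ok"}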


If I'm at a whiteboard and the interviewer cares that my syntax is accurate I'm probably wasting my time.


You can always preempt the whiteboard issue by bringing a laptop along: "Hey, I'm a lot more comfortable writing code on a keyboard with an IDE. Let's program this together in a text editor instead of on a whiteboard."


As a candidate, I feel it's fair to stand up for this as well. If the interviewer wants pseudocode, I'm happy to whiteboard it. But I've been whiteboarding before and had the interviewer say, "That code wouldn't compile, you're missing a bracket." So I said, "If you want code that compiles, bring in a laptop and I'd be happy to put it into Visual Studio [it was a .NET position] and have it be syntactically correct; but if I'm whiteboarding, it's going to be pseudocode."


"This organization uses whiteboards as their Code Editor?" - did not get the job.


But be honest... by that point, did you really want it?


good point.


Some places I interviewed did mostly whiteboarding, but had a problem that required producing working code. One place had a standard desktop set aside for me, with the most popular IDEs pre-installed. Another had me bring my laptop.


What was their response to this? I think it's perfectly valid to state, and it may even shift the balance of the interviewer/interviewee dynamic, but I can also see other people reading it as obtuse. Then again, pointing out a missing bracket on a whiteboard (in a non-constructive manner) is pretty obtuse too...


It was the last interview of the day, and I was tired and had already decided I was almost certainly going to decline any offer, if given. So, honestly, I probably said it with a bit of an edge, and that was inappropriate of me.

Nevertheless, the interviewer was gracious and replied, "Fair enough" and stopped nitpicking my brackets and semicolons.


I got asked to find the intersection of two squares for a Django coding job. I pointed out (after giving a solution) that it had absolutely nothing to do with Django.

I was told they thought I might be difficult to work with.


Nice! Worth doing these things even if only to personally experience the boundaries of good/bad interview practice.


Cool, since you've opened <preferred text editor> and you have a dev environment: write a compiler/parser in BNF for the editing commands your editor supports. Assume non-standard encodings are possible for the key presses. Here are the examples for vim/emacs; it should work with both.

That could go horribly :)


Indeed, having the candidate use their laptop gives the interviewer a valuable signal, too: whether the candidate has a coding environment they are comfortable and fluent in. Are they stumbling on their editor, or is it an extension of their mind?


So, what about those who only have a desktop? If you want candidates to code on a computer, provide one. Although that has its own issues as well.


Speaking as an interviewer, don't do this to me without prior discussion.

Being able to discuss things on a whiteboard is a necessary skill for working in a co-located office. This includes pseudocode.


That's a double edged sword. Now, your code has to compile...


OK, but being effective at getting code to compile is an actual thing that every programmer has to do all the time.


When I interview, I usually ask for pseudocode first, then have the candidate write it out in actual code. Think of it like writing a short essay: outline first, then write the essay with proper grammar to the best of your ability. There will be typos and grammatical mistakes, which I don't really care about, but I do want to see your style and how you use the language to express what you want it to do.


Great point. I tend to be the person who realizes that interviews are stressful, and I'm not going to hold it against you if you miss sanitizing your input or similar issues. But I will call it out with a question like "Well, what if you get X?", at least verify that you know it's an issue, and then let you add the code to accommodate it.


And the most humorous interviewers are those who stare at you and slowly answer "uh, mmm, whatever seems fair..." when you ask these reasonable questions.
