Algorithmic Interviewing – Optimizing to Hire the Wrong Developer (medium.com/p)
46 points by rcruzeiro on Aug 31, 2013 | 65 comments



"Being a great business web developer and being a great algorithm developer are not mutually exclusive, but they are also not interdependent. The type of developer that would perform well at this kind of challenge is potentially a recent graduate, fresh from an algorithms class or one that is more focused on micro-optimizations than the big picture."

Oh, good grief. Being able to answer basic algorithmic questions in an interview isn't a high bar, folks. It's not as if a basic tree-traversal question is going to trip up everyone but Don Knuth.

The biggest danger you have as an interviewer is allowing a Stealth Incompetent onto your team. These folks all have GitHub profiles, points on Stack Overflow, fancy blogs, webpages, etc. They might even have nice-looking, public-facing projects. You can even argue that these people Get Things Done (for some value of "things"). But they're still bad hires.

Stealth Incompetent programmers are great at using other people's code. They can install gems, download jQuery libraries, and stick them all together. They can make websites with Rails or Django or some other framework, and know enough to be able to piece together functional "products" from parts. All fine. But unlike actual engineers, Stealth Incompetents fall on their face the first time you ask them to do anything that requires taste or independent thought. They're the people who will introduce a horrible dependency on a bad gem, or architect a solution that takes quadratic time, when they could have trivially done better with a tiny bit of thought.

Algorithmic questions are designed to catch incompetent people before they can infiltrate your team. The "relevancy" of the questions to the day-to-day work is immaterial.


Why limit a "Stealth Incompetents" to one that is a "web developer"?

A person recently graduate from the U. Perfect scores in algorithms and that stuff.

Now, is in a company. That make web things. Don't know databases well (if any). Don't know JS. Don't know html5. Don't know CSS3. Don't know django, or ruby, or whatever.

But know BIg(O)! Know how do a Black tree!!

And that is almost useless in his job!

You know what is that? A incompetent.

Competence is the match of skills to the tasks.

You will say "A person that know algorithmic stuff can learn web development easy!" (I will say: Why think a web developer can't learn how build a algorithms?)

Real-life disagree. People are not equally good across the full spectrum of skills.

Thinking that "web development" is trivial in contrast with "real engineering" is wrong. Both are hard. Requiere different skills and mind-sets.


I have been doing this a long time, and I have never met someone who knows computer science who has had any difficulty picking up a web development framework. This stuff isn't rocket science.


FWIW, I was hired by Google in the depths of the 2009 recession because "You have both solid front-end skills and deep algorithmic knowledge, and that's not a combination we see very often." I work with a number of Ph.D.s from top universities who are doing cutting-edge machine-learning and ranking algorithms. Many of them couldn't code a UI if their lives depended on it, let alone stay up to date with emerging web standards, steer around browser quirks, debug obscure JavaScript or CSS corner cases, or pay attention to client-side latency.

Good web development is harder than it looks. I've done some pretty impressive algorithmic stuff (e.g. ported Arc to JavaScript, wrote a Scheme interpreter in Haskell, built a JIT in LLVM, coded a conformant HTML5 parser in C), and I'd say that on a day-to-day basis, web development requires the largest body of knowledge and the greatest ability to make trade-offs in your head.


I believe the core of what you're saying: it's fairly rare to find someone who is algorithmically excellent, and who already knows all of the details of web dev. But I think that's largely a matter of interest and specialization, not capability.

But I stand by my larger point: I wouldn't hire a web dev who wasn't fluent enough in basic CS to pass an algorithm interview. They don't need to be able to design machine learning algorithms, but things like basic algorithm analysis are essential.


I don't think we really disagree: I also wouldn't hire a web dev who wasn't fluent enough in basic CS to pass an algorithm interview. Once you get past jQuery and need to do your own DOM manipulation, it becomes really important to have your tree traversals down. And webdevs who don't realize that NodeLists are actually lists (with O(N) .length calls) cause a world of pain with their `for (var i = 0; i < element.children.length; ++i)` quadratic loops.

But I thought what started this thread was mamcx's thought question about why we don't apply this the other way around: when hiring data scientists, why do we not require that they have a basic fluency with web UIs, at least enough to throw a table and a few images up onto a web page and show/hide some details? In my experience, what separates the best data scientists from the worst - and there's an order of magnitude difference in productivity between them, at least; enough that a poor data scientist will never come up with a useful result - is the ability to quickly visualize their data set and understand its "shape". They need to know the distribution, be able to drill into a few examples of raw data, and come up with the top X or so results by whatever signal they're studying. And the easiest way to do this is by generating a web page with a few charts, a few examples, a sortable table, etc. Nothing terribly fancy, but having the chops to do this on your own makes you much more effective than someone who knows the theory but can't get their hands dirty with actual data.


Yeah, I think we're on the same page. Your traversal example is a good one.

I also agree with you that web engineers should be at least functionally competent at UI development. But where I depart from mamcx is that I'd still choose the algorithmically talented candidate over the one with UI skill and no algorithmic knowledge. I think that learning web UI development is easier than learning the fundamentals of computer science. But maybe I'm just a snob. :-)


Where do you even look for these types of jobs? I started in algorithms and business logic, and over the years have been expanding through the front end, all the way to photoshop. I can't design worth much scratch, but once a designer gives me a few pictures I'm well skilled from there until we hit database optimization.

But most jobs I've worked at and seen have two very separate categories: 1) Front End Developer, 2) Application Tier Developer, and they don't expect or really allow much crossover between the two.


The best place to be if you want to learn both algorithms and frontend webdev is a startup, because they're so short on people that you'll basically have to do everything. All my jobs before Google were startups, and I suspect that if I ever left, all my jobs after would be too.

The other important thing I did was to never let people stop me from exploring other stuff on my own. Most of my algorithmic experience was actually on my own initiative, either nights-and-weekends side projects or 20% time, although I've had a couple projects at Google where I do backend stuff. Compilers are cool because they're largely self-checking: it's immediately apparent if you don't understand the concepts, because your programs will give wrong answers.

There's a very good reason why the job descriptions are separate: they really are separate skills, and it's fairly rare to be good at both. But not impossible, and there's certainly less of a difference between frontend & backend dev than there is between backend dev & leadership.


Wasn't really looking for a place to learn. And yes, that's why I've largely worked in startup companies. Was just wondering if there was anything else out there I was missing.


Along with other responders to your post, I have dealt with plenty of really smart people who cannot program. I don't mean 'pick up a framework' - learning an API is right up a clever person's alley (I frequently carefully distinguish between clever and smart, and am doing so now). But can they string together nested logic, design in a way that minimizes dependencies, is their code readable, is it testable, and so on? So often, no. I think the ability to craft high-quality code and to act as an empirically oriented engineer is a somewhat rare skill, and one that is quite orthogonal to proving the correctness of Prim's algorithm, or remembering how to implement Union-Find efficiently.

Furthermore, it is not necessary. I have worked with plenty of people that I would work with any day, in any project, that had no real algorithmic chops. Oh, they may know that databases use something like B+ trees, but they'd be hard pressed to describe the data structure, let alone implement an operation on it. But they could, if they needed to. Half an hour with a book, a few experiments, and they'd be off to the races.


"I have dealt with plenty of really smart people that cannot program."

Straw man. You obviously have to make candidates prove that they can code, in addition to knowing algorithms.

That said, there are people who know how to use a database but can't even explain a B+ tree. That's a combination of laziness and incompetence that shouldn't be tolerated. You can't possibly understand how to build performant database systems if you don't know how they work.


Not only is it possible, it happens all the time.

That's because the abstraction level of a database engine is different from that of a core language.

It's possible to build a performant database system without understanding what a B-tree is, or how it differs from a red-black tree or a hash. (I suspect, from my experience, that a lot of DBAs and database-focused developers don't have much clue about that low-level stuff.)

In fact, it's not only possible; building a good database is exactly the job of the people who build good databases. But how you achieve that is different from how you build a performant algorithm (it's more about how you lay out the data, how you index it, how you build the queries, analyzing the plans... but mainly, how you lay out the data).

In my life I have never, ever thought of the low-level implementation of an index as a key point in how to build a database system.

(However, I would be glad to hear of an example where this could provide a huge advantage.)


I know it's possible to build a performant system without knowing how it works; it just isn't reliably likely.

There's also a difference between "building a database" and "using a database". I'm not suggesting that people who write SQL should be able to write a DBMS (though that certainly helps) -- just that they should know, for example, how a B-tree index will most likely perform when joining two large tables. Or how storing a text column is different from storing a string column. And so on.

I've spent a lot of time cleaning up the messes of people who can write SQL -- sometimes, on large, heavily-used systems -- but had absolutely no idea why their multi-way joins were slow, or why doing LIKE queries over a large text column is a performance hit. These are stupid problems that anyone with basic CS knowledge can understand how to avoid.

Aside: this is also the reason why fans of certain trendy technologies (...rhymes with bongo...) are routinely pilloried for making dumb technical decisions. Those who don't understand CS in theory are doomed to learn it in practice.


Anecdotal: just last week a CS graduate at my work produced Python code so horrible that our tech lead refused to comment on it. Have you heard about storing an app's - no, an app part's - settings in a __builtin__ module? And that's the tip of the iceberg in this case :)

Well, being a programmer, I am able to answer both simple and somewhat more complicated CS questions, although I worked hard on my own to be able to, as I have no CS degree. The reverse is not true - these CS graduates cannot produce maintainable code. At least that was so in the few cases I witnessed, so of course it's probably not a rule, but it certainly happens.


This.

A "simple" thing like the kind of language you used to learn first have a huge impact in how you code or solve problems.

And if, for example, people learn first C, then try to workaround the limitations of it... in python.

I learn first foxpro. And my personal experience is that it give me a huge boost in database/data manipulation than the regular developer. To this day (I do integrations between several ERPs made with different languages and databases) the system that I have meet with people with strong C-like and algorithmic stuff are qualitative worse, by a LOT (take in account, is not normal to meet GREAT GREAT developers, so YMMV), that people that start with DBase languages (yes, DBase/Fox was HUGE in my country ;)).

Is like this. If a person that learn by the CS path have a problem, inside his C/C++/Java:= Amazing. But when the same problem is inside a database engine :=Not amazing.

Is weird! Some folks have tell me I'm a great developer (I know I'm not. I'm very average) because I know SQL! I could figure how solve stuff with Sql/Sql languages naturally, but then... I'm inside C/C++/Java and I lost if need to do a black tree...


I once worked with an individual with a PhD in Mathematics who wrote some of the most god-awful code you could possibly imagine. In an algorithmic test, he would have bested anybody I can think of.

Intelligence and usefulness at tasks are incredibly complex topics for discussion. If you think you can reliably go from A to B, you're dead wrong. The only reliable metric is, if you want skills A, test for A. If you want skills B, test for B.


Indeed. You want people who can code, and who know algorithms. You must test for both.


Nice counterpoint, but you are making some assertions here that don't seem to follow -

So you say there are developers who are good at putting Lego bricks together to solve business problems, but who introduce bad, unmaintainable code because they are bad engineers.

You say algorithms questions are a good way to identify them.

To me, algorithms questions show that you studied CS in an academic setting, and listened in class.

People who introduce bad code do so because:

* They 'don't have time to do it right', the root of this being that they lack the confidence or judgment to say a task will take longer.

* They lack experience maintaining code, or haven't learned the skills to write maintainable code. This happens when people have mostly worked alone, and can be rectified by working closely with and under someone good who reviews their output for a while. CS school does not teach this.

* They 'have not heard of' things that 'should be' common knowledge, e.g. unit testing. Fresh graduates often lack 'common knowledge', as do people who worked a long time in a role that just used some specific, older stack.

With all these problems, the way to fix them is to properly monitor and understand people on a team, but also they must have the humility to take advice. I'm guilty of identifying too much with my code sometimes, but we've all seen people who get very very defensive.

Look, I'm all for discriminating against people who didn't study CS. It keeps my wages up, right? But if you are trying to solve the problem you describe, surely identifying the actual cause of what is bad about these candidates is a better idea?


"People who introduce bad code do so because....they 'don't have time to do it right'....they lack experience maintaining code, or haven't learned the skills to write maintainable code....they 'have not heard of' things that 'should be' common knowledge"

Those are all things that happen. But you're missing the most dangerous one of all: programmers who just don't understand how computers work. It's a surprisingly common problem, and it's a problem that algorithmic questions are especially good at identifying quickly.

Hire these people at your peril, because while they are fine for simple tasks, they generally have no idea why their code is fast or slow, have no idea why their programs eat memory like a hog, and no idea how to begin to break down a difficult problem in a way that a computer might be able to handle efficiently. Give them a hard problem, and you might get something good....or you might get total garbage. It's programmer roulette.

Programmers who don't understand computers are terrifyingly bad hires, because they can do the most damage before you catch them.


I would argue that, say, a recent graduate who can absolutely nail algorithmic questions would be the most likely to screw up on the big ticket items that matter (infrastructure, taste, independent thought) precisely because they have zero real world experience.


"a recent graduate who can absolutely nail algorithmic questions would be the most likely to screw up on the big ticket items that matter"

Why do people keep implying that only new grads understand algorithms as if it's some kind of fact? In my experience, new grads don't tend to know algorithms any better than they know anything else.

Good engineers keep learning about algorithms because the knowledge is timeless.


Ridiculous. Good engineers solve engineering problems. The huge number of successful engineers out there in the world solving problems without remembering their CS course notes tells you everything you need to know about how important much of the algorithmic subject matter is.


Uh huh. I can fix my car without knowing metallurgy and fluid dynamics, but you wouldn't want me building you an engine.

Just because people are getting by doing the technical equivalent of changing sparkplugs doesn't mean that I want to hire them to do something more difficult.


And you don't immediately give a recent grad a large project with no oversight. You give them time to learn that stuff, and they'll pick it up with experience. But someone who demonstrably can't understand how computers work? That's who you need to avoid.


Exactly. There's a reason the title 'Junior Developer' exists.


This exact topic came up yesterday in the HN post "Bullshit Interviews": https://news.ycombinator.com/item?id=6304911 (https://coderwall.com/p/yn9g2a)

I repeat now what I said then: I think there's a very simple rule anyone can and should follow to not fuck up interviews: Don't ask people to do things in the interview you wouldn't expect them to do in the job. Examples:

(1) Do people in this job regularly write code on the whiteboard? Lack access to a search engine? No. Then don't do it in the interview.

(2) Do people in the job design spice racks for blind people? No. Then don't ask them to do this in the interview.

(3) Do people in the job regularly find themselves coding something on the level of fizz buzz? Yes? Go ahead and ask!

(4) Do people in the job regularly find themselves writing code to find loops in a graph? Yes? Go ahead and ask (though, let's be honest - this is exceedingly rare in comparison to how often it is deployed in interviews)

Verify people have the skills they need to do the job you're hiring for. It really is as simple as that.


I find it curious how so many computer engineers tend to have borderline religious beliefs when it comes to the topic of CS fundamentals.

This is an ineffective way to hire front-end devs, which is the scenario described in the article. In JavaScript, there is limited room to implement fundamental algorithms, because many of them have already been implemented in native code, and those implementations will pretty much always outperform a customized JavaScript implementation for large n.

When doing front-end dev for a living, you will not keep your fundamental-algorithms chops sharp just by doing your job. You have to go out of your way to keep them fresh. As a result, interview questions about fundamental algos for a position like front-end development will be biased in favor of those who recently took CS algorithms classes, who do not necessarily have any more intrinsic potential as front-end web devs, and who likely have less work experience.

As a consequence, it would seem that any employer that uses fundamental algorithms as a major factor in hiring front-end devs does not have a realistic conception of what the job actually entails.

And the same argument applies for any programming job where fundamental algos are not coded regularly.


I have mixed feelings about algorithmic interview questions. There is the camp that says that you shouldn't memorize something you can look up, but I am of the opinion that having something memorized fundamentally changes the flow of thought.

If you just know the lookup times of various data structures, your thought process is much less jumbled than that of someone who must look them up while problem solving.

I'm not suggesting that it's necessary for someone to be able to whiteboard an implementation of a red-black tree, but knowing what it is, and its advantages and disadvantages compared to an AVL tree or something, is very useful.

As an aside, in a graduate CS class I took (about search engines), the prof asked "What is the lookup time of a balanced binary tree?" of various students, two of whom gave wrong answers before she got to me. One said "n lg n" and the other said "n". It was evident that they were basically guessing.

My point is that if you actually understand what a binary tree is, its lookup time should be obvious. The same goes for most other basic data structures: linked lists, hash tables, etc. I think asking basic, high-level questions about these is totally fair game.
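
(To make that concrete, here's a minimal lookup sketch - Python, purely illustrative. In a balanced tree, each comparison discards half of the remaining nodes, which is where the O(log n) comes from:)

    # Minimal binary search tree lookup (illustrative).
    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def contains(node, key):
        while node is not None:
            if key == node.key:
                return True
            # Descend into one subtree; the other half is never visited.
            node = node.left if key < node.key else node.right
        return False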


If you just know the lookup times of various data structures,

The problem isn't knowing complexity, but rather expecting someone to code something elaborate from memory while being timed, watched, and judged by a stranger. "Code a Fibonacci heap right now, live, with no references. Begin."

Nobody is arguing you shouldn't know the complexity of everything off the top of your head. You absolutely should [1], but you should also be able to figure it out pretty quickly if you don't know it. That means understanding what lg n means and what 2^n means, etc. (Yes, we're agreeing here, I'm just reiterating.)

[1]: http://bigocheatsheet.com


[pedantic] Technically, lookup on a balanced binary tree is still O(n), since "binary tree" says nothing about the ordering. If it were a balanced binary search tree, then it would be O(log(n)), which I guess is the answer you gave. [/pedantic]

Apart from that, I totally agree with you on the importance of knowing by heart the implications of some technical choices.


The lookup time of a binary search tree is also in O(n). Personally, regardless of the algorithm, I prefer to just say that it is in O(Ackermann(n,n)).


(I upvoted you because you're pedantically correct, but being even more pedantic: if it's not a balanced binary search tree, is "lookup" even a valid operation on it?)


Well, lookup is a valid operation on any dataset.


Not really: "lookup" is not part of the API (as taught in a typical data-structures class) for stacks, queues, and priority queues, and technically arrays/vectors let you lookup by integer but the O(N) lookup by object is only an extension that most languages provide as a convenience.

If we really want to be pedantic, we'd have to define what the binary tree is used for, because there's no convention for what "lookup" means in this case. A heap, for example, can be implemented as a balanced binary tree, but the only methods it provides are push, pop, and top.


Yes, and no. ;) These are important things to know. But, they are also easy to know. I don't care too much if somebody doesn't know them, it's just knowledge. All I care is if they can learn and understand it.

I am not always on my A game, but when I interview I basically probe until I find an area where they are not strong or familiar. I don't care too much about the questions they get right (they've already been vetted as to actually knowing the language they claim to know). It's just knowledge. But, when they get to something they don't know, that is when it is interesting, and telling. If they can't give you the time of search on a binary tree, awesome! Awesome because now you can actually measure how they think and take on new info, rather than regurgitate stuff (some people have prodigious memory, and can mirror back almost anything, after all). Ask them how they would figure it out. Ask what the structure of a tree is. Ask for the math for # of leafs vs depth, that sort of thing. Help them out when they get the deer in the headlight look. Once they get an answer, immediately change the question and see if they can derive the answer for that new question (say, switch from binary tree to trie).

In some ways college was hard for me - for whatever reason I didn't internalize a lot of the algorithm stuff until I got to grad school, where I ended up TAing the grad-level algorithms course after a while.

Of course, a common thought is that the algorithm questions are meant to test thinking, not memory. I claim that is hogwash. Most of these classic algorithms were invented by giants in the field, working alongside other giants who did not think of the algorithm. And some random person off the street is supposed to just re-invent a classic algorithm off the cuff? Right. Chances are great that unless you are Google, if that person actually could do it, they'd end up so bored at your company that they'd leave in a few months anyway.

Look for what the candidate doesn't know, give them the info, and see if they can synthesize it and do something with it. Don't try to see if they are C.A.R. Hoare - they aren't, and you don't need to hire him anyway. IMO, of course.


In college an old physics professor of mine once told me that you don't really know something until you can clearly explain it to other people. And you can't explain things to other people unless you have the basic concepts memorized. I think about that every time people complain that it's not fair for interviewers to ask them to explain basic programming concepts.


I'm of the school of thought that algo interviews are useful, but not for testing knowledge of some particular algorithm.

I usually asked people to write an atoi() implementation, without using high-level constructs, on a whiteboard, a piece of paper, or in a browser window. Things I was looking for:

1. If I'm interviewing for a C# position and the person chooses VB.NET, that's probably a red flag.

2. If they choose C, hey! show me your pointer arithmetic; calling strlen() in the loop condition is a huge red flag.

3. There are always giveaways about how familiar a person is with the language (and mind you, I do not expect any tricky stuff to be used).

But the important part is not the code - the important part is to watch how the person attacks this token problem, how much time they spend on it, where they are when I come back into the room. Then we'll talk about what they did in order to solve the problem and why they did things the way they did. I won't run this code to check whether the output is correct or whether it compiles at all - we'll just talk it over. Will we talk about the technology stack in use at the company and assess the person's architectural skills? We sure will. But at the end of the day I'll expect this person to write some code at their job, and I need to be sure that they know what they are doing.
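
(For reference, one bare-bones take on the token problem - sketched in Python purely for brevity; candidates picked their own language, and this version deliberately avoids high-level helpers like int():)

    # Illustrative atoi sketch: optional sign, digits only, no int().
    def atoi(s):
        i, n = 0, len(s)
        while i < n and s[i].isspace():        # skip leading whitespace
            i += 1
        sign = 1
        if i < n and s[i] in '+-':             # optional sign
            sign = -1 if s[i] == '-' else 1
            i += 1
        value = 0
        while i < n and '0' <= s[i] <= '9':    # accumulate digits
            value = value * 10 + (ord(s[i]) - ord('0'))
            i += 1
        return sign * value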

Does a GitHub account help? It does, to a degree. I'll sure take a peek at the coding style or how the projects/code are structured, but honestly, I just don't have time to fully assess someone's capabilities by reading their existing code thoroughly. Plus, it doesn't give me any clues about their thought process when they were writing that code.


Every now and then these posts come up on HN, where people mention how bad the interview process at certain companies is and how it should be changed.

I also work for a company that does these types of interviews. While I wholeheartedly agree that asking these algorithms/data-structures or other CS-fundamentals questions is not perfect, precisely because it may induce quite a few false negatives, the engineers who do end up successfully completing the process at my company are very competent. So from an anecdotal perspective, the process works fairly well, in that it does not produce too many false positives.


There's a broken culture of companies thinking they should only hire the smartest people in the world. Most companies don't need brilliant people, they need competent people who will work great together.

[Disclaimer of bitterness: I'm not one of the smartest people in the world. I get rejected after one to three interviews all the time. The author of the post is right: nobody ever looks at your online code and your resume only gets read once by HR to set up the initial pre-interview.]

Testing for concrete intro-to-CS level drivel may give you a company full of people who spend 30 hours a week doing topcoder or project euler, but... you don't need that (if you do need that, good for you, but most companies don't).

It also tricks the interviewee a bit -- "Oh, they're asking algorithms questions! Maybe this'll be a fun CS research job." [six months into the job] "Uh... all I've done is write SQL and fully library-driven backend website code. Boring."


You value what you can judge. (Similarly, you value what you can measure.)

Being able to test somebody's ability in algorithms to a high level is a useful late-stage signal.

I always thought it would be interesting to just ask "What do you want to be interviewed on?" and to tease out their strengths from this. You'd then be able to question away to whatever depth necessary without worrying that they were being forced to dance in another man's frame.

I guess the problem with that would be that the interviewer needs to know the area he is testing an interviewee on in order to judge it accurately. The common ground for intellectually rigorous testing is often the academic experience they (possibly) shared.


I strongly believe that for many companies such problems with the hiring process start much earlier - in the phase of defining the requirements (competencies). When they don't know precisely what they need (and both "ninja" and "experienced front-end dev" are far from precise), they simply can't succeed in measuring it.


If you're looking for people with the proverbial "smart and gets things done" properties, you need to give them something to do and see how smart their solution is.

I give a task, like "write some code to store data in X; here's the interface I'd like to use" or "write some code to find X given data set Y". The problem should have "layers" and follow up questions.

Some devs recognize the task and say "this looks like an application for $FOO" and apply some $FOO. Others do not know the name of the optimal technique, but are able to derive it on the spot from first principles.

Junior devs should at least be able to come up with a brute force solution that solves the problem, even if they can't take it to the next level. If the problem requires some deep "ah-ha" moment then it's not a great test; the whole point is to separate applicants into strata.


"My experience gave me the feeling that this was more of an interview assembly line than someone really interested in my relevant skills"

That's exactly what it is...

Effective interviewing is hard, and like most things hard you won't be very good at it without some practice and study.

Asking an algorithm question is relatively easy.


I was recently interviewed by Google. I didn't get the gig, and didn't do very well, if I'm honest. However, even though I'm a bit bitter, I'll admit that knowing algorithms and data structures is important. I was rusty before the interview, so I hit the books, and I was amazed how many things I'd forgotten. I do believe knowing those things makes you a (much) better developer. Whoever says otherwise: do yourself a favor and learn at least the fundamentals.


Aside from the main point of this article, could anybody recommend a good algorithms/data structures book?

I want something that gives examples of when you should use particular techniques/theory, and the benefits and trade-offs of doing so. That is, as opposed to learning rote implementations and time complexities, I want to understand the practical application of the knowledge...

I recently looked at some coding tests for Spotify [0] and found an interesting article discussing the solution to one of them. It was interesting to me because the author simply pointed out that it was a maximum-cardinality matching problem on a bipartite graph and then applied the Hopcroft-Karp algorithm [1]. Now, I don't care at all about being able to code any of those things from scratch, but I'd love a book I could read to learn the names of different techniques and when/why they are applicable. I want a better map, and I want it written in English.

Any ideas?

[0] https://www.spotify.com/uk/jobs/tech/catvsdog/

[1] http://alonso-vidales.blogspot.co.uk/2013/03/new-spotify-puz...


Try Skiena's "Algorithm Design Manual" [0]. Another nice book would be "Programming Pearls" [1].

[0] http://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/... [1] http://www.amazon.com/Programming-Pearls-2nd-Edition-Bentley...


"Aside from the main point of this article, could anybody recommend a good algorithms/data structures book?"

Skiena's "Algorithm Design Manual" or Kleinberg and Tardos's "Algorithm Design" or Sedgewick's "Algorithms" for folks with little/no math. I mildly prefer the last.

Once you have some math under your belt (say worked through Knuth's Concrete Mathematics), Sedgewick's "Analysis of Algorithms" or even Cormen et al. The problem with the latter is that it is too big to work through comprehensively and knowing what to leave out is somewhat tricky.

Speaking of which, has anyone ever worked through Cormen comprehensively? I sometimes think I should take two months off from work for a "desert island" holiday and put in the work to do this. (I started once and was making progress, but got interrupted and never went back.)


"Programming Challenges"[0] by Skiena maybe? "Practical application" of algorithms and data structure knowledge is mostly about recognizing which type of problem you're dealing with, and then implementing the simplest algorithm that solves it fast enough for your specific problem.

[0]http://www.amazon.com/exec/obidos/ASIN/0387001638/ref=nosim/...


I think my answer to the "write a sort" thing would be "are you kidding? I'd use the one in the language's standard library, because I'm not an idiot. Or if I had no choice, and was using a language and environment so primitive it had no libraries at all, I'd write bubble sort because it's easy. Or if it actually mattered I'd go read up on it and learn how."


Oftentimes that's the first answer that the interviewer is looking for (without the attitude). But then they want to see you code a MergeSort or QuickSort up from scratch, and explain the difference between them.

Why? Because what happens when your data set exceeds available RAM? Then you can't use the stock libraries, since they're all designed for in-memory datasets. You have to rely on external sort, which is a cousin of merge sort but has fairly different properties because of the vast difference in performance between RAM and disk. Or what if you have multiple computers available to you? Do you implement a parallel external sort over the network as a library (or service), or do you rely on domain knowledge of the keys to do some sort of bucketing to each computer, like a radix sort? What are the trade-offs between each one?
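
(For illustration, a compact in-memory merge sort - Python, just a sketch. The external variant described above applies the same merge step to sorted runs on disk rather than sublists in RAM:)

    # Top-down merge sort (illustrative).
    def merge_sort(xs):
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge two sorted halves
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]      # append the leftovers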

Software engineering is somewhat peculiar in that 95% of the time is spent gluing pre-made components together, and yet the majority of the money is made off the 5% that requires actual creativity and knowledge. Why? Because gluing components together is a commodity. You can do it, but so can every other college grad who knows how to Google Stack Overflow answers on the web. The folks who make $300K+ a year at Google/Facebook, or get acquihired for a mil or two apiece, or found a startup that strikes it big, are the ones who know how all the "standard" stuff works on the inside, so they can take it apart and piece it together in new and innovative ways.


Did we all read the same article? He complains about "algorithmic interviews", the comments here are about AVL trees and sorting algorithms, but what he was actually asked to write was "a string parsing implementation".

Well, OK, I suppose it's possible they tried to get him to write yacc, but it sounds much more like they asked him to do some trivial loops and tree traversal, at the very most some basic recursive descent parsing.

This ain't rocket science, it's not complex O(n) analysis, it's only slightly above the fizzbuzz level. If you're writing code at all, then I think it's reasonable to expect you to understand recursion and loops, even if much of the job is just going to be hooking up jQuery plugins.


I don't agree 100% with the author on not asking algorithmic questions. It's about the process of how they go about solving the problem rather than just the outcome. Maybe for a web developer position you might not need such questions, but for a full-stack developer the answer is unequivocally yes.

I made a mistake by hiring someone who knew Django [but who originally wrote plugins in WP], and the coding was just plain horrible. One such example:

    for foo in queryset:
        foo.prop = 'i m bar'
        foo.save()

Simple things like this, where a good engineer would know better because of the O(n) nature of the code, the so-called programmer didn't.

And then there are guys who fork every possible repo, have blogs full of how-tos, etc., but really can't code for nuts.


Coding noob here. Can someone explain the problem with the

`for foo in queryset`

implementation? Is it that for loops are inherently inefficient? What would be a better alternative?


I don't see a problem with an O(n) algorithm to set some property on n objects.


I think the issue is the call to save() inside the loop.

Basically, it'll hit the database once per iteration.


I originally thought it might be that too, but it seems to me like you'll have to hit the database n times for n updates no matter what. Unless Django has some built-in batch-save functionality.


Hmm, there's bulk_create(), introduced in Django 1.4, to create multiple new objects in one DB call.

However, I'm not aware of anything that would do the same to update properties.
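
(For what it's worth, a quick sketch of the options - the model Item and field prop here are hypothetical. For the constant-value case in the example upthread, QuerySet.update() does collapse the loop into a single query, though only when every row gets the same value:)

    # N queries: one UPDATE per object.
    for foo in Item.objects.filter(active=True):
        foo.prop = 'bar'
        foo.save()

    # 1 query: a single UPDATE statement, for a uniform value only.
    Item.objects.filter(active=True).update(prop='bar')

    # 1 query for batched inserts (bulk_create, Django 1.4+).
    Item.objects.bulk_create([Item(prop='bar'), Item(prop='baz')])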


The argument about whether to ask algorithmic questions reminds me of the "can an airplane on a treadmill take off?" question. There are two sides who are each sure that they're right and the others are wrong, but they're just answering subtly different questions.

In this case, the people against always seem to come back to questioning why you would ask something that's just memorization, while the pro-algorithms side is pushing for questions that candidates won't know the answer to, so they can see how they think.

I am in the second camp, if you can't tell.


I personally don't look for someone to answer algorithmic questions correctly. If they can just show the ability to solve a problem with a bit of complexity, that's all that counts. It also reveals a lot about how a person thinks and solves problems, which is absolutely an important factor in a potential hire. However, I know some interviewers just keep score of your answers... I don't necessarily agree with that.


Do you hire a mechanic who knows how the car works in intimate detail, or a mechanic who knows how to fix cars? Depends. But usually, if they know how the car works, chances are they are also great at fixing them.

In tech it is the same, with the additional wrinkle that you want to hire aptitude over experience.


But algorithm coding is just about personal programming ability. What if the company is using a non-open-source lib? I don't know how difficult the string-parsing problem was, but the interviewer may just have been unsatisfied with a slow implementation.


I think algorithmic interviews are fine if the interviewee has mentioned algorithms on their C.V. In general, I only talk about topics the interviewee has mentioned on the C.V., or that are super important for the org/project/position.


The interviewing process is a matter of constant optimization, just like every other workflow process in a company.

Disclaimer: I run interviewstreet.com, which helps companies screen & hire programmers using coding challenges. We have screened a huge number of developers through the process (I might not be able to reveal the exact number), and using those data points, here's what we found (btw, this is also constantly optimized):

a. Asking a 5-year-experienced engineer to solve a graph-theory challenge or a complex tree problem is a pretty useless indicator. The goal of such an interview challenge should be to test the problem-solving skill of a candidate, which is essential across any programmer role.

Can this person actually take an array of objects and perform an operation to get the result, such that it works for any size of array without throwing an exception? The data structures used in the question should be simple enough to start working on the challenge right away. You will be surprised how many errors and corner cases come up that are often missed. And as you gradually add constraints, you can watch how their algorithm changes. (A hypothetical instance of this kind of question is sketched below.)
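
(A sketch of that kind of screening question - the task and names are invented for illustration: total up order values for a list of any size, including empty, without throwing:)

    # Hypothetical screening task: works for any size of input, incl. [].
    def total_value(orders):
        total = 0
        for order in orders:
            # Missing fields default to 0 instead of raising KeyError.
            total += order.get('price', 0) * order.get('quantity', 0)
        return total

    assert total_value([]) == 0
    assert total_value([{'price': 2, 'quantity': 3}]) == 6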

b. Make the problems interesting - put in actual effort to make the problems interesting, sometimes relevant to the problem you are actually solving. I often send this link to our customers (www.itasoftware.com/careers/puzzle_archive.html) and also help them design problems like this. Those are by far the most interesting publicly available challenges. The other source is Quora (quora.com/challenges), which is slightly harder but very interesting (and has proved very effective!).

c. Calibration: surely GitHub/Bitbucket/SO profiles are important and can serve as a data point. However, they're probably going to be very hard to calibrate. A web server coded in Python vs. a new MVC framework written in PHP - who is the better candidate? Who has actually thought through the design of the problem better? It's tough to evaluate. The programming-challenge interviews act as a data point (one of the interview rounds) to check these skills in a contained problem/environment, helping the company calibrate performance against the rest of the candidates.

The problems should involve a combination of the ability to write good code, a focus on problem-solving skills rather than remembering an algorithm from the CLRS textbook, intelligent thinking (not to be confused with weird math/geometry problems, unless your company works in that domain), and importantly the ability to catch hints and solve the problem in a better way. This is a data point, an essential one, but there is more to technical interviews.

If used in the right way, it can prove hugely effective for your process, not only in streamlining it but also as a way of generating interest from potentially interested programmers.


I am actually pretty decently ranked on Interviewstreet because I do have a personal interest in algorithmic stuff (I was top in Australia for a little while :)). I used to do your codesprints just for fun.

My feedback to you would be that Interviewstreet is heavily, heavily geared towards academic algorithmic questions which the average developer would not have a chance in hell of solving. For example, in order to solve some of your old HackerRank questions within the time and memory constraints, you pretty much have to know the studied and documented solution you would only ever be exposed to while doing a master's degree in CS. Heck, your problem descriptions were often thin veneers around the exact academic statement of the problem.

Despite the fact that I'm someone who actually does excel at these types of problems, I think they are absolutely the wrong way to hire people, and whenever I've seen companies actually use Interviewstreet it makes me cringe - and yes, I know companies set their own questions, but you lead by example, and most questions are abysmal.



