
If you work at Google or Facebook or any other of the tech giants, you need to know your algorithm-type questions.

For the other 98% of companies, there's just no need. I've also never ever had to do strange things with binary trees.

You need people who know how to find the answer to such questions. I would much rather hire a Java developer who knows how to implement equals() and hashCode() correctly, with a unit test to validate it, and knows that if they have any questions on say searching and sorting that they should reach for their copy of Knuth v3 first before writing a single line of code. That to me is a much more valuable employee than one who can spout off how a Linked List works.
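The same contract exists outside Java. As a hedged illustration (my own sketch, in Python, where the analogous pair is __eq__/__hash__): equal objects must produce equal hashes, or hash-based collections silently misbehave, and the contract is easy to validate with a small test.

```python
# Hypothetical example: a value type implementing the eq/hash contract.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Must agree with __eq__: derive the hash from the same fields.
        return hash((self.x, self.y))

# A minimal sanity test of the contract:
a, b = Point(1, 2), Point(1, 2)
assert a == b
assert hash(a) == hash(b)
assert len({a, b}) == 1  # equal objects collapse to one set entry
```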

90% of programming is problem solving. Knowing how to solve a problem is much more useful than being able to spout out the right answer.

(To echo the first poster. Number of times in 20 years outside of a job interview I've had to implement a Linked List: 0)

IMHO Google, Facebook et al are largely the cause of the current recruiting mess because companies who don't understand what makes a good employee for their company have been cargo-culting Google, Facebook et al in the hope that will "make it rain" for them.


> You need people who know how to find the answer to such questions.

You can't look for an answer if you don't know what question to ask.

This has to do with that whole "conscious competence learning matrix" stuff, more entertainingly described by Steve Schwartz as "shit you know, shit you know you don't know and shit you don't know you don't know" [1].

The purpose of interview questions about algorithms and data structures is to see whether the candidate has the basic understanding of how the most fundamental data structures work and when to use them; whether they understand the concept of complexity and why it matters.

If you think these things don't matter, you're taking them too literally. Knowing when to use a linked list, when to use a binary search tree and when to use a hash map might be the difference between your software being able to scale or not.
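As a concrete sketch of why the choice matters (my example, not the poster's; the numbers are assumptions): the same lookup against a list and a hash map gives the same answer, at wildly different cost.

```python
# Hypothetical scenario: looking up users by id with two different structures.
users = [(i, f"user{i}") for i in range(100_000)]  # list of (id, name) pairs
by_id = dict(users)                                 # hash map keyed by id

def find_linear(uid):
    for i, name in users:  # O(n) scan on every lookup
        if i == uid:
            return name
    return None

# Same answer either way; by_id[uid] is O(1) average, find_linear is O(n).
assert find_linear(99_999) == by_id[99_999] == "user99999"
# One lookup: barely noticeable. A million lookups against the list is
# ~10^11 comparisons, while the dict stays flat: the scaling cliff above.
```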

[1]: http://jangosteve.com/post/380926251/no-one-knows-what-theyr...


I disagree.

Asking algorithm trivia questions will tell you nothing about whether or not the candidate can apply that knowledge in the real world.

A large part of what I do involves coming into teams that are struggling with delivery, and helping them get back onto the path of productivity.

More than once, I have had developers who could leave and easily pass a Google interview (and some have). They knew more about computer science than I probably ever will.

And in many of these cases, the code they wrote was actively harmful. Unreadable, tangled messes, that nobody else on the team could unwind.

Knowing how and when to use monads, or bloom filters, or any number of other fun tools separates the supermen from the mere humans.

But before you need to care about that, there are more fundamental concepts that matter. Like being able to write readable code that conveys intent, knowing how to write tests that are flexible yet specific, and so on.


Who said anything about trivia? Asking trivia is pointless and any company dumb and/or lazy enough to base their interviews on trivia deserves the bad hires.

While we're at it, who said anything about monads and bloom filters? The former aren't really data structures or algorithms, the latter would hardly qualify as "the most fundamental data structures".

Also, I don't remember saying anything about the irrelevance of writing readable code and flexible-yet-specific tests.

In short, you're certainly disagreeing, but apparently not with me.


As someone else who has also been in the industry for quite a while, it used to be fairly common for me to implement linked lists back when I was programming primarily in C. But lately, yeah, it has been mostly abstracted away, even in languages where doing pointer operations is useful or allowed.

As far as companies attempting to emulate Google or Facebook when it comes to interviewing, this sort of thing is even older than either of those companies -- in the mid-90s it was common to attempt your own spin on the "Microsoft interview".

I totally agree with the basic point, though... for all of the supposed rational thinking that goes on in tech, hiring people is mostly cargo cult and voodoo magic.


I don't think interviewing candidates based on basic knowledge of CS is cargo-culting Google and Facebook. I think this predates those two companies. Also, someone that implements hashCode() correctly and can't spout off how a linked list works, is a problem for me. I guess it is subjective.


Your last paragraph really resonated with me. I cannot emphasize enough how strongly I feel about "companies who don't understand what makes a good employee for their company".

I'm currently a CS student, and I find that many of the local startups do involve loads of problem solving on the job, which benefits from general algorithm knowledge. Although I believe it's more about the ability to understand the concepts than about spouting knowledge of any specific abstract data type or algorithm.

But not all companies are alike, and outside of probing for problem-solving aptitude, it is definitely not the best topic to discuss in an interview to find the /right/ employee at many of the companies that do ask it.


Why? Do Google or Facebook engineers live in a magical world in which they need to know off the top of their heads the big-O complexity of every possible sort function ever mentioned in an academic paper?

I see this argument used time and time again, but it makes no sense to me. Surely they can just search for it when they need it too :)


This is needed more for the people inside than outside. I used to be at a famous hedge fund, and then at banks like Goldman Sachs, where this attitude was common: "We hire selectively; only 1 in 750 applicants is given an offer". Now how does that make you feel as an employee? It's an amazing feeling, isn't it? To know you're special, a diamond in the dust. It makes it easier for the employer to then demand that you create great things, eh? You're going to pressure yourself to show them you're worth it.

But in reality, what happened, at least in my experience, is that the top talent almost always left, because a) they never really got to use that Longest Common Subsequence algorithm they were quizzed on, so the actual daily work experience was sappy compared to the hype, or b) it was just one more conquest anyway; off to better / greater things that challenged their abilities.

The real gems, in my not-so-objective perspective, were the in-betweeners : people who may not be brilliant, but those who worked hard enough to pass the interview and were really thankful to have a great employer. They stuck around and worked sincerely and were an asset to the firm in every way possible.

IMHO, a firm needs 90% ordinary folks, and only a handful of brilliant crazy geniuses.


Why I Never Hire Brilliant Men (1924): https://en.wikisource.org/wiki/Why_I_Never_Hire_Brilliant_Me...


It's not about knowing the complexity of every possible sort function. It's about having a good base to stand on, a nicely sized toolbox.

If you're working on a solution to an actual problem, it's pretty damn good to have an idea of its complexity. You don't need to be an expert, but developing some kind of intuition for it is very helpful.

If your attitude towards these things is "I'll just Google it" then you're not going to be quite as useful when it comes to discussions. Your toolbox needs to be bigger than that.


Again, what's the point of having a tool you never use?

I have had to implement a sort once in 10 years of professional programming: it was in IE6's painfully slow JS, to fix a slow table sort. That's more than most. I didn't do a CS degree; the people around me with CS degrees didn't know the answer.

I googled it.

After the 15 minutes it took to find and implement it I mentioned it to one of the CS people, he promptly reeled off the definition to me, but couldn't see how it solved the problem.

Give me someone who never cuts and pastes over someone who knows algos any day.


It's not about "sorting" or any particular algorithm, it's about whether you're actually considering the costs and tradeoffs of the code you're writing (which is actually a big part of software engineering), and are equipped to handle those. Now, one cost is execution time, another is memory usage, another is development time, another is maintenance effort and so on. These costs are spread across different domains, but it's important that a programmer considers them.


It's really not a big part (apart from the rare times when it is).

Firstly, you're almost always wrong about what you think will be slow. Add to that, it's rare you actually know how a program will really be used.

Secondly most programs never get stressed so it was a complete waste of time.

Third, you just made the program complex for no actual good reason, just inexperience and flawed logic.

So all that knowledge, those tools, is useless and we know it is. You could fill volumes about the cock ups and bad code caused by premature optimisations.

A for-each. That's what you'll use 80% of the time, a plain for loop 20%, and your algos 0%, for all intents and purposes.


> Firstly, you're almost always wrong about what you think will be slow. Add to that, it's rare you actually know how a program will really be used.

If you don't have a good idea of what is likely to be slow, we have a problem. It's usually I/O, especially synchronous network I/O.

During initial development, maybe you don't know how a program will be used, but maintenance is usually a bigger phase than initial development, so you should have some idea of how things are used.

> Secondly most programs never get stressed so it was a complete waste of time.

What are you working on? Everything I've worked on that has users could go faster. It doesn't take that much complexity to get something that takes 100 ms; and making that take 50 ms instead is an easily perceptible difference.

> So all that knowledge, those tools, is useless and we know it is. You could fill volumes about the cock ups and bad code caused by premature optimisations.

You could fill volumes about the cock ups and bad code caused by completely garbage performance because people didn't stop to think about how to do things right. Simple things like reducing the number of round trips to the database, or avoiding calculations until you know you need them could be called premature optimizations or just doing the job right. Having knowledge of algorithms (and knowing what algorithms are likely to underly the abstraction layers you're using) helps you avoid writing code that will blow up.


Reducing the round trips to the DB or doing calculations when you need them need no knowledge AT ALL of algos.

Saying it's "I/O" is so abstract and ultimately useless. A simple SQL query could hold a dark secret of a missing index on a table with a billion rows. Yes, that's technically I/O, but for today's programming that's so obtusely abstract it's pointless calling it that. It's a couple of lines of code that all the algo knowledge in the world won't have stopped the problem happening.

It's simply common sense. This is what we're trying to tell you. This is what we're trying to explain to you. Performance problems are almost always unintended consequences rather than "I'll suck it and see". Knowing how to implement the sort yourself won't save you, you have to spot the problem first, which is the hard part.

Basic multiplication is not algos. Knowing that 100 loops of a thousand loops of a 1 ms method is bad doesn't require magical knowledge. Algorithms were always a smoke-and-mirrors trick: the ones you need are usually already in your language, written much better than you could write them, and the rest simply hides the truth that performance is mainly down to unexpected I/O bottlenecks, usually buried under many layers of abstraction (SQL), or the occasional loop within a loop hidden in a long call stack.


Depending upon what sort of industry and scale you write code for, very often, it is a HUGE part.

I accept the point that over-optimizing is the root of all evil, and the rest of the philosophy along those lines. But you have to know algorithms and data structures inside out, because you have to choose where and when to use what; I doubt one can be called a programmer without all that.

And I would say again, I have yet to find somebody renowned, who has made their mark as a programmer, who doesn't regard all of this stuff highly.

Having said all that, if you work in a high-level language and develop client-facing / application-layer desktop/web stuff, not involving too much scaling, you won't need these things most of the time. But if not, you can't live without them.

Lastly, since college and probably since learning to program, the best of the breed were usually the guys who went on to ACM and TopCoder kinda stuff. It is true that building products is not synonymous with that, but it definitely plays an undeniable part, although I must agree it is slightly overvalued, nothing else.


High level language? I haven't heard that kind of condescension in years. Whatever you're working on isn't as hard as you think it is. Most experienced programmers could probably do it as well as you in a couple of months.

And who is renowned in this industry? Game programmers. Not SaaS, enterprise, or consumer web programmers, who make up the vast majority of working programmers. No, one of the tiny subsets of programmers who do need algos.

You're stuck in your own bubble, thinking it's bigger than it is (and seemingly thinking it's "proper" programming). So you're wrong because your premise is flawed. 95% of programmers do "high level language and develop client facing / application layer desktop/web related stuff". Simply open a job website and search for "developer". Add them up by SaaS/Enterprise/Consumer Web and then everything else. The first category will be much bigger, 20-30 times bigger.


I never tried stating it was hard, or making it look like it is mainstream. I just said that depending upon the domain you work in, algorithms and data structures might be more important, and you might encounter them more often than an average web or desktop programmer does.

So the whole line of reasoning is irrational to me. I never tried saying systems programming is more mainstream than web or desktop or enterprise.

Talking about renowned, Yes people who have written kernels, operating systems, network stacks / protocols - there is huge list of renowned ones. Do you want me to spell the names out starting with Torvalds or Stallman?


It seems to me that people with 10+ years of experience have more empirical evidence of what is actually needed/useful. Your comment and other similar ones sound to me like textbook material, which is abstract and correct in principle, but not exactly how it is when you spend some time in industry.


I have my own empirical evidence as a user: a lot of software I use is slow, or uses too much memory, or has memory leaks. As a programmer, I keep wondering if those could have been averted if the programmers working on that code had used better/more efficient algorithms, instead of just going with the simplest solution.


No, it's not. It's simply inexperienced programmers. I bet they even have CS or EE degrees.

One of the biggest culprits of truly awful programs is graphics card makers. I bet they all know algos; the problem is that they think they know how to program when they're little more than amateurs or hobbyists.


I can do time complexity etc. I know about compilers, trees, graphs & other shit data structures and I can read academic papers on any new shit with a comfortable head. I can learn any new imperative, oop or functional language in a week.

But I still can't find a job in Google.


Because small to medium sized companies are looking for people to help grow their business and revenue.

Large companies tend to already have very stable revenue and now need people to make their systems more efficient in order to decrease expense.


As evidenced by countless anecdotes and accounts by people who used to work at these companies, they don't. It's not about needing that knowledge, it's about removing those people from the employee pool available to competitor companies and the entrepreneur class.


I suspect when they search for it they're going to find their own published writings/blogs ;)

I do think some people at FB/Google need to know this stuff cold, but even that isn't going to be 100% of their engineers.


Agreed. I think there's a tinge of arrogance to the claim that Google/FB/what have you need The Best(tm) while everyone else does not.

The reality is that there are some jobs (a small minority) at these companies that require incredibly deep algorithmic knowledge - they really do do some complicated things.

But the majority of jobs at TwitGooFace are your run of the mill programming jobs, where neither the requirement nor the reality calls for anyone at that level, and hiring someone at that level is just going to make them bored. Indeed, the vast majority of people I know at these companies aren't algorithmic encyclopedias, they're just solidly competent engineers.

Having worked at a "AAA" tech giant and many more places across the country, I'm happy to report that for the most part there's nothing exceptional about Silicon Valley jobs and what they require. For the most part, barring a minority of highly specialized high-level positions, engineering ability trumps computer science knowledge.


The thing is, that 0.1% would probably be better off just looking for a Math PhD. I suspect it is much easier to teach a mathematician C (Or Python/Haskell/OCaml/Java) than a programmer real math.


Math guy here; spent the last year writing high-performance ODE integrators that run on graphics cards. No formal background in CS at all. Hiring maths people to write performant algorithms is more common than not.


The problem is that Math PhDs tend to be arrogant and think that programming is for monkeys and they are above that. Code that they produce tends to be sub-par and under-tested (because they are above writing unit-tests).

Source: worked with multiple Math PhDs. Have a few friends who are Math PhD working on some esoteric problems that I could not understand.


I think you are drawing conclusions from a small sample size. I've had the exact opposite experience.


They can do the algorithms, but they don't engineer, imho.


"For the most part, barring a minority of highly specialized high-level positions, engineering ability trumps computer science knowledge."



That is what gets me about these interview questions. I know a lot of very clever people. I am carefully differentiating 'clever' from 'smart', by the way. So many that are superficially fast and smart about solving whiteboard problems are terrible at all the things that actually seem to matter when it comes to creating value:

  * Keeping a project on budget and schedule
  * Being able to create a budget
  * Managing people and resources
  * Grinding out boring, but bug-free code
  * Writing clean code
  * Being able to refactor
  * Designing software that is maintainable
  * Not ticking off the people around you
SW engineering is a big, fuzzy, ill-defined optimization task in n-dimensional space. There are plenty of clever people that aren't very good at that sort of real-world iterative process. Sure, it is great to have a few people that can quote Knuth (if you don't know what algorithms are available, you'll often choose a substandard one), but by and large I want an engineer, not a scientist. A good one can learn red-black trees, or whatever, when it becomes necessary. The converse (a CS-y type person becoming productive in an engineering environment) seems less likely to occur, in my experience.


+1, and a bit more.

Yes... I ran a small group where we all got too inward focused being clever in our code and optimizations, all the while ignoring (for various reasons) that we had less income each month, and eventually folded.

I've since seen this firsthand at a couple other companies, on the inside and outside, where engineering types get way too focused on trying to hyperoptimize for a theoretically possible scenario X, while ignoring real world tickets that are costing them customers daily. They don't seem to realize that... when enough days pass, and enough customers leave, there is no more money to pay them.

Having lived through a couple failures (personal and professional), it's probably easier for me to spot these behaviors (again, because I used to do it myself). In my case, I wish I'd had someone early on beat that out of me - I simply wasn't aware of what was going on. As much as you can say "people have to learn for themselves", I don't believe that's the right attitude to take when the fate of a dept or company rides on what you're doing - too many other lives are involved.

This is in reaction to your 'budget' stuff above - I don't expect people to be Excel/Quickbooks/Peachtree gurus nor CPAs, but understand that you have to get stuff out, on time, and sometimes have to choose your battles and put out less than optimal code, then have a plan to address the known shortcomings.

I agree too on the CS adapting to engineering being more rare than engineering adapting to CS.


> For the most part, barring a minority of highly specialized high-level positions, engineering ability trumps computer science knowledge.

Minimizing the CPU and memory usage of your application is engineering more than science; however, doing it requires a lot of CS knowledge.


They deal in data volumes so large and so interconnected that it can't hurt to know those sorts of things. Surely you agree?


Unless you're being hired as a PhD Senior Software Designer, there is exactly 0% chance (in 2014, anyway) that you're going to be working on improving Google's PageRank algorithm.


I was a lowly software engineering intern at Google and spent most of my summer writing MapReduce jobs which would traverse (1) every document in a large index of the web or (2) every book Google has scanned. Even worse, I sometimes needed to compute correlations between pieces of text in the documents. So, not only was the input size huge, the work per document was non-trivial. The remarkable thing about this sort of gig is how utterly unremarkable it is in a large tech company. If there's anything to Big Data beyond hype it's the commoditization of previously incomprehensibly large computations...and everyone doing computation on that scale should at least know the basic language of reasoning about time/space/communication complexity. You don't necessarily need to know the official jargon (though it helps to concisely describe ideas), but you should at least be able to think about how much extra work your program does for each additional input item (and how much extra storage it needs, and how network bandwidth it will eat up).
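The pattern described above can be sketched in miniature (a toy word-count in Python, my own illustration, not Google's internal API): a map phase emits key/value pairs per document, a shuffle groups values by key, and a reduce phase combines each group.

```python
from collections import defaultdict

docs = ["the cat sat", "the dog sat", "the cat ran"]

# Map phase: emit (key, value) pairs for each input document.
def map_doc(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group all emitted values by key.
groups = defaultdict(list)
for doc in docs:
    for key, value in map_doc(doc):
        groups[key].append(value)

# Reduce phase: combine each key's values into a final result.
counts = {key: sum(values) for key, values in groups.items()}
assert counts["the"] == 3 and counts["cat"] == 2
```

The real version distributes the map and reduce steps across machines, which is why per-item work and communication costs dominate the reasoning.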


Except the thing is that these companies already have established internal libraries and tooling that implement "these sorts of things" to a large degree, and insulate the business needs from new hires, such that any given hire's O(n) skills are not going to touch them until they get put on the task.


Agree, but what you described is something that can be googled and quickly learned, whereas all the intangibles op listed should be second-nature.


I highly doubt that someone that does not know what binary search is, that cannot think for a minute and come up with a small implementation, is going to "google and quickly learn" algorithms and the related thinking.

The issue isn't whether you can write a simple binary search (considering such a search is an intro-to-programming task[1], that should be a given), but whether you have an understanding of the topic in general.

When your project calls for a certain performance bound, will you flail around or have some idea what you're looking for?

1: "Guess my number between 1 and 100!"
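For reference, the intro-level implementation being discussed really is only a few lines (a sketch in Python):

```python
def binary_search(sorted_items, target):
    """Return an index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # halve the search space each step
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([2, 3, 5, 7, 11], 7) == 3
assert binary_search([2, 3, 5, 7, 11], 4) == -1
```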


I don't think it is as difficult as you make it sound. The most difficult part is finding the term "binary search", but even that is not very hard. All you have to do is generalize your problem and state it to Google. Binary search will almost invariably come up in the results. It even works for algorithms that are far more obscure.


My point isn't that if you don't know what binary search is then you're going to have a hard time doing a lot of algorithm searches. Just like if you didn't know what a linked list was, or a hashtable. Writing a hashtable or reasoning out the properties of a linked list is not the kind of thing you should need to look up. It should be easy to talk it out during an interview.

These are basics. Same for strings, too. A programmer should be able to decisively explain how strings work in their language. Where do you draw the line? Arrays are just another data structure, is it OK if they don't know the properties of arrays since they can look it up?


> My point isn't that if you don't know what binary search is then you're going to have a hard time doing a lot of algorithm searches.

My point is that you can write a lot of software before you need to know what binary search is, and at the point that you do need it, you will be able to find out about it with ease. Like you said, it is a basic topic, so you don't even need much lead time to bring yourself up to speed.

I think where you are going with this is that if you don't know what binary search is, you won't recognize when you need it, which sounds quite logical, but experience has suggested it doesn't actually play out like that in the real world.

> is it OK if they don't know the properties of arrays since they can look it up?

I think so. For the vast majority of software being written, the only thing you need to understand is the language's interface to arrays, which are usually presented as abstract collections. The idea of a collection is something that transcends programming and computers and is understood by basically everyone. When the time comes that you do need a lower-level understanding, you can look it up quickly and efficiently.

An effective practice for writing efficient software is lazy loading, and I believe it works well for humans too. I might be willing to accept that it is a skill not all people have taken the time to acquire, however.


I have seen developers struggle to identify the problem. You can't solve a problem if you don't know what a particular algorithm does for you. I have seen Java developers knowing only ArrayList and nothing else in the collections framework.

In numerous forums, people ask for code because they don't know what the problem is. In which case, even search can't help.


> You can't solve a problem if you don't know what a particular algorithm does for you.

I'm sure anyone who is a CS researcher would disagree with you, but I'm probably taking your point out of context.

> I have seen Java developers knowing only ArrayList and nothing else in the collections framework.

And I have seen a full-fledged computer science graduate, from a top school for CS, use MS Access for everything, including projects that had nothing to do with databases, because he didn't know how to use any other tools, and didn't appear to have the know-how to take full advantage of that specific environment. Bad programmers are bad programmers.

> In numerous forums, people ask for code because they don't know what the problem is. In which case, even search can't help.

Technically speaking, isn't that still a search? The only difference to using Google is that the results are indexed by humans instead of machines. I think this further emphasizes that most people really can recognize when they need a new tool and can find the right one on demand with very little trouble.

And, like I said, I am willing to believe that it is an acquired skill. Skills, by nature, come with varying degrees of ability. Perhaps the least skillful stand out most prominently in your mind? The best are probably so good at it that you don't even realize they are doing it.


You might be working in a scenario where you can't just look those things up; you have to code them yourself. Learning them from the general concept and then implementing them takes too long. For instance, if you work in the embedded domain, you won't have the liberty to use public / standard / open source libraries.


You should know enough to avoid doing nested loops to find the intersection of two lists. Things like that come up on a regular basis for me, although I don't work at any tech giant.


You mean like this built-in from Ruby's standard Array class?

    intersection = ary1 & ary2


He is probably talking about using a hashtable, but that needs linear memory, so there is a trade-off.


Ruby's method uses something like a hashtable internally.


Yes, exactly.


So, is that really knowing "enough" to avoid the things you mention, or can the heuristic be that you should know the standard library?


There are things built into some standard libraries that are able to be combined in a way that produces a list intersection, but that are O(n^2). I prefer working in high level languages as much as the next person, but there's no substitute for knowing basic algorithms and data structures.


I just demonstrated a substitute for knowing basic algorithms and data structures.


Yes, you just demonstrated an example of a case where a developer is not hurt by not understanding algorithms or runtime complexity.

Here's one where they are: in Python, the in operator works on both lists and sets. For one, membership tests over large numbers of elements perform very quickly; for the other, very slowly. If a developer doesn't understand how they work, and where it's appropriate to use each, they can still do something completely terrible without even straying from basic APIs.
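A concrete sketch of that trap (my example, sizes are arbitrary):

```python
n = 50_000
as_list = list(range(n))
as_set = set(as_list)

# Same operator, very different cost: 'in' on a list scans every element
# (O(n)), while on a set it hashes the candidate (O(1) on average).
assert n - 1 in as_list  # linear scan on every call
assert n - 1 in as_set   # constant time on average

# Membership tests inside a loop are where it bites: against the list this
# comprehension would be O(n^2); against the set it stays linear.
missing = [x for x in range(n, n + 5) if x not in as_set]
assert missing == [50000, 50001, 50002, 50003, 50004]
```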


While it seems like you're moving goalposts, this example also appears to be something a person could know without touching on nested loops or O(n) at all.

Case in point, 10s of Googling produced this link: http://stackoverflow.com/questions/2831212/python-sets-vs-li... ...and I don't even use Python yet.


Is it possible to write working code without understanding O()? Definitely.

Is it helpful to understand? Definitely.


Of course, but neither was your point. You were rebutting that algo & bintree knowledge are unnecessary in the vast majority of companies, in favor of the pursuit of O(n) knowledge in candidates. None of your examples validated this, and "should" became merely "helpful."


I suppose I considered it reasonable that things which are helpful should be done.

