Solving dynamic programming interview problems
461 points by otasevic | 9 months ago | 164 comments

I've done a fair amount of interviews in my professional career, both as an engineer at Google and for my own startup.

In an engineering interview you want to maximize information divided by time: you want to learn as much as possible about whether the candidate would be a good fit for the company and spend as little time as possible doing so (because you have other things to do, such as interviewing more candidates).

In my experience these kinds of interview questions have a very poor information-to-time ratio. They are poor on information because they may give you an idea of how well the candidate can do on puzzle questions, but not so much how the candidate would do on actual real-world assignments. And they are especially poor on the denominator (time), because you are probably going to spend at least an hour with the candidate before you get past the obvious stuff.

Also, I'd guess about 70% of (pre-qualified) candidates would outright fail the question, so if you let this influence your hiring decision, given that the question is really quite "puzzly", you're inevitably going to miss out on a lot of talent (the kind that does well on actual work assignments).
Also, it depends on the interviewer. I have been on both sides of the table, and I can tell you that as an interviewer you need to convey these types of questions very clearly. An interview is typically 45 minutes at most companies, of which about 5-10 minutes go to introductions etc. That leaves about 35 minutes, and if you spend 15-20 of them explaining a problem, leaving the candidate only 15-20 minutes to solve a dynamic programming question, that is very unfair on your part.

Most companies in the Bay Area have 3-4 whiteboard interview rounds, and at every place I interviewed (FANG and other top Bay Area tech companies) I experienced at least one interview where the interviewer wasted a large chunk of the time just explaining the problem. As an interviewer, it is really important to keep the time aspect in mind and choose a problem that can be explained easily. The problem itself can be hard, but story-based problems waste the candidate's time and have little value in assessing their ability.

PS: I interviewed at one of the big tech companies (the one you all know and I won't name), and in one of the technical rounds it took the interviewer 20 minutes to explain the problem. The problem basically boiled down to finding the largest number at any time in a given sequence without sorting the array. It had a background story about cell towers where each tower had a strength and blah blah. The interviewer also had communication issues.
> where about 5-10 minutes are wasted in introduction etc.

That tells me a lot of concerning things about the organization, if you consider that "wasted time." You're missing out on a lot of important information there.
If you interview at a place like Google, introductions only really serve to make the candidate more comfortable. As an interviewer, even if the candidate gets and takes the job, you are unlikely to ever meet them again in a company so large. The introductions won't give you any valuable hiring information, because you won't be serving on the hiring committee anyway, and the hiring committee won't care about your opinion of the candidate's experience. I feel that spending more than 5 minutes on it is a waste of the candidate's time, time in which they could be demonstrating the skill the interview is meant to test.
Or the individual. Large companies are made up of many heterogeneous people.
I very much disagree with this assessment, the reason being that, as an interviewer, the least important part for me is whether or not the interviewee "gets" the problem initially. I have no problem giving the interviewee tons of hints: (a) that it's a dynamic programming problem, and (b) what the different cases are.

What I'm really interested in at that point, and what I think tells me a ton of valuable information about the candidate, is how capably they are then able to turn the algorithm into actual code. With many folks you can basically tell them the entire recursive structure that is necessary and they still can't translate it into workable code. "How fast can you turn an algorithm into code" is one of the critical skills for all developers, and these kinds of problems give me good insight into that skill.
> I don't care if the candidate "gets" the problem

> "How fast can you turn algorithm into code" is one of the critical skills of all developers.

I don't understand this industry anymore. If I know anything about software, it's that solving the right problems is the main skill of an engineer. Then, figuring out an algorithm is the hard part of solving any problem; turning it into code is usually never as hard (except for technical nuances), and especially not how quickly it becomes code. I don't see how someone can value "how fast can you turn an algorithm into code" over "understanding and solving the right problem".
Yup. Building up a shared understanding of a potential solution, and then fluency in converting that shared understanding into code, is my goal in a code-pairing interview.

Crucially, I don't think it's important for the candidate to have critical insight into the problem and come up with a solution unaided. We don't work alone; if we can come up with a solution together, and the candidate can be trusted to implement it, that's what we need.
Most interviews are just technology-wrapped ways to confirm our initial biases about whether we like a person or not. One could ace an interview, but if that person is not liked or relatable, they won't get the job. OTOH, when we like somebody, we try to help even if they are not up to the required level. The only exception I can think of is when a team needs to hire a scapegoat before a performance review, to be fired when the expected axe lands on the team and somebody has to go. I've seen this play out at Google, FB, MS, etc., regardless of what each company thought about its "scientific" interview process.

Next time, in an interview, just for fun, try to ace all the technical questions but contradict some notions the interviewer holds in high regard (these can usually be inferred pretty quickly during the initial conversation); I am 99% sure you won't get the job. Then, in another one, perform as just an average tech candidate but amplify your agreeability with the interviewer's ideas. Which would you guess gives you a (much) better success ratio?
> Also I guess about 70% of (pre-qualified) candidates would outright fail the question

This is probably right. Companies still use these questions, though, because they do a good job of failing candidates who are not technical enough for the role. What you're essentially doing is filtering out bad candidates and then selecting from the good ones based on luck.
In other words, the companies are optimizing for precision rather than recall, which makes sense.
I've done a lot of interviewing and participated in a lot of hiring decisions at a very large tech company. I think our process is guided far more by tradition than by deliberate optimization for anything in particular. :)
> In my experience these kinds of interview questions have a very poor information-to-time ratio. They are poor on information because they may give you an idea how well the candidate can do on puzzle questions but not so much how the candidate would do on actual real-world assignments.

I agree with you quite a bit, but it looks to me like there's an assumption buried in there that the only reason to ask interview questions is to evaluate individual performance on heads-down coding problems, which is not necessarily representative of a real work environment.

The information I look for when interviewing includes: How do people behave when asked questions they don't know? How will a candidate act in meetings? How curious is the candidate when faced with a tough question? Do they verbalize their thought process? Do they start making things up and/or try to appear like they know the answer when they don't? Do they gravitate towards certain kinds of problems? Do they ask questions or try to tough it out alone?

I've never asked a DP problem in an interview, but I do like to get an idea of the candidate's limits and boundaries, so I try to ramp up the difficulty until we hit questions they have trouble with. (I do warn them in advance.) I don't really want to hear correct answers to simple questions; I want to see how far they can get before they get stuck. I also want to hire people smarter than me.

If you only look for coding efficiency in a candidate, then yes, your information-over-time ratio might be low. (And it might be surprisingly low for all your questions.) But if you expand the kinds of information you collect to include social factors, behavior, knowledge limits, and more, then it's really not that bad.
Speaking as someone who finds DP problems easy, I wouldn't figure it out from this approach. The way I'd think about and tackle it in an interview is this:

1. Write a recursive solution.

2. Memoize.

If you can solve it this way, then you have a DP problem. Of course this forces you into the top-down (aka recursive) approach, but in an interview, "easier to reason about" is all that matters.

Also, the tradeoffs in section 5 have a mistake. Memory can and frequently does go either way. Top-down can let you recognize which states you never need to think through, but a bottom-up (aka iterative) approach can let you discard memory after finishing an iteration. The memory savings from that can be considerable.
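To make the two-step recipe concrete, here is a minimal Python sketch. The climbing-stairs problem and all function names are my own illustrative choices, not examples from the thread; the bottom-up variant at the end illustrates the memory point above.

```python
from functools import lru_cache

# Hypothetical example: count the distinct ways to climb n stairs,
# taking 1 or 2 steps at a time.

# Step 1: write a recursive solution (exponential time).
def climb_naive(n):
    if n <= 1:
        return 1
    return climb_naive(n - 1) + climb_naive(n - 2)

# Step 2: memoize. Same code, but now each state is computed once.
@lru_cache(maxsize=None)
def climb_memo(n):
    if n <= 1:
        return 1
    return climb_memo(n - 1) + climb_memo(n - 2)

# Bottom-up variant: only the last two values are kept, so memory
# drops from O(n) cached states to O(1).
def climb_iter(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(climb_memo(40), climb_iter(40))  # 165580141 165580141
```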
Ah, you beat me to it. I came to make the same suggestion.

DP is fancy caching, but you have to think about the space of the solution to do it right. It's not just easier to cache the results and use the recursive solution; the code is often smaller and clearer too.

I spent some time understanding and coding up edit distance between trees DP-style, which is a fair bit trickier than edit distance between strings. At the end of the whole project, I wished I had simply memoized.
Sometimes DP problems are complex in a convoluted way, employing weird tricks before you can see the DP structure; it's best to take some advanced DP problems from TopCoder or ICPC and dissect them in detail. Trivial DP problems have already been phased out of interviews, unless you're looking at junior positions.
When I interviewed at $BigCompany, I really enjoyed the whiteboard programming problems I got. But I noticed that 2/3 of them ended up with recursive solutions. Meanwhile, I spend less than 1% of my real professional programming time writing recursive code, and it's a bit silly that so much focus is on such a relatively obscure technique.

I think it's a mistake to think of whiteboard problems as attempting to recreate a real work environment. They are a proxy for the kind of quality you are looking for that can be "measured" within the allotted time. They can show that the person is prepared (they studied), generally knowledgeable (they remember obscure stuff), or just clever (they come up with interesting approaches to the problem). I'm not surprised you see both excessively simple problems (to make sure they aren't ridiculously unqualified) and excessively obscure ones, since the first essentially tests your reflexes and the second tries to determine whether you learned your job "by rote".

They're used because "that's what company X uses". People are cargo-culting each other and making up post-hoc rationalizations for their behavior. Unless you work for a major tech company where optimizing the false positive rate is your primary concern, using these problems is counterproductive.

Yeah, it's philosophically valid that you'd want to measure certain critical things that are different from the everyday grind. But I think the simpler explanation is that these interviewers mostly had picked "cool" recursion problems. I did get the job, and seeing $BigCompany's hiring process from the inside didn't contradict this.
 You can solve most recursive problems with a loop and a stack. That way you aren't making a bunch of function calls eating up memory.
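As a sketch of that transformation (a hypothetical tree-sum example of my own; nodes are assumed to be `(value, left, right)` tuples with `None` marking an empty subtree):

```python
# A recursive tree sum rewritten as a loop with an explicit stack,
# trading call frames for a plain list.
def tree_sum_recursive(node):
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum_recursive(left) + tree_sum_recursive(right)

def tree_sum_iterative(root):
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        value, left, right = node
        total += value
        stack.append(left)
        stack.append(right)
    return total

tree = (1, (2, None, None), (3, (4, None, None), None))
print(tree_sum_recursive(tree), tree_sum_iterative(tree))  # 10 10
```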
 Is recursion really worth the loss of clarity? Almost always no. The more clever you are in your code, the less likely anyone will ever see it (or want to).
Recursion is usually clearer than iteration because it assigns names to the work being done. I find that much easier to read. Better yet is when you can state your cases separately; that's even more readable.
I don't think you represent the average coder. I'm willing to bet that most people, if shown 10 recursive and 10 iterative solutions to the same problems, would admit the iterative approach is more intuitive. Especially since with a for loop you can control the number of iterations, which has a number of optimization benefits. If it were just while loops vs. recursion there wouldn't be much of a difference, but that is not the case.
You might be able to trace the execution of the iterative code more easily, but in my experience it is often much less clear why it produces the correct result and how that code was written in the first place.

Take a look at the Wikipedia page for computing Levenshtein distance: https://en.wikipedia.org/wiki/Levenshtein_distance#Computing...

The recursive version needs barely any explanation, but ask me to carry it out by hand and I'm sure I'd pretty quickly get lost. The iterative version needs a lot more explanation for why it is the way it is, but I also think I could carry it out on paper quite easily.
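For reference, the recursive definition being discussed looks roughly like this in Python (my sketch, memoized so it runs in reasonable time; the string slicing keeps the code short at the cost of extra copying):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lev(a, b):
    if not a:
        return len(b)   # insert everything remaining in b
    if not b:
        return len(a)   # delete everything remaining in a
    if a[0] == b[0]:
        return lev(a[1:], b[1:])
    return 1 + min(
        lev(a[1:], b),      # deletion
        lev(a, b[1:]),      # insertion
        lev(a[1:], b[1:]),  # substitution
    )

print(lev("kitten", "sitting"))  # 3
```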
I mean, if you label all your variables i, j, and k and use minimal formatting or bracketing, then I see your point that it's harder to read. What's complicated about an iteration? If it works properly the first time, chances are it will continue to work the millionth time. If you see problems, it's because something is modifying state between runs, but that isn't a fault of the iterative strategy; it's the fault of a bad programmer. Conversely, with recursion you must always make sure the stopping condition and all base cases are handled. Forget one corner base case and you've got a rare production bug.
> Whats complicated about an iteration

What's complicated about a recursion?

> If you see problems, its because something is modifying it between runs, but that wasn't a fault of the iterative strategy, it was the fault of a bad programmer.

> Conversely, you must always make sure the stopping condition and all base cases are met during recursion.

You seem to be applying a double standard here.

> Forget one corner base case and you got a rare production bug.

Base cases are usually much easier to reason about.
Then why does NASA consider it unsafe for mission-critical code?

How about unknown potential stack size?

How about factoring a large number with recursion?

Everything recursive can be transformed to iterative, and yeah, sometimes it's not as sexy, but neither is a helmet.
> Then why does NASA consider it unsafe for mission critical code?

They also proscribe unbounded iteration (point 2). In any case, NASA's guidelines for mission-critical code are not necessarily good guidelines for general software engineering, given the constraints involved. It's also worth noting that recursive solutions are probably more amenable to static analysis and automated theorem proving.

> How about unknown potential stack size?

If stack size is a problem, try an iterative solution.

> How about factoring a large number with recursion?

Go with iteration.

You keep editing your answer to add more cases where iteration is the way to go. I'm not disputing that there are use cases where iteration is appropriate.
> Then why does NASA consider it unsafe for mission critical code?

More like they're using an old Fortran 77 environment which doesn't support recursive functions.
No, that's incorrect. Their rules are C guidelines, and they are easy to Google; you might want to do that before making assumptions.

NASA's rules, the ones being referenced above, are designed for safety. They require code to be easy to statically analyze and to have absolutely predictable behavior. Also to be avoided: memory allocation, unbounded loops, function pointers, preprocessor macros.
 Most of the numerical code they're going to use is in Fortran, and interlanguage calling convention and the runtime might pose a problem.This is in addition to not using recursive functions being pretty standard in anything embedded. Early computers and embedded systems had very limited stack space or had calling conventions that made recursion impossible.
> Most of the numerical code they're going to use is in Fortran, and interlanguage calling convention and the runtime might pose a problem.

The rationale they used for these rules was written down. It has nothing to do with Fortran. I've offered links that you can read; you're making more assumptions. If there's C-to-Fortran calling at all, then recursion presents zero extra difficulty. Once you can make any function call, you can make all function calls.

> This is in addition to not using recursive functions being pretty standard in anything embedded.

It's true that for small embedded devices, recursion is not used often. It's also true that function pointers, heap allocations, and unbounded loops are generally avoided too. Though often main() in an embedded system is a while(true){} loop; I wouldn't be surprised to see that at NASA.

One could argue that all 10 of these NASA rules represent some standard practice in embedded code and/or some degree of common sense. They're not claiming to be new or non-standard or unintuitive or innovative; they simply wrote down what people agreed are best practices.
More like they were using C, saw the prospect of unbounded stack calls as unreasonable on a computer with limited RAM, and banned recursion. Oh wait, that's exactly what happened, because iteration is safer than recursion.
Iteration is not inherently safer than recursion. NASA also banned while(true) iteration. The important part is "fixed upper bounds":

> Give all loops a fixed upper bound. It must be trivially possible for a checking tool to prove statically that the loop cannot exceed a preset upper bound on the number of iterations. If a tool cannot prove the loop bound statically, the rule is considered violated.
 The static analysis tools have a harder time parsing the upper bounds on recursive functions, and so do the engineers doing the code reviews for similar reasons.This isn't just a NASA thing. Pretty much any embedded coding standard says the same thing. The JSF C++ standard, and MISRA-C I know both do as well, just off the top of my head.
That's a restriction on the environment, which has absolutely nothing to do with how readable a certain piece of code is. In fact it argues the opposite: we force you to use a less readable version of your code because the elegant one, while easier to read, may have negative consequences due to implementation details.

That's a really good trade-off for them, but it does not necessarily help readability.
But then your code is not portable, because it is now susceptible to arbitrary stack-depth changes blowing up your program. None of your arguments hold up against the overwhelmingly superior iterative approach: recursion is the same as iteration plus downsides. I mean, seriously, name the last time you had a stack overflow and weren't doing recursion. That's an entire class of bugs introduced or eliminated by a simple design decision. That's how you write superior code, for now and for the future.

You shouldn't have to know about the implementation when writing portable code. If you introduce recursion, you now need to worry about the implementation, since the machine's max stack size is now an issue and you've broken the abstraction. And what exactly did you gain that outweighs the cons?
You are confusing the implementation of a language with readability. And I honestly don't remember the last time I had a stack overflow; there is something known as tail-call optimization.
> Then why does NASA consider it unsafe for mission critical code?

You started with the assertion that iterative implementations are more intuitive and easier to read, so this is a bit of goalpost-moving. Write an iterative pseudocode DFS or quicksort. How "intuitive" does that look?
I'm saying that there are so many downsides, both obvious and sneaky, associated with recursion that it makes almost no sense to use it when the iterative approach is usually safer, doesn't have the headache of unbounded stack growth, and can be more easily parallelized with things like OpenMP.
That's what you are saying now; this is what you were saying before:

> I'm willing to bet most people if shown 10 recursive and 10 iterative solutions to the same problems would admit the iterative approach is more intuitive.

Now you are at NASA sending probes to asteroid Weasel 39812.
> > Whats complicated about an iteration

> What's complicated about a recursion?

Especially as iteration is just a special case of recursion :)
In an iterative solution, you have to get your loop conditions correct, and you have to create a bunch of inputs before you know how you are going to use them: sort of a "solution in search of a problem". Recursion takes a problem and expresses it in terms of similar, but smaller, unsolved problems, unless it can be solved directly (the base case). It's a "problem in search of a solution" that always makes progress (unless you have a bad algorithm, as always).

If a recursive solution works on a small input, it will work on a big input. If you missed a base case, you'll see it immediately, because trivial (literally!) input will make your solution fail to terminate.

In production, the main problem with recursion is stack size limits (or, more generally, memory limits) if you can't or don't use tail-call elimination.
Very much agree. If you are working on any kind of tree structure, graph, parse tree, etc., recursion can be really beautiful and clear.

My theory: folks like iteration because 90% of the "for" loops they write are really just "foreach".
> folks like iteration because 90% of the "for" loops they write are really just "foreach".

I didn't understand this comment. What do you mean by "for vs. foreach"? Can you elaborate?
I guess "foreach" is where each iteration is an independent operation, vs. a generic "for" loop where an iteration depends on the results of previous iterations.
 It really depends on the kind of problem. For example, recursive solutions to linked-list/tree/graph problems are typically substantially easier to understand than their iterative counterparts, in large part because the data structures themselves are recursively defined.
 Most recursive functions have a tree structure, either in the control flow or data flow. For loops don't really cut it; you minimally need auxiliary storage to track which branch you're currently on for every ancestor level, either explicitly by pushing indexes or structures onto a stack or queue, or implicitly with a work stack or queue. That's a bunch of accounting in an iterative solution you don't need with a recursive solution. The iterative solution will end up harder to read as a result. It'll also be a lot harder to reason about if the tree-like structure of the code is obscured.If you need to return values for processing each child, and integrate them at the parent, then your solution gets tougher to reason about, as there's an ordering constraint and a dependency.Recursion doesn't come up super often in most domains, but it's not rare. E.g. doing anything non-trivial with the DOM in a web front end, parsing an XML or JSON document, etc.
Agreed. Hacker News has a greater-than-average number of developers who use Haskell, F#, OCaml, Scheme, and other languages which heavily promote recursion. I'm not sure if I personally find iteration easier because I learned it first, because it matches my brain better, or if it is indeed easier for most of the population.
 Depends on the problem. Some problems have very explicit recursive structure (eg http://rosettacode.org/wiki/Arithmetic_evaluation) such that an iterative solution is more "unnatural". Not saying that most problems are like that, just that this shows it depends on the problem.
 It's only more intuitive to see the shape of the serial execution. That same property makes it more difficult to prove qualities about the code (say, correctness) because you need to keep track of state in your head as you trace through the program, rather than encoding it structurally as you would with a recursive solution.So, intuitive is not so valuable unless you get correctness out of it.
You generally need to be familiar with making arguments about aggregate complexity (e.g., every element will only be visited X times because..., so the complexity is O(N)) to show the complexity of a recursive solution, whereas for an iterative solution the complexity is generally more apparent, in my experience.
 All dynamic programming algorithms are recursive. They are essentially a (constructive) proof by induction.For clarity, perhaps one can include a "proof of correctness" of the algorithm in the comments.
There are problems that have clearer solutions when expressed recursively, and problems that have clearer solutions when expressed iteratively. Having both tools in your toolbox, and knowing when to use each, is part and parcel of developing yourself as a craftsman or craftswoman programmer. In addition, knowing when to use memoing in producing your solution is another tool to put in your toolbox.

Yes, every recursive function has an iterative equivalent, but the iterative version can be far more complex. The best example is that lovely little function, the Ackermann-Peter function: the recursive version is just a few lines long, while the iterative version runs to quite a few pages.

There are specific schemas by which the recursive form can be rewritten as an iterative solution. You can find these detailed in old gems like "Algorithms + Data Structures = Programs".
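For reference, the recursive Ackermann-Peter function really is just a few lines (a Python sketch of my own; keep the arguments tiny, since both the values and the recursion depth explode quickly):

```python
def ack(m, n):
    """Ackermann-Peter function, straight from its recursive definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3), ack(3, 3))  # 9 61
```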
 It depends on the particulars of a problem, but structural recursion is often substantially clearer. One reason is that it enables more abstract reasoning about the behavior of an algorithm. You can reason like you would for an induction proof, instead of mentally executing steps in a loop (denotational vs. operational).
 Why are you assuming that the recursive solution is less clear?
Experience tells me so. To be fair, some items like Fibonacci numbers are probably equally clear either way.
Divide-and-conquer type algorithms usually lend themselves to naive recursive solutions more often than not. DP usually requires you to find that solution and then find some clever relationships that allow you to build up the final solution from the bottom up.
If you systematically analyze it, you don't need to be clever on a per-problem basis. Every recursive solution can be methodically converted to DP.
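A sketch of that methodical conversion (the coin-change problem and the coin system are my own illustrative choices): take the memoized recurrence and fill a table in the order its dependencies dictate.

```python
from functools import lru_cache

COINS = (1, 3, 4)  # made-up coin system

# Top-down: the recurrence, memoized.
@lru_cache(maxsize=None)
def min_coins_td(amount):
    if amount == 0:
        return 0
    return 1 + min(min_coins_td(amount - c) for c in COINS if c <= amount)

# Bottom-up: the same recurrence, converted mechanically into a
# table filled in increasing order of amount (dependency order).
def min_coins_bu(amount):
    table = [0] * (amount + 1)
    for a in range(1, amount + 1):
        table[a] = 1 + min(table[a - c] for c in COINS if c <= a)
    return table[amount]

print(min_coins_td(6), min_coins_bu(6))  # 2 2
```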
Yes, it's a form of pattern recognition. You can get very good at it and do it mechanically, just like solving integrals. You can spend a lot of time getting very good at them, or you can just use memorization/rsolve, just like you can use Maple or evaluate integrals numerically.
Sure, there are some problems that lend themselves to it nicely; I already said as much. But in general, for an INTERVIEW problem, it is a poor skill to dwell on or even bother testing for, since real-world coding is simply not done that way, and God forbid what might happen when someone not as capable (but cheaper or younger) has to take over your code base.
 We're talking about ACM style search problems that are used for interviews, and not about iterating over an array recursively. For these, the recursive formulation is the naive and inefficient way of solving them.
I have a fantasy. In the fantasy, an interview candidate says "this problem has an optimal substructure" or "this problem can be broken into overlapping subproblems," and then observes that "reusing the results of overlapping subproblems to avoid re-computing them is sometimes called dynamic programming."

At this point, balloons and confetti fall from the ceiling as Donald Knuth jumps out from under the table to hand the candidate an award for being the first known example of a candidate using "dynamic programming" correctly in a sentence.
 You know you’ve made it when you have Knuth on standby under the table
 does knuth also come and kick the guy out, if he fails to solve it?
 Or just call it caching the intermediate solutions.
 Potentially interesting data point: the company I'm at right now has been pretty successful for over a decade and has, to my knowledge, never once asked a dynamic programming question in an interview for any candidate in that entire time. We've managed to hire a lot of great developers and have very rarely had any real issues.
 Does the company you are at now pay developers the same as the companies that ask these types of questions? In my experience the companies asking these types of questions are picky because they can be.
I'd break that down like this:

Google-esque companies figure, "Let's just ask these hard puzzles to filter by IQ; everything practical (e.g., databases) can be learned."

However, some companies tend not to see these specialized puzzles as an IQ test, but rather as a memorization-based learned skill (learn these 10 algorithms and their performance metrics). To them, asking questions that represent day-to-day job challenges is a better predictor.
 They aren't picky because they can be. If that were true they wouldn't be complaining about the lack of qualified developers (pretty much every tech company I know of).Rather, they're picky because they (think they) need to be.Note that I'm not judging whether they are right or not.
False. I know hordes of engineers working on boring code at Google.
 That doesn't make it false.
 We pay very well for the area. We are not in Silicon Valley, but I know some of our salaries are higher than those of my friends who are at Google's MV campus.
It's not terribly impressive to have a higher salary than a GOOG employee, but higher total compensation after stock and bonuses would be saying something.
Interesting. What industry, if you don't want to name names? Total comp at Google for a "senior" (8-12 years of experience) is $300k-$400k. It's rare to hear about a non-SV darling paying developers that much.
 Don't want to speak for OP but it's highly possible he or she doesn't know how much Google-tier companies pay nowadays!
 And some boutique finance firms are made of money.
 Can you share your company name?
 I’ll let OP share it if they want, but FYI if you search a bit that information is freely available on the Internet.
That's awesome! I generally think that DP problems introduce more noise because they're relatively easy to prepare for, so you might easily miss a good engineer who just isn't prepared. I try to make that argument at the beginning of the blog post.

However, they're a reality of many interview loops, so it's better to be prepared.

Btw, what are some things that your company tests for, if you don't mind sharing?
Step 1. ur problem graph better be a dag

Step 2. ur sub problems better overlap

Step 3. time to table dat dag

Step 4. solve ur problems and build ur table graph the way a dag would: to-po-lo-gi-cal-ly
I think the DAG approach is a good one, but the problem is it's not great for being applied generally. For me, it's difficult to think of something like the "House Robber" problem as a DAG.
 Can someone explain what DAG stands for? Thanks.
 (Thanks)
(4) is what sets your approach apart from the OP's. Solve the problem by searching through the graph, using standard graph search. The OP's "iterative" approaches are close, but they waste massive amounts of time precomputing the entire table when they could search instead. Iteration with a stack is DFS; iteration with a FIFO queue is BFS; iteration with a priority queue is Dijkstra's if priorities are exact, or A* if the heuristic is admissible.
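A toy sketch of that observation (the graph and function names are hypothetical): the same loop yields DFS or BFS depending only on which end of the frontier you pop; swapping the frontier for a priority queue (heapq) would give Dijkstra's or A* in the same way.

```python
GRAPH = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def search(start, pop_last):
    """Generic search: pop_last=True behaves like DFS, False like BFS."""
    seen, order, frontier = set(), [], [start]
    while frontier:
        node = frontier.pop() if pop_last else frontier.pop(0)
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        frontier.extend(GRAPH[node])
    return order

print(search("a", pop_last=True))   # DFS-style order
print(search("a", pop_last=False))  # BFS order
```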
 Nice way to put it!
Unpopular opinion, but the best way to prepare for DP problems is to solve the well-known ones and memorize them and their recurrences. Only 2-3 companies like FB and Goog ask them (well, they're the only ones worth studying DP for anyway). Coming up with a recurrence on the spot is very hard. The edit-distance algorithm came from an award-winning ACM paper, and expecting someone who has never seen it before to code it up (even the memorized version) is ridiculous.
The best way to perform well in an interview is to have seen and worked on the problem at some point beforehand.

When I got a job at Google (in a previous life), two of the questions in my interview loop were ones that I had seen in previous interviews. I kept my mouth shut about that and faked brilliance in the moment. That is, I pretended to be stumped for a second, then I created a narrative where I had a sequence of 2 or 3 "Ah-ha!" moments where I figured out how to refine my solution.

I cued off the interviewer, waiting until they seemed just about to blurt out a hint, when I raised my hand and said, "WAIT! Maaaybe.... I can use a BFS instead of a DFS here and label the cells!" Then the interviewer would usually smile and nod in satisfaction.

Finally, I "stumbled on" the "right answer" and slammed out the code that I had pretty much memorized up to that point.

Make a stupid game, I'll play the stupid game, and have fun doing it.

For the record, I kicked ass at Google (getting promoted twice) before moving on to greener pastures.
Yep, 100% this. I recently finished a series of interviews after extensive preparation and had at least 3 sessions that went something like this. Practice enough problems and eventually you start seeing them crop up in real interviews.

Doesn't say much for the usefulness of such an interviewing paradigm, but that's a whole other conversation.
I recently had a programming interview where, at the whiteboard question, I said "this may be a dynamic programming question, let me see--" and the interviewer said "STOP! Stop, every time someone says that, they end up flopping and never getting anywhere. Don't go down that path, I'm telling you."

I think it had more to do with the interviewer being a poor interviewer, however.
I got screwed like that on my Google interview. You should not be blamed for assuming, when talking to Google, that the O(n!) or O(n^2) solution is not even worth talking about. So I flopped around on O(n), O(nm) and O(nlogn) solutions after trying to whiteboard a recurrence relation and giving up on O(1).

The interviewer had already decided that the crazy rules I related to him about the industry I was coming from were somehow personally my fault, so he had fun letting me twist in the wind.

Personally, I think that given how small the industry is, one of the goals of the interview process should be not to make an enemy of the candidate. Candidates have friends, and sometimes candidates come back in a few years after they've gotten more experience or you're looking for different skills. None of this will matter to Google until they find themselves in an MS-style hiring crisis in another five years when they aren't cool anymore.
 I think most people will agree that Google doesn't have the best hiring strategy for minimizing false negatives during Interviews. Fortunately for them, they have the luxury of only needing to protect against false positives. Not getting hired at Google was the best thing that ever happened to my career. I ended up earning far more at a successful startup where I used the rejection as motivation.
> You should not be blamed for assuming, when talking to Google, that the O(n!) or O(n^2) solution is not even worth talking about. So I flopped around on O(n), O(nm) and O(nlogn) solutions after trying to whiteboard a recurrence relationship and giving up on O(1).

Did they say that they don't even want to listen to an O(n^2) solution?
 I'd be super surprised if they did. Google interviewers usually encourage you to spit out an easily-verifiable O(n^2) (or even worse!) solution as fast as possible. That way, if you get stuck and honestly can't finesse a faster solution, you can code that up and provide a code sample. People still try to fake their way through the process so being able to code up something close to a compilable function is a useful metric.
 Yep. This is how I conduct my interviews. I don't ask trick questions. Many of my candidates can quickly notice the simple solution but fail to write the solution in code.
 Yeah, this wasn't a coding question.
 > You should not be blamed for assuming, when talking to Google, that the O(n!) or O(n^2) solution is not even worth talking about.If the naive solution is O(n!) then describing an O(n^2) solution is perfectly acceptable.
 Interesting fact: the standard DP algorithm for TSP reduces the running time from O(n!) to O(n^2 2^n)
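For the curious, that standard DP is the Held-Karp algorithm. A compact (and unoptimized) Python sketch of it, my own code assuming a `dist` matrix with city 0 as the start:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for TSP: O(n^2 * 2^n) time instead of O(n!).
    dist is an n x n distance matrix; returns the length of the
    shortest tour that starts and ends at city 0."""
    n = len(dist)
    # dp[(mask, j)]: cheapest path that starts at city 0, visits exactly
    # the cities in bitmask `mask` (always containing 0 and j), ends at j.
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)  # same set, minus the endpoint j
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80 (tour 0-1-3-2-0)
```

Exponential either way, but 2^n beats n! decisively: for n = 20, n^2 * 2^n is about 4 * 10^8 while n! is about 2 * 10^18.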
Long ago, I had applied for a software position at Google. They brought me in, then said "We are going to interview you for a systems position." I was like "umm, but I applied for a software position and am not prepared for a systems interview," and they said "roll with it, let's see." As I told them, I bombed, at least partly because it threw me for a total loop, and I was pretty shocked at the rudeness of that.

I didn't get further along in that process because of that stupidity and have never even considered them as a place to work since, and I am an SRE/PE these days.
For me it was a bit the other way around. I solved the problem in a brute-force way and, at the interviewer's prompting, was flopping around trying to come up with a better algorithm. Then he mentioned that I should consider DP.

Unfortunately my DP was really bad, so I knew right then I was going to flop. And flop I did.

For me the two weakest points are DP, and coming up with the right O() estimate for an algorithm that I just created on the whiteboard and am looking at for the first time in my life. Would love advice on how to get good at both.
For DP? Practice a couple on HackerRank, CodeFights, etc. It starts to make sense after you do a few.

For Big O notation? Just think about how many times you're iterating through things (in the worst possible case).

E.g., got two nested for loops each going to N? We're going to loop N times on the outer loop, and each iteration of the inner loop goes through N times. That's N x N, so O(N^2) (easy example, obviously).
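To make the iteration-counting heuristic concrete, here is a toy snippet (my own, just for illustration) that literally counts the work two nested loops do:

```python
# Two nested loops over n items do n * n units of work, i.e. O(n^2).
def count_pairs(items):
    ops = 0
    for a in items:      # runs n times
        for b in items:  # runs n times per outer iteration
            ops += 1
    return ops

print(count_pairs(range(100)))  # 100 * 100 = 10000
```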
 Woah! What was the problem? Maybe then we can say if he was right or not.
I often use the declarative programming language Prolog to solve dynamic programming tasks, because it is easy to type and helps you to declaratively express what a solution looks like. After you have expressed what must be the case in a solution, you can easily enable memoization on top of the existing code.

For example, here is how you can use Prolog to solve the task from the article. The following Prolog predicate is true iff a given runway (represented as a list of "t" and "f" elements) is safe with a given speed:

    :- use_module(library(clpfd)).

    safe_runway(0, [t|_]).
    safe_runway(Speed0, Rs) :-
        Speed0 #> 0,
        Rs = [t|_],
        (   Speed = Speed0
        ;   Speed #= Speed0 - 1
        ;   Speed #= Speed0 + 1
        ),
        length(Prefix, Speed),
        append(Prefix, Rest, Rs),
        safe_runway(Speed, Rest).

Sample query and answer:

    ?- safe_runway(4, [t,f,t,t,t,f,t,t,f,t,t]).
    true .

One interesting aspect of this solution is that we can generalize this query, and also use the same program to answer the question: Which speeds are actually safe for a given runway? For example:

    ?- safe_runway(Speed, [t,f,t,t,t,f,t,t,f,t,t]).
    Speed = 0 ;
    Speed = 2 ;
    etc.

To enable memoization for this task, you only have to use your Prolog system's tabling mechanism. For example, in SWI-Prolog, you turn this into a dynamic programming solution by adding the directive

    :- table safe_runway/2.

This makes the Prolog engine automatically remember and recall solutions it has already computed.
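For readers who don't know Prolog, here is roughly the same idea in memoized Python. This is my translation of the predicate above (so the semantics are my reading of it, not the article's reference code): the runway is a sequence of booleans (True = safe ground), and from a safe cell you may keep the speed or change it by one, then jump that many cells forward; you are done when the speed reaches 0 on safe ground.

```python
from functools import lru_cache

def safe_runway(speed, runway):
    runway = tuple(bool(c) for c in runway)

    @lru_cache(maxsize=None)
    def safe(pos, spd):
        if pos >= len(runway) or not runway[pos]:
            return False          # off the runway, or on unsafe ground
        if spd == 0:
            return True           # stopped on safe ground
        # keep the speed or change it by one, then jump that far forward
        return any(safe(pos + s, s)
                   for s in (spd, spd - 1, spd + 1) if s >= 0)

    return safe(0, speed)

print(safe_runway(4, [True, False, True, True, True,
                      False, True, True, False, True, True]))  # True
```

The `lru_cache` decorator plays the role of SWI-Prolog's tabling here: each (position, speed) subproblem is solved at most once.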
So, the OP has:

> Dynamic Programming – 7 Steps to Solve any DP Interview Problem

Here I see "any"!!!

Dynamic programming is a huge field from the work of R. Bellman, G. Nemhauser, R. Rockafellar, R. Wets, D. Bertsekas, E. Dynkin, W. Fleming, S. Shreve, and more. E.g., there is, with TeX markup:

Stuart E.\ Dreyfus and Averill M.\ Law, {\it The Art and Theory of Dynamic Programming,\/} ISBN 0-12-221860-4, Academic Press, New York, 1977.

Dimitri P.\ Bertsekas, {\it Dynamic Programming: Deterministic and Stochastic Models,\/} ISBN 0-13-221581-0, Prentice-Hall, Englewood Cliffs, NJ, 1987.

George L.\ Nemhauser, {\it Dynamic Programming,\/} ISBN 0-471-63150-7, John Wiley and Sons, New York, 1966.

E.\ B.\ Dynkin and A.\ A.\ Yushkevich, {\it Controlled Markov Processes,\/} ISBN 0-387-90387-9, Springer-Verlag, Berlin, 1979.

Dimitri P.\ Bertsekas and Steven E.\ Shreve, {\it Stochastic Optimal Control: The Discrete Time Case,\/} ISBN 0-12-093260-1, Academic Press, New York, 1978.

Wendell H.\ Fleming and Raymond W.\ Rishel, {\it Deterministic and Stochastic Optimal Control,\/} ISBN 0-387-90155-8, Springer-Verlag, Berlin, 1979.

some of my work, etc.

Dynamic programming has been and is a major interest of the Department of Operations Research and Financial Engineering (ORFE) at Princeton.

Uh, "any" seems a bit optimistic!
If I had to guess, I'd wager that the overlap in the Venn diagram of Dynamic Programming questions and interview questions is a narrow sliver.

That is, unless you're being hazed, the sort of questions that show up in an interview are at the shallow end of the pool.
 lots of places have a fetish for dp questions
 Good point. Thanks for the comment. It's likely a bit on the optimistic side, but this focuses on DP problems typically encountered in interviews.
Depends on who is doing the interview!!!

Now, if I were doing an interview asking about dynamic programming, how about scenario aggregation, multivariate spline approximations, neural net approximations, certainty equivalence with the Gaussian, non-inferior sets, measurable selection, etc.!!!!

One prof of mine wanted me to think about the role of potentials!
 And most of the interviewers asking these questions just want someone who can help develop yet another software as a service CRUD app....
Exactly. I've been a C, C++, and Java developer for the past 12 years and never used Big-O for anything other than as a shibboleth to get past the door. A couple of places practically used C++ itself as a gatekeeper: they want people who are smart enough to program in C++ and survive its rigorous interview process, but their main product is filling out forms and routing documents through a document management system.
I did have to do one algorithm-type interview in 1999, but the company was actually writing cross-platform (Windows console, Unix, and MVS) C code where we had to implement everything from scratch, so it made sense. But unless you're Google, Facebook, Netflix, etc., where you have to solve problems at a scale that no one has had to solve before, most algorithm-style questions are meaningless in your day-to-day work.
 Yep. We need superman, but we are a bicycle repair shop.Only superman will do, that is the only thing we know for sure.
 In another world, everyone is superman and we are looking for a bicycle repair man: https://www.youtube.com/watch?v=54CpPlCnM4I
 Hoped someone would get the monty python reference :D
Great post about how to attack solving a problem using DP. But this is not a good interview question.

It's a toy problem, not something you'll ever need to solve in real life. Maybe if you squint hard enough it's close to pathfinding algorithms, but be serious. I hate questions that aren't remotely relatable to something the candidate might experience. I get that interviews are short so you need tiny problems, but making them somewhat relevant will make comprehending the problem that much easier and give plenty of time for working on a solution.

There's also not much depth to it. Either they get a naive solution, get the DP solution, or just can't solve it. Maybe I'm just not creative enough, but the only follow-up I can think of is the typical 'how to test', and this isn't even a good question for that. For me as an interviewer, it is better to start with an easy problem, and then later add on additional complexity: for example, dealing with concurrency, how to generalize the solution for other cases, etc. These follow-ups usually don't even need full code written out, so you can get much deeper since it's faster. Whereas if you start from a hard/tricky problem and have to keep explaining it and hinting at how to solve it, both candidate and interviewer feel bad and you haven't learned much.

Not that this is a particularly hard problem; it will fit within an interview slot if they are on top of things. But it's similar enough to questions I hate asking/getting (e.g. min of maxes of sliding windows) that I would definitely never ask it.
In Python you often need sys.setrecursionlimit for the recursive solutions, since the default is really small.

I found out the hard way in a recent Google Code Jam problem[1] that even that wasn't enough, and sometimes you really do need the iterative solution to not time out. (I still believe that the limit for Python for this problem was set too low, since even the iterative solution required hand-optimizing the memory usage to pass, but the equivalent C or C++ solution didn't require any tweaks.)
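For concreteness, the limit fix looks like this (a toy snippet, not the Code Jam solution). Note it only lifts Python's own guard; a deep enough recursion can still exhaust the real OS stack, which is one more reason to fall back to an iterative version:

```python
import sys

# The default recursion limit (usually 1000) trips easily on DP problems
# with deep recursion chains; raising it is a common first fix.
sys.setrecursionlimit(20_000)

def depth(n):
    return 0 if n == 0 else 1 + depth(n - 1)

print(depth(10_000))  # would raise RecursionError under the default limit
```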
Yes that, or code the recursive solution in a heap-allocated space rather than on the stack. It somehow seems easier, safer, and more explicit and controllable to me to adjust the code & data to use manual recursion with backtracking than to try to adjust system stack limits. Often it consumes a lot less memory too, since you have more control over what your "stack frame" looks like; you don't have to store all your local variables for every step.
I've tried to compare a recursive tree DFS and an iterative one (with a stack in a Python list), both implemented in Python: recursive was slightly faster. I've not compared memory usage though. A function call has overhead, and theoretically the recursive approach should be slower, but list manipulation (append, pop) in Python is also not fast. In compiled languages results can differ (iterative will probably be faster).
Yeah, that's true. I would expect manual recursion in Python using append/pop on a list-based stack to be slower than native recursion. You might try pre-allocating your entire stack in a list or numpy array, and not using append & pop during the recursion at all. I might expect that to be faster than native recursion.
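Something like this sketch (my own toy code, with an assumed `(value, left, right)` tuple representation for tree nodes): the whole stack is reserved up front and a top index is tracked by hand, so the hot loop never calls append or pop.

```python
def iterative_sum(tree, max_depth=10_000):
    # Pre-allocated stack: never resized inside the loop.
    stack = [None] * max_depth
    top = 0
    stack[top] = tree
    top += 1
    total = 0
    while top:
        top -= 1
        node = stack[top]          # manual "pop"
        if node is not None:
            value, left, right = node
            total += value
            stack[top] = left      # manual "push"
            top += 1
            stack[top] = right
            top += 1
    return total

tree = (1, (2, None, None), (3, None, (4, None, None)))
print(iterative_sum(tree))  # 10
```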
Any examples of code for this approach? From what I can guess, you are implementing some kind of assembly-like approach with explicit saving of stack frames, but I am having a hard time imagining it being easier.
Not OP, but they probably mean something like the following toy problem (in C++):

    #include <vector>

    struct BinTree {
      long value;
      BinTree *left;
      BinTree *right;
    };

    long long RecursiveDFSSum(BinTree *node) {
      if (NULL == node) {
        return 0;
      }
      return (long long)node->value + RecursiveDFSSum(node->left) +
             RecursiveDFSSum(node->right);
    }

    long long IterativeDFSSum(BinTree *tree) {
      std::vector<BinTree *> custom_stack;
      custom_stack.push_back(tree);
      long long value = 0;
      while (!custom_stack.empty()) {
        BinTree *node = custom_stack.back();
        custom_stack.pop_back();
        if (NULL != node) {
          value += node->value;
          custom_stack.push_back(node->left);
          custom_stack.push_back(node->right);
        }
      }
      return value;
    }

Did not check this actually compiles etc., but you get the point. Both ways will give you the same solution in the same way, with the same time complexity, space complexity, etc., but the second one is not bound by the max call stack size (only by the max heap size). Additionally, the second one is slightly more space efficient, since the recursive solution saves an entire call frame onto the stack (e.g. stack pointer, return address) whereas the iterative solution just stores one pointer per stack item.
This is exactly right; thank you for providing the example. This is manual recursion in a heap-allocated space.

If you were coding a DP problem, then custom_stack might have a fixed size you can pre-allocate, and it might also be 2- or 3-dimensional.

For some image-based recursion, your backtracking doesn't even need to store real pointers in the stack frame. For example, when I've written a flood fill, I can store the return pointer as a one-pixel offset in as little as 2 or 3 bits, depending on whether I include diagonal pixels (4-surround vs 8-surround).
 Thanks! I was getting confused between stack data structure, function stack and stack space and had gotten to a weird mix in my head. The example cleared it up.
I think the OP might be referring to allocating and managing your own stack. You could use a list in Python for this.

Python memory management is automatic though; discussing stack vs heap doesn't make sense in the context of Python, well, for CPython at least. I'm not sure about other Python implementations.
> Python memory management is automatic though, discussing stack vs heap doesn't make sense in the context of Python,well for CPython at least. I'm not sure about other Python implementations.

I'm not sure I know what you mean about stack vs heap not making sense because of Python's memory manager. Will you elaborate?

The primary issue is that the default stack limit in Python is too small for some applications of recursion, which the comment before mine illustrates. This is true not just in Python, but in any language, since the stack size is generally a function of the process or OS, not a limit of the language. Heap-allocated stacks are a reasonable thing to do in any language.

You're right that you can solve that in Python by using a list. Python's memory management doesn't really affect one's ability to do so, right?

A secondary issue is that native recursion sometimes uses more memory than a manually heap-allocated "stack". If I make my own stack, I have complete control over and complete understanding of what's in memory and how much I use. With the native stack, it can be very opaque, and it's easy to chew up the already-too-small stack very quickly by accidentally having a large stack frame.
> "I'm not sure I know what you mean about stack vs heap not making sense because of Python's memory manager. Will you elaborate?"

In Python everything is an object. Python gives you a reference to that object when you create it. There is no way to tell Python (CPython anyway) in which memory space you would like it to create that object.
> There is no way to tell Python(CPython anyway) in which memory space you would it to create that object.

Ah right, that's because all objects are heap-allocated.

You choose heap by using an object for the stack, and rewriting your recursion to use (superficially) iterative code. You can choose stack allocation instead by using regular recursion: native function calls with local variables.

What you bring up is an interesting issue that can make recursion harder to understand in Python. Having local objects in the stack frame can cause both stack and heap allocation: pointers for the objects on the stack, and the object contents on the heap. Or, you might have global objects that aren't local to the recursive function call or the stack, in which case it's important to understand you're sharing data across function calls.

Generally speaking, you probably don't want individual heap allocations in a recursive function, so it's best not to have local objects. At least performance-wise.
> "You choose heap by using an object for the stack, and rewriting recursion using (superficially) iterative code. You can choose stack allocation instead by using regular recursion: native function calls with local variables."

I'm not following you. I believe these would both result in the same thing in Python. Python gives you a reference to an object. That object is stored in a private heap "somewhere." The reference to the object that Python gave you is stored on the stack. The object that it points to lives in the heap. This should be the same for both of your examples. I'm not sure what you mean by "native function calls." I am not familiar with this term.
Sorry, maybe I'm making it more confusing than it needs to be; I think you do understand the terms.

What we're talking about is the difference between calling a function recursively (a function that calls itself) and instead simulating recursion using a data structure posing as a stack, with an iterative function that doesn't call itself but instead pushes and pops into your fake stack data structure. You can either use the system's built-in stack (by calling functions), or create your own fake stack (by pushing/popping, writing/reading an array, etc.).

Using sys.setrecursionlimit() as mentioned at the very top of this thread only affects the system stack. By "native function calls", I just mean regular function calls. These are subject to the system's stack limit.

When you allocate an object and use it as a fake stack to replace the system stack, the size of the object is not subject to the system's stack limit (which is small; on PCs often a megabyte or two). It's only limited by the available size of the heap (which is normally large relative to the system stack limit, often gigabytes). When you make your own fake stack and use an iterative function, you can achieve a much greater recursion depth because your fake stack lives on the heap and not in the system stack.

Does that make more sense? The two cases are very different in Python, and using a fake heap-allocated stack, i.e. a Python object, is super useful.
I see, yes. It sounds like we are saying the same thing then. Cheers.
 Another cool Python DP hack is using @functools.lru_cache to perform the memoization (beware of the maxsize parameter though).
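For example (my snippet, not from the article), memoizing a naive exponential recurrence with the standard-library decorator:

```python
from functools import lru_cache

# maxsize=None keeps every result. A bounded maxsize can silently evict
# entries and reintroduce the exponential blowup, hence the "beware" above.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # answers instantly; uncached, this would never finish
```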
I watched live as Nikola solved this problem: https://www.youtube.com/watch?v=kKhnYLpME3w. It covered most of the requirements of dynamic programming. For me the interesting part was coming up with the algorithmic complexity (Big-O) of the solution.
I have always found the term DP to be a bit confusing. DP problems are essentially recursion + caching. I do not even like the word "memoise".
 "Dynamic programming" was intentionally a super-vague but cool sounding term, it wasn't meant to be descriptive (unfortunately). https://en.wikipedia.org/wiki/Dynamic_programming#History
 Indeed, it sounds like someone is just saying "memorize" incorrectly.
 That's what happens when you "recurse".
 Above all else, Dynamic Programming is useful for making yourself feel like a superior programmer and writing blog posts about it. In 10+ years of working in this industry, I can't recall using it once outside of interviews.
6 points and already hugged to death...?

    Error establishing a database connection
 Thanks for sharing the link while it was down. Appreciate it.
 with all due respect to author, here's why solving real problems > interview problems
I agree. This blog is hosted separately from our servers, so I went for a quick deployment of WP on Google Cloud. Learning: do not use a Google Cloud micro VM instance; apparently it cannot support even moderate traffic.
 Double thumbs up for fixing this!
 Hey everyone, sorry for this. It should be back up: http://blog.refdash.com/dynamic-programming-tutorial-example...
 Seems like the site needs to memoize the page rendering operation ;)
A tiny correction in the following:

    if (position + adjustedSpeed in memo and
        adjustedSpeed in memo[position + adjustedSpeed] and
        adjustedSpeed in memo[position + adjustedSpeed]):

The middle line is not needed.
 Thanks! Corrected it now to remove the duplicate line.
 It's a bad sign when a self-proclaimed authority lifts the definition of the topic from Wikipedia. Unless said authority wrote the Wikipedia entry, in which case I humbly apologize.
I just give them a real-world issue/bug/feature and ask them how they would complete it. Obviously I'm looking for as good an answer as they can give me without knowing my whole stack, but what's even better is if they ask questions about my stack. It shows they know how to do requirements gathering, and whether they ask the right questions.
 I think the problem is easier viewed as just a standard graph traversal (although you can view graph traversals as DP...)
Why are there so many companies trying to "solve hiring"? I get that it's a big expense to make a bad hire, but how are there like 10 different companies thinking they have an edge on

1. existing practices

2. each other
It's not only a big expense. It is a stressful, biased and highly ineffective process. There is very little (if any) correlation between performance in interviews today and work performance.

It's a hard problem that requires people to approach it seriously. And it is only getting worse with the rapid growth in the number of people getting into tech.

It's interesting to try to solve it.
Is it just me, or does the Big O algebra at the end not seem right?
What do you think is wrong? I did not notice any really obvious mistake. Admittedly you would miss the improved bound unless you actually modified the implementation to abort further evaluation once you reach the maximum speed that still allows stopping, but I assume that is implied. I also guess one could further improve the exponent to 1.25 by taking advantage of the fact that the maximum speed that still allows stopping decreases from left to right, but I did not actually think it through; it is just my intuition that you would gain another square root.
Being able to improve the exponent to 1.25 is probably wrong. After thinking about it more carefully, I think you end up with a number of steps proportional to the generalized harmonic number of order -0.5 of L, which seems to be Θ(L^1.5), but I am not really sure about that.
Thanks for this comment. I realized that I skipped a few steps in the explanation, so I am updating it now.

But essentially you see that the inequality is S^2 - S - 2L < 0. From that, you can work out that the roots of the quadratic are approximately (1) 1/2 + sqrt(2L) and (2) 1/2 - sqrt(2L) (the exact roots are (1 ± sqrt(1 + 8L))/2, and sqrt(1 + 8L)/2 ≈ sqrt(2L) for large L). That means that, approximately,

    (S - 1/2 - sqrt(2L)) * (S - 1/2 + sqrt(2L)) < 0

The second factor is always positive, which means that the first factor must be negative in order for the inequality to hold:

    => S < 1/2 + sqrt(2L)

which leads to O(sqrt(L)) as L approaches infinity. Does this make sense?
 Being able to solve dynamic programming problems may or may not get you a job but it certainly will get you a medal at high school programming/informatics Olympiads.
I really wanted to try out your service a couple of weeks ago, but it turns out you are only in the US. Any plans to open it up for Europe?
 Yes, currently it is only available for people searching for jobs in the US. I would say the answer is yes long-term, but probably not over the next few months.
 I was excited for the article based on the title, but the example problem makes absolutely no sense to me.
What kind of companies ask DP problems in an interview? Seems pretty absurd.
 A blog post Peter Norvig commented. Nice.
