The best online programming puzzles were the ones ITA Software posted to entice people to apply for jobs there.
What was effective about them was that, reading the problem, you wanted to know what a solution would look like, and knew the only way to know was to solve it. Having solved it, you wanted to know how your solution compared to others'. The only way to find out was to send it in. Having sent it, you might get an invitation to visit, and then how could you not?
After Google bought ITA, the puzzles were up for a while longer, then seemed to vanish. If they are archived somewhere, I would like to know where.
> Roll Your Own Chat Server
> Implement a simple standalone TCP-based chat server, using the following protocol:
> When implementing the server, aim for scalability and robustness. (Many submissions fail due to lack of robustness!)
> Please do not use an existing networking framework
What I sent in was the Add-a-gram (on that page, stashed behind the "Puzzle Archive" link). I sent one solution, and when they called, they admitted others were faster, so I redid it that evening, ten times faster, and sent that. What they never told me, even after I had been there for years, was that my second solution was faster than theirs or any others they had on file. It looks like they retired it right after I started there, maybe because I had shown it didn't actually require backtracking, which they were trying to filter for. ITA was mostly a Lisp shop, so they wanted people who would always reach for recursion first. My solutions were relentlessly iterative.
Some years after I left ITA, they posted the Bitvectors puzzle, which I had to solve, in a fever, but never sent off. (At the time I think it had a different name.) That solution also didn't require backtracking.
And some algorithms use mutual recursion to solve problems; putting those into a non-recursive formulation often results in incredibly messy code. Mutual recursion is powerful enough that F# (among others) added an entire keyword to the language simply to support it.
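A minimal sketch of the shape being described (my toy example, not from any particular codebase); in F# the two definitions would be joined with `let rec ... and ...`:

```python
# Two mutually recursive functions: each is trivial on its own, but
# unwinding the pair into a single loop obscures the structure.
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)
```

Silly as a computation, but the pattern is the same one grammars and state machines use: two definitions that each bottom out through the other.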
And since many languages do proper tail recursion, there is no penalty for using it properly.
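A hedged sketch of the idea, using gcd (my example): the first form makes its recursive call in tail position, which a language with guaranteed tail-call elimination (Scheme, OCaml, and others) runs in constant stack. CPython notably does not eliminate tail calls, so the loop below is what such a compiler would effectively produce.

```python
def gcd_rec(a, b):
    # the recursive call is in tail position: nothing happens after it
    return gcd_rec(b, a % b) if b else a

def gcd_loop(a, b):
    # the loop a tail-call-eliminating compiler would effectively emit
    while b:
        a, b = b, a % b
    return a
```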
Language parsers do a lot of recursion because the languages we invent are invented by CS people who like problems that can be solved that way, and because language itself lends itself to it.
It is hard to tease out which recursions we find because professors just like it, and which ones real-world problems demand. Many commonly cited as the latter turn out, on close examination, to be the former.
As someone who works on real-world problems all over the spectrum (google my name), I can say with extensive expertise that a significant number of problems are vastly better solved with recursion.
I didn't cite most of the list above because a textbook said so. It's because I use and design algorithms all over the map in all sorts of languages for a living, precisely for real world problems.
I take it you don't use this tool when appropriate, do you?
But I wonder, do you really believe that overflowing one's stack is, universally, not possible?
A quote from you: "You encounter it in the recursive case, painfully, when your stack overflows."
And we're done.
I work on compilers and compilers are full of graphs. Graphs typically have a lot of algorithms that are naturally recursive. It's not because of some theoretical fetish.
But the recursive code can be much shorter, and easier to understand, because our languages have built-in apparatus to support it. They didn't always; original FORTRAN could not do recursion, and used no stack. The stack isn't free at runtime, but it's pretty cheap, and its fatal failure mode is rarely encountered nowadays.
In my work, I have always found the iterative version cleaner and faster, but that is an artifact of the kinds of problems I (and most people) solve. Having the iterative version enables more flexibility in details, because everything is explicit. E.g., escaping at the end can be much faster, and patching around errors actually possible.
Computer Science education loves to dwell on recursion far beyond its real-world usefulness, in part because CS professors carry a fondness for the Lisp of their youth, but also because arbitrarily complex problems that benefit from recursion are cheap to invent and easy to describe. In real life, recursion in any pure form rarely goes, usefully, more than three levels deep, but we like to make data structures that do.
At ITA, they did airline trip computations that would naturally recurse on the number of stops, but no one wants more stops, and time spent on possible trips with more than two is almost always wasted.
In a sense, recursion is like object-orientedness: an elegant pattern that fits many problems, but makes a poor basis for a world view or, exclusively, a language. Lisp and Smalltalk are not sidelined because of ignorance.
I've used this as an interview question at least 100 times, so I've got at least 50 hours of watching people try to program this, and with extremely high probability your iterative solution will be ugly and hard to follow and the recursive one very tidy. Of course, you can screw up the recursive one, but it is so straightforward it almost writes itself.
But the point you're making stands: recursion is first and foremost a thinking tool. Code as you think it need not be the same as code as you type it.
A more meaningful interview question would be to present a recursive topo sort and ask for the iterative rendition.
Recursion is required in procedures like Ackermann's function. You can still simulate it, or write functions that avoid the syntactic fact that the procedure definition refers (either directly or indirectly) to itself, but you'll always be papering over the fact that it cannot be implemented with a fixed number of state variables.
This is very untrue. While loop and a stack.
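A sketch of that claim for Ackermann itself, assuming the usual two-argument definition: the explicit list of pending arguments stands in for the call stack.

```python
def ackermann(m, n):
    # pending outer arguments live on an explicit stack, not the call stack
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                  # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)     # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)     # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n
```

No syntactic self-reference, but of course the stack still grows without any fixed bound, which is the other commenter's point.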
> cannot be implemented with a fixed number of state variables
Of course it can't, any function outside DSPACE(O(1)) can't. The fact that you have an implicit call stack doesn't change the space complexity.
>> You can still simulate
> any function outside DSPACE(O(1)) can't
The space complexity of the algorithm is only incidentally related to the number of unique variables required to represent its state.
You said it best:
> The fact that you have an implicit call stack doesn't change the space complexity
We could better say that recursion is papering over a fundamental fact that the iterative form reveals. You encounter it in the recursive case, painfully, when your stack overflows. Assuming infinite memory is the fatal flaw in most modern systems.
Ackermann's function never appears in the real world.
People who write high performance code don't worry, because they write the recursion properly, and tail recursion doesn't use the stack.
If your understanding of recursion leads to stack overflows then you're missing a significant tool in your toolbox.
There is no more need to overflow a stack using recursion than there is to overflow a stack or heap trying to store state when unwinding a recursive function into an iterative one.
However, the recursive solution is vastly simpler in many cases since you don't have to fiddle with state, which is mutable and error prone, and can instead use immutable concepts (in any language) to write correct code more quickly with zero penalty in execution speed or space. It makes you a more valuable employee to learn these techniques.
Compilers and interpreters often use recursion, and that goes to as many levels as the program requires.
Garbage collectors that use recursion to traverse the reachable objects are not unheard of.
Reference counting smart pointers are recursive. E.g. when we call object->dropReference(), and the refcount goes to zero, that may trigger otherObject->dropReference().
In C++, recursion is built into the language; e.g. destroying an object can trigger a cascade of recursive destructor calls. The object may have members which are smart pointers; those get destroyed, and may cause their referenced objects to get recursively destroyed.
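A hedged Python sketch of that cascade (hypothetical names; real smart pointers do this inside destructors):

```python
class Obj:
    # a toy reference-counted object; members are other Obj instances
    def __init__(self, *members):
        self.refcount = 1
        self.members = list(members)

def drop_reference(obj):
    obj.refcount -= 1
    if obj.refcount == 0:
        for m in obj.members:   # destroying obj releases each member,
            drop_reference(m)   # which may recurse arbitrarily deep
```

The recursion depth here is the depth of the object graph, which the program's author does not directly control.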
Have you heard of a "recursive descent parser"? GNU C++ uses one (a huge source file written in C++, well over a megabyte long). This will recurse as deeply as the program's nesting goes; C++ programs often go to more than three levels of nesting. (There are some non-recursive hacks mixed in there, like some operator precedence parsing involving an explicit stack: Shunting-Yard or similar?)
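A toy illustration of the technique (nothing like GCC's parser, obviously): call depth tracks input nesting, one stack frame per level.

```python
def parse(tokens):
    # parse a nested-paren token list into nested Python lists
    def expr(i):
        if tokens[i] == '(':
            items, i = [], i + 1
            while tokens[i] != ')':
                node, i = expr(i)   # recursion depth = nesting depth
                items.append(node)
            return items, i + 1
        return tokens[i], i + 1
    tree, _ = expr(0)
    return tree
```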
Let's switch over to embedded. Have you heard of BusyBox? BusyBox provides scaled down system utilities for embedded systems. It is very widely used.
BusyBox's "libbb" internal library contains a function called "recursive_action" for walking file system trees. This is actually recursive, and frequently goes more than three levels deep in actual use.
This is used by BusyBox programs like mdev (a udev replacement), lsusb, lspci, chmod, netstat, grep, tar, ...
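The same shape in Python (a sketch of the idea, not BusyBox's C code):

```python
import os

def recursive_action(path, action):
    # apply action to every entry under path, recursing into directories
    action(path)
    if os.path.isdir(path) and not os.path.islink(path):
        for name in sorted(os.listdir(path)):
            recursive_action(os.path.join(path, name), action)
```

Recursion depth equals directory depth, which on real file systems routinely exceeds three.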
Also, HN isn't a good place to exhibit Lisp condescension/ignorance.
In fact, in this case, the specific ignorance is not knowing that Lisp isn't an "all recursion, all the time, everywhere" language family. It includes multi-paradigm languages.
There could be many reasons, such as dislike for its syntax. But that is not ignorance, just de gustibus.
(Your wording strongly suggests that you not only believe the hypothesis but that you don't find those people ignorant.)
[1, 2, 3]
elif v.kind == 'tuple':
return Tuple([restore(v[i]) for i in range(v.arity)])
elif v.kind == 'cons':
return List(restore(v[0]), restore(v[1]))
But it is a mistake to confuse what automation hides from you, for practical reasons, with fundamental qualities of the universe or of computation.
Another common example of the error is describing an algorithm as N log N, assuming that the numbers involved occupy constant space and multiply in constant time. In real-world computation, the space a number occupies grows as log N, and multiplication takes close to (log N)^2 time and power. We get away with it for the same reason recursion works: the problems we apply our algorithms to usually fit in conveniently constrained space, and our multiplier hardware takes the same time and power for small numbers as for the biggest ones we can conveniently represent.
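A one-line illustration of the space point in Python, whose integers are arbitrary precision:

```python
# a number N occupies about log2(N) bits, not constant space, so
# "O(1) multiply" quietly assumes bounded operands
n = 10 ** 100
print(n.bit_length())   # 333 bits for a 101-digit number
```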
def msort(xs, lo=None, hi=None):
    if lo is None:
        lo = 0
        hi = len(xs)
    if hi - lo < 2:
        return
    mid = lo + (hi - lo) // 2
    msort(xs, lo, mid)
    msort(xs, mid, hi)
    merge(xs, lo, mid, hi)
s = [[0, len(xs) // 2, len(xs), "step 1"]]
while s:
    lo, mid, hi, step = s[-1]
    if hi - lo < 2:
        s.pop()
    elif step == "step 1":
        s[-1][3] = "step 2"
        s.append([lo, lo + (mid - lo) // 2, mid, "step 1"])
    elif step == "step 2":
        s[-1][3] = "step 3"
        s.append([mid, mid + (hi - mid) // 2, hi, "step 1"])
    else:
        merge(xs, lo, mid, hi)
        s.pop()
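Both versions lean on an in-place merge that the comment leaves undefined; a plausible sketch:

```python
def merge(xs, lo, mid, hi):
    # merge the sorted runs xs[lo:mid] and xs[mid:hi] back into xs[lo:hi]
    out, i, j = [], lo, mid
    while i < mid and j < hi:
        if xs[i] <= xs[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(xs[j]); j += 1
    out.extend(xs[i:mid])
    out.extend(xs[j:hi])
    xs[lo:hi] = out
```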
It's very natural for depth first search of a tree.
The "very natural" appearance is an artifact of the languages we have (very unnaturally) constructed.
Given that, and that machines typically have an architectural stack-pointer register with dedicated instructions to push and pop, the manual data structure you come up with to simulate a stack is almost certainly going to be slower.
Usually it is considered better (hint, hint) to do that before replying.
Which is even more vague (and broad) than what I was talking about. Given that we are talking about converting recursive algorithms into iterative versions that nevertheless use a stack data structure, the comparison between the machine stack (used for recursion) and a program-defined stack is a perfectly appropriate criterion for evaluating your vague statement. I gave specific microarchitectural reasons why I predict the machine stack will be faster and you responded (a fairly reasonable response) that we should measure. So please do. I'm not the one making broad claims, so the burden of proof still lies on you.
And, especially for heavily optimized features, you can only meaningfully measure specific cases, so generalizing is generally wrong. Which was the point of my remark.
Nope, Linux reports it as readable/writeable but not having exceptional condition, while Windows reports it as only having exceptional condition pending but not as readable nor writeable. Beautiful, just beautiful. In the end you have to put sockets in such scenarios into all three sets and check what getsockopt(s, SOL_SOCKET, SO_ERROR) returns for them, that's the portable way.
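In Python terms, the portable pattern described above looks something like this (a sketch, with error handling kept minimal):

```python
import select
import socket

def finish_connect(sock, timeout=5.0):
    # after a nonblocking connect, put the socket in all three select()
    # sets and trust SO_ERROR, not which set the socket lands in
    r, w, x = select.select([sock], [sock], [sock], timeout)
    if not (r or w or x):
        raise TimeoutError("connect still pending")
    err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if err != 0:
        raise OSError(err, "connect failed")
```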
Undergraduates solve it in two minutes; people with a BS need 5; MS 20; PhD never, or as fast as an undergraduate. The iterative solution is much simpler than a recursive one, so the question identifies how well-tuned your instincts are to problem spaces. If you go straight for recursion, you'll be there all day.
I found that everybody who found the minimal solution used the same hand gesture while describing it.
I always recited the statistics at the outset, so that the interviewee would know how much complexity to expect in the answer. It never seemed to help them.
Now that we have AI to solve "LC type" questions, can we stop asking candidates to invert a linked list in an interview?