Big-O Algorithm Complexity Cheat Sheet (bigocheatsheet.com)
520 points by ashleyblackmore on May 4, 2013 | 132 comments



You can pass some interviews by blindly memorizing, but it's unnecessary. If you understand a concept, then you can reason about its big O. Memorization implies a superficial understanding that may be revealed later.

If you don't understand something, spend a few hours and implement it.

    "I hear and I forget. I see and I remember. I do and I understand."

    - Confucius


Agreed: buy a good book (for example, Cormen), learn the algorithms, and implement them to get a good understanding.

Try using the table to answer the following question: what is the time complexity of finding the next item (by key) after a given one in a hash table? Memorizing such stuff does not make much sense, but if you understand the basic concepts, you will figure it out quickly.

There are basic errors in the table:

BFS, DFS are for graphs, not just trees, and their complexity is not b^d (what is b and d anyway?).

Quicksort's expected complexity is n log n, and this is different from the average complexity. You can also make quicksort's worst-case complexity n log n.

You can't sort anything with space smaller than the number of items you are sorting.

You can use bubble and insertion sort not only for arrays but also for lists, and the time complexity does not suffer.


You can contribute and fix the table yourself :). The author welcomes it: https://github.com/ericdrowell/BigOCheatSheet/blob/master/Ta...


> You can't sort anything with space smaller than the number of items you are sorting.

I assume it's referring to extra space used. Most analysis of space I've seen is referring to this, not the space required to store the elements.


>what is b and d anyway?

Branching factor and depth. These apply to graphs as well, since these algorithms trace out a tree while traversing the graph.


$81 for http://www.amazon.com/Introduction-Algorithms-Thomas-H-Corme...

No legal DRM free option as far as I can see.

The fifth link was a direct link to the PDF.


> BFS, DFS are for graphs, not just trees

BFS and DFS are applied to graphs, but the search space is indeed a tree.


Yeah BFS/DFS are O(n) for graphs and trees.


> You can also make quicksort's worst-case complexity n log n.

Quicksort in the worst case can take O(n^2) time, not O(n log n).


If one takes the time to select optimal pivots, it becomes O(n log n). The selection is free from a big-O perspective, because it's an O(n) step immediately before the O(n) partition step.

Of course, nobody does this, because the selection is not really free. The constant factor cost of choosing the optimal pivot hurts the average case, making the modified quicksort tend to do worse than heapsort or introsort.


You can use a randomized selection algorithm to find the median in linear time, and if you use the median as a pivot you will never get worst case n^2 behavior.

This is not used in practice because the probability of getting worst case behavior is extremely slim if you do some clever, and cheap, tricks.


The randomized selection algorithm is actually O(N^2) in the worst case. Median of medians is the O(N) worst-case selection algorithm.
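
For concreteness, here's a rough Python sketch of that combination (my own illustration, not the cheat sheet's code): quicksort with a median-of-medians pivot, which makes the worst case O(n log n) at the price of a hefty constant factor.

    def select(a, k):
        # k-th smallest element of a (0-indexed), worst-case O(n)
        if len(a) <= 5:
            return sorted(a)[k]
        # median of each group of 5, then recurse on those medians
        medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
                   for i in range(0, len(a), 5)]
        pivot = select(medians, len(medians) // 2)
        lows = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        if k < len(lows):
            return select(lows, k)
        if k < len(lows) + len(equal):
            return pivot
        return select([x for x in a if x > pivot], k - len(lows) - len(equal))

    def quicksort(a):
        # a true median pivot keeps the partitions balanced: O(n log n) worst case
        if len(a) <= 1:
            return a
        pivot = select(a, len(a) // 2)
        return (quicksort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + quicksort([x for x in a if x > pivot]))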


> what is the time complexity of finding the next item (by key) after a given one in a hash table?

The problem is ill-stated as posed: it does not specify whether the 'next' key is hashed, or even what 'next' means in this context.


Say, the smallest item in a hash table for which item.key is larger than given_item.key. Assume a larger-than relation for keys is defined, so for example keys are integers.


> Say, the smallest item in a hash table for which item.key is larger than given_item.key.

Well, keys in a hash table are hashed. This implies that unless you're searching for a specific key (e.g. "42") rather than a condition (e.g. "smallest key greater than 42"), the time complexity is necessarily O(N).
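
For instance, a rough sketch of what that O(N) lookup ends up being in Python (hypothetical helper, purely illustrative): since hashing destroys ordering, every key has to be examined.

    def next_key(table, key):
        # smallest key strictly greater than `key`; no order to exploit, so O(N)
        best = None
        for k in table:
            if k > key and (best is None or k < best):
                best = k
        return best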


Correct :)


> If you understand a concept, then you can reason about its big O. Memorization implies a superficial understanding that may be revealed later.

Interestingly enough, this works both ways. To put it more accurately, memorization MAY imply superficial understanding. Memorization (and the associated intuitive pattern-matching) may also lead to understanding.

A cheat sheet like this is also a good shortcut for refreshing one's memory of the concepts if this knowledge is not used on a regular basis.


I agree, our brains are good at recognizing patterns. So some amount of rote memorization of data can help us see these patterns.

I remember learning the 9 times table as a child and suddenly realizing that n * 9 = (n * 10) - n, thinking "hmm, does n * x = n * (x + 1) - n?" Why yes, it does!


As pointed out by msvan, the quote is actually by Xunzi. Here is the original text in Chinese:

不闻不若闻之, 闻之不若见之, 见之不若知之, 知之不若行之; 学至于行之而止矣

I know what you are thinking: `Why would you post something in Chinese? How could non-Chinese speakers understand?' The reason is I want y'all to check out this super cool add-on for Firefox: https://addons.mozilla.org/en-us/firefox/addon/perapera-kun-... With this plugin, the meanings of the words show up on mouseover and you can pretty much get the meaning.


This was my immediate reaction. It is very easy and common for an interviewer to make a subtle change to how a common data structure or algorithm would work and then ask for complexity. Ex: Changing the quicksort pivot selection method.


My friend was interviewed by a video game company not long ago and they gave him an algorithmic problem. He asked what the complexity of the algorithm was and the interviewer said O(n). Good hint to have! He passed the interview.


Still, not a bad reference for checking up on possibly dated knowledge. Memorizing the chart without an understanding of the algorithms will not be terribly useful, but running through the chart, trying to produce the O() values in your head, checking them against the key, and knowing where to dig deeper could be a good approach.


I hear and I forget. I see and I remember. I do and I understand.

Confucius


That's Xunzi, not Confucius. Good quote though.

</chinese geek>


http://en.wikiquote.org/wiki/Xun_Zi doesn't seem to have it; perhaps an update's required?


Check out: http://www.barrypopik.com/index.php/new_york_city/entry/tell... it has a bunch of translations and transliterations of this quote.


Don't memorize, memoize.


This is why it's a terrible idea to ask for some random algorithm runtime in an interview. It says absolutely nothing about programming or reasoning skill.

I'd rather ask a candidate to explain their favorite data structure to me and derive its big-O complexity right then and there. Being able to regurgitate the correct answer doesn't count for anything in my book.


Often the follow-up question to "what is the O notation of X?" is "why?" So if you are just going to memorize the cheat sheet, it won't get you very far.

It is good to know what you should know about though.


I was asked the O() of binary search. I said "log n". They asked "what base?"

... obviously they wanted 2, but O() doesn't work that way - changing base is a constant factor. Sometimes there's a tension between figuring out someone's understanding of the algorithm and someone's understanding of the notation (and math behind it). Of course, I gave them both answers...
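
A trivial sanity check of the change-of-base point (nothing binary-search-specific, just an illustration): log2 and log10 differ only by the constant factor 1/log10(2) ≈ 3.32, so they sit in the same O(log n) class.

    import math

    n = 1_000_000
    print(math.log2(n))                   # 19.93...
    print(math.log10(n) / math.log10(2))  # identical: change of base is a constant factor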


> changing base is a constant factor.

Only if the base is constant. For example B-trees have O(log_B n)-time operations, and you can't ignore the B because it is a parameter (hence non-constant).


What was the second answer?


2 and the "no need for base in big-O" makes 2 :).


Maybe they wanted both answers?


Favorite data structure, that's a funny idea. Like asking a candidate to explain their favorite size of wrench.


This is a pretty limited list of algorithms. It should definitely include linear-time sorting algorithms (e.g. bucket or radix sort), as well as graph algorithms (shortest path at least, but probably also all-pairs shortest path and minimum spanning tree).

There should also be a section about heaps and their operations. There are a huge number of ways to implement a heap (e.g. linked list, binary tree, or a more exotic structure like a binomial or Fibonacci heap) and there are a lot of tradeoffs in complexity of different operations between them.


Radixsort isn't linear (https://en.wikipedia.org/wiki/Radix_sort). I have actually published research that said radixsort was linear, only to have it later explained to me that it is not linear, for a subtle and hard-to-remember reason involving number theory.


It's O(k * n) where k is the number of digits. It takes log_b(N) digits to represent N distinct integers in base b, so O(k * N) reduces to O(N log N) when there's no bound on the range of keys.

It's still pretty useful for sorting data where you know the keys are small integers (say, less than a machine word).
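
A minimal LSD radix sort sketch for non-negative integers (my own toy version; the parameter names are made up): with a fixed key width it does key_bits/radix_bits passes of O(n) work each, i.e. O(k * n), which is linear once k is bounded.

    def radix_sort(nums, key_bits=32, radix_bits=8):
        # k = key_bits / radix_bits stable bucketing passes, each O(n)
        mask = (1 << radix_bits) - 1
        for shift in range(0, key_bits, radix_bits):
            buckets = [[] for _ in range(1 << radix_bits)]
            for x in nums:
                buckets[(x >> shift) & mask].append(x)
            nums = [x for bucket in buckets for x in bucket]
        return nums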


Thank you!

So if you put a bound on the size of the keys, let's say 32bit, it becomes linear? Obviously it would be cheating to put a giant number here :)


Right. Basically, if the key size is bounded, then k becomes a constant and it reduces to O(N).


Actually, it's useful when 'N' (the number of elements) is much larger than 'k', the number of significant digits in the largest number in the set.

If k ∈ Θ(log N), then O(Nk) becomes O(N log N), which is asymptotically no better than any optimal comparison based sort.

However, if k ∈ o(log N), then we get an asymptotically better algorithm.


Quicksort is O(k*n log n) though since it does O(n log n) comparisons and a comparison is going to be O(k), so radix sort is still asymptotically faster.


See response to Retric:

https://news.ycombinator.com/item?id=5656534

A comparison isn't always O(k). It's O(k) for strings. It's O(1) for machine-word integers. It can be much more than O(k) for user-defined types, eg. it can be O(k^3) if you have to multiply a matrix out and take the determinant of the product.


For word sized integers, k=O(1) so radix sort is O(n) and quicksort is O(n log n). If you have a comparison function that takes O(k^3) then you most likely can't even implement a radix sort for it, so the comparison can't be made.

So the fact is that quicksort is a factor O(log n) slower than radix sort. The flip side is that quicksort is more generally applicable to any comparison function whereas radix sort only works for lexicographic sorts. In almost all cases that is exactly what you want, but for example I'm not aware of any way to efficiently sort rational numbers with radix sort.


It all depends on what you're counting. Sort algorithm analysis typically counts key comparisons or record movement. In radixsort (and trie algorithms in general), you are relying on operations on digits of the key, not comparisons of entire keys. So linear isn't necessarily wrong, but you do have to say what you're counting.

Number theory? Not seeing the connection.


It wasn't number theory, but I couldn't remember and was hand-waving.


Is it nonlinear even if you have a maximum number of digits in the numbers you're sorting (like say, 64-bit integers)?


No, if you assume that the size of each object is constant, O notation will swallow it and report that, in terms of the number of items to sort, Radix sort is linear in time.

The usual bound of Omega(n log n), proved using decision trees, is only applicable when your only operation is to compare two elements. Radix sort asks for more than this, and so it can assume a specific structure of its input, so it can violate the lower bound.

Depending on which operations you assume, sorting can become more or less easy. In the extreme case, if you can ask the array "Please sort yourself." as a basic operation, sorting is O(1). Radix sort assumes bitmasking as a basic operation, which falls into the "make things easier" spectrum, leading to an O(n) algorithm under the stated assumption of constant bit length (or any encoding, really, it doesn't really need to be bits).


In that case, most other algorithms (e.g. insertion sort) technically take linear time too (though with a rather impractical constant factor).


Can you elaborate on that? It is proven that sorting based only on comparisons has a lower bound of n log(n) comparison operations. Given that algorithms such as insertion sort use only comparison operations to probe the data, I do not see how they can break the bound.


The Ω(n log n) bound is a worst-case lower bound, and it is only tight if all elements are distinct. It doesn't apply if there are guaranteed to be a sufficient number of duplicates.

For example, one might implement a variant of InsertionSort that stores a duplicate count for distinct elements, and then an insertion sort on e.g. 32-bit integers would require at most 4294967296 comparisons per insertion -- a constant factor that can be technically ignored in the complexity analysis. (I did warn you that the constant factor would become unwieldy!)

Note that this doesn't require values to be integers -- it suffices for them to be comparable with a lot of duplicates. The variant of InsertionSort described above would require O(k×n) comparisons where `n` is length of the input list and `k` is the number of distinct values, or O(n) whenever `k` is bounded by a constant (as required for radixsort to work in linear time).

That's not to say that radixsort doesn't outperform other sorting algorithms in practice -- it usually does. However, that isn't obvious from a strictly complexity theoretic point of view.
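
A rough sketch of the variant described above (illustrative only, not a practical sort): keep one slot per distinct value plus a count, so each insertion does at most k comparisons and the whole run is O(k * n).

    def insertion_sort_with_counts(items):
        distinct = []  # sorted list of (value, count) pairs, one per distinct value
        for x in items:
            i = 0
            while i < len(distinct) and distinct[i][0] < x:
                i += 1  # at most k steps per inserted item
            if i < len(distinct) and distinct[i][0] == x:
                distinct[i] = (x, distinct[i][1] + 1)
            else:
                distinct.insert(i, (x, 1))
        return [v for v, c in distinct for _ in range(c)]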


What's nlogn for a known n? A constant - therefore O(1).




I sent a pull request with heap things, a couple sorts, and interpolation search.


This sounds like a Wikipedia Subportal. If you want help building it, let me know.


I suppose the general idea for the colors is something like: green = best in category, red = worst in category, yellow = neither best nor worst?

In that case bubble sort and insertion sort should be green for the best-case time complexity ( O(n) vs. O(n log(n)) for quicksort/mergesort).

It might also be interesting to make the plot dynamic and allow the visitor to play with different implicit constants for the individual asymptotic bounds.


They've got the tables on github and I threw up a pull request to fix several color problems w/ the sorting table including this one, and also to add heapsort.


When reading it, I felt that red was "don't use in production", green was "fine to use in production" and yellow was "maybe use in production".


Except insertion sort is faster than quicksort for smaller N because of the overhead involved in quicksort - quicksort has large constants which big-O notation isn't designed to show. This is why many library sorting algorithms fall back to insertion once the things you are sorting gets small enough.


All "don't use in production" tags should be accompanied by "unless you really know what you're doing".


It seems to be just:

    O(1)       green
    O(log n)   green
    O(n)       red
    O(n log n) yellow
    O(n^2)     red

Which is weird, since green-green-red-yellow-red is a really confusing order. I don't know why they did it that way.


If you need this, you're doing yourself a disservice by looking at it.

Go back and learn the concepts so that you're not memorizing anything.


What if you do not use this for an interview, but already have a job that doesn't require you to apply this every day? I know a lot of these algorithms, but not all. I hardly ever need them, but when I do, this might be a good starter to browse from.

Thinking cheat sheets are only for interviews is limited. I would say knowing everything by heart is useless with today's internet. Have a solid base, understand a few, and google the rest tailored to your situation. I like cheat sheets to quickly remember what to explore.


> Thinking cheat sheets are only for interviews is limited

I never even used the word "interview", are you sure you're responding to the correct comment?


True, the interview part was for the larger discussion above. The general concept of my reply applies to your comment though. I think your statement is a bit harsh, or I'm misinterpreting what you are trying to say. Saying cheat sheets are useless is not correct imho.


I'm not totally sure what you mean by "dynamic array", but the vector algorithm for insertion (which you should probably at least include, if by "dynamic array" you were implying a more naive insertion scheme) is O(1) amortized.


If we are talking about the same thing (an array that you reallocate to twice its size when you fill it), then it has amortized O(1) time to append something to the end. If you want to insert something at the beginning, you will need to move n elements down an index. A random location will average n/2, which is O(n).
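
A toy sketch of that doubling array (Python lists already do this internally, so this is purely illustrative): append is amortized O(1), inserting at an arbitrary index is O(n) because everything after it has to shift.

    class DynamicArray:
        def __init__(self):
            self._buf = [None]  # backing storage
            self._size = 0

        def _grow(self):
            self._buf = self._buf + [None] * len(self._buf)  # double the capacity

        def append(self, x):
            # amortized O(1): doubling is rare enough to average out
            if self._size == len(self._buf):
                self._grow()
            self._buf[self._size] = x
            self._size += 1

        def insert(self, i, x):
            # O(n): every element from index i onwards shifts right by one
            if self._size == len(self._buf):
                self._grow()
            for j in range(self._size, i, -1):
                self._buf[j] = self._buf[j - 1]
            self._buf[i] = x
            self._size += 1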


Ah, true. I guess the chart should probably distinguish between arbitrary index insertion and appending.


Though with a gap buffer, you can have O(1) to insert/delete at some other point. Costs O(n) to move that point, though.
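
Roughly, a gap buffer is just two stacks around the cursor; a toy sketch (hypothetical class, not any particular editor's implementation): insert/delete at the cursor is O(1), moving the cursor costs O(distance).

    class GapBuffer:
        def __init__(self):
            self.left = []   # elements before the cursor
            self.right = []  # elements after the cursor, stored reversed

        def insert(self, x):
            self.left.append(x)  # O(1) amortized

        def delete(self):
            if self.left:
                self.left.pop()  # O(1)

        def move_cursor(self, pos):
            # O(distance): elements are shuttled across the gap one at a time
            while len(self.left) > pos:
                self.right.append(self.left.pop())
            while len(self.left) < pos and self.right:
                self.left.append(self.right.pop())

        def as_list(self):
            return self.left + self.right[::-1]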


I never understood why people look at 7-8 sorting methods and ignore radix sort, which often beats everything else at O(n) average case: https://en.wikipedia.org/wiki/Radix_sort I mean, is the assumption that people would never actually need a useful real-world understanding of the topic?


Radix sort is technically O(k * n) where k is the number of digits. This is very useful when you know k falls within a bounded range (e.g. sorting a bunch of integer keys, all of which range from 0-255), but it reduces to O(n log n) for arbitrary keys, because in general you need log n digits to represent n distinct items.


By that definition, sorting strings using merge sort, for example, takes O(k * n log n), which is still worse, because string comparison is worst-case O(k), not O(1).


Whenever you talk big-O you have to be aware of what your primitive operations are. When talking about normal sorting algorithms we usually assume comparison is a primitive operation, and then we're measuring the number of comparisons. This is not actually the case for strings (and several other data types), but that cost is the same regardless of which comparison sort you use, and so it usually doesn't matter in your analysis.

With radix sort, you're usually considering using it precisely because K is likely to be significantly smaller than log N, and so it's absolutely relevant to the actual problem at hand.

(For that matter, multiplication is not constant time either - it's O(N) in the number of bits, which is O(log N) in the size of the values stored - but this is conveniently forgotten in most algorithm analysis. If you limit the problem to integers that fit into a machine word, then this factor drops out as a constant, and nobody cares.)

Regardless of what algorithm you're working with, you have to be aware of the limits of the abstraction you use to analyze it. Fibonacci heaps have O(1) amortized inserts and decrease-key, but nobody uses them because the constant factors swamp simpler data structures with worse computational complexity. And sometimes it's faster to use a red-black tree (or even linear search over an array) than a hashmap, because hashmaps are technically O(k) in key size; red-black trees are too, for comparisons, but in a sparse key space the processor usually only has to examine the first 1-2 characters before it can bail out of the comparison routine, while the hashmap has to examine every character.


True enough. The idea behind big-O notation is really cost = O(whatever) * (algorithm's constant difficulty factor) + (algorithm's overhead). My point is that if you start adding difficulty factors, the same terms often wind up in your other algorithms. Granted, string comparisons are generally O(log k) and pure radix would end up as O(k), but you can also short-circuit an MSD radix sort if the buckets are small enough, which effectively drops things back to O(log k) assuming sparse inputs. (If it's not sparse you're not going to be doing anything past a depth of about 4 anyway.)


> For that matter, multiplication is not constant time either - it's O(N) in the number of bits [...]

Actually, multiplication is worse than O(bits). A naive approach will yield O(bits^2), but you can get something like O(bits^r) where r is not too much bigger than 1. See https://en.wikipedia.org/wiki/Multiplication_algorithm#Fast_... for an introduction.


Sometimes, people sort things other than the natural numbers below 2^k.


Why is O(n) red and O(n log(n)) yellow? Clearly, O(n log(n)) is slower.

In general, whether a specific complexity is good or bad differs greatly based on what you're doing. I don't think it's a good idea to have the colors be the same everywhere. A particularly bad instance is how every single data structure is O(n), which is red/"bad".


Very few commenters think this is a good idea. The majority of posts lament the rote learning and lack of understanding involved. Why then, is this upvoted so much? Is it that people think the comments are worth reading so much that they upvote the article in the hope that other readers will read the comments? Are the people commenting negatively upvoting the article in the hopes their comments will be more widely read†? Are people afraid of flagging articles?

†testable hypothesis, data requested


You're assuming the same set of people that are commenting are those that are upvoting this article.

Another hypothesis is that those are two largely disjoint populations on HN, with the smaller one displeased with the article and likely to express that in the comments. The other, larger one is pleased with the article and doesn't bother much with the comments.


> Why then, is this upvoted so much?

First of all... what difference does it make?

Beyond that, I'd say "it's impossible to know". Upvotes don't have strictly defined semantics on HN. And they also serve as a de facto "bookmark" mechanism. Given both of those factors, it's hard to justify assuming any correlation between "support for the content of the article" and the number of upvotes. Some people are signifying "This headline caught my eye, I want to save it to read later", some are endorsing the content, others don't endorse the content but are voting in favor of the resulting discussion, etc., etc.

> Is it that people think the comments are worth reading so much that they upvote the article in the hope that other readers will read the comments?

Very possibly. Seems like a perfectly reasonable scenario to me.

> Are people afraid of flagging articles?

Why would somebody flag this? It's not off-topic or spam. Just because you think a cheat sheet isn't a great idea is hardly a good reason to flag the post. But, then again, flagging doesn't have particularly well-defined semantics either. :-(


Or those of that upvoted it aren't the ones that came here to complain about it.

There's no reason to assume they are the same parties.


If you only want to hire me based on answering big-O questions, I don't want to work there anyway. 32 years of working on highly complex and performant stuff and not once did I think in terms of big-O. Optimizing is not about knowing the math but knowing how to measure and how to interpret what you measure. Big-O might make you feel smart but it's a tiny part of actually constructing something complex and optimal.


I mostly agree. big-O knowledge helps you pick the right data structures and algorithms for the job. The idea is that it will often help you avoid having to rediscover all those complexity classes by trial and error when looking at systemtap traces or time stamps in the log stream.

For example, in real-time systems, picking a btree over a hash-based data structure might sometimes work better, since there is less of a chance of a sudden spike related to hash resizing; instead, there is a small penalty to be paid during insertion. I believe that. Have I actually measured it? No, because it would involve rewriting a bunch of code and it would take time. So I don't know if big-O has saved my ass here.

That is just one example.

Or say, when it comes to large data storage, knowing the base data structure used in the database will give you some expectation as to how it behaves when the size grows.

All that said, it is hard to point back in 7+ years and say, aha, I know exactly how many times knowing big-O saved me from spending extra time and effort debugging. I can think maybe only of one or two times recently when I had to think about big-O so I mostly agree with you.

It certainly seems that not knowing anything about big-O will not terribly handicap someone who knows how to use debugging and profiling tools. There are probably other more practical bits of knowledge that are more important to know.

Despite this, these kinds of questions are very popular. I see a few reasons. 1) "Big Company" interviews. Big companies love hiring fresh college grads from good schools. Those don't have a lot of relevant software development experience, but they have to be selected and tested somehow, so theoretical CS is the go-to tool. 2) Other companies just copy the interview questions from big-company interviews, thinking "well, they are so big and successful because they are using these kinds of questions to select candidates". Whether it is true or not, I don't know, but I believe that process goes on behind the scenes.


Slightly academic, but this cheat sheet gives out some shorthand explanations to many of the methods in the Big-O document:

http://www.scribd.com/doc/39557873/Data-Structures-Cheat-She...


>To download or read the full version of this document you must become a Premium Reader.


WHAT! I'm sorry about that. It's on my own website now: http://playground.omershapira.com/Notes/DS_CS.pdf


This is great! Gratitude



Actually walking through each problem is the only way to understand. There's not even a point in memorizing; it takes longer to memorize than to actually learn how to come up with those values.


Can you annotate it with the subtype of the algorithm? For example, you have O(log N) worst-case space complexity for quicksort - that is not true of all variations, including the naive one.



If you want to visualise big-O runtime, you draw it on a log-log scale. The gradient of the line on the log-log plot is the exponent, i.e. if it's at 45 degrees it's O(n); if it's at a gradient of 2:1 it's O(n^2). Handy fact to work out your big O without having to do the tedious math! (See http://jcsites.juniata.edu/faculty/kruse/cs2/ch12a.htm)
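
A crude empirical version of that in Python (hypothetical helper; assumes a roughly polynomial runtime and a quiet machine): time the function at a few sizes and fit the slope of log(time) against log(n).

    import math
    import time

    def estimate_exponent(f, sizes):
        # least-squares slope of log(time) vs log(n) ~ the k in O(n^k)
        xs, ys = [], []
        for n in sizes:
            data = list(range(n))
            start = time.perf_counter()
            f(data)
            xs.append(math.log(n))
            ys.append(math.log(time.perf_counter() - start))
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    # deliberately quadratic toy: list membership is O(n) per element, so
    # estimate_exponent(lambda a: [x in a for x in a], [2_000, 4_000, 8_000])
    # comes out with a slope of roughly 2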


Is this correct for n * lg n? The n * lg n curve on the plot has a gradient that is about halfway between n and n^2, but in reality n * lg n ∈ O(n^1.00000000000000000001)


A log-log plot is useful for polynomials, but has nothing to do with big O notation in general.


I would argue the polynomial factor is about the only thing that matters in a practical performance setting. The difference between n log(n) and n is barely discernible when looking at actual results (and the constant factor is much more important then). If your algorithm is x^n or n! you're f*ed anyway, so those aren't important cases for tuning. In high-performance algorithms, after you add adaptive caching etc., your results are highly dependent on your data; in this case you expect to get results like O(n^2.34) and such, so you can't work it out through the analytical approach. Your only recourse is empirical measurement, in which case log-log plots are the only sensible choice. The author of the article only has a linear plot on the page, which is almost always the worst choice for graphing algorithmic performance, hence I brought up the issue.


Not once is theta or omega used, so this cheat sheet isn't all that descriptive.


To be fair, people really aren't too interested in Omega, at least not in conjunction with worst cases. Omega is more suitable for best cases, and that in turn is slightly useless without any knowledge of how common it is. For instance, telling you that bubblesort is Omega(n) in the worst case isn't terribly useful, and telling you it's Omega(n) in the best case is somewhat more enlightening (you now have an absolute asymptotic lower bound), but still not really useful without knowing that the best case is going to be very rare (most of your n! possible input permutations have a lot of inversions).

Theta is a bit more interesting, however. I think it speaks to the "tameness" of the algorithm.


I don't know why people don't use balanced BSTs (std::map in C++) for storing the adjacency lists of a graph. Sure, insertion would take O(log n) time, but I think the overall benefit would be greater than the cost. Correct me if I am wrong.


Storing the graph structure in a BST is only useful if your graph is very sparse and you need to have fast lookup for checking specific edges (say, given two nodes, find the cost for the edge between them).

If your graph is dense, using an adjacency matrix is simpler and will be faster most of the time. If you don't need to query specific edges and all you need to do is iterate over the edges of given vertices, then using adjacency lists (or vectors) is simpler and does the job just as well.


If my graph is dense, then would using an adjacency matrix take O(V^2) space? Anyway, I was just suggesting this because I use it in practice. Just wanted to know the cons of it, if any.


The space taken by an adjacency matrix doesn't depend on edge count. That's why it is usually the favoured representation for dense graphs.


Dense graphs, by definition, have close to O(V²) edges already so adjacency matrices aren't wasting space in that case.


For most graph algorithms it doesn't matter in which order you traverse neighbours, you just need to visit them all.


As an aside, trees (Red-Black trees for std::map in libstdc++) have terrible locality, and thus cache behavior. In general, for reasonable n, it's even going to be better to have a vector<vector<int>> in which you literally push to each vector (for amortized O(1) each time) when you find an adjacency. In the case of dense graphs, yes, adjacency matrices will be even better, since you're going to pay the size cost anyway, and you may as well do it up front and not pay the resizing charges.

From my experience, using vector<list<int>> behaves more poorly due to terrible locality of lists. My use of red-black trees for graphs is mostly limited to implementing Dijkstra using set<pair<cost, node_index>> as a queue, since the priority_queue in <queue> does not have a DecreaseKey operation. Using that set (and some map (needn't be std::map, could well be a vector) of node_index to cost for faster compare during Dijkstra's neighbor loop) can make for a very fast, short, and easy to implement Dijkstra.

My usage is mostly competitive programming, so YMMV.
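
For comparison, the usual Python workaround for the missing DecreaseKey (a sketch of the lazy approach, not the set-based one above): push duplicate entries into a binary heap and skip the stale ones when they surface.

    import heapq

    def dijkstra(adj, src):
        # adj: list of lists of (neighbor, weight) pairs
        dist = [float('inf')] * len(adj)
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale entry; a shorter path was already settled
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist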


Trees don't have to have terrible locality. There are a number of tricks to encode parts of the tree structure into nice flat blocks. (This can involve some memory overhead, which may or may not be offset by the space saved on pointers depending on the size of elements.)

Just because std::set and map are completely awful doesn't mean you should completely give up on trees.


I would like to see the same type of complexity cheat sheet for algorithms for common math problems: addition, multiplication, division, subtraction, factorization, solving linear systems, solving eigen systems, matrix inversion, etc.


Those depend a lot on the context. Firstly, the complexity of the arithmetic operations you listed mostly only matters if you are working with big numbers (something that is not commonly a bottleneck). Factorization doesn't have a known polynomial algorithm, so I don't see why the complexity matters anyway (it's still going to take longer than your lifetime on hard inputs). As for linear systems, it depends a lot on your input and the problem you want to solve. If we talk about the simplex algorithm that most people use, empirically it takes around cubic time, but it's still an open problem to find a basis-choice heuristic that does not have pathological exponential worst-case performance. In addition to that, many important problems are modeled as linear programs but have extra special structure that lets them be solved with more efficient algorithms.

Finally, you got me when it comes to the numerical stuff (eigenvectors and matrix inversion). I haven't looked into that in a while.


> In addition to that, many important problems are modeled as linear programs but will have extra special structure that let them be solved with more efficient algorithms.

To specify: Those more efficient algorithms can surprisingly often be expressed as variants of the simplex method, too.


Pretty cool, thanks.

I think the graph at the end is the most useful thing. Really helps understanding the complexity in relative terms.

Adding tree stuff would be cool (especially for search which is often implemented as either a tree or graph version)


It is smart how you can edit the table and contribute via GitHub.


Question: I'm a junior software developer that did not get a CS degree. What would be the best way to learn and understand this sort of stuff? Coursera/Khan? A book?


Some other resources besides CLRS...

Free interactive Python textbook on Algorithms & Data Structures: http://interactivepython.org/courselib/static/pythonds/index...

Robert Sedgewick and Kevin Wayne of Princeton have a great textbook (code samples in basic Java): http://www.amazon.com/Algorithms-4th-Edition-Robert-Sedgewic...

They also teach a two-part Algorithms course on Coursera: https://www.coursera.org/courses?orderby=upcoming&search...

Thomas Cormen (the C in CLRS) also has a new book called Algorithms Unlocked which introduces some popular algorithms in pseudocode. http://www.amazon.com/Algorithms-Unlocked-Thomas-H-Cormen/dp...


Coursera, no doubt. I took 2 courses with Tim Roughgarden; they were awesome. And I just finished 2 more with Sedgewick. If you can find the Sedgewick courses, I recommend picking up his book (actually, I recommend it anyway) - Algorithms, Sedgewick & Wayne, 3rd ed.


Also, there's a book site for the Sedgewick & Wayne book:

http://algs4.cs.princeton.edu/home/

There's a lot of good stuff.


Try the video course that goes with the Cormen book on algorithms. You can start it right now; it's not a scheduled course like Coursera but more a repository like Khan.

http://ocw.mit.edu/courses/electrical-engineering-and-comput...


The math behind this stuff is in the first third of most calculus textbooks. The CS half can be found in CLRS. You can also probably learn it from TopCoder tutorials or usacogate (I recall there being a mirror of usacogate that did not require you to do all of the problems in order to advance).


CLRS? Why you young whippersnapper! Back in my day it was called CLR, and those three letters were good enough for us! Kids these days...

mmanfrin, it's this book: http://en.wikipedia.org/wiki/Introduction_to_Algorithms


The book alone was a little terse for me. The videos from OCW however are priceless, even entertaining sometimes.[1]

[1] http://ocw.mit.edu/courses/electrical-engineering-and-comput...


I'm very bad at calculus, but I can work out big O intuitively quite easily.


Even average case complexity?


Enough to guess with a fairly high degree of accuracy.


Only for the simple stuff (and that might be good enough for you). Average-case complexity is devilishly hard in general, and even worst-case complexity is super hard. E.g. try analysing Fibonacci heaps 'intuitively'.


You should create an empty version of the cheat sheet so people can test their understanding of various data structures and algorithms.


I'm amused that every single data structure is proportional in size to the amount of data stored and therefore marked as red.


Well done!

As others pointed out, one might be better off really understanding them, but for a quick overview this is a very usable website.


This is very shallow and of limited use.


It's a cheatsheet, not a tutorial.


"The map is not the territory."


Why don't your sorted trees support indexing? Mine support indexing in O(lg n) time.


Is this only for interviews?


/trolling on

Awesome. Next time I'm trying to pass some silicon valley interview, I'll have to look this up. Till then, I think I have more practical problems to worry about.

/trolling off



