You can pass some interviews by blindly memorizing, but it's unnecessary. If you understand a concept, you can reason out its big-O complexity. Memorization implies a superficial understanding that may be exposed later.
If you don't understand something, spend a few hours and implement it.
"I hear and I forget. I see and I remember. I do and I understand."
Agreed. Buy a good book (for example Cormen's), learn the algorithms, and implement them to get a solid understanding.
Try to use the table to answer the following question: what is the time complexity of finding the next item (by key) after a given one in a hash table? Memorizing such stuff does not make much sense, but if you understand the basic concepts, you will figure it out quickly.
There are basic errors in the table:
BFS and DFS are for graphs, not just trees, and their complexity is not b^d (what are b and d anyway?).
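To make the point concrete, here is a minimal BFS sketch over a graph given as an adjacency dict (the graph shape and function name are just illustrative). Every vertex is enqueued at most once and every edge examined at most once, so the running time is O(V + E); the b^d figure comes from AI-style search over an implicit tree with branching factor b and depth d, which is a different setting.

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search over a graph given as an adjacency dict.
    Each vertex and each edge is touched at most once, so the
    running time is O(V + E)."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj.get(v, ()):
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

# Works on a graph with a cycle, which a tree-only b^d analysis ignores.
bfs({0: [1, 2], 1: [2], 2: [0, 3], 3: []}, 0)  # -> [0, 1, 2, 3]
```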
Quicksort's expected complexity is O(n log n), and this is different from its average complexity. You can also make quicksort's worst-case complexity O(n log n).
You can't sort anything using less space than the number of items you are sorting.
You can use bubble and insertion sort not only on arrays but also on linked lists, and the time complexity does not suffer.
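As a sketch of that claim (the Node class and function names are mine, not from the cheat sheet), insertion sort on a singly linked list still runs in O(n^2) comparisons with O(1) extra space, just like the array version:

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def insertion_sort_list(head):
    """Insertion sort on a singly linked list: O(n^2) time,
    O(1) extra space, matching the array version's complexity."""
    dummy = Node(None)  # sentinel heading the sorted portion
    while head:
        nxt = head.next
        # walk the sorted portion to find the insertion point
        cur = dummy
        while cur.next and cur.next.val < head.val:
            cur = cur.next
        head.next = cur.next
        cur.next = head
        head = nxt
    return dummy.next

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = Node(x, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

to_list(insertion_sort_list(from_list([3, 1, 2])))  # -> [1, 2, 3]
```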
If one takes the time to select optimal pivots, it becomes O(n log n). The selection is free from a big-O perspective, because it's an O(n) step immediately before the O(n) partitioning step.
Of course, nobody does this, because the selection is not really free. The constant-factor cost of choosing the optimal pivot hurts the average case, making the modified quicksort tend to do worse than heapsort or introsort.
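For the curious, here's a rough sketch of what that looks like, assuming the "optimal pivot" is found with the classic median-of-medians selection (the function names are mine). The pivot step is O(n) worst case, so the overall quicksort is O(n log n) worst case, but the constants are visibly bad:

```python
def select(xs, k):
    """Median-of-medians: the k-th smallest element of xs,
    O(n) worst case but with a hefty constant factor."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    # medians of groups of 5, then the median of those medians as pivot
    medians = [sorted(xs[i:i + 5])[len(xs[i:i + 5]) // 2]
               for i in range(0, len(xs), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k >= len(xs) - len(hi):
        return select(hi, k - (len(xs) - len(hi)))
    return pivot  # k falls among elements equal to the pivot

def quicksort_median_pivot(xs):
    """Quicksort with the true median as pivot: O(n log n) worst
    case, since every partition is guaranteed to be balanced."""
    if len(xs) <= 1:
        return xs
    pivot = select(xs, len(xs) // 2)
    return (quicksort_median_pivot([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort_median_pivot([x for x in xs if x > pivot]))
```

This sketch isn't in-place (it allocates new lists), but it shows why the worst case disappears: every recursion splits the input roughly in half.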
Say, the smallest item in a hash table for which item.key is larger than given_item.key.
Well, keys in a hash table are hashed, so unless you're searching for a specific key (e.g. "42") rather than by a condition (e.g. "smallest key greater than 42"), the time complexity is necessarily O(N).
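A minimal sketch of why (the function name is mine): hashing scatters the keys, so there is no order to exploit, and a successor query has to scan every stored key:

```python
def successor(table, key):
    """Smallest stored key strictly greater than `key`.
    Hashing destroys key order, so we must examine every
    key in the table: O(N) no matter how good the hash is."""
    best = None
    for k in table:
        if k > key and (best is None or k < best):
            best = k
    return best

successor({10: 'a', 42: 'b', 7: 'c'}, 9)  # -> 10
```

Contrast this with a balanced BST or a sorted array, where the same query is O(log N); that's exactly the kind of reasoning the table can't memorize for you.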
If you understand a concept, you can reason out its big-O complexity. Memorization implies a superficial understanding that may be exposed later.
Interestingly enough, this works both ways. To put it more accurately, memorization MAY imply superficial understanding. Memorization (and the associated intuitive pattern-matching) may also lead to understanding.
A cheat sheet like this is also a good shortcut for refreshing one's memory of the concepts if the knowledge is not used on a regular basis.
As pointed out by msvan, the quote is actually by Xunzi.
Here is the original text in Chinese:
I know what you are thinking: "Why would you post something in Chinese? How could non-Chinese speakers understand?" The reason is that I want y'all to check out this super cool add-on for Firefox: https://addons.mozilla.org/en-us/firefox/addon/perapera-kun-... With this plugin, the meanings of the words show up on mouseover and you can pretty much get the meaning.
This was my immediate reaction. It is very easy and common for an interviewer to make a subtle change to how a common data structure or algorithm would work and then ask for complexity. Ex: Changing the quicksort pivot selection method.
My friend was interviewed by a video game company not long ago, and they gave him an algorithmic problem. He asked what the complexity of the algorithm was, and the interviewer said O(n). A good hint to have!
He passed the interview.
Still, it's not a bad reference for checking up on possibly dated knowledge. Memorizing the chart without understanding the algorithms will not be terribly useful, but running through the chart, trying to produce the O() values in your head, checking them against the key, and knowing where to dig deeper could be a good approach.