
Agreed: buy a good book (for example, Cormen), learn the algorithms, and implement them to get a good understanding.

Try using the table to answer the following question: what is the time complexity of finding the next item (by key) after a given one in a hash table? Memorizing such stuff does not make much sense, but if you understand the basic concepts, you will figure it out quickly.

There are basic errors in the table:

BFS and DFS are for graphs, not just trees, and their complexity is not b^d (what are b and d anyway?).

Quicksort's expected complexity is O(n log n), which is not the same thing as its average-case complexity. You can also make quicksort's worst-case complexity O(n log n).

You can't sort anything using less space than the number of items you are sorting.

You can use bubble and insertion sort not only on arrays but also on linked lists, and the time complexity does not suffer (see the sketch below).
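
To illustrate that last point: here is a minimal sketch of insertion sort on a singly linked list (the Node class is a throwaway made up for the example, not from any particular library). It is still O(n^2) time in the worst case and O(1) extra space, exactly like the array version, because nodes are simply relinked in place.

    class Node:
        # Minimal singly linked list node, for illustration only.
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def insertion_sort_list(head):
        # Insertion sort on a singly linked list.
        # Time: O(n^2) worst case, same as for an array.
        # Extra space: O(1) -- nodes are relinked in place.
        sorted_head = None              # head of the growing sorted prefix
        current = head
        while current is not None:
            nxt = current.next
            if sorted_head is None or current.value <= sorted_head.value:
                # New smallest element: push onto the front of the sorted part.
                current.next = sorted_head
                sorted_head = current
            else:
                # Walk the sorted part to find the insertion point.
                scan = sorted_head
                while scan.next is not None and scan.next.value < current.value:
                    scan = scan.next
                current.next = scan.next
                scan.next = current
            current = nxt
        return sorted_head

    # Usage: build 3 -> 1 -> 2, sort it, read the values back out.
    head = insertion_sort_list(Node(3, Node(1, Node(2))))
    values = []
    while head:
        values.append(head.value)
        head = head.next
    print(values)  # [1, 2, 3]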




You can contribute and fix the table yourself :). The author welcomes it: https://github.com/ericdrowell/BigOCheatSheet/blob/master/Ta...

-----


> You can't sort anything using less space than the number of items you are sorting.

I assume it refers to extra space used. Most of the space analysis I've seen refers to this, not to the space required to store the elements.

-----


> what are b and d anyway?

Branching factor (the breadth at each node) and depth. These apply to graphs as well, since these algorithms trace out a tree while traversing the graph.

-----


$81 for http://www.amazon.com/Introduction-Algorithms-Thomas-H-Corme...

No legal DRM-free option as far as I can see.

The fifth link was a direct link to the PDF.

-----


> You can also make quicksort's worst-case complexity O(n log n).

Quicksort in the worst case can take O(n^2) time, not O(n log n).
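
As a concrete illustration (a rough sketch with made-up names, not anything from the cheat sheet): the textbook variant that always takes the first element as the pivot degenerates to quadratic time on input that is already sorted, because every partition is maximally unbalanced.

    def quicksort_first_pivot(a):
        # Quicksort with the first element as pivot.
        # On already-sorted input the partition sizes are 0 and n-1,
        # so the recurrence is T(n) = T(n-1) + O(n) = O(n^2).
        if len(a) <= 1:
            return a
        pivot = a[0]
        smaller = [x for x in a[1:] if x < pivot]
        larger = [x for x in a[1:] if x >= pivot]
        return quicksort_first_pivot(smaller) + [pivot] + quicksort_first_pivot(larger)

    print(quicksort_first_pivot(list(range(10))))  # sorted output, but via the worst-case path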

-----


If one takes the time to select optimal pivots, it becomes O(n log n). The selection is free from a big-O perspective, because it is an O(n) step immediately before the O(n) partitioning step.

Of course, nobody does this, because the selection is not really free. The constant-factor cost of choosing the optimal pivot hurts the average case, making the modified quicksort tend to do worse than heapsort or introsort.

-----


You can use a randomized selection algorithm to find the median in linear time, and if you use the median as the pivot you will never get worst-case n^2 behavior.

This is not used in practice because the probability of hitting the worst case is extremely slim if you use some clever (and cheap) tricks.

-----


The randomized selection algorithm is actually O(N^2) in the worst case. Median of medians is the O(N) worst-case selection algorithm.
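
For anyone curious, here is a minimal sketch of median-of-medians selection plus a quicksort that uses it to pick the exact median as pivot (plain Python, no attention paid to the constant factors discussed above):

    def median_of_medians(a, k):
        # Select the k-th smallest element (0-based) in worst-case O(n) time.
        if len(a) <= 5:
            return sorted(a)[k]
        # Median of each group of 5, then recurse to get the pivot.
        groups = [a[i:i + 5] for i in range(0, len(a), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        pivot = median_of_medians(medians, len(medians) // 2)
        lows = [x for x in a if x < pivot]
        pivots = [x for x in a if x == pivot]
        highs = [x for x in a if x > pivot]
        if k < len(lows):
            return median_of_medians(lows, k)
        if k < len(lows) + len(pivots):
            return pivot
        return median_of_medians(highs, k - len(lows) - len(pivots))

    def quicksort_median_pivot(a):
        # Quicksort whose pivot is the exact median: O(n log n) worst case,
        # but with constant factors bad enough that nobody uses it.
        if len(a) <= 1:
            return a
        pivot = median_of_medians(a, len(a) // 2)
        lows = [x for x in a if x < pivot]
        pivots = [x for x in a if x == pivot]
        highs = [x for x in a if x > pivot]
        return quicksort_median_pivot(lows) + pivots + quicksort_median_pivot(highs)

    print(quicksort_median_pivot([9, 3, 7, 1, 8, 2, 5, 4, 6, 0]))  # [0, 1, ..., 9]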

-----


> BFS and DFS are for graphs, not just trees

BFS and DFS are applied to graphs, but the search space is indeed a tree.

-----


Yeah, BFS/DFS are linear: O(V + E) for graphs, O(n) for trees.
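
For reference, a minimal BFS sketch over an adjacency-list graph (a plain dict here, purely for illustration): every vertex is enqueued at most once and every edge examined a constant number of times, which is where the O(V + E) bound comes from, and the visited set is what keeps a graph with cycles from behaving any worse than a tree.

    from collections import deque

    def bfs(graph, start):
        # Breadth-first search over an adjacency-list graph (dict: node -> neighbors).
        # Runs in O(V + E): each vertex is enqueued once, each edge checked O(1) times.
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

    # Usage on a small undirected graph that contains a cycle.
    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(bfs(graph, 0))  # [0, 1, 2, 3]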

-----


> what is the time complexity of finding the next item (by key) after a given one in a hash table?

The problem is ill-stated as posed: it does not specify whether the 'next' key is hashed, or even what 'next' means in this context.

-----


Say, the smallest item in the hash table for which item.key is larger than given_item.key. Assume the larger-than relation on keys is defined; for example, the keys are integers.

-----


> Say, the smallest item in the hash table for which item.key is larger than given_item.key.

Well, keys in a hash table are hashed. This implies that unless you're searching for a specific key (e.g. "42") rather than for a condition (e.g. "smallest key greater than 42"), the time complexity is necessarily O(N).
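
A minimal sketch of that O(N) answer, with a plain Python dict standing in for the hash table (the names are made up for the example): since hashing scatters the keys, the only general way to find the successor of a key is to scan every stored entry.

    def next_larger_key(table, key):
        # Smallest key in 'table' strictly larger than 'key', or None.
        # A hash table gives no ordering information, so this is a full
        # O(N) scan no matter how fast single-key lookups are; an ordered
        # structure (balanced BST, skip list) would answer it in O(log N).
        best = None
        for k in table:
            if k > key and (best is None or k < best):
                best = k
        return best

    table = {42: "a", 7: "b", 100: "c", 55: "d"}
    print(next_larger_key(table, 42))  # 55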

-----


Correct :)

-----



