Try to use the table to answer the following question: what is the time complexity of finding the next item (by key order) after a given one in a hash table? Memorizing such stuff does not make much sense, but if you understand the basic concepts, you will figure it out quickly.
There are basic errors in the table:
BFS and DFS are for graphs, not just trees, and their complexity is not b^d (what are b and d anyway?).
Quicksort's expected complexity is O(n log n), and this is different from its average complexity. You can also make quicksort's worst-case complexity O(n log n).
You can't sort anything with space smaller than the number of items you are sorting.
You can use bubble sort and insertion sort not only on arrays but also on linked lists, and the time complexity does not suffer.
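To illustrate that last point, here is a sketch of insertion sort on a singly linked list (the `Node` class and function name are mine, just for the example). It does the same O(n^2) comparisons and O(1) extra space as the array version:

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insertion_sort_list(head):
    """Insertion sort on a singly linked list: O(n^2) time,
    O(1) extra space -- the same bounds as the array version."""
    sorted_head = None
    while head:
        node, head = head, head.next
        # splice `node` into its place in the sorted prefix
        if sorted_head is None or node.value <= sorted_head.value:
            node.next, sorted_head = sorted_head, node
        else:
            cur = sorted_head
            while cur.next and cur.next.value < node.value:
                cur = cur.next
            node.next, cur.next = cur.next, node
    return sorted_head
```

Splicing a node out of and into a list is O(1), which is why the list version loses nothing over the array version.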
I assume it's referring to extra space used. Most analyses of space I've seen refer to this, not the space required to store the elements.
Breadth and depth. These could apply to graphs as well, since these algorithms trace out a tree while traversing the graph.
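That "tree drawn while traversing the graph" is just the parent map BFS records. A minimal sketch (graph and names are my own example; adjacency-list input assumed):

```python
from collections import deque

def bfs_tree(graph, start):
    """BFS over a graph (not just a tree): each vertex and edge is
    visited once, so O(V + E). The parent map is exactly the tree
    the traversal draws over the graph."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in parent:   # first discovery fixes v's tree parent
                parent[v] = u
                queue.append(v)
    return parent

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
```

Note the complexity here is O(V + E), not b^d; the b^d form only shows up when you analyze search over an implicit tree with branching factor b and depth d.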
No legal DRM free option as far as I can see.
The fifth link was a direct link to the PDF.
Quicksort in the worst case can take O(n^2) time, not O(n log n).
Of course, nobody does this, because the selection is not really free. The constant factor cost of choosing the optimal pivot hurts the average case, making the modified quicksort tend to do worse than heapsort or introsort.
This is not used in practice because the probability of hitting worst-case behavior is extremely slim if you apply some clever, cheap tricks.
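The cheapest of those tricks is a random pivot: it makes the running time expected O(n log n) for every input, since no adversarial input can reliably trigger the O(n^2) case. A sketch (not in-place, so it's clear rather than fast):

```python
import random

def quicksort(a):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    on any input. The O(n^2) worst case still exists, but becomes
    vanishingly unlikely instead of adversarially triggerable."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

Production implementations instead tend toward introsort: plain quicksort with a depth counter that falls back to heapsort, which buys a hard O(n log n) worst-case bound.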
Problem ill-stated as posed: it does not specify whether the 'next' key is hashed, or even what 'next' means in this context.
Well, keys in a hash table are hashed. This implies that unless you're searching for a specific key (e.g. "42") rather than a condition (e.g. "smallest key greater than 42"), the time complexity is necessarily O(N).
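Concretely: hashing destroys key order, so answering "smallest key greater than k" means scanning every stored key. A sketch using a Python dict as the hash table (function name is mine):

```python
def next_key(table, key):
    """Smallest key strictly greater than `key`, or None if there
    is none. Hashing gives no ordering, so every bucket must be
    examined: O(N) over the stored keys."""
    best = None
    for k in table:            # full scan -- no shortcut exists
        if k > key and (best is None or k < best):
            best = k
    return best

d = {10: 'a', 42: 'b', 43: 'c', 7: 'd'}
```

An ordered structure like a balanced BST or skip list answers the same successor query in O(log n), which is the point the original question is fishing for.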