This is a great point, and I'll probably have to do some performance measurements and change parts of my code now. I should be more careful about sacrificing contiguous storage for O(1) insertions.

It is also worth noting that some manuals say appending to an array list is amortized O(1). That is true if you do an amortized analysis, which essentially distributes the cost of copying the array into a larger block over all of the appends. It's something to keep in mind for systems that have to be realtime, or at least produce stable framerates: there the worst case is what matters, and amortized analysis generously glosses over it.

EDIT: Not inserting, appending
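
To make those worst-case spikes concrete, here is a minimal C++ sketch (assuming std::vector as the array list; the 100 µs spike threshold and the element count are arbitrary) that times each individual push_back and reports the occasional reallocation outliers:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;
        for (int i = 0; i < 10'000'000; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            v.push_back(i);  // amortized O(1), but O(n) when the buffer has to grow
            auto t1 = std::chrono::steady_clock::now();
            long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
            if (ns > 100'000)  // arbitrary 100 microsecond threshold for a "spike"
                std::printf("append %d took %lld ns (likely a reallocation)\n", i, ns);
        }
    }

With geometric growth the spikes get rarer as the vector grows, but each one copies more elements, which is exactly the frame-time hazard; calling reserve() up front avoids them entirely.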




Insertion into an array list at a uniformly distributed location is O(n) in expectation: on average you can't avoid moving half the list. Appending is amortized O(1).
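
A minimal sketch of that cost, assuming std::vector as the array list; Counted is a hypothetical instrumented element type that just counts how many elements get moved or copied by a single insert in the middle:

    #include <cstdio>
    #include <vector>

    // Hypothetical instrumented element: counts how often it is moved or copied.
    struct Counted {
        static long ops;
        int value;
        Counted(int v) : value(v) {}
        Counted(const Counted& o) : value(o.value) { ++ops; }
        Counted(Counted&& o) noexcept : value(o.value) { ++ops; }
        Counted& operator=(const Counted& o) { value = o.value; ++ops; return *this; }
        Counted& operator=(Counted&& o) noexcept { value = o.value; ++ops; return *this; }
    };
    long Counted::ops = 0;

    int main() {
        std::vector<Counted> v;
        v.reserve(1'000'001);                        // avoid reallocation noise
        for (int i = 0; i < 1'000'000; ++i) v.emplace_back(i);

        Counted::ops = 0;
        v.insert(v.begin() + 500'000, Counted(-1));  // one insert in the middle
        std::printf("element moves/copies for one middle insert: %ld\n", Counted::ops);  // ~500'000
    }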


Not the person you're responding to, but perhaps they meant assuming the insertion positions are uniformly distributed? Then insertion is O(n) in expectation.

For a single insertion the expected cost is

(n + (n-1) + ... + 1)/n = O(n^2)/n = O(n)

where each term is the cost of inserting at one of the n possible positions, each chosen with probability 1/n. You might also have to grow the backing array, but that adds only amortized O(1) per insertion, so the total stays O(n).
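
That expectation is easy to check empirically. Here's a minimal C++ sketch (std::vector again; the size, insert count, and seed are arbitrary) that inserts at uniformly random positions and reports the average number of elements shifted per insert, which comes out around n/2:

    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(42);                  // fixed seed for reproducibility
        std::vector<int> v(100'000, 0);        // n = 100'000
        const int inserts = 1'000;
        long long shifted = 0;

        for (int i = 0; i < inserts; ++i) {
            // Pick an insertion index uniformly in [0, size].
            std::uniform_int_distribution<std::size_t> pick(0, v.size());
            std::size_t p = pick(rng);
            shifted += static_cast<long long>(v.size() - p);  // elements that must move right
            v.insert(v.begin() + static_cast<std::ptrdiff_t>(p), i);
        }

        // The expected value is roughly n/2, i.e. O(n) per insert.
        std::printf("average elements shifted per insert: %lld\n", shifted / inserts);
    }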


You are right, my bad. I meant appending.



