
If you have a finite amount of time, a given algorithm will only be able to solve a subset of all problem instances (assuming an infinite set of problem instances). If you give two algorithms the same finite amount of time, the asymptotically faster one would be expected to solve more and/or larger problem instances than the asymptotically slower one, and that advantage grows as the allocated time increases.
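
A rough sketch of that point in Python (the step-count formulas are made-up stand-ins for an O(n^2) and an O(n log n) algorithm, not real implementations):

    import math

    def max_solvable_n(steps_for_n, budget):
        # Largest n whose step count still fits in the budget (simple linear scan).
        n = 0
        while steps_for_n(n + 1) <= budget:
            n += 1
        return n

    quadratic = lambda n: n * n                                   # stand-in for O(n^2)
    linearithmic = lambda n: n * max(1, math.ceil(math.log2(n)))  # stand-in for O(n log n)

    for budget in (10**3, 10**5, 10**7):
        print(budget, max_solvable_n(quadratic, budget), max_solvable_n(linearithmic, budget))

As the step budget grows, the gap between the two columns widens, which is the "solves more/larger instances" claim above.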


It is important to note that big-O notation is not a measure of "how long it will take". It is not a measure of time at all, but of how many discrete steps are required to solve the problem as a function of input size. Each step could take longer, which could effectively give an algorithm a longer run time. Solvability by an algorithm, and big-O notation, assume unlimited time; all of these problems are "theoretically solvable". The constant factors and the size of N determine which implementation is practically faster in a given situation. Algorithms with lower asymptotic bounds are faster given a sufficiently large (and specific) N, but that N depends on the implementations. It may be that the crossover N is larger than most GI instances people actually run. The current algorithms run in polynomial time for most graphs (just as many sorting algorithms run closer to n than to n log n the closer the input is to pre-sorted).
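
For example, here's a toy crossover calculation (the constants 1000 and 2 are invented purely for illustration; they are not from any real GI or sorting implementation):

    import math

    def cost_fancy(n):   # hypothetical O(n log n) algorithm with a big constant
        return 1000 * n * max(1.0, math.log2(n))

    def cost_simple(n):  # hypothetical O(n^2) algorithm with a small constant
        return 2 * n * n

    # Find the crossover point: below it, the asymptotically "worse"
    # algorithm actually performs fewer steps.
    n = 1
    while cost_simple(n) <= cost_fancy(n):
        n += 1
    print("asymptotically faster algorithm only wins for n >=", n)

If most real inputs sit below that crossover, the "slower" algorithm is the one you want in practice.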


If the finite time is smallish, and the constant factor in the asymptotically faster algorithm is sufficiently large, it might be the case that it can't solve any instances of the problem in that time.
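
Concretely, with made-up numbers: suppose the "faster" algorithm has a huge fixed setup cost and the step budget is small:

    budget = 10_000                                # steps we can afford

    fast_asymptotics = lambda n: 1_000_000 + n     # O(n), but with a huge constant term
    slow_asymptotics = lambda n: 5 * n * n         # O(n^2), tiny constant

    solvable_fast = [n for n in range(1, 1000) if fast_asymptotics(n) <= budget]
    solvable_slow = [n for n in range(1, 1000) if slow_asymptotics(n) <= budget]
    print(len(solvable_fast), len(solvable_slow))  # 0 instances vs 44 instances

Within that budget the asymptotically faster algorithm solves nothing at all, while the quadratic one handles every instance up to n = 44.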



