
> Maximum Flow and Minimum-Cost Flow in Almost-Linear Time

> We give an algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with m edges and polynomially bounded integral demands, costs, and capacities in m^(1+o(1)) time. Our algorithm builds the flow through a sequence of m^(1+o(1)) approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized m^o(1) time using a new dynamic graph data structure.

> Our framework extends to algorithms running in m^(1+o(1)) time for computing flows that minimize general edge-separable convex functions to high accuracy. This gives almost-linear time algorithms for several problems including entropy-regularized optimal transport, matrix scaling, p-norm flows, and p-norm isotonic regression on arbitrary directed acyclic graphs.

https://arxiv.org/abs/2203.00671
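
(For readers who want a concrete picture of the problem being solved: below is a plain Edmonds-Karp max-flow sketch in Python, the textbook O(V*E^2) method, run on a made-up toy graph. It is emphatically not the paper's algorithm; the point of the paper is getting the exact answer in m^(1+o(1)) time instead.)

    from collections import deque, defaultdict

    def max_flow(capacity, source, sink):
        # Classic Edmonds-Karp: repeatedly push flow along shortest
        # augmenting paths in the residual graph. O(V * E^2) time --
        # shown only to illustrate the problem, NOT the paper's algorithm.
        residual = defaultdict(lambda: defaultdict(int))
        for u in capacity:
            for v, c in capacity[u].items():
                residual[u][v] += c

        total = 0
        while True:
            # BFS for a shortest source->sink path with remaining capacity.
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, c in residual[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return total  # no augmenting path left
            # Bottleneck capacity along the path found.
            bottleneck = float("inf")
            v = sink
            while parent[v] is not None:
                bottleneck = min(bottleneck, residual[parent[v]][v])
                v = parent[v]
            # Push the bottleneck amount and update reverse edges.
            v = sink
            while parent[v] is not None:
                residual[parent[v]][v] -= bottleneck
                residual[v][parent[v]] += bottleneck
                v = parent[v]
            total += bottleneck

    # Toy graph: two s->t routes with capacities 2 and 1; answer is 3.
    caps = {"s": {"a": 2, "b": 1}, "a": {"t": 2}, "b": {"t": 1}}
    print(max_flow(caps, "s", "t"))  # 3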




I've been out of school for a while, what does m^(1+o(1)) mean?

Isn't any constant trivially in "o(1)"? So m^1000 is in this class?

Or are they using it informally to mean "a very small number"?


No. Any nonzero constant is in O(1), but not in o(1). Specifically, f=O(g) means f is (eventually) at most Cg for some constant C > 0, but f=o(g) means f is (eventually) at most Cg for every constant C > 0.

A useful, albeit not really rigorous, way to think about this is O is like ≤, while o is like <.


o(1) is "little O" notation, where for a function y = o(f), y(x) / f(x) --> 0 as x --> infinity. In other words, y goes to 0 faster than f. In this context, m^(1 + o(1)) means roughly that you converge to linear time for large m.


I wish we could all collectively agree that using = for the o/O/etc. notations is stupid and stop this. These are sets of functions. f is not equal to the set; it is an element of the set, and we already have a well defined notation for that: f ∈ o(...). So use that, or use the LaTeX f \in o(...) if you want to restrict yourself to ASCII. Using = only serves to confuse.


(some descriptive linguistics) People often think of "=" as "is", when "is" can convey both equality and possession of an attribute. IIRC there was an article about how American kids think "=" means "the answer is" because they first encountered it with equations like 5+5=10. I've also seen people write 2x+5=10 = 2.5, meaning "the answer to the problem 2x+5=10 is 2.5"


Or, rather, it grows more slowly than m^(1 + epsilon) for any fixed epsilon > 0.

The function m log m is m^(1 + o(1)) but it grows more quickly than linear.


The notation means that the running time grows more slowly than m^t for any t strictly greater than one. In particular, you can have any number of log factors in the running time, so it could be m*log(m)^1000.


I'm sorry, but I'm pretty sure you are wrong there. m^(log(log(m))/log(m)) = log(m), and log(log(m))/log(m) is certainly not o(1) since it's nondecreasing (and that's just a single logarithm). I'm pretty sure o(1) requires your slowdown over linear to be faster asymptotically than any iterated logarithm, (so faster than log(log(m)), log(log(log(m))), etc.), which is close enough to linear time that calling it "almost linear" seems quite accurate!


log(log(m))/log(m) tends to zero as m tends to infinity, which is what it means to be o(1). It is decreasing for m > e^e.

Besides n^(1+o(1)), the other common definition for "almost linear" is precisely O(n log^k (n)) for some k, no matter how large.


Your first point is correct, but just to clarify your second point, these two definitions are not quite equivalent, as there is a world of functions growing more quickly than log^k(n) no matter how large the constant k, but still within n^(o(1)).

For example, consider 2^sqrt(log(n)).

This is a bit like how a function can grow faster than any polynomial but still more slowly than any exponential.
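
A crude numeric check of that, working with L = log2(n) directly so the numbers stay manageable: 2^sqrt(log2 n) eventually overtakes a fixed power of the log (here (log2 n)^3, chosen arbitrarily), yet its exponent relative to n keeps shrinking toward 0, so it is still n^(o(1)):

    import math

    # f(n) = 2**sqrt(log2 n) vs a fixed polylog, here (log2 n)**3.
    # exponent = log2(f) / log2(n) = 1/sqrt(log2 n) -> 0, so f = n^(o(1)),
    # yet f eventually dwarfs (log2 n)**3 (and any other fixed power).
    for L in (16, 64, 256, 1024, 4096):        # L stands in for log2(n)
        f = 2 ** math.sqrt(L)
        polylog = L ** 3
        print(L, round(f), polylog, round(math.sqrt(L) / L, 4))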


Thanks for the clarification! I didn't mean to say the two definitions were equivalent, as indeed they aren't. Rephrasing my second point to (hopefully) eliminate the ambiguity: There are two non-equivalent popular definitions for "almost linear" or "nearly linear" (n^(1+o(1)) and O(n log^k(n))), and nevertheless classifying "n log^1000 n" as almost linear is uncontroversial in the sense that both of the common definitions do it.

(The second paragraph of my original message addressed a point made in the parent's second paragraph, which has since been edited out.)


Oh, so it is. I retract my comment. In that case it seems like a pretty stupid designation to me.


The o(1) here represents a function of m that tends to 0 as m grows, such as 1/m. (Note: This is little o, not big o.)

Therefore, for any k > 1, m^(1+o(1)) eventually grows more slowly than m^k, though it can still grow faster than m (for example, m log m).


I'm trying to come up with an example to check my understanding. n (log n)^k for some constant k would qualify, right?


Yes. (log n)/(n^epsilon) tends to zero for any positive epsilon, and so log n = n^(o(1)). The same holds for (log n)^k.
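
If anyone wants to see the ratio actually collapse, here is a quick sanity check with arbitrarily chosen k = 5 and eps = 0.01 (working with the log of the ratio to avoid overflow): the ratio is astronomically large at first, but its log eventually plunges toward minus infinity, i.e. the ratio itself goes to 0:

    import math

    # For fixed k and eps > 0, (log n)**k / n**eps -> 0, which is why
    # n * (log n)**k sits inside n^(1 + o(1)). Work with the natural log
    # of the ratio, k*ln(ln n) - eps*ln(n), to avoid overflow.
    # k = 5 and eps = 0.01 are arbitrary illustrative choices.
    k, eps = 5, 0.01
    for log10_n in (10, 100, 1000, 10_000, 100_000):   # n = 10**log10_n
        ln_n = log10_n * math.log(10)
        ln_ratio = k * math.log(ln_n) - eps * ln_n
        print(f"n = 10^{log10_n}: ln(ratio) = {ln_ratio:.1f}")
    # The log of the ratio climbs at first, then drops far below zero.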


It means m^(1+eps) for an arbitrarily small positive constant eps. So you can make it run in time O(m^1.1) but also in time O(m^1.00000001), where the latter will presumably have a larger constant implied by the O(). The technical reason you can make eps arbitrarily small is that it represents a function of m that tends to 0 as m tends to infinity.


Mhh, while what you are saying is not wrong (though I'm not convinced about the constant thing, given everything is asymptotic), it's a bit imprecise.

A more exact formulation would be that m^(1+o(1)) is equal to m^(1 + eps(m)) for some function eps where eps(m) -> 0 as m -> infty.



