> Maximum Flow and Minimum-Cost Flow in Almost-Linear Time
> We give an algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with m edges and polynomially bounded integral demands, costs, and capacities in m^(1+o(1)) time. Our algorithm builds the flow through a sequence of m^(1+o(1)) approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized m^o(1) time using a new dynamic graph data structure.
> Our framework extends to algorithms running in m^(1+o(1)) time for computing flows that minimize general edge-separable convex functions to high accuracy. This gives almost-linear time algorithms for several problems including entropy-regularized optimal transport, matrix scaling, p-norm flows, and p-norm isotonic regression on arbitrary directed acyclic graphs.
No. Any constant is in O(1), but not in o(1). Specifically, f = O(g) means f is (eventually) at most Cg for some constant C, while f = o(g) means f is (eventually) at most Cg for every C > 0.
A useful, albeit not really rigorous, way to think about this is O is like ≤, while o is like <.
o(1) is "little o" notation: y = o(f) means y(x) / f(x) --> 0 as x --> infinity. In other words, y grows strictly slower than f. In this context, m^(1 + o(1)) means roughly that the running time converges to linear for large m.
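A quick numeric illustration of that ratio going to 0 (a Python sketch; the sample values are arbitrary, and log(x) = o(x) is just a convenient example pair):

```python
import math

# log(x) = o(x): the ratio log(x)/x tends to 0 as x grows.
for x in [10, 10**3, 10**6, 10**12]:
    print(x, math.log(x) / x)  # ratio shrinks toward 0
```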
I wish we could all collectively agree that using = for the o/O/etc. notations is stupid and stop this. These are sets of functions. f is not equal to the set, it is an element of the set, and we already have a well-defined notation for that: f ∈ o(...). So use that, or use the LaTeX f \in o(...) if you want to restrict yourself to ASCII. Using = only serves to confuse.
(some descriptive linguistics) People often think of "=" as "is", when "is" can convey both equality and possession of an attribute. IIRC there was an article about how American kids think "=" means "the answer is" because they first encountered it with equations like 5+5=10. I've also seen people write 2x+5=10 = 2.5, meaning "the answer to the problem 2x+5=10 is 2.5"
The notation means that the running time is O(m^t) for every t strictly greater than one. Basically, it means that you can have any number of log factors on the running time, so it could be m*log(m)^1000
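One way to sanity-check the log-factor claim numerically: compute the "effective exponent" log_m(f(m)) of f(m) = m * log(m)^1000 (a Python sketch; the function name and sample sizes are made up for illustration, and everything is done in log-space so huge m doesn't overflow):

```python
import math

# Effective exponent log_m(f(m)) for f(m) = m * log(m)^k.
# Working in log-space lets us probe absurdly large m without overflow.
def effective_exponent(log10_m: float, k: int = 1000) -> float:
    ln_m = log10_m * math.log(10)      # ln(m) for m = 10**log10_m
    ln_f = ln_m + k * math.log(ln_m)   # ln(m * ln(m)**k)
    return ln_f / ln_m                 # = 1 + k*ln(ln m)/ln(m) --> 1

for log10_m in [10, 100, 10**4, 10**8]:
    print(log10_m, effective_exponent(log10_m))  # creeps down toward 1
```

The exponent is huge for small m (the thousand log factors dominate) but tends to 1 as m grows, which is exactly what m^(1+o(1)) allows.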
I'm sorry, but I'm pretty sure that's not the whole story. m^(log(log(m))/log(m)) = log(m), and log(log(m))/log(m) is o(1) since it tends to 0, so a single log factor (and likewise any polylog factor) does fit inside m^(1+o(1)). But m^(o(1)) allows more than that: the slowdown over linear can even grow faster than any fixed power of the logarithm, which is still close enough to linear time that calling it "almost linear" seems quite accurate!
Your first point is correct, but just to clarify your second point, these two definitions are not quite equivalent, as there is a whole world of functions growing more quickly than log^k(n) no matter how large the constant k, but still within n^o(1).
For an example, consider 2^sqrt(log(n)).
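To get a numeric feel for that example (a Python sketch; the helper names are made up, and everything stays in log-space so astronomically large n is fine): 2^sqrt(ln n) eventually dwarfs (ln n)^k for any fixed k, yet its effective exponent log_n(2^sqrt(ln n)) = ln(2)/sqrt(ln n) still vanishes.

```python
import math

# f(n) = 2**sqrt(ln n): bigger than any (ln n)**k eventually,
# yet still n**o(1), since log_n(f(n)) = ln(2)/sqrt(ln n) --> 0.
def exponent_of_f(ln_n: float) -> float:
    return math.log(2) / math.sqrt(ln_n)   # log_n(f(n))

def ln_f(ln_n: float) -> float:
    return math.log(2) * math.sqrt(ln_n)   # ln(f(n))

def ln_polylog(ln_n: float, k: int) -> float:
    return k * math.log(ln_n)              # ln((ln n)**k)

ln_n = 1e12  # an enormously large n, handled purely via its log
print(exponent_of_f(ln_n))                   # tiny --> n^o(1)
print(ln_f(ln_n) > ln_polylog(ln_n, 1000))   # f(n) beats log^1000(n) here
```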
This is a bit similar to a function growing faster than any polynomial, but slower than any exponential.
Thanks for the clarification! I didn't mean to say the two definitions were equivalent, as indeed they aren't. Rephrasing my second point to (hopefully) eliminate the ambiguity: There are two non-equivalent popular definitions for "almost linear" or "nearly linear" (n^(1+o(1)) and O(n log^k(n)), and nevertheless classifying "n log^1000 n" as almost linear is uncontroversial in the sense that both of the common definitions do it.
(The second paragraph of my original message addressed a point made in the parent's second paragraph, which has since been edited out.)
It means the running time is O(m^(1+eps)) for every fixed positive constant eps, however small. So the algorithm runs in time O(m^1.1) but also in time O(m^1.00000001), where the latter will presumably have a larger constant implied by the O().
The technical reason you can make eps arbitrarily small is that it represents a function of m that tends to 0 as m tends to infinity.
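As a toy illustration of eps being a function of m (a Python sketch; g(m) = 1/ln(ln m) is a made-up o(1) exponent, not the one from the paper):

```python
import math

# Hypothetical o(1) exponent g(m) = 1/ln(ln m): for any fixed eps > 0
# it eventually drops below eps, so m**(1 + g(m)) <= m**(1 + eps) from
# some (possibly enormous) m onward.
def g_of(ln_m: float) -> float:
    return 1.0 / math.log(ln_m)

def ln_m_threshold(eps: float) -> float:
    # g(m) < eps  <=>  ln(ln m) > 1/eps  <=>  ln(m) > e**(1/eps)
    return math.exp(1.0 / eps)

for eps in [0.5, 0.1, 0.05]:
    print(eps, ln_m_threshold(eps))  # required ln(m) explodes as eps shrinks
```

The exploding threshold is one way to see why the tighter bounds like O(m^1.00000001) only kick in, or need a large hidden constant, for astronomically large m.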
https://arxiv.org/abs/2203.00671