
New research a ‘breakthrough for large-scale discrete optimization’ - new_guy
https://eurekalert.org/pub_releases/2018-06/hjap-ae062818.php
======
jey
This is specifically talking about submodular function maximization problems.
A submodular set function is a function f(S) that takes a set S and returns a
"score". The objective is to find the set S with the highest score. The
"submodular" property means that f(S) has "diminishing returns", meaning
roughly that each new element contributes less to the score as the set S gets
bigger. It turns out that this is hard to maximize exactly, but for monotone
functions under a cardinality constraint, the simplest possible greedy
algorithm already achieves an impressive approximation ratio of 1 - 1/e. The
downside is that even the plain greedy algorithm is still pretty slow if you
need to find a large set S, since it picks elements one at a time; that's what
this paper is addressing with a parallelizable randomized algorithm. (It's not
the first such algorithm, but it's apparently much faster than the previous
state of the art.)
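
A minimal greedy sketch in Python, to make the above concrete (my own toy
code, not the paper's parallel algorithm; greedy_maximize and the coverage
objective are made up for illustration):

    # Toy sketch of the classic greedy baseline for monotone submodular
    # maximization under a cardinality constraint k (the 1 - 1/e algorithm
    # described above, not the paper's parallel one).
    def greedy_maximize(f, ground_set, k):
        S = set()
        for _ in range(k):  # each round depends on the previous S: inherently sequential
            best, best_gain = None, float("-inf")
            for e in ground_set - S:
                gain = f(S | {e}) - f(S)  # marginal gain; shrinks as S grows
                if gain > best_gain:
                    best, best_gain = e, gain
            if best is None:  # ran out of elements
                break
            S.add(best)
        return S

    # Example f: maximum coverage, a classic submodular "score".
    subsets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
    def coverage(S):
        covered = set()
        for name in S:
            covered |= subsets[name]
        return len(covered)

    print(greedy_maximize(coverage, set(subsets), k=2))  # -> {'a', 'c'}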

[disclaimer: not an expert in any of this]

~~~
joelthelion
Thank you! Your comment is so much better than the two pages of fluff this
article is made of.

------
mindcrime
Looks like it may be this paper:

[https://arxiv.org/pdf/1804.06355](https://arxiv.org/pdf/1804.06355)

~~~
westurner
"An Exponential Speedup in Parallel Running Time for Submodular Maximization
without Loss in Approximation" [https://www.arxiv-
vanity.com/papers/1804.06355/](https://www.arxiv-
vanity.com/papers/1804.06355/)

The ACM STOC 2018 conference links to "The Adaptive Complexity of Maximizing a
Submodular Function"
[http://dl.acm.org/authorize?N651970](http://dl.acm.org/authorize?N651970)
[https://scholar.harvard.edu/files/ericbalkanski/files/the-ad...](https://scholar.harvard.edu/files/ericbalkanski/files/the-adaptive-complexity-of-maximizing-a-submodular-function.pdf)

A DOI URI would be great, thanks.

------
Aardwolf
I see a lot of buzzwords (applications of the algorithm that are just
applications of anything) and little description of the actual details of the
algorithm, the complexity theory involved, or what type of problems it solves
(obviously assuming they didn't turn NP into P).

------
zhanwei
"What if a large class of algorithms used today -- from the algorithms that
help us avoid traffic to the algorithms that identify new drug molecules --
worked exponentially faster?" \-- these problems used as example in the
opening line are only tangentially related to the problem of submodular
maximisation that the paper it is talking about is tackling. I also don't
think submodular maximisation is widely used today for identifying new drug
molecules or avoiding traffic.

------
karmakaze
I've seen two recent posts that mention 'exponentially faster'. Does this have
a precise meaning? I could imagine it to mean finding a non-exponential
algorithm for a problem which previously only had exponential ones. Another
interpretation could be making an algorithm with 2^n steps where previous ones
took 2^(2n) steps, making the improvement factor 2^n but still resulting in an
exponential-time algorithm. Or should we just accept this a la 'literally'?

~~~
yorwba
According to this StackExchange answer
[https://cs.stackexchange.com/a/50356](https://cs.stackexchange.com/a/50356)
the speedup is best expressed as a function relating the two algorithms' run
times. So if there's a constant factor of 2, then that's "twice as fast", but
if the relationship isn't linear (e.g. 2^(2n) = (2^n)^2 is a quadratic
function of 2^n) then the speedup is named correspondingly. So 2^(2n) -> 2^n
is a quadratic speedup, while 2^n -> n or n -> log n are exponential speedups.
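
A quick numeric sanity check of that convention in Python (toy step counts,
just to illustrate the definitions):

    # Express the old running time as a function of the new one (made-up step counts).
    n = 20
    old, new = 2 ** (2 * n), 2 ** n
    print(old == new ** 2)  # True: 2^(2n) -> 2^n is "only" a quadratic speedup
    old, new = 2 ** n, n
    print(old == 2 ** new)  # True: 2^n -> n is an exponential speedup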

------
efangs
Title could also be "Researchers find a better heuristic for a class of
exponentially scaling problems."

There's no claim to have changed the complexity class, so "exponentially
faster" just means the problem is still exponential in problem size, but the
exponent is smaller.

From the looks of it, though, it is a nice result because it has pretty wide
applicability.

~~~
voidmain
No, "exponentially faster" means exponentially faster _with exponentially more
hardware_. Specifically, N processors can solve a problem of size N in O(log
N) wall clock time, where the previous (serial) algorithm used O(N) wall clock
time. Most programmers would say "more scalable" or "more parallel" rather
than "faster", but the terminology makes sense and is standard in the context
of the PRAM model of parallel computation.
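
A toy illustration of that distinction (a generic parallel reduction, not the
paper's algorithm):

    import math

    # With enough processors, a size-n problem can finish in O(log n) rounds of
    # wall-clock time even though the total work stays O(n).
    def parallel_rounds(n):
        rounds = 0
        while n > 1:
            n = math.ceil(n / 2)  # all pairs combined simultaneously in one round
            rounds += 1
        return rounds

    print(parallel_rounds(1_000_000))  # 20 rounds vs ~10^6 sequential steps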

I'm not sure, but I doubt this algorithm is "faster" than the previous one on
a single processor.

(The article is drivel; I couldn't figure out what they were talking about
either, until I looked at the paper)

~~~
efangs
Ah ok, yeah TBH I didn't read the article, but was guessing from the headline
that this is what they meant, since in exponentially scaling problems there's
always room at the top (unless explicitly proven otherwise).

------
leetbulb
I skimmed through because it sounds fairly click-baity... From what I gather,
they're claiming to have solved NP-complete problems to some degree with some
abstract form of algorithm? I'm genuinely interested if anyone can explain
what's going on...

~~~
allenz
Submodular maximization is indeed NP-hard, but we can find approximate
solutions in polynomial time. This paper speeds up the parallel running time
of the approximation from O(n) to O(log n) rounds, meaning that they've found
a way to make the algorithm more parallelizable, though you still have to do
roughly the same total amount of work.

~~~
leetbulb
Got it! Thanks!

------
mchahn
Could this (maybe only slightly) push out how long it will be before quantum
computers surpass traditional ones?

