Hacker News
New research a ‘breakthrough for large-scale discrete optimization’ (eurekalert.org)
97 points by new_guy on July 1, 2018 | 20 comments



This is specifically talking about submodular function maximization problems. A submodular set function is a function f(S) that takes a set S and returns a "score". The objective is to find the set S with the highest score. The "submodular" property means that f(S) has "diminishing returns", meaning roughly that each new element contributes less to the score as the set S gets bigger. It turns out that this is hard to maximize exactly, but the simplest possible greedy algorithm already achieves an impressive approximation ratio of 1 - 1/e. The downside is that even the fully greedy algorithm is still pretty slow if you need to find a large set S, which is what this paper is addressing with a parallelizable randomized algorithm. (It's not the first such algorithm, but it's apparently much faster than the previous state of the art.)
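For intuition, here's a minimal sketch of that plain greedy loop on a toy coverage function (coverage is a classic monotone submodular function; the data and function name here are made up purely for illustration):

    # Maximize f(S) = |union of the chosen sets| subject to |S| <= k.
    def greedy_max_coverage(sets, k):
        chosen, covered = [], set()
        for _ in range(k):
            best, best_gain = None, 0
            for i, s in enumerate(sets):
                if i in chosen:
                    continue
                gain = len(s - covered)      # marginal gain of adding set i
                if gain > best_gain:
                    best, best_gain = i, gain
            if best is None:                 # nothing left adds any coverage
                break
            chosen.append(best)
            covered |= sets[best]
        return chosen, covered

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    print(greedy_max_coverage(sets, k=2))    # ([0, 2], {1, 2, 3, 4, 5, 6})

Note that the outer loop is inherently sequential: each pick depends on everything chosen so far, which is exactly why greedy gets slow when S has to be large and why a parallelizable algorithm is interesting.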

[disclaimer: not an expert in any of this]


Thank you! Your comment is so much better than the two pages of fluff this article is made of.


Very good point. Actually, 1 - 1/e is only for monotone submodular functions; for a general (non-monotone) submodular function, a 1/2 approximation ratio is the best you can get.


Looks like it may be this paper:

https://arxiv.org/pdf/1804.06355


"An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation" https://www.arxiv-vanity.com/papers/1804.06355/

The ACM STOC 2018 conference links to "The Adaptive Complexity of Maximizing a Submodular Function" http://dl.acm.org/authorize?N651970 https://scholar.harvard.edu/files/ericbalkanski/files/the-ad...

A DOI URI would be great, thanks.


Reading through that paper I'm not sure exactly what's new about it above and beyond what has been in a lot of the evolutionary algorithm literature, other than maybe formal proofs (which would be important). I'm sure I'm missing something though because this isn't my area.


I see a lot of buzzwords (applications of the algorithm that are really just applications of anything) and little description of the actual details of the algorithm, its complexity, or what type of problems it solves, obviously assuming they didn't turn NP into P.


"What if a large class of algorithms used today -- from the algorithms that help us avoid traffic to the algorithms that identify new drug molecules -- worked exponentially faster?" -- these problems used as example in the opening line are only tangentially related to the problem of submodular maximisation that the paper it is talking about is tackling. I also don't think submodular maximisation is widely used today for identifying new drug molecules or avoiding traffic.


I've seen two recent posts that mention 'exponentially faster'. Does this have a precise meaning? I could imagine it meaning a non-exponential algorithm for a problem that previously only had exponential ones. Another interpretation could be an algorithm taking 2^n steps where previous ones took 2^(2n) steps, making the improvement factor 2^n but still resulting in an exponential-time algorithm. Or should we just accept this a la 'literally'?


According to this StackExchange answer https://cs.stackexchange.com/a/50356 the speedup is best expressed as a function relating the two algorithms' running times. So if there's a constant factor of 2, that's "twice as fast", but if the relationship isn't linear (e.g. 2^(2n) = (2^n)^2 is a quadratic function of 2^n) then the speedup is named after that function. So 2^(2n) -> 2^n is a quadratic speedup, while 2^n -> n or n -> log n are exponential speedups.
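A quick toy calculation (made-up numbers, just to show the raw factor isn't constant under that naming):

    # 2^(2n) -> 2^n: old = new**2, hence "quadratic speedup",
    # even though the raw factor old/new = 2^n keeps growing with n.
    for n in (10, 20, 30):
        old, new = 2 ** (2 * n), 2 ** n
        print(n, old // new)     # 1024, 1048576, 1073741824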


In this case, exponentially faster means that the parallel running time improved from O(n) to O(log n).


Title could also be "Researchers find a better heuristic for a class of exponentially scaling problems."

There's no claim to have changed the complexity class, so "exponentially faster" just means the problem still scales exponentially in problem size, but the exponent is smaller.

From the looks of it, though, it is a nice result because it has pretty wide applicability.


No, "exponentially faster" means exponentially faster with exponentially more hardware. Specifically, N processors can solve a problem of size N in O(log N) wall clock time, where the previous (serial) algorithm used O(N) wall clock time. Most programmers would say "more scalable" or "more parallel" rather than "faster", but the terminology makes sense and is standard in the context of the PRAM model of parallel computation.

I'm not sure, but I doubt this algorithm is "faster" than the previous one on a single processor.

(The article is drivel; I couldn't figure out what they were talking about either, until I looked at the paper)


Ah ok, yeah TBH I didn't read the article, but was guessing from the headline that this is what they meant, since in exponentially scaling problems there's always room at the top (unless explicitly proven otherwise).


I skimmed through because it sounds fairly click-baity... From what I gather, they're claiming to have solved NP-complete problems to some degree with some abstract form of algorithm? I'm genuinely interested, if anyone can explain what's going on...


The article doesn't even mention NP-completeness. Nor does the paper: https://arxiv.org/pdf/1804.06355.pdf Haven't read enough yet to comment otherwise, but let's not spread rumours.


Submodular maximization is indeed NP-hard, but we can find approximate solutions in polynomial time. This paper speeds up the parallel running time of the approximation from O(n) to O(log n), meaning that they've found a way to make the algorithm more parallelizable, though you still have to do the same amount of work.
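If it helps to picture it: the relevant measure is how many sequential rounds of oracle queries you need, where the queries inside one round are independent and could all run in parallel. Plain greedy needs one such round per element it picks. A toy way to see that (made-up sketch, not the paper's actual algorithm; "coverage" and the inputs are invented for illustration):

    # Count the dependent query rounds ("adaptivity") of plain greedy.
    # Queries issued inside one round are independent and parallelizable;
    # greedy needs a fresh round for every element it adds, so ~k rounds.
    def greedy_with_round_count(f, ground, k):
        S, rounds = set(), 0
        for _ in range(k):
            gains = {x: f(S | {x}) - f(S) for x in ground - S}  # one parallel batch
            if not gains:
                break
            rounds += 1                          # one dependent round per pick
            S.add(max(gains, key=gains.get))
        return S, rounds

    coverage = lambda S: len(set().union(*S)) if S else 0
    universe = {frozenset({i, i + 1}) for i in range(100)}
    print(greedy_with_round_count(coverage, universe, k=50)[1])  # 50 rounds

The paper's contribution is getting that round count down to O(log n) without losing the approximation guarantee.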


Got it! Thanks!


The abstract seems to be comparing having to search an entire problem space for the optimal result ("the current approach") with an approach that uses sampling to get a good-enough result?


Could this (maybe only slightly) push out how long it will be before quantum computers overtake traditional ones?



