To take the discussion on a slight tangent: how uncommon is this phenomenon, where an applied tool works really well but nobody knows why? I can give another example from the domain of formal verification: SAT solvers, which are at the core of most modern verification/synthesis tools.

You can download open-source SAT solvers today that work spectacularly well on "real" SAT instances with millions of variables and clauses. Yes, SAT is the quintessential NP-complete problem, and in fact it is pretty easy to come up with a SAT instance with only a few tens of variables/clauses that would kill these solvers. But somehow these "hard" instances almost never occur in the practical problems generated in hardware/software verification/synthesis, as well as a ton of other applications (planning, constraint programming, etc.).

So there must be some characteristic of the problems we generate in practice that makes them "easy", but we don't have a good understanding of what that characteristic is. All we know, for now, is that we've somehow stumbled upon a near-perfect set of heuristics that work amazingly well on the SAT instances we encounter in practice.
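To make the discussion concrete, here is a minimal sketch of the DPLL procedure that classic SAT solvers are built on: unit propagation plus backtracking search. This is illustrative only; production solvers (MiniSat, CaDiCaL, etc.) add clause learning, watched literals, and restart heuristics on top of this skeleton. Clauses use DIMACS-style integer literals: `3` means x3, `-3` means NOT x3.

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None if UNSAT.

    Toy DPLL sketch: unit propagation, then branch on a variable.
    """
    if assignment is None:
        assignment = {}

    # Unit propagation: repeatedly assign variables forced by unit clauses.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            remaining = []
            satisfied = False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True   # clause already true, drop it
                        break
                else:
                    remaining.append(lit)
            if satisfied:
                continue
            if not remaining:              # empty clause: conflict
                return None
            if len(remaining) == 1:        # unit clause: forced assignment
                lit = remaining[0]
                assignment[abs(lit)] = lit > 0
                changed = True
            simplified.append(remaining)
        clauses = simplified

    if not clauses:                        # every clause satisfied
        return assignment

    # Branch on the first unassigned variable, trying True then False.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1 or x3)
print(dpll([[1, 2], [-1, 3], [-2, -3], [1, 3]]))
```

Variables left out of the returned dict are unconstrained and can take either value.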

 > So there must be some characteristic of the problems we generate in practice that makes them "easy", but we don't have a good understanding of what that characteristic is. All we know, for now, is that we've somehow stumbled upon a near-perfect set of heuristics that work amazingly well on the SAT instances we encounter in practice.

 There's also the intriguing possibility that "natural instances", however one might define them, arise through some complicated process that ends up sampling from easy SAT instances. I think of it as analogous to Benford's law (see: https://en.wikipedia.org/wiki/Benford%27s_law#Explanations), except that we do not have nearly as good explanations for SAT solvers.

 It is also highly likely that MOST instances of SAT are easy, but that doesn't preclude the problem from being NP-hard, nor does it stop us from easily constructing hard instances. (Such things are absolutely necessary for cryptography, as an example.)
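 The Benford analogy can be checked numerically in a few lines: leading digits of 2**n follow P(d) = log10(1 + 1/d), so a multiplicative process that spans many orders of magnitude concentrates on small leading digits. A stdlib-only sketch:

```python
import math
from collections import Counter

# Leading digit of 2**n for n = 1..1000.
counts = Counter(int(str(2 ** n)[0]) for n in range(1, 1001))

for d in range(1, 10):
    predicted = math.log10(1 + 1 / d)   # Benford's law
    observed = counts[d] / 1000
    print(f"digit {d}: observed {observed:.3f}, predicted {predicted:.3f}")
```

 The observed frequencies land within about a percent of the Benford prediction; the open question in the SAT case is that we have no comparably clean account of why "natural" instances land in the easy region.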
 The 'other' possibility is that hard instances have 'measure zero' in the space of all instances... So when you pick some random problem in the real world, it ends up being relatively easy to solve with high probability.

 My feeling is there's a bit of each; the problem space has a bit of unknowable quasi-structure, and also tends to miss the perverse corner cases and counterexamples that we like to think about as mathematicians.
 My understanding is that in the SAT world, random instances are typically easy, even for large numbers of variables. Hard instances only show up in a very narrow band of instance space, around a critical #constraints/#variables ratio. And yes, it seems like there should be some explanation for this, or maybe it's simply "your intuition for how hard problems should be is wrong".
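 The setup behind that observation is easy to reproduce. For random 3-SAT, instances are empirically hardest near a clauses/variables ratio of about 4.27: well below it, almost all instances are satisfiable (and easy); well above it, almost all are unsatisfiable (and also easy to refute). A small generator sketch:

```python
import random

def random_3sat(num_vars, ratio, rng=random):
    """Generate a random 3-SAT instance with round(ratio * num_vars) clauses.

    Each clause picks 3 distinct variables and negates each with
    probability 1/2; literals use DIMACS-style signed integers.
    """
    clauses = []
    for _ in range(round(ratio * num_vars)):
        variables = rng.sample(range(1, num_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

# Under-constrained: almost surely satisfiable, solvers dispatch it quickly.
easy = random_3sat(50, 2.0)
# Near the phase transition, where the hard instances concentrate.
hard = random_3sat(50, 4.27)
```

 Feeding instances generated at a sweep of ratios to any off-the-shelf solver reproduces the classic easy-hard-easy runtime curve.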
 Can you recommend a good introduction tutorial on SAT theory?
 This should be an accessible introduction to SAT: http://www.georg.weissenbacher.name/papers/mod12.pdf. If you want something more "high-level", let me know and I'll look for something appropriate.
 Good look, homey. Thanks.
