
Using Simulated Annealing to Solve Logic Puzzles - djoldman
http://blog.pluszero.ca/blog/2016/07/17/using-simulated-annealing-to-solve-logic-puzzles/
======
osense
Another fun way to solve this is using Logic programming, e.g.
[http://baptiste-wicht.com/posts/2010/09/solve-einsteins-
ridd...](http://baptiste-wicht.com/posts/2010/09/solve-einsteins-riddle-using-
prolog.html)

~~~
kriro
Indeed, anything with constraints is usually a great use case for Prolog. I
particularly like this version in ECLiPSe, which specializes in constraint
programming with Prolog:
[https://gist.github.com/JuanitoFatas/2227711](https://gist.github.com/JuanitoFatas/2227711)

I haven't benchmarked it, but it is pretty intuitive to read, which is very
important for these types of problems.

~~~
nurettin
This is the best Prolog solution I've seen for this kind of puzzle. The other
solutions all had to define a bunch of primitives for neighborhood
constraints.

------
gort
The article mentions the problem of "local minimums". The author suggests
occasionally accepting worse solutions in order to escape them: "The hope
here is that by following this worse solution, we can eventually get to the
global minimum." This is good, but one may wish to take it further.

I once messed with something called "metropolis coupling", where you run
multiple threads in parallel, with different thresholds for how often they
accept worse solutions, i.e. one thread is "cold", one is slightly "hotter",
and so on. If a hot thread's current solution is better than the solution of
the neighbouring colder thread, the solutions are swapped. In this way the
coldest thread (which is the one we're paying attention to) gets pulled out of
the local minimum.

As I say, I messed with this once and it did seem to help. It seems to have
been developed for inference in evolutionary phylogenetics, and is perhaps a
bit obscure?
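As a sketch of the idea (a minimal, hypothetical implementation; the chain count, temperatures, and swap schedule are all my own choices, and the technique is also known as parallel tempering or replica exchange):

```python
import math
import random

def parallel_tempering(cost, neighbor, start, temps, steps, seed=0):
    """Replica-exchange sketch: one Metropolis chain per temperature
    (temps ordered cold -> hot), with periodic swaps between neighbors."""
    rng = random.Random(seed)
    states = [start] * len(temps)
    best, best_cost = start, cost(start)
    for step in range(steps):
        # ordinary Metropolis move in every chain
        for i, t in enumerate(temps):
            cand = neighbor(states[i], rng)
            delta = cost(cand) - cost(states[i])
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                states[i] = cand
            if cost(states[i]) < best_cost:
                best, best_cost = states[i], cost(states[i])
        # occasionally let a hotter chain hand a better state to a colder one
        if step % 10 == 0:
            for i in range(len(temps) - 1):
                if cost(states[i + 1]) < cost(states[i]):
                    states[i], states[i + 1] = states[i + 1], states[i]
    return best, best_cost
```

On a toy 1-D cost with a local minimum and a separate global minimum, the cold chain alone tends to get trapped, while the swaps let it inherit the hot chain's escapes.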

~~~
ChronosKey
Hi, author here:

Yep, this is the sort of thing we study in our cooperative and adaptive
algorithms class. Similar techniques to what you suggest are shown here
[https://books.google.ca/books?id=G5ML5EYch94C&pg=PA88&lpg=PA...](https://books.google.ca/books?id=G5ML5EYch94C&pg=PA88&lpg=PA88&dq=cooperative+SA+cosa&source=bl&ots=DjpEgTvDab&sig=LSBRppSNQqhr0qVjqRikt60b36c&hl=en&sa=X&ved=0ahUKEwi0hs39vP3NAhVrw4MKHbd0CEsQ6AEIHDAA#v=onepage&q=cooperative%20SA%20cosa&f=false)

~~~
klausnrooster
Looks neat, but I gave up on it converging on my new laptop. I'm going to port
it to rebol2. This bit confuses me:

    
    
    random.choice(range(0, i) + range(i+1, 4))

how is that different from

    random.choice(range(0, 4))?

Obviously no Pythonista, but playing around in the REPL I see no diff.

~~~
klausnrooster
Duh. Where is the button that reverses the passage of time?
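For the record: in Python 2, `range` returns a list, so `range(0, i) + range(i+1, 4)` concatenates into every index except `i`, guaranteeing the move actually changes something, whereas `range(0, 4)` can hand back `i` itself. A Python 3 translation (there, `range` objects don't support `+`):

```python
import random

i = 2
# Python 3: range objects can't be added, so materialize them as lists.
candidates = list(range(0, i)) + list(range(i + 1, 4))
print(candidates)  # [0, 1, 3] -- every index except i
choice = random.choice(candidates)
assert choice != i  # whereas random.choice(range(0, 4)) could return i
```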

------
placebo
Nice. I've used simulated annealing to achieve good results in problems where
the solution space was too large for constraint programming, but for this
specific sort of problem I've had good experience with constraint programming
frameworks like Gecode or Choco.

------
doomrobo
A lot of people here mention Prolog as another tool to solve this problem. I
wonder if there's a natural way to combine constraint solving with simulated
annealing.

------
rsingla
A very well-written post on simulated annealing, and a fun puzzle to solve
(even if it is old!)

------
jawns
If you like this sort of thing, I wrote some code a couple of years ago to
generate and solve logical deduction problems, such as the "Cheryl's Birthday"
brainteaser:

[https://github.com/shaungallagher/cheryls-
murder/blob/master...](https://github.com/shaungallagher/cheryls-
murder/blob/master/cheryls-murder.ipynb)

The code isn't very clean (it was written hastily as part of a hack days
project at work) but the notebook walks you through it, so even if you're not
familiar with Python, you'll get a sense of the technique.

------
baldeagle
Bonus points for including code in python to solve arbitrary logic puzzles.

------
agentultra
I've used simulated annealing to make constraint solvers more efficient. The
nice thing about constraint solvers and simulated annealing as demonstrated is
that it's fairly straightforward to adapt the algorithm to your specific
problem and data in order to get the best performance.

------
tunesmith
So, what does it mean exactly when they say that these kinds of problems are
in the same class as Sudoku?

~~~
ChronosKey
Simulated annealing and similar techniques such as Tabu Search are good for
problems that have large search spaces due to combinatorial explosion
([https://en.wikipedia.org/wiki/Combinatorial_explosion](https://en.wikipedia.org/wiki/Combinatorial_explosion))
and can be expressed as a state and an associated cost/fitness function.
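The generic skeleton is small; here is a sketch (names and cooling schedule are my own, not from the article) showing that only the state, the neighbor move, and the cost function are problem-specific:

```python
import math
import random

def anneal(cost, neighbor, start, t0=10.0, cooling=0.999, steps=5000, seed=0):
    """Generic simulated annealing: accept any improving move, accept a
    worsening move with probability exp(-delta / t), and cool t geometrically."""
    rng = random.Random(seed)
    state, t = start, t0
    best, best_cost = start, cost(start)
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < best_cost:
            best, best_cost = state, cost(state)
        t *= cooling
    return best, best_cost
```

For example, `anneal(lambda x: abs(x - 7), lambda x, rng: x + rng.choice([-1, 1]), 50)` minimizes |x - 7| starting from 50.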

~~~
yummyfajitas
This is true provided your cost function is "continuous" w.r.t. distance.
Roughly what this means is that if you make a small change - e.g. one edit in
this example - then your cost function shouldn't change a lot.

If you don't have some form of continuity condition, you are actually just
doing random search.
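A toy illustration of the difference (my own example, not from the thread): a Hamming-style cost moves by exactly 1 per single-bit edit, while an "exact match or nothing" cost gives the annealer nothing to follow:

```python
def smooth_cost(state, target):
    """Number of positions that disagree: a one-bit edit moves this by exactly 1."""
    return sum(a != b for a, b in zip(state, target))

def flat_cost(state, target):
    """0 on the exact solution, 1 everywhere else: every wrong state looks equally bad."""
    return 0 if state == target else 1

target = (1, 0, 1, 1)
state = (0, 0, 1, 1)    # one bit away from the solution
flipped = (1, 0, 1, 1)  # flip the wrong bit -> solved

# the smooth cost rewards partial progress...
print(smooth_cost(state, target))    # 1
print(smooth_cost(flipped, target))  # 0
# ...while the flat cost can't distinguish "close" from "far"
print(flat_cost(state, target))         # 1
print(flat_cost((0, 1, 0, 0), target))  # also 1, though all four bits are wrong
```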

------
makeset
Constraint Satisfaction Problems can be modeled as Hopfield-type neural
networks, such as Boltzmann machines (RBMs of deep learning are a variant)
which can be tuned using simulated annealing.

There was plenty of research on this around 1985-1995 (the "second coming" of
neural networks), but it died out with the hype because it never became a
practical way to solve CSPs. Given the recent innovations in deep learning, it
should eventually pick up again, enabling deep learners to solve constraint
satisfaction subproblems within the same integrated architecture.

------
cordite
This sounds exactly like this game:
[http://www.kaser.com/sherwin.html](http://www.kaser.com/sherwin.html)

There are relational properties, combinations, and maybes all over.

------
Anilm3
Really interesting. I wrote a logic puzzle solver in Prolog a long time ago
while I was at uni, and I've just created a repo for anyone who wants to check
it out. Unfortunately the documentation and comments are all in Spanish...

[https://github.com/Anilm3/Logic-Puzzle-
Solver](https://github.com/Anilm3/Logic-Puzzle-Solver)

~~~
anthk
At least you could have kept the "head-tail" naming, it's more intuitive.

Thanks anyway ;)

------
rahimnathwani
Tweaking 't' here seems to be similar to tweaking the learning rate in
gradient descent. But in gradient descent you assume the problem is convex.

Is there any way to tell whether this logic problem (or any other) is convex?
Is it possible that the correct solution's 5-away neighbours all have very
high costs (like 9+)? What prevents you from doing worse than brute force?

------
jasdeepsingh
What if we change the cost function to something that returns a number
directly correlated with the maximum number of attributes matched? Would
solving the problem be any easier, or would we still be left with the
possibility of getting stuck in local optima just like the one described in
the post?

~~~
rahimnathwani
What do you mean by the 'maximum number of attributes that matched'? How is it
different from the number of conditions that are satisfied?

------
jnwrd
I wonder if Quantum Annealing would also be applicable.

~~~
leni536
I think in theory it is possible, by unary-encoding the attributes of the five
houses and then defining the appropriate weights according to the constraints.

In practice it might not be possible to map the resulting graph onto the
D-Wave's architecture; I can't find any good documentation on the D-Wave,
though.
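For the encoding step, here is a sketch of how one one-hot ("unary") constraint becomes QUBO weights (variable names and helpers are mine, and this says nothing about whether the full puzzle graph embeds on the hardware): the penalty (Σx − 1)² expands to a −1 coefficient on each variable and +2 on each pair, plus a constant offset of 1.

```python
def one_hot_qubo(variables):
    """QUBO coefficients for the penalty (sum(x) - 1)^2, with the +1 offset dropped."""
    Q = {(v, v): -1.0 for v in variables}
    for i, a in enumerate(variables):
        for b in variables[i + 1:]:
            Q[(a, b)] = 2.0
    return Q

def energy(Q, assignment, offset=1.0):
    """Evaluate the penalty; it is 0 exactly when one variable is set."""
    return offset + sum(w * assignment[a] * assignment[b] for (a, b), w in Q.items())

# "house 1 has exactly one color" as a one-hot constraint
colors = ["red", "green", "blue", "ivory", "yellow"]
Q = one_hot_qubo([("house1", c) for c in colors])

exactly_one = {("house1", c): int(c == "red") for c in colors}
none = {("house1", c): 0 for c in colors}
two = {("house1", c): int(c in ("red", "blue")) for c in colors}

print(energy(Q, exactly_one))  # 0.0 -- the constraint is satisfied
print(energy(Q, none))         # 1.0 -- no color chosen is penalized
print(energy(Q, two))          # 1.0 -- two colors chosen is penalized
```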

