

N-Queen Problem: Python 2.6.5 vs PyPy 1.5.0 - timf
http://aminsblog.wordpress.com/2011/05/29/n-queen-problem-python-2-6-5-vs-pypy-1-5-0/

======
reinhardt
"...all N-Queen problem comes to mere 8 lines of code"

If you are optimizing for LOC, it can come down to basically a single
expression; of course, that's not a reason to write it as a one-liner:

    
    
        import itertools

        def n_queen(n):
            return (p for p in itertools.permutations(xrange(n))
                    if all(j-i != abs(p[i] - p[j])
                           for i in xrange(n - 1)
                           for j in xrange(i + 1, n)))
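For reference, here is the same one-liner ported to Python 3 (`range` in place of `xrange`) with a quick demo; a permutation already guarantees one queen per row and per column, so the filter only has to reject shared diagonals:

```python
import itertools

def n_queen(n):
    # a permutation p places one queen per row and per column;
    # the filter rejects any pair of queens sharing a diagonal
    return (p for p in itertools.permutations(range(n))
            if all(j - i != abs(p[i] - p[j])
                   for i in range(n - 1)
                   for j in range(i + 1, n)))

print(list(n_queen(4)))  # -> [(1, 3, 0, 2), (2, 0, 3, 1)]
```

The two 4x4 solutions it finds are mirror images of each other.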

~~~
beagle3
If you are optimizing for LOC, use a suitable language:

from <http://nsl.com/papers/qn.htm> , the code that computes the solution is

    
    
      qn:{[n],/{:[n=#*x;,*x;,/_f'f x,\:/:(!n)_dvl,/x]}'(f:0 -1 1+/:)@!n}
    

And it uses backtracking, which makes it vastly faster.

But armin isn't optimizing LOC.

------
ggchappell
This is actually a rather poor way to solve n-queens. Consider those boards
that have a queen in the first space of the first row and the second space of
the second row. These two queens can attack each other, so no such board is a
solution. However, this method will still go through every such board. On the
other hand, a backtracking style solution, which builds up the board row by
row, will backtrack after the second queen, thus skipping all these boards.
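One rough way to see the size of that pruning effect is to count the positions each approach examines. A sketch (Python 3; `count_nodes` is a hypothetical helper, counting every non-attacked partial placement the backtracker touches, which for n = 8 is far below the 8! = 40320 full boards the permutation filter must test):

```python
from math import factorial

def count_nodes(n):
    """Count partial placements explored by row-by-row backtracking."""
    nodes = 0
    def extend(p):
        nonlocal nodes
        for col in range(n):
            # prune immediately if the new queen is attacked
            if all(col != c and len(p) - r != abs(col - c)
                   for r, c in enumerate(p)):
                nodes += 1
                extend(p + [col])
    extend([])
    return nodes

print(count_nodes(8), "nodes vs", factorial(8), "full boards")
```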

~~~
candeira
True, it's inefficient, but it's good for profiling Python interpreters, since
it does a lot of work.

It's also fast enough for low n (up to 8). Why write more code when you can
get your result with less? Maybe the time he's saved with this approach can
now be used writing a deduplicator (to cull out mirrored and rotated
solutions).

Maybe the point of the article was not only to show how fast PyPy is, but how
convenient itertools.permutations() is. You can't show the convenience of
itertools by not using it.

However... You know what? When I was asked to write an n-queens solver
recently, I also wrote a backtracker. That just doesn't mean I did it better
than this guy.

~~~
ggchappell
I know the point of this article is not to give the fastest n-queens
algorithm. But what the heck; here is an iterative backtracking solution. On
my machine, this code finds all solutions for the 10x10 board in about 1.2
sec, while the code from the article takes just under 18 sec. Using CPython 2.6.5,
just like the article. Have not tried PyPy. Output is essentially the same
format, except that my generator spits out lists instead of tuples.

Yes, it's twice as long and harder to read. Improvements, anyone?

    
    
      def hit_last(p):
          x = p[-1]
          i = len(p)-1
          for j in range(i):
              y = p[j]
              if x == y or i - j == abs(x - y):
                  return True
          return False
    
      def n_queen(n):
          p = [0]
          while True:  # Begin iter w/ board good, except maybe last Q
              full = len(p) == n
              hit = hit_last(p)
              if full or hit:  # Will we backtrack?
                  if not hit:  # Found solution?
                yield p[:]  # copy, since p is mutated as the search continues
                  while len(p) > 0 and p[-1] == n-1:
                      p.pop()
                  if len(p) == 0:
                      return
                  p[-1] += 1
              else:
                  p.append(0)
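For comparison, here is a recursive version of the same row-by-row backtracking idea (a sketch in Python 3; `n_queen_rec` is my name, not from the article). The solution counts 1, 0, 0, 2, 10, 4, 40, 92 for n = 1..8 are the well-known values, which makes a handy sanity check:

```python
def n_queen_rec(n):
    """Yield each solution as a list p, where p[i] is the column
    of the queen in row i. Prunes as soon as two queens clash."""
    def safe(p, col):
        row = len(p)
        return all(col != c and row - r != abs(col - c)
                   for r, c in enumerate(p))
    def extend(p):
        if len(p) == n:
            yield p[:]          # copy: p is reused while backtracking
            return
        for col in range(n):
            if safe(p, col):
                p.append(col)
                yield from extend(p)
                p.pop()
    return extend([])
```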

~~~
kingkilr
Don't worry, PyPy beasts this as well, at ``n_queen(12)`` on my machine:

    
    
        alex@alex-gaynor-laptop:/tmp$ time python queens.py 
    
        real    0m16.837s
        user    0m16.820s
        sys 0m0.000s
        alex@alex-gaynor-laptop:/tmp$ time pypy queens.py 
    
        real    0m2.548s
        user    0m2.530s
        sys 0m0.010s

------
overgard
PyPy is very fast, but to me what's far more significant about PyPy is that,
more than being just an interpreter, it's a toolchain for writing other
interpreters that automatically generates a JIT compiler for you. I'd really
be interested to see what sort of performance you could get writing
interpreters for other languages in RPython.

------
dzorz
I think this is an example where C++ really shines. The code is not much
longer (44 vs 33 lines, although the output in the Python version is much
nicer) than the Python code, and on my computer it is about 140x faster than
Python 2.7.1.

~~~
reinhardt
Using a library for computing the permutations I guess? I haven't touched C++
for years but I'd be surprised if the 44 lines include a permutations
implementation.

~~~
dzorz
next_permutation is in the standard library (header <algorithm>).
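For readers who don't know the algorithm behind it, the same in-place lexicographic-successor step can be sketched in Python (my own port, mirroring the semantics of C++ `std::next_permutation`: it returns False and leaves the sequence sorted ascending after the last permutation):

```python
def next_permutation(a):
    """Rearrange list a into its lexicographic successor, in place."""
    # find the rightmost i with a[i] < a[i+1]
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        a.reverse()       # a was the last permutation: wrap around
        return False
    # find the rightmost j > i with a[j] > a[i], swap, reverse the tail
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])
    return True

a = [0, 1, 2]
next_permutation(a)       # a becomes [0, 2, 1]
```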

~~~
makmanalp
For the curious, STL <algorithm> is one of those things that you wish you knew
ages ago. Usually my code is much cleaner after I run through it and replace
relevant parts of it with STL stuff.

<http://www.cplusplus.com/reference/algorithm/>

------
beza1e1
The most interesting observation from my point of view is that the author sped
up PyPy with a simple manual optimization. This means that PyPy still has even
more potential for optimization.

------
orborde
Why is there so much CPU time being spent in system? And why is the rest
divided between User and Nice in a single-threaded process?

