
Performance Comparison - C++/Java/Python/Ruby/Jython/JRuby/Groovy (2008) - d0mine
http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-python-ruby-jython-jruby-groovy/
======
alextgordon
Of course if you write Python, Ruby, Perl and PHP code in the style of Java,
and benchmark it against idiomatic Java, then Java will come out on top. This
doesn't prove anything, other than you have a _very_ flawed benchmark.

Language implementations are optimised for the most common operations in that
language. Allowing only one implementation to take advantage of this renders
the benchmark useless.

~~~
devinj
Well, Java would still come out on top, in all likelihood. I don't think even
the idiomatic solutions in Python/Ruby/etc. would be faster unless they have
some _really_ nice JIT compilers I'm not aware of, or they changed the
algorithm. They'd sure as hell be a lot smaller, though.

The real Python code I'd use to implement the defined algorithm would be a
fifth of the size and running time: 7 lines sans whitespace, running in 16.4 µs
versus 92.7 µs under his timing scheme (which should really use min/timeit,
not an average over a custom timing solution).

    
    
        import collections
        
        def kill(size, nth):
            chain = collections.deque(reversed(xrange(size)))
            while len(chain) > 1:
                chain.rotate(nth)
                chain.popleft()
            
            return chain.pop()
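For the curious, a minimal sketch of the min/timeit approach mentioned above, using the deque solution ported to Python 3 (xrange → range); the `kill(40, 3)` arguments are illustrative, not the blog's actual parameters:

```python
import collections
import timeit

def kill(size, nth):
    # devinj's deque solution, ported to Python 3 (xrange -> range)
    chain = collections.deque(reversed(range(size)))
    while len(chain) > 1:
        chain.rotate(nth)   # bring the next victim to the front
        chain.popleft()     # and eliminate them
    return chain.pop()

# timeit disables GC while timing; taking the min over several repeats
# filters out scheduler noise better than averaging does.
timer = timeit.Timer('kill(40, 3)', globals=globals())
per_call = min(timer.repeat(repeat=5, number=1000)) / 1000
print('best: %.1f us per call' % (per_call * 1e6))
```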

~~~
algorias
How about:

    
    
        def kill(size, nth):
            lst = range(size)
            offset = nth-1
            len_ = len(lst)
            while len_ > 1:
                del lst[offset::nth]
                offset = (offset - len_)% nth
                len_ = len(lst)
            return lst[0]
    

It will reduce the list by 1/3rd each pass, so it's significantly faster at
larger sizes.

~~~
devinj
Looks great, except for the corner case of nth = 1.

I was too lazy to figure out the slicing solution myself. :)
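One way to patch that corner case (a Python 3 sketch, not necessarily how algorias would fix it): with nth == 1 the very first `del lst[0::1]` empties the list in a single pass, leaving nothing for `lst[0]`, but the survivor in that case is simply the last person. The deque version serves as a reference oracle here:

```python
from collections import deque

def kill_slicing(size, nth):
    # algorias's slice-deletion version, ported to Python 3,
    # with a guard for the nth == 1 corner case noted above
    if nth == 1:
        return size - 1   # everyone else is eliminated in order
    lst = list(range(size))
    offset = nth - 1
    n = len(lst)
    while n > 1:
        del lst[offset::nth]          # eliminate a whole pass at once
        offset = (offset - n) % nth   # where the next pass starts
        n = len(lst)
    return lst[0]

def kill_deque(size, nth):
    # devinj's deque version (Python 3 port), used as a reference
    chain = deque(reversed(range(size)))
    while len(chain) > 1:
        chain.rotate(nth)
        chain.popleft()
    return chain.pop()
```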

------
viraptor
Strange comparison. I'm not sure who would want to use the trivial doubly-
linked list implementation in Python in real code. Especially if you can do:

    
    
        def method2():
            chain = [Person(x) for x in xrange(i)]
            chain = chain[:2] + chain[4:]
    

which is 4x faster:

    
    
        timeit method1: 23.7072529793
        timeit method2: 5.78906202316

~~~
d0mine
`method2` looks buggy.

On the question "who would want to use", here's an example:

OrderedSet could be implemented using a doubly-linked list:
[http://code.activestate.com/recipes/576696-orderedset-with-w...](http://code.activestate.com/recipes/576696-orderedset-with-weakrefs/)

Here's "real code" from the stdlib (the OrderedDict implementation):
[http://svn.python.org/view/python/trunk/Lib/collections.py?v...](http://svn.python.org/view/python/trunk/Lib/collections.py?view=markup)
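The reason a doubly-linked list shows up in both of those: it gives O(1) removal of an arbitrary node once you hold a reference to it, which a flat list cannot. A toy sketch of the idea (Python 3; modeled loosely on the recipes above, not their actual code):

```python
class MiniOrdered:
    """Toy ordered set: a dict for O(1) lookup plus a circular doubly
    linked list (with a sentinel node) for O(1) ordered removal."""

    def __init__(self):
        self.map = {}
        self.root = root = []          # sentinel node: [prev, next, key]
        root[:] = [root, root, None]   # empty list points at itself

    def add(self, key):
        if key not in self.map:
            root = self.root
            last = root[0]
            # splice the new node in just before the sentinel
            last[1] = root[0] = self.map[key] = [last, root, key]

    def discard(self, key):
        node = self.map.pop(key, None)
        if node is not None:
            prev, nxt, _ = node
            prev[1] = nxt    # unlink in O(1): no shifting, no search
            nxt[0] = prev

    def __iter__(self):
        node = self.root[1]
        while node is not self.root:
            yield node[2]
            node = node[1]
```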

~~~
viraptor
> `method2` looks buggy.

That claim looks unfounded (unless you can point out what exactly looks buggy).

I'm not sure why they used a doubly-linked list, but thanks for the link - I'll
try to reimplement it with standard lists in a bit and run some bigger
benchmarks on it. Maybe collections needs some tweaking? (I'm not saying a
list is always faster, only that it is in this specific example)
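In the meantime, a sketch of the kind of comparison described above (Python 3; a plain-list variant that deletes in place, against the deque version; the sizes and repeat count are illustrative):

```python
from collections import deque
from timeit import timeit

def josephus_deque(size, nth):
    # deque variant: rotate the circle, pop the victim off the front
    chain = deque(reversed(range(size)))
    while len(chain) > 1:
        chain.rotate(nth)
        chain.popleft()
    return chain.pop()

def josephus_list(size, nth):
    # plain-list variant: track the kill position and delete in place;
    # each del is O(n), but there is no rotation cost
    people = list(range(size))
    pos = 0
    while len(people) > 1:
        pos = (pos + nth - 1) % len(people)
        del people[pos]
    return people[0]

for size in (100, 1000):
    d = timeit(lambda: josephus_deque(size, 3), number=200)
    l = timeit(lambda: josephus_list(size, 3), number=200)
    print('size=%d  deque=%.4fs  list=%.4fs' % (size, d, l))
```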

