
Summing up – take a break - ColinWright
http://www.solipsys.co.uk/Writings/SummingUp.html?HN_20140629
======
xtrumanx
It's refreshing to see a site with absolutely no styling.

Every once in a while I think about blogging about something but then lose
interest while investigating different blogging platforms. I should just fire
up an editor, write some HTML, and scp it to my one-page personal website.

~~~
ColinWright
Would you be interested in a service that hosts single pages of text with
minimal images, no styling, and no fuss? Would that be of value to people, do
you think?

~~~
richthegeek
Github gists, pastie.org... there's plenty of free services out there already
depending on what people need!

Aside from that, sites like Medium support very little actual styling beyond
the content images.

------
thegeomaster
I don't get it. Isn't it obvious that the last routine would be the fastest?
What is the point of the other algos?

~~~
ColinWright
_Added in edit: Why do you think the last version must be faster?_

In short, if you implement these things on multi-core machines, then no, it
isn't obvious that the last algorithm will be the fastest. With modern
experimental parallelizing compilers on suitable hardware, the exact algorithm
matters, and these show significantly different results.

I spend most of my time working on "odd" hardware configurations, and as a
result, sometimes these more "interesting" algorithms are the ones I
immediately turn to rather than doing what others think of as instantly
obvious.
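
As a purely illustrative sketch (the name and structure are mine, not from
the article): a pairwise, divide-and-conquer sum looks like pointless extra
work on one core, but its two halves are independent, which is exactly the
kind of structure a parallelizing compiler or a thread pool can exploit:

```python
# Hypothetical sketch: divide-and-conquer summation. On a single core this
# is just a slower sum(), but the two recursive halves share no data
# dependency, so they could be evaluated concurrently on parallel hardware.

def pairwise_sum(xs, lo=0, hi=None):
    if hi is None:
        hi = len(xs)
    n = hi - lo
    if n == 0:
        return 0
    if n == 1:
        return xs[lo]
    mid = lo + n // 2
    # The two recursive calls below are independent of each other.
    return pairwise_sum(xs, lo, mid) + pairwise_sum(xs, mid, hi)
```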

~~~
srean
That sure surprised me; I would have rooted for _reduce_. Maybe the _lambda_
overhead is screwing it up. Could you try the reduction with

    
    
        from operator import add
    

I cannot try it out on Colin's version of Python, however on

    
    
        Python 2.7.6 | 32-bit |
        [GCC 4.1.2 20080704]
    
        from operator import add
    
        def sum1(l): return reduce(lambda a,b:a+b,l)
         
        def sum2(l): return reduce(add,l)
    

The times to compute _sum1(l)_ and _sum2(l)_ on xrange(3333333) are

    
    
        11.69s and 
        6.38s

respectively.

So sum2 is about twice as fast, which is expected. The timings are the minimum
of 5 runs. Given that Colin's reduce was about twice as slow as his for loop,
there is a fair chance that the updated reduce will beat the for loop
occasionally.
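
For what it's worth, the comparison can be reproduced on Python 3 (where
reduce moved to functools); this is a sketch with a scaled-down input, and
the absolute timings will of course vary by machine:

```python
from functools import reduce  # Python 3: reduce lives in functools
from operator import add
import timeit

def sum1(l):
    return reduce(lambda a, b: a + b, l)  # lambda adds a Python call per element

def sum2(l):
    return reduce(add, l)                 # operator.add avoids that overhead

data = range(333333)  # scaled down from the thread's 3333333
t1 = min(timeit.repeat(lambda: sum1(data), number=1, repeat=5))
t2 = min(timeit.repeat(lambda: sum2(data), number=1, repeat=5))
print("sum1: %.3fs  sum2: %.3fs" % (t1, t2))
```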

I was surprised that trying the same on an _array_ turned out to be marginally
slower than using an _xrange_. I would have expected the array to be faster.
Perhaps xrange wins because its tiny state stays in cache, whereas the array
would definitely spill out of it, although I had expected locality to be a win.
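
To check the array-vs-xrange observation, here is a sketch (on Python 3, so
range plays the role of xrange and reduce comes from functools); whether the
array wins will depend on the machine and its cache sizes:

```python
from array import array
from functools import reduce  # Python 3; reduce was a builtin in Python 2
from operator import add
import timeit

N = 333333                    # scaled down from the thread's 3333333
arr = array('i', range(N))    # contiguous buffer of C ints
rng = range(N)                # lazy sequence, O(1) memory

t_arr = min(timeit.repeat(lambda: reduce(add, arr), number=1, repeat=5))
t_rng = min(timeit.repeat(lambda: reduce(add, rng), number=1, repeat=5))
print("array: %.3fs  range: %.3fs" % (t_arr, t_rng))
```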

------
murkle
(slightly OT, motivated by the first paragraph) Here's some code to
approximate images by (random) Voronoi diagrams. It might be interesting to
add a genetic algorithm to that.

[https://github.com/amedeedaboville/voronoify](https://github.com/amedeedaboville/voronoify)

