
Million requests per second with Python - d_theorist
https://medium.com/@squeaky_pl/million-requests-per-second-with-python-95c137af319#.3eiek12me
======
Anderkent
>HTTP pipelining is crucial here since it’s one of the optimizations that
Japronto takes into account when executing requests.

[https://en.wikipedia.org/wiki/HTTP_pipelining](https://en.wikipedia.org/wiki/HTTP_pipelining)

> Of all the major browsers, only Opera based on Presto layout engine had a
> fully working implementation that was enabled by default. In all other
> browsers HTTP pipelining is disabled or not implemented.[3]

>Internet Explorer 8 does not pipeline requests, due to concerns regarding
buggy proxies and head-of-line blocking.[7]

>Internet Explorer 11 does not support pipelining. [8]

>Mozilla browsers (such as Mozilla Firefox, SeaMonkey and Camino) support
pipelining; however, it is disabled by default.[9][10] Pipelining is disabled
by default to avoid issues with misbehaving servers.[11] When pipelining is
enabled, Mozilla browsers use some heuristics, especially to turn pipelining
off for older IIS servers.[12]

>Konqueror 2.0 supports pipelining, but it's disabled by default.[citation
needed]

>Google Chrome previously supported pipelining, but it has been disabled due
to bugs and problems with poorly behaving servers.[13]

That seems like optimizing for a benchmark.

~~~
dorianm
Would it make sense to optimize for HTTP/2 then?

~~~
KAdot
HTTP/2 is a big step forward, but I wish web servers and HTTP clients
supported HTTP/2 over plaintext TCP (h2c); TLS encryption is useless for
internal microservices.

~~~
mattnewton
Isn't that thinking what got google tapped by the NSA in those infamous prism
slides? (Honest question, I don't know if there are other solutions)

~~~
hedora
Yes, and it wasn't just Google.

Client or even frontend https wouldn't have solved the problem in most cases
(it is necessary but not sufficient).

The database (or key value store, etc) replication stream is a prime target,
as are any backend protocols that let edge data center storage proxy internal
requests for core data centers.

------
floatboth
The project is very interesting but the benchmark results must be completely
wrong.

On my machine, using
[https://github.com/rakyll/hey](https://github.com/rakyll/hey) to test:

1 concurrent request: node.js does ~8000 req/second, go does ~9000, japronto
does ~10000

10 concurrent requests: node.js does ~24000 req/second, go does ~45000,
japronto does ~55000

Nowhere near a million. That's without pipelining though. Can pipelining
really add SO MUCH performance?

 _UPDATE_ : tried wrk with pipelining. japronto does 700000, golang 150000,
node 35000. Holy shit, pipelining is epic.

But the benchmark is not fair when only one server supports pipelining.

~~~
nhumrich
Yes, because the real bottleneck is TCP. Pipelining reduces the number of
packets sent. However, very few clients actually support pipelining, so it's
almost useless.
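
For anyone unfamiliar with the mechanic, a pipelining client just writes
several requests back-to-back on one connection before reading any responses,
so requests share packets and round trips. A minimal sketch using only
Python's standard library (a hypothetical toy server, not Japronto):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive: one connection serves many requests

    def do_GET(self):
        body = b"Hello world!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 3
request = f"GET / HTTP/1.1\r\nHost: localhost:{port}\r\n\r\n".encode()
with socket.create_connection(("127.0.0.1", port)) as sock:
    # Pipelined: all N requests are sent before reading anything back.
    sock.sendall(request * N)
    data = b""
    while data.count(b"Hello world!") < N:
        data += sock.recv(4096)

print(data.count(b"HTTP/1.1 200"))  # -> 3
server.shutdown()
```

wrk's pipelined benchmark does the same thing across many connections; the
win comes from amortizing syscalls and packets over many requests, which a
browser sending one request at a time never sees.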

~~~
cuu508
Also, it's measuring the number of static responses to _a single client_. I
guess there probably is a use case for this, but a more typical situation is
many clients making one or a few requests each, and also doing the TLS
handshake.

------
Mayzie
ITT people complaining that the underlying library is written in C.

Who cares? The author wrote the C code and Python bindings for his library so
we can _all_ benefit from handling more requests per second just by writing
Python code and using his framework.

I appreciate the work and will be exploring its potential use. Thank you
squeaky_pl for writing it.

~~~
jsd1982
It matters because all the heavy lifting is done by C and has little if
anything to do with Python itself.

You could give the same treatment to virtually any other host scripting
language (Ruby, Lua, PHP) and then make outlandish claims that that host
language is now capable of ridiculous things when it had nothing to do with
the alleged success other than that it exposes a FFI that lets you create
something like this.

~~~
SEJeff
Your same argument can be applied to virtually any other language:

This ruby benchmark doesn't matter because ruby is written in C!

This nodejs benchmark doesn't matter because v8 is written in C/C++!

etc.

~~~
mattnewton
I think his argument is that ymmv as you add more python code to this native-
heavy benchmark.

~~~
throwawayish
ITT: Web developers being surprised by the meaninglessness of microbenchmarks.

~~~
jsd1982
No, we're not surprised by microbenchmarks. We're surprised by the amount of
investment in trying to improve the foundations of toppling-over buildings
when that investment could be better spent elsewhere.

------
ecopoesis
A million requests per second in Python, if you write all the code that does
work in C.

~~~
botverse
You use it in Python. OP writes the C code for you.

~~~
pg314
As soon as you do anything other than return 'hello, world' in Python, your
performance will drop dramatically. Just parsing a tiny JSON string a million
times in Python takes 7s (i5-2467M).

    
    
      $ time python -c "import json;
      for i in range(0, 1000000):
          json.loads('{\"data\": \"hello, world\"}')"
    
      real	0m7.055s
      user	0m6.951s
      sys	0m0.063s
    

It is no use optimising one part of a system if other parts take 99% of the
time.

As it stands, the title of the post would be more accurately stated as
'million requests per second with C'.

~~~
Twirrim
json:

    
    
        $ time python -c "import json;
        for i in range(0, 1000000):
              json.loads('{\"data\": \"hello, world\"}')"
    
        real	0m3.142s
        user	0m3.098s
        sys	0m0.034s
    

ujson

    
    
        $ time python -c "import ujson as json;
        for i in range(0, 1000000):
              json.loads('{\"data\": \"hello, world\"}')"
    
        real	0m0.444s
        user	0m0.399s
        sys	0m0.028s
    

Like with many things, it's a question of being smart about your choice of
libraries.
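
The same comparison can be made in-process with `timeit`, which avoids
measuring interpreter startup (ujson is treated as optional here, since it's
a third-party package that may not be installed):

```python
import json
import timeit

DOC = '{"data": "hello, world"}'

# Candidate parsers; ujson is optional and skipped if absent.
backends = {"json": json.loads}
try:
    import ujson
    backends["ujson"] = ujson.loads
except ImportError:
    pass

for name, loads in backends.items():
    secs = timeit.timeit(lambda: loads(DOC), number=100_000)
    print(f"{name:>6}: {secs:.3f}s for 100k parses")
```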

Also, just because I was curious:

    
    
        $ time pypy -c "import json;
        for i in range(0, 1000000):
              json.loads('{\"data\": \"hello, world\"}')"
    
        real	0m0.422s
        user	0m0.386s
        sys	0m0.032s

~~~
pg314
If you use a C library and have hardly any Python code running, it obviously
goes faster. That misses the point I was making. At some point you'll have to
write some Python code, and that will lower the number of requests
dramatically. If you're talking millions of requests per second, you will have
maybe 10s of thousands of CPU cycles per request. That doesn't buy you much in
Python. Case in point: parsing a trivial JSON string in Python eats up your
budget.
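
The cycle-budget arithmetic is easy to check (assuming a hypothetical 3 GHz
core, so the exact figures are only illustrative):

```python
# Per-request CPU budget at various request rates, assuming a 3 GHz core.
CLOCK_HZ = 3e9

for rps in (1_000_000, 100_000, 10_000):
    cycles = CLOCK_HZ / rps
    print(f"{rps:>9,} req/s -> {cycles:>9,.0f} cycles per request")
```

At the headline million per second, a single core gets only a few thousand
cycles per request; a `json.loads` of even a tiny document costs on the order
of microseconds, i.e. thousands of cycles, so it alone blows the budget.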

Of course, most websites aren't dealing with anywhere near that volume of
requests, and then Python is fine.

PyPy is interesting, but last time I looked it didn't support C extensions and
hence couldn't be used with the poster's library.

------
voidlogic
>wrk with 1 thread, 100 connections and 24 simultaneous

Say what? When I do this kind of testing using wrk:

    
    
      1. I use multiple clients
      2. Each client uses a thread for every logical CPU
      3. I usually use at least 256 connections
    

What is actually being tested here? This smells like a test to get a desired
result.

Not to mention, in regards to Go/others:

    
    
      1. He is using an RC
      2. By default it uses all cores, why handicap it?
      3. He didn't share his code for his contestants, how can I trust this?

~~~
floatboth
The contestants are in the repo. One core is used "to be fair" (the Python
stuff only uses one core). But the test, yeah. What's being tested is the
presence of pipelining :D

~~~
voidlogic
>What's being tested is the presence of pipelining :D

Which is silly considering a majority of browsers in the wild now support
HTTP/2.

Thanks for pointing out the repo, I missed it before:
[https://github.com/squeaky-pl/japronto](https://github.com/squeaky-
pl/japronto)

His Go isn't horrible, but it could be written like this:

    
    
      func hello(respWtr http.ResponseWriter, req *http.Request) {
        text, status := []byte("Hello world!"), http.StatusOK
        if req.URL.Path != "/" {
          text, status = []byte("Not Found"), http.StatusNotFound
        }
        respWtr.WriteHeader(status)
        respWtr.Write(text)
      }
    

Rather than: [https://github.com/squeaky-
pl/japronto/blob/master/benchmark...](https://github.com/squeaky-
pl/japronto/blob/master/benchmarks/golang/micro.go)

Also, limiting Go to use a single core is unfair because the Go GC is
optimized for multi-core operation.

------
xyzzy123
Yeah the hard part is having customers. If you have that, PHP, 5 layers of
proxies, worst case latency, who cares? The pig can be made to fly at that
point.

"optimised for loopback!" - which is nobody's actual problem.

It's a cool thing though; I'm just cynical, I guess, because it doesn't solve
my actual problems, which are all self-inflicted and organisational. Thinking
about deleting this comment because it would logically apply to every
incremental technical advance :(

~~~
VMG
I don't think it's cynical at all. These edge-case optimizations really are
just entertainment.

The author's claim that the other http servers are deficient is misleading.

------
xorcist
This looks like a contrived benchmark to me. If that's really your use case,
use nginx instead. The http parser might be useful, but it's not very well
documented and not super readable at first glance. I do prefer not to parse
http at every layer if I can but other people seems to like it.

Along those lines, what happened to uwsgi async mode? uwsgi has been rock
solid for me, with a straightforward binary framing and hot loading, and it
would be great to use it for the (admittedly, few) cases where async might be
useful. It seems like one of those obvious good ideas that no one uses?

------
79d697i6fdif
How does this get to the front page? Has nobody seen this?
[https://www.techempower.com/benchmarks/#section=data-r13](https://www.techempower.com/benchmarks/#section=data-r13)

Much more thorough than this toy benchmark article, which reads like a
marketing release...

~~~
pekk
If you work out a way to model the performance of your own systems based on
these benchmarks, then on the way you will discover what is wrong with them.
Or switch to Urweb for all your future sites, I guess?

~~~
79d697i6fdif
The source to all the benchmarks is available:
[https://github.com/TechEmpower/FrameworkBenchmarks](https://github.com/TechEmpower/FrameworkBenchmarks)

If you want to make your chosen platform fast you can see what they're doing
in the src to get those results. The ones not marked "stripped" and with
"full" for ORM are very realistic examples. Not sure where you're going with
this? Nobody's real workload looks exactly like benchmarks.

------
kid0m4n
I checked the benchmark code out. I have asked the author for more
information. I am not sure if all the tests were limited to using only 1 core
during the run. GOMAXPROCS=1 severely limits the Go version.

1 Go worker process can utilise all the cores in the server. I just want to
get that clarified.

~~~
dom96
From the article:

> To be fair all the contestants (including Go) were running single worker
> process

That seems to imply that they were all limited to a single core.

~~~
jsd1982
A single worker process sounds like what it says it is. A single process
containing possibly many OS threads. Just because it's a single process
doesn't mean it won't use all cores. Where's the implication that process =
core? GOMAXPROCS defaults to the number of cores in the system.

~~~
floatboth
GOMAXPROCS _is_ the number of cores used.

------
NathanKP
> To be fair all the contestants (including Go) were running single worker
> process.

This makes the benchmark results much less impressive. Sure, it's cool that
this micro framework allows you to serve a million requests per second with a
single process, but no one is going to run a single process per server in
production if they value their uptime.

------
Xorlev
This is a great project. However, it's not a very useful result (1M RPS in
Python). Once you start layering more work into your handlers (JSON
deserialization, admittedly C-optimized, business logic, IO, etc.), you can be
assured that your HTTP serving layer is not the bottleneck.

------
shanemhansen
I wonder how it would compare to the fasthttp library
[https://github.com/valyala/fasthttp](https://github.com/valyala/fasthttp)

I don't suppose the hand tuned microhttp parser supports h2?

Pipelining is a dead technology that was never really alive. For those who
aren't familiar with it, it's based on an HTTP "hack": after HTTP/1.1 was
released and the end of a response could be signaled by something other than
EOF on the TCP connection, they figured you could send n requests in a row and
receive n responses in a row. This was in the days when good manners required
you to make no more than 2 simultaneous requests to a server.

Nobody ever really supported it, and the lack of support for out of order
responses meant that any one slow request could slow down everything else. The
h2 model allows at least hundreds of simultaneous requests and responses
multiplexed over one TCP stream.
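
That head-of-line blocking is easy to demonstrate with a standard-library
sketch (a hypothetical toy server, nothing from the article): a fast request
pipelined behind a slow one cannot be answered until the slow one finishes.

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive: both requests share one connection

    def do_GET(self):
        if self.path == "/slow":
            time.sleep(0.5)  # simulate one slow request
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def req(path):
    return f"GET {path} HTTP/1.1\r\nHost: localhost:{port}\r\n\r\n".encode()

with socket.create_connection(("127.0.0.1", port)) as sock:
    start = time.monotonic()
    sock.sendall(req("/slow") + req("/fast"))  # both requests leave immediately
    data = b""
    while b"/fast" not in data:  # wait for /fast's response body
        data += sock.recv(4096)
    elapsed = time.monotonic() - start

# /fast was ready instantly but had to queue behind /slow's response.
print(f"/fast answered after {elapsed:.2f}s")
server.shutdown()
```

HTTP/2 avoids this by tagging each frame with a stream ID, so responses can
be interleaved and returned out of order.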

------
molind
There is no pure Python out there. It's Python with bindings to highly
optimized C code. It would be fairer to say that C gives you almost a million
RPS in some tweaked tests.

------
sandGorgon
Can this be retrofitted into flask (just the performance improvements) ?

I would pay for that.. a lot (as much as I can afford). And I'm pretty sure so
would a lot of others.

~~~
brilliantcode
I'd also pay for this to happen.

------
knivets
I think one of the main points of writing a framework in Python or another
high-level language is the ease of extension in the future by the community.

~~~
pekk
Where are the "ease of extension" benchmarks?

What kind of data do people look at when they want to prove their language is
better, or choose the best language?

~~~
throwawayish
Evidently there's at least one group of developers who mainly derive their
decisions from irrelevant and often incorrectly conducted micro-benchmarks.

------
edsiper2
HTTP pipelining should not be the basis for a performance benchmark. It's an
unrealistic scenario.

To make the benchmark a bit fairer, add a graph of memory usage; pipelining
might have a huge impact on resource usage on the server side.

------
mmgutz
When something seems too good to be true, it probably is. Even if it were
written in assembly these would be dubious results. The graph alone tells me
the quality of the claims.

------
nerdponx
Sorta unrelated, but the author made an interesting claim that PyPy will reach
3.5 "conformance" in 2017 in the context of talking about NumPy. Does that
mean PyPy is expected to work seamlessly with C extensions like those used in
SciPy or generated by Cython?

------
tuyguntn
Django needs this kind of performance boost, without depending on
gunicorn/uwsgi servers

~~~
xyzzy123
Nothing will save Django for free, because if you needed that, your most
likely real problem is scaling Postgres or MySQL.

Use as many appservers as your customer base will pay you for.

Basically, cry in your beer because you're so rich and successful now that the
framework won't handle your stuff.

~~~
scriptkiddy
Agreed. Scaling Django isn't difficult really. Just spin up another instance
if you need to. The major problem is scaling the DB. Making sure you're not
bogging Postgres down by making it have to deal with possible race conditions
is difficult.

------
znpy
A million requests per second with Python.

Spoiler: technically, it's not "with Python".

------
danielovichdk
When did anyone start to believe that languages scale?

~~~
burnbabyburn
what do you mean?

~~~
GrayShade
It might be a variant of the old argument that "languages aren't slow,
implementations are", meaning that scalability is a property of the language,
not the implementation.

In any case, this is written in C so I'm not sure it proves anything about the
Python language.

~~~
pekk
It says something about the performance that is available from the Python
language. This is how Python has always been, starting from the interpreter
and built-in data structures in CPython, which are written in C. That isn't
cheating, that's normal, realistic usage of the tool. Why should benchmarks go
out of their way to use only unrealistic Python programs just to prove that it
cannot do "x += 1" as fast as C? Who is surprised by that, and why would it
matter? Node uses libuv and nobody blinks an eye.

------
owaislone
I love Python and use it as the main tool at my day job but I have to call it
out.

> It lets you do synchronous and asynchronous programming with asyncio and
> it’s shamelessly fast. Even faster than NodeJS and Go.

and then,

> To be fair all the contestants (including Go) were running single worker
> process.

What is the point of such a benchmark? It is completely useless. You are
deliberately crippling other tech before comparing your tech with them.

What am I missing here HN?

~~~
throwawayish
While I certainly don't want to give this sort of "benchmarking" any
credibility _at all_:

allocating significantly different compute resources for different samples
would be at least as laughable (e.g. giving Go a dozen cores [per its
defaults] but constraining Python to one worker [per its defaults]).

------
jchook
Just want to say that this post was inspiring to me. Thanks for writing it in
such detail!

Sounds like you have a long and fun road ahead of you with this project.

------
mvindahl
Trying to grasp the concepts here. But if I understand pipelining correctly,
it's a great boost for a single browser making a million requests but won't
really help you in dealing with a million browsers firing one request each.

If that is correctly understood then the benchmark is highly artificial.

But yeah, feel free to correct me if I got it wrong.

------
tomc1985
I was impressed until he revealed most of Japronto is C... so much for
learning some nifty new Python technique :/

------
rlarroque
Have you tried load testing it with Gatling
([http://gatling.io/](http://gatling.io/)), which seems to provide more
accurate results, including potential hiccups you can't see with other load
testing frameworks?

------
red2awn
Who cares about Hello world benchmark?

------
mixmastamyk
Great work. The name looks to be Brazilian PT for "already ready," aka now!

------
gjvc
this is a ruse. all the code which does the work in this is in C.

~~~
Spiritus
So? The API is still Python. How do you think numpy got so popular?

~~~
CorvusCrypto
I think the difference here is that numpy is not flaunting its speed as a
benefit of using python. They are very open that they were bringing the
performance of BLAS and LAPACK to python.

------
zepolen
Couldn't get this to work on osx, initially got an error from strip in
src/picohttpparser/build - which was fixed by running `strip -x
libpicohttpparser.so` which got `python setup.py develop` to work.

But then running the demo app resulted in:

    
    
      ImportError: dlopen(/tmp/japronto/src/japronto/protocol/cprotocol.cpython-35m-darwin.so, 2): Library not loaded: libpicohttpparser.so
        Referenced from: /tmp/japronto/src/japronto/protocol/cprotocol.cpython-35m-darwin.so
        Reason: image not found
    

Didn't bother after that

~~~
solmich
Exactly, so you need to manually copy src/picohttpparser/libpicohttpparser.so
to /usr/local/lib

~~~
xorcist
Doesn't LD_LIBRARY_PATH work on a Mac?

~~~
floatboth
It's DYLD_LIBRARY_PATH or something on macOS.

------
jjawssd
It's worth considering Python asyncio as well for this type of work
[https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-
pytho...](https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-python/)

------
myf01d
The current state of Python's performance is really pathetic. Thanks for the
project; we need to make Python great again, otherwise the future belongs to
Go, Elixir and Rust.

~~~
sametmax
People are obsessed with performance, while my experience is that the vast
majority of projects could run on something 10 times slower than Python.

Actually, a friend of mine runs a streaming website. He banks 15k€ a month
for 400k daily users.

Python perf has zero impact on his project. Python's ease of use and
maintainability mean that he, a terrible (really terrible) programmer, has
been able to keep the site up and running for 5 years.

I worked in Africa for a while. Need for Python perf? Zero. Need for code
that is understandable and easy to write? A LOT.

Then I moved to geographers working on GIS. Need for Python perf? Hell, the
GIS engine does the heavy work; they don't care. But they don't understand
jack about programming and need things done. Python lets them do it cleanly.

Then I moved to embedded programmers. Hardcore C coders who are using Python
to test their boards. Need for Python perf? They don't even know what you are
talking about. For them, perf means assembly. They want something that is fast
to write, with 10000 libs available.

As a trainer and dev, I teach Python on a monthly basis. JS training? Jeez,
where do I begin? If you let students do anything more than a one-file thing,
they are lost. Python? They are project-ready in no time.

Data mongers ? They have numpy and databases.

Now myself? I code in Python all the time. I've seen many slow sites. It's
always either because of the DB queries or the static assets. Never because of
the server-side language, be it Python, JS, Ruby or PHP.

Coding in Go, Elixir or Rust for tasks where my productivity is more
important than the machine output makes no sense. Especially in a world where
an unmetered VPS costs 3€/month
([https://www.ovh.com/us/vps/](https://www.ovh.com/us/vps/)) with 2 GB RAM and
10 GB SSD. I can just take 100 of those and build a freaking skynet without
making my company accountant blink.

We are not Google or Facebook (and they do use Python BTW; FB just announced
they are migrating to 3 massively).

So yes, more perf is nice. I would literally pay to help get better perf
because it's always a good thing. But stop pretending you need it so badly, as
if you were magically among the 0.1% that actually does, like in all the forum
posts where people claim they are.

It's a nice to have.

If you want to switch to something else, just admit you are following a
trend. Devs always are. They love what's new and shiny. Nothing wrong with
that. But those "Python is slow, I'm out" posts are ridiculous. Dropbox can
say that. Bank of America can say that.

~~~
christophilus
Yeah. I mostly agree with this. Developer productivity is the best thing to
optimize for. These are the most important metrics for that, in my opinion:
Code readability, good editor w/ intellisense, fast test suite, solid library
ecosystem.

It would be interesting to see an analysis of developer productivity across
languages, to find out which language provides the best overall value for new
and existing codebases.

~~~
vgy7ujm
Perl would be my bet.

~~~
sametmax
Perl fails on big code bases, particularly because the cognitive load of
reading it is high.

~~~
vgy7ujm
I'm not really buying that argument. I have seen Python code that is
unreadable, actually a lot worse than 90s PHP/Perl. Please take a look at
modern Perl.

~~~
sametmax
Perl 6 is better, but it still offloads a lot of the logic parsing to the
human brain.

Create an array of strings, take the last element and display it:

<A B C>[* - 1].say;

The same in Python:

print(['A', 'B', 'C'][-1])

The same in Ruby:

print ['A', 'B', 'C'][-1]

The first version is a good example of what Perl is good at: it's short to
write. The Python version is more verbose.

But, your brain has to do the following:

- Scan the elements to realise A, B and C are strings. Perl can have strings
with quotes ("A"), but here you can write them without. So when you read, you
need context to know what's going on. The Python version makes it clear
they're strings.

- Then figure out that the separation is on the spaces. Python puts commas,
so you don't have to parse whitespace in your head.

- Then you have the "*", which you need to interpret to get context on what
element you use. Then you get the ";", which is just noise.

Small elements like those add up very, very quickly in a code base. Every time
your brain has to gather context before understanding what it reads, you add
cognitive load. You can't scan the code.

PHP, Perl and JS by nature have a higher cost than Ruby or Python. The latter
have been designed with readability in mind. You have fewer tricks; they are
more boring. And easier to read.

Of course you have many other small issues with Perl, like figuring out why
you use .say but .WHAT in uppercase; why the shell on some Linux setups has a
broken history by default; why you can leave out the .say most of the time in
the shell, but sometimes if you don't nothing is displayed; or that you have
to install the rakudo package on Ubuntu but run the perl6 command to start the
shell, etc.

All in all, Perl is a neat technical achievement, but has not been very
ergonomics-oriented.

~~~
vgy7ujm
Is Russian the hardest language to read for someone that doesn't know Russian?

C type brackets and semicolons are second nature to a lot of people.

Perl has a lot of built-in stuff that makes a lot of sense when you have
learned it.

Your example in Perl 5:

    
    
      print qw/A B C/[-1];
    

I instantly know that qw has to do with a list of words, and the qw// can be
replaced with qw(), qw[], qw##, etc to make the code the most readable to you.

Can be written like this if you like the commas and quotation marks:

    
    
      print qw/'A', 'B', 'C'/[-1]; #Will warn
      print(('A', 'B', 'C')[-1]);
    

'say' instead of 'print' adds a newline:

    
    
      say qw/A B C/[-1];
    

Print the three first elements:

    
    
      print qw/A B C D/[0..2];
    

Assign elements to variables:

    
    
      my ($a, $b, $c) = qw/A B C/;
    

Print the number of elements and assign it to a variable at the same time:

    
    
      print my $num =()= qw/A B C/;
    

You CAN very easily write "boring" readable Perl if you want to. But the power
is there when needed. One letter variables, insane uncommented regexes,
removing all whitespace etc is generally not how modern Perl programmers code.

Take a look at some Python code written by someone less disciplined (or
intentionally obfuscated). It is as unreadable as (or worse than) the
obfuscated C and Perl of the late 90s.

~~~
sametmax
Of course you can get used to anything. But even if you get used to running,
walking is always easier.

Using so much context to convey meaning implies a lot of processing even if
you are trained for it, so you carry an additional load. Using so many symbols
to convey meaning as well.

By design, Perl is harder to read if you write it the way it's intended to be
used.

Just like Russian is harder to read than English, because the language has
concepts like letters affecting other letters, which imply your brain needs to
backtrack.

Eventually, if you become a master at it, you won't feel it. But you can only
master so many things in life. For most things, at best, we become very good.
But very good means you still use your brain's CPU. There is no way around
it.

That's why language design is important, and to me, Perl failed this big time.

It's not just about habit either. Take Erlang for example. It is a very
different language. It's hard to reason about. But it's not very hard to parse
in your head because it's quite consistent and regular in the way it unfolds.

Erlang has other issues of course, and I wouldn't recommend it over Perl for
tasks that aren't highly parallelism-related.

But yes, Perl has been written to be clever. This is not a quality in language
design. Handy yes. Intelligent yes. Clever, not so much.

~~~
vgy7ujm
"Using so many symbols to convey meaning as well." Actually as a visually
oriented person I find it very helpful. And logical:

    
    
      $ - a single thing
      @ - a list of things
      % - a hash (associative array, key-value pair list)
      \ - a reference to any of the above
    
      my @fruits = qw/Apple Pear/;
      my $company = $fruits[0];
      
      # A reference can be declared directly, this is a \%
      # with a nested \@. Encodes nicely to JSON as an example.
      my $hash_ref = {
          fruits => [
              'Apple',
              'Pear'
              ]
          };
    
      my $apple = $hash_ref->{fruits}[0];
    

"By design, Perl is harder to read if you write it the way it's intended to be
used."

What way is it intended to be used? Perl 4 style mixing Perl and shell script?

Perl 5 has had (roughly) yearly releases since 1994 and is now at 5.24. Quite
a bit has changed in that time and the language has evolved and improved.

There are modern non blocking web frameworks, great database support, several
modern OO systems etc.

And Perl still works great shell script style (you can choose to code in a
clean style) and for the most parts there is great backwards compatibility.

Perl is definitely about getting the job done. Hence the slogans "make easy
things easy and hard things possible" and "there is more than one way to do
it".

For someone well versed in Perl something like idiomatic Python becomes a
prison where it is hard to get the job done because you can't freely express
yourself.

I think it's not just philosophy or language design but people are wired
differently. And Perl is a lot of fun for creative individuals.

Lastly, I am looking forward to the next few years, when the Python community
has to clean up because of its current popularity with novice programmers.
That's what Perl has been through, and that old novice code is also where most
of its undeserved reputation comes from.

------
vgy7ujm
Not so sure that I would want to use Python. For anything.

~~~
vgy7ujm
Some very petulant pythonista downvoters here. Thinking that no one should
have a different opinion than themselves. Actually that profile fits the
language... only one way... sad people.

