

The Future of Python HTTP - abraham
http://kennethreitz.com/the-future-of-python-http.html

======
RegEx
Kenneth has put a lot of work into Requests, and it's really turning into
something beautiful.

On a side note, Kenneth is a standup guy who helped propel me into the world
of open source. I made a few slight documentation updates on Requests as part
of my first pull request _ever_, and he made me feel like my contribution was
so important.

~~~
kami8845
This is something I noticed as well, someone corrected a 1-char typo in one of
his projects' documentation and he quickly merged and replied with "Thanks :)"

------
zeeg
I think requests and werkzeug are great, but I'm not entirely sure what
problems this proposal addresses.

First, it will be extremely hard to get Django to replace a large chunk of
its core with an external library. Not only because they don't rely on
external dependencies, but because there's no obvious benefit.

For example, why would you replace the Django test client with something
resembling requests? There's no need to go through the entire request process
cycle just to test a small chunk of code. It makes them less efficient, less
accurate, and more difficult to understand when something goes wrong.

Second, things like "cachecore" are not synonymous with "http caching", and
are extremely confusing names. I think that "uricore" is a good idea, but it's
another non-obvious name. It's not a "core" at all, but rather an improved
version of urlparse.
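
For context, here's roughly what the stdlib urlparse already gives you today
(shown with the Python 3 `urllib.parse` names; the example URL is made up), so
"an improved version of urlparse" would presumably build on something like:

```python
from urllib.parse import urlsplit, urlunsplit

# Break a URL into its named components.
parts = urlsplit("http://google.com:80/index.html?q=python#top")
print(parts.scheme)   # http
print(parts.netloc)   # google.com:80
print(parts.path)     # /index.html

# SplitResult is a namedtuple, so a modified copy is one call away.
moved = parts._replace(netloc="example.org")
print(urlunsplit(moved))  # http://example.org/index.html?q=python#top
```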

I think there are a lot of valuable uses that can come from separating common
functionality out so it can be reused in various places, but I don't think
Python users are concerned about HTTP not mapping 1:1 with WSGI day-to-day. I
can definitely say I'm not, and I'd like to think I have some knowledge as to
what matters in [Python] web development.

~~~
kenneth_reitz
Don't focus too much on the notion of the Django stuff — it's just vaguely
mentioned as worth exploring at the bottom of the post.

Luckily, all of these changes will be purely non-destructive refactoring and
won't have any negative effects.

~~~
the_cat_kittles
"Luckily, all of these changes will be purely non-destructive and won't have
any negative effects"

That sounds like what economists call a "Pareto improvement", always a good
thing :)

------
tbatterii
"So, instead of taking the WebOb approach of using WSGI as the common protocol
between services, why not use HTTP itself? The rest of the world uses HTTP as
the most-common denominator after all."

I agree with the assertion that HTTP is the most common denominator.

# done from memory

    
    
    In [1]: from webob import Request

    In [2]: print Request.blank("http://google.com/index.html")
    GET /index.html HTTP/1.0
    Host: google.com:80
    
    

Looks like HTTP to me.

    
    
    In [1]: from webob import Request

    In [2]: from paste.proxy import TransparentProxy

    In [3]: print Request.blank("http://google.com").call_application(TransparentProxy())
    

Yeah, the syntax could be less verbose in WebOb, and there was discussion about
a nicer client API at one time, but it's fairly trivial to roll your own.
Though requests makes short work of common things, I think it actually
abstracts HTTP details more than WebOb does, in a way that leaves the user
less informed about what's going on in the HTTP; Basic Auth, for example. That
makes it its own protocol, just like WSGI. WSGI in my experience makes light
work of implementing HTTP: it's fairly easy to read the part of the HTTP
spec you need to handle and then make that work in WSGI. I'm not so sure
requests could make that any easier without intervention from the library
authors.
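
To make the Basic Auth point concrete: on the wire it is nothing more than a
base64-encoded header (per RFC 2617), which a library like requests builds for
you behind an `auth=('user', 'pass')` argument. A minimal stdlib sketch of
what actually gets sent (credentials here are made up):

```python
import base64

def basic_auth_header(user, password):
    # RFC 2617: the credentials are "user:password", base64-encoded,
    # prefixed with the "Basic " scheme name.
    creds = "{0}:{1}".format(user, password).encode("utf-8")
    return "Basic " + base64.b64encode(creds).decode("ascii")

print(basic_auth_header("alice", "secret"))
# Basic YWxpY2U6c2VjcmV0
```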

I think there's still a place for requests, but I don't see it becoming the
standard way to build HTTP requests and handle HTTP responses. I view it more
along the lines of a library like mechanize, only nicer. It has its place.

~~~
ianb
Thanks for noting this (speaking as the WebOb author). I should say that I
don't think WebOb is particularly contrary to the idea being put forward –
instead, WebOb was written with this kind of concept in mind.

Requests certainly adds a lot of stuff: some of it is below WSGI (e.g., Keep-
Alive – WSGI specifically avoids that level of transport), a bunch of it is
more stateful (the session stuff), and a bunch of it just felt like it was
moving around too much to put in WebOb (e.g., OAuth).

That said, there's nothing keeping a client library from adding that.
TransparentProxy for instance could instead be something that handles
pipelining and statefulness, and perhaps also acts as a request factory. WSGI
is rather helpful here _because_ it does not try to touch those parts – it
leaves things open for other tools to take control there. Also to make a
better client library you could subclass Request and add functionality that is
handy but hasn't been added to WebOb (WebTest does this for functional
testing, as an example). Auth probably fits in there. Redirect handling could
go in there somewhere too. But if you always redirect transparently then
you've made permanent redirects useless.

WebOb itself certainly isn't a client library, its scope is largely limited to
representing HTTP requests and responses. But if you are looking for HTTP-
related tools, it has a lot of them.

------
j2labs
I don't like the idea of everything using HTTP. I prefer ZMQ.

There is substantial work involved in processing all the HTTP overhead. In
addition, there is no load balancing included; it would likely be delegated to
a whole new service or machine, when ZMQ could just provide it.

Instead of relying on Werkzeug and HTTP, I can use DictShield models,
serialized to JSON or Python, and send those across ZMQ sockets. The ZMQ
sockets don't time out like HTTP does; they are instead removed when a host
goes down. PUB/SUB messaging is possible, as is round-robin routing. Lots of
patterns instantly available.

Ease in using HTTP itself is welcome, but I don't want to use it in my
infrastructure. It's a band-aid for WSGI instead of a solution.

~~~
someone13
Something else worth noting: from how I understand it, 0MQ isn't safe to use
over the internet. Safe in your infrastructure, inside your firewalls and
such, but not over the internet. So, for building, e.g., a public-facing API,
you don't really have a good choice except HTTP.

EDIT: This is incorrect, read below.

~~~
espeed
You may be referring to Zed Shaw's comment in his PyCon ZeroMQ talk -- but
that issue has been resolved since then.

~~~
someone13
Thanks, I didn't realize that. Specifically, this link mentions it:

<http://www.zeromq.org/area:faq#toc5>

Appreciate the heads-up!

------
jtchang
Thank you Kenneth for your contribution to the Python community. Never met but
I've used your code. Keep on focusing on what is "right" for the long term and
maybe someday we'll get there!

------
pbreit
Considering Python's place in web development, it's surprising how bad its
basic HTTP client capabilities are.

~~~
objectified
Agreed. That's why I use the cURL bindings combined with human_curl.

------
judofyr
I don't know much about Python, but how does Requests/WSGI solve async/long-
polling/WebSockets? It would be a shame if they re-did the HTTP stack without
solving these issues.

~~~
mr1900
Requests has support for asynchronous requests via gevent:
<http://docs.python-requests.org/en/latest/user/advanced/#asynchronous-requests>

gevent has a wsgi module, but it is very low level. I am still looking for a
clean/nice way of doing it in Python.

~~~
kenneth_reitz

        $ gunicorn -k gevent

------
po
What does _Django could potentially utilize the security features provided by
httpcore_ mean?

Would Django have to drop their own Request/Response objects and adopt the
Requests/httpcore provided ones? Was there any discussion with the Django core
devs at pycon regarding it?

~~~
jMyles
The blog post says that Paul M. was part of the discussion.

------
ak217
I think this issue stems from the fact that while WSGI was a great concept, it
hasn't been fully developed, and hasn't been maintained. We need a better set
of abstraction layers for socket and HTTP-based interfaces in Python.

And by the way, I love Kenneth's work, especially Requests. We use it in
production and it's a joy to build on top of it.

------
sho_hn
What about Python 3? Other than the async submodule (due to the missing gevent
dep), requests already supports Python 3, but Werkzeug does not. Is part of
the plan for the combined effort to change this, or will requests lose Python
3 support again? (This seems unlikely, but I thought I better ask.)

~~~
kenneth_reitz
All of these changes are targeted at Python 2.6–3.x.

A rewrite of many parts of Werkzeug is required to support Python 3. Might as
well kill two birds with one stone :)

~~~
sho_hn
Sounds good. Thanks!

Edit: As a bonus link to make this post more interesting, the other day I
caused the Python 3 version of requests to be packaged for Fedora 16 and 17:
<https://bugzilla.redhat.com/show_bug.cgi?id=807525>

~~~
kenneth_reitz
Awesome, thanks for the help! I really appreciate it :)

------
rd108
This is a great library, and Kenneth has worked really hard to improve it over
time. I love Requests.

------
sitkack
At the risk of sounding like a negative-nancy, the problem he is fixing is
the artificial impedance mismatch that was constructed ON PURPOSE by Ian
Bicking when he created WSGI.

Composability is one of the most powerful concepts we have. He wasn't
mistake-proofing middleware by making the protocols different; he was
breaking composability, and that is what is now being fixed by Kenneth et al.

Thank You! But let us not allow these mistakes to be made again.

~~~
ianb
Phillip J. Eby was the author of WSGI, not myself. But which mismatches are you
referring to? I've frequently heard these complaints, but generally they are
fairly obscure things (which are hard to resolve because we can't form a
quorum of people who actually care), and a lot of vague FUD.

------
mvanveen
The idea of proxying ZeroMQ over an HTTP layer is intriguing. I have never
used the .*MQ variants, but I understand that request/response is pretty
popular. At first glance, HTTP emulation seems like it might make a good, soft
introduction to a different paradigm, but I'd love to hear from people who've
actually used these libraries/protocols.

~~~
j2labs
It's backwards. HTTP is the weaker of the two transports, though tried and
true. Mongrel2 can be used to proxy HTTP to ZMQ nicely.

Check these slides for an overview of all the things ZMQ can do:
<http://j2labs.tumblr.com/post/5036176531/zeromq-super-sockets>

Even load balancing is included, which means you run fewer services too.

------
wildmXranat
Lovely how 5 of the *core projects on GitHub have 200+ followers while 4 of
the repos are empty. If anything, it shows there's interest in the proposed
plan. I'm curious what comes of it!

------
richurd
If it ain't Lisp, it's shit.

