PEP 492 – Coroutines with async and await syntax (python.org)
234 points by 1st1 on Apr 17, 2015 | 126 comments



I am detecting a high level of "Py3K denial syndrome" in this discussion.

The greatness of Python would have been impossible without rejecting virtually all feature requests. To move the vision of Python forward, it was necessary to depart from old syntax.

Python 3 is happening. Deal with it.


From my perspective (mostly scientific computing), Python has been killed after 2.7 by its creators. Python 3 might as well be Perl 6.

I know this is just the situation in my corner, and that it looks rosier for other users - e.g. for web development, system administration, etc.

The thing is, we have decades worth of legacy code - large libraries and small config scripts - that no one is going to rewrite. All Python versions up till 2.7 were mostly backwards-compatible, and we came to depend on that. That the changes towards 3.x were breaking seems totally unnecessary (except for the string/unicode thing), which also gives it a psychological component in my opinion. Give us back the stupid print statement, and I bet this alone will massively increase adoption.

And if someone makes a patch to Python to import a Python 2 module in Python 3 (old syntax, old str/unicode, and a copy of the old stdlib - and it doesn't matter if it is 10% slower or never gets merged upstream), I'd be willing to pay $$ for it.


Considering that Numpy and Scipy are now Py3 compatible, and that there's no actual time constraint, I don't think the idea that it's rosier for others is particularly convincing. And, though neither are perfect, both 2to3 and Six exist to smooth the transition (particularly for smaller scripts and libraries).


Sounds more like a problem with technical debt than with py3k. If you have that much code and are completely unable to port it, then your engineering staff is thoroughly underpowered. And that will get you in trouble soon, even if you stay with Python 2.


Considering most academic labs that write scientific computing code have no engineering staff and most of the code is written by academics analyzing data, yes, I'd have to agree that "no engineering staff" constitutes an underpowered engineering staff.

But more importantly, keep in mind that many academics move on to a new institution every couple of years (after finishing undergrad/Ph.D./post-doc, etc.), so any code written more than 5 years ago likely has no maintainer and no one in the lab knows how it works.


> code written more than 5 years ago likely has no maintainer and no one in the lab knows how it works.

Very true, but this happens regardless of whether it was Python2.x, Python3 or MATLAB code.

The problem is that many academics writing scientific code rarely have the training/education/experience to write maintainable code (if you feel this does not apply to you, you are probably the exception, and if you have ever shared code with colleagues, you are aware how rare your skill is in academia). Also, most of it is, or started out as, quick experiments and try-outs; for that and some other (even political) reasons, there is not a lot of incentive to write beautiful, maintainable code.


At least in my opinion, using code that nobody understands is a much bigger problem than just having to port it to Python 3.

But then again, a lack of reproducibility sometimes seems to be what it takes to succeed in Academia...


That's a problem that needs to be addressed by hiring a full-time maintainer for all legacy code. For example, Clark University has an endowment of $355 million. That's three times the size of a company like Veracode. Why are universities unable to hire maintainers for all this legacy code and bring it up-to-date with current standards and best practices?


Because it's really really hard to justify.

My dad is a chemist, and I recently learnt that some Fortran code he wrote in the nineties is still being used in academia.

Try to explain to his colleagues that you need an on-site engineer to maintain python code just a decade old.

Anyhow, while I disagree with the need for a dedicated engineer to port code to python3, porting isn't much of a big deal. If your dependencies work (eg: your libraries are python3-compatible), most of your work is done by 2to3. Very little effort is needed after that.
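For instance, from memory, a run of "2to3 -w" rewrites typical Python 2 idioms like (the names here are made up)

    print "processing", filename
    if options.has_key('verbose'):
        for i in xrange(count):
            process(i)

into their Python 3 equivalents:

    print("processing", filename)
    if 'verbose' in options:
        for i in range(count):
            process(i)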

The problem up to now has been waiting for your dependencies/libraries to achieve python3-compatibility (recursively, of course). But we've already moved past that.


There's no reason why you can't use "closed" Python 2 code, or even Fortran for that matter, from Python 3. It's just a matter of interfacing.

I highly doubt that "End of life" means you can't get it to run anymore.


There's python-bond as well:

https://pypi.python.org/pypi/python-bond

> You can freely mix Python versions between hosts/interpreters (that is: you can run Python 3 code from a Python 2 host and vice-versa).


It's unlikely that print statements are the only thing that won't work in older Python 2 code; the standard lib has changed (for the better), and there are lots of other smaller changes that could stop a script from running.


Sure, that's why I say it's in part psychological. It feels like that change was made just to spite users. "It's my language and I can do what I want." Of course, that's nonsense. Still, I believe that adding back a print statement would make the transition easier for some people.

For the people that are stuck with Python 2 legacy code, we'd need something else.

Python-future [1] seems to be able to import some Python 2 modules, but I'm not sure how far it goes.

[1] http://python-future.org/translation.html#translation


Making print a statement again would mean you can't print in a lambda. That'll break a bunch of Python 3 code, which puts us right back where we started; people being upset about a new release of Python breaking their code.
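Only print-as-a-function can appear where an expression is required, e.g.:

    # valid only in Python 3, where print is an ordinary function
    log = lambda msg: print('[log]', msg)
    log('hello')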


You don't need to jettison print-as-a-function in order to have print-as-a-statement.

    >>> print "hello"
    SyntaxError: Missing parentheses in call to 'print'
If it can print a SyntaxError explaining the problem then it can print "hello".

The only snag is that a tuple would be printed slightly differently, but that is harmless.

People should be upset that a new release of Python breaks their code, because the Python developers acted unprofessionally in expecting all python code in existence to be rewritten to suit their aesthetic fetishes.


So instead of code breaking explicitly, you'd rather it breaks implicitly?

You're really foolish if you believe that's better. The print keyword isn't what is preventing 3k adoption.


Is there any PEP that makes parens around function call params optional? I tried to find one, no luck.


That would be tricky. Ruby has optional parens for function calls, but that makes it hard to get a reference to a method without calling it. We have to write "foo.method(:some_method_name)", while the Python equivalent is simply "foo.some_method_name".

You might be able to require parens for methods with no arguments, but optional parens with arguments. It might require some clever hacks in the parser, but I suspect it's possible. The bigger question is whether or not the result would be Pythonic.
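For comparison, in today's Python the parens alone distinguish a reference from a call:

    class Foo:
        def bar(self):
            return 42

    f = Foo()
    m = f.bar    # no parens: a bound method object, nothing runs
    print(m())   # parens: the call happens here, prints 42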


Maybe they could just re-introduce print x as syntactic sugar for calling a print function. Like in Matlab, where the operators (+, *, ..) are just syntactic sugar for calling functions add, mul, ..


The Python 3.x print function and 2.x print statement automatically called the __str__ method on an object to get a representation to print to screen. The result is then sent to the appropriate file handle for actual printing (which can be overridden).
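A quick illustration (the Point class is made up):

    import io

    class Point:
        def __str__(self):
            return '(1, 2)'

    buf = io.StringIO()
    print(Point(), file=buf)   # __str__ supplies the text,
    print(buf.getvalue())      # file= overrides the destination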


IPython has this as a configuration option. I only use it in the interactive interpreter (not sure if it works in scripts, but I don't think that's a good idea anyhow). I like being able to type

    len flurp
to quickly get the length of an array.


That wouldn't be adequate at this point because the signature for print is different as well.


I always found this Python 2 vs Python 3 discussion very interesting.

One does not see this much discussion regarding other languages that suffer even bigger transition problems.

Java is now at version 8, yet there are still lots of places having to deal with versions 4 and 5.

.NET is getting its 4.6/5 version in the upcoming months, yet many enterprises still have version 3.5 code bases.

Everyone is discussing the benefits of C++14, while many corporations still use pre-C++98 like code.

Yet in the Python community it is such a big deal.


With Perl 6 we saw even bigger fuss and ultimately failure.

Ruby 2.0 had similar levels of incompatibility; I think people simply didn't have those large enterprise codebases in Ruby to make a fuss about.

Java, .net and C++ go to extreme efforts for backwards compatibility, compromising their current/future versions as a result.


Afaik Ruby 1.9.3 -> 2.0 was less painful than getting from 1.8.7 to 1.9.3. See http://nerds.airbnb.com/upgrading-from-ree-187-to-ruby-193/


> Java, .net and C++ go to extreme efforts for backwards compatibility, compromising their current/future versions as a result.

True, but they still introduce breaking changes as well.

I just posted all the Java release notes in a sibling post, and could do the same for .NET and C++.

Maybe the amount of breakage isn't as big as Python 2 -> 3, but it does happen.


The breaking changes really are tiny though. You can take a random Java 7 app and build/run it with Java 8 and be >99% confident that it will work, whereas with Python 2 -> 3 I'd be 99% confident it would fail (quite possibly even a syntax error).


Code for older Java versions still compiles in Java 8. Code written for python 2.x will not run on python 3.x.


Breaking changes in Java 8

http://www.oracle.com/technetwork/java/javase/8-compatibilit...

Breaking changes in Java 7

http://www.oracle.com/technetwork/java/javase/compatibility-...

Breaking changes in Java 6

http://www.oracle.com/technetwork/java/javase/compatibility-...

Breaking changes in Java 5

http://www.oracle.com/technetwork/java/javase/compatibility-...

Breaking changes in all Java versions up to 1.4

http://www.oracle.com/technetwork/java/javase/compatibility-...

I no longer remember which version it was and don't feel like going through those lists now, but I remember one of those versions changed some JDBC interfaces which would break any application using JDBC.


Well, it kinda depends on exactly what the breaking changes are and how much code they break, don't you think? Certainly none of the languages you mention in the grandparent comment broke Hello, world. It's also a question of what share of issues can be automatically flagged, and when (at build time, or at run time?).

Python had its share of breaking changes as well over the years and there wasn't much fuss about them. Who refused to upgrade over class name(object) or say hex(-1) producing '-0x1' instead of '0xffffffff'?


> but I remember one of those versions changed some JDBC interfaces which would break any application using JDBC

lol wut? How exactly does adding a method to a JDBC interface "break any application using JDBC"?


The code will fail to compile because the implementation no longer matches the interface? Duh.

Maybe I should have written extending the JDBC classes instead.


Yeah, you should have. An application is not supposed to implement JDBC. The database driver is supposed to implement JDBC. If your application implements JDBC you're doing it wrong.

Just like nobody walks around and complains that new versions of the servlet API break any Java web application because web applications are not supposed to implement the servlet API. That's the job of the server.


Someone needs to implement those drivers...


Yeah, but not your application. If the driver specification gets updated it's not really surprising that the drivers will need to be updated. But the same is true for every specification. However your application doesn't need to be changed.


Those drivers can be application specific, as was the case in the application I had to fix. Why it was made so will be kept under the covers of the NDA.

Another use case: many developers mock JDBC by creating their own dummy drivers, especially in large companies where mocking libraries are frowned upon.


> Those drivers can be application specific,

Nope, they are database specific.

So we went from "would break any application using JDBC" to large company rolling their own database drivers for reasons that can't be disclosed (probably to protect the guilty).

> as was the case in the application I had to fix. Why it was made so will be kept under the covers of the NDA.

Standard case of a large company doing it wrong and blaming somebody else when it comes back to bite them rather than fixing it. And hiding behind an NDA. Seriously what was the expectation? That JDBC never changes? Because at that time JDBC was already at version 3.0 which is obviously the final version after which no features would ever be added again.

> Another use case: many developers mock JDBC by creating their own dummy drivers, especially in large companies where mocking libraries are frowned upon.

If you're doing it wrong then you're doing it wrong. Nobody to blame but yourself. Just because you're a large company doesn't make it right. Part of why it's wrong is that it will come back and bite you later on. And what's the problem in this case, anyway? The compiler tells you where you need to implement which methods.

To give you two other examples: interfaces for HTTP likely need to be updated when something in HTTP changes (WebSockets, HTTP/2), and servers will need to be updated to implement those changes. You don't accuse the web server's language of making a breaking change. That's just how these things work. Same for SSL/TLS features like SNI. Sure, it could be that you absolutely have to run your own web server or SSL/TLS implementation for reasons that you can't disclose because you're under an NDA. But then you expect that you'll have to maintain it and add additional features, don't you?


C++ is getting nicer and nicer. The issue is mostly that compilers aren't implementing all the nice features yet. I'm looking forward to C++14 etc. etc., I'm not looking forward to finally having to port to python3.


I wish installation were easy and that it shipped as a default language on an OS. Scientific Python distributions generally ship with 2. Trying to get all scientific packages working with 3 is still tough.


Try Continuum's Anaconda, then that's not tough at all. The most important libraries have been ported to 3 already.


> Python 3 is happening. Deal with it.

Moving a large codebase to v3 is expensive for almost no benefit. Some parts of the community have already moved on to Go, etc. Many third-party libraries are v2 only and unmaintained. Many languages that broke backwards compatibility ultimately failed (their community moved on) or split their community. Examples: Modula 2/3/Oberon4, Perl 4/5/6, Lua 5.1(LuaJIT)/5.2, Visual Basic 6/.Net, QBasic/Visual Basic 1, VBA/VBA.Net, J++/J.Net/C#, etc.

Fibers and coroutines have been key features of certain languages and APIs for decades. WinAPI16 already had Fibers (used by MS Word); there are Lua's coroutines, Go's goroutines, etc.


If you think you don't need to port, then don't port. If you don't like Python, don't use Python.


Modula-2 and Modula-3 don't have the same authors, even if the name implies otherwise.

Also Oberon had a different purpose than Modula-2.


What's your point? That is what I meant by splitting the community. And I meant Oberon 2 (not 4, which doesn't exist) - two different Oberon 2 versions exist. My point was that introducing incompatible language changes that split the community usually isn't that good, in the long run.


Those languages are not related other than sharing the Mesa and Cedar language design ideas.

In Oberon's case, you actually have Oberon, Oberon-2, Component Pascal, Active Oberon, Zonnon and Oberon-07.

It is not the same as the next version of a given language, rather different approaches to designing memory safe languages for systems programming.


Python is the future my brother, and python 3 is the future of python.


disclaimer.. i understand the strengths of 2.x, but i use and prefer 3

my standing question in the debate has always been: what is the big deal with building a robust 2to3 preprocessor?

i figure if you make a change to the spec, migrating existing code to the new syntax should be a pretty straightforward if-this-then-that transformation, something that should be a necessary part of the change, a la tests

knowing full well 'pretty straightforward' is the bane of all software endeavors, could anyone explain to me what is going on within the scope of 2to3 preprocessors?

and why such preprocessors seem an impossible boon that very few, except a handful of seemingly independent efforts, endeavor to build


It's pretty clear, with support until 2020, that python3 isn't happening...

Shame on the community for not learning the lessons of Perl. Fragmentation is far worse than an imperfect language.


I'm not feeling this fragmentation. Many people are already using both versions, and the community doesn't seem to have suffered one bit.


Unlikely; python 2 doesn't seem to be going anywhere, and the two languages can coexist.


In five years it won't be maintained anymore.


So they claim. My experience says that somebody will not make the deadline and will have to keep maintaining it.


New features like this are exactly what we need to encourage more people (including myself...) to invest time porting their projects to Python 3! I look forward to seeing this approved.


Exciting but toothless "features" like this are exactly what we need to encourage more people to invest time porting their projects to languages with proper concurrency primitives like Go, Rust or Scala... :)


Looks good. Coming from someone who used Twisted for a few years: I found deferreds messy and switched to yield-style co-routines (http://twistedmatrix.com/documents/10.2.0/api/twisted.intern...), but eventually I found those pretty verbose as well. Small demo examples always look clean and fun, but large applications at the top level will end up looking basically like this:

    y = yield f(x)
    z = yield g(y)
    w = yield h(z)
    ...
In this case it would be async/await instead of pure yields.
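A sketch of how that chain would read under the PEP (same placeholder coroutines f, g, h as above):

    async def pipeline(x):
        y = await f(x)
        z = await g(y)
        w = await h(z)
        ...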

The worst thing was having to hunt for Twisted version of libraries. "Oh you want to talk XMPP? Nah, can't use this Python library, have to find the Twisted version of it". It basically split the library ecosystem. Now presumably it will be having to look for async/await version of libraries that do IO.


> but eventually I found those pretty verbose as well...

It is definitely a bit verbose, but I decided that the clarity for the rest of the code is worth putting a yield before each function call. Also, I've found a few projects (for Tornado at least) that cut down on this boilerplate and make the yield required only at the lowest level where the async really happens. [0]

> The worst thing was having to hunt for Twisted version of libraries. "Oh you want to talk XMPP? Nah, can't use this Python library, have to find the Twisted version of it". It basically split the library ecosystem. Now presumably it will be having to look for async/await version of libraries that do IO.

I work with Tornado and this is absolutely the worst part. At least with Tornado the newest version is embracing interoperability with Python 3.4+'s native asyncio.

[0] https://github.com/virtuald/greenado


I found the same, working with AppEngine NDB in an app that talked to a dozen or so backends. It's still a lot better than the Node.js (pre-ES6) style of:

  f(x, function(y) {
    g(y, function(z) {
      h(z, function(w) {
        do_something_with_w(w);
      });
    });
  });
What would be a better syntax? The Java/C++ way of threads & hidden shared-state concurrency is a total mess; it hides all the potential yield points, so you never know when your flow of control might block or what shared state might need locking. Channels in Golang are better - they at least have some syntactic support - but the reification of the yield point into a concrete channel can end up creating a fair bit of boilerplate in the common case where you make a bunch of one-off async RPCs to remote services. Maybe Erlang has it right, where the pid is an implicit channel you can send messages to - but then you still need to pass that into any async library function, and it gives you no typing discipline. Maybe we really need something like futures where the syntax:

  y <- f(x)
  z <- g(y)
  w <- h(z)
means "wait for the promise returned by expression f(z) to resolve, returning control to the executor, and then assign the result to y."

The splitting-the-library-ecosystem thing is a big part of it too (and why ES6 won't magically fix the Node ecosystem), but that's why Python is putting async into the language & standard library itself. At least then the stdlib will support it, and there will be strong social pressure to use the same concurrency mechanism in all libraries.


> y <- f(x)

> z <- g(y)

> w <- h(z)

So you would basically want "<-" instead of "yield from"? Or is there something additional I'm missing? My main thing against this is that it runs very much against general Python principles. Think "||" vs "or", "&&" vs "and", and "test ? value : alternate" vs "value if test else alternate".


Yeah, it's just syntactic sugar. "<-" is chosen to match roughly-equivalent constructs in other languages, like Erlang, Go, or CSP. The reasoning behind it is that 'yield' is common enough to warrant special syntactic sugar - there are other precedents, like allowing dict['value'] instead of getitem(dict, value) or the @ matrix multiplication operator that was just added in Python 3.5.


You may know already, but that syntax already does that in Scala (with for/yield), and I believe also in Haskell (with do).


At first look I don't like this. "async def" feels ugly, and I am not sure this is just because of the unfamiliarity.

I don't believe coroutines/async routines will ever be practical without decorators, so you might as well keep them and not disturb one of the most basic syntax rules of Python.


> I don't believe coroutines/async routines will ever be practical without decorators

Why do you believe so?

Have you seen the implementation of asyncio.coroutine? Have you read the PEP thoroughly and saw the downsides of using a decorator/generators?


The major benefit I see to the decorator syntax is that there is not an extra distinction about functions. So currently we may have normal functions and generator functions. The latter are mostly functions which just return an iterator. Do we have to make a second distinction?

Most of this is just "feeling". But good Python makes me feel good, so I sort of expect that from new features if I am to vote them up.

I might like "codef" more than "async def", if there absolutely have to be first-class coroutines.


For those of us haven't, can you make the downsides clear? (links or text reply would be fantastic)


Please read the Rationale of the PEP.

edit: I can't explain these topics better than I did in the rationale section of the PEP. If I could, I would have explained them better in that section in the first place ;)

As for the asyncio.coroutine decorator -- it's just a very simple wrapper that makes sure the decorated object is a generator-function. If it's not, it wraps it in one.
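Stripped of the debug machinery, it boils down to roughly this (a from-memory sketch, not the exact stdlib source):

    import functools, inspect

    def coroutine(func):
        # generator functions pass through untouched
        if inspect.isgeneratorfunction(func):
            return func

        # plain functions get wrapped in a trivial generator so the
        # event loop can drive them like any other coroutine
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if inspect.isgenerator(result):
                result = yield from result
            return result

        return wrapper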

It also does some magic to enable debug features. But with some serious shortcomings (that is also explained in the PEP in great detail).

My point is: there is absolutely no other value in that decorator. There is nothing fundamental that it does; it just fixes the warts. The documentation, tooling, "easier to spot", etc. arguments are unfortunately weak.

The PEP makes coroutines in python a first-class language concept, with all the benefits you can have from it (better support in IDEs, tooling, sphinx, less questions on StackOverflow).

Disclaimer: I'm the PEP author. I'm also python core developer, and I contributed to asyncio a lot.


This is the kind of feature that will get people really interested in moving code to Python 3. Everyone said it was lacking a blockbuster feature; this could be it.


As someone still using Python 2 (due to the very large number of libraries and tools I rely on for Python 2), I tend to look at these kinds of features more with the eye of "why not implement this for the language people are actually using more often, aka Python 2, rather than trying to use it to fight an ideological battle to move people to a language people aren't, aka Python 3?". The mentality just makes me unhappy with the attempts to somehow force everyone to move, rather than suddenly excited to use Python 3.

I thereby ask: is there something about Python 3 that makes this feature extremely easier to implement or extremely easier to integrate? If not, wouldn't it be more interesting to have "more impact" by improving the lives of a larger number of people? Python 3 was maybe an interesting experiment, but given how well Ruby 1.8->1.9 went and how poorly Perl 5->6 has been, it seems like people should be learning some lessons and fighting a more winnable battle.

Maybe the people who want to encourage people to migrate to Python 3 should be spending their time not providing "killer features" but instead narrowing the gap between the two versions, not by back-porting features to Python 2.7, but by making it easier to use code written for Python 2 in Python 3. It should not have taken until 3.3 to see the u'' syntax return for compatibility with Python 2. :(

The current strategy of "try to convince an army of people to port a bunch of code from Python 2 to Python 3 while people are at the same time still often writing code for Python 2", which is what we are seeing being asked of people in other posts here over the last few days (there was a post about this for Debian) just seems like a waste of effort leading to tons of lost ground to alternative languages.


Open source developers work on what they like to do. Guido was the primary developer of asyncio, funded by dropbox, as I understand. The asyncio library depends on some Python 3 features that other developers have implemented. I believe there is some kind of 2.7 backport of asyncio. No one is stopping development but the core Python team is not interested in releasing new versions of the 2.x line.

The gap has been getting smaller. Python 3.5 will include %-style formatting for byte strings. I helped implement that feature since I feel it will make porting code easier (plus it makes some programming tasks, like network protocols, easier). The strict handling of bytes/unicode is what really trips up people from porting and there is just no good way to further smooth that path, IMHO.
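For example, something like this becomes legal again in 3.5 (PEP 461):

    # %-interpolation on bytes, handy for wire protocols
    path = b'/index.html'
    request = b'GET %s HTTP/1.1\r\n' % path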

I feel the upgrade path has been handled badly. People were wildly optimistic about how fast people could port code, and much more effort should have been spent on making the transition easier. A little too much purity instead of practicality. For example, u'foo' style strings were not accepted until Python 3.3. That just made porting more difficult for no good reason.

Given the massive amount of Python 2.x code in the world, I fully expect that version to live much longer than most people expect. There is still old Fortran and Cobol code out there for example. Businesses don't want to rewrite a working system. Someone is going to keep making releases of the 2.x branch. Maybe it will be a fork.

At this point, I'd bet money on Python 3 succeeding. The vast majority of users still use 2.7 but there is steady progress of porting. The 3.x branch has received a lot of new and useful improvements and that is where all the development is happening. The memory savings from the new string representation will help my projects a lot. As I said elsewhere, asyncio is really sweet. The core team needs to keep the improvements coming. More carrots and less sticks, IMHO.


This.

A lot of new python programmers learn python3 first, and are extremely hostile to people using python2.

It doesn't help at all. If you love python3, the best thing you can do is help make it more awesome!

Shouting at people using python2 or telling them to 'deal with it' makes you look like a jerk. Don't be a jerk, it's bad karma.


But it is not old code that is somehow stuck with Python 2. Lots of new code is written with Python 2 every day. It wouldn't surprise me if there were more lines of Python 2 than Python 3 produced today.

With many important code bases such as Ansible and Django you still need to keep a Python 2 around. (Yes, I know Django works on 3, but fewer people use it that way and performance is slightly lower.) So you are dual stacked for the foreseeable future if you choose the Python 3 route.

So the Python ecosystem is a little more complicated than "old" Python 2 code versus "new" Python 3 code. Should I put my consultant hat on I would still advise people to start new developments with Python 2, but to keep an eye out for future compatibility issues. That recommendation hasn't changed in five years.


You're probably right that more lines of Python 2 are written today than are Python 3. But I bet it's like that for virtually every latest version of a programming language today.

Python 2 will be kept around until 2020. Obviously there will still be companies using it after that date. But they have been given ample notice.

I use Django with Python 3 and it's great. Admittedly, I don't need a lot of performance though.

I really like Python 3 mainly for its more explicit treatment of Unicode.

I believe the only way for Python to remain a success is if the community adopts Python 3 and puts in the effort to migrate away from 2. Once the community is there, organizations will follow.


This is new syntax that requires at least a new minor version. It can't be back-ported to Python 2.7. You might even say that what people find so appealing about Python 2.7 these days is that its syntax will never change.

Python 3 is not Perl 6, and that's a banal comparison to make. Python 3 has had many releases, and has had most packages ported to it. Perl 6 has had no releases. Ruby 1.9 should be an example that making incompatible changes that force everyone to upgrade is possible as long as you make timely releases.

What gap do you still think there is to be narrowed? We're not on 3.2 anymore. Complaining that "it should not have taken until 3.3" doesn't matter now.

What most people are arguing is quite the opposite of you -- they're saying they could switch to Python 3, and they would if there were a killer feature that mattered to them, but just being cleaner and making the core developers happier isn't enough for them to change the way they use Python. So I say, bring on the killer features.


I forgot to mention that in my sibling comment, but I agree: the main reason 2.x is, and will continue to be, so popular is that it is so stable. You can't have it both ways (new features and stability).

Python 2.x is not going anywhere soon. The idea that it is going to become unmaintained is ridiculous. If you like it, keep using it, I say. However, maybe take a look at the "What's new in Python 3.x" documents sometime. There are a lot of nifty new features, even if none of them are "killer".


To put it another way:

> I don't want to switch to Python 3 because of the changes to the language. You should make this change to Python 2.

You can't have it both ways. If you prefer Python 2 because the language hasn't changed, then don't demand changes to Python 2, and certainly don't accuse people of "fighting ideological battles" just because they are adding new features to the latest version of the language instead of old versions.


> Python 3 has had many releases, and has had most packages ported to it.

Actually, no. Guido mentioned in the PyCon 2015 keynote last week that only ~10% of packages from PyPI have been ported. If I remember correctly there are ~50,000 packages, and only ~5,000 have been ported.


Unmaintained packages will never be ported, and that's fine. Most of the popular packages have been ported though: http://py3readiness.org/


Thanks for the link.

The fact that numpy, scipy and pandas are on green means a lot. They used to be a massive blocker. The secondary stops I've seen first hand were matplotlib and psycopg2.

A curious side-effect of this PEP might be that lack of py3-gevent could become less of an issue. At least for fresh and new projects that require co-operative concurrency. From a purely personal point of view - Oauth2, protobuf and newrelic plugin-agent are probably the biggest gaps.

Overall the list looks pretty good. I'm curious to hear what packages other heavy python users consider blockers for even considering a migration to python 3.


Regarding Oauth2 for fresh and new projects, someone suggested oauthlib as a replacement yesterday. [1]

[1] https://news.ycombinator.com/item?id=9392242


Much obliged, that ticks one item off.


Twisted's a huge blocker for me.


> Perl 6 has had no releases

http://en.wikipedia.org/wiki/Rakudo_Perl_6


According to that page, Rakudo is still a preview release in development. The Python 3.0-3.4 series have been full production releases.

Now, I know that open-source projects are sometimes quite conservative with versioning, and one project's 0.9 may be more stable than another's 2.0. (I maintain an HTML parser [1] that's still on 0.9.3 and yet is more robust and better tested than one that is on 3.8.2.) Is this actually the case with Rakudo, though? You can write real production software with Python 3.4; can you with Rakudo #86?

[1] https://github.com/google/gumbo-parser



You're really stretching the definition of "release" there.


We really already have this though, in the form of asyncio. This is basically syntactic sugar over asyncio with the added advantage of context manager support, which while great, isn't earth-shattering.

I'm all for this change, but if people were willing to look at asyncio, they'd see this is largely a syntactic tweak to that.


This looks great, but it'll be even better with utilities that let you await multiple tasks in parallel:

  a, b, c = atools.join([f(), g(), h()])
and select among multiple async tasks (e.g. to implement a timeout):

  with atools.select({"res": f(), "timeout": timeout(10)}) as tasks:
      res, which = await tasks


There is an equivalent to 'atools.join' in asyncio. And I suspect that we'll see libraries like 'atools' in the wild if the PEP is accepted. If those libraries end up being popular, we'll add one of them to the standard library!
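The equivalent I have in mind is asyncio.gather:

    import asyncio

    @asyncio.coroutine
    def f(x):
        yield from asyncio.sleep(0.1)
        return x * 2

    @asyncio.coroutine
    def main():
        # run the calls concurrently; results keep their order
        a, b, c = yield from asyncio.gather(f(1), f(2), f(3))
        print(a, b, c)   # 2 4 6

    asyncio.get_event_loop().run_until_complete(main())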

I like the idea of 'a, b = await foo(), bar()' syntax. I'll think about how we can implement it and integrate it into the PEP.

Thanks!


I read it, but I don't see the benefit of it vs yield from and coroutines. Can somebody explain?


This is explained in the Rationale and Goals[1] section of the PEP:

    it is easy to confuse coroutines with regular generators, since they share
    the same syntax; async libraries often attempt to alleviate this by using
    decorators (e.g. @asyncio.coroutine [1] );

    it is not possible to natively define a coroutine which has no yield or
    yield from statements, again requiring the use of decorators to fix
    potential refactoring issues;

    support for asynchronous calls is limited to expressions where yield is
    allowed syntactically, limiting the usefulness of syntactic features, such
    as with and for statements.
[1] https://www.python.org/dev/peps/pep-0492/#rationale-and-goal...


I just said I read it. Copying and pasting it doesn't make it easier to understand.

I just can't see why somebody would want to introduce a whole new syntax just out of dislike for a decorator and two edge cases that twisted/tornado/greenlet lived with for years. So I thought I missed something. Especially since we already have ways to do async, and "there should be one way to do it" is as important as keeping the number of builtins down.

So I'm asking again: am I missing something, or is it again one of these "I don't like it so let's do it like in language x" claims, like we had for brackets, lambdas and the like?


Here are some comments from python people (from the mailing list):

Victor Stinner: """As a contributor to asyncio, I'm a strong supporter of this PEP. IMO it should land into Python 3.5 to confirm that Python promotes asynchronous programming (and is the best language for async programming? :-)) and accelerate the adoption of asyncio "everywhere". ¶The first time Yury shared with me his idea of new keywords (at Pycon Montreal 2014), I understood that it was just syntax sugar. In fact, it adds also "async for" and "async with" which adds new features. They are almost required to make asyncio usage easier. "async for" would help database ORMs or the aiofiles project (run file I/O in threads). "async with" helps also ORMs."""

Marc-Andre Lemburg: """Thanks for proposing this syntax. With async/await added to Python I think I would actually start writing native async code instead of relying on gevent to do all the heavy lifting for me ;-)"""

Guido: """I'm in favor of Yuri's PEP too, but I don't want to push too hard for it as I haven't had the time to consider all the consequences."""

etc.

For more: https://mail.python.org/pipermail/python-ideas/2015-April/th...


Thank you, that helped.


Here is my take on it.

A Python generator is a lazily computed, readable stream. Generators have been integrated very nicely with the rest of the language (the iterator protocol, for loops, etc.). The result is that Python now has excellent support for the style of programming that builds a system out of composable components.

It is almost incidental that a generator function is really a kind of coroutine. This fact does make them very convenient to write; but when generators were designed, I don't think anyone was saying, "Let's add a high-quality coroutine facility to Python."

But somewhere along the way, someone noticed that generators are coroutines. If you don't mind doing a "yield" whenever a coroutine needs to give up control -- possibly using a bit of twisted syntax to make that work -- generators are in fact coroutines in full generality.
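The classic demonstration is a running average, where values flow in through send():

    def averager():
        # a generator used as a coroutine: values arrive via send()
        total, count = 0.0, 0
        average = None
        while True:
            value = yield average
            total += value
            count += 1
            average = total / count

    avg = averager()
    next(avg)            # prime it: run to the first yield
    print(avg.send(10))  # 10.0
    print(avg.send(20))  # 15.0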

But they don't look that way. If you just want to pass around control and do things asynchronously, you are still required to use the send-this-value-out-on-the-stream syntax. Conceptually, this is not the right abstraction. It is also occasionally cumbersome; wrapping a code snippet in some asynchronous processing is messy, for example.

That, plus the fact that the async/await (or future/promise) idea has proven itself to be effective and useful in other languages, suggests that Python needs additional constructions for doing coroutines.

That being said, I'm a bit leery of this PEP myself. The Rationale and Goals section, posted by ceronman, is all about deficiencies in the syntax. The real problem is that Python syntax is not sufficiently versatile to allow a programmer to create needed abstractions. We're talking about adding two new keywords and several constructions to support abstractions whose functionality can be implemented currently in Python, but cannot be given a nice interface.

I would prefer it if attention were given to the kind of syntax Python allows for programmer-created abstractions. Give me enough power that I can write async & await myself, along with some other things I might come up with, but which no one else has thought of yet. Polished versions of async/await can then be added to the standard library.


Your last paragraph makes sense indeed.


"@coroutine" not only makes something a coroutine which is not already a generator, but it also document its use as coroutine.

If you are using asyncio (or something similar), you will always need to pay attention to the "concurrent choreography" (for lack of a better word). I find I rarely confuse coroutines with other routines, and it's mostly when calling asyncio based third party libraries.


This reminds me very much of C# async and await (although "async for" and "async with" are very nice additions). It seems to me many other languages are now basically adopting wholesale these same features from C#.

I'm curious, is this just my warped perception? Did this terminology and feature set previously exist in programming language theory or practice? I know F# had "async" for a while before C#. Was there "prior art" that looked basically the same in Lisp, Haskell, Scala, ML, etc. or in some research language long before that?


You have to thank Erik Meijer for that, at least partially.

"Confessions of a used programming language salesman (getting the masses hooked on haskell"

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.118....


C# has essentially put a common name to many existing CS features and popularized particular forms of them.

A bunch of modern C# features are actually lifted from F#, which had them first on the .NET platform. Sometimes the C# version is simplified or has some of the sharp edges filed off (which makes it less powerful, but safer). Async/await is definitely one of these.


I love C#'s names for things. They often take academic names of non-obvious meaning and replace them with a common word whose meaning is obvious.


And here's a proposal for something quite similar in C++, written by the author of Boost Asio:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n445...


The "async with" is syntactically pretty similar to crosstown_traffic from hendrix (based on Twisted):

http://hendrix.readthedocs.org/en/latest/crosstown_traffic/


Since

a. this is functionally equivalent to coroutines implemented via library support through generators

b. Python will never ever get rid of the GIL, i.e. won't have real parallelism (given that the JVM/CLR Python implementations are basically dead and PyPy is chasing the pipe dream of STM, we're left with CPython as the only actual implementation)

Is there any reason to add new syntax? Other than wishful thinking that one day someone will write a Python runtime with proper JIT and parallelism support?


async/await is useful for more than just 'real parallelism': its main use is coordinating concurrency. The GIL isn't a problem when dealing with async IO or async calls into C libraries.


There's already a good bit of work on getting rid of the GIL in pypy. pypy also has channels (borrowed from stackless).


This is a control flow primitive. You wouldn't hold off on implementing "call/cc" or "function calls" or "loops" until after you have multiple threads running the interpreter, imo.


Stackless Python


Off topic: does anyone know if there is anything similar for Ruby in the works?

P.S - Ruby's issues system is hard to follow.


Yes and no. Ruby has Fibers, but they're roughly where Python is with generators as coroutines. They're wonderful when working with EM, and one time I wrote a massively overengineered Ruby Warrior program that allowed you to do things like "move(:left) until feel(:left).wall?" instead of doing one thing each time a method was called. This is exactly what makes coroutines so attractive in simulation programs. Still, Fibers are clumsy and counterintuitive and require some abstraction to produce a useful interface. Also, there are some problems related to overflowing the Fiber stack, especially in ridiculously deep Rails stacks; not sure if those have been resolved yet.

I'm not sure that special syntax is necessary for widespread adoption of Fibers, since Ruby syntax is flexible enough to provide a sort of DSL for working with Fibers.


ruby used to have RCRs as "language level issue tracking", modeled after PEPs[0].

Sadly, it died some years ago.

[0] https://www.ruby-lang.org/en/news/2003/12/19/new-ruby-change...


Although this is great and all, what does this do compared to green threads (gevent, eventlet, etc...)?


I'm not sure from what perspective you were looking to compare asyncio to green threads (performance, readability, etc.) but there is an interesting blog post [0] by the lead architect of Twisted about the fundamental difference in programming model between explicit yielding (e.g. asyncio) and implicit yielding (e.g. green threads, regular threads). It may give you some answers.

PEP 492, from what I understood, is more about streamlining the syntax, and the major differences between asyncio (pre this proposed syntax) and green threads in terms of programming model still apply.

[0] https://glyph.twistedmatrix.com/2014/02/unyielding.html


Glyph is a smart guy and has been doing async code for a very long time (at least in computing terms). It would be wise to consider his arguments.

I've done a project making heavy use of Python 3's asyncio feature. It is much nicer than doing the same job with threads or call-backs, in my experience. I'd welcome nicer syntax but the current functionality seems to work well.

I can't understand why someone would design a language in 2009 (Node.js) that uses callbacks. For simple programs it works but it doesn't take long to end up with an unmaintainable mess (in my experience).


With all the respect I have for Glyph, he's the least objective person to talk about this, having so much invested in one side of the argument.


Why is it wise not to consider opposing arguments? Because people who disagree with Glyph are not "smart guys"?


Quite the logic you have going on there. Certainly you should also consider other arguments.



The problem with "green threads" is that their context switching is unpredictable. You never know where the context is switched, potentially confusing C APIs and such.

Asyncio "threads", on the other hand, never switch context between "yield from" instructions. This makes it usable with the most bungled and complicated C APIs (e.g. Blender...), and also lets us reason more clearly about the concurrent behavior.


I'm going to go out on a limb and say this is a fallacy: you have just turned yourself (the programmer) into the context-switching routine that the operating system (or python itself) runs for you. IMHO computers are good (and can be made better) at scheduling, and we should really avoid trying to do it ourselves manually... Getting used to thinking about where code can be context switched is IMHO worth more than shoving it under a rug and turning the programmer into the scheduler...


It's not as bad as you think it is. First thing: Asyncio doesn't force the programmer to schedule anything. The event loop does that.

Also, this kind of message-passing concurrency is not meant for cpu-intensive work, but rather for situations where there's nothing to do in a particular task much of the time. Concurrent threads/tasks in frameworks like NodeJs, Tornado and Asyncio are very good at doing nothing.

For example in a web server, a traditionally threaded "worker" spends a lot of time waiting for a connection, waiting for database requests to finish, and so on. The operating system can use that time to execute other threads. But the memory of the thread sits idle during that time.

Scheduling in asyncio is not imposed by the operating system or the programmer, and only partly by the event loop, but largely by external circumstances like network latency/bandwidth, how long the database queries take, and how fast the user is acting.


Who writes the 'yield from' or 'async' or @coroutine (the yield points, aka context-switching 'places', that you have to pick correctly)...? I thought that was the programmer ;-)

As a counterpoint to glyph, this post by the creator/author of SQLAlchemy is a pretty good read: http://techspot.zzzeek.org/2015/02/15/asynchronous-python-an... :-)


Declaring in which line to switch context does not a scheduler make. Also, "yield from" does not just mean context switching but also returns a coroutine's result.

I have written quite a lot of asyncio and tornado code by now, and I can't remember a single instance where I had to yield/yield from/wait just because the computation would freeze up the thread or increase latency.

Coroutines really aren't about explicit scheduling; at the very most you can specify what code is executed without switching. And the latter is absolutely necessary in some situations. For example, threads or greenlets freak out Blender.


Dear Python,

I love you.


The Python 2.7 code freeze is totally working! All the horrid new BS is getting piled into 3.* and Python 2.7 is becoming sweet FORTRAN. Viva la BDFL! So wise! ;-)


Downvotes? Folks, I am a Python fanboy: I love it, am devoted to its success, and I'm delighted to use it every day in my work and personal life.

I meant the above comment in earnest, not sarcastically. Python 3.* lets the people who like that sort of thing have their cake and eat it too (and I am one of the people who drools at new shiny tech.) But the fact that Python 2.7 is stable for the foreseeable future is something to celebrate in my opinion, not condemn.


Python 3 does not need cool new features right now. Python 3 needs to get its existing libraries working properly. People aren't using Python 3 because the out-of-the-box experience sucks. We covered this yesterday on HN.[1]

[1] https://news.ycombinator.com/item?id=9378898


Who made you BDFL?



