What’s New In Python 3.4 (python.org)
289 points by ot on Oct 20, 2013 | 72 comments



I didn't know what "Single Dispatch Functions" was all about. Sounded very abstract. But it's actually pretty cool:

http://www.python.org/dev/peps/pep-0443/

What's going on here is that Python has added support for another kind of polymorphism known as "single dispatch".

This allows you to write a function with several implementations, each associated with one or more types of input arguments. The "dispatcher" (called 'singledispatch' and implemented as a Python function decorator) figures out which implementation to choose based on the type of the argument. It also maintains a registry of types -> function implementations.

This is not technically "multimethods" -- which can also be implemented as a decorator, as GvR did in 2005[1] -- but it's related[2].

Also, the other interesting thing about this change is that the library is already on Bitbucket[3] and PyPI[4] and has been tested to work as a backport with Python 2.6+. So you can start using this today, even if you're not on 3.x! (Rough sketch after the links.)

[1] http://www.artima.com/weblogs/viewpost.jsp?thread=101605

[2] http://en.wikipedia.org/wiki/Dynamic_dispatch

[3] https://bitbucket.org/ambv/singledispatch

[4] https://pypi.python.org/pypi/singledispatch
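
To give a rough idea of the usage (this is just a sketch with a made-up `to_json` function; on 2.6–3.3 you'd import the decorator from the PyPI backport instead of functools):

    from functools import singledispatch  # backport: from singledispatch import singledispatch

    @singledispatch
    def to_json(obj):
        # default implementation for unregistered types
        raise TypeError("don't know how to serialize %r" % (obj,))

    @to_json.register(int)
    def _(obj):
        return str(obj)

    @to_json.register(list)
    def _(obj):
        return "[" + ", ".join(to_json(x) for x in obj) + "]"

    print(to_json([1, 2, 3]))  # dispatches on list, then on each int -> "[1, 2, 3]"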


Interesting, but I have a hard time accepting this. If I accept multiple types of iterable, I either check isinstance and use separate methods, or check isinstance and convert everything to a single type and operate on that. On the other hand, I also try to minimize accepting multiple types of iterable. I don't do that very often except in the case of list and tuple. Since those two types are very similar in nature, I call a separate method in my code to handle the two. That's okay, just a few extra lines.

I don't really understand the benefit of multimethods and dispatch. This sounds useful for building routes in a web app. But for a library, it sounds like we are bringing back C++'s single interface, multiple types... I don't even remember what that is called.


From PEP 443: "In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects."


This is quite useful to hide away casting from one type to another, or to "build" objects - say you have an object represented by either a JSON or an XML structure; now you can have a single generic function that accepts either.
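
Roughly like this (the `load_point` function and the formats are made up purely to illustrate):

    import json
    import xml.etree.ElementTree as ET
    from functools import singledispatch

    @singledispatch
    def load_point(data):
        raise TypeError("unsupported input: %r" % (data,))

    @load_point.register(dict)        # e.g. the output of json.loads(...)
    def _(data):
        return (data["x"], data["y"])

    @load_point.register(ET.Element)  # e.g. the output of ET.fromstring(...)
    def _(data):
        return (int(data.get("x")), int(data.get("y")))

    print(load_point(json.loads('{"x": 1, "y": 2}')))        # (1, 2)
    print(load_point(ET.fromstring('<point x="1" y="2"/>')))  # (1, 2)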

Is this method overloading or am I missing something?


It looks like overloading, but overloading as it is typically understood in languages like C++ is weaker, since it's resolved statically at compile time rather than on the runtime type of the argument:

http://en.wikipedia.org/wiki/Double_dispatch#Double_dispatch...


Not directly replying, but more amending your comment.

Double dispatch is generally simulated in imperative languages via the Visitor pattern. It's generally used when you have objects that won't change their structure very much (and are backed by an interface of some sort, which is also needed for the polymorphism aspect of the pattern) but that you have to do lots of different manipulations with. That way, you don't have to change the interface each time you add another operation. It's also sometimes referred to as "inversion of control." I've used it before for filesystem-type objects and for walking over the structure of an XML file (though both times were for school-related projects). I could probably dig up the labs and post them on my GitHub for anyone interested. It's pretty cool how it works (and sort of hard to wrap one's head around initially), but not something I would consider using often.
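
A bare-bones Python sketch of the idea (the filesystem-ish names are just an example):

    class File:
        def accept(self, visitor):
            # which visit_* gets called depends on the node's type...
            return visitor.visit_file(self)

    class Directory:
        def __init__(self, children):
            self.children = children
        def accept(self, visitor):
            return visitor.visit_directory(self)

    class CountVisitor:
        # ...and which implementation runs depends on the visitor's type
        def visit_file(self, node):
            return 1
        def visit_directory(self, node):
            return sum(child.accept(self) for child in node.children)

    print(Directory([File(), File()]).accept(CountVisitor()))  # -> 2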

Scala, being a hybrid imperative/functional language, has a unique take on double dispatch[1][2].

[1] http://stackoverflow.com/questions/8618082/visitor-pattern-i...

[2] http://blog.nirav.name/2009/04/how-scalas-pattern-matching-c...


Wait, so it's language-supported external polymorphism? That's...odd, since external polymorphism is considered an anti-pattern by OOP PL purists.


Why is that odd? I don't think the Python community is mostly OOP purists.


Which OOP PL purists specifically think language-supported external polymorphism is an anti-pattern? Do you have any citations?


Huh? But that's not single dispatch? Single dispatch is deciding what function to call based on the type of your object, not on the type of arguments. That's called double dispatch.

Single dispatch is pretty standard polymorphism, C++ can do that.


That's a bit of a semantic argument. Python already has "object-oriented single dispatch" -- aka traditional object-oriented polymorphism.

What this module adds is "functional single dispatch".

So, whereas before you'd always be forced to implement some type-varying function using two classes `HandleA` and `HandleB`, each with an implementation for `handle`:

    class HandleA:
        def handle(self):
            pass

    class HandleB:
        def handle(self):
            pass

    def main(obj):
        # obj could be instance of HandleA or HandleB
        obj.handle()
In this case, "dynamic dispatch" is done by `obj.handle()`, which will pick a different implementation depending on the type of obj.

With this PEP/stdlib addition, you can now write two functions, `handle_A` and `handle_B`, which take an argument, `obj`, and are dynamically dispatched using the generic function `handle`.

    from functools import singledispatch

    class A: pass   # stand-in classes so the example is runnable
    class B: pass

    @singledispatch
    def handle(obj):
        pass   # fallback for types without a registered implementation
    
    @handle.register(A)
    def handle_A(obj):
        pass

    @handle.register(B)
    def handle_B(obj):
        pass

    def main(obj):
        # obj could be instance of A or B
        handle(obj)
And in this case, "dynamic dispatch" is done by `handle(obj)`, or really, by the dispatcher decorator. It chooses `handle_A` or `handle_B` based on the type of the `obj` argument.

The reason this is a nice addition is because it makes Python eminently "multi-paradigm" -- you can choose object-oriented or functional styles depending on your taste and the applicability to the task at hand, instead of being forced into one programming style or the other.

(the content of my comments got long enough that I decided to document them for posterity over on my blog: http://www.pixelmonkey.org/2013/10/20/singledispatch)


The thing I'm looking forward to in Python 3.4 is that you should be able to follow the wise advice about how to handle text in the modern era:

"Text is always Unicode. Read it in as UTF-8. Write it out as UTF-8. Everything in between just works."

This was not true up through 3.2, because Unicode in Python <= 3.2 was an abstraction that leaked some very unfortunate implementation details. There was the chance that you were on a "narrow build" of Python, where Unicode characters in memory were fixed to be two bytes long, so you couldn't perform most operations on characters outside the Basic Multilingual Plane. You could kind of fake it sometimes, but it meant you had to be thinking about "okay, how is this text really represented in memory" all the time, and explicitly coding around the fact that two different installations of Python with the same version number have different behavior.

Python 3.3 switched to a flexible string representation that eliminated the need for narrow and wide builds. However, operations in this representation weren't tested well enough for non-BMP characters, so running something like text.lower() on arbitrary text could now give you a SystemError (http://bugs.python.org/issue18183).

That bug is fixed in Python 3.4, which removes the last thing I know of standing in the way of Unicode just working.
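
In practice that sandwich is just (file names here are placeholders):

    # read it in as UTF-8, work with str inside, write it out as UTF-8
    with open("in.txt", encoding="utf-8") as f:
        text = f.read()

    text = text.upper()  # plain str operations, including non-BMP characters

    with open("out.txt", "w", encoding="utf-8") as f:
        f.write(text)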


The advice you mention reminds me of the "Unicode sandwich", a term used by Ned Batchelder in his awesome "Pragmatic Unicode" talk[0][1].

[0] http://nedbatchelder.com/text/unipain.html

[1] http://nedbatchelder.com/text/unipain/unipain.html#35


I love how there's a thorough "what's new" document for a still-in-progress release, and terms like "provisional API" are linked to a glossary that tells you exactly what they mean. While Python is not the most interesting language to me anymore, it still sets the standard in clear, comprehensive, newbie-friendly documentation.


I think newbies are far from the people most likely to be looking at version-to-version changelogs in any language.

This is great for those maintaining substantial codebases in the language, though.

The thorough specification process gives lots of warning for the introduction of changes (so you can almost update on release day if you were really so inclined and prepared).

Edit: Though on a re-read, clear, comprehensive, newbie-friendly documentation is something that (somewhat unrelatedly) Python does have. My bad.


I'm by no means a programming newbie, so maybe this skews what I'm about to say, but I find that reviewing release notes and changelogs is often quite useful while I'm still in the belly of a learning curve. If the project in question has a cohesive vision (big "if"), this can help frame the trajectory of that vision and get you to march in step with the project's state of the art more quickly.


I, too, was amazed at the neat documentation of the changes made for this release. There is a comfortable familiarity in that page layout; my brain naturally knows what to click and read through. Kudos to the team who did this work.


I'm posting this mainly because it will be the first release that implements PEP 3156 [1], that is, a standardized asynchronous I/O API.

[1] http://www.python.org/dev/peps/pep-3156/
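
For anyone who hasn't looked at it yet, the provisional API in 3.4 looks roughly like this (pre-async/await, so plain generators plus `yield from`; the `greet` coroutine is made up):

    import asyncio

    @asyncio.coroutine
    def greet(delay, name):
        yield from asyncio.sleep(delay)  # cooperative: lets other coroutines run
        print("hello", name)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.wait([greet(1, "world"), greet(2, "asyncio")]))
    loop.close()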


Been waiting for this one for so long.


> "Tab-completion is now enabled by default in the interactive interpreter."

Thanks! Now I don't have to set this up on every box just to use Python comfortably.


It's a shame usability is so far down the list of priorities; this could have been done a decade ago.

After finding bpython, this is a bit underwhelming. Perhaps they should just include it by default.


Oh wait, you need to explicitly turn it on? How?


In your $PYTHONSTARTUP file:

    import readline
    import rlcompleter
    # bind the Tab key to readline's completion function
    readline.parse_and_bind('tab: complete')


This doesn't work on OS X as-is. See below for a portable version [1].

    import readline
    import rlcompleter
    if 'libedit' in readline.__doc__:
        # OS X's system Python links against libedit, which uses a different binding syntax
        readline.parse_and_bind("bind ^I rl_complete")
    else:
        readline.parse_and_bind("tab: complete")
[1] http://stackoverflow.com/a/7116997/10583


you install IPython :)


Or bpython.


We still need module reloading and a memory of past sessions, like IPython has.


I like the implementation of the "Enum" class, especially the way they allow for enums with variant behavior and "abstract Enums" (something I could have used in Java recently).

But, ever since I found out about algebraic data types from other languages, I keep wanting those. There's not quite a good way to do those in Python. (I've used both "tuple subclass" and "__slots__," but both of those have their own little quirks.)
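
For reference, the "variant behavior" part looks something like this (`Shape` is a made-up example):

    from enum import Enum

    class Shape(Enum):
        CIRCLE = 1
        SQUARE = 2

        # members share behavior defined on the class
        def area(self, size):
            if self is Shape.CIRCLE:
                return 3.14159 * size ** 2
            return size ** 2

    print(Shape.CIRCLE.area(2))  # 12.56636
    print(Shape.SQUARE.area(2))  # 4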


If Python were to have algebraic data types, it might also need pattern matching.


I don't think pattern matching is necessary for ADTs. It's useful, but you could have e.g.

    if MyType.ConstructorA(a):
        ...
    elif MyType.ConstructorB(a):
        ...

or even something like

    with MyType.ConstructorA(a) as b, c:
        ...
    else with MyType.ConstructorB(a) as d:
        ...

a "with/else with" construct would be a fairly straightforward addition to Python's "context manager" interface.


Yes, for better or worse Python is nowadays fairly conservative about introducing backward-incompatible syntax changes, and pattern matching would probably require quite a bit of new syntax (and benefits a lot from a strong compiler).


Python 3.4 is one of the most feature-packed releases I recall. Besides the obvious new perks like enums and asyncio, note some hidden gems like PEP 442:

" This PEP removes the current limitations and quirks of object finalization. With it, objects with __del__() methods, as well as generators with finally clauses, can be finalized when they are part of a reference cycle. "

This has been a notorious limitation in Python forever, and in 3.4 it's finally solved.
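
A quick way to see the difference (the pre-3.4 behavior is noted in the comments):

    import gc

    class Node:
        def __del__(self):
            print("finalized")

    # build a reference cycle between two objects that both define __del__
    a, b = Node(), Node()
    a.other, b.other = b, a
    del a, b

    gc.collect()
    # before 3.4: the cycle was uncollectable and ended up in gc.garbage
    # with PEP 442: both __del__ methods run and the cycle is freed
    print(gc.garbage)  # expected in 3.4: []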


I'm not familiar with the Python internals, but I suspect the widely critiqued Python 3 step was just that: a shift that did not bring that much by itself but rearranged the internals enough to unlock a lot of new doors in the future.


Finally, a statistics module! Actually though, simple methods for measures of center and spread have been on my Python wishlist since, like, forever. The mean() function is going to be especially useful. So excited!
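
The basics, with made-up values:

    import statistics

    data = [2.5, 3.25, 5.75, 1.0, 4.5]
    print(statistics.mean(data))    # 3.4
    print(statistics.median(data))  # 3.25
    print(statistics.stdev(data))   # sample standard deviation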


Yay, code by me is now in an official Python release! (data:-URL support)


Congratulations! It's also the first release with code I wrote (the dis.Bytecode class).


Python 3.4 is out and unfortunately I still can't write Python 3.

Is there anyone else who is still procrastinating on moving their workflow to Python 3?


This seems to me to be a watershed release. 3.4 is the point where we should move en masse to Python 3 for software development. At the same time, we should consciously abandon the default distro-installed Python. Leave that for sysadmin tasks and sysadmin scripting, but for actual applications, install Python 3.4 in a virtualenv and use that going forward. By now you should only occasionally run into non-ported libraries, and the details of how to port them are well enough known that you should be able to do the job yourself if you run into one that is important to you.


I have quite a few projects based on django, and these are still 2.7. There are still too many dependencies that are not python3 ready yet.

On the bright side since django 1.5 has been released with python 3 support, many related projects have moved to python3. Still not completely there yet, but moving forward.

Shameless plug: I've been testing little badges to tell which requirements are python 3 ready or not on requires.io (https://requires.io). Should be in production later today or tomorrow.


It's in production.

An example of project that could consider making the jump from Python2 to Python3: https://requires.io/github/canassa/sodexo-api/requirements/?...


I'm still on Python 2.7. I've built up a lot of infrastructure at work surrounding the "scipy stack".

Are there any major libraries that have not yet been ported to Python 3? I'd be interested to hear about others' experiences making the plunge.


https://python3wos.appspot.com has a list.

boto, fabric, Django (still 'experimental'), and various django packages are my blockers.

gevent also doesn't support Python 3.x yet, AFAIK. Can the asyncio stuff supersede it?


All of the core Scipy Stack is ported, though if you use more specialist libraries, they may be missing. The main pain point in switching is with Unicode, which is mostly not an issue for scientific/numerical code. Watch out, though, for integer division: now 1/2==0.5 (not 0).
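
Concretely:

    print(1 / 2)   # 0.5 in Python 3 (true division)
    print(1 // 2)  # 0, floor division, the old behavior
    print(7 // 2)  # 3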


> Watch out, though, for integer division: now 1/2==0.5 (not 0).

There go half of my bugs. ;-)


Twisted is the big one for me. The new async API introduced here sounds really great, but I have so much code built around Twisted that I don't know if it would be worth it for me to switch right away. Maybe I should write my next async project in Python 3.4 and start making a slow transition.


Not a library, but I think Mercurial currently still only works with Python 2.


We started new development a couple of weeks ago. With it came the choice of language, runtime, and so on.

Python 3 ended up being the target for the server-side bits, thanks to the fact that Pyramid is now Python 3 capable.

We are still finding Python 3 issues. Deploying Python 3 on RHEL was a bit of a pain, mostly solved by RH Software Collections providing a Python 3.3 build and by the functioning state of virtualenv+pip.

Not being able to use mod_wsgi with Python 3 was a bit of a pain, but solved (FastCGI wrapper to the rescue).


I recently started moving my existing open source apps to Python 3, now that Django, NumPy and Matplotlib all support it. For new apps I write Python 3 from scratch. Perhaps I'm not taking advantage of all the new stuff, but one app does serial I/O and I had to entirely rewrite the code dealing with strings, because I read bytes from the serial device.

Debian Jessie will default to Python 3 only, I heard. I am starting to install only python3-* packages.


Yes, and I'm planning on moving to 3.4 when it ships with Ubuntu next year. Looks like all the rough edges have been sanded. ;)


What are your blockers?


Can someone help me understand how to think about the transition from 2.7.x to 3.x?

I have recently switched my work to Python and just started development on what will become a series of web projects all done in Python + Django. Yes, when it comes to Python I am a noob.

Looking at it with fresh eyes it seems that the most useful ecosystem is solidly rooted in the 2.7.x branch. Books and online courses promote the idea of using 2.7.x. My kid enrolled in the MIT intro to CS edX course and they use 2.7.5. Codecademy, same thing.

From the perspective of developing a number of non-trivial web-based products, how should I view the 2.7.x and 3.x ecosystems? Do you see a timeline to a transition? How should one prepare for it (or not)? What should one avoid?

At the moment it seems safe to pretty much ignore 3.x. I kind of hate that because I have this intense desire to always work with the latest stable release of any software I use. Here things are different due to the surrounding ecosystem of libraries and tools. I'd certainly appreciate any and all help in understanding this a bit better.


If you don't have any legacy baggage, you should celebrate your freedom to do what hockey great Wayne Gretzky said was his secret to success: Don't skate to where the puck is, skate to where it will be. The value of Python 2 knowledge is like the value of knowledge of how to write HTML/CSS for old versions of IE: a declining asset. If you don't have to deliver production software for today's user base, rejoice, and focus on learning how to target tomorrow's user base. Python 3 + Django 1.5 already let you do that. Those external libraries that don't work with Python 3 will either be upgraded or replaced. There is probably a lot you can learn about Python 3 and the latest Django while that takes place.

And, regarding kids, I'm teaching mine Python 3 only (no time wasted on Py 2) and the latest HTML5/CSS3 (no time wasted learning workarounds for obsolete browsers). I think they're better off focusing entirely on preparing for the future, not spending part of their time preparing for the past.


Thanks for your perspective on this. I had my kid go through the codecademy python course, which is 2.7.5. Now he is going through the MIT CS class on edX, which also seems to be 2.7.5.

At this point I'll help him through this phase and perhaps then make 3.x part of the continuing learning experience. In other words, if you are going to be in CS you will always have to deal with shifts in technology; this is a perfect teaching moment for him to learn that.


Unsupported Operating Systems

OS/2


It is strange that they claim to have implemented SHA-3 in hashlib. The details of the padding and capacity are still under discussion, so no final standard has been published yet.


I'm interested in seeing how robust this custom allocator feature will be. I doubt it will be more efficient than C/C++ but it makes the language that much more useful.


The new statistics library looks interesting.


As to the statistics module, wouldn't it be more useful to have a median function with a percentile parameter (defaulting to 0.5)? I mean, it's a common use case and seems like a natural thing to have.


Enums and a statistics module. Yay!


"The :mod::pprint module" - an extra : there has broken the reference.


Python keeps getting more awesome, but we're still stuck at Python 2 :(


What's got you stuck?


For me, it's a lack of resources (time) to test whether existing infrastructure would work with Python 3 or whether something would have to be fixed. I can't test it in production, and it's loads of (sometimes hidden) code, most of which I didn't write.

I don't think we'll ever move... well not in a next year or two.


django here.


Django supports python 3, though I guess some django apps might not yet. https://docs.djangoproject.com/en/1.5/topics/python3/


If I had to guess, the same thing that has some places still stuck on COBOL.


It's amazing how a lot of big financial institutions are still running on COBOL.


Not just financial institutions - I've seen it when consulting at older medium-to-large companies. They've got new mainframes and code they've been running, with minor patches, for 30-some years.

I find the mainframe fascinating, honestly. It's an entirely different world than the one I'm used to, from terminology to how systems are structured.


TL;DR: "No new syntax features are planned for Python 3.4."


Gahh, I wish my work wasn't frozen in 2.6.


What prevents migration to 2.7? It's nothing technical, is it?


Finally Enums!



