Python 3.5.0 (python.org)
491 points by korisnik on Sept 13, 2015 | 154 comments



Python 3 keeps getting better and better. I've been writing more code in Python 3 than 2 for about three years now, and I absolutely love it. When I have to write code in 2.7 now, it feels like shifting from fifth gear down to second.


Is Py3 ready for prime time? Do all the right libraries have support now?


Yes -- I write ~95% of my python code in 3, and scrapy is the only library I commonly use that doesn't support it. Here's a listing of package compatibility: http://py3readiness.org/.


and the other one: https://python3wos.appspot.com/


In most cases, yes. There are libraries that you might want to use that don't support it yet but the vast majority do. Personally, I'm anxiously waiting for Fabric 2 which has been pending for a long time. This is a good resource for figuring out whether the libraries you need are ready: http://py3readiness.org


Unfortunately, I'm stuck on 2.7 until Google updates App Engine.


Most of the time I find it pretty easy to make my code compatible with both 2.7 and 3.4. Usually it just takes a "from __future__ import print_function" and a handful of other small tweaks.

If your codebase is large, it can feel daunting, but it's not usually that bad.
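For instance, a minimal sketch of that kind of compatibility header (the mean() function is just for illustration):

    # top of a module meant to run on both 2.7 and 3.x
    from __future__ import absolute_import, division, print_function

    def mean(values):
        values = list(values)
        # with the division import, / is true division on 2 and 3 alike
        return sum(values) / len(values)

    print(mean([1, 2]))  # 1.5 on both Python 2.7 and 3.x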


It's been ready for prime time for the past 3 years.


Py3 has been prime time for a while now. I've been using it full time since Py 3.3 for all my work and I haven't had any issues with it. There were some external libraries, such as the client library from a Twilio competitor, that were incompatible, but easily fixed (almost exclusively encoding issues).

Although, I still find myself running tests and cursing at the fact that I keep writing print "string" without the brackets :D


> When I have to write code in 2.7 now, it feels like shifting from fifth gear down to second.

You mean, second gear up to fifth?

I tried writing some Python 3 code and the lack of parameter tuple unpacking by itself stunned me and drove me crazy, never mind other stuff. Python 3 seems very user-unfriendly compared to Python 2. Won't touch it with a ten foot pole unless my life depends on it.


I've been using Python for 7+ years, and I had no idea parameter tuple unpacking was a thing. Never once seen any code that uses it, or seen anyone mention it. If you think the (rightful) removal of a hardly used feature[1] that has some big problems is a blocking issue then I feel sorry for you. Py3 is awesome.

1. https://www.python.org/dev/peps/pep-3113/


Interesting. JavaScript is actually adding something very similar, called destructuring. It's quite handy - although admittedly more handy for destructuring object arguments than array arguments.


Python 2 & 3 have destructuring. This is talking about having

    def fxn(a, (b, c), d):
        pass

be a valid function signature. So the destructuring is held in the function definition.

Python 3 will still allow

    def fxn(a, x, d):
        (b, c) = x
        pass

Both Python and ES6 JavaScript allow destructuring arrays:

    // javascript
    let [x, y] = [1, 2]

    # python
    [x, y] = [1, 2]
    (x, y) = (1, 2)


ES6 does allow that via destructuring - https://babeljs.io/repl/#?experimental=true&evaluate=true&lo...

    function fxn(a, [b, c], d) {
      console.log(a, b, c, d);
    }


> If you think the (rightful) removal of a hardly used feature[1] that has some big problems is a blocking issue then I feel sorry for you.

Uh, thanks?

I would call it anything but "rightful". Making life harder for introspection tools is hardly a rightful reason for removing a core language feature. The code is what most developers are trying to develop, not the introspection tools.


> Making life harder for introspection tools is hardly a rightful reason for removing a core language feature

Well, that's one reason, yes, and by itself not really a good enough one. But if you read the PEP there are four other good reasons that you omitted from your comment. Why is that?


> Well, that's one reason, yes, and by itself not really a good enough one. But if you read the PEP there are four other good reasons that you omitted from your comment. Why is that?

Because the other reasons were twice as horrible.

"No loss of abilities"? Uh, you can't do this in lambdas anymore.

"Exception to the rule"? Who cares? It was useful and readable, that's all that matters.

"Uninformative Error Messages"? This has never been a pain point for me. If you don't like it then no one forced you to use it...

"Little Usage"? Well I guess everyone matters but me. Which is fine, but then you shouldn't wonder why it would tick me off.


As someone who's bounced back and forth between Python 2 and Python 3, I'm curious about your need of parameter tuple unpacking[0]. I've never encountered it, and after some research I don't understand the use case.

I don't mean to sound like I'm trying to belittle your position, I would like to understand more. The PEP below doesn't seem to do a very good job explaining why people like them, and I'd like to hear a proponent's position on the feature.

[0] - https://www.python.org/dev/peps/pep-3113/


> As someone who's bounced back and forth between Python 2 and Python 3, I'm curious about your need of parameter tuple unpacking[0]. I've never encountered it, and after some research I don't understand the use case.

> I don't mean to sound like I'm trying to belittle your position, I would like to understand more. The PEP below doesn't seem to do a very good job explaining why people like them, and I'd like to hear a proponent's position on the feature.

Basically, I need it for functional programming. See here:

https://news.ycombinator.com/item?id=10213963


I use it like ES6 or Haskell destructuring. Sure, it's a little weak compared to both of those implementations, but the tuple case covers enough to be useful.


ES6 allows you to destructure in the function definition?



As someone who splits his time between Python, Clojure, and Javascript (via Babel), lack of this is extremely annoying for Python since the other two languages allow it.


A good example of a use case I can think of would be to pass an (x,y) coordinate pair as a single parameter, and unpack it in the parameter definition. This would save having an extra line that says (x,y) = pt.

Unless I'm missing something, I don't see losing this as a showstopper personally, though I do see it as a readability win.


You should use a namedtuple for that sort of thing, and then refer to the members as pt.x and pt.y.
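For example (a quick sketch):

    from collections import namedtuple

    Point = namedtuple('Point', ['x', 'y'])

    def dist_from_origin(pt):
        # named fields replace the unpacking in the signature
        return (pt.x ** 2 + pt.y ** 2) ** 0.5

    print(dist_from_origin(Point(3, 4)))  # 5.0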


At first glance, parameter tuple unpacking seems like a silly hack that was wisely removed, explained here http://legacy.python.org/dev/peps/pep-3113/

What are the cases where you use it / rely on it?


> What are the cases where you use it / rely on it?

Basically any time I need to unpack inside a lambda (e.g. functional programming) and have to use something like itertools.groupby(), I run into trouble with Python 3. For example:

  from itertools import groupby
  def groupby_unsorted(items, key):
    return map(lambda (g, l): (g, map(lambda (k, v): v, l)), groupby(sorted(map(lambda v: (key(v), v), items)), lambda (k, v): k))
  print(groupby_unsorted([1, 2, 3, 4], lambda v: v % 2))
Never ran into problems with it either.


Python has comprehensions for this style of programming:

    def groupby_unsorted(iterable, key):
        return [(k, [v for k, v in g])
                for k, g in
                groupby(sorted((key(v), v) for v in iterable),
                        lambda item: item[0])]
    print(groupby_unsorted([1, 2, 3, 4], lambda v: v % 2))


Yes, list comprehensions can take care of some of these, but you're completely missing the entire point.

The point was that writing something like

  lambda item: item[0]
in your code is inferior to

  lambda (k, v): k
in terms of readability and comprehensibility. It is longer, and it doesn't document the fact that the item is semantically a key-value pair.

The problem is not (and has never been) whether something is "necessary" or "possible". Tuple unpacking is obviously completely unnecessary to begin with; that has nothing to do with whether it's done in a parameter or not. The problem is whether the code is expressed better via unpacking.


But tuple unpacking inside lambdas is actually supported in Python 3, isn't it?


Could you give me an example of what you mean?


Probably means from 3rd gear down to 2nd.


Care to share some highlights? Fifth to second sounds like a big difference!


It has the ability to statically check types, which is amazing. Moreover, async/await with async resource management (which C# doesn't have yet) gives it a real edge for server development.


> has the ability to statically check the type system

I was under the impression that type checking was not one of the goals of PEP 484 etc.


> Moreover async/await with async resource management (which C# doesn't have yet)

In C#, the using statement combines with async/await. It is cool that Python finally gains proper async/await; there's no need to diss other languages.


IDisposable's Dispose is not an async function; there is no IAsyncDisposable interface with language support. There is, however, a proposal: https://github.com/dotnet/roslyn/issues/114 . I'm not dissing the language, but things are the way they are.


What is async resource management? Can you link to docs for the features you mean? I'm having trouble googling this one; maybe different keywords are used?


I think inglor refers to the new 'async with' statement: https://docs.python.org/3/reference/compound_stmts.html#asyn...

Using it you can safely clean up / commit a transaction / release a lock / etc. when exiting a block of statements (even if an exception was raised). The plain 'with' statement cannot call asynchronous code in its '__exit__' methods; 'async with' can (in '__aexit__').
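Roughly like this (a minimal sketch; the Transaction class is made up):

    import asyncio

    class Transaction:
        async def __aenter__(self):
            await asyncio.sleep(0)  # e.g. send BEGIN over the network
            return self

        async def __aexit__(self, exc_type, exc, tb):
            # runs even if the block raised, and is allowed to await
            await asyncio.sleep(0)  # e.g. COMMIT or ROLLBACK

    async def main():
        async with Transaction() as txn:
            pass  # do work; the cleanup above is awaited on exit

    asyncio.get_event_loop().run_until_complete(main())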


> async/await with async resource management (which C# doesn't have yet) gives it a real edge for server development

An edge over what? You're still limited to a single thread via the GIL. Until that problem is solved, Python is certainly not the best bet (all else aside) for server development.


Node.js is also single-threaded and has been doing fine on servers.

Do you really need to share mutable state between threads in your server? If yes, what's your story when you'll need to scale to multiple servers?

If not because you can keep shared mutable state in a separate service (DB or similar), you can deal with one-machine scaling very similarly to scaling across machines - just start/fork a process per CPU. You get simplicity, avoid the whole thread-safety can of bugs, and all the administration benefits of stateless servers.


By far the most exciting news here, to me, is the PEP 484 typing module. Typing support would eliminate one of Python's biggest weaknesses. Furthermore, having it optional means the language can remain as easy to play and prototype with while becoming "more professional."

The other features are all well rounded, with co-routines having quite some potential, though I'd have to play around with them first to assess.

Now if everything would please start moving on to Python 3 pretty please ;).
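For anyone who hasn't played with it yet, the annotations look roughly like this (a sketch; they're checked by external tools such as mypy and ignored at runtime):

    from typing import Dict, List

    def word_counts(words: List[str]) -> Dict[str, int]:
        counts = {}  # type: Dict[str, int]
        for word in words:
            counts[word] = counts.get(word, 0) + 1
        return counts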


I like the thought of coroutines, but it's definitely been especially slow for the ecosystem to adapt to asyncio so far. (And for good reason. It has new syntax that doesn't play well with non-asyncio). Adding even more syntax for the same things may slow down adoption more? Admittedly, it's much better syntax.

The optional typing is a tricky subject. I'll have to wait until there are tools actually using it to see. One of my biggest hesitations is that until runtime checking or really good static analysis exists, annotating things incorrectly can be too easy. I am cautiously optimistic though. The theory is fantastic; we just have to wait for the follow-through. The best way to sort-of statically type things right now (IMO) is through namedtuples, which I make use of heavily for lightweight, strict datatypes.

Formatting for bytes and bytearray is actually one I run into a ton trying to port py2 code to py3. Pretty excited about that.


PyCharm already takes advantage of optional typing using Python 3 annotations. The typing module will allow more types to be expressed, and I assume they'll update to take advantage of annotations that involve the typing module.


FWIW Tornado already supports the new syntax: http://tornado.readthedocs.org/en/latest/guide/coroutines.ht...


> PEP 484 typing module

Looking forward to this as well. So far the only static analysis tool I'm aware of that makes use of the PEP 484 framework is mypy [1] -- does anybody happen to know of other tools in the works?

[1] http://mypy-lang.org/


I know PyCharm is working on using them for its type hints as well as offering better auto-completion options.


And this is why I paid for PyCharm. (And why their change in pricing model is so damn awkward; I both love them and hate them at the moment.)


Heh, for me it's the matrix multiply operator. Such a minor addition will make for a pretty big improvement to my day-to-day coding experience.


Agreed, but does this apply to NumPy as well? Or will Python matrix multiplications just work on lists of lists, in which case they'd be much slower? Sorry, just asking as a 2.7 holdout here, as this might cause me to move.


"An upcoming release of NumPy 1.10 will add support for the new operator" [1]

[1] https://docs.python.org/3.5/whatsnew/3.5.html


The matrix multiplication operator is not implemented for any standard python data type.

The matrix multiplication PEP is actually titled "A dedicated infix operator for matrix multiplication", and that's (broadly) the only thing that it provides. Here's the arguments for why the operator should exist: https://www.python.org/dev/peps/pep-0465/#why-should-matrix-...

NumPy and other libraries might/will implement the matrix multiplication infix operator for their array and matrix data types.


From:

http://legacy.python.org/dev/peps/pep-0465/#id24

Implementing __matmul__, __rmatmul__ and __imatmul__ will allow you to apply this operator to any given class. In that light, you could subclass the numpy matrix class yourself and simply apply these.

As for whether these will be applied to Python lists, my speculation is: I doubt it. It's possibly the most commonly used data structure, and I doubt they would add the overhead of another set of methods on each instance.
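A toy sketch of how a class opts in (Vec2 is made up):

    class Vec2:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __matmul__(self, other):
            # define a @ b as the dot product for this toy type
            return self.x * other.x + self.y * other.y

    print(Vec2(1, 2) @ Vec2(3, 4))  # 11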


The matrix multiplication on lists could be used for the Cartesian product (see itertools.product). However, this operation is rather uncommon, so there's no good reason to dedicate an operator to it.

It wouldn't incur any overhead; I don't know why you'd think it would. Each method of any object only needs to be stored once, not again for every instance. The per-instance overhead comes from pointers and data fields such as the length.


> Its possibly the most commonly used data structure, and I doubt they would add the overhead of another set of methods on each instance.

Unless I'm grossly mistaken, Python methods add basically no per-instance overhead (they add per-class overhead.)


Doubt it. The @ symbol isn't implemented by default. It's just available as a syntactic element primarily for NumPy to use, though other libraries are free to use it as well.


I want to see it used for an XML library, since I think node@"name" is a more succinct, and domain-appropriate equivalent for node.attrib.get("name").


I have kind of mixed feelings. The biggest argument against that PEP was that people would use it for things that are very unlike matrix multiplication, which can be confusing. I can kind of see the point there, but on the other hand the @ symbol really is natural for things like what you suggest.


Unusual operator overloading isn't that common in Python, though '%' for string processing is a big and primordial exception. Of the packages I know which use it, only PyParsing uses overloading for something non-analogous to the normal meanings.

I tried hard to think of ways to (ab)use '@', and this XML example and perhaps some sort of messaging API were the only two I could come up with where there was a natural domain-specific mapping.

One of the advantages of keeping '@' at the same precedence level and ordering as '*', '/', '%', and '//' is that it doesn't really offer much in the way of new use. If someone uses '@' for something very unlike matrix multiplication, then they could already use any of the other 4 symbols that way.

So I don't think @ ends up as an attractive nuisance.


It's a bit of a weak argument when pathlib is in the stdlib: https://www.python.org/dev/peps/pep-0428/#deriving-new-paths


Why not just use the array indexing operator for that?


In which way? node[1] means the second child.

If you mean node[x] returns node.attrib.get(x) when x is a string, and the nth-child when x is an integer, then I think that puts too much complexity into the API.

If you mean stay with the existing node.attrib dict-like API, then node@name is a simple shortcut for node.attrib.get(name), but it's 1) more aligned with CSS/XSL '@' notation for attribute syntax, and 2) faster in CPython.

(Just realized that '/' and '//' also make sense in that context. node / "abc" / "xyz" @ "p" would be the child "abc"'s child "xyz"'s attribute "p". Perhaps my proposal is a bad idea because it might suggest that these other forms are allowed.)


If you grab the latest code in master from https://github.com/numpy/numpy you'll get support for @ in numpy. It's great!


The @ operator won’t apply to any built-in data types. It’s only designed for numpy and other libraries.


Yep - can't wait until Python3 is the norm.

Feels like such a step back when I have to use a Python 2.x codebase - so many little annoyances.


Join us in the future!

I made python 3 a 'production readiness requirement' at the start of this year.

My two biggest Python 3 complaints have shifted from 'something doesn't work' to 1: Pyston is targeting Python 2, and 2: PyPy doesn't care enough about Python 3 (I mean seriously... PyPy + asyncio = EPIC)


PyPy is accepting donations for Python 3 support. I encourage everyone that is interested to chip in a few bucks.


"Now if everything would please start moving on to Python 3 pretty please ;)."

Unfortunately, not always an option. Or rather, it is, but would require an immense amount of effort. Not to mention, the client wouldn't be too happy with the increase in bugs.

Working on a legacy C++/Python implementation stuck with 2.5. It's not as bad as it sounds, really. You learn to improvise, and work on python 2.7/3+ in off-hours.


So far, this is my favorite 3.x release. Typing annotations, async/await, unpacking generalizations. Lots of cool new features. Too bad that PEP 0498[1] didn't make it into this release; we'll have to wait for 3.6.

[1] https://www.python.org/dev/peps/pep-0498/
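The unpacking generalizations alone are a nice quality-of-life change, e.g. (quick sketch):

    print(*[1], *[2, 3], 4)            # 1 2 3 4
    merged = {**{'a': 1}, **{'b': 2}}  # {'a': 1, 'b': 2}
    flat = [*range(3), *'ab']          # [0, 1, 2, 'a', 'b']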


> 'f' may not be combined with 'u'.

I thought that Python3 removed 'u' strings, was this just a bit of humour in the PEP?


We added the u'' prefix back in 3.3 to ease 2->3 porting. u'' is equivalent to a plain ''.


PEP 0448, in addition to the already-implemented PEP 3132, makes it very tempting to switch. If PEP 0448 had unpacking in comprehensions I'd switch. (There doesn't seem to be a follow-up PEP just for that functionality yet.)

https://www.python.org/dev/peps/pep-0448/ https://www.python.org/dev/peps/pep-3132/

It'd be really nice to use this:

  >>> [*range(i) for i in range(5)]
Instead of this monstrosity right now.

  >>> [x for y in (range(i) for i in range(5)) for x in y]
Python 2.7 has some minor features that 3 dropped unfortunately, which still makes me hesitate.

Such as filter keeping the type. In 3 it returns an iterator.

  >>> filter(lambda x: x in 'ABC', 'ABCDEFA')
  'ABCA'
Or this mostly cosmetic feature.

  >>> filter(lambda x: x[0] > x[1], ((1, 2), (4, 3)))
  >>> filter(lambda (x, y): x > y, ((1, 2), (4, 3))) # equivalent, error in 3
Also dropped. (It's slower than using the dedicated base64 module though.)

  >>> 'Python'.encode('base64')
  'UHl0aG9u\n'
Also, I like print.


Hi, I implemented most of PEP448. We actually implemented

  [*range(i) for i in range(5)]
and

  {**d for d in ds}
but it was ultimately removed because people on python-dev found it confusing. If you're interested in seeing that construction in Python, I suggest waiting until 3.5 gains some traction and then making a suggestion on python-ideas.


I have a few follow-up ideas/discussions to raise on the list, and this'll be one of them. As you say, I'd like to wait for things to settle and people to get an idea of the new functionality first.

FWIW, I'm the PEP writer. It's not my invention, I just wrote it up. It still makes me glad when people mention the PEP, though :).


Joshua, you're not just the PEP writer: You also helped a lot with the implementation! Anyway, it also makes me happy :) By the way, we really need someone to document our feature.

http://bugs.python.org/issue24136


> Such as filter keeping the type. In 3 it returns an iterator.

> >>> filter(lambda x: x in 'ABC', 'ABCDEFA')
> 'ABCA'

it was a special case for a few types and didn't (couldn't) work for all types, e.g.

    >>> filter(lambda x: x in 'ABC', set('ABC'))
    ['A', 'C', 'B']
> Also, I like print.

I like Python 3's print. It's kind-of a pain because my day-to-day uses Python 2 and the context switch is annoying, but 3's is way more flexible and readable:

    print <<f, foo, bar # (or is it >>file? I can never remember)
versus

               actually makes sense, no sigil soup
                       ||
                       \/
    print(foo, bar, file=f, flush=True)
                               /\
                               ||
                   separate statement necessary in P2
and because it's an expression it can be used in more context than Python 2's print statement.

I feel you on the loss of argument unpacking, it was a very convenient feature (though not too common IME)


In case you haven't seen it, in Python 2.7:

  >>> from __future__ import print_function
  >>> print("hi")
  hi
  >>>
My team is starting to use it as we look to migrate various command line utilities to python 3.


Other options for your first case are:

    >>> sum((range(i) for i in range(5)), [])
    [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
and

    >>> import itertools
    >>> list(itertools.chain.from_iterable(range(i) for i in range(5)))
    [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
"filter keeping the type". That was only true for some types. Python 2.7's filter does not maintain the set type:

    >>> filter(lambda x: x in 'ABC', {'A','B','C','D','E','F','A'})
    ['A', 'C', 'B']
Regarding the cosmetic feature, that's a consequence of PEP 3113 -- Removal of Tuple Parameter Unpacking , https://www.python.org/dev/peps/pep-3113/ , which also applies to functions.

I use 's.encode("hex")' often. I realize why hex/base64 are gone, but it's so short and easy in Python 2.7.


> sum((range(i) for i in range(5)), [])

This won't work in Python 3 since `range` is a new type (and it's O(n²) anyway, so not missed).

The second option is idiomatic IMHO.


Indeed! I incorrectly thought I tested that under both versions. And I didn't notice the O() performance, which is:

    >>> import time
    >>> for i in range(20):
    ...   t1 = time.time()
    ...   n = len(sum((list(range(i)) for i in range(2**i)), []))
    ...   t2 = time.time()
    ...   print(i, t2-t1)
    ... 
    0 0.00010800361633300781
    1 2.288818359375e-05
    2 2.5033950805664062e-05
    3 4.00543212890625e-05
    4 7.295608520507812e-05
    5 0.00017499923706054688
    6 0.0005929470062255859
    7 0.0024929046630859375
    8 0.012469053268432617
    9 0.14577507972717285
    10 1.7278220653533936
    11 17.82376503944397
    ^C
(MHO too, BTW.)


This:

    >>> [x for y in (range(i) for i in range(5)) for x in y]
can be simplified into:

    [y for x in range(5) for y in range(x)]


Are you tied to 2.7 for any other libraries or are these minor bits all that's keeping you from switching? If so I'd really recommend you just try python3 for a week, and have a play with some of the new features/functionality and the minor points will soon melt away.


Actually, filter always returning an iterator was one of the benefits of python3 for me; you don't have to think about the return type or always import itertools, everything just returns an iterator.
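e.g. (quick sketch):

    evens = filter(lambda x: x % 2 == 0, range(10))  # lazy in 3
    print(list(evens))  # [0, 2, 4, 6, 8]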


> Also, I like print.

That really bugged me for a while, but then I added a "snippet" into my editor. Now I type: pr then the tab-key, and it transforms to:

    print('', )  # with cursor between quotes
and it works well, also I have similar shortcuts for logging.


fwiw:

    >>> codecs.encode(b'Python', 'base64')
    b'UHl0aG9u\n'
The encodings still exist; hex works too (or hex_codec in older Python 3s). Python 3 just restricted the encode/decode methods to always go str to bytes or bytes to str.


What's New in 3.5 document: https://docs.python.org/3.5/whatsnew/3.5.html


I'd love to use Python 3. Alas, I work in the defense industry and am working on a new development project in Python 2.6. Why? Because that's the version that's already installed on the client's systems, and getting anything new installed is such a nightmare of Vogon-esque policies and procedures that I've learned not to even ask.

I'm constantly having to hack around bugs in ancient versions of libraries that were fixed years ago, for the same reason.

I have a feeling the slow adoption rate of python 3 is not due to developers "rejecting" it.


If that's because you work on Red Hat, see if at least the EPEL repository is available to you. No idea if or when it'll have Python 3.5, but it's usually ahead of the base repo.


For folks using RHEL/CentOS >= 6.5, https://www.softwarecollections.org/en might be the way to go, once 3.5 is available there.


Yeah, maybe RHEL 8 will have it, maybe as early as 2017. Maybe. :)


I went to 3.3 a while ago but reverted back to 2.7 because, out of the 10 or so crucial libraries I needed, one was still on 2.7: Bloomberg (http://www.bloomberglabs.com/api/libraries/). That's what the people who cite high percentages of libraries already ported to 3 miss. All it takes is a single one not being ported and the future stays on hold.

I have taken to using the 3 syntax in 2.7 so that I am prepared, but until this particular library moves, I cannot.


The GitHub repo suggests there's been Python 3 support for a while: https://github.com/filmackay/blpapi-py

I've run into my fair share of Python 3 roadblocks, but you'd be surprised how far you can get by making a little noise and sending the occasional patch. Library authors are much more willing to prioritize Python 3 support if it looks like people actually need it :)


I will try to compile this today - appreciate that thank you. I understand that Ubuntu is making Python 3 the default from 16.04 onwards so I need to get moving. Looks like the long dusk for 2.x is finally turning to night.


The official library supports 3.x, and the downloads are on the link you put in your original comment.


If you're already working around bugs you could create your own venv with a later version. Easier to ask forgiveness than permission.


Not in the defence industry.


I work in the defense industry. Sometimes it is indeed easier to ask for forgiveness than permission, but you must be very careful. Obviously anything that violates any NDAs or security agreements is not to be pushed at all, but I've had no problems writing and running Python and MATLAB programs.

Point is, defense industry doesn't automatically equal a locked down environment.


I'll chime in here and say that quite a few people in aerospace/defense (at least where I work) prefer Python to some of the more traditionally used tools.

I've written a number of scripts for data analysis and simulation in Python that are much more portable for us since we don't need to get a license to run MATLAB on computers out in the field.


I just now (days later) read your profile.

If you're in the EE department...I had your exact same job a year ago. And your main Python users are down the hall in ACS. Those guys are great. Maybe EE has started to change some...since they lost pretty much half the department. For your sake, I hope so.


Some industries will be stuck with old versions of just about any language, no matter what. GP actually said he's stuck on 2.6, which means he can't even use 2.7.


I work in defense. Software approval is for major releases, so I don't see why he couldn't use 2.7. If he wants to go to 3, that's when he'd have to get approval.


The scandir update also changes the underlying implementation of os.walk, giving loads of production apps a huge speed increase by making use of the additional data returned by the OS calls.
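If you haven't seen the new call, it looks like this (a minimal sketch):

    import os

    # DirEntry objects cache file-type info from the directory listing,
    # so is_dir() usually avoids a separate stat() per entry
    subdirs = [e.name for e in os.scandir('.') if e.is_dir()]
    print(subdirs)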


Could os.walk have been changed "under the hood" without a PEP? E.g. using, just for example, fast_walk_in_c in _os.so

(I understand that adding os.scandir might require a PEP)


In theory, yes, but because the change was fairly complex and could be useful if available separately, one of the first things a core member recommended was to iterate on the proof of concept as a standalone package and provide an API to the underlying system call (which is now scandir).

https://mail.python.org/pipermail/python-ideas/2012-November... is the original proposal on -ideas.


No, a PEP was required because there was a lot of discussion on the API, and also non trivial choices on the implementation (pure C? pure Python? C+Python?). To have an idea of the amount of discussion, read the "Rejected ideas" of the PEP:

https://www.python.org/dev/peps/pep-0471/#rejected-ideas


Why would changing the underlying workings of os.walk require a PEP?


Great! 134 comments so far and only 1 or 2 about "Python 3 will never prevail", "Python 2 is better" and the usual rants. Things are getting better!


I love all the performance improvements to the standard library. That Serhiy Storchaka has made an impressive number of contributions, and he doesn't seem to be working for anyone?


Question - is there any web/API framework that leverages asyncio? This implicitly also means first-class DB/ORM support.

Flask, SQLAlchemy, etc. seem to be using gevent and Python 2.7.


I'm not sure, but perhaps aiohttp[1] fits the bill? I don't know if there's a "full" framework that does everything, but take a look at the simple benchmark code:

https://github.com/klen/py-frameworks-bench/blob/develop/fra...

(from: http://klen.github.io/py-frameworks-bench/)

[1] http://aiohttp.readthedocs.org/en/stable/


In this case, how are most of you writing code with asyncio? Or is there any code being written at all?

Right now, it seems to me that Python 3.5 is not ready for adoption since it is not usable with any of the major frameworks. I mean, I really want to use asyncio, but 99% of Python mindshare is around SQLAlchemy, Bottle, Flask, Django and CherryPy, and I can't use it with any of them.

aiohttp looks cool, but I would much rather use one of the above for now. Anybody know why none of these framework maintainers are leveraging this stuff?


You can definitely use asyncio with CherryPy: http://ws4py.readthedocs.org/en/latest/sources/servertutoria...


Sorry, I was confused, that is not true.


Yes, there is. https://github.com/KeepSafe/aiohttp

It's hard to write an asynchronous ORM, though. Your options are either to use an existing synchronous ORM (like SQLAlchemy) in a thread pool (`loop.run_in_executor`) or to use a more low-level DB driver: https://github.com/aio-libs/aiopg#example-of-sqlalchemy-opti...
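The thread-pool route looks roughly like this (a sketch; blocking_query stands in for a synchronous ORM call):

    import asyncio

    def blocking_query():
        return [(1, 'row')]  # stand-in for a synchronous ORM/DB call

    async def fetch(loop):
        # None means the loop's default thread-pool executor
        return await loop.run_in_executor(None, blocking_query)

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(fetch(loop)))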


The asyncio example/benchmark snippet I linked above uses peewee-async:

https://peewee-async.readthedocs.org


psycogreen is one such attempt with gevent - it works pretty well, actually. That illustrates the confusion I'm having: gevent + py2.7 is actually working with all the major frameworks and giving all these performance benefits.

In fact, if you look closely at the benchmark code (https://github.com/klen/py-frameworks-bench/blob/develop/fra...), the comparison is not against flask+gevent+psycogreen, which might change the numbers.

https://pypi.python.org/pypi/psycogreen/1.0


The RethinkDB Python driver works exceptionally well with asyncio.


Pulsar[1] and pulsar-odm seem to be the python equivalent of akka-http. (Actor based concurrency and ORM[2])

[1] http://pythonhosted.org/pulsar/ [2] https://pypi.python.org/pypi/pulsar-odm


Tornado added support for that recently, I believe


It's about time I switched to Python 3.


PEP 0475 makes me happy; I've had to work around interrupted syscalls a few times.
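For reference, the classic idiom that PEP 475 makes unnecessary (a sketch):

    import errno, os

    def read_retrying(fd, n):
        # pre-3.5: retry when a signal handler interrupts the syscall
        while True:
            try:
                return os.read(fd, n)
            except OSError as e:
                if e.errno != errno.EINTR:
                    raise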


Does this mean that asyncio will be deprecated, as there are now two ways to accomplish much the same thing, or do they truly have different semantics?


You're probably thinking of PEP 492. It's not replacing asyncio; in fact it adds new language features to make programming with asyncio (and other frameworks, such as Tornado) more convenient.


It complements asyncio. In particular, you'd probably want to use asyncio's event loop to drive your coroutines.
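i.e. something like this (minimal sketch):

    import asyncio

    async def greet():
        await asyncio.sleep(0.1)
        return 'hello'

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(greet()))  # hello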


My experience:

I've tried Python 3 and enjoy it very much. A few things drove me nuts at first, trying to figure out why syntax that worked in Python 2 no longer works in Python 3. But a little bit of googling and I was on my way with Python 3.

I have a project that I started a long time ago in Python2 using web.py. I tried to migrate to Python3, but unfortunately, web.py is not supported. I know Flask has python3 support and I could migrate to that but I'm not ready to move my whole code base over yet.

As someone who tried to migrate to Python 3 with no compelling reason to and hit a roadblock, I immediately shelved the problem till later, as I'm not missing anything critical from Python 3 to warrant the effort.

I wonder how many other projects go through this? Especially much larger, more complex code bases.


By the way have you tried out CherryPy? Another web library to consider when dealing with Python 3 projects.


I've recently joined a Python project, my first time working with the language. We're currently using Python 2.7. How do experienced Pythonistas decide if/when to upgrade to 3.x? I figured it was a no brainer given that it has been 7 years with a number of big releases, but Flask's warning on this topic put me off:

http://flask.pocoo.org/docs/0.10/python3/


Use it by default. If something doesn't work and you can't or don't want to make it work yourself, then fall back to 2.7.10.

P.S. That Flask warning is definitely very out of date. I used Flask with Python 3 quite happily, literally this week.


Those flask docs look out of date. I don't think you'll hit those troubles in python 3 anymore. It's imo a very mature ecosystem now.


That's very out of date, going by: "Python 3 currently has less than 1% of the users of Python 2 going by PyPI download stats." The web UI download stats stopped being updated in 2013, if I remember correctly, and the API was disabled shortly after due to the migration to a CDN. So these stats are ~2 years out of date.


I like the "use Python 3 by default, and only use Python 2 if you have mandatory dependencies that aren't 3.x compatible" rule.


IMHO the only reason to use 2.x for new projects is if a necessary dependency doesn't exist for 3 yet.

Sadly, the situation for using asyncio on the web was not that great last I checked; if it were, that would be a strong incentive.


Using Python 3 in production with Flask here, and been doing so for at least a year now. Never had an issue whatsoever. The warning is frankly outdated.


Very pleased to see PEP 465. In many ways I think python is one of the most exciting languages in terms of where it is going and what it is trying to do.


Awaiting an official release (and my PR being merged), I made a Docker image which is simply an upgrade of the official one: https://hub.docker.com/r/ctolsen/python/

Also, pytest users might want to use their dev version for now until they fix a 3.5 issue.


The new Windows installer is nice. It has needed updating for a long time so glad to see it get some love finally.


Could someone explain how to properly install Python 3.5.0 on Ubuntu so that it replaces the system's default Python 3.4.0 as the new default? Installing is the easy part, but I haven't figured out how to also make sure all the libraries that come with Python are also replaced with the 3.5 versions. Thanks!


You probably don't want to do this.

For stability it's best to keep the system managed through packages from the standard repositories. If you install things from outside the repositories, the packaging system won't be able to track dependencies and will have bad information about what's installed. Overwriting official system libraries is a common cause of problems.

It's best to only upgrade something when the repository for your release makes it available: it pays to be conservative about the repositories you use, sticking to the standard ones and thinking very carefully about adding any others (e.g PPA's).

If you want 'newer' versions of frameworks/libraries for development then you're better off installing them away from the main system - traditionally that is /usr/local, or using a partitioning system such as chroot.

For Python specifically there's a really nice option of use venv/virtualenv.


I think the only correct way to do this is via 'apt-get upgrade'. In other words, you probably don't want to replace the system Python manually, because a lot of utilities depend on it, and you might bork your system. Using virtualenv is the generally-recommended way to run a different version.


I've often wondered why utilities that are particular about python versions don't just specify which version they want in the hashbang line. Actually it seems some utilities do exactly that:

    $ head -1 /usr/bin/lsb_release
    #! /usr/bin/python3 -Es
Obviously, all utilities should do this, and should probably be more specific than that. Distros could have a tool to patch this automatically, and then they wouldn't break when the user chooses to upgrade system python.


I really enjoy using pyenv these days.

https://github.com/yyuu/pyenv

In that case, `pyenv install 3.5.0` and then `pyenv shell 3.5.0` or `pyenv global 3.5.0` to switch which Python gets used. With pyenv's virtualenv plugin, you can also get rid of the need to use virtualenvwrapper for managing dependencies.


There are some other nice nuggets in there.

* The new subprocess.run() function provides a streamlined way to run subprocesses.

Yeah! I've implemented that function a dozen times in projects, I suppose because it was too trivial to put on pypi.
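For reference, the new one-shot call (a sketch):

    import subprocess

    # runs the command, waits, raises CalledProcessError on non-zero exit
    result = subprocess.run(['echo', 'hello'],
                            stdout=subprocess.PIPE, check=True)
    print(result.returncode, result.stdout)  # 0 b'hello\n'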

* collections.OrderedDict is now implemented in C, which makes it 4 to 100 times faster.

Nice, I use this one relatively often.


I am impressed that it upgraded cleanly over my previous installation of Python 3.5 release candidate.


Worth it just for isclose.
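For the unfamiliar, PEP 485's math.isclose does approximate float comparison:

    import math

    print(0.1 + 0.2 == 0.3)              # False, the classic float trap
    print(math.isclose(0.1 + 0.2, 0.3))  # True, relative tolerance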


Python somehow always manages to miss the mark. The new co-routine stuff is a good example. Who can explain to me how I return more than once from the same co-routine? That is what a co-routine is, after all, no?


Check any tutorial about it, or even the relevant PEP; it's explained in many places. In short:

    def some_co():
        yield 2
        yield 3


Hmm,

    It is a SyntaxError to have yield or yield from expressions in an async function.
That's what I mean when I say it misses the mark. There are now two ways of doing the same thing, and one of them is more powerful than the other. Using generators I can use both `yield` and `yield from`, but if I use what they're calling co-routines then I can only use `await` (i.e. only the `yield from` behaviour). I see no value in the new syntax.

So an async function is not really a co-routine. It is a regular function that gets to use `await` and that's it. Calling it a co-routine is misleading and confusing.


The new coroutine stuff is really just syntactic sugar for "yield from", which has been around for a few versions now (and in turn is little more than syntactic sugar for "yield"ing repeatedly).

But you don't return more than once from a coroutine. You can suspend it as often as you like, and that's what "await" does.


And yet, Python 2 usage is still strong while Python 3 adoption has actually slowed down, and at this rate, it will probably stop being used completely while Python 2 continues to live on:

https://www.reddit.com/r/programming/comments/3k9yif/larry_w...


> Python 2 usage is still strong while Python 3 adoption has actually slowed down

Hypothesis: distributions started installing python3 by default now, so people don't have to download it separately anymore. Python 2 downloads will stay at the same level as they were and actually 2.6 will go up as it's not available in supported distributions anymore. This is strangely opposite trend to actual adoption.


Python 3 usage doubled in the last year: https://caremad.io/2015/04/a-year-of-pypi-downloads/


What's your point?

You do have a point, right? Coming in to a thread about a python release and trying to stir up controversy?

Or are you just doing it for kicks?


His point seems obvious to me: he thinks Python 3 is not worth migrating to, because Python 2 will always have more users. But it disregards that Python 3 may be more comfortable. And it assumes that the Python 3 user base will shrink in the future.


Trolls are trolls...



