What’s New in Python 3.5 (python.org)
288 points by calpaterson on July 7, 2015 | 131 comments



Well, there are also type hints, which so far we haven't added to "What's New" :)

https://www.python.org/dev/peps/pep-0484/


Yes. I'm not entirely sure why this was posted now, as it's a couple of months from 3.5's release date yet, and What's New is not up to date. On python-dev, David Mertz just volunteered to write/update this "What's New" page, somewhat sponsored by the company he works for, Continuum: https://mail.python.org/pipermail/python-dev/2015-July/14065...


Type hints are not Pythonic. The syntax is ugly. It adds "stub files", like C's ".h" files, something newer languages such as Go and Rust don't need. Some of the proposed syntax is even in comments, because it didn't fit the language. But something had to be done to meet competition.

The problem Python faces is that Python's Little Tin God painted the language into a corner. He insisted for years that everything in Python had to be fully dynamic. This limited performance.

Then came competition in Python's space. Go came along, JavaScript got traction on the server, and Python faced a real threat. Both of those can beat Python on performance, sometimes by huge margins, and they're usable by Python-type programmers used to the freedoms of scripting languages.

The Python 3 debacle had already thrown the Python community into disarray - most of the production work is still on Python 2, which was supposed to be abandoned but refused to die because the cost of conversion was much higher than expected. Meanwhile, Python 3 takeup was far less than expected; numbers like 10-20% come from downloads, and production use is probably less. Another incompatible change in the syntax would be rejected by the market. An incompatible change to a "typed Python" wouldn't fly.

So we get PEP 484, which is ugly but might help Python survive. Might.


Your comment is grossly overstating the issue, and unnecessarily inflammatory. I have reviewed the type hints syntax, and it's not so bad. It's doubtful that anything nicer can be implemented on top of what you have clearly pointed out is a dynamically typed language.
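
For reference, the inline annotation syntax under discussion looks like this (the example is taken from PEP 484 itself):

    def greeting(name: str) -> str:
        return 'Hello ' + name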

Your criticism of Python being dynamically typed is also misguided. There are many benefits of having a dynamically typed language, which I won't bother to enumerate here since this subject gets beaten to death regularly on HN. It is a good choice to make this design decision up front and honor it as the language grows. Guido is not a moron; he knew there would be performance implications. Nobody today chooses Python for its native performance anyway (although Cython and PyPy have made great strides for common cases).

The value of Python is not in performance, it's in the language simplicity, large ecosystem, and highly developed libraries. There are some disciplines such as machine learning and quantitative finance which are all but predicated upon Python, with excellent results. Comparisons to Go and JS are incongruous; those languages have other benefits which would make them good choices if things like concurrency (Go) and very high level abstractions (JS) are important.

The Python 3 transition was indeed rough, but in no way is it a "debacle". The community is not in "disarray"; that's absurd. The transition to 3 will happen eventually, and indeed this lethargy was caused by deliberate breakage in language features. Maybe not the best decision in hindsight, but far from this cataclysmic fantasy you seem to be depicting.


I'd say the comment was less inflammatory than perhaps justified.


>So we get PEP 484, which is ugly but might help Python survive. Might.

This seems really hyperbolic. I'm not sure Go really competes in the same space as Python; also Python is so well entrenched as the successor to Fortran in the scientific computing space that this would seem to guarantee continued relevance far into the future. It's probably like a lot of things: it may not be as sexy or breakneck fast (though PyPy is poised to change that, and maybe we can also feasibly get a non-sucky version of IronPython or Jython), but it's pretty much everywhere now, kind of like Perl, C, PHP, Java....


>also Python is so well entrenched as the successor to Fortran in the scientific computing space

Uhhh... no. FORTRAN is literally the fastest programming language in existence. Maybe Python is good for prototyping or pre/post-processing some data, but with a performance hit on the order of 100x, you won't run Python scripts for the bulk of any serious scientific computing project.


I think numpy and scipy[1] beg to differ. You should go look at what's actually available in terms of scientific computing on Python, as I think you might actually learn something (hint: numpy is really a bunch of Python wrappers over FORTRAN routines, if you look at the source code). Having been a PhD student in Physics, in particular (at an Ivy League university, no less), I can tell you that the majority of new code we were writing was Python, and since I've left, it's probably gotten more so. A good example of Python in physics is PyMCA[2]. Python is really poised to also (thank god) reduce the market share of Matlab.

[1] http://docs.scipy.org/doc/numpy-dev/f2py/getting-started.htm... [2] http://pymca.sourceforge.net/

Also: http://www.quora.com/How-did-Python-dominate-scientific-comp... http://programmers.stackexchange.com/questions/138643/why-is...

Finally as a Physics person, you might also like the coursework for this (I certainly enjoyed the course): http://pages.physics.cornell.edu/~myers/teaching/Computation...


Another thing to consider is f2py, which allows calls to Fortran subroutines from Python. In my experience, it was faster than numpy, but you have to suffer through writing Fortran.
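
For the curious, a minimal sketch of the f2py workflow (file and module names follow the classic fib example from the f2py docs; the Fortran side is assumed, not shown):

    # Given a Fortran file fib.f defining SUBROUTINE FIB(A, N),
    # first build an extension module with:  f2py -c fib.f -m fib
    import numpy as np
    import fib

    a = np.zeros(8, dtype='d')   # double precision, matching the subroutine
    fib.fib(a)                   # fills `a` in place with Fibonacci numbers
    print(a)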


Just use numba and get Fortran-like performance by writing in Python.
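
A minimal sketch of what that looks like (assuming numba is installed; compilation happens on the first call):

    import numpy as np
    from numba import jit

    @jit(nopython=True)            # compile to machine code, no Python objects
    def total(a):
        s = 0.0
        for i in range(a.shape[0]):
            s += a[i]
        return s

    print(total(np.arange(1e6)))   # first call compiles, later calls are fast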


I got inconsistent results when using Numba. When it worked well, it was way faster than Numpy, but sometimes it was slower. I wasn't able to figure out how to do AOT compilation, so I just went with f2py. If Numba has AOT compilation, I'd definitely use that over f2py though.


AOT compilation is in the works. Also you might have been using features that numba didn't support yet. They just added more numpy ops, array allocation and vector ops, so your code might be working now.


I am no longer working on the project (it was part of a 4-month research project), but I'll make sure to tell my supervisor. Thanks! :D


Most scientific computing is not "serious". My past couple papers, for example, have been mostly Python analysis of geospatial data. I use Python because it's easy to integrate with Postgres/PostGIS, and the numerical code provided by Numpy is fast enough. (I drop down into Cython when needed.) The algorithms I need to run aren't hugely intensive -- at most, something might take an hour to run, and most of my simulations take just a minute or two.

I don't need raw speed. I need development speed so I can easily iron out bugs and try new methods. My colleagues develop in R for the same reason.


I think it depends what you count as scientific computing. I'm an engineer at an industrial plant, and like you, for offline non-intensive stuff when I have to knock up a prototype fast, I use SAS, mostly because it has good integration with databases and I can crank out code very fast using it.

My housemate is a Math/Stats person; he works in finance and uses R for much the same reasons.

Maybe Python is useful but it would have to offer me something compelling to make me switch over.


> "Most scientific computing is not "serious""

Stopped reading after this.


Maybe not pure Python, but one of the best features of CPython is easy interoperability with C or Fortran. Python is the glue for hard-to-use but blazing fast numerical libraries. You can also write very fast code in Cython, which has basically the same syntax as Python.
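
For illustration, a tiny Cython kernel (a .pyx file; the cdef type declarations are the main addition over plain Python syntax, and are what buy the speed):

    def csum(double[:] a):         # typed memoryview over a NumPy array
        cdef double s = 0
        cdef int i
        for i in range(a.shape[0]):
            s += a[i]
        return s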


Python is used in a vast number of scientific codes. I do a very significant chunk of my work in Python. And heck, I reach for C++ to do the pre/post processing, believe it or not, the inverse of what you suspected.


Yes, Python is not the future... but no, Julia and Python's Numba are just as fast as FORTRAN.


As far as I know type hints don't affect Python's performance, because they're optional and not yet used by the runtime.

In any case, JavaScript is dynamically/unityped, so its performance is unlikely to provide a reason for type annotations. The real problem here is that PyPy is still maturing, and can't be used by many people because of legacy C extensions.


And they're unlikely ever to be used by the runtime, because what hints there are are far too vague. Saying something is class X is useless when you don't know what attributes that class has (because they can be dynamically added and removed) nor what types those attributes are.


The level of information modern Python tools like those in PyCharm and JEDI can discover about runtime behaviour is actually pretty impressive. I don't think it's going to pose a huge challenge for these kinds of tools. Dealing with ambiguous cases is a UI problem: too much info, not enough space.


They don't affect performance, but I have to agree they really don't look nice to read or write, which is unfortunately the antithesis of Python's philosophy.


How's this for a debacle: I just ported a 2.6 app to 3 and it only required fixing one line due to a change in library behavior. 2to3 took care of the rest.


That's not wholly true. While stub files are proposed, they're not required to use this feature.


Perl 6 has full gradual typing YAY https://www.youtube.com/watch?v=id4pDstMu1s


I'm extremely enthusiastic about `async/await` semantics, and in particular async iterators and the `async for` loop.

It's astonishing to see how fast Python is moving in this direction. It is truly a powerful language for async computations now - surpassing in this ability both JavaScript and C# (at least for now). For example, Python got `async with`, but JS isn't even close (userland solutions like Bluebird's `using` exist), and C# is only starting to work on `IAsyncDisposable`.
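
For reference, `async with` works against asynchronous context managers; the sketch below is adapted from PEP 492 (`log` is just a stand-in coroutine):

    async def log(msg):            # stand-in for any coroutine
        print(msg)

    class AsyncContextManager:
        async def __aenter__(self):
            await log('entering context')

        async def __aexit__(self, exc_type, exc, tb):
            await log('exiting context')

    # inside a coroutine:
    #     async with AsyncContextManager():
    #         ...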


I admit to not fully understanding the async PEP (partially since I don't do much Python programming anymore). Are there any downsides to having to explicitly mark an async function using the "async def" syntax?

I'm thinking of how in Go I can launch a goroutine with "go foo()", where foo can be any function and not one that happened to be declared explicitly as async.

Or in Perl 6, I can "my $p100 = start { (1..Inf).grep(*.is-prime)[99] }; say await $p100;" This provides the similar blocking "await" keyword, but I'm still free to execute any arbitrary code inside the block denoted by "start".

I have a feeling that these questions are both answered by whatever coroutine functionality existed prior to the introduction of "async def", but that's an area of Python that I have no experience with. I'd appreciate any clarification that could be offered. Thanks!

Edit: re: the P6 example, it looks like it works somewhat like Python, where a "Future-like object" (from the PEP) is provided to await; in P6, start returns a Promise. However, it looks like in Python I still need to wrap the enclosing function with "async def". https://www.python.org/dev/peps/pep-0492/#await-expression


Goroutines are, for all practical purposes, threads. They're cheaper than real threads but they still have all the subtle complexity that comes with shared-memory preemptive threading (everything shared needs to be appropriately locked, scheduling is non-deterministic and can happen any time, etc). Async/await is different; context switches can only happen at an `await` expression. This greatly reduces the number of possibilities that must be considered and simplifies the task of concurrent programming (for more, see https://glyph.twistedmatrix.com/2014/02/unyielding.html).

In Python, the closest equivalent to `go foo()` is `threading.Thread(target=foo).start()`. Or you can use gevent to get goroutine-like lightweight threads, but I find this works better in Go than in Python, because the entire language is designed around it. In Python it's common to find libraries that are not compatible with gevent's monkey-patching, and so you lose concurrency in a way that is difficult to detect or guard against.


Going to have to disagree with the criticism of gevent. The monkey-patching is very robust, despite the fact that monkey-patching is a sin in general. I've never had any problems with it, with the exception of third party native modules.

If code is using the regular Python socket and threading libraries, it will almost always work seamlessly. Gevent provides the closest thing to goroutines.
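
A minimal sketch of that pattern (URLs are made up; the patching must happen before anything else imports socket):

    from gevent import monkey
    monkey.patch_all()             # make socket, threading, etc. cooperative

    import gevent
    import urllib2                 # now uses the patched, non-blocking socket

    def fetch(url):
        return urllib2.urlopen(url).read()

    urls = ['http://example.com/%d' % i for i in range(3)]
    jobs = [gevent.spawn(fetch, u) for u in urls]
    gevent.joinall(jobs)
    print([len(j.value) for j in jobs])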


Third party native modules are exactly the issue. Many database drivers (etc) are wrappers around native libraries that do not integrate with gevent; the ease of integration with native code is generally one of Python's great strengths. To use gevent you must pay a lot more attention to implementation details of libraries you rely on.


> Are there any downsides to having to explicitly mark an async function using the "async def" syntax?

Yes. One main one is splintering the library ecosystem. So some libraries are using Twisted, some are using the base threading system, some run in Tornado, some will use async and so on. They are usually not mixable. One day in the distant future they will all support async, but that means updating all of them.

Special markers ("async" wait points/objects) for IO coroutines tend to propagate vertically through all the API layers. If, say, a low-level library you found returns a Deferred/Promise/Future or generates values, then the top level has to handle that as well, and then its parent also has to handle it.

It is infecting the ecosystem with low-level details about how the IO needs special casing because of how the GIL works. Yes, it is more elegant in Go, Rust, Erlang, C#, Java etc.

BTW Python has something like it based on the greenlet library (which the eventlet and gevent libraries are based on). Those monkeypatch system libraries, so it manages to hide this complexity, but there are other costs, unsupported or untested side-effects being one.


What does the GIL have to do with this? All blocking I/O calls release the GIL.


It's more complicated than that. Specifically, a mix of CPU-bound and I/O-bound threads doesn't play out as nicely:

See this:

http://dabeaz.blogspot.com/2010/02/revisiting-thread-priorit...

Or even better, for a demo, watch David Beazley's PyCon 2015 video (it is an awesome video even if you don't care about the GIL or socket programming).


Looks like this is the video referred to: http://pyvideo.org/video/3432/python-concurrency-from-the-gr...


Yap, thank you for finding it. I was just being lazy.


Think of async/await as syntactic sugar for the kind of asynchronous programming where you pass callbacks/continuations and have an explicit event loop. For example, assume that you have an http_request function that looks like this:

    http_request(url, callback)
If you want to make a request, get a value, and then make another request depending on that value, you would need something like:

    def callback(result):
        url2 = get_from_value(result)

        def inner_callback(result):
            print('Final result:', result)

        http_request(url2, inner_callback)

    http_request(url1, callback)
This is very complicated, especially because in Python it's not easy to inline callbacks like in JavaScript. And even then, it's sometimes messy and becomes a callback hell.

With async/await you could do something like:

    result1 = await http_request(url1)
    result2 = await http_request(get_from_value(result1))
    print('Final result:', result2)
The await keyword doesn't block like in Perl 6. It suspends the execution and returns control to the event loop. Then the event loop calls other functions in response to other events. All this happens in a single process and a single thread.

This model is more explicit than the Goroutines of Go or the multiple threads of Perl 6. But it has the advantage that the user has more control over when the execution control flows between different parts of the program. For example, if I have this code:

    await request1()
    # some synchronous code
    await request2()
I am completely sure that nothing else will run while the code in the middle of the two requests is running. This simplifies a lot of synchronization problems.


AFAIU await can only be used with async functions:

  async def http_request(url):
    # ...

  await http_request(u)
If the method I want to use is not an async one, is it possible to dynamically define a lambda?
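
For what it's worth, a common workaround is to hand the plain function to an executor, which gives back an awaitable (a sketch using asyncio's stock API; the function name is made up):

    import asyncio

    def blocking_io():                   # an ordinary, non-async function
        return sum(range(10**6))

    async def main():
        loop = asyncio.get_event_loop()
        # run blocking_io in a thread pool; the returned Future is awaitable
        result = await loop.run_in_executor(None, blocking_io)
        print(result)

    asyncio.get_event_loop().run_until_complete(main())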


Python has been able to do green threads (IO-based coroutines) for a long time -- see eventlet, gevent, even Twisted's inlineCallbacks pattern. There are probably others; there was even one based on "yield" as well.


Stackless Python pairs well with green threads. Very, very well. I believe PyPy based a lot of their initial work on the design of Stackless Python.


Yap, I've used and shipped code based on eventlet. And you are right PyPy supports that as well. I have never tested it using PyPy. Perhaps one day. For large concurrency cases I would probably look at Erlang/Elixir though...


C# has the Reactive Extensions API (https://rx.codeplex.com/), which is the inspiration for the Perl 6 concurrency model, which is also quite advanced (https://www.youtube.com/watch?v=JBHsdc0IVIg). So I'm not sure why you think Python is somehow unique in doing all this.


This is awesome!

I want to ask:

- Does it mean that it will be possible to create realtime apps with python? Will it be possible to use it instead of node.js? Do you think Django will implement that functionality?

- Now that WebAssembly is coming - doesn't it mean that it could be possible to create both backend and frontend for realtime apps in python?

That would be so cool....


It's always been possible to create realtime apps with Python. Callbacks have been available since the beginning, and frameworks like Tornado and Twisted have been doing something very similar to the new async/await functionality with generators since Python 2.5 (or maybe even older). The async/await keywords are a nice performance improvement and work in a few places that "yield" doesn't, but they're not adding fundamentally new capabilities.
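
For comparison, the pre-3.5 generator style looks roughly like this in Tornado (a sketch; error handling elided):

    from tornado import gen
    from tornado.httpclient import AsyncHTTPClient

    @gen.coroutine                       # generator-based coroutine
    def fetch_body(url):
        response = yield AsyncHTTPClient().fetch(url)
        raise gen.Return(response.body)  # the old way to "return" a value

    # run it:
    #     from tornado.ioloop import IOLoop
    #     IOLoop.current().run_sync(lambda: fetch_body('http://example.com'))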


> Does it mean that it will be possible to create realtime apps with python?

I'm not sure what you mean by realtime, but usually "real time" means something else and requires more deterministic behavior.

If you mean applications that use non-blocking IO to a great extent, then yes, but this depends a lot on the ecosystem providing libraries for async file IO, database IO and so on. The Python language has laid good infrastructure for this, though.

> Now that WebAssembly is coming - doesn't it mean that it could be possible to create both backend and frontend for realtime apps in python?

Not any more than we could before with asm.js, unfortunately, as seen in http://repl.it/languages/python3. The problem is the wrapping and the library; I hope a company "picks up the glove" and makes "Python for the frontend". Sharing Python code between backend and frontend could be amazing.


People have started to hijack the word realtime. It's truly sad, because now it just muddies up people's conversations. So now people have to ask, is it the one that keeps planes in the air or the one that makes sure your email badge counter updates.


It is unfortunate. In most cases you can assume until proven otherwise that when someone says "realtime" they mean "really really soft soft-realtime" (sometimes referred to as "flaccidtime") rather than hard- or soft-realtime. Who needs planes when you have Javascript?


I wish there was a better alternative term for the web sense that I could promote, just to help un-hijack the word before it gets out of control. But I don't know of one that is likely to succeed. 'Responsive' makes sense to me but it already has another meaning in the web world. What about something like 'reactive' or 'on-the-fly'? People will never go for 'near real-time' because it sounds too weak in terms of its advertising value. Someone else below me suggested 'push', which is pretty decent but again it's an overloaded term... Thoughts or suggestions? :)


I'm not much of a wordsmith, but I am leaning towards "forthcoming".


Yes, for better or for worse, "realtime" (and sometimes "near-realtime") are used in the web community to refer to push technology as opposed to time constraints.


"Will it be possible to use it instead of node.js?"

It's been possible to use Python instead of node.js since before node.js existed. And by "possible" I mean "tons of people have been shipping code", not just "it's theoretically possible but nobody does it".

Node ported a well-established existing technique to Javascript, it didn't invent it.


As for Django, I think this is an exciting idea: https://gist.github.com/andrewgodwin/b3f826a879eb84a70625


I don't know that it's possible to have a realtime application running in an interpreter (which makes no realtime guarantees as far as I know) running on top of a soft-at-best-realtime OS.


One thing I don't really get, why is the "async" keyword necessary at all? Couldn't just the mere presence of "await" in the function body implicitly make the function "async"?


I guess the reason is that it changes the type of the return value. Compare:

    def moo():
        return 5

    async def oink():
        return 5
moo returns 5, oink returns an awaitable.

For a human, without the 'async' marker, you'd have to scan the entire function source to know. I guess also type checkers and documentation tools like to have an idea about the return type.


The presence of "yield" also changes the return type. No extra keywords needed for that.


Ah right, that makes sense.


This is directly addressed in the PEP [1] - the short of it is, without the `async` modifier if you have two functions:

   def important():
       await something()

   def something():
       # ... stuff ...
       await something_else()
       # ... more stuff ...
and you refactor `await something_else()` into a new method then you've changed the return type of `something` from `Awaitable[ReturnType]` to just `ReturnType` and `important` will break at run-time when it tries to `await something()`.

  [1]: https://www.python.org/dev/peps/pep-0492/#importance-of-async-keyword


Self-documenting code, for one.

I suppose you could follow the .NET Naming convention of appending "Async" to all routines.


Me too. There will be less of a switch in mindset for me after working in JS for a while and then switching to Python for the backend.


Am I the only one surprised by Zipapp (since 2.6?). Wish I had known about this before ... https://docs.python.org/3.5/whatsnew/3.5.html#whatsnew-zipap...


Does PEX still have value?

https://pex.readthedocs.org/en/latest/


This is nice. I actually used to manually create zip archives and prepend a shebang.
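
The new module automates exactly that; a sketch (directory and entry-point names are made up):

    import zipapp

    # bundle ./myapp into an executable archive with a shebang line
    zipapp.create_archive('myapp', target='myapp.pyz',
                          interpreter='/usr/bin/env python3',
                          main='myapp.cli:main')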


Can that handle statically linking dependencies with FFI calls?


I thought % formatting was the "old way" and we're supposed to use format() in Python 3. Strange that they're adding % support to bytes now.


Thanks for bringing this up. I asked about this on reddit yesterday[1] and was linked the thread where they originally discussed this[2]. Basically, the goal is to make porting from 2 to 3 easier, and there's also a notion that printf-style formatting may be more natural than format-style formatting for bytes.

I still wish they were consistent and provided a .format method for bytes, especially since, as you point out, .format was supposed to be the "new way" to do formatting and this feels like a step backwards. And what about backward compatibility for porting Python 2 code that used .format to do formatting on what are semantically byte strings?

[1] https://www.reddit.com/r/Python/comments/3c7lne/python_350b3...

[2] http://thread.gmane.org/gmane.comp.python.devel/144712/focus...


As long as % formatting still works, it is in many ways more convenient than format. '"Foo: %s" % bar' is a lot shorter to write than '"Foo: {}".format(bar)'. And % formatting is still built into logger, so if you use logger heavily you will probably be used to it.

While the format() mini-language is nicer in a lot of ways, the convenience of % and its ubiquity mean that I generally use it more often. The main time when I reach for format() is when I need more detailed control of column widths and the like, which you can do with things like "{var:{width}}".format(var=var, width=width)


Personally I dislike % formatting even if it's a few characters shorter to type. .format is superior in every other way and has consistent-with-everything-else method call syntax.

Edit: I think it's largely the special flower overloading of the % operator that bugs me the most. If % was spelled .sprintf I'd like it more.


If you read the PEP, it explains how they are adding it for easier Python 2 -> Python 3 migration.


Lots of good stuff in here.

Python 3.5 may actually be what finally convinces me to move to Python 3 (that, and the end of Python 2.7 support coming up in a few years).

The new async stuff; the fixes to some of the problems with byte strings that made them not really an adequate replacement for Python 2.x strings; and os.scandir, which is a substantial performance improvement over os.listdir plus calls to stat (though os.scandir, at least, is also available in Python 2.7 via PyPI).
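
A quick sketch of the os.scandir pattern mentioned above (one directory pass instead of listdir plus a stat call per entry):

    import os

    for entry in os.scandir('.'):
        # DirEntry caches type info from the directory read itself,
        # so is_file() usually needs no extra system call
        if entry.is_file():
            print(entry.name, entry.stat().st_size)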


For what it's worth, even if the core Python developers discontinue support for 2.7 (and they might not, it was originally supposed to EOL this year), there is enough Python 2.7 code running in the wild, making people money, that some entity will certainly take over maintenance responsibilities. Python 2.7 is here to stay for a very long time if only because of the huge amount of legacy code written in it.


"Python 2.7 is here to stay for a very long time if only because of the huge amount of legacy code written in it."

True. It might even outlive Python 3, just as Perl 5 looks like it will outlive Perl 6.


It will stick around like Perl4 did and then fade away with time.


Let's not forget https://www.python.org/dev/peps/pep-0484/ which looks really promising


These kinds of advancements in the language are why I'm glad I've switched to Python 3, Python 2 needs to die already.


Python 2 will never die. Ask Kenneth Reitz for example. He says he will use Python 2 forever. There are a lot of companies with huge codebases who never will port to Python 3.


> Python 2 will never die.

Everything dies.

> Ask Kenneth Reitz for example. He says he will use Python 2 forever.

So, assuming this currently-stated intent never changes and Reitz is immortal, Python 2 will always be in use.

The first of those premises is somewhat suspect, the second even moreso.

> There are a lot of companies with huge codebases who never will port to Python 3.

That's probably true. It's less likely true that those companies will never port off Python 2.


> Everything dies

Python itself might die when Python 2 dies, and arguably, Python 3 makes that more likely.


In my job we have some big projects on Python 2, and Cython gives us access to some of the "new Py3 features": types (real types, not just hints) and async IO (through C primitives). We eventually chose Cython because at least we can get big speedups (depending on the code and how much can be properly cythonized, of course). With Py3 it would have been a change to the codebase (which we also had with Cython, of course) to get approximately the same performance, maybe slightly less, maybe slightly more.


This is the unfortunate truth. My company still uses Python 2 solely, not a single line of Python 3.

Unless there is some major breakthrough in porting tools (in combination with more and more libraries supporting 3), Python 2 will still probably be the only Python version in many enterprises for the next 10 years.


Note: The "What's New" document is not up-to-date yet. See https://hg.python.org/cpython/file/3.5/Misc/NEWS (warning - it takes time to load) for a more complete list.


I am a bit reluctant about asking this, but where do we stand in terms of real world performance? This is such a broad topic, I know, but surely there have to be some established benchmarks for this (perhaps popular web and scientific frameworks, game dev frameworks etc).


The answer is "all over the place". It really does depend on your benchmarks and use case. There is no aggregate real-world benchmark. Such a thing is very hard to do, and not all that portable.

For example, one performance-intensive code that I use, while mostly written in Python, has a C extension, which itself uses inline assembly for the core routine. Similarly, most numeric calculations that use NumPy will end up doing BLAS / LAPACK / ATLAS calls, so the performance limits aren't in the Python layer at all. The switch from 2 to 3 isn't really going to affect these.

If it was seriously and consistently worse, e.g. for Django, then it's likely we would have heard about it by now.

Instead, there are reports like https://www.youtube.com/watch?v=f_6vDi7ywuA (slides at https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-... ) from PyCon 2013. At minute 30:54 / slide 16 the speaker evaluates the two and finds that the median performance was effectively identical for microbenchmarks, and ~5% slower on macrobenchmarks, though even then it varies from 27% slower to 13% faster depending on the specifics, and as the speaker points out, the tested packages aren't doing exactly the same things on both versions.

In any case, that was 2 years ago.


An image-reading tool I maintain works 14 times faster in Python 3.4 compared to Python 2.7. I have no way of accounting for the difference though.


This is probably the first time I've heard of such a speedup when going from 2.7 to 3.x, and the fact that you can't account for it makes it worse, because we can't evaluate what's happening and whether such a speedup could be achievable on other projects :(


How much of this is planned to be backported to Python 2?


Python 2 is feature frozen. 2.7 gets bugfixes and security improvements and that's it; new language syntax/features only happen in 3.x.


Except that large chunks of the 3.5 standard library have been backported and are available via the cheeseshop.


While there are some Python experts around I have a question: in the world of Python 3, does the choice between 32-bit and 64-bit matter anymore? I remember a while back the only real option was 32-bit, as most libraries were 32-bit only, especially on Windows, where Python wasn't fantastically supported compared to Linux/OSX.


If you're downloading pre-built binaries like on Windows, then unfortunately yes, it does matter whether you're using 32-bit or 64-bit libraries. Windows in general has always been a bit of a second-class citizen for Python library support, although it will hopefully improve with easier access to compilers (there's apparently now a free version of the Visual Studio tools for compiling Python libraries). If you just want a lot of libraries that 'just work' you're probably best served using a pre-built distribution like Anaconda. If you're more adventurous, this page has all kinds of pre-built libraries for Windows in both 32 and 64 bit: http://www.lfd.uci.edu/~gohlke/pythonlibs/

Alternatively it's insanely easy to spin up a Linux VM with vagrant and just compile and run python scripts from there. Everything will pip install without any issues.


Windows is a second class citizen for most language library support, that being said though Python does a great job of having the most recent versions available for Windows. The same can't be said of Ruby unfortunately. :(


Many thanks for the replies. Much appreciated.


It's going to be hard to get used to having the function body 'less' indented than `def`.


Maybe this is obvious, but what would be the difference between the new async / await def and the current @asyncio_coroutine decorator?

Or between the new async / await and the current asyncio?


Does anyone retain their legacy Python 2 code, write new code in Python 3, and have the two communicate between processes or read/write from a common database? In general, keeping 2 running while writing everything new in 3?


Someone somewhere is.


I still use Python 2.7. I guess I should upgrade.


I've found this website useful for checking which packages have made the jump.

https://python3wos.appspot.com/


That site uses a very outdated list of popular packages. Use http://py3readiness.org/ instead.


Thanks!


Actually, did I speak too soon? I looked back at python3wos and they have updated their package list.

It looks a lot like the one on py3readiness.org now, and it doesn't have all the nonsense like "tiddlywebplugins" that it used to have.


How come `bytes` and `bytearray` got old-style formatting? Aren't we trying to do away with that in favor of a `format` method?


It's much more convenient to decode bytestreams (e.g. for a wire protocol) using % formatting. The lack of % formatting for bytes was a huge impediment for porting mercurial to python3, just as an example.


Could you point me to a code example? It's hard to see how old-style formatting would facilitate decoding. Typically, I think of decoding as being a process which deconstructs something; not like formatting, which is really more about constructing.


I think he just meant encoding, which is true.

I remember Armin Ronacher (author of many nice things, especially Flask) even wrote a blog post against Python 3, and this was one of the things he hated about it.
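
A sketch of the kind of wire-protocol construction in question, using the 3.5 bytes interpolation (values made up; per PEP 461, %d takes numbers and %s takes bytes):

    status, reason = 200, b'OK'
    line = b'HTTP/1.1 %d %s\r\n' % (status, reason)
    # -> b'HTTP/1.1 200 OK\r\n'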


I think the general opinion is "use where appropriate but be consistent", which I tend to agree with. There are times where the simplicity of old-style formatting just gets the job done, but it certainly has its limitations, and the new style can be extremely powerful.


I hate the new @ operator for matrix multiplication, it just looks ugly. (And I use numpy matrix math all the time.) But whatever.
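
For concreteness, the operator in question (assuming a numpy recent enough to implement `__matmul__`, i.e. 1.10+):

    import numpy as np

    A = np.random.rand(3, 3)
    B = np.random.rand(3, 3)
    x = np.random.rand(3)

    y = A @ B @ x       # vs. A.dot(B).dot(x) or np.dot(np.dot(A, B), x)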


As a fellow numpy user - why? I would love having sugar for this, I use it all the time. Introducing a single operator to overload seems odd but it's definitely a pragmatic solution to making code more readable. What would you do instead?


Personally, I'd have made it a digraph of some sort, like :* or ^*, to make it obvious to someone looking at Python math code for the first time that this is a multiplication operator. @ just leaves a weird taste here.


Agreed; this needs an operator, but it doesn't need a single-character operator, and the operator should have had * in it.


The pros and cons of different operator possibilities are discussed extensively in the PEP that added the matrix multiplication operator.



I love the snark in that PEP:

> APL apparently used +.× , which by combining a multi-character token, confusing attribute-access-like . syntax, and a unicode character, ranks somewhere below U+2603 SNOWMAN on our candidate list.

and then later, one of the reasons @ is better than the alternatives:

> Whatever, we have to pick something.


What about the & char?


& is the bitwise and operator.


Yep, forgot. Thanks.


Not particularly well, though. No actual explanation was given for rejecting ^(star), and :(star) wasn't even mentioned.

(Aside: HN really needs a better way to escape star inline in text.)


It sounded like there was a bit of general preference for a single-character operator, which is a sentiment I can get behind.


OCaml has * for int multiplication and *. for float multiplication. You cannot overload, but you can define your own operators, so people tend to do things like define *: for complex multiplication.


In R %*% is matrix multiplication.


Python 3.5: In which your email address became syntactically correct.


What alternative would you propose?

Given that having an operator for this was deemed important for more readable numeric code, and that * was already taken, there weren't too many fabulous choices left.



Also it feels like wasting a character on something that only a small number of users are likely going to need.


Not necessarily. You can overload @ to do some other kind of object combination, which fits your paradigm better. You're not tied to using it for matrix multiplication.
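
A sketch of that: @ simply dispatches to __matmul__, so any class can define it (the Pipeline semantics here are invented for illustration):

    class Pipeline:
        def __init__(self, funcs):
            self.funcs = list(funcs)

        def __matmul__(self, other):     # p1 @ p2 composes the two pipelines
            return Pipeline(self.funcs + other.funcs)

        def __call__(self, x):
            for f in self.funcs:
                x = f(x)
            return x

    clean = Pipeline([str.strip]) @ Pipeline([str.lower])
    print(clean('  HeLLo  '))            # -> 'hello'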


We should appreciate the contributions of Serhiy Storchaka, who appears to be in Ukraine (or Kazakhstan, or Tajikistan).


Oh my god they finally removed the GIL!


Umm, no? https://docs.python.org/3.5/glossary.html#term-global-interp... - "The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at a time"


Hmm? Where does it say that?


Does it really matter if nobody is going to upgrade past 2.7?


Looking good. Soon I'll just be waiting for the announcement that they have sped things up 10X, and I might finally be able to let go of my Python ennui.



