Python 3.3.0 released (python.org)
346 points by cx01 on Sept 29, 2012 | hide | past | favorite | 110 comments



The real advantage here is the inclusion of Armin's u'' syntax proposal:

http://www.python.org/dev/peps/pep-0414/

In a nutshell, the 2.x syntax for declaring a unicode string is now valid again (although redundant). From the PEP:

In many cases, Python 2 offered two ways of doing things for historical reasons. For example, inequality could be tested with both != and <> and integer literals could be specified with an optional L suffix. Such redundancies have been eliminated in Python 3, which reduces the overall size of the language and improves consistency across developers.

In the original Python 3 design (up to and including Python 3.2), the explicit prefix syntax for unicode literals was deemed to fall into this category, as it is completely unnecessary in Python 3. However, the difference between those other cases and unicode literals is that the unicode literal prefix is not redundant in Python 2 code: it is a programmatically significant distinction that needs to be preserved in some fashion to avoid losing information.
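A minimal sketch of what PEP 414 restores (any Python 3.3+ interpreter; the literal itself is just an illustration):

```python
# Python 2 marks unicode literals with a u prefix; in Python 3 every
# string literal is already unicode, so the prefix is redundant there.
# Python 3.0-3.2 rejected it outright; 3.3 accepts it again (PEP 414),
# so single-codebase libraries don't need to rewrite their literals.
s = u'caf\u00e9'   # SyntaxError on 3.0-3.2; valid on 2.x and 3.3+
t = 'caf\u00e9'    # the plain Python 3 spelling
print(s == t)      # True on Python 3.3+
```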

This version of python should see more uptake by 2.x developers as it is now easier to port.


Although it was a pain, it was manageable beforehand and in fact, if you need to support 3.1 or 3.2, you will still have to manage it a different way.

Many of the features detailed in the release list are more helpful in general. 'yield from' is actually really good if you are using generator based coroutines, the wide/narrow build thing addresses a long-time pain point, it will be great if namespace packages are actually fixed by now, and adoption of virtualenv into the core is a big deal!
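For reference, a minimal sketch of `yield from` (PEP 380, runs on 3.3+):

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    # Delegate to the sub-generator; before 3.3 this needed an explicit
    # "for x in inner(): yield x" loop, plus extra plumbing to forward
    # send()/throw() when used for coroutines.
    yield from inner()
    yield 3

print(list(outer()))  # [0, 1, 2, 3]
```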


This would have made it so much easier to create a version of Crunchy that could run transparently on either Python 2.x or 3.x. However, as much as I applaud this change, there are other new features in 3.3 that are just as important imo.


There are two significant factions in the Python community: the scientific group and the web-dev group. The scientific group is pretty much on board with Python 3. Perhaps due to unicode handling intricacies, the web-dev group isn't exactly on board with Python 3 yet. But this needs to change, and it takes leadership. Fortunately, bits and pieces such as webob are Python 3 compatible. And personally, I feel the Python web frameworks need a fresh redesign.


I work in both camps, and I always felt it was the web developers that were better about moving on to 3. Scientists in general are loathe to risk breaking well-tested code for the sake of new features. In my research group, in fact, we've just barely moved on to 2.7 from 2.4! I doubt there will be any motivation at all for moving to 3 any time soon.


I doubt that web developers are better about it, my reasoning follows:

The #1 Python web framework, Django, is a bit ahead of the curve but still hasn't released a major version that supports 3. Django's huge ecosystem of addons is not going to catch up for some time after that. And users are not going to develop green-field Django apps on Python 3 until some time after that. So the timeline for serious use of Python 3 with Django should still be counted in years, which means that the majority of Python web development will not be done with Python 3 for a while. (If Flask and Werkzeug had been out ahead of this issue, it could have eaten up even more of Django's market share, but it looks like it will actually be behind in adopting Python 3). I don't think any of the Zope frameworks or ZODB, etc. support 3. And I think between those three you probably have most of Python web development accounted for.

Pyramid, Bottle, Tornado and CherryPy DO support Python 3 but aren't nearly as widely used. App Engine needs a new runtime and that could be years. I don't think that Web2py or web.py have released versions which support Python 3, though both probably have something in the works.

Since web development is so heavily framework-driven, the majority of web development on Python is not going to move on to 3 until Django and Flask ecosystems move over, or everyone switches to Pyramid.

And due to the requirement for backward compatibility, Python 3 features will not be widely used for a while. (Suppose you make a widely used extension: you are not going to lock out people using 2.7, which means you stick to the common denominator of features and supply Python 2 implementations of anything you need from new Python 3 libraries.)

I would contrast this with the very large number of general Python libraries which already support Python 3. So I think the Python web development world is well behind the rest of the Python world.

It's a mess but I think that Python 3's improvements are worth not straitjacketing the future of the language into the decisions made in 2.


At KyivPy#8 I heard about a new lightweight framework supporting Python 3 - wheezy.web. But as you said, it is not mainstream among web developers.

I think the most popular Django apps will move to Python 3 as a lot of people are interested in it. Time will show us how Django with Python 3 support changes the current situation.


Research software typically has a very small user base and can quickly become out-of-date, so there's no great incentive for current software packages to upgrade. But I'm talking about the developer community of scientific Python, not the user community. That community includes projects such as numpy, scipy, matplotlib, etc., which are pretty much Python 3 compatible. This makes it easy for new scientific projects to jump on board with Python 3.


Yeah, a lot of research software in academia is written by grad students with no formal CS education who're just interested in completing their dissertation and graduating. After they've graduated and left, the codebase just languishes until the next grad student comes along, who spends a couple months trying to figure out all the spaghetti code and ridiculous hacks used by the previous student to complete their thesis in the fastest time possible. After this process repeats itself 3 or 4 times, the codebase is pretty much useless and has to be thrown out.


> a lot of research software in academia is written by grad students with no formal CS education

Are you sure? Because that sounds like a complete oxymoron. You may have meant something else. Could you clarify why grad students lack formal CS education?


Yes, I'm sure. I'm referring to scientific software - that is, software written in fields such as biology, chemistry, physics, and (non-computer) engineering. There are many grad students and professors in these areas who write software for their research that have no formal background in CS.


OK - I think I'm surprised to know that those fields do write software more than anything. Thanks for the clarification.


I wouldn't say they write software more than anything. I would say that there's a steadily increasing amount of time and energy invested into writing software, and very poor coding practices are used because of their lack of a formal grounding in CS and their short-term focus on obtaining the results necessary to publish their next paper and/or obtain their graduate degree.


That's like a classic scientist - software engineer argument. The scientist says he wants to prototype with quick dirty code while the software engineer wants to write great refactored code.


Yes, in those fields we literally "do write software more than anything" [else we do]. That is, most of what we're doing is writing software. So the fact that there's no formal training is a huge waste.


A member of my family is a post-doc here at Stanford doing cutting-edge genetic research on cancer in one of the brand new bazillion-dollar buildings that just opened. I write code for everything; I'll probably be writing code to brush my teeth for me someday, so I've offered to teach him to program. He doesn't show much interest. He says that there is only one member of his research team who can code--the PI--and he only knows Perl ("which is what they used in his bioinformatics program"). But, "He doesn't really write code anymore."

I don't know why I'm always surprised to hear this. You'd think I'd learn. I recently asked him if his research didn't require a lot of gene sequencing and analysis of the sequences and if that wasn't pretty computationally demanding. "Oh, yeah, sure it is. That's why we outsource it."


> There are two significant factions in the Python community: the scientific group and the web-dev group.

You forget the animation group. And the hardware/embedded group (Raspberry-Pi, anyone?). And the C-wrapper-writing group. And the sysadmin group. And and and...

The Python ecosystem is very large and diverse. That's part of its strength, and part of its weakness as well (it's very difficult to "herd" all these people, as this 3k migration has shown), but don't make the error of reducing it to the most vocal sectors -- they're not necessarily the most significant ones.


http://www.python.org/dev/peps/pep-3333/

Last-Modified: 2011-01-16 09:57:25 +0000 (Sun, 16 Jan 2011) ... Status: Final ... Created: 26-Sep-2010

Until this was made final (January 2011), web frameworks that use the wsgi spec had no python 3 path short of ditching the wsgi spec.

Now that it is final, there is a path to python 3 for web-dev. CherryPy got there first, I think. Pyramid/WebOb got there, and others as well, and I'm sure Django will get there soon enough.

I'd say web-dev adoption of python 3 has been pretty swift.
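For context, the contract PEP 3333 nails down is small; here is a hedged sketch of a minimal app, exercised with a made-up stub caller (no framework API implied):

```python
# A minimal PEP 3333 (WSGI) application: on Python 3 the status line
# and header values are native str, but the response body must be an
# iterable of bytes.
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, world\n']

# Exercise it without a real server; fake_start_response is a stub
# invented for this illustration.
captured = {}
def fake_start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(app({}, fake_start_response))
print(captured['status'], body)  # 200 OK b'Hello, world\n'
```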


Django has unofficially supported python 3 for a year.


Wasn't experimental support only announced just over a month ago?

https://www.djangoproject.com/weblog/2012/aug/19/experimenta...


There were unofficial versions of Django (forks, but not in the hostile sense) ported to Python 3 before that. The announcement marks the work being done in the main Django codebase, meaning that the next release will work on Python 3.


So, for about as long as PEP 3333 has been approved. That was my point. No web framework that uses wsgi (pretty much all of them) was going to move to python 3.x without it.


> The scientific group is pretty much on board with Python 3.

Ah .. NO! I can assure you that we are still stuck with Python 2.x (2.7 to be more precise). I am not too sure the others would have moved either.


There is movement, though - lots of the core stuff is already ported to Python 3, and once matplotlib 1.2 comes out (any day now), it's quite feasible to do serious work with Python 3.


Can you list the reasons?


I deal with a lot of text data so I use NLTK for Natural Language Processing. In addition, matplotlib for publication quality figures and graphs. I don't really have a motivation to shift to Python 3.


Have you looked at Pyramid? It's pretty awesome, really the ideal framework philosophy in my opinion (i.e., actually a framework instead of attempting to be a crappy domain-specific language/replacement standard library). It's also Python 3.x compatible and has been for several months (they replaced the one library they had been holding out for).

I gave Django a couple of truly honest tries, but I just couldn't bring myself to tolerate it. Pyramid is by far the best.


Python 3 support in django is coming soon....!


I think that Django support will probably speed up a lot the Python 3 adoption


I believe that the lack of WSGI held web-dev group back for a long time. PEP3333 (WSGI for Py3k) was finalized in early 2011, over two years after the release of 3.0.


As a PHP developer I've been waiting for the web-dev group to jump on board with Python 3 before I make the switch. I'm hoping it will be sooner rather than later.


I'm setting aside the question of whether or why to switch. I'm also setting aside the possibility of starting to learn now on Python 2.7, which is what I think you should really do assuming you don't intend to procrastinate it ;)

Assuming you do want to wait on Python 3 adoption, your timing should depend on the framework you want to use, because effectively each one has its own community and ecosystem, and their adoption is at completely different rates.

If Pyramid looks good to you, for example, it already is on board with Python 3. Bottle is on board with Python 3. If you want to use Django, which is what most people will want to do, you should just wait on Django to release a Python 3 version, and Django should be usable on Python 3 within the year. If you want to use Flask (considered the closest analogue of Ruby's Sinatra) then it could take a while.


I see a couple of mentions in this thread of Flask taking a while to adopt Python 3. I am relatively new to Flask; could you explain why they seem to be behind with regard to Python 3?


The Flask author's post about issues he has faced with Python 3 might give some insight to that: http://lucumr.pocoo.org/2011/12/7/thoughts-on-python3/


There is some work currently going on to make Flask run on Python 3.2.

https://github.com/puzzlet/flask/commits/py3-dev


Few core developers (mainly one, Armin) who couldn't be bothered enough. It's a volunteer project after all.

He did write about the unicode problems with the Python 3 changes and the need for an improved WSGI spec (heck, he even co-wrote the unicode literal change PEP).

But after the new WSGI spec was out, and the u thing was already implemented in the 3.3 pre-release, there was not much motion in Flask, whereas Pyramid, Django and others had already started work.

Even the "When will Flask support Python 3" document has not been updated and its contents are 2 years out of date.


Is the difference between 2 and 3 so huge that it is not possible to make a jump?


Here is my thought process...

Regardless of how you feel about PHP, I've spent enough time with it that I can make it work and write good code, quickly.

I do want to switch to another language at some point (I'll leave the reasoning for that out of this discussion).

Why switch to Python 2.7 if it's going to be old in a year or two? Even if there isn't a huge difference between 2.7 and 3.x it sounds like it would be a nightmare trying to upgrade any of the old work that I would have done in 2.7. So either I've got some projects that will be on 2.7 forever, or I go through the hassle of trying to upgrade. I'd rather just wait and avoid the whole debacle.


There are ways to write Python2 code that will work when you make the switch to Python3. It _is_ a hassle, but it's quite doable.
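As one hedged example of that single-source style, `__future__` imports let the same file behave consistently on both interpreters:

```python
# Opt in to Python 3 semantics from Python 2.6+ source; these imports
# are harmless no-ops when the same file runs on Python 3.
from __future__ import print_function, division, unicode_literals

def mean(values):
    values = list(values)
    return sum(values) / len(values)  # true division on 2 and 3 alike

print(mean([1, 2]))  # 1.5 under both Python 2.7 and Python 3
```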

This may not convince you (and I perfectly understand why), but I think it's worth giving Python a shot even if you're fluent in another language.


Interesting, I'll look into it!


It's definitely possible (and not really hard), but I understand the mindset: if you're coming from another platform, why bother with switching when you can wait a bit and just start with Python 3?


Because Python 2.7 is completely mature and production-ready, and it also allows you to write code which runs pretty easily on Python 3. By the time Python 2.7 is actually marked as deprecated, you would otherwise have been just getting started.

Really this is down to whether you want to do something in Python now, or you want an excuse not to do it in Python for some time. It's not a bad excuse but it's also not a strong reason to wait.


Exciting. Python 3.3 is the first release of the Python 3 series which makes me go from "I'll have to come around to use Python 3" to "dammit, why isn't my codebase under Python 3 yet?". There's a bunch of neat stuff, minor and not so minor.


Any specific examples that got you excited? I'm going through the new features list and I can't point to one that wowed me.


It's more of an accumulation of things ending up making 3.3 way superior to 3.2 in the end. But,

* The feature I like most is definitely generator delegation, it significantly improves more extensive uses of generators and iterators, and makes generator-as-coroutines a much more interesting proposition (before delegation, calling another "coroutine" would be rather painful, now you essentially just have to tack a `yield from` in front, as you'd tack an `await` in C# 5.0)

* The flexible string representation finally fixes narrow build's issues with astral planes, which is becoming rather important as astral planes include e.g. emoji, and it significantly reduces the possibility of bugs when working with astral planes (as there's no more behavioral difference between "narrow" and "wide" builds)

* We'll have to see how they're used, but namespace packages could significantly clean up... well, namespaces (and let multiple separate libraries live in the same namespace, without having to resort to PYTHONPATH hacks or setuptools tomfoolery)

* A built-in, clean implementation of contexts/scopes (collections.ChainMap) I can already see plenty of use for. Same for signatures, there's high hijinks potential in that one.

* The rest really is about a better experience all around: reduced memory, "unicode literals" (for Python 2 compat), ElementTree fixups, ...
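The ChainMap mentioned above, as a small sketch (Python 3.3+; the settings dicts are made up for illustration):

```python
from collections import ChainMap

# A ChainMap looks up keys in each mapping left to right, which makes
# layered scopes (e.g. CLI args > config file > defaults) almost free.
defaults = {'color': 'blue', 'user': 'guest'}
overrides = {'user': 'alice'}
settings = ChainMap(overrides, defaults)

print(settings['user'])    # 'alice' -- found in overrides first
print(settings['color'])   # 'blue'  -- falls through to defaults

settings['color'] = 'red'  # writes always go to the first mapping
print(overrides['color'])  # 'red'; defaults is untouched
```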


Not exciting, but virtualenv in core is nice.


I've done quite a bit of python development. One of the things that really bugs me about python is how they keep breaking older stuff. Other languages have been much more careful about maintaining backwards compatibility and I think that is a big factor in the retention of users.

Having to re-do any part of your code from one release of a language to another became a real deal breaker for me.

For an interpreted language that problem is even worse because you don't know you have a problem until that bit of code gets hit.


Be careful what you wish for. The extreme alternative is Java, whose slavish obsession with backwards compatibility has effectively crippled the language. There's enough cruft in the standard library to keep newbies guessing for years, plus fundamental language design failures like type erasure and "beans".

At some point you need to burn bridges in order to move forward. Doing this frequently destroys the community. Never doing it also destroys the community, it just takes longer.

Personally, I think Python is managing pretty well. Yes, it's occasionally painful, but it's less painful than stagnation.


This exact point is the basis for semantic versioning. A major version change, e.g. 2.x.x to 3.x.x, is expected to break backwards compatibility for the sake of progress. If you are happy with 2.x.x, stick with it; just don't expect any new features, only maintenance of the language and the libraries built around it.

http://semver.org


I don't think C# ever broke backwards compatibility and they have no problems adding plenty of nice features every release.


I think in Python's case there was really no compelling reason to burn the bridges between 2 and 3. Burning the bridges should be done when something will be significantly improved.


Python's Unicode support was significantly improved between 2 and 3.
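Concretely, Python 3 makes the text/bytes boundary explicit; a small sketch:

```python
# Python 3 separates text (str, a sequence of code points) from binary
# data (bytes), removing the implicit ASCII coercions behind many of
# Python 2's surprise UnicodeDecodeErrors.
text = 'na\u00efve'          # 5 code points
data = text.encode('utf-8')  # 6 bytes: \u00ef encodes to two bytes
print(len(text), len(data))  # 5 6

try:
    text + data              # mixing the two is a TypeError in Python 3
except TypeError:
    print('no implicit coercion')
```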


I write a fair amount of Python 3 and I don't agree at all.

If your concern is that code written for 2.x won't work in the future, it will be some years until 2.7 is no longer getting security fixes etc. The mere existence of 3.x doesn't prevent you from using 2.7.


Well said.


One of the things that really bugs me about python is how they keep breaking older stuff.

That's a fair point, and it applies to a lot more than just programming languages. Operating systems and browsers also come to mind, for example.

On the other hand, if you insist on maintaining strict backward compatibility indefinitely, you have increasing drag on every useful new feature you want to introduce. You also can't remove edge cases that should ideally never have been there, even if they make it easy to introduce a bug.

In programming languages, this is the C++ effect. Building on the familiar foundation of C was a good decision by Stroustrup in the early days, and I'm sure it contributed greatly to C++'s success. On the other hand, today I believe that C++ is holding back large parts of the programming community, by being good enough in its niche that huge numbers of projects stick with it, yet lacking the expressive power, broad standard library functionality, and clean syntax/semantics that we take for granted in numerous modern programming languages.

In general, the goals of stability and progress are always going to be in conflict for any platform-like software. Such software essentially defines a standard for others to program against, and the entire point of standards is to create stability and common ground, but sometimes old standards don't adapt well to incorporate new ideas.

I suspect the best we can ever do is restrict major changes, which in practice means those where the old code cannot be automatically converted to get the same behaviour on the new system, to major releases. Minor changes that can be automatically converted are much less of a problem, as long as the "breaking" version of the platform comes with a simple conversion tool.

To be fair to the Python developers, this is essentially what they've done with the jump from Python 2 to Python 3. There is a tool to deal with converting the trivial changes, and most of the breaking changes were in the initial jump and acknowledged as such.

There probably is a case for making fewer, if any, breaking changes in minor releases. On the other hand, if you're looking at an estimated period of five years to migrate the bulk of the community from one major version to the next, there is probably a fair case for allowing a few smaller but incompatible changes in minor versions as well, as long as their effects are clear and only within a tightly controlled scope so they don't unduly disrupt everyone they aren't there to help.


You're spot on. My basic expectation is that software that I write today will run 10 years from now, unmodified. Things like this give me 'upgrade fever', the fear that if I upgrade a machine something will break and I won't find out about it until it is too late. So I stick to writing software like that in compiled languages now. That requires a bit more work (ok, sometimes a lot more work) but the increased reliability across upgrades is worth it for me.

Other than the language itself being incompatible with previous releases, there is the added burden of having to maintain a whole ecosystem, not unlike many frameworks and their plug-ins.

Many people will write some module or other and will make it available for others to build on, and then an upgrade will break the module. The module creators have since moved on and are no longer supporting their brainchildren.

Fortunately with open source you actually can fix these problems - most of the time - but there is not always time or opportunity to do so.

Backwards compatibility is what made microsoft a dominant market force, I believe that you mess with it at your peril.


It is becoming harder and harder to find backwards compatibility for binary interfaces even in the open source world, where libraries move fast and finding old versions of libraries is becoming more and more of a pain. If you dynamically linked against something, then 10 years from now, unless you kept all of those libraries and their supporting files around, the program won't run anymore.

Backwards compatibility in Windows definitely made Microsoft a dominant market force, but it has also led to the Win32 API stagnating, and it becoming increasingly more fragile as time goes on. Also in all the old code paths, all those not often taken branches there are various bugs just waiting to be found and to be exploited. Yes it has gotten much better, but still.


If you're talking about breakage between version 2 and 3 that is sort of what the major version number bump implies. I don't do much python work these days, but if stuff is (deliberately) breaking between minor releases I would agree that's not very nice.


Is this a 2x -> 3x gripe, or is there something specific in this release that breaks compatibility?


If anything it should help, what with the u"" syntax being allowed now.


I gave up before this release.

edit: oh my, that's a lot of downvotes for answering a question.


Python deliberately broke stuff in the 3.0 release, in order to get rid of some (minor) cruft, but other than that it is as backwards compatible as anything else.

Complaining about backwards compatibility on the 3.3 release news item makes people think you have some specific issue in mind.


Could you provide an example where a minor Python release broke your code and it was not a bug in Python?


> you don't know you have a problem until that bit of code gets hit

The easiest way around this problem is to have tests and run them before you upgrade your production boxes. It's not that hard to do.
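For instance, a tiny hypothetical smoke suite that pins down interpreter behavior you depend on (the test cases here are invented examples):

```python
import unittest

class UpgradeSmokeTest(unittest.TestCase):
    # Assert semantics your code relies on: if an interpreter upgrade
    # changes them, the suite fails before production does.
    def test_floor_division(self):
        self.assertEqual(7 // 2, 3)

    def test_round_half_to_even(self):
        # round() switched to banker's rounding in Python 3
        self.assertEqual(round(2.5), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(UpgradeSmokeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```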


> they keep breaking older stuff.

What are specific APIs or things they keep continuously breaking? Did you have issues with transition from 1 to 2 and now with 2 to 3.


Is there a LINT for Python? It looks like there was a question about them on Stack Overflow, but it was closed. http://stackoverflow.com/questions/5611776/what-are-the-comp...


Aside from pylint which everyone has already mentioned, there's also pyflakes.

http://pypi.python.org/pypi/pyflakes


Pylint is the most comprehensive, even if too noisy by default. After editing ~/.pylintrc to disable some overly strict checks it's pretty decent.


pylint is probably the most commonly used:

http://www.logilab.org/857


The problem is in the type system. Dynamic typing makes it difficult to ever change APIs, because a change could break anything that depends on them. pylint and other such tools help a little bit, but they can only do so much when the language doesn't support static typing.

Another Python feature that makes upgrades hard is monkeypatching, where one piece of code can inject arbitrary code into another piece of code. In many dynamic languages, this is often used to patch the behavior of core classes like String. I don't know how common this is in Python, but I've definitely seen it before. By erasing the distinction between interface and implementation, this makes it difficult to ever change the implementation of anything.

People say that writing more and more unit tests will solve these problems. But guess what? The more unit tests you have of an API, the more unit tests you have to change when that API changes. Unit tests are good and should be written in any language, but they are hardly a substitute for static typing.

The end result of all of this is that dynamically typed languages usually follow a trajectory where there's an initial burst of enthusiasm about some cool syntax, followed by a lot of code being written, and then a gradual descent into a compatibility tarpit, where nothing can be changed because of fear of breaking working code. Only additions can be made, and the language gradually grows uglier and uglier. Dynamically typed languages approximate Bourne shell more and more as time goes on-- a dozen slightly incompatible implementations, ancient quirks that bite hard on newcomers, and a resignation that this is the "best it gets."

Sometimes there's a burst of irrational hope towards the end of a language's lifetime. Perl 6 and Python 3 are good examples of this. Developers go into their happy place and forget about the big bad compatibility bear that's been chasing them. But it's just a fantasy-- dynamic languages can't escape from the tarpit in the end, and nobody adopts the new thing.


Java is statically typed and it is ugly as hell. It also breaks stuff between major versions.

Implying that a language is doomed to be ugly or hard to evolve because it doesn't use static types is not a logical conclusion.

Besides, Perl 6 is not Perl 5 continued, and never pretended to be. Perl 5 continues to evolve (e.g. Moose) and is not getting uglier. You could argue that it was already ugly when it was created, but it is not growing uglier.


Monkey patching is a clear violation of the API contract and is expected to break when versions change.

It's really not clear to me how your claims can be backed up. Lack of static typing is a fundamental feature of Python, it's part of the design philosophy. The majority of the language functionality is built in the standard library, where modules and interfaces can be swapped out and deprecated easily. Modules in pypi can have different codebases for different interpreter versions, and programs/projects can declare their dependencies using virtualenv; multiple Python interpreter versions can coexist on the same system. Your claim boils down to saying that static typing allows languages to develop faster and cut down maintenance costs, but I don't think you can conclusively demonstrate either point.


virtualenv is a symptom of the disease, not the cure. People get stuff working with a specific version of all the libraries and of Python, and then they forget about upgrading. And why should they upgrade? It will just break their code in mysterious ways. It would be different if the compiler could tell them about changes in type signatures, but it can't.

In the real world, monkeypatching happens. And you may not always know that your libraries or framework are doing it, either. Ruby on Rails does extensive monkeypatching, even of core classes such as String. I don't think you can patch str in CPython due to implementation issues, but that's not the point.


Ruby's culture is FAR more tolerant of monkeypatching than Python's culture - monkeypatching str, for example, just isn't a thing you would do. So what constitutes 'the real world' differs between the two.


> Another Python feature that makes upgrades hard is monkeypatching

You can't alter core classes (like str) in Python, and patching classes from other modules is considered a really weird, rare, bad practice.
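A quick sketch of both halves of that claim (the Greeter class is made up for illustration):

```python
# Pure-Python classes can be monkeypatched freely...
class Greeter:
    def hello(self):
        return 'hello'

def shouty(self):
    return 'HELLO'

Greeter.hello = shouty        # inject a replacement method
print(Greeter().hello())      # HELLO

# ...but built-in types like str reject attribute injection outright.
try:
    str.shout = lambda s: s.upper()
    patched = True
except TypeError:
    patched = False
print(patched)                # False
```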


Another Python feature that makes upgrades hard is monkeypatching

Although possible, it's my understanding that monkey patching is greatly frowned upon in the python community. I've been programming python on and off for a decade and can only think of one time when I've monkey patched something, and I felt really bad about it. The only library that I know of that uses monkey patching is gevent, and it doesn't do it by default but only if you've explicitly told it to.


Not mentioned among the major features: Windows builds have finally moved to Visual Studio 2010.


It seems strange to me that they use Visual Studio rather than mingw/msys. Do any HN readers know why that is?


I don't know why they don't, but I'd give serious thought to not supporting Windows at all if I had to use mingw/msys. And this is coming from somebody whose first order of business on a Windows machine is to install cygwin. The environment is just Not Friendly.

Visual Studio is the vendor-suggested way of building C++ and it's free besides; there's not really a good reason not to use it.


I've found mingw and msys to be very fast and easy to use.

In my experience it's been Cygwin that's a bloated, non-intuitive, awful monstrosity to be avoided at all costs. I've had great difficulties in the past with Cygwin DLLs, particularly when a program comes with its own Cygwin DLL which is a different version than the system Cygwin.

I haven't used Cygwin since 2005 or so due to these difficulties.

For Windows virtualization solutions, my first stop these days would be Virtualbox, qemu or the like. Second preference is mingw/msys.

I've also had a positive experience with a little-known solution called Colinux [1], essentially a Windows port of another little-known technology in the Linux mainline kernel called user-mode Linux. Colinux requires some setup, especially if you want graphics (for GUI work, you need some remote desktop with a Windows client like Xming or VNC). Again, I'd recommend VirtualBox or qemu for casual use, but Colinux is an interesting technology, and I've found it gets very good performance.

My negative experiences with Cygwin were so great that it is only something I use when there is no alternative available. And in preference to all of these is simply using Linux, but sometimes that's simply not an acceptable alternative to Windows (i.e. when you're making a product that you want to run on all popular OSes, supporting only Linux seems like a recipe for disaster. See Sage [2] for an example of an open-source project whose official line on Windows compatibility is "use VirtualBox.")

[1] http://www.colinux.org/

[2] http://sagemath.org


All your points (well, aside from mingw itself) are great ones. Cygwin is 100% a monster. But it's a monster that works the way I expect it to. MSYS is often lacking in things I consider basic that I miss going from OS X to Windows and trying to configure it is a super-pain-in-the-ass.

Colinux is actually pretty cool, but the need for an X server and the setup time is a pain in the ass.


> It seems strange to me that they use Visual Studio rather than mingw/msys. Do any HN readers know why that is?

More widespread, easier to handle, more accepted among windows developers.


Because none of the core contributors use it and no one has contributed patches that make it work there.

(Windows contributor who did the VS2010 work)


Just after Visual Studio 2012 was released?


Yes. Also keep in mind that Visual Studio 2010 is the last one which will work on Windows XP and Windows Vista.


The binaries are still backwards compatible; I hope wherever the Python dev team is building the binaries, they aren't still running 32-bit or Vista.


We closed 3.3 for features months ago (June, maybe?) and it wasn't released. I would have liked to move to 2012 myself but the timing wasn't right.

(I did the 2010 changes)


> The Python interpreter becomes aware of a pyvenv.cfg file whose existence signals the base of a virtual environment’s directory tree

As someone who hated the .ini configuration for logging in python 2.5, this smells a bit.
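For context, the file in question is tiny; a typical pyvenv.cfg looks something like the sketch below (the exact paths and values will vary by install):

```
home = /usr/local/bin
include-system-site-packages = false
version = 3.3.0
```

It's key = value pairs only, with no sections, so it's a good bit simpler than a full .ini.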


As someone who has used PHP and the infamous php.ini, this worries me too.



"•A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications"

That in itself should make a few Python gamers happy. It's also serious motivation for users of older versions; not all of them, but more and more. There are many other interesting developments others have highlighted already.


And 80x is apparently actually an understatement, at least for a few cases. Some numbers recently posted to python-dev show up to a 124X improvement:

  Precision: 9 decimal digits

  float:
  result: 3.1415926535897927
  time: 0.113188s

  cdecimal:
  result: 3.14159265
  time: 0.158313s

  decimal:
  result: 3.14159265
  time: 18.671457s


  Precision: 19 decimal digits

  float:
  result: 3.1415926535897927
  time: 0.112874s

  cdecimal:
  result: 3.141592653589793236
  time: 0.348100s

  decimal:
  result: 3.141592653589793236
  time: 43.241220s


Impressive stuff, though what struck me was the results, at least accuracy-wise, and what rounding they are using:

Pi is 3.14159 26535 89793 238....

So I do wonder what rounding they are using; even truncating as I have (the next digit is 4, so a good place to do that), you can see the last digit should be at least an 8, and worst case the 7 and 6 look off too!! Then again this may be a convention, or a result of the method used to calculate Pi.

As for floats, well, for accuracy I'd go with cdecimal right there, though is it as accurate? I suspect it is the formula used that induces the minute error in the results.

http://en.wikipedia.org/wiki/Pi #21 reference


This is from the decimal benchmarks included in the Python source [1]. In the recipe given in the decimal documentation [2], the precision is increased for the intermediate steps of the algorithm, so it gives the correct end result.

  >>> pi()
  Decimal('3.141592653589793238')
1. http://hg.python.org/cpython/file/344d67063c8f/Modules/_deci...

2. http://docs.python.org/library/decimal.html#recipes
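For anyone curious, the recipe pads the working precision during the loop and trims it back at the end, which is why its last digits come out right. A sketch of it (adapted from the decimal docs recipe; behavior checked against the default 28-digit context):

```python
from decimal import Decimal, getcontext

def pi():
    """Compute Pi to the current precision (decimal docs recipe)."""
    getcontext().prec += 2  # extra digits for intermediate steps
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2  # restore the caller's precision
    return +s  # unary plus rounds the result to the restored precision

print(pi())  # Decimal('3.141592653589793238462643383') at the default prec
```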


Thank you. Out of interest, an application I work on every now and then showed a 50% improvement with the code as is; it's not decimal-heavy at all, mostly ints, though still a nice speedup.


Gamers? Why are they heavy users of the decimal module?


I've always been uneasy about the increasing use of yield / generators in Python these days, for instance ndb in Google App Engine, Twisted, etc. 'yield from' should make managing more complex uses of generators much easier. The question for me is: will this make me like widespread use of generators more, or will it just make the use even more widespread, while still being kind of a pain to reason about, debug, etc.?
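A small sketch of what 'yield from' buys you (per PEP 380): the outer generator delegates to the inner one, and, unlike a manual for-loop, it also forwards send()/throw() and captures the sub-generator's return value.

```python
def inner():
    yield 1
    yield 2
    return 'done'  # in 3.3, a generator's return value rides on StopIteration

def outer():
    result = yield from inner()  # delegates all yields, collects the return
    yield result

print(list(outer()))  # [1, 2, 'done']
```

This is exactly the plumbing that generator-based coroutine frameworks previously had to hand-roll with trampolines.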


Which Python version would you recommend to someone just starting to learn the language? I know "Learn Python the Hard Way" focuses on Python 2.7, and "Learning Python (4th Edition)" focuses primarily on Python 3.


Attending PyCon Ukraine 2011, it was very interesting to hear about Distutils2. But I see that in the end it is still not in Python 3.

But even without it I am going to try Python3 in my projects (hello Django 1.5, Tornado, Wheezy.web, jinja2 and a lot of others).


Programmer pr0n straight up. I can't wait for Pyramid 1.3.x to support Python 3.3!!!


Built-in virtualenv? Finally!
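Specifically, 3.3 ships a stdlib `venv` module (PEP 405). Usage is roughly the following sketch; note that, unlike later releases, pip is not installed into the environment automatically in 3.3:

```shell
# Create a virtual environment using the stdlib module.
python3 -m venv myenv

# Activate it; python and sys.prefix now point inside myenv.
. myenv/bin/activate
python -c 'import sys; print(sys.prefix)'
```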


Very awesome!


Cool!


And not a single reason for me to stop using 2.7. It's been just bloat since 3.0.


> It's been just bloat since 3.0.

I can't make head or tail of that claim, what's the "bloat since 3.0"?


I'd tend to just ignore anyone that uses the word 'bloat' without backing their statement up with any substance/insight or perhaps acknowledgement that they may be wrong on some counts and are unlikely to understand the entire problem-space of said project to the extent they can justify deeming any significant portion of it unnecessary.


Well sure, but maybe he has genuine and interesting issues with Python 3, and maybe he just had a bad day, hasn't had his coffee yet or whatever and went with a quip rather than a complete comment. That happens.


Sure, but it's lazy either way. Cries of 'bloat' without some supporting reasoning = alarm bells suggesting willful ignorance to me.

Personally gets my goat a lot at work. It's so easy to allude to bloat, or (for example) some framework containing 'stuff you don't need' instead of doing your homework and making an effort to understand why that stuff exists.


It's time


The whole-greater-than-the-sum feature set and performance improvements of this release have really inched Python 3 to the point where I will want to start using it. With no basis beyond my own and others' similar sentiment, this still seems like an important milestone. I feel like many will move out of inertiaville, or at least build a weekend house in momentum town.



