The Case Against Python 3 (learnpythonthehardway.org)
180 points by kot-behemoth 385 days ago | 271 comments



This has to be the ne plus ultra of Python scaremongering. It describes some sort of bizarro world of a doomed language which it populates with Machiavellian Python maintainers, brainwashed developers, and a small group of heroic holdouts who see resisting Python 3 as a moral imperative. Apparently having too many string formatting options is a moral issue (three, to be specific).

As someone who writes a _lot_ of Python code (mostly in 3 but with occasional switches to 2), who maintains several libraries which work in both 2 and 3, and as someone who uses a wide range of libraries in stats, machine learning, networking, web development, etc. in my python 3 work...this piece seems totally disconnected from reality. The problems it describes are relentlessly overblown when they're not simply manufactured from whole cloth.

Put another way: the differences between 2 and 3 are not great, the vast majority of libraries you'd want to use (i.e. are actively maintained or of at least good quality) work in 3, and while I don't doubt there are great piles of Python 2 code moldering away in big "enterprise" apps and those sorts of places, it's ever been thus in that space, no matter what the language, and doesn't pose any sort of existential threat to Python.

(edit: I used to recommend "Learn Python The Hard Way" to newcomers, and have just kept reflexively doing that over the years because I wasn't aware of a better resource. But if this article has accomplished one thing, it's that it's spurred me to look for a replacement)


I feel like I'm the only one who learned Python from the official site's tutorial page:

https://docs.python.org/3/tutorial/index.html

I found it really helpful.


"I feel like I'm the only one who learned Python from the official site's tutorial page"

Since Py1.5.2 I've done the same.

I wish every language were as well documented (Perl6 is pretty good) [0] as Py, especially the changes between versions. Rough order of reading: What's New, Tutorial, PEPs, Library references. Why is it good? It is canonical, pushed out as each release is done, comes with the install (cf. $ pydoc foo), and is free.

[0] https://news.ycombinator.com/item?id=12889082


Same here.

Python is also rather outstanding in the sense that there are tons of good recordings of PyCon talks that explain almost every language feature in very approachable ways.

The quality of those talks is usually great too, it seems that PyCon consistently hosts very good speakers.


Apart from a few things here and there from places like stack overflow (and of course new libraries), all of my python learning happened buried head first in their official documentation and that tutorial.

I'm only starting to realize, as I branch into other things, that this might not be the norm.


Did you learn programming from that tutorial or just python? What's good about Learn Python the Hard Way is that it first and foremost teaches programming, with python largely being an implementation detail.


Ah, I see. No, I learned programming eons ago with a "Logic of Programming" course at my local community college where the book in question made heavy reference to mainframe programming and came with a stencil for flowcharts. There was no actual coding involved when learning the concepts, just tediously produced flowcharts and ample use of a sketch pad / eraser. Kids these days would be better off learning in a similar fashion.


If you're looking for a new tutorial for beginners, I really liked "Dive Into Python 3" (http://www.diveintopython3.net/) and have had some success giving that link to new coders.


If Zed's understanding is correct, that would be a waste of time: The outcome would be that you learned a dead language.


Good thing his understanding isn't correct.


One major plus in the differences is that Python3 makes standard libraries always return text or bytes, instead of depending on the input to decide on the return type.

This (plus no longer allowing "encoding" of bytes or "decoding" of unicode) means that you have a much higher chance of writing correct code when it comes to handling encodings.
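
A minimal sketch of what that looks like on Python 3 (the filename and its contents are made up for illustration, and the exact error text varies by version):

    >>> open("notes.txt", "r").read()      # text mode: always returns str
    'héllo\n'
    >>> open("notes.txt", "rb").read()     # binary mode: always returns bytes
    b'h\xc3\xa9llo\n'
    >>> b"abc".encode("utf-8")             # bytes can no longer be "encoded"...
    AttributeError: 'bytes' object has no attribute 'encode'
    >>> "abc".decode("utf-8")              # ...and str can no longer be "decoded"
    AttributeError: 'str' object has no attribute 'decode'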


Think Python is pretty approachable for newcomers. It's also made available for free, and fwiw has a version for 2 and 3.

http://greenteapress.com/wp/think-python-2e/


One of the nicest things about the Python community is the lack of drama.


> Currently you cannot run Python 2 inside the Python 3 virtual machine. Since I cannot, that means Python 3 is not Turing Complete and should not be used by anyone.

I stopped there.


A programming language "is said to be Turing complete or computationally universal if it can be used to simulate any single-taped Turing machine" [1]. This author doesn't seem to understand this concept properly.

[1] https://en.wikipedia.org/wiki/Turing_completeness


And if he doesn't understand that concept should he _really_ be teaching anyone programming?


I'm pretty sure that if you understand both Turing completeness and the practice of actual programming, then you know that in most cases the one has virtually nothing to do with the other. (Which is not meant at all to imply that the author of linked article understands Turing completeness, just that even though he doesn't seem to he could still be excellent at teaching programming.)


Then the question becomes "why is he talking about turing completeness, something that is either irrelevant, or that he knows nothing about?"


I mean it sounds very nice and fancy - but loop semantics and conditional jumps have nearly nothing to do with what he's talking about.


Heck no he shouldn't.


I'm pretty sure this was a tongue-in-cheek way of stating the true criticism.


I'm wondering that here. The whole article feels over-the-top; perhaps it's a well-crafted parody, an extreme sense of sarcasm? Maybe he is a Python 3 lover after all.


Isn't over-the-top Zed Shaw's "thing"?


Ditto.

Then I read more for laughs.

I kind of feel like I could fix 2to3 if I had a sed and awk layer in there... maybe a fun project.


In my experience a lot of the problems with automatically converting existing Python 2 code to Python 3 code is that the Python 2 code usually makes fundamentally broken assumptions when it comes to unicode and bytes.


This is what the author doesn't get: the fact that Python 2 makes no distinction between bytes-like objects and string-like objects is a bug, not a feature. He finds his code so awful to migrate because he has built it on the contrary assumption.
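
A hedged sketch of why that migration hurts (the helper name is made up; the comments describe Python 2 semantics):

    # Python 2: one type plays both roles, so the author's intent is invisible
    def checksum(s):
        return sum(ord(c) for c in s) % 256

    checksum("hello world")                   # meant as text
    checksum(open("logo.png", "rb").read())   # meant as binary, same code path

On Python 3 the second call fails, because iterating over bytes yields ints and ord() then raises TypeError; no automatic converter can tell which of the two meanings the original author intended.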


I was working on a side project and ended up upgrading because I couldn't figure out how Python was encoding my data. I got so frustrated with the semantics I found it easier to just upgrade.


lol, winning.


Definitely disagree. I am a literal genius, wrote compilers, learned a dozen programming languages for fun, but prefer to do hard things. Yes, p2's handling of encoding is broken, but p3 is much worse. Making a theoretical argument about a pragmatic problem is a category mistake. And it's not even the core of Zed's expressed concern, which is the arrogant and abusive manipulation of the user community in service of the interests of a self-obsessed project.


such smart. impressed. "p2" "p3". knows unique terminology. literal genius. so smart. A dozen programming languages just for fun? You definitely know what you're talking about and have very worthwhile contributions. People should listen to you.

I like what you said about self-obsessed project. It's clear that you have a very well reasoned position, not just opinion. What you say makes irrefutable sense.


I also can't run Python 2 by passing it into a C compiler. Man, I should publish a paper on this- they thought C was Turing-complete, but I proved them wrong!


Considering how C has been targeted for all sorts of nonsense, I suspect that your argument would be somehow used as irrefutable proof that C is outdated and deprecated.

At least that's the conclusion that is repeatedly forced upon C, no matter what the argument is.


Stop, it hurts. Too much.


Yeah, I skimmed the rest. This makes me ashamed that I even bought the guy's book (Learn Python the Hard Way). His attitude is what is going to kill python, not python 3. I moved from Perl 5 to Python 3 hoping to flee the internal dev fighting and attitudes, yet here I am again.

I'll say this, at PyCon 2016 I attended some dev sprints/hackathons with the python 3 developers. Python 2 is nowhere on their radar, it's dead. They have moved on. There will be no additional compatibility layers or any of that. All of the mainline libraries that everyone uses (Django, SQLAlchemy, etc.) have moved on to python 3. If you haven't, you should too.


> His attitude is what is going to kill python

No. The rest of the community is much better. In fact, his behavior is an aberration.


PyPy hasn't.


http://pypy.org/download.html

py3.3 support is at alpha level. py3.5 support is in the pipeline and already available for testing. It's unlikely they'll put any effort into targeting py2 specifically anymore.


Being in alpha isn't exactly 'having moved on' ;-)


But there is a plan and the milestones are being achieved.


I spat out my coffee when I read that too, but let me play devil's advocate (because I pretty much disagree with the entirety of his rant): he's pointing out that since it's technically feasible to write a Python 2 interpreter in Python 3, and since it should even be pretty easy because the two languages do not differ immensely, then the only reason it hasn't been done and there's no -2 flag for python3 has to be an ideological one, a manipulation by the core devs to impose their ideas by force.

It's extremely badly conveyed but that's what I got out of it by keeping reading.

I do disagree with him on all the rest though, especially strings. It didn't "just work" before, it failed silently and who knows how much disaster he or his readers have caused because of it. Now at least you can't be wrong anymore. For a language heavily used on the web, it's hugely important to understand where your strings are coming from and where they're going to, and how.

Besides, Python is not a frigging "beginners language". Just because Python (and admittedly especially Python 2) is generally easy to grasp as a first language doesn't mean that's its purpose, nor should it constrain itself toward this goal. It's used in a million different areas, including pretty sensitive ones. It's now become a more mature language, reaching for exactitude and consistency. Just because Zed no longer has an easy toy language to point script kiddies to so they can "learn to program" by reading one website doesn't mean Python is to blame.


Yes, the author doesn't know what he's talking about.

But people talking about Turing completeness in a real programming language (usually in the form of "x is Turing complete, therefore you couldn't ask for more") almost always haven't got a clue.


> But people talking about Turing completeness in a real programming language (usually in the form of "x is Turing complete, therefore you couldn't ask for more") almost always haven't got a clue.

There is a fun quote about that...

"There are those who tell us that any choice from among theoretically-equivalent alternatives is merely a question of taste. These are the people who bring up the Strong Church-Turing Thesis in discussions of programming languages meant for use by humans. They are malicious idiots. The only punishment which could stand a chance at reforming these miscreants into decent people would be a year or two at hard labor. And not just any kind of hard labor: specifically, carrying out long division using Roman numerals. A merciful tyrant would give these wretches the option of a firing squad. Those among these criminals against mathematics who prove unrepentant in their final hours would be asked to prove the Turing-equivalence of a spoon to a shovel as they dig their graves."

-- Stanislav Datskovskiy


I just point them towards Unlambda. Turing completeness in three characters (plus one for output)

http://www.madore.org/~david/programs/unlambda/#what_is


Unlambda as a language may be a joke (the funny kind), but learning it is incredibly enlightening, and makes you understand some fundamental concepts of computer science.

As such, I think it represents one of the most enlightening esoteric languages out there together with Brainfuck.


True. No programming language has infinite tape.


Actually some kinda does:

"Garbage collection is simulating a computer with an infinite amount of memory". https://blogs.msdn.microsoft.com/oldnewthing/20100809-00/?p=...

We both agree, I'm just using this as a pretext to share this intriguing piece of knowledge. I find the concept much more intuitive than "getting back unused memory".

Especially when you look at the 90% memory usage on your OS, it still makes sense with the "infinite memory simulation" definition.


> THERE IS A HIGH PROBABILITY THAT PYTHON 3 IS SUCH A FAILURE IT WILL KILL PYTHON.

I stopped there.


Isn't this what killed perl?


Correct me if I'm wrong, but the idea of perl6 was announced in 2000, and perl6 was finally published in 2015. So there was a 15 year gap when perl5 was perceived on its way to become obsolete, but perl6 was not available.

I am unsure when the idea of python3 (first as python 3000) came to general knowledge, but it must have been some time between 2000 and 2004. Then python3 was published in 2008.

So the time between obsoleting the old version, before the new version was available, was notably shorter with python.


I thought there was a release in the interim that broke backwards compatibility and then they went back to the drawing board. But it's so long ago that I might be confusing it with Netscape.

There's also a good chance I'm confusing perl and parrot.


Perl is alive and well, thank you.


I think it would be more accurate to say that it is alive but largely forgotten. Perl6 is very cool, but has not gained much traction.


Sure, it's just resting.


No, that's Parrot.


From the article:

> Python 3 has been purposefully crippled to prevent Python 2's execution alongside Python 3 for someone's professional or ideological gain.

I can't tell if Zed's referring to python3 doing a fork()/exec() of a python2 not working correctly or if he wants/expects some kind of inter-language import or "linking" among files written for respective language versions. What's he getting at?

Is there really something that prevents you from executing python2 at the same time as python3? (I tried a simple program "os.system('python3 -c \'print ("hello")\'')" and it worked just fine.)


I'm pretty sure what he wants to say is that it should let you use libraries written for an older version of Python than you are using, much like the CLR would do if you wanted to use a library written using an older version of C# than you're using (or a different language targeting the CLR, for that matter). I'm pretty sure the claim that Python 3 is "not Turing-complete" is just a hyperbolic mockery of the claim that that isn't possible to do.

N.B.: I haven't worked with Python so if you can actually do this somehow let me just say that that's not the impression the article left me with.


This is doubly wrong, as PyPy, which runs Python 2.7, works in CPython 3.


I came here to post this. What?


I did the same. I honestly wonder if there is a single other developer on earth who is troubled by the inability to write a Python 2 runtime in Python 3.


Same. That's when I came back for the comments on HN


same. i can't tell if he's serious or not and i don't really care.


Then you missed his crazy notion of what static typing is and his failure to understand the difference between a string and a byte sequence. It's incredible that someone who knows so dangerously little about Unicode has written a web server that is so popular in the Ruby community. It says something about the quality of the language's ecosystem.


The argument is bad but the headline has a point: the value of a language is the quality of its libraries and the community that maintains it. Python 3 seems like a mistake; it fractured the community and sent people away (to golang, to scala, probably even to ruby).

If I were a library maintainer on py2 I would have felt betrayed by py3. Suddenly print is a function? 'yield from' won't be available on the py2 branch? Yikes.

C++ is an example of the serious negative consequences of language change. The language has been in flux for decades and the 3rd-party code shows it. Some fraction of good libraries rely on c++14 features. Many others duplicate in library code features that have been available in the language since c++11 or earlier. Boost, the 'missing standard library' for cpp, is something you either love or hate.

The end result for c++ in 2016 is that large shops either (a) don't use it or (b) don't use the new features and have a slow acceptance process for 3rd-party libs.

You can argue that the C community is healthier than C++ -- a ton of important system libraries are written in C and have bindings to tons of languages. Yes, they have problems (openssl cough), yes nobody understands unspecified behavior across compilers, but C has done a good job of supporting lots of platforms and staying relatively stable.

Betraying the community can kill a language. Perl learned this lesson the hard way. Let's hope python can be saved.


> Suddenly print is a function?

from __future__ import print_function

Now you've got print function in python2.
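
For what it's worth, a quick sketch of how that reads once the __future__ import is in place (identical output on 2.7 and 3.x):

    from __future__ import print_function

    # print is now a real function, with keyword arguments
    print("spam", "eggs", sep=", ", end="!\n")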

> 'yield from' won't be available on the py2 branch?

And 2.6 doesn't get set literals. And 2.5 doesn't get "with" statements. And ... There's got to be a cutoff somewhere. Or we'd just have an eternal 1.0 with all the features backported to it.


Well, no. The problem is that py3 breaks backwards compatibility without a really good reason. What's so much better about it? Why couldn't they support py2 at least for module imports?

Which is exactly the opposite for c++11 - it doesn't break backwards compatibility, but I really want those new features. I think even if it broke backwards compatibility, I'd still switch to the new c++.

At work we use py2 and c++11...


The most visible changes, such as print being a function, are the easiest and most trivial to adapt to. They clean up the language, and can be automatically converted from one to the other. This can be pulled into python 2 with "from __future__" imports.

The harder part is the string handling. In Python 2, you have one class, string, which is performing two different jobs. It is acting both as a holder for text, and as a holder for binary data. This sort of works, because ASCII looks like binary data if you squint at it. If your users are English-only, then this probably sounds like a reasonable thing. Once you need to start supporting Unicode, this ambiguity causes a world of pain.

On python 3, this ambiguity is removed. You have bytes, which are binary data, and strings, which are text (Unicode). No longer is there one class trying in vain to represent both concepts. This is also why automatic translation doesn't work, because the translator doesn't know which concept you were trying to use when you used a python 2 string. This is a rather low level change, which is why it took libraries so long to update, and why it couldn't be done without breaking backwards compatibility.


> In Python 2, you have one class, string, which is performing two different jobs.

What? Have you used Python 2? No: in Python 2, you have two classes; one is called "str" and represents a sequence of bytes, and one is called "unicode" and represents text. The former is used while interacting with network protocols and files, and the latter is what you use internally in your program: at the boundaries you use .decode and .encode to convert using character encodings. (BTW: Python 3 actually got this wrong for years, and even now I think it is only fixing the problem with a mitigation :/ filenames do not have an encoding: they should be of type "bytes", not of type "unicode", and Python 3 seriously shipped with an implementation of "list files in directory" which returned names as "unicode" and any names which failed to convert were just skipped.)

Then, any text in your program should use the u"" syntax to indicate it is program text of type unicode. Sure: there are some really annoying implicit conversions in place that allow mistakes to be made, but WTF: they absolutely didn't need to rename "str" to "bytes" and "unicode" to "str" while simultaneously not only changing tons of other things in the language (the ability to pick and choose is amazing for porting) but also making pointless related changes like removing the u"" syntax (which yes, I realize they added back later... years later... many many years too late). They should have just given me a way to "poison" str/unicode objects so they refuse to participate in implicit conversions, which would have let me find the corner cases in the libraries I use that are doing things wrong so I can report them to their maintainers and get them patched; after a couple years of people throwing .poison in a bunch of places users would have been ready to flip a VM-level "just don't convert them anymore".

As for renaming the classes, "bytes" should have been left as an alias for "str" (as it was/is "temporarily"/forever in Python 2.7), and "str" should have generated a warning... I mean, "str" isn't even a good name :/... it's a silly three-character abbreviation of a concept that is ambiguous in the minds of most developers already due to other languages, whether they be like C or JavaScript. They certainly had absolutely no good reason to get rid of the "unicode" type name (and this one still isn't back). The set of decisions they made was so annoying that even people who know what they are doing have to do a ton of fidgety and easily-broken work in a dynamically typed language where you are more than likely to just break something and not realize it until a month later.

Seriously: I've been programming in Python (2, not 3000) since 2007 as one of my primary programming languages; I do all of my web development in Python, for a website which is used around the world by millions of people and is not only itself translated into tons of languages (by both professionals and volunteers) but deals with tons of user-generated content that is accessed via tons of different channels (APIs from various websites, native network protocols, and tons of file formats, as all the content indexing is also written by me, in Python), and dealing with these issues in Python 2 is simply a non-issue: in fact, I would say it is almost trivial (and in fact, the ways it tends to break in Python 2 are often quite similar to the way things break in Python 3, as it is easy to set up situations where the implicit conversions essentially fail).


Yes, I have used python 2 quite often, as well as python 3. Python 2 is sloppy in this regard. You can use a "str" to represent text, and it will work perfectly, so long as you stay within the ASCII character set, setting up hidden bugs later. Telling everybody to just use u"", while correct, sets up "" as something that looks correct, but isn't. In addition, suggesting that "str" is a sequence of bytes, rather than a string, is rather inappropriate. The documentation refers to them as strings, they are created in similar manner to strings in other languages, and they are generally accepted by library methods that set text properties.

I agree that there were some changes that were unnecessary, like the removal of the u"" syntax. That said, I think that the fundamental change, which was to have the thing named "string" be a string, was the right choice. Python has always been about having the obvious choice be the correct choice. Telling everybody that "quoted text" is not text runs counter to this.


I totally agree with you.

> ... one is called "str" and represents a sequence of bytes ... The former is used while interacting with network protocols and files ... Python 3 actually got this wrong for years ...

Actually, some Japanese developers have suffered from this wrong design of Python 3. If you read Japanese, please see http://www.oki-osk.jp/esc/python/upload-cgi/v2.html#3


> Which is exactly opposite for c++11 - it doesn't break backwards compatibility

Yes it does:

https://stackoverflow.com/a/6402166/31667


> At work we use py2 and c++11...

As do most corporations. If you want a job in Python, you had best know 2.7.


for native unicode support I would break the world.


2.7 does Unicode quite well. Not as well as it should, certainly, but arguably better than 3.3 did. I haven't played with anything since, and given the long history of this project going off the rails, I will require some serious convincing before I waste more time in it.

Yes I do multilingual string processing a LOT. I worked in SMT for about 3 years. Python 2.7 was the only practical option.

Python 3 does not fix the GIL. That would be enough to get me interested again.


A language must change, or it will be left behind, and become an esoteric toy. Even C has to deal with this. The only reason C is still popular is because of its grandfathered status in so much of our infrastructure, and in many of the popular OS choices.

A language that changes will alienate people, and lose people. Perl saw this. Python is seeing this. The way to guard against this is to keep backwards compatibility. That generally doesn't allow enough change in languages that provide enough constraints to make them popular for large engineering projects, whether open source or commercial. Lisp doesn't need to change much, it's malleable enough in many respects that you can implement whatever you need, but good luck getting large engineering projects done and maintained using it. Perl has the same problem.

The only solution I see to stay relevant is to allow alienating users, but make sure those changes that alienate users are good enough to draw substitutes, enough to keep steady state, or preferably still grow slowly. Maybe prior users will even swing back in occasionally, and without the false sense of betrayal, they'll actually find they like the state of the language a few years later. For a while, at least.

This seems like the end of the world to Python community members, and it seemed like the end of the world to Perl community members because of the prior status those languages held, where they are or were at the top of their respective niches. I doubt the Scala community worries about exactly the same things.

Put another way, did you expect to still be writing Python the same way in 20 years, with only the popular libraries changed? If you did, did you think about what that would mean for the language, and what the community would look like at that time? I'll tell you. Perl. That's not necessarily a bad thing. I write Perl professionally, every day. I love it. It's not the worst thing, being stable and reliable. But you don't get to be top language in your niche and sit on your laurels at the same time.

Change is good. Embrace the change. If you're lucky, you'll use many languages to program in your life and be happy. The only way I see anyone being happy using any single language forever is to bury their head in the sand and ignore everything else going on around them.


Even C has to deal with this.

But is there any C11 compiler out there that will refuse to compile C99?


I used C as an example of what happens when you don't change, not what happens when you do. C is used decades later with little change because of momentum and history, not because it's the pinnacle of computer languages.

C is not a good language. It was a good language, but that was a few decades ago. We've progressed past the point where C's shortcomings are excused by there not being alternatives that address those shortcomings while offering comparable features. At this point, we're just finally edging past the local maximum created by C's exceptional popularity and the knock-on effects of that popularity, such as most operating systems (and all popular open source ones) being written in C.


What's replacing C? I can see languages like go/rust/crystal replacing C++ for application development, but what's producing stable binaries that other languages can consume in the same way they do C ones?


Rust can produce C compatible libraries. Go apparently can too. I suspect a little googling would show me that Nim and D can as well. I'm sure there are at least a few more.


Go can use C libraries, but AFAIK it cannot be compiled into a shared library to be used by a C program. Go needs control of the program flow to orchestrate its runtime (esp. the GC).

Nim can definitely produce C-compatible libraries, seeing how it compiles to C itself.


>but AFAIK it cannot be compiled into a shared library to be used by a C program.

Go 1.5 added support for this.


Interfacing D to C and C++ is fairly easy, as it was built with excellent C/C++ interop in mind. It's also compatible with the C ABI and has limited compatibility with the C++ ABI.


I didn't know rust could, I just checked it out. I didn't realize rust was that low level.


Python is an interpreted language, so the comparison is meaningless.

It is cheap to deal with the ambiguity in the compiler using compile options. You could go the Java way and have a separate class path (e.g. in PyPy, as CPython 2.x is dead).

But this does not allow you to mix and match code for the older version anyway. Java was designed from the start to allow this and it is why it's so stagnant.


Python is an interpreted language, so the comparison is meaningless.

Bash has no problem running old Bourne shell scripts from ancient history. Browsers have no problem running JavaScript from the mid-90s. I fail to see how being an interpreted language is a valid reason for breaking backwards compatibility.


being interpreted is not a valid reason, but having one string class for both bytes and text is a valid reason.


>I doubt the Scala community worries about exactly the same things.

Why do you single out Scala as being particularly different to Python and Perl?


Because Scala isn't nearly as popular as Perl was or Python is. I imagine losing some perceived status is less important to them than just doing the best they can to provide a good product and address people's needs.


Scala work sure pays a heck of a lot better.


The term 'C++ community' feels strange. It's like saying 'the combustion vehicle community' or something like that. As a software engineer I don't want a community or someone to bond over the tools I use, I just want my stable tool with a large industrial user base.

It's an industrial tool, with an ecosystem provided by an industry, used to solve industrial problems.

C++ can't change. It's a feature. Unless the code uses some non-standard extension it is guaranteed that the code you write today will compile tomorrow.

Refactoring? Rewriting? We are talking of decades old codebases with millions of lines, filled with bugfixes to handle odd-but-critical cornercases.

Software that is sold for hundreds of millions, that run businesses that execute billion dollar projects.

Stability, utility man.

Not so graceful, perhaps, or cuddly. But it works.

Yeah, the language is still a bitch but it's at least a stable one with support from several industrial vendors.

From cognitive or software design point of view it's a disaster though - people should be thoroughly vetted in some other more sane language before allowed to write C++ :)


> Let's hope python can be saved.

Can you explain why Python would need to be saved? I mean every week on r/MachineLearning there is at least one new Python deep learning framework being launched[1], I wonder if you can mention any other language which is that healthy.

[1] https://www.reddit.com/r/MachineLearning/search?sort=new&res...


All stable versions of the big ML frameworks are still 2.7.



I think the author's point was not that python3 is dying. His point was that it deserved death, that 2.7 is flourishing, while 3 is failing to thrive, and reasons for this were postulated.


Are there many ML frameworks that only work in 2.7? Scipy and numpy work in both 2.7 and 3.x, so I'd imagine (without being heavy in the field) that any new development using these scientific computing packages could be Python version agnostic as well. Is this generally not the case?


Not true. Please specify which ML library authors are obtuse.

About the only thing that is dead like that seems to be Enthought project for drawing and creating UI for graphs.


Sorry, should have said run better. There is support for the latest 3.x branch, but everyone uses 2.7 as it's less buggy.


Every week on HN a new JS framework is launched. I'd say it's pretty healthy.


Too bad JS lacks a standard library.


I don't really hear that many complaints from the C++ community about what you're talking about. All those older libraries that pre-date C++11 still work fine and interoperate with no difficulties with even C++1z libraries. C++ is the language that has done feature adoption correctly in my opinion. Standardized, by committee, with forwards and backwards compatibility in mind.


I'm with you. I like some of the changes in the C++ language. But I don't love the C++ 3rd-party ecosystem. Python's is very healthy.

C++ is so unconcerned with modules they've been kicking the feature down the road spec by spec. I think it's now 'post-17'. (could be wrong).

Rust is a language that's in the perf class of c++ but has built-in lifetime support and built in modules. It's ten times easier to do database or, say, SDL interaction in rust. And rust is barely 1.0!

I don't know anyone who programs who doesn't use a ton of libraries. C++ just hasn't prioritized this part of programming.


Errr... what?! C++ is a positive example of language change, because it keeps backwards compatibility, unlike Python where we're still having discussions about 2vs3 after 8 years.

Nobody is having discussions about C++14 vs C++98; the former is better, period. A C++14 compiler will compile C++98 code just fine in most cases. If it doesn't, you get a compile-time error.

People are complaining about the opposite, that C++ is keeping compatibility for too long and that it has too many features. Both strategies have their pros and cons, but breaking seems to have more negative effects.

C has done a great job of supporting buffer overflows on lots of platforms and injecting safety errors in all software that uses those libraries.


>C++ is an example of the serious negative consequences of language change.

From the outside looking in (as a non-C++ developer), for me the bigger problem is honestly that they left in the warts rather than just paving around them. Those who have followed C++ for its entire life cycle have probably been able to mostly keep up with what language features were mistakes and what the best way to do things is, but I can't imagine myself ever picking up C++ now and hoping to have any clue at all what to avoid without spending years learning the hard way (no pun intended).

As a very basic example from C# (which I do use professionally), the fact that untyped collections still exist (and things like IEnumerable with explicit casts instead of IEnumerable<T>) is just pointless cruft that we'd be better off without. As far as I'm concerned, no code should ever depend on them, and any code that does should be forcibly broken to force people to fix it. We treat security flaws seriously by forcing people to fix things, why should we not do the same with features that are essentially bug-magnets?


> As a very basic example from C# (which I do use professionally), the fact that untyped collections still exist (and things like IEnumerable with explicit casts instead of IEnumerable<T>) is just pointless cruft that we'd be better off without. As far as I'm concerned, no code should ever depend on them, and any code that does should be forcibly broken to force people to fix it. We treat security flaws seriously by forcing people to fix things, why should we not do the same with features that are essentially bug-magnets?

I mean this is an age-old argument but MS comes down very strongly on the other side because, like... what if nobody is working on that C# 3.0 code anymore but you need it for whatever reason? Well, I guess you're out of luck in the world where they just break ArrayList.



On c#, the numerous core libs that don't meaningfully support nullable primitives is the most infuriating thing.


Python still has the best libraries of any language, and almost every library has full Python 3 support. The real issue is with legacy codebases that are going to need to upgrade from Python 2.7 to 3.5 or else switch to PyPy by 2020


Don't hitch your wagon to this garbage. If you think the 2 to 3 transition is a harbinger of doom, write your own blog post about it.


"Python 3 Is Not Turing Complete"

... Not only is this blatantly false, but the author acknowledges it as such and seems to think that it is still OK to write this sort of sentence. Backing a pure falsehood with anecdotal support from "actual Python project developers" does nothing to change its veracity. I can't respect anything else in this article after seeing this kind of sensationalism.

Perhaps the author doesn't understand Turing completeness (I'll give the author the benefit of the doubt), but even if so it's inexcusable to throw around this sort of technical language this casually.


Why the hell even bring that up. Not a very good tutorial in any way when the writer starts throwing around unrelated terms they don't grok.


It is also wrong, as PyPy interpreting Python 2.7 runs well in Python 3.


> The strings in Python 3 are very difficult to use for beginners. In an attempt to make their strings more "international" they turned them into difficult to use types with poor error messages. Every time you attempt to deal with characters in your programs you'll have to understand the difference between byte sequences and Unicode strings.

The author has this exactly backwards. It is Python2 that makes the weird distinction between the string and the unicode type. Strings (and bytes) in Python3 are done (almost) right. (I say "almost" because Python bytes are constrained to be 8-bits wide, but that is not a serious problem.)

He also clearly doesn't understand Turing-completeness.


As someone who's used Python 2 for the past half decade and started using Python 3 only recently, 100% with you on this.

The first time I was working on a project which required me to be able to work with characters beyond ASCII, I was so confused. To this day, I'm still not sure if I can actually explain what str.encode and str.decode do in Python 2.

Admittedly I've never used Python 3 for a project where I had to deal with international charsets - and his point about the concatenation of unicode and bytearrays certainly seems a valid point - but it still seems infinitely better than what Python 2 did.
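
For anyone who hasn't hit it, this is the classic Python 2 trap (a sketch; the byte values are just an example of UTF-8 data sitting in a plain str):

    # Python 2
    >>> 'caf\xc3\xa9'.encode('utf-8')
    Traceback (most recent call last):
      ...
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)

Calling .encode() on a byte string makes Python 2 implicitly decode it as ASCII first, which is why an encode call dies with a decode error. Python 3 simply refuses to let you do this.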


> his point about the concatenation of unicode and bytearrays certainly seems a valid point

This is one of those subjects where I feel like being a non-English programmer has given me surprisingly valuable experience.

The operation of concatenating a text value (string) and binary data (byte array) simply does not make sense, for two reasons. What do you expect the result to be?

1. Binary data with the text appended. Then you need to specify which encoding you want the text to have. Once you do (a.encode('utf-8')), all will be fine. You could argue that a conversion to utf-8 should be implicit since that's almost always the correct choice, but you'd be surprised how fast you run into trouble with other systems.

2. Text with the binary data appended after interpreting it as text. Again, which encoding? You'll get very confusing and terrible results if you do not specify this. If you specify this (b.decode('utf-8')) everything will be fine.

The only other alternative I accept is a type error. I'm guessing zed would be even angrier if that was the case. I'm not quite sure what Python 3 does, but at least it somewhat communicates that whatever interpretation it does is internal and arbitrary. The Python 2 result is ridiculous.


For the record:

    Python 3.5.2 (default, Nov  7 2016, 11:31:36) 
    [GCC 6.2.1 20160830] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> u'' + b''
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: Can't convert 'bytes' object to str implicitly
    >>> u'' + bytearray()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: Can't convert 'bytearray' object to str implicitly
And I agree wholeheartedly. This is the only sane thing to do.


That's a pretty fair point, and I completely agree that the Python 2 result is pretty ridiculous.

In zed's defense, it still does seem like a rather daunting error for a beginner programmer - although like you, I have no good viable alternative to suggest.


Encoding and decoding binary data is hard for a beginner in general, and I'd argue it's good to avoid it for the first few lessons. Unless the beginner has a CS education in which case by all means explain how encoding is taking text and translating it to bits.


> Python bytes are constrained to be 8-bits wide

??

Aren't all bytes 8 bits wide?


Nope! (TIL) http://stackoverflow.com/questions/5516044/system-where-1-by...

I think in the context of the original answer there's also the problem that char types aren't necessarily 8 bits wide, e.g. wchar etc.


Ok, I should have been a bit more specific: aren't all bytes on machines that Python can realistically run on 8 bits wide?


No. All modern machines can address 8-bit-wide bytes for historical reasons, but they can also address wider bytes. 32 and 64 bit bytes are common. We call them "words" instead of "bytes", but they're really the same thing: a chunk of data that can be operated on as a single unit by the hardware. And having that capability exposed as a software abstraction can be extremely useful, particularly in applications like cryptography.

EDIT: Downvotes? Seriously?


> a chunk of data that can be operated on as a single unit by the hardware

I didn't realize that this concept was referred to by the term "byte". But this clarifies how you are using that term.

Given that usage, I'm not sure why 8-bit bytes are an issue at all for a language like Python that is supposed to be hardware-agnostic; the whole point is that programs should not care about how the data is physically represented in the hardware. Python 3 got rid of the distinction between int and long for the same reason. The only reason the "bytes" object exists at all is to represent binary data, for example data that gets read from/written to files, network sockets, and other I/O, and AFAIK bytes in such data are always defined to be 8-bit chunks of data. (If you really insist on having Python representations of hardware data chunks, there's the ctypes module.)


> I didn't realize that this concept was referred to by the term "byte".

It is common to see "byte" used to mean 8-bit-byte, and also to see the term used for wider units of binary data. If you want to be unambiguous, the word "octet" is commonly employed to mean 8-bit byte.

> a language like Python that is supposed to be hardware-agnostic

That is exactly why you want wide bytes. If you want to represent an integer >255 as fixed-width binary data and you don't have wide bytes, then you have to choose an endianness convention. If you have wide bytes then you can be endianness-agnostic.
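
A small sketch of that point with Python's struct module (the value 300 is arbitrary; anything over 255 will do):

    >>> import struct
    >>> struct.pack('<H', 300)   # little-endian, 16-bit
    b',\x01'
    >>> struct.pack('>H', 300)   # big-endian: same number, different byte sequence
    b'\x01,'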

> AFAIK bytes in such data are always defined to be 8-bit chunks of data

This is usually true, but not always. For example, UTF-16 and UTF-32 data is specified as 16-bit and 32-bit bytes. This is why you need a BOM.

> there's the ctypes module

Yes, this is why I said it's not a serious problem.


> If you want to represent an integer >255 as fixed-width binary data

But Python 3 doesn't represent integers that way, because Python 3 doesn't put any size limit on integers. There's no way to represent integers of potentially unbounded size as fixed-width binary data.

This is one of the things I meant by "hardware agnostic"--I didn't mean "able to represent the different register sizes of different hardware"; I meant "having a representation that doesn't have any concept of register size at all".

(Python 2 does have a separate int type, which IIRC is either 32 or 64 bits wide depending on the platform. AFAIK this is just a wrapper around the underlying C int, so it isn't affected by any 8-bit byte restriction.)

> UTF-16 and UTF-32 data is specified as 16-bit and 32-bit bytes. This is why you need a BOM.

Um, no. UTF-16 and UTF-32 data is specified as 16-bit and 32-bit code points. You need a Byte Order Mark (BOM) to specify in which order the 8-bit bytes that make up a 16-bit or 32-bit code point appear.

From the Unicode FAQ[0]:

"Q: What is a UTF?

"A: A Unicode transformation format (UTF) is an algorithmic mapping from every Unicode code point (except surrogate code points) to a unique byte sequence. The ISO/IEC 10646 standard uses the term “UCS transformation format” for UTF; the two terms are merely synonyms for the same concept."

Further down, under the table listing the UTFs and their properties:

"In the table <BOM> indicates that the byte order is determined by a byte order mark"

And in the answer to the next question after that:

"UTF-16 and UTF-32 use code units that are two and four bytes long respectively."

[0]: http://unicode.org/faq/utf_bom.html


> Python 3 doesn't put any size limit on integers

Yes, obviously. Variable-length bytes is a feature that Python does not have. (AFAIK the only language that has variable-length bytes is Common Lisp.)

> There's no way to represent integers of potentially unbounded size as fixed-width binary data.

Yes, obviously. To be excruciatingly precise I should have said: if you want to represent a binary data type with N instances where N>255 (e.g. an integer in a range from 0 to N where N>255, or from -N to N where N>127) then you either need bytes wider than 8 bits, or you need to worry about endianness.

> UTF-16 and UTF-32 data is specified as 16-bit and 32-bit code points.

No. Unicode characters are specified as code points, which are just numbers with no specific representation. UTF-16 and UTF-32 are encodings of unicode which use 16-bit-wide and 32-bit-wide bytes respectively to encode those code points. It is only when you serialize those encodings to octets that you need a BOM. When UTF-16 and UTF-32 are used internally to one machine you don't need a BOM.
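
To make that concrete, a quick Python sketch (output shown for CPython on a little-endian machine):

    >>> "hi".encode("utf-16-le")   # one byte order...
    b'h\x00i\x00'
    >>> "hi".encode("utf-16-be")   # ...or the other
    b'\x00h\x00i'
    >>> "hi".encode("utf-16")      # BOM prepended so a reader can tell which was used
    b'\xff\xfeh\x00i\x00'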

There are also, as I pointed out before, many algorithms (particularly in cryptography) that are specifically designed to operate on fixed-width binary data wider than 8 bits.


> Variable-length bytes is a feature that Python does not have.

That wasn't my point. My point was that, as far as integer objects are concerned, there is no concept of "byte" at all, not even a fixed length "byte" of 8 bits. There is no concept of "chunk of data operated on as a single unit by the hardware", and there is no concept of "unit of data of the underlying storage for this object". There are just objects representing integers of any value you like.

The only kind of object in Python for which a concept of "byte" in your sense exists at all (if we leave out modules like the ctypes module) is the byte string (str in Python 2, bytes in Python 3). For those objects, you are correct that "byte" is always 8 bits in Python, and there is no way to vary it. But that has nothing to do with the underlying hardware; it is just the data model that Python has chosen for binary data.

> if you want to represent a binary data type with N instances where N>255 (e.g. an integer in a range from 0 to N where N>255, or from -N to N where N>127) then you either need bytes wider than 8 bits, or you need to worry about endianness.

Only if these things are exposed to you at all. In Python, you don't have to worry about this for integer objects because you don't see the underlying storage at all; things like what endianness is used to store integers > 255 are implementation details of the interpreter. Frankly, I find that to be a good thing: if I wanted to worry about stuff like that, I'd be programming in C, not Python. But of course different people will have different needs and preferences.

> UTF-16 and UTF-32 are encodings of unicode which use 16-bit-wide and 32-bit-wide bytes respectively to encode those code points.

I understand that you prefer to use this terminology; but my point in that particular comment was that it is not the terminology that is used in the Unicode documentation, which is the closest thing we have to an "official" terminology for Unicode. As is shown by what I quoted, that documentation clearly uses the term "byte" to mean 8 bits; it refers to UTF-16 and UTF-32 as encodings that use 2 or 4 "bytes" to encode a code point. It does not use the term "byte" to refer to 16 or 32 bit chunks of data. So it seems misleading, or at least potentially misleading, to use the term "byte" to refer to anything other than 8 bits when talking about Unicode encodings, since the Unicode documentation does not give the term "byte" that meaning.


I think we're mostly in violent agreement here.

Let me try to re-state the point I was trying to make: the only reason there is anything special about 8 bits as a unit of data is because hardware is built to operate on units of 8 bits at a time. Whether you call these things "bytes" or "words" doesn't matter. You can call them florbs for all I care.

The fact that florbs are 8-bits wide is mainly for historical reasons. There is no particular reason to make the magic number be 8 (except that it's a power of 2), and indeed there have been historical examples of hardware with florbs of other sizes. The Intel 4004 had 4-bit florbs. The PDP-8 had 12-bit florbs.

Modern hardware is built to natively handle florbs of multiple sizes, typically 8, 16, 32 and 64 bits, but sometimes other sizes as well (e.g. 80 bits for extended precision floats, or 128 for SSE5 instructions). Because of this, many algorithms are designed to operate on binary data in these sizes, and because of that it can be helpful to have binary data in these sizes exposed in your programming language. Python exposes 8-bit florbs, but not other sizes of florbs. Common Lisp has arbitrarily sized florbs. If you're doing certain things (like crypto) having access to florbs larger than 8 bits can be useful. That's all.


> the only reason there is anything special about 8 bits as a unit of data is because hardware is built to operate on units of 8 bits at a time.

More precisely, hardware at the time that "the natural unit of data" was becoming a standardized convention was built to operate on units of 8 bits at a time. As you say, the 8-bit convention is mainly for historical reasons, and hardware today can handle larger chunk sizes.

> Python exposes 8-bit florbs, but not other sizes of florbs

To the extent it exposes florbs at all, yes. But that does not mean you are stuck with 8-bit operations in Python in all cases. For example, operations on Python floats use the IEEE standard data chunks, which, as you note, are not 8-bit. Operations on Python integers are not restricted to using 8-bit CPU instructions; they will use whatever register sizes the hardware allows (more precisely, the hardware the Python interpreter was compiled for--if you run a 32-bit Python interpreter on a 64-bit machine, you won't be using 64-bit register operations). It is true that the programmer has no control over the data chunk sizes that Python uses; they are hard-coded into the interpreter.

> Common Lisp has arbitrarily sized florbs.

Can you give an example of how this capability in CL is used?


The best example I can think of is implementing certain cryptographic algorithms, particularly symmetric ciphers and cryptographically secure hash functions, which are often defined in terms of 32 and 64-bit chunks and mod 2^32 and mod 2^64 arithmetic.

Why not just use C? Because the C language spec has "features" that allow a C compiler optimize away parts of your code that you really don't want optimized away in crypto code. For example, if you do this:

    int secret_key[SIZE];
    ...
    for (int i=0; i<SIZE; i++) secret_key[i]=0; // Clear secret key so attacker can't read it
A C compiler is allowed to get rid of that code if it can prove that it doesn't contribute to the final result (the "as-if" rule). Forcing a C compiler to not do such things is generally very tricky.


Not really. Just put the clearing function into another translation unit.

    clear_memory(secret_key, sizeof secret_key);
(And don't enable any wacky whole program link-time optimizations. Those violate ISO C by continuing semantic analysis into translation phase 8. ISO C makes it clear that tokens are "syntactically and semantically analyzed" in phase 7, as a translation unit. Then in phase 8, only references are resolved to link. Doing any semantic analysis to optimize things at link time violates the conceptual model thereby given. GCC has support for this but it has to be explicitly enabled; moreover, there is more to using link-time optimizations than just passing options.)

The current translation unit must really call clear_memory and really pass it the pointer to the secret_key, whose contents have to be settled. The writes performed by clear_memory really have to take place, because the caller depends on it; clear_memory has no idea that the object is dead (having no next use) in the calling translation unit.

The main problem is not getting the clearing not to be elided, but with stray copies of the data being elsewhere. The C programmer doesn't have visibility and control over all the storage areas where a datum may end up. If secret_key is really cleared, is that enough?


> Can you give an example of how this capability in CL is used?

Common Lisp for example can extract/set a byte given its width and its position:

    CL-USER 145 > (write (ldb (byte 11 2) #b1100101011010110) :base 2)
    1010110101
(byte 11 2) is a specifier for this: 11 = width, 2 = position.

Historically this comes from a DEC PDP-10 instruction, 'load byte'.


> Downvotes? Seriously?

I upvoted you to compensate. :-)


[Re-posting as a top level comment as I don't want to give the comment I'm replying to any more attention.]

>> Currently you cannot run Python 2 inside the Python 3 virtual machine. Since I cannot, that means Python 3 is not Turing Complete and should not be used by anyone.

> I stopped there.

The author is - quite obviously - saying the Python 3 VM should run Python 2 code, in the same way the JVM and CLR run other languages. That the Python 3 VM doesn't run Python 2 code - and that the Python maintainers apparently say it can't - means the Python 3 VM isn't Turing complete.

This is nonsense obviously, but it's a way of pointing out that the claim that the Python 3 VM can't be made to run Python 2 code is also nonsense.

This is very, very obvious. Yet HN have gone nuts and rather than discussing the points brought up - why doesn't the Python 3 VM run Python 2 code? - you instead make the laughable claim that the author doesn't know what Turing completeness is.

The author even explicitly states they've already created a Python 3 version of their work and HN accuses the author of ulterior motives: "having an out of date work".

It's like HN have gone out of their way to cut out quotes from the article and not actually bother to read it.

The author is a better programmer than I am, and also better than most in this thread. They're also over the programming mob mentality that sometimes appears on HN, and I can very much see why from the responses to this article.


I took the time to read what Zed Shaw had to say, and I think that Zed is a bad journalist or public communicator, and the reaction on Hacker News is symptomatic of this. Zed Shaw had a straightforward (but probably wrong) case to make about Python 2to3, but instead he lacked the restraint to keep from shooting himself in the face.

To be clear, Turing completeness is Zed Shaw's baseless launching point for discussing the incompetence or political nefariousness of the Python language devs.

Turing completeness does not mean a perfect translation between all languages exists. Genuine ambiguity can exist. If your language goes from one string type to two, a perfect solution may be <permanently> out of the question.


I would say that the incompatibility is exactly what makes python 3 be python 3. If it could run python 2 bytecode, then it would allow the same ambiguity between strings and binary data as is allowed in Python 2. One of the main goals of Python 3 was to remove this ambiguity, by introducing "str" and "bytes" as separate classes.


> This is nonsense obviously, but it's a way of pointing out that the claim that the Python 3 VM can't be made to run Python 2 code is also nonsense.

> This is very, very obvious.

Um...what? That in itself is nonsense.

The reason he brought up Turing completeness is to make it seem like Python 3 breaks some fundamental programming 'law' that all programming languages should adhere to, in order to further his point.

Sure, maybe the Python 3 VM should run Python 2 code, but he doesn't need to puff up his argument like this. It just makes him look uneducated.


> The author is - quite obviously - staying the Python 3 VM should run Python 2 code, in the same way the JVM and CLR run other languages.

There is no "CPython VM": no version of Python guarantees bytecode compatibility between versions. The bytecode is just an optimisation.

For a Python interpreter to support both Python 2 and Python 3, it would need to have some source level way of telling the two apart. Bytecode doesn't even come into this.

> That the Python 3 VM doesn't run Python 2 code - and that the Python maintainers apparently say it can't - means the Python 3 VM isn't Turing complete.

No it doesn't. That's not what Turing Complete means.

Here's a pretty trivial way to demonstrate both Python 2 and Python 3 are Turing Complete: implement a Brainfuck interpreter in both. Brainfuck is Turing Complete (memory limits aside), and thus because you can implement a Turing Complete language in both Python 2 and Python 3, both are trivially proven as Turing Complete.
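A minimal sketch of such an interpreter (a toy of my own, ignoring the ',' input command), which itself happens to run unchanged under both Python 2 and Python 3:

    def bf(program, tape_size=30000):
        # Toy Brainfuck interpreter: 8-bit cells, no ',' input support.
        tape, ptr, pc, out = [0] * tape_size, 0, 0, []
        # Pre-compute matching bracket positions for the loop jumps.
        jumps, stack = {}, []
        for i, c in enumerate(program):
            if c == '[':
                stack.append(i)
            elif c == ']':
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(program):
            c = program[pc]
            if c == '+':
                tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-':
                tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '>':
                ptr += 1
            elif c == '<':
                ptr -= 1
            elif c == '.':
                out.append(chr(tape[ptr]))
            elif c == '[' and tape[ptr] == 0:
                pc = jumps[pc]          # jump past the matching ']'
            elif c == ']' and tape[ptr] != 0:
                pc = jumps[pc]          # jump back to the matching '['
            pc += 1
        return ''.join(out)

    # 72 = 8 * 9 ('H'), then 33 more increments gives 105 ('i').
    print(bf('++++++++[>+++++++++<-]>.' + '+' * 33 + '.'))   # prints "Hi"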

Zed's argument is like this: ARM processors can't run AMD64 code, thus ARM processors are not Turing Complete.

That argument is nonsense.


> > That the Python 3 VM doesn't run Python 2 code - and that the Python maintainers apparently say it can't - means the Python 3 VM isn't Turing complete.

> No it doesn't.

You do realise the next four words in the post you're replying to is:

> > This is nonsense obviously

?

And then a reason why the claim is being made?

To continue your CPU analogy (I actually read your post), it's like Intel saying x64 can't run x86 binaries, then someone else saying "why not, isn't it turing complete?" as a joke way of pointing out that it should.

It's perfectly reasonable to read Zed's post and argue that the Python 3 VM should not be able to run Python 2 binaries. But not even bothering to read or comprehend the post (or this thread) and claiming he doesn't understand Turing machines is a waste of everyone's time.

* We know AMD invented x64, and x64 does run x86 binaries, and the IA64/Itanium story, but anyway.


This is the bit I have a problem with:

> but it's a way of pointing out that the claim that the Python 3 VM can't be made to run Python 2 code is also nonsense.

And that's why I wrote:

> For a Python interpreter to support both Python 2 and Python 3, it would need to have some source level way of telling the two apart.

[And now I have no way to double check what you wrote originally because you overwrote your original comment.]


This is an excellent treatise on both the value of a computer science education and the importance of being sure that things you say in public aren't verifiably false.


As someone who has been extremely critical of the Python 3 debacle, even I think this is overdone. Yes, the transition was totally botched, and 8 years in, Python 3 still has less production usage than Python 2. But by now, at least, enough libraries have been ported or replaced that you can actually use Python 3. This was not true three years ago. Porting remains painful. I ported a medium-sized web backend system about two years ago. I found lots of bugs which should have been found and fixed long before, indicating that some standard packages were not being used much.

As for "Machiavellian Python maintainers", that's real. There have been explicit attempts to apply pain to Python 2.7 users. This has decreased since von Rossum got stuck maintaining Python 2.7 code at Dropbox as his day job.


Could you share any links that show "There have been explicit attempts to apply pain to Python 2.7 users"?

I don't think you are lying, but it seems extraordinary to me, so I'm interested in seeing some context and I'm not sure how to find it.


After careful statistical analysis I concluded that there is a 9.56% chance Zed is trolling experienced programmers.

Disregarding the trolling about "Turing Completeness", there are some points he raises that are interesting.

On some (most, really) of the items there, the train has left the station. But some are worth listening to:

Better error messages. Rust has been making progress there, so maybe a good place for inspiration. Error codes and links to additional info online have been done (maybe overdone), but I think there is room for more: variable names, more info about the context ("looks like you're trying to do X, maybe Y might be better here..." kind of thing).

On the lack of typing and such, MyPy might be a good project to focus on.
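For what it's worth, a tiny sketch of what that looks like (the `mean` function here is just my own illustration): annotations that CPython ignores at runtime but that mypy can check statically.

    from typing import List

    def mean(values: List[float]) -> float:
        # CPython ignores the annotations at runtime; mypy verifies them statically.
        return sum(values) / len(values)

    print(mean([1.0, 2.0, 3.0]))   # fine
    # mean("oops")                 # mypy would flag this: incompatible argument type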

Standard library updates would be good, but they are tricky. Wouldn't it be nice to have requests in the standard library? Yes, but changes are still being made to it, and once it's in, it might not be updated for a long time. As they say, modules go to the standard library to die.

Things I'd like to see: better performance (working closely with Pyston or PyPy) and better packaging. On packaging there was talk of Pipfile just today. That was a good start. I've been using Python for many years, and I still don't know off the top of my head the basic pip flags and options, or how to start the venv, or what goes in setup.py vs. requirements.txt vs. setup.cfg. The story got better in recent years, but there is more room for improvement there, I think. I often end up with a build.sh or run.sh script which just does the building and venv-ing and such. Can that be done by default? For usability's sake? Even if it is not super-consistent and doesn't work for 1% of use cases... Probably.


Packaging, and the distribution network that goes with it (PyPI), are one of the key underpinnings of the python community, and as such deserve more love from the PSF. It is in the best interest of the PSF to actively support and push improvements to packaging, and to put community resources towards its development.


> Wouldn't be nice to have requests in the main library? Yes, but changes are still made to it, it might not be updated for a long time then. As they say modules go to the standard library to die.

In fact, the plan is for there to be no further new features to requests: http://docs.python-requests.org/en/master/dev/todo/#feature-...


Oh interesting! Thanks for pointing that out.

I think the requests module is one of Python's nicest assets; having it in the standard library would be a good move.


Anyone here remember "Rails is a Ghetto"? http://web.archive.org/web/20080103072111/http://www.zedshaw...

Zed's Rails article did lead to some improvements in the Ruby community (eg: The 'pickaxe' had meta-programming added to the next version). Zed likes to stir things up, but rather than dismiss him completely over some nit picky issues, eat the meat and spit out the bones.

To the "I stopped there" folks in this thread: why do you think Python 3 is still seeing such poor adoption?


Python3 is seeing poor adoption because for 5 years it was extremely hard to port existing code to Python 3. But there's been a lot of effort to fix this, and now the tools are in place to make porting easier.

Since those tools are in place, pretty much every major Python lib has Python3 support. Many minor libs as well, though it's easy to have one or two dependencies missing if your project is big enough.

Application developers (overwhelming majority of Python users) had to wait for the libraries to get over to Python 3. This has basically happened, so a lot of application developers are now porting.

Python 3's rollout was awfully managed, but the community has learned its lesson, and is working hard to fix things.

----

Another issue with this article is that its recommendations simply won't work.

- If you try to make "text + bytes" work, you are letting people write code that will fail. You can't implicitly convert bytes to text. You have to know the encoding! Python 2's mechanism (system-locale implicit conversion) guarantees bugs in situations with multiple encodings floating about. (See the sketch after this list.)

- Running Python 2 libs in Python 3 would mean that "text + bytes" style failures in the Python 2 code would bubble into Python 3, destroying all assurances you're supposed to have in Python 3.
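To make the first point concrete, a minimal sketch (my own, Python 3) of why implicit text+bytes has to fail loudly, and what the explicit fix looks like:

    data = "café".encode("utf-8")        # bytes, e.g. as read from a socket or file
    try:
        message = data + "!"             # mixing bytes and str
    except TypeError as exc:
        print(exc)                       # can't concat str to bytes

    # The explicit fix: name the encoding at the boundary.
    print(data.decode("utf-8") + "!")    # café!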


> Application developers (overwhelming majority of Python users) had to wait for the libraries to get over to Python 3.

And, in the meantime, they were stuck with a language ecosystem which entirely stagnated; while there are some benefits to working with a "stale" language, this was even worse: it isn't like "the C language is no longer going to change, but gcc gets better every year, with better error messages and faster compilation and improved code generation", but "the developers of Python have decided to spend years working on something which you can't port to yet because you are blocked on libraries, and libraries are blocked on tooling and language fixes, and in the meantime they have gotten pretty damned hostile towards users of Python 2 and constantly claim that whatever the most recent release of 2.7 is is going to be the last one ever, so people had better start moving"... and a lot of the people I knew who had been using Python have thereby moved, alright, but they moved to Go, Rust, Clojure, or Elixir, and they aren't coming back to Python, given that the language is really only slightly better than it was five years ago while those other languages have been going to really interesting places. And yes: there are some people using Python 3 now, but interestingly it seems to be an entirely different group of people, such as data scientists, than the people who were using Python 2: when was the last time you heard about a big website being developed using Python? You don't: that era ended when Python 3000 was announced.


There are loads of websites that get developed in Python. Django has gotten more and more popular every release. I work in a Python shop, and getting to Python 3 was something we've always wanted to do (and did! Recently)

Language communities evolve. Guido is working on a static type checker.

Python 3k transition was bad, but I haven't really met many people who are against the current state of the language. Relitigating the transition over and over doesn't accomplish much. Everyone knows by now how bad it was.


As part of a data science/machine learning course, I have taught many, many beginning students how to use Python 2 and (starting a year ago) Python 3.

The reality is that given what beginning programmers learn, they hardly notice the difference between them. Adding parentheses to print() and some explicit conversions to lists are all that is necessary to convert most code that beginners see. The last I looked at Zed's book, the same is true for his coding examples.
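A quick sketch (my own) of the kind of change beginners actually see:

    # Python 2:  print "hello", range(3)        ->  hello [0, 1, 2]
    # Python 3 equivalent:
    print("hello", list(range(3)))              # hello [0, 1, 2]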

Overall, I believe for newcomers it is more useful to learn Python 3. Given this background, no employer would reasonably believe you cannot learn Python 2 quickly. They will likely even be impressed you are future-proofed.


I honestly don't get how this is still even an issue, if you're stuck with libraries that haven't been migrated to Python 3.x you have my sympathy - but I have had 0 need to touch any Python 2.x code in over 3 years - in fact, all of the Python code I have deployed at work is Python 3.x only.

Python 3 is a non-issue in my eyes, the only reason I don't use it more is because there's only one other developer on my team that knows Python well enough to maintain anything if I get hit by a bus - Python is "weird" for everyone else because we're mostly a .Net shop, so unless I'm writing an ansible module I use Kotlin and Spring-Boot or ASP.NET so my team can easily pick it up in the "snuxoll gets hit by a bus" scenario.


Agree. A few months ago I stopped supporting py2 in anything I'm writing. Maybe it happens to work. It shouldn't be hard to port if there's a need for it. But nobody complained yet.


"It's sad to watch Python destroy itself because it's such a great language, but that seems to be where things are headed."

The author has a point here. I've been programming Python for about 10 years now. Really concerned about the future of the language. Languages like Node and Go are progressing rapidly, but what's happening to Python? I used to complain about Node, but with the latest ES6/ES7 improvements it's actually pretty decent. The difference in usability between npm and pip is also huge.

Python is still my favorite language, but they really need to fix this part of the ecosystem. Maybe it's time to release Python 4 that has a clean upgrade path from 2 and enough language improvements to encourage users to adopt it?


> what's happening to Python?

Great things, IMHO. Now that the painful changes of 2 to 3 are done (or nearly so), recent releases have focused on incremental and steady progress. Take a look at the "What's New" documents:

https://docs.python.org/3.6/whatsnew/3.6.html
https://docs.python.org/3/whatsnew/3.5.html

There is a lot of great stuff there without ripping up the world and breaking all your programs. For example, the new compact dict implementation in 3.6 is a great improvement. All Python programs can potentially benefit from the memory use reduction. Bug fixes and improvements to library code are great.

The transition from 2.x to 3.x was not handled well. The core Python team should have focused much more on making transitioning code easier. Allowing code that can easily run in both 2.x and 3.x (with suitable shims like 'six') should have been a focus early on. Disallowing the 'u' prefix on strings in 3.x is an example of a serious mistake (it was only restored in 3.3). The argument of keeping 3.x pure was wrong. It is much more important to make the transition easier than to keep 3.x extremely pure. That's something C++ has got right, although maybe they went too far the other way.

My small contribution to transitioning 2.x to 3.x is "ppython". See my github repo:

https://github.com/nascheme/ppython

Now that I've been programming in Python 3 for a few months, I find it pleasant. Writing programs that handle Unicode text correctly is easier. For small programs, it is easy to make them run under either Python 2 or 3.


Node is an interpreter for the v8 JavaScript engine and it uses a language named JavaScript. There isn't a language named node :)

I've done Python about as long as you but also like Golang for things Python is awful at.


Javascript is a colloquial name for what is formally known as ECMAScript. There isn't a language named JavaScript :)


You might just be being facetious, but in case you aren't, that's not true. The ECMAScript specification was based on JavaScript, not the other way around, and there are ECMAScript implementations that are not compatible with JavaScript (e.g. ActionScript, JScript). That is, JavaScript is one of many implementations of the ECMAScript standard; in and of itself it is not simply ECMAScript.


Nice try! ECMAScript is a standardized child of JavaScript, one with fewer warts and more awesome after lessons learned in javascript hell. ES6 will be nice.


I would argue the technicality you pointed out is a lot more minor than the OP's.

The OP was basically talking about the JVM as if it was a language.

*edit typo


Well, there is a Jim language, it's just not something you'd want to write by hand...


Netscape would like to have a word with you.


Python's adoption is exploding in the data science / machine learning fields, I don't see Go or Node do the same, so I guess it depends on which field you are considering. Regarding web development specifically, then yes Python is not the latest hot-new-thing.


For the vast majority of a language's users, the community and the libraries matter a lot more than the merits of the core language. Look at R. I'm not sure anyone would hold it up as a great core language. However, it is very successful and becoming more so, due to its awesome libraries and community.

numpy, scipy and matplotlib are really powerful. Go and Node don't have anything close to their capabilities.


The author completely misunderstands the following:

- The concept of Turing completeness

- What static typing is

- Why strings are not the same thing as byte arrays

- That the distinction between strings/bytes is a fix for a problem that exists with Python 2, not something that's "broken" in Python 3.

I understand why people might still use Python 2 if they have a legacy codebase to maintain, or need certain libraries that for whatever reason have not yet been ported to Python 3. However, if you're just beginning Python programming, or starting a completely new project in the language, then none of that matters - you might as well just start with the latest version.

The one point he gets right is that they should have maintained backwards compatibility by allowing Python 2 modules to run on the Python 3 VM. That was a major fuckup, the implications of which should be a lesson to every language designer of what to never do. It's sad to see the developers of Angular make essentially the same mistake.


This kind of compatibility is what has stifled Java and C# nearly to death.

Plus, native code quite often depends on particulars of the CPython VM.


> Also, it's been over a decade, maybe even multiple decades, and Python 3 still isn't above about 30% in adoption.

Python 3 was released in 2008. Eight years ago.


That's also about the time Zed switched to Python.

https://web.archive.org/web/20080305002455/http://www.zedsha...


I run Repl.it (https://repl.it) and I have an interesting stat. Python3 has been loaded 3.6x more times than Python2 (500k vs 140k) just in November, and it's been used in 5x more classrooms (https://repl.it/classroom).

* two caveats:

- we have Python3 listed on the languages page whereas you'd have to click around or search to get to Python2.

- we have slightly more features on Python3 like live pylint and ability to use modules


Definitely the listing thing. I remember trying to demo something for a friend and went looking for Python 2 on your site and it took a good few seconds to find it.


There's good criticism here, but I can't help but feel the rest is mostly Zed flipping out.

Regarding 2to3 not being flawless: agree, could be better.

Regarding core libraries: agree, but the same problem exists on Python 2 (different core libraries have different levels of compatibility for taking string or unicode as arguments), so I don't see how this can be a recommendation against 3 specifically.

Regarding new string types and handling, this has been discussed over and over again everywhere on the web, and boils down to people freaking out at unicode by default. This one does a good job explaining everything: http://www.diveintopython3.net/strings.html and this one does a good job at showing what you get w/ Py3 (not limited to new string types): https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-...

Regarding complaints about how many string interpolation methods there are: the `f` method relies on local scope (which is prone to introducing bugs), so it's actually pretty gimmicky compared to `.format` (which takes an explicit data structure as parameter). I would do the opposite and only teach `.format` to beginners, since it leads to better code.
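For reference, a quick sketch (mine) of the three styles in question; f-strings require Python 3.6+:

    name, score = "Ada", 97

    print("%s scored %d" % (name, score))         # printf-style
    print("{} scored {}".format(name, score))     # str.format, explicit arguments
    print(f"{name} scored {score}")               # f-string, pulls from local scope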

Regarding Turing completeness: fun.


> Regarding 2to3 not being flawless: agree, could be better.

The thing to remember though is that 2to3 will never be flawless. Since the underlying representation of some things has been split into different types, it is simply impossible to automatically fix some code.

For example, a function returning the first letter read from a file object is going to be completely different and more explicit in py3 than it was in py2. And the end result is better. You have to fix some code now - tough.
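To make that concrete, a hypothetical sketch (the names `first_char`/`first_byte` are mine) of the two explicit versions you end up choosing between in Python 3, where Python 2's open(path).read(1) covered both ambiguously:

    def first_char(path):
        with open(path, encoding="utf-8") as f:   # text mode: read(1) returns a str
            return f.read(1)

    def first_byte(path):
        with open(path, "rb") as f:               # binary mode: read(1) returns bytes
            return f.read(1)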


On the other hand, I never suggest Learn Python The Hard Way or Codecademy to any new programmer, because I don't want them to ask me down the line "why did you direct me to an obsolete Python course?". I think I'm not the only one. So I'm not sure what the net result of those decisions made by the two sites is: do they really help keep Python 2 alive?


Well, it seems to me like that's the whole "may kill Python" thing he's talking about.


I don't know, a week doesn't pass without a new deep learning Python framework being launched, Python dying doesn't just seem the case to me.


FWIW, I do recommend Zed's book. Many, many people have benefitted from Python The Hard Way.


I have too, but I learned Python in a time where learning Python 2 was still politically correct.


I attended a few newbie classes at pycon2016. A few of them suggested Learn Python The Hard Way, to which I pointed out how the author states to never try installing python 3 on the first page.


The author should essentially be banned from the community and his book no longer recommended after that, lest someone believe him.

You're essentially giving up great features like async keyword and proper encoding handling getting nothing in return. The few libraries that are still Python 2 only should be ignored.


You should essentially be banned from the python 2.7 community for saying these things.


I also saw that, but gave the author the benefit of the doubt that maybe the book was written years ago and not updated. I have completely switched to python3 for almost 2 years now.


> In the programming language theory there is this basic requirement that, given a "complete" programming language, I can run any other programming language. In the world of Java I'm able to run Ruby, Java, C++, C, and Lua all at the same time. In the world of Microsoft I can run F#, C#, C++, and Python all at the same time. This isn't just a theoretical thing. There is solid math behind it. Math that is truly the foundation of computer science.

What does this even mean?


I think he's trying to suggest that the Python VM should be separate, and have both Python2 and Python3 target it (and potentially other languages)

But who knows, his actual argument is an absolute mess


Someone doesn't realize they actually have to do work, even if a language is Turing complete.


One feature of Python 3 that I found really pleasantly useful recently (thank you SO :) ) was passing generators to the zip() function in a for loop, like:

for a, b in zip(generatorA(), generatorB()):

In Python 2 you have to import itertools to do this. I bring this up because I find examples like this all the time in Python 3 where core, modern language features like generators are more tightly integrated and easier to use. It's really a better version of the language and if you're starting fresh on a Python project you should do yourself a favor and use it.


Naw, `zip` is a builtin in py2. You might be thinking of `itertools.izip`?

https://repl.it/E5v6/2


zip in py2 returns a list of tuples (i.e. it eagerly consumes the iterators until StopIteration); in py3 it returns an iterator (consuming them on demand).

Try this in py2: https://repl.it/E5vd/0


I'm not near a real REPL, what's the diff? Eager vs. lazy? I think I remember reading something about that change awhile ago.


Exactly. Py3 zip is lazy like py2 izip.
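A small sketch of the difference, using two toy infinite generators (my own example):

    import itertools

    def naturals():
        n = 0
        while True:          # infinite generator
            yield n
            n += 1

    def squares():
        for n in naturals():
            yield n * n

    # Python 3: zip() is lazy, so pairing two infinite generators is fine
    # as long as you only take what you need.
    pairs = zip(naturals(), squares())
    print(list(itertools.islice(pairs, 4)))   # [(0, 0), (1, 1), (2, 4), (3, 9)]

    # Under Python 2, zip() here would try to build the whole list and never
    # return; itertools.izip() gives the same lazy behaviour there.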


Wow, this guy seems really pissed off.

But I must admit, it is nice having all those python developers distracted by working on another language and leaving 2.7 in its current state of perfection. I wonder if this has actually helped increase the use of 2.7, not having endless "improvements" tacked on. It sounds like this could be one of those weird laws of the computer age. I dub thee "Shaw's Law".


Remember the author is talking about helping beginners learn programming. Not you, HN user with 10,000 karma, and maintainer of five python projects.

There's a case study in Christensen's Innovator's Dilemma where a company tries to improve its milkshake by making it more chocolaty, by putting whipped cream on top, etc., but for some reason this doesn't increase sales. Then some guy goes into the restaurant to figure out “What job do people hire a milkshake to do for them?” and it turns out it's not the job the people making the milkshake thought[0].

I wish the core python developers would do the same and ask themselves not how to make the best milkshake, but why do people buy their milkshake in the first place?

[0]: http://www.itsma.com/a-better-way-to-segment/


Python was one of the first languages I tried that I actually enjoyed programming in.

I hope to continue leveraging my investment in learning python by using it for a long time. So far I haven't been forced to use Python 3. I've tried a few times but always found it easier to just go back to python (2.7).

I figure eventually I'll have to use Python 3, but at this point my life has been fine without upgrading.


As a writer of "quick n' dirty" webapps for internal use, I use python3 because philosophically, I want to help the world move forward, and the standard library has more stuff in it.


Ditto. I have submitted at least 10 PRs to fix Python3 bugs in OSS projects directly because of this. So perhaps I have made the world .000001% better.


Thank you!


What did you find difficult about Python 3?

I switched around 3.1, though granted I was still a novice programmer then and didn't have to port any legacy code. I had so many issues with Unicode in Python 2 (some may have been my own ignorance at the time, but with 3 a lot of those issues magically disappeared). Since text processing / NLP is my main focus, it was an easy decision to switch.


I think Python is amazing for getting small stuff up and running and for quick scripts, but it quickly becomes a nightmare as the project scales up.


Why would you say that? The main Python project I work on is 271k LOC... no problems here.


The lack of type checking makes it much harder to coordinate the right inputs and outputs. Doable, but the overhead is much higher than when you do have it.


I would guess that he just doesn't want to update his courses.

EDIT: https://learnpythonthehardway.org/book/ex0.html

Wow - python 2.5.1. He hasn't even updated to 2.7.

  Last login: Sat Apr 24 00:56:54 on ttys001
  ~ $ python
  Python 2.5.1 (r251:54863, Feb  6 2009, 19:02:12)


> The fact that you can't run Python 2 and Python 3 at the same time

I'm confused by this statement. I can start a python2 interpreter, background it, then start a python3 interpreter. Is there some other area that they inhibit each other in?


I think the author means that there's no single interpreter that supports both languages (a la DrRacket's #lang directive, perhaps?).


It sounds like the author's just miffed about incompatibilities in general -- specifically, that they can't (necessarily) import python 2 code whilst running under a python 3 interpreter and vice versa.


No, unless you really try to confound them together


The "Don't use it because people don't use it" argument bugs me. That's a valid argument when you're a CTO or Employee 0 picking a language that a thousand people will be using for fifteen years. Putting it on a tutorial site with good google juice, on the other hand, is a sad self-fulfilling prophecy.


This must be a serious attempt to troll the interwebs. https://meta.wikimedia.org/wiki/Cunningham's_Law "the best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer."

This REEKS of wrong answers. Turing completeness? Py3 in Py2? The argument that bytes are too confusing?


There was an amusing discussion last week in a Facebook group about this exact topic:

"So there were 7 applicants and I rejected 7 because they sent in their demo assigments in Python 2.7.

Come on people, its nearly 2017!"

https://www.facebook.com/groups/hackathonhackers/permalink/1...


Now that was 5 minutes of my life I'll never get back. The author commits all the crimes of "propaganda" and "FUD" he accuses others of.

The primary and possibly only reason Python 3 has not taken off as much as some would like is because 2.7 is that good.


This article was written in a state of hate. The author would do well to reread it and remove all the hate. He can hold the opinion that Python3 is not suited for teaching where Python2 is, but his arguments are in fact just opinions.

Another problem with the tone: no respect for his readers.

    "if *I* struggle to use Python's strings then you don't have a chance."


If you're at the level where you want to use "the hard way" stuff to learn to program he might have a point.


Zed just needs to be at war with something.


The thing about Zed is that his shtick was originally a joke: a parody of overconfident asshole programmers. But... well, like Orwell said, "He wears a mask, and his face grows to fit it."


I mean, his original unhinged rant[0] was 9 years ago now.

[0] http://harmful.cat-v.org/software/ruby/rails/is-a-ghetto

"I’ll add one more thing to the people reading this: I mean business when I say I’ll take anyone on who wants to fight me. You think you can take me, I’ll pay to rent a boxing ring and beat your fucking ass legally."


Apart from his argument that Python 3 is not turing complete, which has been torn apart in the comments, can someone evaluate the other arguments. Are the other criticisms valid? If not, why?


I think some are valid criticisms, but not valid enough to throw your toys out of the pram.

> Not In Your Best Interests

I say what's in my best interest and py3 fixed a lot of issues for me. I'm happy with the upgrade and dropped py2 for new projects this year.

> No Working Translator

A flawless translator is simply not possible; 2to3 is a best-effort tool. At some point you'll run into `def f(x): return x[0]`, and without doing full-program static analysis you won't be able to say what the right translation is. You have to do it manually where needed.
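A sketch of that ambiguity under Python 3 (`f_bytes` is just one hypothetical manual fix):

    # Under Python 2, indexing a byte string gave a one-character string;
    # under Python 3 it gives an int. Which translation is "right" depends
    # on how callers use the result, which 2to3 cannot know.
    def f(x):
        return x[0]

    print(f("abc"))     # 'a' -- unchanged for text
    print(f(b"abc"))    # 97  -- Python 2 callers expecting 'a' here are now broken

    def f_bytes(x):
        return x[:1]    # one possible manual fix, if callers really want bytes

    print(f_bytes(b"abc"))   # b'a'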

> Difficult To Use Strings

I understand that py2's "just works" strings are easier. But at the same time, having a non-ascii name, I know that most applications "just work" simply because nobody actually tested them outside of the ascii set. Py3 doesn't explode on encoding for fun - it basically says: you made an assumption that kind of worked so far by accident, but now you need to say explicitly what you want to do. I think it's a good thing, even if it takes people time to adjust.

> Core Libraries Not Updated

I don't even know what he means. This really needs an example, or it's just a meaningless rant.

> Purposefully Crippled 2to3 Translator

As above. Some things are just not possible to translate without knowing what the programmer meant. Some things in py2 strings happened to work a bit "by accident" and simply won't work in py3 for good reasons. For example py2 code:

    In [1]: "abc".encode('ascii').encode('ascii')
    Out[1]: 'abc'
It may look silly on one line, but there are lots of applications that relied on py2's "it just works" strings - and this is what was effectively run at some point in their code logic.


Adoption is a real issue, but one that can be helped by actually using Python 3 (and encouraging others to use it), assuming you think the language is better. Big organizations have more clout here, for example Arch Linux uses Python 3 as the default Python interpreter.

The string/byte confusion is an issue for people who like to pretend that they're the same. They're not. A string can be represented by different byte streams depending on the encoding. That by itself tells you that they are not the same and should not be treated as the same thing. But we have many decades of conflating the two, and old habits die hard.

It's convenient to treat them as the same because IO deals with bytes and `write(some_string)` is easy. JavaScript's `"42" == 42` is just as easy, and just as wrong. The solution is both simple and easy: convert between strings and bytes at the IO interface. Explicit is better than implicit.
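A minimal sketch of "convert at the IO interface" (`handle` is my own example; `conn` stands in for any socket-like byte channel): decode once on the way in, work with str everywhere inside, encode once on the way out.

    def handle(conn):
        raw = conn.recv(1024)                  # bytes in
        text = raw.decode("utf-8")             # boundary: bytes -> str
        reply = text.upper()                   # internal logic deals only in str
        conn.sendall(reply.encode("utf-8"))    # boundary: str -> bytes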


Alright. Point by point (although I'm considering both the beginner and "advanced" arguments for each point simultaneously):

> You Should Be Able to Run 2 and 3

That makes no sense. The whole point of versions is that they are different. Maybe he means 3 should be backwards compatible. That is potentially valid criticism. That said, AFAICT, the breaking changes are pretty reasonable [0][1]. Also, I'm not sure what world he is living in, but interop between Java and C/C++ through the JNI is not easy. As a math major I'm not sure what the "solid math" here is. I think it is ironic that the CLR is referenced here so extensively - it has made some pretty big breaking changes in the past (unlike Java, it decided to break backwards compatibility to have reified generics).

> No Working Translator

Translation is tough, _especially_ in languages like Python that don't have good static guarantees. The real problem here is probably that the code you want to produce with a translator should look as much like the original code as possible. That _is_ tough. The point is it isn't just what the program does in terms of inputs and outputs that you want to preserve. You still want it to look more or less the same.

> Difficult To Use Strings

This is not an incorrect point. I think Python 3 wanted to have both the performance of byte strings and utf8-by-default strings (finally!). The intention behind that seems reasonable. As a proponent of strong static typing, the error message he shows seems quite benign. That said, I understand how one might be unhappy about this.

> Core Libraries Not Updated

This may be the case, I'll defer to someone else. That said, this is yet another claim the author makes without much backing (this alone could be the subject of a blog post given the right backing). The quip about the Python community liking bad design seems a bit gratuitous.

[0]: https://docs.python.org/3/whatsnew/3.0.html
[1]: http://sebastianraschka.com/Articles/2014_python_2_3_key_dif...


So many inaccuracies and exaggerations that it feels sarcastic.


I'm getting the sense it is, but he doesn't make it obvious at all (hence it's funnier to him?)


> a more stable language like Go, Rust, Clojure, or Elixir

Uhhh..? A more trendy language? Are we really saying that Elixir is more stable than Python?


A lot of people seem to be confused by his Turing completeness argument. I think it's misleading and the phrase 'Turing complete' is a red herring. My guess is that what he was getting at was that other languages use 'compiler-mode' switches for compatibility-breaking upgrades so that you can use older code with newer code.

For example, you can write a program in the latest dialect of Free Pascal that includes a unit written in a 30 year old Turbo Pascal dialect and another unit written in a late-90s Delphi dialect. Even Javascript has `"use strict";` and PHP has `declare(strict_types=1);`. Python's solution was instead to say "Just go rewrite all the code you've ever written."

That said, claiming that the language isn't Turing complete because it doesn't include switches for compatibility is a bit of a stretch.


There was no way to keep compatibility given the string change anyway, as the meaning of string literals changed.

Porting to Python 3 is almost never a rewrite.


    from __future__ import unicode_literals, print_function, division, absolute_import


Pretty sure Zed's playing a joke on experienced programmers. He's clearly mentioned that the book is for complete beginners, and that's why the entire section for experienced programmers is crap.

Funny how many haven't seen it and are arguing pointlessly.


Is this actually a joke? I'm still kind of reeling. The typos, the technical inaccuracies, the logical fallacies, all bundled together... Is it trying to satirize something? If not, what is the blog post trying to do?


He definitely is...

I just checked his twitter to make sure I'm right. He's clearly joking.

https://mobile.twitter.com/zedshaw


Should be clearly stated on the page. Because Poe's law.


Ah but where's the fun in that? Just look at the number of people who've jumped the gun. Makes me wonder about the media and how we consume information in general...


and most importantly the book Learn Python the Hard Way only supports python 2.


A lot of the basics work in both, or with some minor tweaks.


The "unicode" string changes in python 3 is enough for me to avoid it, as they somehow managed to go from broken to brain dead.

Mandatory utf-8 strings would have been a reasonably nice solution, I think.


Interesting how this unicode string thing in py3 seems so similar to php6. Which was canned because it turned out to be a flawed design, and php7 continued where php5 left off (fortunately!)


It wasn't canned because it was a flawed design. It was canned because they couldn't make it work well enough technically. Encodings are still a total mess in PHP unless you are very careful (no runtime errors, you just run with it and maybe you'll see correct output or mojibake in the end).


This is so filled with #fakenews it's perhaps easier to pick out the one useful criticism, which is having to learn 3 string formatting styles, which is indeed a pain. For commercial work it seems wise to standardize on one in a given project/organization.

As for the rest, I'll pile on:

- python3 is eight years old, not decades, and this matters: Perl 4 => 5 was a similarly long transition, and for similar reasons.

- python3 is obviously Turing complete. Moreover, I'm not sure its lack of whatever the author thinks TC means actually matters in the question of whether python3 is worth choosing to learn or build systems with.

- very few languages can be used "side by side" with each other, and I wasn't aware that this was a common use case. If you have python2 code, there are various ways to invoke it without porting, e.g. subprocess, a web service, etc. Obviously, there are various costs, e.g. performance, system mgmt, debugging, etc.

- high probability of failure? Strange, but I see the exact opposite, with more and more projects supporting ONLY python3, i.e. the tide is turning.

Anyway, this is scaremongering and #fakenews so YMMV.


FUD, really:

> This document serves as a collection of reasons why beginners should avoid Python 3

> The Most Important Reason

Alleged 30% adoption. No sources for this, really; just a number out of the blue. In my experience, most companies have migrated their internal software to python3, or are doing so. A great deal of desktop apps are already there too.

> You Should Be Able to Run 2 and 3

Like so many other languages/libraries, a major version breaks compatibility. But hey, guess what? Python actually allows you to install and run python2 and python3 at the same time - most distros actually do this!

> No Working Translator

Newbies have no legacy codebase to migrate. This is really off-topic. New devs can use code that is compatible with both, or just plain python3.

> Difficult To Use Strings

Only if you're coming from python2, but definitely not for new users, who have no background as to "how it was before".

I got bored of refuting the rest of the points, but none of them have any validity, especially for beginners.

Also, I work at a university where we teach python to new students. They've generally had a much easier time learning algorithms and THEN migrating to C in the second semester (and they actually tend to understand everything more clearly) than using C as a starter. I've also never seen any of the above-mentioned items be an issue.


I've helped a few people learn Python. Nobody has ever gotten hung up on Unicode strings. We've even done string-y stuff such as talking to hardware via pySerial.

We've modified a couple of libraries ourselves, typically stuff that was written a long time ago and abandoned. Other than that, we've never really run into a problem where the choice of one version of Python over the other was an obstacle.


Treehouse has the opposite view for beginners:

http://blog.teamtreehouse.com/python-2-vs-python-3

Go figure. It's an enduring debate with strong opinions on opposing sides.

The whole topic feels a little flame-baity to me.


If it was so easy to do some of these things that the author wants, why hasn't someone already done it?


I hope we can just call Python 2.8 -> Python 4 and do a bridge to import the good things about Python 3. Agreeing or not agreeing with the author points is irrelevant, Python is slowly and unfortunately dying because of this schism.


Never. Python 4 would be another breaking change that is not a step back. Only over the BDFL's dead body will this happen.


He works at Dropbox, who use Python 2. Having experienced the pains of transition himself, he might yet see the light.


Am I the only one who thinks it's possible this is a parody?


A good enough parody is indistinguishable from extremism. Poe's law.


>There is a high probability that Python 3 is such a failure it will kill Python.

This seems unlikely. Python 2 is a very popular language. There is currently more than one implementation. So no matter how weird the Python 3 thing gets, Python 2 should be available in some form.

Python 3 is actually helping Python 2 here. It attracts those who want to do cool new language things, leaving Python 2 as a stable target.


Much unjustified criticism from a ruby developer! I must say I'm surprised there aren't any swear words in the essay, well done Zed.


Aw c'mon, Zed. Python 3 is what I prefer to put into production, and it's also going to get the most love from the community.


Frankly the article is whiney and doesn't provide much of a case except, "I don't like it, and strings are bad."


Except of course Python 3 isn't killing Python at all.

Just look at the TIOBE index: in July 2016 it hit #4, its highest spot ever.


I remember having predicted something like this when he started cozying up with the Python community...

Python 3 is a ghetto...


"Currently you cannot run Python 2 inside the Python 3 virtual machine."

does this mean he tried to run python 2 using the python 3 interpreter? no shit it doesn't work ._. that's not how turing machines work


With all the talk about how the Python group is promoting 3 for its own benefit, I'm curious if in the next few weeks we will see the author's agenda change to "Learn !python the hard way"...
