That's nonsensical since it solves a completely unrelated problem.
2to3 was conceived as a one-shot migration tool for Python 2 codebases. Not as a way to build cross-version Python, and not as something to use repeatedly (which is why it is relatively simplistic and fallible), just as a way to take an existing codebase and handle the first 90% of migrating to Python 3, leaving you with the second 90%.
2to3 was _conceived_ of as a migration tool, as if evolutionary lineages just all stopped, changed their genome and restarted. Except it only refactored some easy stuff and let the rest fall on the floor.
Six has had way more impact than 2to3. It was only because core Python said running a single 2/3 codebase was un-Pythonic that people didn't embrace it sooner. Running a single codebase across both 2 and 3 is now the de facto technique.
Writing in 3 and "compiling down" to 2 gets people writing 3, which is the point, even if they can't run CPython 3.
There's nothing to be charitable about, GP's comment is rather clear and speaks for itself.
> 2to3 was _conceived_ of as a migration tool
And it still is one, so far as I know, which is actually part of the issues with it.
> Except it only refactored some easy stuff and let the rest fall on the floor.
I'm rather well aware of that, which you'd have noted if you'd actually read my comment.
> Six has had way more impact than 2to3. […] Running a single codebases across both 2 and 3 is now the defacto technique.
Sure, but again that has nothing whatsoever to do with the inanity of ranking this tool against 2to3.
> Writing in 3 and "compiling down" to 2 gets people writing 3 which is the point, even if they can't run CPython3.
You do realise that's complementary to 2to3, right? A big issue with 2to3 is that once you've converted your P2 codebase to pure 3 (not a common subset of 2 and 3) you either have to maintain two different branches, making packaging and backporting difficult, or you leave your P2 users out in the cold.
Once again, the purpose of 2to3 is not to be run regularly to generate a Python 3 release from a Python 2 source.
If I were writing a library that needed both 2 and 3 compatibility, I would prefer to write Python 3 code and then run 3to2 on it, rather than going the other way. My own solution is to fork the library, provide only bug fixes for the 2.x version and do all new development in Python 3 only. Porting has taken some effort but not as much as I feared. In retrospect, I'm happy to have invested the time.
It is completely nonsensical in the context it is made in, the purpose of TFA has nothing to do with the purpose of 2to3, it makes no sense to "rank" them.
> There was a tool 3to2 that took Python 3 code and produced Python 2 code. You had to write in a subset of Python 3 but the idea was sound, I think. Python 3 is more strict in ways than 2 and so downgrading the code was easier (e.g. no implicit str/unicode coercions). It looks like 3to2 is not maintained but I liked the idea.
That still has nothing to do with 2to3, whose intended purpose once again is as a helper for a one-shot conversion of an existing Python 2 project to Python 3.
> If I were writing a library that needed both 2 and 3 compatibility, I would prefer to write Python 3 code and then run 3to2 on it, rather than going the other way.
And guess what? If you're starting from a Python 2 project you can use 2to3 to convert it to Python 3, then use some other converter to provide P2 releases.
I suspect this may be easier to achieve with JS than other languages because its prototypal nature makes it easy to compose dependencies. However, there is certainly a lot that other communities can learn from the JS community's solutions to this problem.
In theory more than in practice. For example, if you use array destructuring (`[x, y] = something`) and Babel cannot verify that `something` is always an array, it inserts a 664-character helper function that handles generators and other things. That adds performance and size overhead (grep for _slicedToArray in your compiled JS files), when all you really want is `x = something...`
The only reason the Python 2/3 issue exists is because Python 3 broke backwards compatibility in just enough ways that a lot of people in the community didn't agree with. This isn't a thing in Ruby.
So the goal should be to have codebases in Python 3.
> python3 should be the input and not the output
How would this accomplish that goal? All the codebases would be in Python 2 then. I bet that removing barriers to run Python 2 code would cause it to stay around longer.
I never was able to find anyone talking about that goal with mypy, but it looks like this tool may be a solution.
A writeup on how you handled mapping bytes, bytearrays, unicode, etc, to 2.7 constructs would be interesting.
In code that supports both 2 and 3, I end up with ugly stuff testing sys.hexversion to deal with 3rd party libraries, like pyserial, that expect str in v2, bytearrays in v3.
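A version-agnostic wrapper is one way to keep those hexversion checks in one place. A minimal sketch (the `to_wire` helper name and the ASCII assumption are mine, not part of pyserial's API):

```python
import sys

def to_wire(data):
    # Hypothetical helper: pyserial's write() wants str on Python 2
    # and bytes on Python 3. Since Python 2's str *is* bytes, encoding
    # any text type down to bytes covers both interpreters.
    if isinstance(data, bytes):
        return data
    return data.encode("ascii")

print(to_wire("AT\r\n"))  # already-encoded input passes through unchanged
```

The same trick works for the reverse direction (decoding what the device sends back), at which point the version checks live in exactly two small functions instead of being scattered through the codebase.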
As I mentioned elsewhere, I created "ppython" to handle these two porting problems. The source code is on github: https://github.com/nascheme/ppython . If someone needs a Windows binary, I could build it if you ask politely. ;-) For my project, it has been a useful tool to speed up porting.
Porting code is not trivial and there is a huge amount of Python 2 out there. I wish more effort had been spent building tools to help porting of code. Even simple things like disallowing the 'u' prefix on strings in Python 3 was a big mistake. Mostly those things have been corrected but it is still going to take a very long time to move the majority of the community to Python 3. There will be Python 2 running for the next 50 years easily, I'm sure, maybe 100 years. At least Python is open source and you will not be stranded like VB developers were when MS drastically changed Visual Basic.
    import sys

    if sys.version_info >= (3,):
        ...  # Python 3.x code

    if str is bytes:
        ...  # Python 2.x code
"Why should I switch to 3.x?" You haven't provided a reason to switch.
I don't know if it was your intent, but your question assumes that Python 3.x is the "default", and that there is some sort of obligation to use it instead of 2.x. That's a premise that a lot of people do not share.
As an example, they may not view 3.x as an "upgrade", but as a different language. So you might as well ask anyone doing work in one language "What reason is there for you to still use language X? Why not use language Y?"
As long as some team is willing to support Python 2.7 (and possibly backport non-breaking features), 2.7 will live on, and there is no reason it should go away. The only strong reason for many to switch to 3.x is "I have a library I need to use that is supported only in 3.x". Or "I need to hire developers, and I can't find people who know 2.x, but I can find those who know 3.x"
Languages are tools. As long as the tool is more than adequate for the task, the burden is on others to justify a change in tool.
Writing new code in Python 2 is completely nuts and just makes everyone's life harder (including your own).
And when it makes someone's life harder, they will change. If by then it's too late and the consequences hit them, they will have themselves to blame.
This, really, is the only reason for people to change to Python3. Yet people keep evangelizing all its great features, etc as if the existence of a much better language suddenly obligates everyone to switch to it.
It's fine to point out potential repercussions of not switching now. But moralizing it is counterproductive.
Just to put a stake in the ground, when did FreeCAD start? When did 3.0 come out, along with the announced end-of-life date for 2.7?
Python 3.0 was released in 2008. It looks like the port of FreeCAD to Python 3 started in 2015.
Because Python 3.X is not backwards compatible and we have millions of lines of production code in Python 2.X.
I believe the common subset of Python 2 and 3 was created to ease that transition.
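That common subset is mostly reachable through `__future__` imports; a small sketch of a module that runs unchanged on 2.7 and 3.x:

```python
from __future__ import print_function, unicode_literals, division

# With these three imports, print is a function, string literals are
# text (unicode), and / is true division on both interpreters.
print("1/2 =", 1 / 2)   # 0.5 on Python 2.7 and 3.x alike
greeting = "héllo"      # a text string on both
data = b"\x00\x01"      # a byte string on both
```

The remaining differences (renamed stdlib modules, iterator-returning builtins) are what libraries like six paper over.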
That's far from the only pitfall. For instance some of the P2 stdlib relies on implicit coercion to bytestrings whereas the Python 3 version has been properly converted to unicode (I discovered that when I tried to disable implicit coercion in one of the codebases I work with for cleanup/migration purposes).
So cross-version string handling is not just a matter of properly splitting bytestrings and textstrings, but also finding out where "native strings" need to remain.
I hate it because Python2 is a flaming garbage heap, and 2.7 is what happens when you piss on a flaming garbage heap to put it out, whereas Python 3 is actually a nice language to work in, but until OS distributions get their act together on the Python front, some stuff is going to continue sucking.
I worked in Python 2 daily for about a year and a half (doing side stuff in 3), and switched over to Python 3 a few months ago. It's just not that big of a deal. Maybe it's because I'm pretty shielded from the madness of strings vs. bytes (or just import from future).
    import queue            # Python 3
    import Queue as queue   # Python 2
They couldn't even consistently name portions of the standard library. Python 2 has a few random things where the first letter is capitalized while 99% of everything else isn't.
There's all sorts of things like this, where poor design decisions cause you to sit there with question marks appearing over your head. That 10% takes Python from being a joy to work with to being incredibly frustrating. I would be happy if I could just work exclusively in 3, but making sure things are backwards compatible to 2.7 is a frequent source of driving me up the wall.
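For the queue/Queue rename specifically, the usual 2.7-compatible workaround is a try/except import; a sketch:

```python
try:
    import queue            # Python 3 name (lower-case)
except ImportError:
    import Queue as queue   # Python 2 name (capitalized)

# After this, the rest of the module uses the Python 3 spelling.
q = queue.Queue()
q.put("item")
print(q.get())  # → item
```

It works, but having to wrap a plain import in exception handling is exactly the kind of paper cut being complained about here.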
>Maybe it's because I'm pretty shielded from the madness of strings vs. bytes (or just import from future).
If you lose this shielding it will help you understand my vitriolic response as well.
The lack of UnicodeDecodeErrors.
Just because there is a 90% overlap doesn't mean one can't be a pile of trash (I think that's a bit harsh though). The improvements under the hood are huge (unicode, no more old-style classes, speed improvements, etc.) and, together with the syntactic sugar, mean that as time goes on Python 2 looks worse and worse.
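To illustrate the unicode point with a sketch: Python 2 implicitly coerces between str and unicode via ASCII, so mixed text/bytes code fails only when non-ASCII data finally shows up, often far from the actual bug. Python 3 refuses the mix up front:

```python
# Python 3 rejects mixing text and bytes immediately, at the point of
# the mistake, instead of deferring to a runtime UnicodeDecodeError:
try:
    "café" + b"!"
    mixed_ok = True
except TypeError as exc:
    mixed_ok = False
    print("refused:", exc)
```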
Not all of us have hobby-sized projects on the go.
I'd really like to understand this from a technology transition management standpoint.
If you'll allow a strained analogy, the 2.7 to 3.x transition appears in retrospect like an exercise in herding cats, because the transition was done using cat-herding style incentives. What should have been done differently to make the transition into greyhounds chasing a rabbit? What should the rabbit have been?
Any technology that lives long enough eventually has to transition its customer base. I'm trying to learn to identify rabbits.
Now, you have 2.7 apps that are very mature, stable and reliant on lots of little things. Okay, big job.
Now let's throw another wrinkle into this. You can pay good money to develop your next feature, or to port your system to 3.x. One adds real value to your customers and improves your bottom line. One lets you say you are on 3.x and gets nods of approval from random programmers.
What do you think a business is going to do? Exactly.
When will we eventually port? Sometime shortly after 3.x becomes the overall standard. THIS is actually what we are seeing right now, which is great, but that hasn't been the case up until the last year or so.
>Any technology that lives long enough eventually has to transition its customer base.
While true, this costs money, time, and effort. So you better be damn well sure there is a good ROI that is something more than "This code that nobody but programmers see is more elegant and properly structured".
There's a reason banks are still using COBOL after all...
K&R C no longer compiles. Not that the C to C++14 transition is what I would call an example of greatness, but it is somewhere along the success spectrum.
Seriously, you will someday need to transition your customer base. How do you plan to do it?
foo@virt-ubuntu:~/c$ gcc --version
gcc (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
foo@virt-ubuntu:~/c$ cat 'k&r.c'
int main(argc, argv)
int argc; char *argv[];
{ return 0; }
foo@virt-ubuntu:~/c$ gcc 'k&r.c'
foo@virt-ubuntu:~/c$
Slightly tangential but system ruby is stuck at 2.0.0p648 (which is well into unsupported) and perl at v5.18.2 (which I don't know the support status of, but it is old enough to wonder).
They might as well rip them out and make them an optional package like CLI dev tools for possible backwards compatibility needs (easily installable either with something like xcode-select --install or the java/javac stubs).
I don't really see the benefit of a Python 3.6 to 2.7 transpiler/compiler though. More realistically I would require that all new code be both 2 and 3 compatible, for when the eventual migration will happen. This isn't necessarily an easy task either, but then you're only dependent on the Python project, not some random transpiler that might not see further development.
- I like lambda a, b: a + b syntax better than lambda a_b: a_b + a_b
- I prefer map/reduce/filter to return lists rather than iterable
- I prefer dict.keys/values/items to return sets/lists rather than iterables, unless I call dict.iter[keys/values/items]
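For what it's worth, the Python 3 views aren't just lazy, they are live; a quick sketch:

```python
d = {"a": 1, "b": 2}
keys = d.keys()       # Python 3: a view over the dict, not a snapshot list
d["c"] = 3
print(sorted(keys))   # ['a', 'b', 'c'] — the view reflects the new key
```

A Python 2 style `d.keys()` list would still hold only `['a', 'b']` here, which is the behaviour being missed, but also the behaviour that costs a full copy on every call.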
It already works this way.
Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> add = lambda a, b: a + b
>>> add(1, 2)
3
You can produce a list from any iterable by passing it to `list()`. You cannot take a materialized list and make it lazy though.
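A quick sketch of both halves of that point, using `map` as the iterable:

```python
squares = map(lambda x: x * x, range(4))  # lazy in Python 3
print(list(squares))   # [0, 1, 4, 9] — list() materializes it on demand
print(list(squares))   # [] — but the iterator is now exhausted
```

The one-shot nature of the iterator is the trade-off: you get laziness for free, and pay with a `list()` call when you actually need random access or re-iteration.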
> I prefer dict.keys/values/items to return sets/lists rather than iterables, unless I call dict.iter[keys/values/items]
Why? Sets are mutable. What happens when you mutate dict.values? It doesn't make sense.
>>> add = lambda a, b: a + b
>>> t = (1, 2)
>>> add(*t)
3

# Python 2 only (tuple parameters were removed in 3):
l = lambda (a, b): a + b
With regard to the second point, I also would have liked a more "gradual" step there, I find myself (especially in REPL environments) often doing `list(map(sth, sth))`. A `mapi`, `filteri` or something like this would probably not be zenny enough.
Neither is a very strong point in favour of Python 2 over 3.
These seem like OK reasons, but I find myself continuously needing the destructuring idiom in lambdas, while these introspection and documentation concerns are just not there for me.
However tuple-parameters being an exception to rules (such as args and kwargs not being usable with them) is more compelling. And allowing destructuring only in lambdas would be weird, because they wouldn't be normal function objects any more.
Unless the destructuring was only syntactic sugar for the translation in the PEP? But that's again inconsistent.
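For reference, the translation described in PEP 3113 (which removed tuple parameters) is mechanical; the unpacking just has to be spelled out, or moved to the call site:

```python
# Python 2 tuple parameter:   lambda (a, b): a + b
# Python 3: index into the single argument by hand...
add = lambda ab: ab[0] + ab[1]
print(add((1, 2)))    # 3

# ...or unpack at the call site with a star:
add2 = lambda a, b: a + b
print(add2(*(1, 2)))  # 3
```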
Python 2.7 has imap and ifilter in itertools. Python 3 could have imported them into global namespace to simplify usage without breaking map/filter functions.
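That shim is easy to write yourself today for 2/3-compatible code; a sketch that shadows the builtins with the itertools names on Python 2:

```python
try:
    # Python 2: pull the lazy versions into scope under the familiar names
    from itertools import imap as map, ifilter as filter
except ImportError:
    pass  # Python 3: map and filter are already lazy

evens = filter(lambda x: x % 2 == 0, range(10))
print(list(evens))  # [0, 2, 4, 6, 8] on both interpreters
```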
This. I wonder if you could have a switch to make ipython print out the list instead of the iterator object at the REPL.
uh, `lambda a,b: a + b` works in 3.6...
> I prefer ... to return lists rather than iterable:
Why? I'm just curious. Iterables are much more flexible than lists, unless you need random access... at which point list(...) works pretty well.
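The flexibility shows up with unbounded input, which a list-returning map could never handle; a sketch:

```python
from itertools import count, islice

# count() is infinite; a lazy map happily composes with it,
# and islice takes just the prefix we need.
squares = map(lambda x: x * x, count())
print(list(islice(squares, 5)))  # [0, 1, 4, 9, 16]
```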
I wonder why "lambda (a, b): a+b" isn't allowed.
That works perfectly well in Python 3; you're thinking of `lambda (a, b)` aka `def foo((a, b))` aka tuple-parameter unpacking.
> - I prefer map/reduce/filter to return lists rather than iterable
2. just wrap them in a list() call?
> - I prefer dict.keys/values/items to return sets/lists rather than iterables, unless I call dict.iter[keys/values/items]
You are aware that Python 3's key views and item views are sets but P2's are just lists, right?
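A quick sketch of what the Python 3 views do and don't allow:

```python
d = {"a": 1, "b": 2}
keys = d.keys()

# Key views support set algebra...
print(sorted(keys & {"a", "c"}))   # ['a']

# ...but they are read-only views, not mutable sets:
try:
    keys.add("x")
except AttributeError:
    print("views are read-only")
```

So the "sets are mutable" objection doesn't really apply: the views borrow set *operations* without borrowing set *mutation*.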
Python 2 is like FORTRAN. It might not be sexy anymore but it's not going anywhere.
Except Fortran is, and is going to be, further developed. (So, if a new useful programming concept appears, or a major design mistake is discovered, it always can be patched.)
Python 2 is going to be abandoned in 2020. Python 3 will supersede it.
But Python 2 is Free Software, so as long as people like it, they can keep improving it:
Elsewhere in the thread xrange has mentioned Tauthon. And there are more Python 2 interpreters than just CPython.
PyPy may stop supporting 2 syntax officially, but I doubt it. Since it never changes, there's next to no overhead to keeping it available.
And further, if your language isn't changing then maintenance and debugging become much easier. Conceivably the codebase(s) can asymptotically approach 100% correctness.
So I don't think there's much to be gained.
The 3 syntax and new features are not massively better than the sweet spot hit by 2, they're not even incrementally better. They're just massively more complicated.
So the cost/benefit ratio of Python 2 to 3 just isn't there. People are changing up because it's "the done thing" not because they really gain anything.
I was surprised as hell when I realized I wouldn't use Python 3, but it is a rational and considered opinion.
Does it mean I can only use f''-strings if the target version is 3.5? Why such a limitation?
That's my guess, anyway.
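A sketch of what such a rewrite would presumably look like (my guess at the translation, not the tool's documented behaviour):

```python
name = "world"

# 3.6-only syntax a 3.6→2.7 tool would have to rewrite:
print(f"hello {name}")

# a plausible pre-3.6 translation:
print("hello {}".format(name))
```

Both lines print the same thing, which is why a mechanical rewrite is feasible; the interesting cases are f-strings with nested expressions and format specs.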
Python 2 is near end of life anyway.
So rustc is a transpiler when using the emscripten backend but not when using the native backend? How does that even make sense?
Like, in most American homes, there's a bathtub with a showerhead in it. When we're using the showerhead, we'll say, "I was in the shower..." When we're using the faucet, we'll say "I was in the bathtub." This sort of thing happens all the time. It makes plenty of sense that something is a transpiler when it's transpiling and not when it isn't.
Compilers are computer programs that translate a program written in one language into a program written in another language.
Example of a compiler in action.
$ cat hello.c
#include <stdio.h>

int main(void) {
    printf("hello world\n");
    return 42;
}
$ cc -O3 -S hello.c -o -
        .macosx_version_min 10, 12
        .globl  _main
        .align  4, 0x90
_main:                          ## @main
        pushq   %rbp
        .cfi_offset %rbp, -16
        movq    %rsp, %rbp
        leaq    L_str(%rip), %rdi
        callq   _puts
        movl    $42, %eax
        popq    %rbp
        retq

L_str:                          ## @str
        .asciz  "hello world"