What's Coming in Python 3.8 (lwn.net)
638 points by superwayne on July 17, 2019 | 536 comments

Despite controversy, walrus operator is going to be like f-strings. Before: "Why do we need another way to..." After: "Hey this is great".

People are wtf-ing a bit about the positional-only parameters, but I view that as just a consistency change. It's a way to write in pure Python something that was previously only possible to say using the C api.
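A minimal sketch of the new syntax (the `clamp` function is a hypothetical example, not from the article): everything before the `/` in a def can only be passed positionally, matching how C-implemented builtins like `len` and `pow` already behave.

```python
# Positional-only parameters (PEP 570, Python 3.8+): names before "/"
# cannot be passed as keywords.
def clamp(value, lo, hi, /):
    return max(lo, min(value, hi))

print(clamp(5, 0, 3))          # works: positional
# clamp(value=5, lo=0, hi=3)   # TypeError: positional-only
```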

f-strings are the first truly-pretty way to do string formatting in python, and the best thing is that they avoid all of the shortcomings of other interpolation syntaxes I've worked with. It's one of those magical features that just lets you do exactly what you want without putting any thought at all into it.

Digression on the old way's shortcomings: Probably the most annoying thing about the old "format" syntax was for writing error messages with parameters dynamically formatted in. I've written ugly string literals for verbose, helpful error messages with the old syntax, and it was truly awful. The long length of calls to "format" is what screws up your indentation, which then screws up the literals (or forces you to spread them over 3x as many lines as you would otherwise). It was so bad that the format operator was more readable. If `str.dedent` was a thing it would be less annoying thanks to multi-line strings, but even that is just a kludge. A big part of the issue is whitespace/string concatenation, which, I know, can be fixed with an autoformatter [0]. Autoformatters are great for munging literals (and diff reduction/style enforcement), sure, but if you have to mung literals tens of times in a reasonably-written module, there's something very wrong with the feature that's forcing that behavior. So, again: f-strings have saved me a ton of tedium.

[0] https://github.com/python/black
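To make the contrast concrete, a tiny hedged comparison (the values are hypothetical): the two lines below produce identical output, but only one of them forces you to repeat every name.

```python
bucket, key = "my-bucket", "reports/2019-07.csv"   # hypothetical values
old = "File exists, not uploading: {bucket}, {key}".format(bucket=bucket, key=key)
new = f"File exists, not uploading: {bucket}, {key}"
assert old == new   # same result, far less ceremony
```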

For me, the huge thing about f-strings was that invalid string format expressions become a compile-time error (SyntaxError).

  print('I do not get executed :)')
  print(f'empty braces: {}')

  File "stefco.py", line 2
  SyntaxError: f-string: empty expression not allowed
This has the pleasing characteristic of eliminating an entire class of bug. :)
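A sketch of how you can verify this yourself: compiling source with a bad f-string fails during parsing, so the print statement before it never gets a chance to run. (The exact error wording varies across Python versions.)

```python
# The bad f-string is rejected at compile time, before anything runs.
src = "print('I do not get executed :)')\nf'{}'"
try:
    compile(src, "stefco.py", "exec")
    raised = False
except SyntaxError:
    raised = True
print(raised)
```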

> If `str.dedent` was a thing

Have you looked at textwrap.dedent?

Yes! `textwrap.dedent` is great. On further reflection `wrap` is actually more useful for this kludge (see below). But my point is that that's a whole import for a kludge. Compare the f-string ideal (by my standards):

  raise ValueError("File exists, not uploading: "
                   f"{filename} -> {bucket}, {key}")
...which is short enough that it's readable, and it's clear where exactly each variable is going. It's the single obvious solution, so much so that I don't spend a second thinking about it (very Pythonic!). Compare it to using `str.format` with the same continued indentation:

  raise ValueError(("File exists, not uploading: {filename} -> "
                    "{bucket}, {key}").format(filename=filename,
                                              bucket=bucket, key=key))
Even this minimal example looks terrible! Remember that a lot of exceptions are raised within multiply-nested blocks, and then format pushes things farther to the right (while also ruining your automated string-literal concatenation, hence the extra parentheses), leaving very little room for the format arguments. You can use a more self-consistent and readable indentation strategy:

  raise ValueError(
          ("File exists, not uploading: {filename} -> "
           "{bucket}, {key}")
          .format(filename=filename, bucket=bucket, key=key))
This is unquestionably more pleasant to read than the former, but it's 3 times longer than the simple f-string solution, and I would argue it is not any more readable than the f-string for this simple example. My point with having a `str.wrap` builtin is that at least you could use the docstring convention of terminating multi-line strings on a newline, which would get rid of the string concatenation issues while leaving you a consistent (albeit diminished by the "wrap" call) amount of rightward room for the `format` args:

  raise ValueError("""File exists, not uploading: {filename} ->
                   {bucket}, {key}
                   """.format(filename=filename,
                              bucket=bucket, key=key))
Maybe a little bit better than the first one, especially if you're writing a longer docstring and don't want to think about string concatenation. But still a kludge. You can use positional formatting to shorten things up, but the fundamental weakness of `str.format` remains.

Here's a clean way to do that:

  str_fmt = "File exists, not uploading: {filename} -> {bucket}, {key}"
  fmt_vals = dict(filename=filename, bucket=bucket, key=key)
  raise ValueError(str_fmt.format(**fmt_vals))

This is somewhat cleaner, and I also use this idiom when things get ugly with the inline formatting shown above. But my point is that none of these are very elegant for an extremely common use case. Throw this block in the middle of some complex code with a few try/except and raise statements and it still looks confusing. Having two extra temp variables and statements per error in a function that's just doing control flow and wrapping unsafe code can double your local variable count and number of statements across the whole function. AFAIK, there has been no elegant solution to this common problem until f-strings came around; the only decently clean one is using printf-style format strings with the old-style operator, but outside of terseness I find it less readable.

Alternate “clean” way, but sort of hacky.

  raise ValueError("File exists, not uploading: {filename} -> {bucket}, {key}".format(**locals()))

f-strings are nice, but when the problem is indenting too far, what if you just... didn't do that?

  raise ValueError(("File exists, not uploading: {filename} -> "
      "{bucket}, {key}").format(filename=filename, bucket=bucket, key=key))

Was the walrus operator really worth "The PEP 572 mess"? https://lwn.net/Articles/757713/

That post makes a few things very clear:

* The argument over the feature did not establish an explicit measure of efficacy for the feature. The discussion struggled to even find relevant non-toy code examples.

* The communication over the feature was almost entirely over email, even when it got extremely contentious. There was later some face-to-face talk at the summit.

* Guido stepped down.

It may not have been a fair trade, but then it wasn't a trade in the first place. Those all seem to be problems with the process itself, meaning that it could have happened any time a contentious feature came up, this just happened to be the one to trigger the problem.

I agree that a mess looked (sadly) inevitable based upon that post and some of the other surrounding context. E.g. Guido citing Yoda conditions, how dumb that game can be, and getting ignored.

But just because the feature shipped and design-by-committee is upon us doesn't mean we need to accept the outcome. Why couldn't there have been a more evolutionary path for this feature? For example, there is surely a way to write a prototype library to accomplish the same effect with slightly different syntax. (How about a single function `walrus(a, b)` that does what `:=` does?). Then let real user adoption drive the change. Maybe somebody will discover case statements from Scala and want that instead.

I hope the committee models some amount of their work after WG21. C++ hasn't evolved so effectively because some people were magic visionaries. For the past decade, C++ has mostly ridden on the proven success of boost. And skipped a lot of the parts of boost that suck.

I'd used f-string-like syntaxes in other languages before they came to Python. It was immediately obvious to me what the benefit would be.

I've used assignment expressions in other languages too! Python's version doesn't suffer from the JavaScript problem whereby equality and assignment are just a typo apart in, eg., the condition of your while loop. Nonetheless, I find that it ranges from marginally beneficial to marginally confusing in practice.

I love string interpolation! But this seems to take it to a bizarre place just to save a few keystrokes. Seriously, how is f"{now=!s}" substantially better than f"now={str(now)}"?

Ergonomically, I see little benefit for the added complexity.

It really feels like it's very explicit at this point that you want to cast whatever value interpolated into your format string to a string...

I don't want a type error for the most clear use case, in the same way I don't want one for print, because if I wanted a behaviour other than the basic string representation then I would still need to call something differently anyway.

Given that explicit control of the __str__ method is also baked into the language it's also very clear what to expect.

I like type errors when handling something like '1' + 1 because the JavaScript alternative is surprising and hides bugs. No surprises that a string formatter would coerce to string for me automatically (although I get that's maybe just personal feeling).
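A tiny sketch of the distinction being drawn (the values are hypothetical): interpolation coerces, plain concatenation still type-errors, and both behaviors are arguably the right default for their context.

```python
n = 1
print(f"value: {n}")   # the f-string coerces via format()/str() for you
try:
    "value: " + n      # bare concatenation still raises, as it should
    raised = False
except TypeError:
    raised = True
```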

I love the f strings, they have made my codebase cleaner and clearer. Definitely a pythonic win.

In GP's example, the call to str() isn't there to make sure you get a string instead of causing a type error; it's there to substitute str() for repr() (which also returns a string). Considering that this feature is mainly meant to make 'print debugging' more convenient, it makes sense that repr() is the default choice.
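A short illustration of that default, using a hypothetical timestamp: the bare `=` specifier gives you repr(), and `!s` is what switches it to str().

```python
import datetime

now = datetime.datetime(2019, 7, 17, 12, 0)  # hypothetical value
print(f"{now=}")    # now=datetime.datetime(2019, 7, 17, 12, 0)  (repr is the default)
print(f"{now=!s}")  # now=2019-07-17 12:00:00                    (!s swaps in str)
```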

Readability counts, though I agree Explicit is better than implicit.

Special cases aren't special enough to break the rules, Although practicality beats purity.

I agree that the simple examples don't show much benefit, but imagine if you had a really complex expression. I can see the value there.

I'm trying to write an elisp macro because yes.. f-strings are too convenient

By itself I agree, every now and then you write a few lines that will be made a little shorter now that := exists.

But there's a long-standing trend of adding more and more of these small features to what was quite a clean and small language. It's becoming more complicated, backwards compatibility suffers, the likelihood your coworker uses some construct that you never use increases, there is more to know about Python.

Like f-strings, they are neat I guess. But we already had both % and .format(). Python is becoming messy.

I doubt this is worth that.

they should pick f and put deprecation warnings on the other two, python is getting messy.

They can’t just deprecate the other two; f-string’s nature precludes it from being used in situations where formatting needs to occur lazily, e.g. i18n. This is the same reason why other languages with string interpolation also keep a format method around, e.g. Swift’s String(format:). I guess you could argue that they should at least deprecate %-formatting, and this has indeed been raised multiple times, even prior to f-string’s introduction, but the power of the Backwards Compatibility Gods is still strong there, for better or for worse.
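A minimal sketch of why i18n needs deferred formatting (the `CATALOG` dict and `tr` helper are hypothetical stand-ins for a real gettext-style catalog): the template is chosen per locale at runtime, so it cannot be an f-string literal baked into the source.

```python
# Hypothetical per-locale message catalog; real i18n uses gettext etc.
CATALOG = {"fr": "{n} fichiers envoyés", "en": "{n} files uploaded"}

def tr(template, locale, **kwargs):
    # Look the template up at runtime, THEN format it.
    return CATALOG.get(locale, template).format(**kwargs)

print(tr("{n} files uploaded", "fr", n=3))  # 3 fichiers envoyés
```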

Nothing stops one from evaluating f-strings lazily. They could simply return a format string with parameters captured instead of an interpolated string.
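One way to sketch that idea today is to defer the f-string behind a lambda; note the caveat that a closure captures the *variable*, not its value at definition time, which is different from `.format()`'s eagerly bound arguments.

```python
bucket = "my-bucket"
lazy = lambda: f"not uploading -> {bucket}"  # nothing interpolated yet
bucket = "other-bucket"                      # rebinding before evaluation
print(lazy())  # "not uploading -> other-bucket": the closure sees the current binding
```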

They could... but then they wouldn’t return the interpolated string, and they’d be just like using format, just saving the characters “.format” which ruins the exact thing people like about them.

Can you elaborate on the exact thing people like about f-strings, because I honestly thought it was saving having to write `format`

Nah - you're clearly not using Python enough - the issue with current format is that of the identity of the arguments passed to it.

I need to make 2 changes every time I change a format string - I need to remove the symbol representing its placement and then I need to remove the argument passed to .format.

Also old formatting does not easily support arbitrary expressions in the placements, thus in order to get those you need to change the arguments passed to the .format.

f-strings get rid of those issues altogether. What you're showing is whatever you have in the brackets - 0 indirection and thus less margin for (unnecessary) errors.

I struggle to understand how what you talked about is relevant here. The problem parent talked about is how this laziness could be implemented without losing what makes f-string awesome right now. It’s not viable, which backs my reasoning why at least one alternative format method is required.

That said, I also struggle to understand how you’d claim parent clearly not using Python enough, when your description of str.format shows a lack of understanding to it yourself. One of the advantages of str.format over %-formatting is exactly that you do not need to modify the arguments passed to str.format when removing components from the source string:

    >>> '{0} {1} {2}'.format('a', 'b', 'c')
    'a b c'
    >>> '{0} {2}'.format('a', 'b', 'c')
    'a c'
Or preferably (using keyword arguments instead of positional):

    >>> '{x} {y} {z}'.format(x='a', y='b', z='c')
    'a b c'
    >>> '{x} {z}'.format(x='a', y='b', z='c')
    'a c'
But again, this doesn’t matter to parent’s argument. Nobody is arguing with you that f-string is better than alternatives for what it can do; we are trying to tell you that there are things it can’t do, and you did not get it.

>> One of the advantages of str.format over %-formatting is exactly that you do not need to modify the arguments passed to str.format when removing components from the source string:

It's not the string that most of the developers care about, it's the presence of the arguments to that string. The issue they are solving is "I would like to see A, B and C", rather than the issue of "I have provided A, B and C - would you please hide B from the view".

>> But again, this doesn’t matter to parent’s argument. Nobody is arguing with you that f-string is better than alternatives for what it can do; we are trying to tell you that there are things it can’t do, and you did not get it.

Please elaborate on what the f-string can't do? You have not provided the answer in your post. In my opinion, the only issue f-strings haven't solved is capturing the arguments in lambdas (before interpolation) instead of their direct values. You, on the other hand - do not provide a clear explanation.

Was the controversy really about the need for the feature? I thought most people agreed it was a great feature to have, and most of the arguments were about `:=` vs re-using `as` for the operator.

I like "as" instead. I didn't realize that was on the table. To me, it seems more Pythonic given the typical English-like Python syntax of "with open(path) as file", "for element in items if element not in things", etc.

It had some drawbacks, which a great majority of the time won't apply. I too would have preferred an "as" clause limited to if-statements, maybe while, and comprehensions.

"if m as re.match(p1, line)" is not very English-like.

Because the variable goes after the as. if re.match(p1, line) as m:

`if re.match(p1, line) as m`, maybe?

Yes, this is what I would expect. Much like a with statement.

I don't know in this case, but I do know that the Python community tends to have strong opinions about things. The := resulted in Guido stepping down, which I think is a good indicator that there wasn't agreement that it was "a great feature to have" and just down to syntax... :-(

To be fair, Guido stepped down because of the way the community reacted.

"The straw that broke the camel’s back was a very contentious Python enhancement proposal, where after I had accepted it, people went to social media like Twitter and said things that really hurt me personally. And some of the people who said hurtful things were actually core Python developers, so I felt that I didn’t quite have the trust of the Python core developer team anymore."

Source: https://www.infoworld.com/article/3292936/guido-van-rossum-r...

Note that Guido also was in support of the walrus operator, it's not like he stepped down because he disagreed with it.

It's really disappointing that people could get so worked up, over what is essentially deciding which color to paint the shed, that they chase off the project founder. I wonder if there is any way for open source communities to effectively promote the "take a step back and remember what really matters in life" approach to conflict resolution.

Guido wasn't chased off, he is still in the core group. He just isn't BDFL anymore -- which, from a certain point of view, might be the best of both worlds. As a long-time python user and small-time advocate, I felt the walrus was a really bad decision.

>I wonder if there is any way for open source communities to effectively promote the "take a step back and remember what really matters in life" approach to conflict resolution.

We've spent decades debating on the merits of spaces vs tabs, and Vim vs Emacs, and a ton of other completely pointless stuff.

What you're hoping is just a pipe dream, people will get invested in the most petty and asinine stuff out there, and take it as a personal insult if you disagree.

All discussion I've ever seen was about the need for the feature, not its spelling. I didn't even know "as" was proposed, but in fact it is an "alternate spelling" they considered[1] in the PEP.

[1] https://www.python.org/dev/peps/pep-0572/#alternative-spelli...

The idea was accepted quickly, the rest of the debate was on its spelling and scope.

I've literally been wanting something like the walrus operator since I first started using Python in '97. Mostly for the "m = re.match(x, y); if m: do_something()" sort of syntax.

I mean, that isolated example doesn't really demonstrate the benefit of a walrus operator does it? You could have just written "if re.match(x, y): do_something()". If you re-used the result of computation within the if statement, I feel that would be a better example, eg. "m = re.match(x, y); if m: do_something(m)".
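The refined example, as runnable 3.8 code with a hypothetical input line, shows the actual win: the match object is bound and reused inside the block.

```python
import re

line = "error: disk full"  # hypothetical input
if m := re.match(r"error: (.*)", line):
    print(m.group(1))      # disk full
```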

True enough, as you point out I would be expecting to do_something with m. :-)

I wonder if the controversial Go's error check function "try" proposal[0] will also be similar to this situation.

[0]: https://github.com/golang/go/issues/32437

That was already cancelled.

I think in certain situations the walrus operator will probably be useful. But it definitely makes code less legible, which makes me cautious. The only useful use case I have found so far is list comprehensions where some function evaluation could be reduced to only one execution with the walrus operator.
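That comprehension case can be sketched like so (the `f` function is a hypothetical stand-in for something expensive): `(y := f(x))` binds the result so the function runs once per element, instead of once to filter and once again to keep.

```python
def f(x):
    return x * x   # stand-in for an expensive call

# Without the walrus: [f(x) for x in range(5) if f(x) > 4] calls f twice per element.
results = [y for x in range(5) if (y := f(x)) > 4]
print(results)  # [9, 16]
```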

> But it definitely makes code less legible, which makes me cautious.

Disagree. In cases where it's useful it can make the code much clearer. Just yesterday I wrote code of the form:

    foos = []
    foo = func(a,b,c,d)
    while foo:
       foos.append(foo)
       foo = func(a,b,c,d)
With the walrus operator, that would just be:

    foos = []
    while foo := func(a,b,c,d):
        foos.append(foo)
Further, I had to pull out 'func' into a function in the first place so I wouldn't have something complicated repeated twice, so it would remove the need for that function as well.
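A runnable sketch of the same loop shape, with `next()` over an iterator standing in for the hypothetical `func(a,b,c,d)`:

```python
chunks = iter(["a", "b", "c"])  # hypothetical source of values
foos = []
while foo := next(chunks, ""):  # "" is a falsy sentinel that ends the loop
    foos.append(foo)
print(foos)  # ['a', 'b', 'c']
```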

Yep, I can't wait to use the walrus operator. I just tried it out (`docker run -it python:3.8.0b2-slim`) and I'm hooked already.

Also, it's times like these I'm really glad docker exists. Trying that out before docker would have been a way bigger drama

Python looks more and more foreign with each release. I'm not sure what happened after 3.3 but it seems like the whole philosophy of "pythonic", emphasizing simplicity, readability and "only one straightforward way to do it" is rapidly disappearing.

“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things.”

― Douglas Adams, The Salmon of Doubt

It's wrong to frame this as resistance to change for no reason. See my other comment. I see some of this stuff as repeating mistakes that were made in the design of Perl. ...but there are relatively few people around these days who know Perl well enough to recognize the way in which history is repeating itself, and that has at least something to do with age.

"resistance-to-change for-no-reason" vs "resistance-to change-for-no-reason" :)

This is possibly the best example of the ambiguity of language I've ever seen. Two contradictory meanings expressed in the exact same phrase, and both of them are valid in the broader context.

Jeez. What number of people who read the same phrase with either of those two meanings then continue to form opinions and even make decisions based on the resulting meaning?

Me, I am old enough to know Perl, and I've got plenty of one-line skeletons in my own closet. And it more-or-less entered the world already vastly more TMTOWTDI-y than Python is after 3 decades.

FWIW, I tend to think of comparisons to Perl as being a lot like Nazi comparisons, only for programming languages. And I do think there's some wisdom to the Godwin's Law idea that the first person to make a Nazi comparison is understood to have automatically lost the argument.

It's just that, at this point, Perl is both so near-universally reviled, and so well-understood to be way too optimized for code golf, that any comparison involving it is kind of a conversation-killer. As soon as it shows up, your best options are to either quietly ignore the statement in which the comparison was made, or join in escalating things into a flamewar.

I wouldn't call it reviled. Perl makes for a poor general purpose programming language, it always did. You can write an HTTP server in Perl but you probably shouldn't. It's very good for what it was always intended for, those situations where you need to process some data, but like just once not every week for the rest of eternity.

I've never regretted a Perl program that I wrote, used and discarded. And I've never been content with a Perl program I found I was still using a week after I wrote it.

The point #1 is expanded on in Feral by George Monbiot. Basically, we have a tendency to see the outside world we grew up with as the way things naturally should be, ignoring that previous generations may have changed it to be that way. That sheep-grazed pastoral landscape is easy to view as a thing worth preserving, but to an ecologist it might be a barren waste where there used to be a beautiful forest.

Forewarned is forearmed. I headed into adulthood watching out for such mirages. For example: Making sure to listen to pop music enough that it does exactly what pop music is supposed to do (worm its way into your subconscious) so I don't wake up one morning unaccountably believing that Kylie Minogue was good but Taylor Swift isn't.

My understanding of Python will probably never be quite as good as my understanding of C, but I can live with that.

How do you know to listen to Taylor Swift or whatever? In the last century it was easy to be in sync: you could just watch MTV. Is there something keeping the notion of pop coherent these days?

Not exactly pop, but there are some great weekly music podcasts that I listen to to hear new music, which tend to be a little more indie pop/rock/${genre} than pop :)

- Music That Matters: https://omny.fm/shows/kexp-presents-music-that-matters/playl...

- KEXP Song of the Day: https://omny.fm/shows/kexp-song-of-the-day

- All Songs Considered: https://www.npr.org/rss/podcast.php?id=510019

- KCRW Today's Top Tune: https://www.kcrw.com/music/shows/todays-top-tune/rss.xml

Apple/Google Music or Spotify or Pandora all have pop playlists that play the current top 100 songs on rotation. The Billboard Hot 100 also lists popular western music if you just want a list to review on your own.

I’d argue it’s easier now to stay in sync than even when MTV was popular. MTV you needed a cable subscription and be sitting at a TV, now SiriusXM or Apple/Google/Spotify can stream it right to your phone laptop or tablet, and regular FM radio will play it on the local Top 40 station.

I don't think it's just >35-year-olds who find what's going on in Python against the natural order of things?

I'm 34 and I don't like this, so it's definitely not only those above 35. Jokes aside, I would say I'm a minimalist and this is where my resistance comes from. One of the things that I dislike the most in programming is feature creep. I prefer smaller languages. I like the idea of having a more minimal feature set that doesn't change very much. In a language with less features, you might have to write slightly more code, but the code you write will be more readable to everyone else. Said language should also be easier to learn.

IMO, the more complex a system is, the more fragile it tends to become. The same is true for programming languages. Features will clash with each other. You'll end up having 10 different ways to achieve the same thing, and it won't be obvious which direction to go.

Furthermore, I did my grad studies in compilers. I've thought about writing an optimizing JIT for Python. I really feel like CPython is needlessly slow, and it's kind of embarrassing, in an age where single-core performance is reaching a plateau, to waste so many CPU cycles interpreting a language. We have the technology to do much better. However, the fact that Python is a fast moving target makes it very difficult to catch up. If Python were a smaller, more stable language, this wouldn't be so difficult.

> In a language with less features, you might have to write slightly more code, but the code you write will be more readable to everyone else.

I disagree with this, which is precisely why I prefer feature rich languages like Java or better yet Kotlin. It doesn't get much more readable than something like:

    users.asSequence()
        .filter { it.lastName.startsWith("S") }
        .sortedBy { it.lastName }
        .take(3)
        .toList()
Now try writing that in Go or Python and compare the readability.

Python is a little more readable, but both Python and Kotlin are perfectly clear in this case:

    sorted((u for u in users
            if u.last_name.startswith("S")),
           key=lambda u: u.last_name)[:3]
If last_name is a function, which it often would be in Python, it gets better:

    sorted((u for u in users
            if last_name(u).startswith("S")),
           key=last_name)[:3]
However, I think you probably got the sort key wrong if you're taking the first three items of the result. Maybe you meant key=abuse_score, reverse=True, or something.

I disagree this python version is as readable and here’s why. It’s about as many characters but more complex. The Kotlin version performs several distinct actions, each being clear to its purpose. These actions have the same syntax (eg requires less parsing effort). The Python version mixes at least 4 different language syntax/features, being list comprehension, if special form in the list comprehension, keywords, and lambda functions.

On top of the lessened readability, the Kotlin version makes it very easy to add, subtract, or comment out lines/actions which really helps when debugging. The Kotlin version is almost identical in structure to how you’d do it in Rust, Elixir, etc.

I agree. I don't know Kotlin and am reasonably well versed in Python, yet I immediately grasp the Kotlin example as more readable, while having to squint at the Python one for a few seconds. (this is anecdotal of course, and does not account for the example possibly being contrived)

One thing that I like more in the Python version is that it contains fewer names: .asSequence and .take are replaced by operators of much greater generality, while the ugly implicitly declared identifier it is replaced by explicitly deciding that sequence elements are u.

It should also be noted that Python would allow a more functional style, possibly leaving out the list comprehension.
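A sketch of that functional style, with a hypothetical `User` record, using `filter()` in place of the generator expression:

```python
from collections import namedtuple
from operator import attrgetter

User = namedtuple("User", "last_name")  # hypothetical record type
users = [User("Smith"), User("Adams"), User("Shaw")]

top = sorted(filter(lambda u: u.last_name.startswith("S"), users),
             key=attrgetter("last_name"))[:3]
print([u.last_name for u in top])  # ['Shaw', 'Smith']
```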

It's surprising to me that there are people who disagree with my opinion about this, but it suggests that my familiarity with Python has damaged my perspective. You're clearly much less familiar with Python (this code doesn't contain any list comprehensions, for example), so I think your opinion about readability is probably a lot more objective than mine.

FWIW most of the programming I've ever done has been in Python, and while I have no trouble understanding either snippet, I think that the Kotlin snippet is much clearer in intent and structure.

I certainly didn't mean to imply that only someone unfamiliar with Python could prefer the Kotlin version! Perhaps you thought I meant that, but I didn't.

> this code doesn't contain any list comprehensions, for example

It does contain a generator expression though, which is the same as a list comprehension in general structure, but slightly more confusing because it doesn't have the relationship to lists that square brackets in a list comprehension would have given it.

Yes, it shares the structure of a list comprehension, but has different semantics. In this case a listcomp would have worked just as well.

My point, though, was that not being able to tell the difference was a key "tell" that the comment author was not very familiar with Python — in some contexts, that would tend to undermine credibility in their comment (and then it would be rude to point it out), but in this context, it probably makes their opinion more objective.

Good point, though it's less my familiarity with Python and more that I tend to simplify and call generator expressions as list comprehensions unless the laziness is important to call out (meta laziness there? ;) ). Mainly since L.C.'s were first and describing the differences is tedious.

I think you're all fighting for nothing here.

The map filter chaining is obviously simpler, but python code is not that difficult and it's a no brainer task anyway.

It's true the Python version is still relatively easy. It may only take, say, 1.3 seconds vs 1.1 to parse, but it adds up.

This isn't very readable at all and certainly not any more readable than a chain of method calls, being that you've spread the operations out in different places. It's not even syntactically obvious what the `key` argument is passed to if one doesn't know that `sorted` takes it. None of those problems exist when piping through normal functions or chaining method calls.

Python is for the most part overrated when it comes to these things, IMO. It's a nice enough language but it's aged badly and has an undeserved reputation for concision, readability and being "simple".

C# supports both conventions (in LINQ) - I mean the Kotlin one from the grandparent comment, and the Python's from parent's.

The method chaining syntax and the query syntax are alternatives. I think most devs lean towards the former, considered to be cleaner... whereas the latter is probably easier to learn in the beginning, to those unfamiliar with piping/functional style - owing to its SQL feel-alikeness.

ReSharper would offer converting the latter to the former, and that's how I learned method-chaining LINQ back in the day.

A little off-topic but how does that work? Is 'it' a magic variable referring to the first argument? Never seen magic variables that blend into lambdas like that before... would've expected $1 or something like that.

The idea of anaphoric macros[1] is first found in Paul Graham's "On Lisp"[2] and is based on the linguistic concept of anaphora, an expression whose meaning depends on the meaning of another expression in its context. An anaphor (like "it") is such a referring term.

I think if you like this idea, you will really like the book. Better still, you can download the pdf for free.

[1] https://en.wikipedia.org/wiki/Anaphoric_macro [2] http://www.paulgraham.com/onlisp.html

Inside of any lambda that takes a single parameter you can refer to the parameter as 'it'. If you prefer to name your parameters you can do so as well, it's just slightly more verbose:

    .filter { user -> user.lastName.startsWith("S") }
    .sortedBy { user -> user.lastName }

Yeah, a bit of PG’s Arc influence in the wild.

I believe groovy made this popular rather than arc, and it's likely where kotlin's come from, due to being in the java ecosystem.

Most APL derivatives (j, k, q, a) had implicit arguments for functions that didn't explicitly declare them (up to three: x, y, and z).

Probably before then too.

Dyalog APL too, but none of them call the implicit argument "it".

Groovy is from 2003. PG keynoted PyCon in 2003 talking about his progress on Arc: http://www.paulgraham.com/hundred.html. He had been talking about Arc online for a couple of years at that point, including in particular the convenience of "anaphoric macros" that defined the identifier "it" as an implicit argument.

(He'd also written about that more at length in the 1990s in On Lisp, but many more people became acquainted with his language-design ideas in the 2001–2003 period, thanks to Lightweight Languages and his increasingly popular series of essays.)

But surely Perl's $_ was way more influential than an obscure PG talk. I was reading PG way back in 2004, and I had never heard of anaphoric macros until now.

Wait, you think that, in the context of programming language design, a PyCon keynote is an obscure talk? I don't know what to say about that. It might be possible for you to be more wrong, but it would be very challenging.

Anyway, I'm talking specifically about the use of the identifier "it" in Kotlin, not implicitly or contextually defined identifiers in general, which are indeed a much more widespread concept, embracing Perl's $_ and @_, awk's $0 (and for that matter $1 and $fieldnumber and so on), Dyalog APL's α and ω, Smalltalk's "self", C++'s "this", dynamically-scoped variables in general, and for that matter de Bruijn numbering.

> a PyCon keynote is an obscure talk

Compared to the existence of Perl, yes. Anyone who does any amount of Perl learns that $_ is the implicit argument ("like 'it'") to most functions. It's pretty much one of Perl's main deals. The talk has about 100K views on YouTube, which is pretty good, but Perl is in another league.

Too bad Apache Groovy itself didn't remain popular after popularizing the name "it" for the much older idea of contextually-defined pronouns in programming languages. Using the names of pronouns in English (like "this" and "it") is easier for an English-speaking programmer to understand than symbols like "$1" or "_". But because of Groovy's bad project management, another programming language (Kotlin) is becoming widely known for introducing the "it" name.

Pretty sure the Go community will be fine with not being feature rich, since simplicity, maintainability and getting new people up to speed matter more for them.

The Go community has gone too far the other way for me; the endless repetition introduces its own complexity.

Simple core languages that are syntactically extensible with libraries have the best of both worlds: https://vvvvalvalval.github.io/posts/2018-01-06-so-yeah-abou...

sorted(u for u in users if u.last_name.startswith("S"), key=lambda u: u.last_name)[:3]

Though I will concede that I also find the fluent interface variant nicer.

That doesn't parse :-)

You’re doing it wrong :)

  users.apply {
      filter { it.lastName.startsWith("S") }
      .sortedBy { it.lastName }
  }
(totally untested)

Furthermore, I did my grad studies in compilers. I've thought about writing an optimizing JIT for Python. I really feel like CPython is needlessly slow, and it's kind of embarrassing.

Many have tried and failed, Google and Dropbox to name a couple, and countless other attempts.

It lags a bit in releases, but I understood PyPy to be essentially successful?

Yes, PyPy is fantastic for long-running processes that aren't primarily wrappers around C code. In my experience, the speedups you see in its benchmarks translate to the real world very well.

Yes, and part of the reason they failed is the reason I pointed to: Python is a fast moving target, with an increasing number of features.

It's not the new features of Python that make it hard to optimize; it's the fundamental dynamic nature of the language that was there from day one. Syntactic sugar doesn't have an impact one way or the other on optimizing Python.

The new features aren't just syntactic, they're also new libraries that come standard with CPython, etc. If you want to implement a Python JIT that people will use, you have to match everything CPython supports. Furthermore, since the people behind CPython don't care about JIT, you also can't count on them not adding language features that will break optimizations present in your JIT. You can't count on these being just "syntactic sugar". Even if you could though, in order to keep up it means you have to use CPython's front-end, or constantly implement every syntactic tweak CPython does.

Lastly, AFAIK, CPython's FFI API is actually more of a problem than the dynamic semantics of the language. You can expose Python objects directly to C code. That makes it very hard for a JIT to represent said Python objects in an efficient way internally.

> In a language with less features, you might have to write slightly more code, but the code you write will be more readable to everyone else.

That's not universally true. C# has more features than Java but is generally easier to read and the intent of the code is easier to follow. The lack of features, like properties or unsigned integers, leads to Java coders creating much more convoluted solutions.

If languages with fewer features were universally better, we would all be using C and BASIC for everything.

I think the important thing is the orthogonality of the features. E.g. having so many ways to do string formatting, or now multiple ways of doing assignments, is not orthogonal and thus can be seen as cluttering.

I'm 38, and I'm fine with these changes, and I've been using Python for 15+ years.

I can plainly see how these changes will actually make my code cleaner and more obvious while saving me keystrokes.

I also don't think these changes are very drastic. They're opt-in, don't break anything, and look likely to lead to cleaner code. I love the walrus operator (not so sure about the name, but hey, C++ is getting the spaceship operator... as has been said, naming things is hard). To me, the change of print from a statement to a function has been the hardest Python change over the years. Just too much mental momentum. Even though I've been on Python 3 for years, I still make the mistake of trying to use it as a statement. That said, I think it was the right (if painful) move.

I don't speak for everyone over 35, just myself.

Theory: age itself, with regards to computing, has nothing to do with how old you act (with regards to computing); the time you spent doing a specific thing is what grows that 'characteristic'.

Anecdote: I'm fairly young, but I've been involved with Python long enough, and traveled to enough PyCons, to be a bit jaded with regards to change within the language.

I'm fairly certain it's only due to the additional cognitive load that's thrust upon me when I must learn a new nuance to a skill that I already considered myself proficient at.

In other words: I'm resistant to change because I'm lazy, and because it (the language, and the way I did things previously) works for me. Both reasons are selfish and invalid, to a degree.

Conversely, some of us oldsters think the outrage is way overblown.

No, those aren't really the reasons for my reaction. And if I told you my age, you would probably switch your argument and say that I'm far too young to criticize ;)

I am an example which supports this notion. I've done some Python programming about 10 years ago but then took a break from programming altogether for the last 9 years. Last year I got back into it and have been using Python 3.7, and I personally love all the most recent stuff. I hate having to go back to 3.5 or even 3.6, and I end up pulling in stuff from futures.

This 'resistance to change' catchall argument puts everything beyond criticism, and it can be used/abused in every case of criticism. It seeks to reframe 'change' from a neutral word - change can be good or bad - to a positive instead of focusing on the specifics.

Anyone making this argument should be prepared to accept that every single criticism they make in their life moving forward can be framed as 'their resistance to change'.

This kind of personalization of specific criticism is disingenuous and political and has usually been used as a PR strategy to push through unpopular decisions. Better to respond to specific criticisms than reach for a generic emotional argument that seeks to delegitimize scrutiny and criticism.

True, but this was not “specific criticism”. It was a general dismissing criticism without details, and so can be refuted with a similarly detail-less answer. A detailed criticism deserves a reasoned and detailed answer, but vague criticism gets a generic rebuttal.

Does that mean someone born in 2008 will think C++ is simple and elegant?

I am both a Python programmer and a C++ programmer. I have programmed professionally full time in one or the other for years at a time. I think C++ is now a much better language than when I first learnt it (cfront). In particular, C++11 really fixed a lot of the memory issues with shared_ptr and the std:: algorithms. It is a better language now if you are doing anything larger than a program that takes more than a few weeks to write. On the other hand, I love Python for everything else, and some of the new stuff is great, but making a new way to print strings over and over tells me some people have too much spare time or not enough real work to do. In my opinion, formatting a string to print a debug statement should be as concise as possible, whereas a lot of these fancier formatting systems are better suited to stuff that ends up staying for use by other people. Luckily there are ways to use ye olde printf-style formatters in both for those times.

C++ might be "better" now (I doubt it, to be honest, it just has more features that try to fix the issue at hand; that you're using C++), but it will never, ever get simpler or simple enough. They'd have to remove something like 75% of the language to end up with something that approaches simplicity and even then there are languages that would undoubtedly do those remaining 25% much better.

I stopped writing C++ at some point in 2008/2009 but I still keep track of it to some extent and I'm continually surprised by the nonsense that is introduced into the language. The whole RAII movement, for example, is just one massive band-aid on top of the previous mistake of allowing exceptions, etc..

It'd be mostly fine in the long run, but you have all these people using like 15% of C++ and complain about it all day long, making their libraries not usable from stuff that understands C (most of which have drastically improved on the whole paradigm). There's a solution here and it's not using whichever arbitrary percentage you've decided on of C++, it's realizing that there are way better languages with real interoperability in mind to talk about lower-level things.

No, the claim is that it's ordinary and just part of the way the world works.

Good point. I think it should be rephrased on the basis of personal familiarity: people who learned C++ before they were 15 indeed think that it's simple and elegant.

Except that Python existed before I was born and I still appreciate the concept of 'Pythonic'. The language should stay true to its roots.

*Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.*

Yep, I entered the Python world with v2. I eventually reconciled myself to 2.7, and have only recently and begrudgingly embraced 3. Being over 35, I must be incredibly open minded on these things.

Can you give an example of something like this happening to the language? IMO 3.6+ brought many positive additions to the language, which I also think are needed as its audience grows and its use cases expand accordingly.

The walrus operator makes while loops easier to read, write and reason about.
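A minimal runnable sketch of that difference (the byte stream and chunk size here are invented for illustration):

```python
import io

# Invented data source standing in for a file or socket.
stream = io.BytesIO(b"x" * 20000)
total = 0

# Pre-3.8, the read had to appear twice (a priming read plus a re-read at
# the bottom of the loop). With the walrus it is stated once, in the header:
while chunk := stream.read(8192):
    total += len(chunk)

assert total == 20000  # 8192 + 8192 + 3616
```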

Type annotations were a necessary and IMO delightful addition to the language as people started writing bigger production code bases in Python.

Data classes solve a lot of problems, although with the existence of the attrs library I'm not sure we needed them in the standard library as well.

Async maybe was poorly designed, but I certainly wouldn't complain about its existence in the language.

F strings are %-based interpolation done right, and the sooner the latter are relegated to "backward compatibility only" status the better. They are also more visually consistent with format strings.

Positional-only arguments have always been in the language; now users can actually use this feature without writing C code.

All of the stuff feels very Pythonic to me. Maybe I would have preferred "do/while" instead of the walrus but I'm not going to obsess over one operator.

So what else is there to complain about? Dictionary comprehension? I don't see added complexity here, I see a few specific tools that make the language more expressive, and that you are free to ignore in your own projects if they aren't to your taste.

> F strings are %-based interpolation done right, and the sooner the latter are relegated to "backward compatibility only" status the better. They are also more visually consistent with format strings.

No, f-strings handle a subset of %-based interpolation. They're nice and convenient but e.g. completely unusable for translatable resources (so is str.format incidentally).

What makes % better than .format for translations? (And isn't something like Django's _(str) better anyway?)

F strings are obviously non-lazy, but _(tmpl).format(_(part)) seems fine?

`.format` lets you dereference arbitrary attributes and indices (I don't think it lets you call methods though), meaning you can run code and exfiltrate data through translated strings if they're not extremely carefully reviewed, which they often are not.

% only lets you format the values you're given.
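A small self-contained sketch of the difference (the `Account` class and its token are invented stand-ins for real application state):

```python
class Account:
    _secret_token = "tok_12345"  # invented stand-in for sensitive data

    def __init__(self, name):
        self.name = name

acct = Account("alice")

# Intended use of a translated template:
assert "Hello, {0.name}".format(acct) == "Hello, alice"

# A hostile "translation" can walk attribute chains on the argument:
assert "Hello, {0._secret_token}".format(acct) == "Hello, tok_12345"

# %-formatting can only interpolate the values it is handed:
assert "Hello, %(name)s" % {"name": acct.name} == "Hello, alice"
```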

> and isn't something like Django's _(str) better anyway

They're orthogonal. You apply string formatting after you get the translated pattern string from gettext. In fact, Django's own documentation demonstrates this:

    def my_view(request, m, d):
        output = _('Today is %(month)s %(day)s.') % {'month': m, 'day': d}
        return HttpResponse(output)

What would "do/while" look like in Python? Since blocks don't have end markers (e.g. "end", "}", etc.) there's nowhere to put the while expression if you want the syntax to be consistent with the rest of the language.

One solution would be to borrow from Perl. You make a do block that executes once unless continued, and allow conditions on break and continue:

    do:
        ...
        continue if condition
And you can now express "skip ahead" with a `break if X` as well.

Yes, although you don't have to be so perlish as to do the if in that order

    do:
        ...
        if condition:
            continue

I envisioned it like "if/else" or "for/else" or "while/else", where a "do" block must be followed by a "while" block.

    x = 0
    do:
        x += 1
    while:
        x < 10

This completely contradicts the rest of Python grammar, and indeed many languages’ grammars. The consistent way would then be `while x < 10` but that too looks ridiculous. The issue is that you can’t have post-clause syntax in Python due to its infamous spacing-is-syntax idea.

I'm not sure why the consistent way looks ridiculous.

    do:
        x += 1
    while x < 10
It's just that the `do:` compound statement consumes the trailing while clause.

Decorators already precede a function (or class) definition[2], and one alternative for the ill-fated switch statement[1] was to have switch precede the case blocks to avoid excessive indentation.

So there's plenty of precedent in the other direction.

[1]: https://www.python.org/dev/peps/pep-3103/#alternative-3

[2]: https://docs.python.org/3/reference/grammar.html

I think you're really stretching it when you say "there's plenty of precedent," arguably there is none as the decorator syntax is pre-clause and thus poses no indentation reading issues. So too for the proposed switch statement syntax. Then there is the fact that the decorator syntax is perhaps the most alien of all Python syntax, sometimes criticized for being Perlesque, perish the thought (on account of it being introduced by creative interpretation of a single special character though, so perhaps unrelated.)

My main gripe is the indentation. Your code reads as if the while condition is tested after the loop finishes. What if the while statement was part of the loop and could be placed arbitrarily?

    do:
        x += 1
        while x < 10
IOW `do:` translates to `while True:` and `while x` to `if not x: break`.

Addendum: I would also entertain forcing the `while` to be at the end of the loop -- as I'm not sure what this would do

    do:
        ...
        if foo():
            while x < 10

I think it's precedent because it's just a line after a block instead of before it. It certainly is a break from Python's "big outline" look.

> What if the while statement was part of the loop and could be placed arbitrarily?

If you're open to that, I had thought this was a bridge too far, but:

    do:
        ...
        break if some_condition
        ...
        continue if some_other_condition
Under that scheme, the semantics translate to:

    while True:
        ...
        if some_condition: break
        ...
        if some_other_condition: continue
        break
And, of course, the `break if` and `continue if` syntax would be general.

Of course you can have post-clause syntax: if...else, try...except, for...else, etc.

(Edit: Actually, I think I know what you were saying now, and those aren't quite the same thing as they need a line after them.)

I do think the condition on the next line isn't the way to do solve this problem though (and I don't think it needs solving, while True: ... if ...: break does the job).
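For reference, the existing idiom that comment refers to, written out (the loop variable is just an example):

```python
# Body first, condition tested at the bottom -- the body runs at least once.
x = 0
while True:
    x += 1
    if not (x < 10):
        break

assert x == 10
```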

Why does `while x < 10` look ridiculous? It looks exactly like the syntax for regular while loops, just in this case it's after a `do:` block. And the example above yours looks like try/catch syntax, but tbh I like the one you suggested a bit more.

You're right, it would be pretty weird to rely on implicitly "returning" a value from an expression like that.

But I don't think having it all on one line would be that bad.

Most code still looks like traditional Python. Just like metaprogramming or monkey patching, the new features are used sparingly by the community. Even the less controversial type hints appear in maybe 10 percent of the code out there.

It's all about the culture. And Python culture has been protecting us from abuses for 20 years, while still allowing us to have cool toys.

Besides, in that release (and even the previous one), apart from the walrus operator, which I predict will be used in moderation, I don't see any alien-looking stuff. This kind of evolution speed is quite conservative IMO.

Whatever you do, there will always be people complaining, I guess. After all, I also hear all the time that Python doesn't change fast enough, or lacks some black magic from functional languages.

> Even the less controversial type hints are here on maybe 10 percent of the code out there.

I think this metric is grossly overestimated. Or your scope for "out there" is considering some smaller subset of python code than what I'm imagining.

I think the evolution of the language is a great thing and I like the idea of the type hints too. But I don't think most folks capitalize on this yet.

I mean 10% of new code for which type hints are a proper use case, so mostly libs, and targeting Python 3.5+.

Of course, in a world where Python 2.7 is still a large code base and Python is used a lot for scripting, this will be far from the truth for the entire ecosystem.

The idea that types are hostile to scripting sounds really weird to me. Turtle[0] in Haskell is absolutely amazing for scripting -- especially if you pair it with Stack (with its shebang support) -- and it is as strongly typed as Haskell.

There is a bit of learning curve (because, well, it's not shell which is what most people are used to), and you do have to please the compiler before being able to run your script, but OTOH, you'll basically never have that "oops, I just deleted my working directory because I used a bad $VARIABLE" experience.

[0] http://hackage.haskell.org/package/turtle

What's an example of black magic from functional languages?

If you complained more specifically it would be possible to discuss. For what was described in article I don't see anything "foreign". Python was always about increasing code readability and those improvements are aligning well with this philosophy.

I've been hearing this since 1.5 => 2.0 (list comprehensions), then 2.2 (new object model), 2.4 (decorators)...

happy python programmer since 1.5, currently maintaining a code base in 3.7, happy about 3.8.

I cut my teeth on 2.2-2.4 and remember getting my hand slapped when 2.4 landed and I used a decorator for the first time.

It was to allow only certain HTTP verbs on a controller function. A pattern adopted by most Python web frameworks today.

That's especially funny given how everybody screams "that's not pythonic!!1!" nowadays when somebody does _not_ use a list comprehension...

The '*' and '/' in function parameter lists for positional/keyword arguments look particularly ugly and unintuitive to me. More magic symbols to memorize or look up.

I also cannot honestly think of a case where I want that behaviour.

The "pow" example looks more like a case where the C side should be fixed.

> I also cannot honestly think of a case where I want that behaviour.

There's plenty of situations where a named argument does not help, and encoding it can only hurt. It makes little to no sense to name the first argument to `dict.update` for instance. Or the argument to `ord`.

That, incidentally, is why Swift added support for positional-only parameters (though it has no concept of keyword-or-positional).
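Both examples from the comment above can be demonstrated concretely; `dict.update` and `ord` really do behave this way in CPython:

```python
# dict.update treats keyword arguments as keys to insert, so "naming"
# its mapping argument silently does the wrong thing:
d = {}
d.update(E=2)          # inserts the key "E"; it does not pass a mapping
assert d == {"E": 2}

# ord() is positional-only at the C level; a keyword is rejected outright:
try:
    ord(c="a")
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```

Python 3.8's `/` marker lets pure-Python functions declare the same restriction that `ord` gets from the C API.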

> That, incidentally, is why Swift added support for positional-only parameters (though it has no concept of keyword-or-positional).

Swift's syntax is a lot more intuitive and consistent:

    function(parameterWithImplicitlyRequiredLabel: Int,
             differentArgumentLabel internalParameterName: Int,
             _ parameterWithoutLabel: Int, 
             variadicParameterWithLabel: Int...)
which you would call as

    function(parameterWithImplicitlyRequiredLabel: 1, differentArgumentLabel: 2, 3, variadicParameterWithLabel: 4, 5, 6, 7)
[0] https://docs.swift.org/swift-book/LanguageGuide/Functions.ht...

It does not help but doesn't hurt enough to grant a special syntax to avoid it.

Yes, it limits your ability to rename a local variable, but that seems minor.

Or where the method should be exposed into several different methods.

Beyond the older-than-35 reason, I think a lot of folks are used to the rate of new features because there was a 5 year period where everyone was on 2.7 while the new stuff landed in 3.x, and 3.x wasn't ready for deployment.

In reality, the 2.x releases had a lot of significant changes. Off the top of my head: context managers, a new OOP/multiple-inheritance model, division operator changes, and lots of new modules.
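Two of those 2.x-era shifts are easy to show in a few lines (a quick refresher, not tied to any particular codebase):

```python
# Division: "/" became true division (PEP 238); "//" is floor division.
assert 3 / 2 == 1.5    # Python 2 returned 1 here
assert 3 // 2 == 1

# Context managers (added in 2.5) replaced try/finally boilerplate:
import io
from contextlib import closing

with closing(io.StringIO("hello")) as f:
    data = f.read()
assert data == "hello"
```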

It sucks that one's language is on the upgrade treadmill like everything else, but language design is hard, and we keep coming up with new cool things to put in it.

I don't know about Python 3.8, but Python 3.7 is absolutely amazing. It is the result of 2 decades of slogging along, improving bit by bit, and I hope that continues.

In my experience, every technology focused on building a "simple" alternative to a long-established "complex" technology is doomed to discover exactly _why_ the other one became "complex", and to spawn at least five "simple" alternatives of its own.

Doesn't mean nothing good comes out of them, and if it's simplicity that motivates people then eh, I'll take it, but gosh darn the cycle is a bit grating by now.

Could you provide some examples? Without having had that experience, I’m having trouble picturing a concrete example that I would be sure is of the same kind.

Nginx is probably my fav of the surviving-and-thriving ones. It still remains very distinct from Apache, but calling it simpler would be a large stretch.

Projects like qmail discovered the reason in a somewhat _harder_ manner. And yes, I'd argue Python is yet another case, as it grew _at least_ as complex as Perl.

Haha, what was that quote? Something like, any language is going to iterate towards a crappy version of lisp.

Greenspun's Tenth Rule[0]

Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. - Philip Greenspun

[0] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

How would you subvert Greenspun in large codebases without Common Lisp? I once used Drools the rules engine which used a dynamic scripting language on Java objects. Python could have replaced that language, with much better tooling, errors etc.

Could you have written that system in a mix of Java and or another scripting language such as JRuby[0]?

[0] http://wiki.c2.com/?AlternateHardAndSoftLayers

IIRC MVEL language was integrated deeply into Drools. JRuby would have been awesome.

I'm working on a language with a focus on simplicity and "only one way to do it": https://vlang.io

The development has been going quite well:


Really interesting. For the skeptics, this is not just a proof of concept. There is a real app made using this language: https://volt-app.com/

and the REPL only leaks 1MB [1] to compile and run a hello world program.

1: https://github.com/vlang/v/issues/514

It doesn't anymore.

There are lots of issues that are being fixed. Strange nitpicking on alpha software.

This is great! Thanks for your work. Can V be integrated into existing c++ projects? I work in audio and constantly working in c++ is tiring. I'd love to work in something like V and transpile down.

Thanks! Absolutely. Calling V code is as simple as calling C (V can emit C).

>I'm working on a language with a focus on simplicity and "only one way to do it":

If I wanted a language with "only one way to do it", I'd use Brainfuck. Which, btw, is very easy to learn, well documented, and the same source code runs on many, many platforms.

I see what you're saying, but I kinda like the gets ":=" operator.

But now there are two ways to do assignment. That's not very pythonic, is it?

You think that's bad? Check out:

    a = 17
    print("a=", a)
    print("a=" + str(a))
    print("a=%s" % a)
    print("a={}".format(a))
    print(f"a={a}")
    # python 3.8 =>
    print(f"{a=}")
So many ways to do it...

But, if it sounds like I agree with you, I actually don't. I feel that the Zen of Python has taken on an almost religious level of veneration in people's minds, and leads to all sorts of unproductive debates. One person can latch onto "there should be one obvious way to do it" and another onto "practicality beats purity" and another onto "readability counts." Who's right? All can be. Or none. All could be applied to this particular case.

The Zen of Python is just a set of rough heuristics, and no heuristic or principle in the field of software development applies 100% of the time, IMHO. <= except for this one ;)

> there should be one obvious way to do it

In cases like this, different ways to do it (all equally good) are needed to get a good coverage of different tastes in obviousness and different nuances in the task.

The point is not uniformity, but avoiding the unpleasant and convoluted workarounds caused by non-obviousness (thus making the language easy to use).

String formatting is not trivial: there is the split (sometimes architectural, sometimes of taste, sometimes of emphasis) between concatenating string pieces, applying a format to objects, setting variable parts in templates, and other points of view; and there is a variety of different needs (cheap and convenient printing of data, dealing with special cases, complex templates...)

And there was also:

    print(string.Template("a=$a").substitute(a=a))

I never felt like there was only one way to do something in Python. Every Stack Overflow question has a multitude of answers ranging from imperative to functional style and with various benefits and drawbacks.

Python is one of the least "only one way to do things" languages I've used. This even extends to its packaging system, where you can choose between virtualenv, pipenv, pyenv, etc. Same goes for the installation method too, do you want to install Python with Anaconda or use the default method?

As for my personal take on this feature: I think it's really useful. When web-scraping in Python, I oftentimes had to do this:

  while True:
      html_element = scrape_element()
      if html_element:
          break

  while not (html_element := scrape_element()):
      pass

Prior to that you could use the special two-argument version of the `iter` function, which makes it act completely differently from the single-argument version:

    for html_element in iter(scrape_element, None):
This calls scrape_element() until it returns None, yielding each value.
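A runnable illustration of the two-argument form (a counter stands in for scrape_element, and 5 is the chosen sentinel):

```python
import itertools

counter = itertools.count()

# iter(callable, sentinel) calls `callable` repeatedly, yielding each
# result, and stops (without yielding) once the sentinel value appears.
values = list(iter(lambda: next(counter), 5))
assert values == [0, 1, 2, 3, 4]
```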

It used to be more or less true in the early days. For me the "one obvious way to do it" ship has sailed with list comprehensions which was introduced in 2.0 (released in 2000)

Packaging isn’t really anything to do with the language syntax, or the zen of Python. Any critiques on Python-the-language?

And pyenv is just a version manager, like rbenv or nvm. I wouldn't consider its existence confusing, nor would I say being able to install something in more than one way has any relevance to the zen of Python!

Should Python create some cross-platform Uber-installer so that there is only one download link?

I don't see why the "zen of Python" shouldn't be applied to its tools too. Tools are part of the developer experience and few/none of the statements/guidelines in the zen of Python are exclusive to Python the programming language.

Regardless of what pyenv is, the rest of my comment about the complexity of Python's tooling still stands. There are too many choices. I've also seen people use pyenv as an alternative to virtualenvs, which is something I have never seen with nvm.

I don't understand why the Python community hasn't coalesced around a single solution to package management that has minimal complexity. It seems like pipenv is the solution, but there is controversy around it and it should have come several years ago. The fact that Python packages are installed globally by default is also pretty terrible, I much prefer it when applications bundle their dependencies. When I do `npm install --global`, the resulting program will always work, regardless of what other packages I have installed on my system.

> Any critiques on Python-the-language?

The point of my original comment was not to necessarily critique the Python programming language, rather it was to point out that adhering to the "zen of Python" is a lost cause because the language/development environment is not designed as a curated experience.

And my original comment did make points about Python-the-language. I talked about how there's many ways to do a single task in Python. One of the responses to it even proved my point:

"Prior to that you could use the special two-argument version of the `iter` function which makes it act completely different than the single argument version: <code sample>".

That unfortunately demonstrates my point.

>Every Stack Overflow question has a multitude of answers ranging from imperative to functional style and with various benefits and drawbacks.

This is one of the reasons I love Python. It's a great exercise to rewrite the same code imperative, recursive, with generators, with iterables, etc. Python is very good at supporting a wide range of programming styles.

I see this criticism every time the walrus operator is brought up.

You do know that this:

    x := 1
Is going to be a syntax error, right? The walrus operator is not permitted in the case of a simple assignment statement. It's only in an expression.

But it used to be that any expression on its own was a valid statement. Is that going to change?

When is an expression allowed to have := in it, is

  (x := 1)
on its own allowed?

For contexts where the walrus is not allowed, see [0]. You'll find that it's generally possible to circumvent the restriction by parenthesising the expression. So yes,

    (x := 1)
is a valid (but poorly written) statement.

But while there are now two ways of doing assignment, I wonder how often people will actually encounter situations where it's difficult to figure out which choice is better.

[0] https://www.python.org/dev/peps/pep-0572/#exceptional-cases

Allowed, yes. But the PEP that introduced walrus operators says not to do it.

Every possible line of code has an alternate ugly way to write it. This isn't a valid criticism. Anyone who decides to start writing simple assignment statements like that deserves to be ridiculed for writing ugly code.

Of course, and there's no reason to write such code.

I just dislike that the simple syntax rule "any expression can be used as a statement" now has an exception.

I haven't been able to think of scenarios where that might have consequences (code generation or refactoring tools?) but that doesn't say much as I'm not that smart.

Edit: having looked at the cases that are disallowed, they remind me of generator expressions. Those are usually written with parens, that can optionally be omitted in some cases. := is the same except they can be omitted in so many cases that it's easier to list the cases where they can't.

I think a generator expression used as a statement already requires the parens, even though they can be omitted e.g. as a single parameter of a function call. So that's probably ok then.

Not really, but neither are ugly nested if statements (Flat is better than nested, readability counts, etcetera). You need to make tradeoffs.

Maybe it would have been better to only have a single := assignment operator to begin with. But it's a few decades too late for that.

For what it's worth, := is for expressions only. Using it as a statement is a syntax error. So there won't be a lot of cases where both are equally elegant options.

Regular = can only be used in statements. Walrus := can only be used in expressions. There's no overlap there. However, := does simplify certain expressions (like those nested if-else statements and the common "while chunk := read()" loop), which I think does justify its existence.
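To make that concrete, a minimal sketch of the `while chunk := read()` idiom (Python 3.8+), using an in-memory stream so it's self-contained:

```python
import io

stream = io.BytesIO(b"abcdefghij")
chunks = []
# The walrus assigns and tests in one place; previously this needed a
# priming read before the loop or a `while True` with a break.
while chunk := stream.read(4):
    chunks.append(chunk)
print(chunks)  # [b'abcd', b'efgh', b'ij']
```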

This honestly makes it seem more confusing to me. The fact that there is now an operator that can only be used in certain statements just makes things more confusing. And if there really is no overlap, then why wasn't the "=" operator just extended to also work in expressions? "while chunk = read()" seems like it makes just as much sense without adding the confusion of another operator.

One of the good things about not using the "=" operator is that you cannot accidentally turn a comparison into an assignment, a feature that is a common cause of errors in other languages that do support it. By adding a completely different character to the operator it is not very likely to cause bugs, compared to just forgetting to type that second =

Is it really that common? I've made this typo a few times in my life, and it was corrected every time before the program actually ran, because the compiler warned me about it. I don't see how you can make this mistake unless you're aggressively trying (by turning off warnings, for example).

I guess it is not common, but by using the = operator you would not get the warning, and instead get unexpected behaviour.

I expect the PEP authors want to avoid the "while chunk == read()" class of bugs

ninjaedit: indeed https://www.python.org/dev/peps/pep-0572/#why-not-just-turn-...

And also the converse (but no less dangerous) "if value = true"

> The fact that there is now an operator that can only be used in certain statements just makes things more confusing

The new operator (like many Python operators) can only be used in expressions (statements can contain expressions, but expressions are not a subset of statements.)

> The fact that there is now an operator that can only be used in certain statements just makes things more confusing

Because the “=” operator is the thing that defines an assignment statement. Even if this could be resolved unambiguously for language parsers, so that “statement defining” and “within expression” uses didn't clash, it would create potential readability difficulties for human readers. Keeping them separate makes the meaning of complicated assignment statements and assignment-including expressions more immediately visually clear.

>it would create potential readability difficulties for human reading

I think the major argument (at least, the one I see most frequently) is that the walrus operator does create readability difficulties for humans, which is exactly why many people view it as non-pythonic. This is one of the few times I've seen someone argue that ":=" makes things more readable.

An argument against expression assignment is that it can create readability problems compared to separating the assignment from the expression in which the value is used. Even most supporters of the feature agree that this can be true in many cases and that it should be used judiciously.

This is in no way contrary to the argument that the walrus operator improves readability of expression assignments compared to using the same operator that defines assignment statements.

There are at least three ways to iterate over a list and create a new list as a result. That's not very pythonic, is it?
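For illustration, three of those ways (a plain loop, a comprehension, and `map`), all producing the same result:

```python
nums = [1, 2, 3]

# 1. Imperative loop
squares_loop = []
for n in nums:
    squares_loop.append(n * n)

# 2. List comprehension
squares_comp = [n * n for n in nums]

# 3. map() (functional style)
squares_map = list(map(lambda n: n * n, nums))

print(squares_loop == squares_comp == squares_map == [1, 4, 9])  # True
```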

The Zen of Python states:

> There should be one-- and preferably only one --obvious way to do it.

There are plenty of ways to do assignments. Walrus assignments are only the obvious way in certain cases, and in general there aren't other obvious ways. For testing and assigning the result of re.match, for instance, walrus assignments are clearly better than a temporary.
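A quick sketch of the re.match case (the pattern and names here are made up for illustration):

```python
import re

line = "error: code 404"

# Without the walrus: a temporary binding on its own line.
m = re.match(r"error: code (\d+)", line)
if m:
    code = m.group(1)

# With the walrus (3.8+): test and bind in one expression.
if m := re.match(r"error: code (\d+)", line):
    code = m.group(1)
print(code)  # 404
```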

I can think of lots of nonobvious ways to do assignments, like setattr(__module__...)

I can think of more than two ways to do a lot of things in Python. Besides the ":=" doesn't work exactly the same.

Also, I can't bring myself to call it the walrus operator. Sorry, guys. I had a Pascal teacher ages ago who pronounced it "gets" and that has always stuck.

Assignment can be confusing already.

  >>> locals()['a'] = 1
  >>> a
If anything, the walrus operator allows for tightly-scoped assignment, which is good in my opinion.

You don't even need the locals() function to get into trouble:

    x = [1, 2, 3, 4]
    def foo():
        x[0] += 3  # Okay
    def bar():
        x += [3]   # UnboundLocalError
    def qux():
        x = [5, 6, 7, 8]  # Binds a new `x`.

    def bar():
        x += [3]   # UnboundLocalError
This is an especially funky one. x.extend([3]) would be allowed. Presumably x += [3] is not because it expands to x = x + [3]... However, the += operator on lists works the same as extend(), i.e. it changes the list in-place.

dis.dis(bar) shows:

              0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 (3)
              4 INPLACE_ADD
              6 STORE_FAST               0 (x)
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE
So INPLACE_ADD and STORE_FAST are essentially doing x = x.__iadd__([3])
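A small sketch of why the two spellings behave differently for lists, even though both rebind the name:

```python
x = [1, 2]
y = x

y += [3]        # __iadd__: mutates the shared list AND rebinds y to it
print(x, x is y)  # [1, 2, 3] True

y = y + [4]     # __add__: builds a brand-new list; x is untouched
print(x, y)     # [1, 2, 3] [1, 2, 3, 4]
```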

This isn't really true. There's one way to do assignment, `=`, and one way to do special assignment that also works as an expression, `:=`. You should always use `=` unless you *need* `:=`.

I don't think that philosophy was ever truly embraced to begin with. If you want evidence of that try reading the standard library (the older the better) and then try running the code through a linter.

The idea that str.format produced simpler or more readable code than f-strings is contrary to the experience of most Python users I know. Similarly, the contortions we have to go through in order to work around the lack of assignment expressions are anything but readable.

I do agree that Python is moving further and further away from the only-one-way-to-do-it ethos, but on the other hand, Python has always emphasized practicality over principles.

This is what happens when you lose a BDFL. While things become more "democratic", you lose the vision and start trying to make everyone happy.

Walrus operator is the direct result of the BDFL pushing it over significant objection.

Well, there were 4 versions released since 3.3 that still had a BDFL, so I dunno if that's the issue, yet.

I'm someone who loves the new features even though I don't think they're "pythonic" in the classical meaning of the term. That makes me think that being pythonic at its most basic level is actually about making it easier to reason about your code... and on that count I have found most of the new features have really helped.

You can write very Python2.7 looking code with Python3. I don't think many syntax changes/deprecations have occurred (I know some have).

Yep, I did a 2to3 conversion recently and it got the whole project 95% of the way there. A 3to2 would be in theory almost as simple to do for most projects.

My first thought was the same as the snarky sibling comment, but after reading TFA I realized these are all features I've used in other languages and detest. The walrus operator and complex string formatting are both character-pinching anti-maintainability features.

To me, the headline feature for Python 3.8 is shared memory for multiprocessing (contributed by Davin Potts).

Some kinds of data can be passed back and forth between processes with near zero overhead (no pickling, sockets, or unpickling).

This significantly improves Python's story for taking advantage of multiple cores.

For us who didn't follow:

"multiprocessing.shared_memory — Provides shared memory for direct access across processes"


And it has the example which "demonstrates a practical use of the SharedMemory class with NumPy arrays, accessing the same numpy.ndarray from two distinct Python shells."

Also, SharedMemory

"Creates a new shared memory block or attaches to an existing shared memory block. Each shared memory block is assigned a unique name. In this way, one process can create a shared memory block with a particular name and a different process can attach to that same shared memory block using that same name.

As a resource for sharing data across processes, shared memory blocks may outlive the original process that created them. When one process no longer needs access to a shared memory block that might still be needed by other processes, the close() method should be called. When a shared memory block is no longer needed by any process, the unlink() method should be called to ensure proper cleanup."
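A minimal single-process sketch of that API (Python 3.8+); in real use a second process would attach via `SharedMemory(name=shm.name)`:

```python
from multiprocessing import shared_memory

# Create a named block; another process could attach to it with
# shared_memory.SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"
data = bytes(shm.buf[:5])  # copy out before releasing the block
print(data)  # b'hello'

shm.close()   # this process is done with the block
shm.unlink()  # free the block once no process needs it
```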

Really nice.

It looks like this will make efficient data transfer much more convenient, but it's worth noting this had always been possible with some manual effort. Python has had `mmap` support at least as long ago as Python 2.7, which works fine for zero-overhead transfer of data.

With mmap you have to specify a file name (actually a file number), but so long as you set the length to zero before you close it there's no reason any data would get written to disk. On Unix you can even unlink the file before you start writing it if you wish, or create it with the tempfile module and never give it a file name at all (although this makes it harder to open in other processes as they can't then just mmap by file name). The mmap object satisfies the buffer protocol so you can create numpy arrays that directly reference the bytes in it. The memory-mapped data can be shared between processes regardless of whether they use the multiprocessing module or even whether they're all written in Python.
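For example, an anonymous mapping never touches the disk or a file name at all (rough sketch):

```python
import mmap

# fileno -1 creates an anonymous mapping; nothing is backed by a file,
# and the buffer can be inherited by forked children or wrapped by
# numpy through the buffer protocol.
buf = mmap.mmap(-1, 4096)
buf[:5] = b"hello"
data = bytes(buf[:5])
print(data)  # b'hello'
buf.close()
```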


Also on linux is sysv_ipc.SharedMemory.

Isn't that already the case?

I thought that when you use multiprocessing in Python, a new process gets forked, and while each new process has separate virtual memory, that virtual memory points to the same physical location until the process tries to write to it (i.e. copy-on-write)?

That's true but running VMs mutate their heaps, both managed and malloced. CoW also only works from parent to child. You can't share mutable memory this way.

Empty space in internal pages gets used for allocating new objects, reference counts get updated, GC flags get flipped, etc., and it just takes one write in each 4kb page to trigger a whole page copy.

It doesn't take long before a busy web worker etc will cause a huge chunk of the memory to be copied into the child.

There are definitely ways to make it much more effective like this work by Instagram that went into Python 3.7: https://instagram-engineering.com/copy-on-write-friendly-pyt...

Yes, the problem is sharing data between parent and child after the parent process has been forked.

Yes, sharing pre-fork data is as old as fork().

Sharing post-fork data is where it gets interesting.

If you have 4 cores, you may want to spawn 4 children, then share stuff between them. Not just top-down.

E.G: live settings, cached values, white/black lists, etc

> no pickling, sockets, or unpickling

But still copying?

If not, then how does it interoperate with garbage collection?

It works with the memoryview/buffer interface, so you can have eg a Numpy array backed by a sharedmemory object attached to a named SM segment.

So it's not for containing normal Python dicts, strings etc that are individually tracked by GC.


I’ve been waiting for this for a very long time. Thank you for mentioning this.

Would this work with e.g. large NumPy arrays?

(and this is Raymond Hettinger himself, wow)

An alternative you may want is Dask.

Dask doesn’t support shared memory without pickling because Python doesn’t.

Oh no way. That has huge potential. What are the limitations?

Agreed this is huge.

I long for a language which has a basic featureset, and then "freezes", and no longer adds any more language features.

You may continue working on the standard library, optimizing, etc. Just no new language features.

In my opinion, someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.

If new language features get added over time, eventually you get to the case where there are obscure features everyone has to look up every time they use them.

Common Lisp seems to tick the boxes. The syntax is stable and it doesn't change. New syntax can be added through extensions (pattern matching, string interpolation, etc). The language is stable, meaning code written in pure CL still runs 20 years later. Then there are de-facto standard libraries (bordeaux-threads, lparallel,…) and other libraries. Implementations continue to be optimized (SBCL, CCL) and to develop core features (package-local-nicknames) and new implementations arise (Clasp, CL on LLVM, notably for bioinformatics). It's been rough at the beginning but a joy so far.


The "very compact, never changing" language will end up not quite expressive, thus prone to boilerplate; look at Go.

Lisps avoid this by building abstractions from the same material as the language itself. Basically no other language family has this property, though JavaScript and Kotlin, via different mechanisms, achieve something similar.

I like to think that lisp is its own fixed point.

The Turing Machine programming language specification has been frozen for a long time, and it's easy to learn in a few days.

So has John von Neumann's 29 state cellular automata!



(Actually there was a non-standard extension developed in 1995 to make signal crossing and other things easier, but other than that, it's a pretty stable programming language.)

>Renato Nobili and Umberto Pesavento published the first fully implemented self-reproducing cellular automaton in 1995, nearly fifty years after von Neumann's work. They used a 32-state cellular automaton instead of von Neumann's original 29-state specification, extending it to allow for easier signal-crossing, explicit memory function and a more compact design. They also published an implementation of a general constructor within the original 29-state CA but not one capable of complete replication - the configuration cannot duplicate its tape, nor can it trigger its offspring; the configuration can only construct.

Such languages exist. Ones that come to mind offhand are: Standard ML, FORTH, Pascal, Prolog.

All of which are ones that I once thought were quite enjoyable to work in, and still think are well worth taking some time to learn. But I submit that the fact that none of them have really stood the test of time is, at the very least, highly suggestive. Perhaps we don't yet know all there is to know about what kinds of programming language constructs provide the best tooling for writing clean, readable, maintainable code, and languages that want to try and remain relevant will have to change with the times. Even Fortran gets an update every 5-10 years.

I also submit that, when you've got a multi-statement idiom that happens just all the time, there is value in pushing it into the language. That can actually be a bulwark against TMTOWTDI, because you've taken an idiom that everyone wants to put their own special spin on, or that they can occasionally goof up on, and turned it into something that the compiler can help you with. Java's try-with-resources is a great example of this, as are C#'s auto-properties. Both took a big swath of common bugs and virtually eliminated them from the codebases of people who were willing to adopt a new feature.

Prolog has an ISO standard... I am not sure if it's still evolving, but specific Prolog implementations can and often do add their own non-standard extensions. For example, SWI-Prolog added dictionaries and a non-standard (but very useful) string type in version 7.

That said, it is nice that I can take a Prolog text from the 1980s or 1990s and find that almost all of the code still works, with minor or no modifications...


From the v1.9 release just a few weeks ago: https://elixir-lang.org/blog/2019/06/24/elixir-v1-9-0-releas...

> As mentioned earlier, releases was the last planned feature for Elixir. We don’t have any major user-facing feature in the works nor planned. I know for certain some will consider this fact the most exciting part of this announcement!

> Of course, it does not mean that v1.9 is the last Elixir version. We will continue shipping new releases every 6 months with enhancements, bug fixes and improvements.

That's just an announcement that they reached the end of the list of user-facing syntax changes on their roadmap.


I imagine churn will still happen, except it will be in the library/framework ecosystem around the language (think JavaScript fatigue).

Most Elixir projects have very few dependencies because the Elixir and Erlang stdlibs are very batteries included. You don't typically reach for a dependency unless you need most of its features. Often you will reimplement the parts you need in your own code, except where it's reasonably complicated (pooling, DB connections, ORMs, web frameworks) or tricky to get right (security, password hashing, paxos).

Brainfuck has been extremely stable. You can learn every operator in minutes.

someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.

Why should this be true for every language? Certainly we should have languages like this. But not every language needs to be like this.

Well, maybe not for every language, but probably for a language where simplicity has been a major feature.

I started using Python seriously only three years ago after 30 years of other languages and I didn't find it very simple. Maybe the core of the language is simple but the standard library and many other important modules can be very complicated. Among similar languages Ruby and JavaScript are far simpler.

JavaScript used to be simple... But Promises, closures, prototype chains, and npm/modules/minification/webpack has added a massive amount of complexity to being able to just read and understand a bit of code.

Javascript isn't simple any longer. And I'm not sure Ruby is that simple, not once you dig into the advanced features.

Verrrrrrry few languages in common use are like this.

All you're doing then is moving the evolution of the language into the common libraries, community conventions, and tooling. Think of JavaScript before ES2015: it had stayed almost unchanged for more than a decade, and as a result, knowing JavaScript meant knowing JS and jQuery, prototype, underscore, various promise libraries, AMD/commonjs/require based module systems, followed by an explosion of "transpiled to vanilla JS" languages like coffeescript. The same happened with C decades earlier: while the core language in K&R C was small and understandable, you really weren't coding C unless you had a pile of libraries and approaches and compiler-specific macros and such.

Python, judged against JS, is almost sedate in its evolution.

It would be nice if a combination of language, libraries, and coding orthodoxy remained stable for more than a few years, but that's just not the technology landscape in which we work. Thanks, Internet.

It's apples and oranges.

Python was explicitly designed and had a dedicated BDFL for the vast majority of its nearly 30 year history functioning as a standards body.

JS, on the other hand, was hacked together in a week in the mid-90s and then the baseline implementation that could be relied on was emergent behavior at best, anarchy at worst for 15 years.

Agreed, but the anarchy of JS was a result of a dead standards process between the major vendors that resulted in de facto freeze. The anarchy is direct result of a stewardship body not evolving the language to meet evolving needs.

The only frozen languages are the ones nobody uses except for play or academic purposes.

As soon as people start using a language, they see ways of improving it.

It isn't unlike spoken languages. Go learn Esperanto if you want to learn something that doesn't change.

Esperanto does change, in that new items of vocabulary are introduced from time to time. For example, 'mojosa', the word for 'cool' is only about thirty years old.

This is why a lot of scientific code still uses fortran, code written several decades ago still compiles and has the same output.

How long has the code which was transitioned to python lasted?

> How long has the code which was transitioned to python lasted?

A long time. 2to3 was good for ~90% of my code, at least

Good for 90% of your code is not equivalent to getting precisely the same results from unmodified code written in the 80s.

More likely to mean 90% of projects, not 90% of each file, which would mean that every one was broken.

We will review that statement in 30 years!

Likely, we will review that statement in 2038 at the latest.

I have compiled Fortran programs from the 70s on modern platforms without changing a line. The compiler, OS, and CPU architecture had all disappeared but the programs still worked correctly.

Fortran has added a whole lot of features over time though.

but you can still compile F66 with Intel Fortran compiler 2020 (and other compilers as well)

This isn't that good of a metric for code utility. Sure, very-long-lived code probably solved the problem well (though it can also just be a first-mover kind of thing), but a lot of code is written to solve specific problems in a way that's not worth generalizing.

I write a lot of python for astrophysics. It has plenty of shortcomings, and much of what's written will not be useful 10 years from now due to changing APIs, architectures, etc., but that's partly by design: most of the problems I work on really are not suited to a hyper-optimized domain-specific languages like FORTRAN. We're actively figuring out what works best in the space, and shortcomings of python be damned, it's reasonably expressive while being adequately stable.

C/FORTRAN stability sounds fine and good until you want to solve a non-mathematical problem with your code or extend the old code in some non-trivial way. Humans haven't changed mathematical notations in centuries (since they've mostly proven efficient for their problem space), but even those don't always work well in adjacent math topics. The bra-ket notation of quantum mechanics, <a|b>, was a nice shorthand for representing quantum states and their linear products; Feynman diagrams are laughably simple pictograms of horrid integrals. I would say that those changes in notation reflected exciting developments that turned out to persist; so it is with programming languages, where notations/syntaxes that capture the problem space well become persistent features of future languages. Now, that doesn't mean you need to code in an "experimental" language, but if a new-ish problem hasn't been addressed well in more stable languages, you're probably better off going where the language/library devs are trying to address it. If you want your code to run in 40 years, use C/FORTRAN and write incremental improvements to fundamental algorithm implementations. If you want to solve problems right now that those langs are ill-suited to, though, then who cares how long the language specs (or your own code) last as long as they're stable enough to minimize breaking changes/maintenance? This applies to every non-ossified language: the hyper-long-term survival of the code is not the metric you should use (in most cases) when deciding how to write your code.

My point is just that short code lifetimes can be perfectly fine; they can even be markers of extreme innovation. This applies to fast-changing stuff like Julia and Rust (which I don't use for work because they're changing too quickly, and maintenance burdens are hence too high). But some of their innovative features will stand the test of time, and I'll either end up using them in future versions of older languages, or I'll end up using the exciting new languages when they've matured a bit.

By the way, three decades have passed since FORTRAN became Fortran.

From what I've seen, Go is the closest we have for mainstream language resistant to change.

Recently the Go team decided not to add the try-keyword to the language. I'm not a Go programmer and was a bit stumped by the decision until I saw a talk of Rob Pike regarding the fundamental principle of Go to stick to simplicity first. [1]

One of the takeaways is, that most languages and their features converge to a point, where each language contains all the features of the other languages. C++, Java and C# are primary examples. At the same time complexity increases.

Go is different, because of the simplicity first rule. It easens the burden on the programmer and on the maintainer. I think python would definitely profit from such a mindset.

[1] https://www.youtube.com/watch?v=rFejpH_tAHM

In my opinion, someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.

"Understanding" what each individual line means is very different from understanding the code. There are always higher level concepts you need to recognize, and it's often better for languages to support those concepts directly rather than requiring developers to constantly reimplement them. Consider a Java class where you have to check dozens of lines of accessors and equals and hashCode to verify that it's an immutable value object, compared to "data class" in Kotlin or @dataclass in Python.

Sometimes a language introduces a concept that's new to you. Then you need way more time. For example, monads : I understood it (the concept) rather quickly, but it took a few weeks to get it down so I could benefit from it.

Try C maybe? It is still updated, but only really minor tweaks for optimisation.

Also, the Common Lisp spec hasn't changed since the 90s, and it is still useful as a "quick and dirty" language, with little prior knowledge required. But the "basic feature set" can make everything, so the "understand any code" part is not really respected. Maybe Clojure is easier to understand (and also has a more limited base feature set, with no CLOS).

C compilers like GCC and Clang have dialect selection options that work; if you pick -std=c90, you can write C like it's 1990.

remember the gang of four book? such books happen when the language is incapable of expressing ideas concisely. complexity gets pushed to libraries which you have to understand anyway. i'd rather have syntax for the visitor pattern or whatever else is there.

Python 2.7 is not far from that language.

What's stopping people from forking the language at python 2.7? Let the pythonistas add whatever feature they feel like while people who need stability use "Fortran python" or whatever.

Probably most of the people who like writing language interpreters understood that Python 3 fixed a lot of mistakes, so it would be funner to work on.

Though I'm surprised nobody really wrote a transitional fork (six gets you a lot of the way but "Python 2.8 which has _just_ the str/bytes change" would have been useful).

Ultimately Python 2 isn't a better language, it's just the language everyone's code was in...

In my fantasy-py language, there is no "str", base types would be explicit. unicode() bytes(). "something" could have an implicit u"". Composite types could be explicit. If I want a set of int's, I can use mypy now to s1: t.Set[int] = set(), but that's just linting.

> What's stopping people from forking the language at python 2.7?

If you don't want to change or add anything to the language, then why fork it? You can just continue using it as it is!

The implementation needs to be maintained.

I truly wish this would become a thing. It's really frustrating having to update my installed packages and my code for some stupid change the language designers thought is sooo worth it. Just stabilize the bloody thing so I can do some work. Updating code so it meshes with the "latest and greatest" is _not real work_.

Fixing the entirely broken string/bytes mess up in Python 2 was worth it by itself. For bonus points old style classes went away, and the language got a significant speed boost. And now it’s not going to die a slow death, choking on the past poor decisions it’s burdened with.

Trivializing that by suggesting it was some offhand, unneeded solution to a problem that some dreamy “language designer” thought up is at best completely and utterly ignorant.

Also maintenance, in all forms, is work. That does involve updating your systems from time to time.

> and the language got a significant speed boost.

I have not seen a clear win in real benchmarks. 3 was slower for the longest time, and nowadays it seems head to head depending on the project.

Check out https://speed.python.org/comparison/. It’s not significantly faster, but it’s getting more so.

I don't know how to say head-to-head more than this graph


I would say this makes it a bit clearer:


Maybe it's work if you get paid by lines of code and JIRA tickets, but programming is just a tool for me to get my real work done. So I would like to spend as little time programming as I possibly can.

Nobody here gets paid per Jira ticket or line of code.

Sure, if you don’t program and just write ad-hoc (unmaintainable?) scripts then the transition is annoying. But it’s also not required. Don’t re-write your scripts, you can always ensure that Python 2 is present.

But if you’re maintaining a project that uses the wider ecosystem, then you are at the mercy of that ecosystem. And, at the time of the decision to make Python 3, that ecosystem was saying “Python 2 has a lot of horrible legacy decisions that make it harder than it should be to write good code”.

Containers or environment management solve this problem quite easily. All of my major projects have a conda environment alongside them, and I expect I'll be shifting things over to Docker containers as my org starts shifting things to the cloud.
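(A hypothetical example of what "an environment alongside the project" looks like — the project name and pins here are made up:)

```yaml
# environment.yml: pins the interpreter so system Python upgrades
# can't break the project; recreate with `conda env create -f environment.yml`
name: myproject
dependencies:
  - python=3.7
  - numpy
```

A Dockerfile starting `FROM python:3.7` accomplishes the same insulation for the container route.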

Isn't that what C is?

Certainly Common Lisp.

Lua is pretty close, and pretty close to Python in terms of style and strengths.

Edit: I actually forgot about the split between LuaJIT (which hasn’t changed since Lua 5.1), and the PUC Lua implementation, which has continued to evolve. I was thinking of the LuaJIT version.

I'm in operations and I've spent much of my career writing code for the Python that worked on the oldest LTS release in my fleet, and for a very long time that was Python 1.5...

I was really happy, in some ways, when Python 2 was announced as getting no new releases and Python 3 wasn't ready, because it allowed a kind of unification of everyone on Python 2.7.

Now we're back on the treadmill of chasing the latest and greatest. I was kind of annoyed when I found I couldn't run Black to format my code because it required a slightly newer Python than I had. But... f-strings and the walrus operator are kind of worth it.
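For what it's worth, the two features combine nicely — a small sketch (the regex and input string are just for illustration; requires 3.8+):

```python
import re

line = "error: code=42"

# Walrus operator (3.8) binds and tests in one expression,
# avoiding a separate `m = re.search(...)` line before the `if`.
if (m := re.search(r"code=(\d+)", line)):
    # f-string (3.6) interpolates the match directly.
    print(f"matched code {m.group(1)}")  # prints: matched code 42
```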

That's what Go has been so far but it might see some changes soon after being "frozen" for ~10 years.

Why can't you do this with Python? No one said you had to use any of these new features...

Though to me that's like saying, "I want this river to stop flowing" or "I'd prefer if the seasons didn't change."

All human languages change over time. It is the nature of language.

Go? I moved a lot of my datascience and machine learning process to Go. Only thing really left in Python land is EDA

Absolutely agree. How many times have you heard "that was true until Python 3.4 but now is no longer an issue" or "that expression is illegal for all Pythons below 3.3", and so on. Not to mention the (ongoing) Python 2->3 debacle.

> Not to mention the (ongoing) Python 2->3 debacle.

When will this talking point die? It's not "ongoing". There's an overwhelming majority who have adopted Python 3 and a small population of laggards.

> There's an overwhelming majority who have adopted Python 3 and a small population of laggards.

That small population includes every BigCo with a large python codebase.

Who cares about syntax that doesn’t work in old, dead versions of Python 3? 3.5 and above is all that matters.

Lua is a great small language.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact