Python 3.6.0 released (python.org)
260 points by plq on Dec 23, 2016 | 185 comments



I absolutely LOVE format strings. Whereas before, you had to format strings in one of these ways (among other more verbose ways):

  "My name is {} and I am {} years old".format(name, age)

  "My name is %s and I am %s years old" % (name, age)

  "My name is {name} and I am {age} years old".format(name=name, age=age)
Now, we can finally use f-strings, where anything in brackets is eval'ed, with the result subbed in:

  f"My name is {name} and I am {age} years old."


As a long-time Perl developer, it's kinda amusing that this is the new big Python feature now, when shells, awk, Perl, PHP and Ruby have had string interpolation for ages.

Yes, I know you can do more with format strings than plain interpolation, but that's all that the basic examples show.

(Also: Perl has allowed underscores in number literals for ages).


Thanks for the sneer, it really helps the Xmas spirit! I'm almost done getting rid of Perl at my corp; it's been an ordeal of unreadable, obfuscated mess. I'll be glad when it's fully gone.


As a python guy learning perl for fun, I've found simple string interpolation and heredocs to be very nice.


String interpolation isn't a new feature, you are confused.

Python just got one more way of doing string interpolation.


I believe that, when most people say "string interpolation", they mean "first class string interpolation syntax", as opposed to the use of function invocation. I don't have citations, but I hope this perspective helps in some way.


printf-style specifiers and substitutions, like in the examples GP provided, aren't considered string interpolation.


> "My name is {name} and I am {age} years old".format(name=name, age=age)

This is a textbook example of string interpolation.

You can look it up on wikipedia. I am not sure what you mean.


The wikipedia examples are somewhat misleading.

The string "My name is {name} and I am {age} years old" is a plain string. By itself, there is no interpolation:

    >>> x = "My name is {name} and I am {age} years old"
    >>> x
    'My name is {name} and I am {age} years old'
The format function is what confers meaning. This is identical to a standard printf situation, wherein the string "%d %x" encodes a transformation but in and of itself isn't a transformation.
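
For instance, the template stays inert until the % operator runs:

    >>> y = "%d %x"
    >>> y
    '%d %x'
    >>> y % (255, 255)
    '255 ff'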

Contrast with the proposal for f-strings:

    >>> x = f"My name is {name} and I am {age} years old"
    # this is an error unless name and age are defined
The literal mandates a particular interpretation, and the f-string cannot exist outside of that in the same way that the standard string does. Or as the PEP puts it:

> It should be noted that an f-string is really an expression evaluated at run time, not a constant value.


By itself there's no interpolation, which simply means it's not an expression.

But when you do: "%(foo)s %(bar)s" % {"foo": "blah", "bar": "blooh"}, this is string interpolation.

Is there a definition anywhere (even a book would suffice) that matches your strict definition of what interpolation is?

I'm not trying to undermine the usefulness of the interpolated expressions and what you said. I just really want to know if the definition is so strict.

UPDATE: http://rosettacode.org/wiki/String_interpolation_(included)

I think that you are the one who's making the definition so specific and strict. I can't find a single source on the Internet that agrees with your definition.


Reading the text of the Wikipedia article, I think it does make a distinction between the two styles.

  string interpolation is the process of evaluating a string literal containing
  one or more placeholders, yielding a result in which the placeholders are 
  replaced with their corresponding values.
I read this as saying that the evaluation of the string literal yields a string where the substitution has already been made. That's not the case with the format style of "string interpolation." You provide a template string and call a function with some arguments, and that replaces the placeholders. The string literal does not handle the substitutions itself. I believe the Wikipedia article agrees that this is the deciding factor.

A few sentences later, the article includes this statement:

  Some languages do not offer string interpolation, instead offering a standard
  function where one parameter is the printf format string, and other(s) provide
  the values for each placeholder.
This sounds pretty similar to what was in Python prior to this release (especially if you know how methods work in Python).

Also notice how the statement begins by saying that not all languages offer string interpolation. I think that's the final nail in the coffin of the argument. You could use virtually any language to write a function that handles substitutions in template strings. If that qualified as string interpolation then every language would have it. Since some languages do not, I think we can safely say that we're talking about a syntax feature.


Don't worry :) soon the other ways will be deprecated. There can be only one _pythonic_ way to do it.


I remember when I started Python that one of the big differences was that there weren't a thousand ways to do something, but one right way.


There's always been multiple ways to do something in Python, just one best way, and the best way has been known to change between versions.

See: list comprehensions obsoleting most uses of map() and filter(), then generator expressions obsoleting any uses of list comprehensions that don't need a list object.

Here are four ways to do the same thing.

Naïve way:

    foo = []
    for x in bar:
        if meets_some_condition(x):
            foo.append(transform(x))
    do_something_with(foo)
With map/filter:

    do_something_with(map(transform, filter(meets_some_condition, bar)))
With list comprehensions:

    do_something_with([transform(x) for x in bar if meets_some_condition(x)])
With generator expressions:

    do_something_with(transform(x) for x in bar if meets_some_condition(x))
All four of those are valid in modern Python, but the last one, and only the last one, is the right way to do it.

Before Python 2.0, either map/filter or the naïve way was the right way (TBH, I'm not familiar with what was considered right in the old days). When Python 2.0 came out, list comprehensions became the right way. When Python 2.4 came out, generator expressions became the right way. The right way depends on your version of Python.

Or for that matter, look at old-style classes vs. new-style classes. When new-style classes came out, they immediately became the one and only right way to do classes, but old style classes still stuck around for the rest of the 2.x series.


I'm going to disagree with you there and prefer the "naive way". This is the clear and explicit way to express your intent. Personally, I'm a big fan of a few more lines for more clarity (within reason), and not a fan of unique, language-specific incantations like comprehensions just for the sake of syntactic sugar.

An implicit coding style that requires the reader to look up the semantics of Python's comprehensions (and also makes it harder to unwind/inject debugging/keep line lengths reasonable) is a negative unless you're gaining something besides brevity.

I understand some claim that comprehensions are faster for various reasons, but this is presumably implementation dependent and shouldn't be assumed unless you know it's the case in your implementation. I also understand that the generator expression uses less memory, but the generator version can be written the long way too:

    def test_bar(bar):
        for foo in bar:
            if meets_some_condition(foo):
                yield transform(foo)

    do_something_with(test_bar(bar))
I know that Python is pretty reasonable but I don't like starting the descent down into the alphabet-soup madness that we get in a lot of other languages.


But list comprehensions, and now generators, are what make Python "Pythonic". I'm a relative novice with Python, but they are straightforward to understand, and are fairly consistently used by practitioners of the language.

But, I agree with you regarding creeping complexity in general - @wrappers/decorators confused me for the longest time, and I often wonder how someone not familiar with the language is supposed to understand what they are doing.


> This is the clear and explicit way to express your intent.

I disagree. For the (common) circumstances where genexp's are appropriate, they are also the most clear and explicit way of expressing intent (explicit loops express the mechanism, not the intent.)

For cases where you need a list object, list comprehensions are the most direct expression of intent.

There are cases where explicit looping is the most clear expression of intent; usually, this would be where complex logic is needed within each loop iteration.


To be fair, string interpolation is an outlier. Also, Python 3 cleaned up a few areas where there were redundant ways to do something, like standard library modules that had both C and (pure) Python versions.


Yup, we could do with a python4 to clean things up ...


awk doesn't have string interpolation, though.

--- "Avoid programming languages starting with the letter P except Prolog"


Do you use Prolog in day-to-day work?


Currently, no. But I've implemented an (ISO-)Prolog engine (twice, even). Tragedy is, there are more Prolog engines than Prolog programs, just as there are more web frameworks than web apps ¯\_(ツ)_/¯


I would frequently use

   "My name is {name} and I am {age} years old".format(**locals())
... but I always felt a little guilty like this is some kind of dynamic-programming-on-steroids trick that I should use sparingly.


Ha, I love that trick but always feel guilty because it's breaking "explicit is better than implicit" and someone on the internet told me it's poor style. Very happy the new style was implemented.


Your guilt was right.


Yeah, this is the biggest reason why I'm excited about 3.6. I'm still using % for all formatting because .format() is so ugly and verbose, but I'll switch to f-strings when I actually get the opportunity to use 3.6. In fact, when the PEP first appeared I immediately decided that from now on, I'd write all greenfield projects in Python 3 just so I can take advantage of f-strings as soon as I can use 3.6 (yes, this of all things was what made me switch from 2 to 3).

Though I think it would be cool if you could have a deferred format string, like so:

    template = 'My name is {name} and I am {age} years old. Next year I will be {age + 1}!'
    name = 'Amy'
    age = 32
    formatted = template.apply()
    # formatted = 'My name is Amy and I am 32 years old. Next year I will be 33!'
So that way you could store the template as a constant somewhere and apply it after receiving input from the source of your choice.

You can kind of do that with [0], but it doesn't evaluate expressions (so the age + 1 thing won't work).

    [0] .format(**locals()) # putting it here because HN eats the asterisks if it's not indented


What about..

    def template(name, age):
        return f"My name is {name} and I am {age} .."
    formatted = template('Amy', 32)
?

[Edit: I only started learning Python recently, and this is how I'd think of doing it coming from JS:

    const template = (name, age) => `My name is ${name} and I am ${age}`; 
    const formatted = template('Amy', 32);

]


Check string.Template from the docs.


You can always put your f-string in a lambda if you don't want to evaluate it immediately. Pulling values from the scope of the function you call your apply() method from, though, is some scary magic. I wouldn't be confident that I hadn't screwed that up.


Yeah, templates like that are something you can do with format (you just do template.format(arg_name1=something1, arg_name2=something2)), and it seems like a use case important enough not to use f-strings exclusively.


Cool. string.Template might be what you are looking for.


But I really miss "There should be one-- and preferably only one --obvious way to do it."

(Conflict between improvement and backward compatibility, but I miss it.)


The quote in context shows no conflict:

    In the face of ambiguity, refuse the temptation to guess.
    There should be one-- and preferably only one --obvious way to do it.
    Although that way may not be obvious at first unless you're Dutch.
It's pretty clear that if you support Python 3.6 you should use f-strings.


> It's pretty clear that if you support Python 3.6 you should use f-strings.

And if you support any earlier version of python, you cannot use f strings. Hence, there is a conflict between improvement and backward compatibility.


But for new projects, where you can choose the version of Python you want, there is no requirement for "backwards compatibility" - it's not as though Python 3.6 isn't available for every platform that 3.5 was on.


Yeah! Believe it or not (shameless plug) it was yours truly who finally got the idea snowballing through the gauntlet at python-ideas. That was the extent of my participation, but still, I'm happy with how it all turned out. (pat pat)

Python is no longer inferior to Bash and Perl for short scripts, and it's more concise. Looking across the modern language landscape, this functionality is becoming an industry standard. The format string is also a little more sophisticated than it looks on the surface.

Still, I was a bit down that so many showed up to complain about it, mostly with purity or adversity-builds-character arguments. I'm glad lots of people are enjoying it.


The problem I've found is that when we start using cute tricks like the old "xxx" % locals(), or the newer f-strings, we tend to start forgetting basics like escaping the strings that we're substituting in.

For the example above, this might be fine, but most examples that I see using this kind of thing are things like:

"select * from foo where id = '%s'" or "<input type='hidden' name='foo' value='%s'/>" or "rm -rf /path/to/customer/%s"

I tend to think that in general appending strings is more often than not quite a dangerous operation, and it behooves the caller to have a long think about exactly how the strings being substituted in need to be escaped - as it's pretty rare that they don't need to be.
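
For the SQL case, the safer habit is to pass the values separately and let the driver do the escaping. A rough sketch using sqlite3 just to illustrate (the table and the hostile input are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("create table foo (id text, name text)")
    conn.execute("insert into foo values ('42', 'bob')")

    user_id = "42' or '1'='1"   # hostile input

    # Dangerous: the value is pasted straight into the SQL text
    print(conn.execute("select * from foo where id = '%s'" % user_id).fetchall())  # leaks the row

    # Safer: the driver substitutes the placeholder and handles quoting
    print(conn.execute("select * from foo where id = ?", (user_id,)).fetchall())   # []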


So python now has 4 different ways to do this? Ugh.


It's a fair complaint but they tried deprecating % and that ... well ... ended badly.


Yep, can't please everyone. Someone is going to complain that you broke backwards compatibility if you remove something, and someone is going to complain that you've muddied up the language if you don't. Python 3 was an attempt to straddle that chasm and it definitely caused problems.

I think you just have to play it by ear. Old formatting styles seem like one of those things you want to leave alone for a really long time, because they're so common that it'd be a major PITA to fix all occurrences, and because it doesn't seem like there's an obvious negative ramification to allowing the old styles to continue to evaluate according to the old methods.


It is built right on top of .format(), so perhaps it's more accurate to say 3.1 ways.


My colleague found a little bug in his code when he upgraded to Python 3.6 last week. Python ≤ 3.5 allowed "\o" (evaluating it as a backslash followed by a letter o). Python 3.6 now (correctly) treats this as a warning.


I’m curious actually why an f-string syntax is considered OK when a print-statement is not.

I mean, there’s a precedent for short built-in functions like len(), and apparently 'print "x"' as a statement was terrible so 'print("x")' replaced it. If they wanted such consistency, shouldn’t they have used 'f("{a} {b}")' or something? Or alternately, allow p-strings like 'p"hello, world"' to print?


That would not be a traditional function, because you want {a} to be evaluated in the context of the caller, rather than the context of f.


As I understand it, print() was created not from a sense of consistency, but because a few awkward cases could be simply solved by turning print from a statement into an expression. Use in lambdas would be one example.


I don't like the extra parens, but I made an editor shortcut to alleviate that.

Other cases such as redirecting to a file, and changing the line ending are handled much more intuitively with the print function.
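
Quick illustration of those two keyword arguments:

    import sys

    # end= changes the line terminator, file= redirects the output stream
    print("loading", end="... ", file=sys.stderr)
    print("done", file=sys.stderr)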


In order to implement `f` as a plain function with the same behaviour as f-strings, you would probably have to do some stack trickery to properly capture locals.

In fact, I don't think it would be possible to write such a function in the general case, because it's ambiguous what environment you would need to use.

For example:

    def call(func, arg):
        a = 2
        return func(arg)

    a = 1
    print(call(f, "{a}"))

Does this print 1 or 2?
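
For what it's worth, here's a rough sketch of the kind of stack trickery involved (purely illustrative, not how real f-strings work). With this version the snippet above prints 2, because the frame one level up from f is call() itself:

    import sys

    def f(template):
        # Peek at the caller's namespace; fragile by design, hence "trickery"
        caller = sys._getframe(1)
        return template.format_map({**caller.f_globals, **caller.f_locals})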


> I’m curious actually why an f-string syntax is considered OK when a print-statement is not.

One reason might be that f-strings work in expressions, while statements do not. Even with the BDFL's apparent distaste for functional programming constructs, syntax that works within expressions has lots of practical advantages over statements.


The prefix isn't new, there were already u, b, and r strings.


I can imagine this to be especially useful in implementing __repr__ and __str__ for classes.
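
For example, something like this (a made-up class, just to illustrate):

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # any expression works inside the braces, including attribute access
            return f"Point(x={self.x}, y={self.y})"

        def __str__(self):
            return f"({self.x}, {self.y})"

    print(repr(Point(1, 2)))   # Point(x=1, y=2)
    print(Point(1, 2))         # (1, 2)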


When can I expect to import this into 2.7.x from future?


It is my understanding that the __future__ module is frozen.


You might want to check out Python 2.8

https://www.naftaliharris.com/blog/why-making-python-2.8/


Does anyone else find Python just as 'crazy' as Javascript gets called?


  >>> 1 + '1'
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: unsupported operand type(s) for +: 'int' and 'str'
No.


    >>> float('inf') < 'hi'
    True
Fixed in Python 3, but still. There are more of these things.


So, you mean it was fixed 8 years ago, then.



No. Python is probably the least crazy mainstream language IMO. (Except the middle-endian for comprehensions, and the silly truthiness definition for dates)


I find it weird that there are built-in functions like `map` but string objects still have methods like `split` - but I am so used to everything being a method on an object in JS.

I mean that it feels so much nicer to do:

    let x = [ 1, 2, 3].map(i => i * 10)
than

    x = list(map((lambda i: i * 10), [ 1, 2, 3 ]))


The pythonic (i.e. right) way to write this is:

    x = [i * 10 for i in [1, 2, 3]]


    i. let x = [ 1, 2, 3].map(i => i * 10)

    ii. x = list(map((lambda i: i * 10), [ 1, 2, 3 ]))

    iii. x = [i * 10 for i in [1, 2, 3]]
All these methods are really just different sides of the same coin. All I see is different syntax; none looks any better from my point of view, and unless there is a clear performance benefit, just use what you are used to, or what your coding standards define.


ii. vs iii. boils down to explicit rather than implicit. iii. is simply easier to understand. I believe most people would also agree that ii. doesn't look particularly nice.


I think in general the comprehension is faster because of how method calls and symbol lookups work in python. For the most part it's just a style thing.


I understand superficially that this works, but can you explain the syntax (or is there a name for this)? So far I've also seen a for loop like

    for i in [1,2,3]:
        ...



There's a lot of value in having the option of using top-level functions e.g. len. Sometimes something is more natural as a method, sometimes it's more natural as a function. In the specific case I probably agree with you, but I don't think any of the differences qualify as "crazy"; generators being lazy by default is a reasonable choice (though I think the wrong one), map is often used partially applied so it's nice to have it available as a function, => was only recently added to javascript and I would hope Python will get something similar sooner or later (though I'd rather have scala-style _).


I read that apparently it is impossible to add true multi-statement lambdas to Python because of the whitespace blocks thing. There's intentionally no way to group multiple statements together except to have them on multiple lines sharing an indent level, which means there's no way to write a lambda longer than a single expression and have a syntax for it that makes sense.


That issue has never really bothered me tbh. If you're writing in functional style then everything tends to be a single expression anyway, and in the rare cases where you need statements the function tends to be complex enough to be worth naming IMO.


What does the _ do in Scala? I've not used Scala, and it wasn't terribly clear from context unless you mean they use _ exactly as javascript uses => (which would seem really weird to me so I'm assuming its different, apologies if that is asinine).


_ is a placeholder for the next lambda argument. So, for example, `x => x.blah()` could be rewritten as `_.blah()`, and `x => blah(x)` could be rewritten as `blah(_)` (or, since we don't need partial application here, simply `blah`).

It also works for multiple arguments: `_ * _` is equivalent to `(x, y) => x * y`, which is really `(x, y) => x.*(y)`.


"_" is kind of the equivalent of "x => x" (only rather than x it's some temporary generated name that doesn't overlap with anything else). So you'd write "_ * 10" rather than "i => i * 10"


All good points, thanks for explaining!


Obligatory xkcd: https://xkcd.com/927/

In typical perl^Wpython spirit, we now can save a few characters by typing:

    f"My name is {name} and I am {age} years old."
instead of:

    "My name is {name} and I am {age} years old.".format(**locals())
Look how much typing we saved! Now we just have to explain to new users that their strings get eval'd for stuff in brackets, but only when the string is prefixed by `f`...


> "My name is {name} and I am {age} years old.".format(locals())

Well that's a definite code smell.


Do you not realize that this is exactly what f-strings are doing? If you accept the `f""` version but not the `"".format_map(locals())`, you're lying to yourself[1]. At least the `str.format_map(locals())` isn't magic (and IMO, much more pythonic (by virtue of being explicit), although not as pythonic as `str.format(template_var=scoped_var)`).

The only way `str.format_map(locals())` is a code-smell is if the people working on the code don't actually know Python. Builtin methods of core types, dictionary unpacking, and builtin functions are all extremely common and only complete novices at python would find any issue with them. Compare that to what you need to know when you encounter `f"hello {name}"`:

0) am I working with Python 3.6+?

1) when does replacement happen?

2) what scopes are examined for substitution?

Of course, 2) is a trick question because you EVALUATE ARBITRARY PYTHON EXPRESSIONS IN THE FORMAT STRING![2]

[1] Technically, PEP-0498 states that we're not exposing a full locals() or globals(), but this actually matters very little. The danger here is that both .format() and f"" use object.__format__() (falling back to __repr__ IIRC) to figure out the replacement. If your object.__format__() is malicious, it gets access to more data than it needs. However, since Python has zero security model, if it's malicious, it pings whatever command and control server it's reporting to and starts up a thread listening for commands anyway (aka you're fucked). Realistically, this means that you might leak credentials (or other sensitive information) to third party loggers, but only if your method for censoring outgoing log/exception data is a poorly-built blacklist.

[2] https://www.python.org/dev/peps/pep-0498/#supporting-full-py...


Yeah, I don't really have a problem with the old way:

    print("My robot's name is {}. It's {} years old.".format(name, years_old))
Seems reasonable to me. I don't really see any reason to hack in the locals splat. I guess the one thing it saves you is ordering issues, but in practice, I haven't found those to be terribly inconvenient, and in the rare case they are, you can just do "{name}, {age}".format(name=name, age=years_old).

Yes, both forms are a little longer, but as you state, it's easier to see what's going on. One of Python's biggest attractions, IMO, is that it's batteries included, but not batteries-sealed-under-1000-layers-of-carbonite-locked-by-a-mystical-incantation, like many other dynamic languages. It's usually pretty easy to get in there and see what's happening without having to traverse tons of indirection due to Python's design principle favoring explicit programming styles.

I won't lie and say I've never wished for more conventional style string interpolation, but it certainly wasn't a showstopper, and it's not something we should assume we get for free.


I was just using the locals() to show equivalence. I think I've done that maybe a handful of times in production code, and it was always an already hairy templating situation. Most of the time, I see:

    "{} world!".format('hello')
or

    "{greeting} world!".format(greeting='hello')
There is a huge value in Python's historical approach of straightforward code built on a relatively small foundation. f-strings are another example of the Python I fell in love with growing up/moving on.


Sorry, you're wrong. First, it's asterisk asterisk locals(), they don't show here for some reason.

Second, the parser at compile time pulls out the actual expressions and fills them in to a .format call. It _does not_ use locals!

... and your worries about security are not useful, you could make the same statements about an entire code base.


> Sorry, you're wrong. First, it's asterisk asterisk locals(),

I meant what I wrote. Someone, in a rush to criticize on a topic about which they are ignorant, didn't consult the Python documentation:

https://docs.python.org/3/library/stdtypes.html#str.format_m...

> they don't show here for some reason.

Asterisks only show in code blocks. They denote italics in HN's markup. Double asterisks outside of a code block begin and end empty italicized text.

    ** no italics because we're in a code block
> Second, the parser at compile time pulls out the actual expressions and fills them in to a .format call. It __does not__ use locals!

I'll quote myself here, since you missed the first part of this sentence when jumping to a conclusion about why security is irrelevant, but we need to blacklist a well-understood and widely used builtin function anyway.

>> Technically, PEP-0498 states that we're not exposing a full locals() or globals(), but this actually matters very little.

But, while we're trying to be pedantic, let's actually be pedantic:

    >>> import dis
    >>> dis.dis('f"{foo}"')
    
        1           0 LOAD_NAME                0 (foo)
                    2 FORMAT_VALUE             0
                    4 RETURN_VALUE
Shouldn't I be seeing some LOAD_CONST, LOAD_ATTR, and CALL_FUNCTION somewhere?

    >>> import dis
    >>> dis.dis('"{foo}".format("bar")')
    
      1           0 LOAD_CONST               0 ('{foo}')
                  2 LOAD_ATTR                0 (format)
                  4 LOAD_CONST               1 ('bar')
                  6 CALL_FUNCTION            1
                  8 RETURN_VALUE
If you'd like, you can go ahead and examine what it is that FORMAT_VALUE does (hint: invokes PyObject_Format which is defined in abstract.c around line 670 in the Python-3.6.0 tarball).


The guy that implemented it (Eric) said that it is largely syntactic sugar where the bytecode compiler splits the string and converts it into a format(string, args) call. Therefore it gets exactly/only what it needs.

It is true that I've never inspected the C code implementation myself, but I took him at his word.

load_name looks like it read foo, then format_value did format('{}', foo).

I didn't realize it before, but it looks like it might have better performance.


That was the original implementation, AFAICT from reading the ticket. FORMAT_VALUE was introduced to fix a slight inconsistency where someone might monkey with str.format(). While doing so, it was mentioned that there was a slight performance improvement from fewer lookups. See here for more info: https://bugs.python.org/issue25483


That's ringing a bell now, the implementation has some elegance to it.


I'd like to credit abarnert for starting the wordcode changes: https://github.com/abarnert/cpython/tree/wpy We were discussing for a bit & then he went silent. Haven't heard from him since

I kind of stepped in, rewrote the peepholer, & then initiated the python dev discussion earlier than he wanted to. He wanted to develop the idea & benchmark it in a bubble, whereas I really felt that this should be developed with feedback from core Python developers. Serhiy was very patient with me, there was a lot of feedback it turns out

Fun little issue that thankfully got caught last month: http://bugs.python.org/issue28782 (I had thought about this error case, but at the time I had lasti being -2 & the first byte of the bytecode is never 72. I forgot this constraint when having lasti be -1 still)


I didn't know what you meant...until I saw this

The Python interpreter now uses a 16-bit wordcode instead of bytecode which made a number of opcode optimizations possible. (Contributed by Demur Rumed with input and reviews from Serhiy Storchaka and Victor Stinner in issue 26647 and issue 28050.)


The bytecode interpreter I test has over a thousand instructions, because many of the standard library functions are baked in. I asked a colleague whether it made sense to convert to wordcode. He thought it might: simpler decoding, less branching, and easier alignment. I think code size would go up, but not by much.

Really interesting to see this come to Python. Thanks for mentioning it.


> PEP 515 adds the ability to use underscores in numeric literals for improved readability. For example:

    >>> 1_000_000_000_000_000
    1000000000000000
    >>> 0x_FF_FF_FF_FF
    4294967295
> Single underscores are allowed between digits and after any base specifier. Leading, trailing, or multiple underscores in a row are not allowed.


Nice. I really like the look of f-strings as well. All the power of str.format() with none of the verbosity.

    >>> name = "Fred"
    >>> f"He said his name is {name}."
    'He said his name is Fred.'


Surprisingly these are very controversial. A bunch of people seem to hate them. I don't understand why.


I dislike them because not only are they one more string-formatting method (they're similar to but not actually identical to format strings, as you can use arbitrary expressions in f-string placeholders), but more importantly they're the first one which does not work at all for i18n or, more generally, for deferred interpolation (which is also used by `logging`).


Not sure why you think they cannot be used with deferred interpolation. You can easily combine the two like this:

    >>> f'{"fstring"} and {{deferred}}'.format(deferred=1234)
    'fstring and 1234'


> importantly they're the first one which does not work at all for i18n

But surely i18n/deferred interpolation is a corner case, not the norm. If you need that then use what you are currently using; nothing's changed.


There was a large effort to get i18n/deferred support but Guido decided to keep it simple for this release. Perhaps it can be extended in the future. In the meantime i18n uses string.Template.
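
For anyone who hasn't used it, string.Template keeps the substitution deferred, roughly like this:

    >>> from string import Template
    >>> t = Template("My name is $name and I am $age years old")
    >>> t.substitute(name="Amy", age=32)
    'My name is Amy and I am 32 years old'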


There are already three ways to do string interpolation. Adding a fourth way goes against the whole "only one way to do things" aspect of the Zen of Python.


The "only one way to do things" is quite low in the Zen of python. It is preceded by for instance "Beautiful is beter than ugly", "Readability counts" and "practicality beats purity" [1]. There's always a trade of in these rules. In this case I think the right choice was made to add this easier way of doing string interpolation for cases where it can be used, also "Now is better than never".

[1]

    Python 3.5.2 (default, Nov  7 2016, 11:31:36) 
    [GCC 6.2.1 20160830] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import this
    The Zen of Python, by Tim Peters

    Beautiful is better than ugly.
    Explicit is better than implicit.
    Simple is better than complex.
    Complex is better than complicated.
    Flat is better than nested.
    Sparse is better than dense.
    Readability counts.
    Special cases aren't special enough to break the rules.
    Although practicality beats purity.
    Errors should never pass silently.
    Unless explicitly silenced.
    In the face of ambiguity, refuse the temptation to guess.
    There should be one-- and preferably only one --obvious way to do it.
    Although that way may not be obvious at first unless you're Dutch.
    Now is better than never.
    Although never is often better than *right* now.
    If the implementation is hard to explain, it's a bad idea.
    If the implementation is easy to explain, it may be a good idea.
    Namespaces are one honking great idea -- let's do more of those!
    >>>


I didn't know that it was a ranking.

I'm disappointed that there are so many ways to do string formatting/templating, but I think we're all so very painfully aware of how important backward compatibility is. So for the language to gain features like this, I think it should be understood that it will also keep the old ways.


It's not so much that there is only one way, it's that there's only one best, or right way to do it. If f-strings are better than the ways before, then it becomes the new way to do it, and Python is the better for it.


There are several ways of doing multiplication and division (operators, the math library, a loop). Should we enforce only one?

No. Readability counts. One (and only one) point from the Zen of Python does not make an argument against it.


Readability absolutely counts - that's precisely the reason why the "one obvious way" rule is there.

If there's four different ways to do something, then that's four different things the reader has to fully understand in order to parse someone else's code.


> If there's four different ways to do something, then that's four different things the reader has to fully understand in order to parse someone else's code.

Sure, which is why .format() kind of sucks.

> "Hi {name}, your age is {age}".format(name=name, age=age)

You have to parse 'name' and 'age' three times to figure out what's going on. The .format() is completely superfluous in most cases. It's busywork and it's harder to parse.

I quite agree with the "one obvious way" to do something, but that should not hinder progress and improvement. f strings are an improvement and are the obvious way to do things, in the same way that '.format' was an improvement over 'str % values', and how 'str % values' is an improvement over manual concatenation.

They all still exist, but let's use f-strings now and stop complaining?


I'm sorry, but I think you missed my point here. There's now four different ways to do string interpolation, which means that to understand someone else's code, I will need to know how `%`, `.format`, `string.Template` and `f` work - I won't know which method the library author uses. I will need to know how they break in edge cases (and they will be broken, otherwise we wouldn't have replaced them!)

Worse still, I will also need to know how they interact with each other - you can do

  something = "abc"
  x = f"{something}".format(something="def")
and I honestly have no idea what x will be.

So yeah, I would love to "use f strings and stop complaining", but the fact that the other string interpolation methods still exist, and are still supported, is still going to be a problem.


It won't actually be an error; the {something} will already be gone (substituted) by the time the second .format runs, so the unused keyword argument is simply ignored.

It might help to realize the f-string is just syntactic sugar for a .format() call.
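
Concretely (assuming something is "abc" as above):

    >>> something = "abc"
    >>> f"{something}".format(something="def")
    'abc'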


It's not obvious which .format() is evaluated first, though. For 'in-line' string interpolation I think the f'' syntax wins. .format() is still nice for template-style interpolation though. For this reason, and to avoid situations like the parent, I think it should be made static, so that you could write:

    >>> formatted = str.format(template, var1='a', var2=3)
Of course, this will never happen due to the need to maintain backward compatibility.


In python, function calls start from left and continue to the right. Though f'' doesn't look like a function call, in practice it is.


C# got these recently too (it uses $ instead of f, but is otherwise identical), and I love it. The first thing I did when our build tools supported C# 6 was convert almost all string.Format() statements in our repo to use it. Why wouldn't you?


It's getting better with each release :)


Every language should have this.


I totally agree with you. But that opinion reflects two kinds of programming that make up 75% of what I do -- either close to the metal, or comms protocol implementations. I was at PyCON 2016 when Guido was reading off the list of 3.6 new features. In a room full of many 100's of people, I was the only one (that I could hear) applauding for underscores. Guido himself was very dismissive of the feature and it was clear he thought it pointless, but OTOH harmless so why not let it in.

I suspect there is a strong correlation between programmers that want underscores in numeric literals, and programmers that live and breathe hex.


I don't do Python, but PEP 515 has got it all wrong.


Well, maybe they could've proposed to use the non-breaking space character ' ' instead of '_', but few people know how to type ' ' from their keyboard...


Ok.


Ok


Here comes type hinting.

> PEP 484 introduced the standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:

  primes: List[int] = []

  captain: str  # Note: no initial value!
  
  class Starship:
      stats: Dict[str, int] = {}


Actually type hinting was released with 3.5. I've been using it for a while now, and while there are occasional bugs, I love it (I was able to crash my program at runtime because of a type declaration, which isn't supposed to happen).

The difference here is the ability to add type annotations to variables without needing to use a comment to declare it.

I'm also pretty excited for the formatted string literals and secrets module!

https://docs.python.org/3.6/whatsnew/3.6.html


Interesting.

https://www.python.org/dev/peps/pep-0484/

> It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.


Yeah, the typing (in theory) makes no actual changes to the runtime code. It's just for linters like mypy to warn you when you write code that goes against your own type declarations. However because it's added directly into your code, it does have a slight runtime effect. Casting types calls a function (which should just be a no-op function), as well as defining TypeVars.

The crashes I got at runtime aren't because of the typing module preventing code from running due to types, it was just a bug in the runtime part of typing. I'll try to find the github issue to share, but iirc it was related to using Generics with a TypeVar.
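
To illustrate the "no-op at runtime" point, here's a small made-up example:

    from typing import List, cast

    def first_id(rows: List[dict]) -> int:
        # cast() only informs the type checker; at runtime it returns its
        # argument unchanged, so nothing is actually converted or checked here
        return cast(int, rows[0]["id"])

    print(first_id([{"id": "not-an-int"}]))   # prints not-an-int, no runtime error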


OK. I'm wondering if this

  captain: str  # Note: no initial value!
changes the norm: Python's names don't have associated types but values do. I know there's no such thing as variable declaration and variable initialization in Python.
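
(For what it's worth, an annotation without a value doesn't bind the name at all; it only records the type:)

    >>> captain: str       # no value assigned
    >>> __annotations__
    {'captain': <class 'str'>}
    >>> captain
    Traceback (most recent call last):
      ...
    NameError: name 'captain' is not defined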


> I know there's no such thing as variable declaration and variable initialization in Python.

That is not quite true, the first assignment is an implicit declaration, and will set up a slot in the enclosing code object, which is why you get an UnboundLocalError if you use a name before it's first assigned to:

    >>> def foo():
    ...     a
    ...     a = 3
    ... 
    >>> foo()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 2, in foo
    UnboundLocalError: local variable 'a' referenced before assignment
but a NameError if you use a name which is not assigned to at all:

    >>> def foo():
    ...     a
    ... 
    >>> foo()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 2, in foo
    NameError: name 'a' is not defined
In the latter case the slot doesn't exist at all in the lexical context, while in the former there is a slot set up but it has no value associated to it.


I agree, but these are problems with Python's implicit variable declaration.


That's orthogonal to my demonstration that there is in fact "such a thing as variable declaration and initialisation in Python", given that the cases of an undeclared variable and a declared but unassigned to variable both exist with very different errors.


It doesn't affect runtime at all (except when it does :P, but that's a bug). So there is a difference between what's valid in mypy, and what's valid in "real" Python.

Typing does associate type with the names as well, which isn't necessary for legal Python (but is often cleaner). For example, mypy will yell at you if you do something like this:

    def add_week_to_day(d: str) -> str:
      # d arrives as an ISO string, like '2016-12-23'
      d = datetime.strptime(d, '%Y-%m-%d')
      d = d + timedelta(days=7)
      return d.strftime('%Y-%m-%d')
Because d was originally defined as a str, and now I'm re-using it as a datetime. This is of course perfectly valid Python, and will work at runtime. But mypy prefers you use a different name for the different types.


If language permits it, then #$%^& mypy. My bet is on types.


It's not like mypy is the only linter that warns you about stuff that's technically legal.

Like, I make it a point to run all my stuff through pylint, and it typically catches a few things that will technically run but would cause problems.


Really there are many more examples where re-using a name causes harm than where it's idiomatic like the one above. And you can always add an ignore comment to that specific line.


Revolutionary.


What's new in Python 3.6.0: https://docs.python.org/3.6/whatsnew/3.6.html


Dang, tau (τ = 2π) was added to the math module. I like tau, but I thought for sure that wouldn't actually get accepted.

Issue 12345 - https://bugs.python.org/issue12345

PEP 628 - https://www.python.org/dev/peps/pep-0628/


Why dang if you like it?


do async comprehensions ( https://www.python.org/dev/peps/pep-0530/#asynchronous-compr... ) give you concurrency? Or are they a building block to getting that in the future?


async in general gets you concurrency. async def/await is syntactic sugar around generators built with __iter__ and __next__. It is totally possible to do cooperative multitasking with __iter__, __next__, yield, yield from.

What this is doing is painting in some of the missing corners of the syntax. Anything you can do with a list comprehension can be done with a for loop, less tersely, often but not always less readably (choose wisely). This new syntax allows you to write comprehensions with asynchronous behavior that in 3.5 and earlier requires you to write a loop.

Progress is in the eye of the beholder. "Code is read more often than it is written." Please choose the most readable form.
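
For example, in 3.6 you can write the comprehension form directly inside a coroutine. A toy sketch using asyncio.sleep as a stand-in for real async work:

    import asyncio

    async def fetch(i):
        await asyncio.sleep(0)          # pretend this is real async I/O
        return i * 2

    async def main():
        # PEP 530: await (and async for) are now allowed inside comprehensions
        results = [await fetch(i) for i in range(5)]
        print(results)                  # [0, 2, 4, 6, 8]

    asyncio.get_event_loop().run_until_complete(main())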


Yes, async comprehensions are concurrent like all other async mechanisms in Python 3 are. But you are likely thinking about parallelism, not concurrency, in which case the answer is also yes, if you use thread or process executors for tasks.


Great to see even more support for type annotations. Having to specify the type of variables in comments was never a good solution.


Anybody know what's the state of Python 3 right now? I mean, are there popular libraries and frameworks that still use Python 2?

Or is this no longer an issue?


According to the Python 3 Wall of Superpowers[1], almost every commonly-used library has adopted 3. Now the problem is getting people to realize that... I was talking with someone just a month ago who said that there were no numeric processing libraries on 3 and so they couldn't switch.

[1]: https://python3wos.appspot.com/


>Now the problem is getting people to realize that...

I realize it, but (1) even after large libraries support 3 there will still be a long tail of libraries that aren't ported (and will never be), and (2) I see very little value in porting our very large code bases to Python 3, with its breaking changes and subtle behavioral differences that need to be thoroughly tested. There's no must-have feature for us to justify that effort, so we won't be making the switch for the foreseeable future.

IMO Python should have put a lot more effort into backwards compatibility when releasing Python 3. I'm genuinely concerned that this release has killed Python, and I'm not alone.


> I see very little value in porting our very large code bases to Python 3

That's too bad. We've started writing all new code in 3, and now I can't stand working with 2. It feels clunky and old in comparison. A year ago I never would have believed I could feel that way, but I do: I will not willingly (that is, absent strong business reasons) write Python 2 code again.


> (1) even after large libraries support 3 there will still be a long tail of libraries that aren't ported (and will never be)

That should be a non-issue unless you require one of those libraries.

> There's no must-have feature for us to justify that effort, so we won't be making the switch for the foreseeable future.

There's no reason for making the switch if you're going to ditch your code base in the foreseeable future. Otherwise, you'll have to justify your decision in the unforeseeable future. And thereafter.


>That should be a non-issue unless you require one of those libraries.

That's kind of a truism, isn't it? I don't understand the dismissiveness - there's certainly a sizable portion of users that use libraries other than the top 50 most popular.


There are tons of new, interesting libraries that are compatible with Python 3 only, so you can make this argument either way.

In the end, you should ask yourself some questions:

Am I starting a new project or continuing an older one? If the former one, Python 3 should be preferred unless some library I need is only available for Python 2. If the latter, I should ask if the cost of migrating my codebase makes sense. If not, Python 2 is still a viable choice until 2020 at least.


Isn't that the same argument that so many companies used to hang on to IE6 for so long? There were a ton of in-house applications built on ActiveX and IE6 quirks and porting to something newer wasn't worth the cost.

Eventually, don't you think that you will start using a more modern version of Python?


Or switch to another language, just as so many people switched to Firefox or Chrome while waiting for a sane modern IE.


"Modern" for its own sake adds no value, so no, we won't port our existing code bases to something "more modern." It's funny that string interpolation, a port of a feature other languages like PHP have had for decades and the fourth way to do the same thing in Python, is being held up as something modern.


How about "supported?"

You may have also missed the other features from 3.0 to 3.6. Cumulatively they are substantial.


The frustration with the never-ending 2 vs. 3 transition pushed me to look at Elixir more deeply and now I am very happy. Of course it is not yet possible to completely avoid Python, but new things are Elixir-only for me.

Of course I pray every night to Eris that such a fiasco will never happen to Elixir.


What is a numeric processing library? Numpy?


Also SciPy, SymPy, and Pandas. They've supported Python 3 for several years.


My comment was kind of a loaded question. If they consider Numpy a numeric package, then yes, it's been there for a while and they haven't been paying attention. That includes the packages you mentioned as well


The amount of pain you'll experience is dependent on your area of work. E.g. web-dev, systems, science.

Most certainly use Python3 for any new projects.

Here's a quick overview of the most popular PyPI packages and their support for Python 3: http://py3readiness.org/


Most? Do you have a source for that?

As a counterpoint, this JetBrains survey [1] shows only 40% of Python users ever use Python 3. Considering it has been eight years since the initial release, that's a very low share.

[1]


I think you may have misinterpreted what was meant by the "Most certainly use Python3 for any new projects." sentence.

I don't think it's using "Most certainly" in the sense of "Most developers using Python use Python 3 for any new projects.", like you appear to be thinking it says.

Rather, it appears to be saying, "Without any doubt, if you're using Python use Python 3 for any new projects."

It appears to be a statement regarding the suitability of Python 3 for new projects, rather than some quantitative claim regarding the number of users of Python 3.


Correct. Sorry for the ambiguity.


Sounds like a healthy portion to me in comparison to other major software upgrades. The original expectation was about 5 years before the shift started. Seems like we're mildly behind the expected schedule.

After it was released, it'd be only the early adopters for a couple years, then the major libraries would follow a few years after. After 5ish years, the big libraries were ported and the second wave of early adopters could shift over. Now we'll start seeing major enterprises make the move over the next few years. Facebook already made the change, but they're tech-forward.

How many people use old versions of RHEL? Of Windows?



You can see this recently submitted link [0] which tries to answer this question in depth. I believe that python 3 readiness[1] and python 3 wall of superpowers[2] are good indicators of the overall trend towards Python 3 adoption, but the sample size of packages considered in these two websites is not big enough to provide a complete picture of the situation.

[0] https://news.ycombinator.com/item?id=13244627

[1] http://py3readiness.org/

[2] https://python3wos.appspot.com/


For us (and a crazy ton of systems people who don't even use Python for application code), the "supervisor" incompatibility is a blocker.

supervisord is one of the most useful tools to run production systems... and is even more relevant when we look at the Docker world.

A port of supervisord is something we would gladly donate to :(

Unfortunately, I think the maintainers are handicapped by a lack of py3 expertise - https://github.com/Supervisor/supervisor/labels/python%203

Supervisor has some major issues on Python 3. Many of these issues are encoding related as well so merging this one patch doesn't move the needle much. We need someone who has strong experience in Python 2/3 porting and is willing to spend a non-trivial amount of time looking at these bytes/strings issues together.

https://github.com/Supervisor/supervisor/pull/471#issuecomme...


It is standalone though, not a library, and should not be an impediment to others, unless I'm missing something.


And that's why everyone goes back to "we are most definitely not using TWO Python versions. Since everything works on 2.7, let's stay there."


Been using 3.x for a few years now, 3.6 is about to seal the deal.


OpenCV is still a pain. While technically the latest major version is able to work with Python 3, it is relatively unstable and all samples/tutorials out there are for the previous major version.


I'm not sure what issues you've been having, but I've been using OpenCV with 3.5 for over a year with no problems. The docs are out of date, but that's because the code is all auto-generated.


The majority of the largest libraries support 2 and 3. It's become a very small issue for me. I still need to "fall back" to python 2 occasionally though when a vendor doesn't support 3. Namely for me, I need to use the google-ads client library and AWS Lambda, neither of which support Py3. However the majority of my projects are Py3.


Perception of Python 3 is really bad compared to the actual state, especially for those groups that look at py3 every 5 years and assume nothing changes in between checks. The truth is that if you're in the 95th percentile, library support is a non-issue.


There are now more python3-only libraries than python2-only libraries.


The real problem is that there are single-version-"only" libraries at all - too many of them. It kills the Python ecosystem in a really disgusting way.


I disagree. I wouldn't bother writing a Python 2-compatible library today. I mean, if something I wrote happens to work with it, cool!, but I wouldn't expend non-zero effort making that happen. That's especially true for anything involving async or other new language features that would be a PITA to backward-compatibly support.

IMHO, what kills the Python ecosystem is people attempting to stay on a 6+ year old major version when they've been told repeatedly that it's officially dead. Upgrading to Python 3 isn't exactly like porting from Java to OCaml, and writing new code in Python 2 today seems flat-out negligent to me.


No need to be rude. Libraries can't reasonably be expected to support old versions of the language indefinitely - languages have to be able to grow and evolve or they will die.


The most interesting parts are PYTHONMALLOC and Probes (DTrace and SystemTap). Profiling and debugging are very important in production.


Hmmm. I'm curious. Alpine Linux 3.5.0 just came out, so I wonder - how long will it take for this to be shipped with the next release?

(I maintain https://github.com/rcarmo/alpine-python, which saves me hours and hours of deployment time)


Please wait until Alpine Linux 3.6.0, Alpine Linux 3.5.0 is better served by Python 3.5.x :-)


Alpine has (usually) 6 month long development cycles.


This is very exciting. Python got me into programming. So I watch its development very closely.


Related discussion from a few weeks ago:

https://news.ycombinator.com/item?id=13123156


I recently started learning Python for using some machine learning libraries. Professionally I mostly do Javascript (and a tiny bit of Go). Javascript is often bashed for its nonsensical choices... but I actually find the latest version of JS quite eloquent, most especially compared to Python.


Can you name specific areas you think are better? (I'm not challenging your statement; I'm just curious.)



How can I install this (3.6.0) using Homebrew (brew.sh) alongside Python 3.5.2? (Not replacing it)


I would uninstall the homebrew Python versions and instead install pyenv [1], via homebrew, and then install whatever Python versions you want. They will run happily side-by-side.

[1]: https://github.com/yyuu/pyenv


Wow I can't believe I didn't find this earlier!

Well-organized virtual environments AND python versions. Simply amazing!

"brew install pyenv-virtualenv" has me up and running with everything I ever wanted in one line


I'd go with https://github.com/collective/buildout.python or with Macports. Eventually someone will have it for Homebrew.


I was wondering how the new `datetime.fold` attribute would avoid breaking old code. From looking at the full PEP, it appears that there are new fold-aware tzdatas that you have to choose to use.
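
For the curious, the attribute is opt-in and defaults to 0, so existing code keeps its old behaviour:

    >>> from datetime import datetime
    >>> datetime(2016, 11, 6, 1, 30).fold
    0
    >>> datetime(2016, 11, 6, 1, 30, fold=1).fold
    1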


Does anyone know the dev team's plans for version 3.7?


Are there any breaking or behavioral changes in this from 3.5? I'd love to start using some of the new async features.


There aren't any breaking changes from 3.5. The only behavioral change is that dictionaries now maintain insertion order (officially an implementation detail of the new dict in 3.6). But that likely won't break anything.
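
Quick illustration:

    >>> d = {}
    >>> d["b"] = 1
    >>> d["a"] = 2
    >>> list(d)
    ['b', 'a']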


[flagged]


is that you Zed?


No I'm Hex.




