
Speaking as someone who has written Python code almost every day for the last 16 years of my life: I'm not happy about this.

Some of this stuff seems to me like it's opening the doors for some antipatterns that I'm consistently frustrated about when working with Perl code (that I didn't write myself). I had always been quite happy about the fact that Python didn't have language features to blur the lines between what's code vs what's string literals and what's a statement vs what's an expression.




F-strings appeared two versions ago, in 3.6. All in all, the feedback we have received has been overwhelmingly positive, including on maintenance and readability.


I second this. F-strings make string formatting so much more concise. I'm excited about the walrus operator for the same reason.


Not just more concise; less error prone.

A reasonably large number of the bugs I encounter relate to the order or number of formatting arguments not matching the slots in the format string. It's pretty hard to make that kind of mistake with an f-string.
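A minimal sketch of that failure mode (the variable names are illustrative):

```python
name, count = "widgets", 3

# Positional formatting: the arguments live far from their slots,
# so swapping them still runs and silently prints the wrong thing.
swapped = "%s items named %s" % (name, count)

# With an f-string each value sits inside its own slot,
# so this class of mistake is much harder to make.
correct = f"{count} items named {name}"
print(correct)
```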


f-strings allow mutation?

mutation is tricky, a whole field of programming language research is built on avoiding mutation


I love f-strings. I just wish tools like pylint would shut up when I pass f-strings to the logging module. I as the developer understand and accept the extra nanosecond of processor time to parse the string that might not be logged anywhere!


It's not just performance!

Using percent formatting is superior in many ways:

- errors that occur in formatting a message will be logged instead of raising an exception into your application

- error reporting services like Sentry can properly aggregate events

- interpolation won't occur if you don't have the logging level enabled

- it's recommended by the Python docs: https://docs.python.org/3/howto/logging-cookbook.html#use-of...
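A small sketch of the lazy-interpolation point (names here are illustrative, not from the thread):

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG is disabled
log = logging.getLogger("demo")

str_calls = {"n": 0}

class Expensive:
    """Counts how many times it is actually rendered."""
    def __str__(self):
        str_calls["n"] += 1
        return "expensive value"

# %-style: logging stores msg and args and only interpolates
# if the record passes the level check -- __str__ never runs here.
log.debug("value: %s", Expensive())

# f-string: interpolation happens eagerly, before logging even
# looks at the level -- __str__ runs although nothing is logged.
log.debug(f"value: {Expensive()}")
```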


It's not always a nanosecond, some string representations can take a while to create. In poorly coded Django models they could involve a trip to the database.


    if logger.isEnabledFor(logging.DEBUG):
        logger.debug(f'{expensive_func()}')


I hope that’s a joke, because that is a verbose and ridiculous way of duplicating the work that the logging module does, while also making the code less readable and maintainable!


That’s how you defer an expensive function. The f-string part is the joke.
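One way to defer the expensive call without the explicit isEnabledFor guard is a tiny wrapper whose __str__ does the work; the logging module then only triggers it when the record is actually emitted. A sketch with hypothetical names:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG is disabled
log = logging.getLogger("demo")

calls = []

def expensive_func():
    calls.append(1)  # record that the work actually ran
    return "costly result"

class Lazy:
    """Delay calling fn until logging formats the record."""
    def __init__(self, fn):
        self.fn = fn
    def __str__(self):
        return str(self.fn())

log.debug("%s", Lazy(expensive_func))    # DEBUG dropped: fn never runs
log.warning("%s", Lazy(expensive_func))  # WARNING emitted: fn runs once
```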


Disable it in pylintrc. Pylint is unusable without a good config file anyway.


Ideally the defaults should be sensible. I have found they mostly are, except the f-string one.


For something as finely tuned as pylint, my sensible defaults and yours would never be the same.

I don't want it to scream on a missing docstring for every single thing. I do want to be able to use a, b and x as variables. No, this root var is not a constant, don't ask me for uppercase. Etc.
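For reference, the kinds of suppressions being described might look like this in a .pylintrc; message names vary between pylint versions, so check `pylint --list-msgs` for yours:

```ini
[MESSAGES CONTROL]
disable=
    missing-module-docstring,
    missing-function-docstring,
    logging-fstring-interpolation

[BASIC]
# allow short throwaway names and module-level non-constants
good-names=a,b,x,root
```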


on the contrary, i've seen code slow down by 50% due to a log.debug() that wasn't even emitted in prod. should've seen my face when i saw the pyspy flamechart.


pylint still sucks at partial function definitions. It wants to name anything that exists in the module scope that isn't an explicit function definition with CAPS_LOCK_MODE.

    $ pylint pylint_sucks_at_partials.py
    ************* Module pylint_sucks_at_partials
    pylint_sucks_at_partials.py:7:0: C0103: Constant name "add5" doesn't conform to UPPER_CASE naming style (invalid-name)
The program in question:

    #!/usr/bin/env python
    """ proving pylint still sucks at partials """
    from functools import partial
    from operator import add
    
    
    add5 = partial(add, 5)
    
    
    if __name__ == '__main__':
        import sys
        print(add5(int(sys.argv[1])))
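A narrower fix than a config file is an inline pragma on the offending line (standard pylint comment syntax):

```python
from functools import partial
from operator import add

# Tell pylint this binding is a callable, not a constant,
# suppressing invalid-name for this line only.
add5 = partial(add, 5)  # pylint: disable=invalid-name

print(add5(37))  # → 42
```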


F-strings are great and should have been in the language since the beginning. Many other languages have had their own version of them since their first release. What I don't understand is why Python needs a special string type when other languages can interpolate normal strings (Ruby, Elixir, JavaScript.)


f-strings need a prefix so old strings can keep working the same way.

If `print("{x}")` printed "{x}" in Python 3.5, it shouldn't print something else in a newer version. But `print(f"{x}")` was a syntax error before f-strings, so no code is broken by giving it a meaning.
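Concretely, the prefix makes interpolation strictly opt-in:

```python
x = 42

plain = "{x}"    # ordinary string: braces stay literal, as they always have
interp = f"{x}"  # the f prefix opts in to interpolation

print(plain, interp)
```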

JavaScript can't interpolate ordinary strings either, for the same reason. You need to use backticks (``).


You're correct about JavaScript. I forgot about the backticks.

Thank you for the explanation of the f-strings. I'm pretty sure that migrating old strings to "\{x\}" could be automated but we can't force everybody to migrate their code. There is probably no other way than the one they followed.


"\{x\}" includes the backslashes into the string literally. So even if that worked for f-strings it would change the meaning for older Python versions. Requiring people to update common old code is also just something to avoid unless necessary.

The correct way to escape braces in f-strings is with double braces, so f"{{x}}". This is consistent with the way str.format works. f-string syntax is very similar to str.format syntax in general.
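Illustrating both rules side by side:

```python
x = 99

literal = f"{{x}}"          # doubled braces -> the literal text "{x}"
same = "{{x}}".format()     # str.format uses the identical escape
wrapped = f"{{{x}}}"        # literal braces around an interpolated value

print(literal, same, wrapped)
```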


It's also just safer. You want your default string type to have as few gotchas about what can be put in it as possible.


Shells do it with "" and ''. Unfortunately Py and Js decided to allow both for regular strings, so there is not such an easy way to delineate them.


Because explicit is better than implicit.


...but every addition to make them more powerful and feature-rich is one more step in the direction of blurring the lines between what's code and what isn't, since more and more things that are supposed to be code will be expressed in ways that aren't code at all but fed to an interpreter inside the interpreter. And with every release, the language specification that I'm having to hold in my head when dealing with other people's code grows more and more complex while the cost-benefit calculation around the additional complexity shows diminishing returns.

It kind of goes to the question: When is a language "finished"?


there's a difference between stability and stagnation and IMHO Python gets it.

finished is dead, to put it less mildly.


As someone who has written Python code almost every day for both professional and personal projects for a few years: I’m really happy about these assignment expressions. I wish Python would have more expressions and fewer statements, like functional languages.


Do you have an example of bad code you'd expect people to use assignment expressions and f-strings for?

I don't think I've come across any f-string abuse in the wild so far, and my tentative impression is that there's a few patterns that are improved by assignment expressions and little temptation to use them for evil.

It helps that the iteration protocol is deeply ingrained in the language. A lot of code that could use assignment expressions in principle already has a for loop as the equally compact established idiom.
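A sketch of that contrast, with invented data:

```python
import io

# The established idiom: the iteration protocol already covers
# "loop over a source of values" compactly.
lines = [line.rstrip() for line in io.StringIO("a\nb\n")]

# An assignment expression earns its keep where no iterator exists,
# e.g. reading fixed-size chunks until the source is exhausted.
buf = io.BytesIO(b"abcdef")
chunks = []
while (chunk := buf.read(2)):
    chunks.append(chunk)

print(lines, chunks)
```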


Many languages don't distinguish between statements and expressions—in some languages, this is because everything is an expression! I'm most familiar with these kinds of languages.

I'm not familiar much with Python, beyond a little I wrote in my linear algebra class. How much does the statement/literal distinction matter to readability? What does that do for the language?


The philosophy that most of Python's language design is based on is that for everything you want to do, there should be one and only one obvious way to do it.

The first part of the statement (at least one obvious way to do it) is about gaining a lot of expressive power from having learned only the subset of the language specification corresponding to the most important concepts. You invest only a small amount of time wrapping your head around the most important/basic language concepts and immediately gain the power to take any thought and express it in the language, ending up not just with some way of doing it, but with the right/preferred way of doing it.

The second part of the statement (at most one obvious way to do it) makes it easy to induce the principles behind the language from reading the code. If you take a problem like "iterate through a list of strings, and print each one", and it always always always takes shape in code as "for line in lst: print(line)", then, if it's an important pattern, a language learner will get exposed to it early and often when they start working with the language, and so has a chance to quickly induce what the concept is and memorize it thanks to all the repetition.

Perl shows how not to do it: there are about a dozen ways of doing this, each capable of being expressed in a line or two. Trying to learn Perl by working with a codebase that a dozen people have had their hands on, each preferring a different variation, makes it difficult to learn the language, because you now need to know all 12 variations to read Perl reliably, and you see each one only 1/12th as often, making it harder to memorize.


> "iterate through a list of strings, and print each one"

  print(*lst, sep='\n')
:)


That's exactly what I'm talking about. A REAL python programmer would immediately recognize that to be the bullshit way that someone would do it, if proving what a f*ing master they are were more important to them than clarity.


i do it when i'm in a REPL and want to minimize typing, but probably wouldn't put it in an actual program... so i guess we mostly agree


The only reason I can imagine being opposed to it is fear that hordes of bad programmers will descend on the language and litter the ecosystem with unreadable golfed garbage.

I obviously don't want that. I don't think anybody wants that. But I also don't think that's going to happen as a result of the recent changes in the language. If anything, I feel like the average code quality in the wild has gone up.


Python being the start language for newbies these days, you already have the hordes of bad programmers. Everyone is bad when they start out, after all.


It's natural for some operations to be used only for their side effects, and for those a return value is just noise. What does a while loop evaluate to in your favorite language? Are there any circumstances where you'd want to assign one to a variable? What do you lose by making that a parser error?


> What does a while loop evaluate to in your favorite language?

depends on what you want! for example this Haskell package¹ defines three versions of while:

  -- collect the result of each iteration into a list
  whileM :: Monad m => m Bool -> m a -> m [a]

  -- combine the result of each iteration together into a final result
  whileM' :: (Monad m, MonadPlus f) => m Bool -> m a -> m (f a)
  
  -- drop the results and return `()`
  whileM_ :: Monad m => m Bool -> m a -> m ()
by convention, the ones ending in an underscore (`forM_`, `whileM_`, `sequence_` etc) drop their results, i.e. are only used for their side-effects.

¹ http://hackage.haskell.org/package/monad-loops-0.4.3/docs/Co...


> what's a statement vs what's an expression

never understood the need for this. why do you even need statements?

if there's one thing that annoys me in python it's that it has statements. worst programming language feature ever.



