Why Python Is Not My Favorite Language (2016) (zenhack.net)



So, this boils down to "I don't want people touching my private class members", "I want multi-line lambdas", and "I want types".

WRT private class members, I can understand why this might be frustrating for library writers. But it's just so damned useful to be able to reach into a library and get functionality the library's writer didn't think I'd need, that I'll personally never consider this to be bad.

Multi-line lambdas would certainly be nice, but most of it can be handled with a scope-level function. It adds, on average, one line of boilerplate. As for syntax, no need to use Ruby's syntax, just incorporate parentheses (a well-established continuation construct within Python):

    with(open("foo.txt"), lambda f: (
        print(f)
        print("I am a teapot")
    ))
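For what it's worth, with commas between the calls, the parenthesized body is already legal Python today: the lambda body becomes a tuple of expressions evaluated left to right, purely for their side effects.

    body = lambda f: (
        print(f),
        print("I am a teapot"),
    )
    body("hello")  # prints "hello", then "I am a teapot"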
The Sinatra example doesn't really make the writer's argument about decorators for me, since the Flask example exposes a bit more functionality for only two lines of code - I now have a distinct app object, and can create more, or explicitly access attributes of the app itself. I can even add authentication to that function with just two lines:

    auth = flask.ext.httpauth.HTTPBasicAuth()

    @app.route("/")
    @auth.login_required
    def handle_route():
        return "I am a tea pot"
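For completeness, a minimal sketch of the surrounding setup that example assumes, using the standalone flask_httpauth import path rather than the older flask.ext.* spelling (the verify callback and credentials here are placeholders):

    from flask import Flask
    from flask_httpauth import HTTPBasicAuth

    app = Flask(__name__)
    auth = HTTPBasicAuth()

    @auth.verify_password
    def verify(username, password):
        # stand-in check; a real app would consult a user store
        if username == "admin" and password == "secret":
            return username

    @app.route("/")
    @auth.login_required
    def handle_route():
        return "I am a tea pot"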
No, Python isn't Ruby; it doesn't support the same degree of metaprogramming. That's good, IMO. Metaprogramming is the source of exponentially more technical debt (and bugs) than is reasonable, frankly.

As for the lack of a type system, yeah, that is definitely Python's biggest weakness for large programs. A lot of it is resolved with the type hints in Python3, but a lot of it is just Python itself. Love it or hate it, that's how Python is, was, and will be.

As for the example error - there is a typo. Of course it's going to throw a traceback. The error message is even more explicit than I expected, frankly. It pointed out the missing function name, which would make the typo rather easy to find.


> But it's just so damned useful to be able to reach into a library and get functionality the library's writer didn't think I'd need, that I'll personally never consider this to be bad.

I was ready to accept this as a matter of opinion, but then with this:

> Metaprogramming is the source of exponentially more technical debt (and bugs) than is reasonable, frankly.

Once you've jumped the shark and played with introspection and internals, are you really going to make the point that metaprogramming (or almost anything else, for that matter) should be banned?


> are you really going to make the point that metaprogramming should be banned?

As a fan of the concept of the "Catfish" developer (even though I'm a "corporate drone"), absolutely.

Or, if not banned, kept to an absolute minimum. It's like the preprocessor in C. You can do anything and everything in it... but should you?

You can be exceedingly clever with metaprogramming, but clever code is exceedingly hard to read and maintain. Give me boring, explicit code any day of the week.


> You can be exceedingly clever with metaprogramming, but clever code is exceedingly hard to read and maintain. Give me boring, explicit code any day of the week.

Boring, explicit code which depends directly on the unspecified, unsupported implementation details of the libraries you're using...?


Well, I also commit the faux pas of recommending limiting external dependencies to the standard library in Python whenever possible, resulting in many fewer dependency issues than so many of my colleagues have.

So, while occasionally under-specified, they are quite well supported and stable.

I do miss out on the fun of sorting out the requests dependency chain with every release of my code, but I'm mostly OK with that.


It's weird how people neg on some abstractions for Python but then brag about the absolutely intense degree of abstraction something like Tensorflow brings to the table.

It's almost like the qualification is that the reader can't even imagine writing the abstraction and therefore they can ignore it.


There is a material difference between syntactic abstraction and functional abstraction. It's analogous to the difference between someone learning to program a computer and someone learning to use a computer. The second is far easier to do and requires far less intellectual overhead.

By the same token, if someone implements TensorFlow in a different language, there is little room for complaint assuming the same functional abstraction (and minimization of intellectual overhead) is present in that language's implementation.


There's a huge difference between abstraction you have to write and debug yourself, and abstraction that you can use and debug through documentation and StackOverflow.

Third-party resources are where it's at for taking advantage of things that require skills you don't have. I haven't written enough frameworks and abstract libraries to expertly debug, test, and write my own.


> There's a huge difference between abstraction you have to write and debug yourself, and abstraction that you can use and debug through documentation and StackOverflow.

Indeed. You're more likely to be able to use and debug it if you wrote it yourself rather than relying on the weirdom of the crowd from SO.

> I haven't written enough frameworks and abstract libraries to expertly debug, test, and write my own.

I suspect you'd do just fine if you tried, assuming you don't try and displace django.


I mean, that's the point of tooling. Metaprogramming removes any advantage an IDE can give you to make sense of unknown code.


This is strictly false though. IDE integration can give insight into metaprogramming in many languages.


Python, the language in question, does not easily offer this support, since all of the metaprogramming occurs at runtime. You have to run a debugger while creating the object to see what is actually happening.
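A small sketch of the kind of runtime construction being described (the names here are made up); nothing below exists for an IDE to index until the code actually runs:

    # Build a class at runtime from a list of field names.
    fields = ["name", "price", "quantity"]

    def make_record(cls_name, field_names):
        def __init__(self, **kwargs):
            for field in field_names:
                setattr(self, field, kwargs.get(field))
        return type(cls_name, (object,), {"__init__": __init__})

    Item = make_record("Item", fields)
    print(Item(name="teapot", price=9.99).name)  # "name" is only discoverable at runtime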

Lisps, C Preprocessor (within certain limits), and so forth are a bit different, since you can expand the macros without having to run the code itself.

Does that indicate a strength of Python or a weakness? A complete value judgement there. In either case, metaprogramming in Python (and many other languages, even those with support for metaprogramming resolution in their IDEs) is still simply harder to understand and debug.


I feel no real problem shaming the entire design of Python as a long bet on failed premises. But other languages with similar properties can and do report their metaprogramming results to language servers.


Agreed completely—this is the difference between metaprogramming and "black magic", which is familiar to any ruby programmer.


Yes, that explains why the language is failing.


Even that IDE known as Vim + Ctags will jump to a "defmacro" definition if your cursor is on a macro call and you type Ctrl-] to chase a tag.


This only works for macros you can identify by name. What if you refer to an identifier that doesn't exist anywhere in the code? What about Ruby's method_missing? It's impossible to provide tooling for those cases without compiling or evaluating the code.
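Python's closest analog to method_missing is __getattr__, and it illustrates the same tooling problem: the attribute being called never appears anywhere in the source, so nothing short of running the code can resolve it.

    class Proxy:
        def __getattr__(self, name):
            # Called for any attribute lookup that fails normally.
            def handler(*args, **kwargs):
                print("called %s with %r %r" % (name, args, kwargs))
            return handler

    Proxy().frobnicate(42)  # "frobnicate" is defined nowhere in the code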


Metaprogramming goes way beyond what the C preprocessor does.

Metaprogramming also isn't necessarily "hard to read and maintain". Take a look at macros in Common Lisp. They are almost the same as a normal function. Yet they do metaprogramming.

> Give me boring, explicit code any day of the week.

With macro metaprogramming you can eliminate boilerplate code / unnecessary copy-paste repetition of code, and this does improve maintainability of the code.

At the same time, replacing 10 repetitions of the same big boilerplate code (similar code, written with slight variations each of the 10 times) with 10 very simple (one line each) calls to a macro improves readability: you can easily see how those 10 calls differ from each other.


Macro metaprogramming can (and in practice often does) break ctrl+f search. If I want to find the definition of PREFIX_varname, I'd expect to grep for "PREFIX_varname", but often, the correct solution is to search solely for varname, because PREFIX_varname is defined by a call to MACRO(varname).


You are speaking about C macros, which have almost nothing in common with Lisp's macros.

There is no such problem with Common Lisp macros.


There could be such a problem with Lisp macros, if the programmers are complete idiots.

C programmers gluing together symbols from pieces so that MACRO(PREFIX) expands to some PREFIX_var or whatever are often not complete idiots; they are desperately doing whatever they can to simplify what they are doing in the best way that is supported by the portable language.


Maybe don't use such a primitive editor integration?

I'm not trying to dunk on your editor, but people have had this problem solved since the early 90s. If your programmer's editor doesn't have some affordances for your language, then it's not a programmer's editor at all, it's a text editor.


On the other hand, if my editor has to incorporate a full-blown parser for your program's (meta)language in order for me to find where something's defined, then maybe your program needs refactored more than my editor needs replaced. Or perhaps better documentation on your program's part and/or better education/familiarization on my part is warranted.

Most modern editors are able to horizontally scroll and thus handle long lines in source code. That doesn't make a 500-character-long line of code acceptable by any stretch.


> On the other hand, if my editor has to incorporate a full-blown parser for your program's (meta)language in order for me to find where something's defined, then maybe your program needs refactored more than my editor needs replaced.

Either you want your editor aware of your language's semantics or you don't. I rather do, because it saves a ton of time and basic integrations are usually quite straightforward in 2017.

> Or perhaps better documentation on your program's part and/or better education/familiarization on my part is warranted.

Setting aside my opinions on this, I don't get why this would exclude editor semantics.

> Most modern editors are able to horizontally scroll and thus handle long lines in source code. That doesn't make a 500-character-long line of code acceptable by any stretch.

Okay, but this is a false equivalence. You're telling me not to use metaprogramming tools because they make it hard for a specific editor feature to be used in conjunction with them in the most difficult case for any system (introducing new run-time-dependent lexical bindings into a scoped block and handling that at compile time).

But unlike code visibility, in most cases good solutions exist to work with these systems (within reason).


I'm curious: what editor can do this? Consider that the codebase I work on is large enough that it cannot fit in a project file or ctags or similar. (Which is to say that VS, Clion, and others don't work, nor will vim (for this specific feature))


For which language? In the case of systems like Clojure, if you're introducing new toplevel bindings you're not going to get completion without direct interactive evaluation (naturally). But everything short of that should be well-formed.

In many cases with common macros I'd add decls for Cursive Clojure + Idea to let me autocomplete introduced bindings. For example, I helped my editor understand unique structure, e.g.,

    (defauthedendpoint get-frobnaz [request user] 
        ...re<tab>quest)


C/C++. In the general case, I don't believe it's possible to resolve macros without linking.


I can't parse this in a way that it makes sense; I suspect you're mixing up some terminology. I'm guessing you mean that you need access to all transitively included headers to know a macro's definition. That's certainly true, but it's not linking, which involves resolving symbols in the object files that are the output of the compiler. The C preprocessor logically runs before the compiler; its output forms the compiler's input. You can run it without running the compiler at all.


Ack, you're correct.


Actually, I'm not sure. If we're referring to the C pre-processor here, can't we always just run it through before symbolizing?

I guess that without the right environment ifdef stuff falls out, but I dunno what to tell you there other than making builds contingent on the environment is a sketchy practice.


It may be sketchy, but you may not have a choice. If you have a Qt application (written in C++) which needs to use a platform-specific API to do some magic stuff with a window, there's a method on the window to get the handle that the platform's GUI library offers. Of course that's going to be dependent on the environment because each environment has an entirely different GUI library (e.g. Cocoa only exists on macOS, not on Windows or Linux or Android).


So, say I have this:

    #include "a.h"
    #include "b.h"
    #include "c.h"
    MACRO(varname)
welp.


Since the include chain has to terminate, can't we just run the pre-processor over every file before we index it?

Seems, um, like a pretty reasonable idea to me if you care about C-style macros (I sure don't.)


Lisp macros go far beyond what is possible in Python/Ruby, and oddly enough are less of a problem because of that. In Python and Ruby, the inability to change the underlying syntax makes a lot of developers go to absurd lengths to create new syntax around the existing syntax. And that's what makes Python/Ruby metaprogramming really painful to understand.

That said, my (admittedly few) days spent debugging bad macros were pretty damned bad. Having to inspect the results of the macro expansion and figure out what went wrong is not my idea of a good time. And that doesn't even account for reader macros.


If a macro is correctly expanding, but you have to go into the expansion to find out what is going wrong when the macro is misused, that means the macro lacks robustness; it is not checking arguments, or not sufficiently or in the right way for the cases you're running into. Bad arguments make their way into the expansion and then trigger errors which are conveyed in terms of the expansion, not necessarily making sense to the macro user.

It takes extra time and effort to make macros that are suitable for broader use (people other than the macro writer).


Agreed on all points.

But ultimately, bugs in software happen, and bugs in macros can be a pain in the butt to debug. The extra steps to go in and expand the macro, and then translate the fix in that expansion into a fix for the macro itself gets complicated, fast.


The job of the macro is to convert a form into an expansion. If it's doing the job wrong, I don't see how you can avoid looking at the output. With respect to what is that "extra"?

It's no different from any other "input -> process -> output" computing situation where the output is wrong, the right output is obvious, and you have to work back into the process to make that output come out.

You shouldn't be fixing macro output. If the macro is wrong, the first step is to find the simplest macro invocation which reproduces the problem. Use a dummy argument for anything irrelevant, and use an atom for any argument that allows one. For any argument position which accepts a list of items, see if the problem reproduces with an empty list, or a list of just one. If you can see the problem in (mymac (a b) c d), you have it made: "Hey look, b is being wrapped in an extra list, and d should be quoted."


"Reaching into a library" means depending on something not meant to be public, with the understanding that it is undocumented and subject to change without notice. Giving a reliable but soft warning (through quasi-encapsulation naming conventions and explicit module exports) and assuming developer maturity is the typical Python approach to double-edged features.

Hacking something with private library parts doesn't involve any more "introspection" and "internals" than reading the library's code and finding useful loot to steal; metaprogramming is a completely different and orthogonal way to compromise code maintainability.
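A quick illustration of the soft warning being referred to: Python's naming conventions flag internals without actually locking them away.

    class Cache:
        def __init__(self):
            self._entries = {}   # single underscore: "internal, use at your own risk"
            self.__version = 1   # double underscore: name-mangled to _Cache__version

    c = Cache()
    print(c._entries)           # allowed, just discouraged
    print(c._Cache__version)    # the back door is still there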


> Once you've jumped the shark and played with introspection and internals, are you really going to make the point that metaprogramming should be banned?

"Banned", no. Just that's not simply this big trove of goodies that it's often cracked up to be. And in particular, it isn't strictly speaking necessary or essential for a language to be successful -- or even among the top 5 considerations that make a language successful or not.


"WRT private class members, I can understand why this might be frustrating for library writers. But it's just so damned useful to be able to reach into a library and get functionality the library's writer didn't think I'd need, that I'll personally never consider this to be bad."

I'll second that. I'll share my own experience to give some perspective. Older versions of the JDK had an LDAP library that, for some reason I can't understand (actually, I'm pretty sure it was a mistake because it was changed in a subsequent version), specified a File as the type instead of a Stream for its configuration file. There was no way for me to modify the configuration during run time without restarting the JVM. This wasn't acceptable for the software we were writing because it supported multi-tenancy.

What did I end up doing? I used introspection to grab the "private" configuration table (Vector or Hashmap?) and wrote an API around it to allow modification on the fly.

The lesson here is that "private" things aren't really so private, and Python-style discretionary "private" variables would have required a lot less work.


Is it possible that encapsulation just doesn't work, in general? It really boils down to a library author saying "trust me, this'll work great!" Sometimes it does, and sometimes it doesn't. In that light, Python's convention that a leading underscore means "probably better avoided" seems pretty reasonable. This is especially true for Python, where the type system gives you no guarantees about anything.


At my previous company, we ended up writing Python code that looked an awful lot like Scala or Java:

    class PricePredictionCalculator():
        def __init__(self, price_prediction_calculator_settings: 'PricePredictionCalculatorSettings'):
            self.x = price_prediction_calculator_settings.x
            self.y = price_prediction_calculator_settings.y

        def calculate_something(self) -> int:
            return self.x + self.y

    class PricePredictionCalculatorSettings():
        def __init__(self, x: int, y: int):
            self.x = x
            self.y = y
It was actually kind of nice. It allowed rapid prototyping without tons of boilerplate, and then tacking on the boilerplate at the end. Mind you, I was a data scientist, not a full-time engineer. There was also a lot of room for stylistic differences and varying opinions.


Regarding your variable naming: why `price_prediction_calculator_settings' and not, say `settings'? Same with the type; why not:

    class PricePredictor:
        class Settings:
            def __init__(self, x: int, y: int):
                self.x = x
                self.y = y
        
        def __init__(self, settings: Settings):
            self.x = settings.x
            ...


I'm curious about why you don't like that name?

I'm asking because I'm usually torn between short names that assume the reader can understand them and long overly descriptive names that (in theory) require less implicit understanding.

Is the name price_prediction_calculator_settings too long? If we call it settings I can see an issue come up if we need to pass in another type of settings object calculator_format_settings or something like that.


Long names require lots of keystrokes and more reading effort. Also you have to split your lines earlier. If I have long names, I won't remember what I called things, at which point I'm leaning on autocompletion. Hitting tab every few keystrokes really takes me out of my flow.

The other general issue is why have the redundancy? There's only one settings object here. If you need to pass something different in later, refactor! Remember the principle of You Ain't Gonna Need It.


This is one of those areas where you're actively being hurt by the need to name something or give it a type. Wouldn't passing in a simple dictionary for the settings be simpler? Something called PricePredictionCalculatorSettings probably isn't being reused elsewhere.


The whole idea was to get away from passing dictionaries around.

There's no IDE support for "plain" dictionaries, they resulted in code duplication and broken encapsulation since you can't add any logic to a dictionary, and they put unnecessary cognitive load on the developer by forcing them to keep track of which fields go in which dictionary.

I saw it as a form of design-by-contract and self-documenting code. The app I worked on had a ton of these mappings being passed around between different subsystems and it was becoming nightmarish to deal with. With this style, if you were on Team A and needed to plug your work into Team B's FooBar class, all you had to do was look at the FooBarInputData class to see exactly what the FooBar class needs.


Interesting point. If you were using Clojure I'd suggest checking out Schema and spec. They allow you to define the expected fields of a map/dictionary, without requiring a full type, which is handy for well-defined data objects without attached behavior.


Overly-long names are an indication that the design could use some refactoring. There's a tradeoff between variable name length and the amount of readable logic you can fit into a line.

The name ``settings`` is too generic. There's probably a better way, but to demonstrate that we'd need a real example and not hypothetical.


That battle was lost before I even had a chance to fight it.


At that point why not just write Scala though?


I think "Yeah, that's good, but why not use X?" is Python's biggest problem right now.

At this point, The Zen of Python has been absorbed into many late-gen programming languages; it's easy to see a heavy Pythonic influence on Swift and Go, for example, and some languages, like Nim, even draw the parallel as a promotional thing.

Sadly, the technical underpinnings of the Python runtime itself have not kept up, and it leaves people asking why they shouldn't just enjoy the same advantages they'd get with Python through a newer language with a modern implementation, providing better performance and package/dependency resolution. As much as it pains me to say it, even cutting-edge ECMAScript code can be made to look pretty Pythonic these days.

Python will always hold a special place in my heart, but I'm not sure that a Python implementation is the obvious choice for a dynamic application anymore.


On the plus side of writing it in Python, no one can type "import scalaz" and start writing brittle backwards haskell-ish in your project, causing massive performance regressions they will defend to the death on grounds of leaky abstractions.


Even Scalaz-heavy code runs a lot faster than equivalent Python, so I don't think that argument makes any sense.


It's a bit of a joke, but certainly avoiding the majority of the scalaz community is a big plus for anyone using Python. You'll never have to endure another Tony argument, for starters.


Yeah, that is a big black mark on Scala. Just an unfortunate failure of leadership - I'm not convinced any other language community would handle an individual like that any better, but we in the Scala community did not acquit ourselves well, to say the least. I dare to hope that with the rise of Cats that kind of thing will cease to be tolerated and the language community can become a pleasant place, especially for newcomers.


I mean, Tony tries to edge into the Haskell community and gets rebuffed. He simply can't neg the people there, they're too busy actually producing novel results rather than talking about them.


Yeah, I meant to say any language community at a similar maturity level - he was there in the early days of Scala and wrote a foundational library that continues to be extremely widely used, which creates quite a different dynamic.


Some Haskellers love Tony. I'm one of them.


To be fair to him, Tony is much more well behaved in the Haskell community because he doesn't seem to feel the license to run roughshod over everyone there.

Anyone who says his behavior hasn't been troublesome in #scala at times over the last 5 years isn't paying attention.

I still remember my first run in with him where he demanded I take a "test" to let him "grade" me. In the middle of a larger conversation about a sbt bug.


> WRT private class members, I can understand why this might be frustrating for library writers. But it's just so damned useful to be able to reach into a library and get functionality the library's writer didn't think I'd need, that I'll personally never consider this to be bad.

I can only really think of one instance where I've used this, and it was to work around an outright bug in the library (which was unmaintained, and I was using out of a certain amount of desperation).

FWIW, most of the complaints around private in my post would actually be solved just by having a separate namespace for "private," even if there was a back door. I'd be pretty willing to forgive that.

> Multi-line lambdas would certainly be nice, but most of it can be handled with a scope-level function.

In the absence of the with statement can you really imagine yourself writing this all the time:

    def _with_body(f):
        print(f)
        print("I am a teapot")

    with(open("foo.txt"), _with_body)
?

Strikes me as pretty ugly, despite not actually adding much code.

> As for syntax, no need to use Ruby's syntax, just incorporate parenthesis (a well established continuation construct within Python):

Yeah, that's a bit nicer.

> As for the example error - there is a typo. Of course it's going to throw a traceback. The error message is even more explicit than I expected, frankly. It pointed out the missing function name, which would make the typo rather easy to find.

Yeah, that was kinda my point; I was comparing this to what happens if you try to "take duck typing to heart", where you don't get the error until you try to call request. The earlier you find out about mistakes the better.


> can you really imagine yourself writing this all the time

Honestly, no. But not because of any measures of beauty, but because I just don't do callbacks all that often.

    try:
        f = open("foo.txt")
        print(f)
        print("I am a teapot")

    finally:
        try: f.close()
        except: pass
is what I would be more likely to write (and did write, before context managers were a thing), if the "with" statement didn't exist. It's ugly as sin, but I can put it in front of anybody and they will understand what's going on. I can also tell at a glance that it is correct.
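For contrast, the context-manager form that replaced that pattern:

    with open("foo.txt") as f:
        print(f)
        print("I am a teapot")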


Re types - actually, mypy is getting very good; I highly recommend checking it out. It smells a bit like ML/Rust except there's no pattern matching.
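A tiny example of the kind of thing it catches (hypothetical snippet); mypy flags the bad call before the code ever runs:

    from typing import List

    def total(prices: List[float]) -> float:
        return sum(prices)

    total("3.50")  # mypy reports an incompatible argument type here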


There is no excuse for missing multi-line lambdas other than negging on programmers.


Really, is it that hard to come up with a name for your function?


I don't even understand why your question is relevant.


If you want a multi line lambda:

    def print_each_twice(xs):
        map(lambda x: print(x) # whoops, impossible
You can define a local function:

    def print_each_twice(xs):
        def print_twice(x):
            print(x)
            print(x)
        map(print_twice, xs)  # note: in Python 3, map() is lazy; use a for loop or list() to force evaluation
The former is more intuitive to write. The latter reduces nesting, makes you name the function, and improves readability IMO. It's currently easy to scan blocks of Python and see each indentation level starting with one of a few keywords (class, def, for, with). Adding multi line lambdas would make the ultra simple structure harder to quickly parse.


> If you want a multi line lambda:

I don't want this though. This doesn't do what multi-line lambdas do. It can never do what lambdas do. It's not a lambda function if it's named and fully pre-computed.

> Adding multi line lambdas would make the ultra simple structure harder to quickly parse.

This is just about the least compelling argument I've ever heard. "The parser writers are not going to have fun."


Maybe I misunderstand what you mean with "fully pre-computed", but local functions in Python close over local scope at runtime:

    >>> def splatter(t):
    ...     def f(): print(t)
    ...     return f
    ... 
    >>> beep=splatter("beep")
    >>> boop=splatter("boop")
    >>> beep()
    beep
    >>> boop()
    boop
That's the problem with Python, everything is dynamic.


I meant structural precomputation. But yeah, I forget you can discard names. You're right: through sufficient use of discarded intermediate bindings you can replicate multi-line lambdas.

Still, the existence of this sorta demands we ask, "So why can't we use it then?"


I'm not steeped in functional programming, but isn't anonymity the least interesting part compared to being first class and forming closures?


I don't really see it as an FP thing at all, because the same argument can be made with respect to variable names. The issue is that without serious anonymous functions, you're forced to come up with more names, increasing cognitive load. An anonymous fn clearly communicates "I'm not going to be used elsewhere".


The small increase in cognitive load when writing the code pays off when reading it. My editor can tell me where a function is used. It can't tell me what a function does.


And with anonymous functions, your traceback often hasn't a clue where the function was defined.


That's untrue. That's usually the least interesting part of the stack trace, but the easiest to compute.

But your complaint isn't without merit here; a profusion of unique nameless verbs can get overbearing sometimes. It's often better than the alternatives (e.g., dependency injection vs lambdas in an environment), but Haskell, OCaml and Lisps all seek to have a small but very robust set of combinators and base primitives to help make this kind of code avoid a complexity explosion similar to OO ontology explosions.


So call it something generic related to usage: keyfunc, repeat, parse, callback, etc.


Why increase the complexity and density of the code while complaining that the alternative would be too complex though?

If I can simulate anonymous functions with locally scoped functions that then behave as lambdas, then what hill is Python dying on here? The "we don't want to submit a patch to make our parser do this" hill?

It's definitely possible to do. So...


I meant to parse mentally. Control flow in Python is very easy to mentally parse at a glance. Powerful lambdas get rid of that.

It's an extremely contrived example, but from the code I posted, it should just be:

    for x in xs:
       print(x)
       print(x)
Much easier to read because there's only one way to write it. Hence GVR shunning map/reduce/filter.


GVR is wrong. It's okay to be wrong. But the last 5 years of software development and the remarkable growth of Javascript as a platform accessible to everyone make a compelling case that GVR was wrong.


No it doesn't. It makes the case that the browser is the most pervasive and flexible application runtime, which we already knew. Server-side JavaScript was never a thing until that was solidified, thus providing a large developer base.


Why is this a meaningful distinction? Who cares if it's server or client side in this discussion? The Python community has said passing anonymous closures is too complex, the majority of the industry disagrees and has pushed forward.

Lambdas are everywhere BUT Python now.

JavaScript is even pushing forward with more sophisticated constructs to help developers build custom monadic abstractions, and is shipping a super useful monadic construct in syntax. C# has been doing that for years. Java will soon as well.

The rest of the industry has moved on from this conceit and you can't really argue that it's made environments like Javascript overwhelming to newcomers.

If Python 3 could just get over this conceit, its maintainers could start solving major Python problems really quickly. E.g., Python's janky iteration primitives could be replaced in an afternoon or three with equivalent but cleaner and clearer options.


> Why is this a meaningful distinction? Who cares if it's server or client side in this discussion?

It doesn't really matter, except that the browser's dominance as client-side platform is the cause of JavaScript's current ascendancy, rather than programming language design decisions of any sort. This is a refutation of the claim that you made saying "the last 5 years of software development and the remarkable growth of Javascript" make a compelling case that van Rossum is wrong about multiline anonymous functions in Python. Classic post hoc ergo propter hoc fallacy.

If you're going to stick by the argument that somehow the support of anonymous functions is responsible for JavaScript's recent successes with respect to Python in terms of adoption and pace of development, then you'll have to explain why Lisp didn't take the world by storm when they were introduced nearly sixty years ago.

Or, of course, admit that whether or not a particular programming language conforms to your personal preferences has relatively little bearing on the success of that programming language in the world at large.


Huh? What do you mean? That multi-line lambda is not standard Python is it?


No, it's not included. I meant: "It would be nice; here's my suggestion for syntax"


Decorators should be higher on this list. I work with Python daily and I absolutely loathe people's custom decorators that obfuscate functionality away, like STL in C++, except no type system of course.

Also, preach on about duck typing; it's a Python myth that is absurd. "If it quacks like a duck" in practice means I had to figure out what it was.

This language is slowly destroying my soul, one painful unit test at a time.


I love decorators so much (when used well).

For example, I used a decorator the other day that I called `@logperf` that I can pin to any function, which will `logger.info` the approximate time it takes to run the whole function. It doesn't mutate what the function does, it just adds a side effect.
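A minimal sketch of what such a decorator might look like (logperf is the name from the comment above; the body is my guess at it):

    import functools
    import logging
    import time

    logger = logging.getLogger(__name__)

    def logperf(func):
        @functools.wraps(func)  # keep the wrapped function's name and docstring
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                logger.info("%s took %.3fs", func.__name__, time.perf_counter() - start)
        return wrapper

    @logperf
    def crunch():
        time.sleep(0.1)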


It doesn't take away from the argument that neither decorators nor with would be necessary if multi-line lambdas were a language feature.


The point isn't about the functionality of a lambda. The point is that the syntax for a decorator denotes semantic meaning that you don't get when composing a lambda on your own.


Decorators are just higher-order functions. Of all my frustrations with Python, decorators are not one of them.


Exactly, decorators are just syntactic sugar for foo = bar(foo). Nothing bad about it.


Not true. Decorators hijack your function surreptitiously. They turn y = f(x) into y = g(f, x) in a sneaky way that doesn't require you to adjust the call site. In this way it's more than "just higher order functions".


I could turn:

    def make_money():
        blockchain_pyramid_scheme()
into:

    @with_jazz_hands
    def make_money():
        blockchain_pyramid_scheme()
or equivalently:

    def make_money(*args, **kwargs):
        with_jazz_hands(make_money_impl, *args, **kwargs)
    
    def make_money_impl():
        blockchain_pyramid_scheme()
Both ways wrap with_jazz_hands around make_money without altering the call sites of make_money.


and then suddenly, your signature goes from `f()` to `f(*args, **kwargs)`, and you lose all of make_money's documentation too



You've called it sneaky and surreptitious, but I suspect in many cases it can be very useful to be able to wrap a function without performing shotgun surgery on everywhere it's called.


Of course you're right. These things are trade-offs. If it had no upside we wouldn't be talking about it because no one would use it.


No, you're strictly wrong. Decorators are a less powerful form of higher-order functions. They "hijack" the definition of your function.

Please, please learn about what higher order functions are in a more general setting before invoking them.


Uh, I know what higher order functions are. You misinterpreted my comment, I wasn't saying decorators are more powerful than higher order functions. My point was that decorators are implicit in a way standard, explicit use of higher order functions is not.

This "please, please learn" bullshit comes off as so smug and patronizing, did you realize? And when you're misunderstanding something it's even worse.


So if I do "f = g(f); y = f(x)" then it's suddenly not a decorator?


Doing it that way at least means your code looks exactly as bad as it is.


Heaven forbid we compose functions. We're only allowed to use composition on classes, where it has multiple definitions and caveats! Much less confusing.


Function composition is great. In-place mutation of a function definition is not great.


It's not mutation if it occurs before anyone can ever reference it. Then it's called construction.

Even languages with labeled effects allow for this sort of construction effect. If they can properly sequence and isolate it, they allow mutation as well!


a = f(b) is fine - it's clear to the reader that a might or might not end up having anything to do with b. @f a = b is not at all clear as a syntax for the same thing - a reader would very much expect a to equal b after execution of a line like that.


I'm a bit confused. Is @f a = b valid Python syntax? What's it implying? Why would it be a mutation? Is it because of a call-by-name convention you're implying, or because function metaprogramming in Python has write as well as read for properties?

If you dislike call-by-name, why is Python's practice of insisting all non-trivial functions be called by name not distasteful to you?


> Is @f a = b valid Python syntax?

No, I was trying to isolate the important part. "@f def a(): return b" would be the Python syntax, and I think it's misleading as sugar for, hmm, you'd have to write it as "a = f(lambda: b)" (hmm, I'd forgotten how different defining a function "natively" or as a lambda value are in Python).

You seem to be using a different definition of call-by-name to the one I'm used to. What bothers me is that a decorator can make a function behave completely differently (e.g. its body might never even be executed) but the syntax doesn't look like it can make that big a change to the function.


I think that's a fair concern, although the semantics of the decorator probably make that what you want in some points.

I guess in Python land, blueprint's entire purpose is to selectively fire your handlers.


Decorator are litterally what I said.

Litterally.

You can take any code with decorators and replace them with this syntax. If you have arguments you must call the factory first, that's the only exception.


(« Literally » has only one « t ».)


Thanks.


No, they turn y = f(x) into y = g(f)(x), or rather they turn f into g(f), considering side effects.


>This language is slowly destroying my soul, one painful unit test at a time.

So that makes two of us! I'm trying out Rust for a side project now and so far it's fun. Lots of compile-time checks, non-crippled lambdas, and you can use itertools too!


Decorators are good when used properly and functionally, as in: the decorator returns the composition of two functions. In one Django project, one of the old coders created a decorator that adds this huge context object, automatically folding little tidbits of info from cookies and GET query params into an attribute stuck onto the request parameter, then proceeds to use that context object everywhere. We get mysterious view functions with lines that look like this: request.context.start(), and they will utterly fail if you remove the decorator.

Literally, please do not do this shit. Your function should be modular and usable with or without being decorated.


That honestly sounds like a very poorly written decorator to me. I work with Django professionally and have been for years now. I'm not going to say I know the framework like the back of my hand, but I'm pretty damn close. Django already supplies the request object to every single view regardless of whether or not the view is class or function based. Why you would need a decorator to replicate existing functionality is beyond me. It's possible that the decorator was created for an older version of Django that was missing some important part of the request object, but I can't for the life of me figure out what that would be.


> This language is slowly destroying my soul, one painful unit test at a time.

At the end of the day, Python is just an abstraction. It isn't destroying your soul, or doing anything else to it.

OTOH, your conscious decision to stick it out in a job that makes you work with a language that you find "absurd", and where people do things you "absolutely loathe" but have to put up with, anyway? That's what's destroying your soul, man.


Why do you think the STL obfuscates functionality?


Classes, objects and functions themselves all obfuscate (or rather encapsulate) functionality away in similar ways. What is so particularly bad about decorators?


I'm with you except "like STL in C++". What does that mean? STL is not custom, it's by definition standard.

Maybe you meant to say TMP?


I have seen some ridiculous templates in C++. Isn't STL Turing complete?


You're definitely talking about TMP, template metaprogramming. The STL is a set of useful libraries that do specific things. Like represent strings or vectors. Or sort, or generate random numbers. If anything the STL is too small.

TMP, on the other hand, is an entirely separate language within C++ that allows you to create monstrous, arcane code that's incredibly hard to debug or understand. Some of the STL uses TMP. But it sounds like custom TMP is the thing you don't like.


C++ template metaprogramming is Turing-complete, except that in practice your loops are going to be cut short by the maximum recursion limit of the compiler (900 in g++, see -ftemplate-depth option).


I like decorators.

I don't quite understand the author's complaint about this specific case of decorators. An initial read makes me wonder why he doesn't use dynamic routes, which is where the decorator / function-name pattern really starts to shine.
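For reference, a dynamic route in Flask looks roughly like this (reusing the app object from the example further up the thread); the decorator binds the URL pattern and converts the path segment for you:

    @app.route("/teapots/<int:teapot_id>")
    def show_teapot(teapot_id):
        return "I am teapot #%d" % teapot_id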


> Documentation. There’s a common mis-conception that type systems make code verbose. Firstly, type inference is pretty doable and addresses the problems that give rise to these beliefs. Java and C++ aren’t verbose because they have type systems, they’re verbose (partially) because they have bad type systems. Second, in my experience, actually clearly documenting an interface in a language that doesn’t have types ends up being much more verbose than it would otherwise be. You need docstrings regardless, but if the type of the thing says it’s an integer, you don’t have to write “x is an integer” in the docstring too. And now you have a concise, agreed upon notation for that sort of fact.

Preach it. Also, you don't have to worry that a method whose signature is `public int DoWork(string input)` actually expects char[] as its input and returns a long, the way you do when the types are only documented through comments.


Except Python is not only used for what you do. People use it for batch testing, as a SIG embedded language, scientific analysis, sysadmin, 3D tool scripting, glue code, product pipelines, etc.

Having a required type system for this huge part of the community would be a great let down.

Those criticisms come from people coming from a strong dev background. They completely ignore the rest of the world.

That's why having the current OPTIONAL type system is so good. Because you can use it and benefit from it, and many do, but you don't screw half of your community.

Python's strength is versatility. It's not the best at anything. But it's very good at most things.

That's why if you know it, you can solve most problems decently. That's what makes it fantastic.

I'm still waiting for people doing data analysis in Go, sysadmin in JS, 3D scripting in Haskell, batch testing in Erlang or geography in Lisp.

Because Python... well, it can. And AI. And Web. And embedded systems. And IoT. And... And...


OK, sure, but people are pitching Python not just as a language to write scripts but also one to do development of large projects on large teams, and I think this property makes it somewhat ill suited for that.


I've written plenty of big projects in Python. It works very well; the problems and solutions you have are just different than in other languages you're used to.

The question is more about "what kind of problems do you want to have, and what price are you willing to pay to solve them?". After that, choose the tech that matches this wish.

For me the iteration speed, the ease of adding new members and the flexibility are key.

But what I discovered is that most people just code bad Python and say they have problems, just like most people write bad C/Java/whatever and blame the language.

You are supposed to write Python in a certain way, like any language, to scale.


It is of course possible to build a great large product in Python, just like it is possible to do so in whatever your least favorite language is. But I'd argue there is more discipline needed to do so than there is in other languages with more type checking, where there are whole classes of errors you simply wouldn't have to worry about.


I have not really found this to be the case. I work daily on a codebase of a few hundred thousand lines of Python code with a hundred dependencies and type errors are usually solved quickly.

For small, local needs to pass around data, a plain dict or tuple usually suffices. If you need stricter contracts, you define good classes and interfaces.

There's a need for discipline in your coding style, but I've frankly not found this to be an issue with moderately competent coders who understand this. Yes, you can shoot yourself in the foot with this, but if anything those bugs are usually obvious and solved in the first pass of testing.


> I work daily on a codebase of a few hundred thousand lines of Python code with a hundred dependencies and type errors are usually solved quickly.

Yeah, but they're often solved after the code goes into production and doesn't work, rather than before the code is ever committed.


"Language X allows you to be more undisciplined" doesn't sound like a great selling point for those languages.


Would you perhaps mind giving your opinion of Ruby?


I had the choice between Ruby and Python years ago. I chose Python because of the community, not the language. They are more or less equivalent, although I prefer forced indentation.

Nowadays, Ruby is dying in its RoR and Japan niches, so the question is moot.


> batch testing, as a SIG embedded language, scientific analysis, sysadmin, 3D tool scripting, glue code, product pipelines, etc.

> Having a required type system for this huge part of the community would be a great let down.

... Why? These days, in modern FP languages, types generate much more code for you than they cost to write. E.g., https://github.com/haskell-servant/example-servant-minimal/b...

> That's why if you know it, you can solve most problems decently. That's what makes it fantastic.

Most of the things you're talking about are available in other languages. And quite frankly, if pure versatility and available open sourced code is your argument? Then why aren't you using Javascript and nodejs?

Name a general domain in which I can't find at least one well-maintained project on NPM. I dare you.

> And embedded systems.

Who... who is doing IoT and Esys in Python outside of toymakers?


You may be able to find a well-maintained data science/machine learning project on npm, but I doubt you'll find as many as are needed in a typical data science workflow.


Maybe, not sure how that is relevant. You definitely CAN do it, it's just not as common. Same could be said of F#.


> Name a general domain in which I can't find at least one well-maintained project on NPM. I dare you.

one is named

> Maybe, not sure how that is relevant.

Man, it was your dare!


You didn't do what I said at all. Everything you need is there, it's just not as popular.

Please tell me what pieces you need and I'll do my best to make good on my claim. At a minimum, tensorflow, nltk and spark bindings exist. And in fact, the popular notebook software packages are reaching out to other runtimes already.


That's far from where the bulk of time is spent for most data science workflows. You need a pandas/dplyr and you need a ggplot2/[long list of Python plotting libraries here]. You mentioned F#, which has Deedle for data frames. What's JavaScript/NPM's story on this?



> who is doing IoT and Esys in Python outside of toymakers?

Aren't most IoT things basically toys anyway?


Zing.


NPM has way more garbage and moving targets than similar Python packages.


This is hardly a rebuttal.


Being able to pick a time-tested, reliable package that you can expect will be maintained for the next few years forward has incredible value. The NPM ecosystem does not provide it.


I can point to a full suite of data science projects in npm with varying lifespans from 1-3 years. Given the null assumption (it will be maintained about as long as it has been maintained so far), this seems to meet your requirement?

That NPM has churn is an example of how massive that ecosystem is compared to PyPI, which is comparatively microscopic and is much more dependent on specific corporate actors to continue investing.


What data science projects in NPM are people actually using? There are literally thousands of people using NumPy/SciPy + Matplotlib. NumPy, BTW, is over 10 years old and counting.

JavaScript is a terrible language for data analysis, given its inferior numeric data types.


> JavaScript is a terrible language for data analysis, given its inferior numeric data types.

I'm still amazed that y'all put up with Python's trashy numeric tower, coming from other contexts.


Automatic bignum promotion is exactly the intuitive behavior one wants for numeric types.
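For example, in Python ints silently widen, and mixed int/float arithmetic does the coercion people expect:

    >>> 2 ** 100          # ints silently promote to arbitrary precision
    1267650600228229401496703205376
    >>> 2 + 2.5           # mixing int and float promotes to float
    4.5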


Micropython on the ESP32 is pretty awesome. It may not be getting serious use yet but I'd say it's on the verge of something big.


How would a type system hinder those people?


Most of those people are not devs. They don't write libraries, don't have time to learn many docs or type many lines.

They have 30 scripts of 200 lines each and couldn't care less about code quality. They just want the result.


I am unable to find an answer to my question in what you just said.


Path of least resistance. A strong type system is much more extra work than you think.


That might be true at the start of the project. But to be honest, having worked quite a few years with strong type systems, weak type systems and dynamic type systems, I've found that not having a strong type system is precisely what makes work grow faster than expected. Maybe it's because I'm accustomed to big projects, but I've been bitten too many times. I'd even choose Java over any language that doesn't have a dependable type system.


"having worked quite a few years with strong type systems, weak type systems and dynamic type systems"

Sorry to be pedantic, but you probably mean static type systems, not strong type systems.

See: "What to know before debating type systems" - http://blogs.perl.org/users/ovid/2010/08/what-to-know-before...


I have worked with strong and no type systems, and have found that the absence of a type system is extra work.

Instead of the compiler telling me exactly what something is, what I can do with it, and if it makes sense to pass it to some function, I have to figure all of that out myself, greatly slowing me down.

Edit: It occurs to me I have yet to hear an answer to my question of how types hinder anything, especially if they are inferred. The only exception is the trivial requirement of sometimes having to cast between numeric types.


I really like type systems. I think that if you take the time to learn about type theory, then you are more likely to create better solutions (both with code and in life in general).

However, it isn't free. Type theory is a kind of math that most people have very little exposure to, so there's going to be a lot of work in order to start becoming proficient.

Additionally, there's more than one kind of type theory. Are you using a System F-like system? Are you going to have subtyping? Is it going to be structural subtyping? Maybe you want row polymorphism. Is there going to be a kind system? What about higher-order poly kinds? Dependent typing? Intensional or extensional?

Additionally, there's more than one implementation of these type systems. OCaml functors... are they generative or applicative? Haskell... are you using GADTs, functional dependencies, or type families?

In the end I think that type systems will eventually be able to get you a completely typed experience that feels exactly like a completely dynamic experience, but with compile and design time checks that always make you feel good about the experience. However, I don't think we are quite there yet and I don't think you can expect everyone to be able to take the time to get sufficiently familiar with an arbitrary type system in order to be productive with it.


I would be pleasantly surprised to find an extensional type system in a mainstream language :)


Yeah, you basically make initial implementation easier at the expense of maintenance and debugging, which sounds like a good tradeoff until you think about the relative amount of time you spend doing one thing vs the other.


> at the expense of maintenance and debugging,

Do you realize how insane that sounds? Static typing makes programs harder to debug? Harder to maintain??

On the contrary, static typing helps debugging and maintenance: changes that break invariants are more likely to be caught by the type system.

This speak of tradeoff sounds wise on the surface, but this is hardly a tradeoff at all. For many people (including me), a good static type system makes prototyping and maintenance easier.


It probably sounds insane because you have interpreted my post to mean the exact opposite of what I intended.


Your comment sounded like this to me:

> Yeah, you [DarkKomunalec] basically [use static typing to] make initial implementation easier at the expense of maintenance and debugging

It was not clear that "Yeah" was an approval (and not a dismissal), and it was not obvious that "you" was a general "you" (and not a personal "you" directed at DarkKomunalec).

Nevertheless, you were still talking about a tradeoff, and I personally see none: in my experience, dynamic typing makes initial implementations harder (or longer), because so many errors are caught too late. Static type systems have a much tighter feedback loop.


Many, many people have a very different amount of time spent doing each than what you imagine. There's a lot of scripting code that is written once and then never maintained.


If your claim is "Python is great for toy scripts" then fine, I don't have any issue with that.


But you are a dev. These people are not. Code is not their product.


A programmer sees 2 and 2.5 as different types.

A normal person doesn't care. He just wants, and expects, 2+2.5 to yield 4.5. He doesn't want to use a cast, or write the 2 as 2.0, or use some sort of baroque type conversion procedure, or anything like that.

This answer is not Python-specific, of course, but it's a good example of the overhead that gets introduced when a language becomes too type-happy.


> A programmer sees 2 and 2.5 as different types.

So does anyone who's done math. They also know 3/5ths is different. It's not unreasonable to ask for addition to be defined in a reasonable way though.

> This answer is not Python-specific, of course, but it's a good example of the overhead that gets introduced when a language becomes too type-happy.

Besides OCaml, who actually does this for general programming? I can't think of many examples at all.

P.S., "A normal person doesn't care. He just wants". Stop this. The community here might give you a pass for being tedious and correct. Being tedious and incorrect is pretty much unforgivable.


"So does anyone who's done math."

Hmmm... I could have had an undergrad math degree in addition to the CS degree if I'd stuck around one more semester, but decided to head off for CS grad school instead. And yeah, I understand cardinality, and why there are more real numbers than integers (and could even write the proof for you from memory).

I also completely understand that 2.5 in a computer isn't actually represented as a non-integral real number, or anything like it. The computers we have now can't represent arbitrary real numbers (quantum computers can, I think, but I haven't studied those in any great degree). At one time I even wrote some non-trivial asm programs that ran on the 80x86 FPU, but I'd have to do a fair amount of review before doing that again.

So yeah, I'd say I've both "done some math" and have a good handle on how integers and floats are represented in a computer.

That still doesn't mean I want to have to put in a freakin' type cast when I add 2 and 2.5 on a pocket calculator. Nor does anyone else.


Answer my question. I'm not going to defend a non-existent problem.

Or is this about Pascal again? Did ocaml bite you and you still have a mark? I'm trying to give you an opportunity to suggest this isn't a straw man. My most charitable hypothesis is that you really don't know much about modern static and strong typing techniques.

Everyone's numeric tower accounts for this and does sensible (if not optimal) type conversions. The best of the bunch give you fine grained control on what happens when. That something must happen is inescapable.
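Python's own numeric tower is an example: plain `2 + 2.5` just works, and the stdlib decimal/fractions modules give you the finer-grained control when you want it. A quick interpreter sketch:

    >>> 2 + 2.5                             # int silently promoted to float, no cast needed
    4.5
    >>> from decimal import Decimal
    >>> Decimal("0.10") + Decimal("0.20")   # exact decimal arithmetic when precision matters
    Decimal('0.30')
    >>> from fractions import Fraction
    >>> Fraction(3, 5) + 2                  # rationals participate in the tower too
    Fraction(13, 5)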


I'll bet he starts to care if floating point arithmetic introduces errors into his results. You can only push off the complexity for so long if you want to do things that aren't trivial.


Can you think of a (non-contrived) example where automatic promotion to float is going to cause a non-trivial error when computing (say) a household budget?

"You can only push off the complexity for so long if you want to do things that aren't trivial."

There are a lot of things that aren't "trivial" that nonetheless don't require a totalitarian type system.


> Can you think of a (non-contrived) example where automatic promotion to float is going to cause a non-trivial error when computing (say) a household budget?

Having your share of holiday costs come out as NaN is fiddlier than getting an exception at the point where you actually divided by zero.


Both of those are runtime errors, though.

The "type safe" guys like to pretend that their approach can catch all that stuff at compile time. It does catch a certain class of error, but at the cost of making the code take much longer to write. That doesn't work in a world where your competitor is iterating five times while you're still building the first one. Excellent way to get your milkshake drunk, that.


> Both of those are runtime errors, though.

Sure, the point is that using integer arithmetic for integer calculations gets you better error reporting that saves you time when tracking down other bugs.

> The "type safe" guys like to pretend that their approach can catch all that stuff at compile time. It does catch a certain class of error, but at the cost of making the code take much longer to write. That doesn't work in a world where your competitor is iterating five times while you're still building the first one.

My experience is that I can iterate a lot faster if the compiler's able to help me catch errors faster. It doesn't slow down writing the code; almost anything I'd write in say Python translates directly into the same Scala. (I guess the language pushes me into defining my data structures a little more explicitly, but that's something I'd want to do in Python anyway if only for documentation reasons).


Define "much longer to write." I don't think that claim is true.


Isn't the biggest non-programmer audience who has any interest in writing Python scripts scientists? I don't think it's totally contrived to imagine that floating-point precision is an issue in such cases.


Fun fact: half of the professional programmers I've met in my life don't even try to do anything about floating point precision issues.

You are living in your bubble. The bubble of people who know what they are doing.

Get out, and you'll be surprised how amateurish the world is.

Yet it runs.


I'm really not a fan of this argument. No one is arguing for a banishment of the concept of a competency PMF. We're just saying, "If you use these newer techniques and tools and patterns, you get more bang for your buck."

The common response is, "But then I have to learn something new." But this is the curse of technology, and pretty much inevitable. Some learning needs to occur because tech changes so fast.


"If you use these newer techniques and tools and patterns, you get more bang for your buck."

But you don't, necessarily. Dealing with type wankery takes time. And no, it has nothing to do with "learning something new". Languages that tout "type safety" have been around since at least Pascal (47 years old)... arguably even before that, with some of the Algol variants.

Yet they've never made it to mainstream acceptance. It's not even about hype -- Pascal was pushed HARD, by just about every university. Where is it now? Turbo Pascal had a small degree of success, but that's only because it chucked a lot of real Pascal's rigidity out the window.


So... Can you clarify this? "Pascal is no longer popular. Pascal had a static type system. Therefore, static typing has failed?"

If so, I counter: the last 3 years have been a series of breakthroughs both in terms of technology and social acceptance of typed programming. TypeScript is the fastest-growing language around, Haskell's never been more boring to use, Scala's edging out Clojure even though it has very fragmented community leadership. C++ has adopted a lot of powerful new features and you're seeing functional programmers speaking at C++ conferences because the folks there can use it. Java has more capable and composable lambdas than Python.

Systems not using these techniques are plagued by security flaws, while those that do use them are working on stabilizing performance under various workloads.

It's never been a better time to be using a strong, static type system.


"Can you clarify this? "Pascal is no longer popular. Pascal had a static type system. Therefore, static typing has failed?""

I would make it "Pascal was never popular", but yes.

"If so, I counter: the last 3 years have been a series of breakthroughs both in terms of technology and social acceptance of typed programming."

This isn't the first rodeo for many of us. "Compile-time static type checking will solve all of our problems" is an idea that's come around repeatedly. Outside of a few niche applications, it never works, or even catches on.

As for the supposed booming popularity of TypeScript... dude, TypeScript doesn't even make the top 30 on GitHub. It's less popular than assembly language and Visual Basic.


> I would make it "Pascal was never popular", but yes.

Then why... why bring it up? Should I discount all of dynamic typing because Io and Pike never took off? C++ did stick around, Oak became Java. APL is still in active use.

> This isn't the first rodeo for many of us. "Compile-time static type checking will solve all of our problems" is an idea that's come around repeatedly. Outside of a few niche applications, it never works, or even catches on.

"It never works" is a pretty bold claim given that the majority of code you interact with on a daily basis has SOME sort of type system. I'd say C++ is better endowed than most in this dimension.

> As for the supposed booming popularity of TypeScript... dude, TypeScript doesn't even make the top 30 on GitHub. It's less popular than assembly language and Visual Basic.

My dude, it would be extremely suspicious if it did. Instead, look at the growth stats: https://octoverse.github.com/. It's the fastest growing language that has a non-trivial presence on GitHub (and of course, that's the correct way to phrase it: a brand-new language can appear to have quadruple-digit percentage growth).

This seems profoundly disingenuous. Is that your intent?


Have you ever heard of an obscure language called Java?


Yeah, I have. Java is notorious for tossing 30 page runtime exceptions all over the place. Given that the alleged advantage of getting type happy is to prevent runtime errors, can you explain how Java actually supports your case?


Are you mad at the existence of stack traces? Would you prefer it if the errors were unsourced? Are we pretending Python does it better or even that differently?

As for "the case", Java does reduce the # of NPEs you get by not letting you deref invalid keys on objects, and it makes it easier to handle objects in the abstract.


Well, for one, the post I was responding to claimed that no language touting type safety had ever caught on, and yet there is Java, likely the most used programming language on Earth next to C and C++ (which themselves offer some level of type safety).

But moving on to your claim in this post, nobody ever said "compile-time checks eliminate errors altogether." What they do do is reduce errors and completely eliminate certain classes of errors. They also make maintenance and debugging much easier because they define clear expectations for the interfaces of their arguments and return values. The length of stack traces is a completely orthogonal concern.


Yes, everything looks like it works, but occasionally it's completely wrong. I don't think just chugging along is desirable in all circumstances. But even if I accept your premise, that just says to me that Python is a good choice for people who don't really know how to program and don't care to learn too much.


Floating point precision isn't even a problem for most scientists, at least not the ones dealing with real-world data.

It's pretty rare to have measurements that are accurate to more than a few decimal places.


Yes, we had non-trivial floating point errors in Level that appeared after multiple divide operations in an early version of the product. We stopped using it.


I think that would depend on the type system. Dart has an optional type system (no types, some types, strong mode) with different levels of safety. Interestingly though, even if you write strong mode code, the types are completely thrown out before execution. It's a bit of an aside, but I don't know (libs aside) why anyone would choose python over dart.

Types are great for large projects, but tend to add verbosity and development time for small scripts (thus why there are so few strongly typed scripting languages). SML/ocaml show that there is a nice middle ground where most types are inferred, so you can keep your types without too much work. Unfortunately, they've been around for decades with little usage in the profession.


Hey, Lisp is pretty good for geography and was the language of choice for writing quick extensions in Autocad back in the day. Lisp lists are as natural for representing 2D and 3D points as Python lists and dicts.

My first programming job was writing map reports (choropleth maps) using AutoLisp.


The geographers I know would never, ever be able to do in Lisp what they do in Python. They are not computer-minded at all.


This meme about "computer-mindedness" is pseudo-scientific nonsense. It's something I fell hard for as well, because it's a lovely idea. But it's simply not true.


Lisp is not for the enlightened only! Anyone that ever used a scientific calculator can hack around.

In fact Lisp can be much simpler than Python; it's just like using an RPN calculator if you don't want to venture into macros and other advanced stuff.


I am thinking of a function in the Windows 3.0 API that was declared in the header file as returning BOOL, but actually could return TRUE, FALSE, or -1 (which, of course, is true according to C's rules).

Worse, I didn't have the docs -- just the header file. That one cost me a fair amount of head scratching before I figured it out.


Python 3.5+ has type hints now (and a standalone typechecker). It doesn't do runtime checking by default, but you can use a 3rd party library to enforce these things if you are concerned about type safety (at the boundaries of your programs for checking inputs for example).

More info on this in the pep: https://www.python.org/dev/peps/pep-0484/
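Roughly what that looks like in practice (a minimal sketch; the function is made up, and mypy is the standalone checker being referred to):

    from typing import List

    def total(prices: List[float], discount: float = 0.0) -> float:
        return sum(prices) * (1 - discount)

    total([9.99, 4.50], "10%")  # still runs (and blows up) at runtime,
                                # but mypy flags the str argument before
                                # the code is ever executed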


Read again, I got it covered.


I think the author has good points, but from where I sit they seem fairly subjective.

I've written Python for several years, and as a result the way I reason about problems has been heavily influenced by it.

I'm fluent in Ruby as well, but it takes mental effort for me to conform to its "flow", for lack of a better term. My time in Ruby has made me a better Python programmer, too - I understand more fully the idea of a DSL and how it should work. Ruby is great for writing DSLs and writing concise code that's guided by the language of the domain in which you're working.

I'm competent in Clojure, which if nothing else has given me a distinct distaste for impure functions - if a Python function takes an array, it should not modify that array in place unless it's very clear from the name that it's going to do so, and a function should not both modify its input and return it.

At the end of the day, Python is still the language that I reach for whenever I'm writing pretty much any personal project. It's clean, easy to read, and the overall feel of the language guides me to write maintainable code.


I don't get how any competent programmer can overlook such a glaring flaw... the lack of type checking. A program without type checking will inevitably lead to less maintainable code.


Python does type check - at runtime. That said, I understand where you're coming from. Maybe it's because I'm very familiar with Python, but lack of static typing has not been a problem for me in a long time. If I really need to be sure that something is the correct type, I can try to catch `TypeError` and handle it. Not the most terse of methods, but I rarely find that I'm all that concerned with types in my Python work.
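For example, something along these lines (a made-up sketch):

    def describe(value):
        try:
            return "length is %d" % len(value)
        except TypeError:
            # len() raised because value has no length -- handle it here
            return "no length for %r" % (value,)

    describe([1, 2, 3])  # 'length is 3'
    describe(42)         # 'no length for 42'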


I think you fundamentally misunderstand what type checking is.


I don't think so. I think you're referring to static type checking. I'm referring to dynamic/runtime type checking.

For instance, `1 + "hello"` will throw a `TypeError` at runtime because the `+` operator is not supported for use with `str` and `int` types. In a statically typed language, the code would not run or compile unless a `+` operator has been defined that takes a `str` type as its first argument and an `int` type as its second argument.

I guess you could say that a `TypeError` at runtime isn't really "type checking" in a sense. I guess it would be more like "type enforcement".


> I don't think so. I think you're referring to static type checking. I'm referring to dynamic/runtime type checking.

No, you're referring to type-related errors at runtime. That's not checking. Nothing is checked. Code breaks and may recover, but it has no idea what the types of the arguments to the offending expression were, only that it didn't work with that bit of code.

This is not type checking.

> I guess you could say that a `TypeError` at runtime isn't really "type checking" in a sense. I guess it would be more like "type enforcement".

All it does is say, "This code cannot execute with this implicit prior state." It has nothing to do with types except in the most tangential way.


1 + 'hello' resulting in a seg fault would be a "type related error"

1 + 'hello' resulting in an Exception would be a product of dynamic type checking.

A language would not be able to explicitly throw a type related error unless it had some information regarding the type of the data.

Most resources show little confusion on this; I think you are wrong. https://stackoverflow.com/questions/1347691/static-vs-dynami...


> 1 + 'hello' resulting in an Exception would be a product of dynamic type checking.

Incorrect. It might be the product of a run time type check. It is not inherently so. You can't even be sure you actually had a type error when you get a TypeError.

But even then, this is using "type check" in the most vacuous and equivocal way possible. It's not the concept most people are referring to.

> A language would not be able to explicitly throw a type related error unless it had some information regarding the type of the data.

You (and this resource to some extent) are confusing strong vs weak typing with static vs dynamic typing. A + method might have a type check embedded (esp. Python, since + is magical), but it's actually quite rare that a.foo(b) is anything other than an assertion that the object A's vtable-analogue has an entry.

This is sort of dynamic typing, in the sense that I can think of formalisms that model this (named extensible row types come to mind), but this is a profoundly useless definition of "type."

I'm not sure why we'd tolerate redefining type checking to a worthless concept and then using that divergent definition to imply that it's unhelpful. It's 2017, functional languages are fully baked. Powerful static type systems exist even for the Javascript environment and they are taking over that ecosystem rapidly.
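To make the "+ is magical" point concrete: in Python, `a + b` first tries `a.__add__(b)`, then `b.__radd__(a)`, and only raises TypeError when both return NotImplemented. A minimal sketch (the Meters class is made up for illustration):

    class Meters:
        def __init__(self, n):
            self.n = n
        def __add__(self, other):
            # the "type check" lives inside the method itself
            if isinstance(other, Meters):
                return Meters(self.n + other.n)
            return NotImplemented  # tells Python to try other.__radd__, then give up

    Meters(2) + Meters(3)   # fine
    Meters(2) + "hello"     # TypeError: unsupported operand type(s) for +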


It seems python itself also confuses these topics then. https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic...


I find it amusing that the first look Google gives me on strong vs weak typing says they're like static vs dynamic typing. Then wiki says there is no strong definition for either term. Maybe it would be easier to drop those terms and go one level deeper?


There isn't too much confusion, but people bend over backwards to avoid having their type system called "Weak."


Isn't this covered by IDEs, which warn you when switching the type?


Modern python supports optional type checking.


What is modern python?


3.4+ (really 3+ with syntactical support, and really 2.7+ with comment support).


I don't know, that sounds like details to me, or even a matter of taste.

There are many more important problems with Python: packaging, no way to know which exception you are gonna get, no nice async web framework and so on.

But this? You have those in any languages.


Agreed, particularly with packaging.

In the past, I've created scripts that have not needed packaging. Packaging is what is making me fall out of love with Python, especially as I'm learning Rust and seeing how great it can be.

There are many different metadata files with slightly overlapping information without a clear place that documents all of them.

I'd love to see poet take off.

https://github.com/sdispater/poet


Standalone packaging is even worse. There are a number of tools each with their own quirks and bugs. I thought Electron's 100MB packages were big, but then I tried to wrap up a ~30 line Python script that used one SciPy feature for portable redistribution on Windows. The whole thing came out to over 300MB!


For packaging pains, do you mean the fact that there is no single agreed on packaging/provisioning method (easy_install, pip, conda, apt, etc.), or that these solutions are terrible?


- too many config files

- no clean way to freeze / update

- no good way to distribute a standalone executable

- binary packaging is still a mess

- big libs like GUI frameworks are still terrible to depend on

- compilation solutions like the awesome nuitka requires a lot of manual work

- too many things to know

For a beginner it's hell. I haven't been a beginner in 10 years, but I train people in Python for a living, and I know it's a pain point.


May I add:

- pip and setuptools maintainers break everything quite often*

- basically no mobile support what so ever for packaging

(* quite often you say, skeptically? Well, they completely screwed it up twice this year already. That's twice more than any other package manager I use)


I AM a Python beginner, and I was amazed recently to discover that there was no way for me to assemble an executable for an OS I'm not on that didn't involve buying a computer running said OS.

I'm on Win10, and pyinstaller made it easy to create an .exe for Windows, but I could find no way on Earth to assemble an executable for Mac.

As it happened I just asked a tech-savvy colleague on a Mac to use pyinstaller on their machine, and it worked, but still - I'm really impressed with Python generally, but this seems like a surprising and considerable oversight.


>I AM a Python beginner, and I was amazed recently to discover that there was no way for me to assemble an executable for an OS I'm not on that didn't involve buying a computer running said OS.

Are you a beginner in general maybe? Because that's either arcane, damn difficult to set up, or nigh impossible in tons of other languages too -- even ones that actually do produce executables (which Python by default does not).

(And of course you can always just create the executable on a vm -- no need to buy a computer running the other OS).


20+ years of working in the tech industry and adjacent, so not so much, no.

But my work has, until this year, rarely involved compiling software for other people to use. So that may be the noobness you're detecting.


Basically, the reason I asked is that you seemed to imply you expected cross-compiling to be easy.

Whereas, from my experience, it's usually a pain in the ass to set up.


To be fair, Python programs aren't really designed to be compiled into a single executable. They are designed to be run through an interpreter. Therefore, multi-target compiling isn't really something the core developers focus on. The fact that you were unable to compile for multiple targets is more a failing of whatever third-party tool you were using.


Makes perfect sense, yes. And that's what I use Python for 99% of the time.

I just thought it was interesting quite how hard it turned out to be! As someone who's usually on the games/VR side of things, I may have taken some of the magic that Unity, for example, uses to compile to all sorts of platforms for granted.


If you're into building games and want a language that can easily cross-compile to multiple platforms, you may want to take a look at Nim: https://nim-lang.org. You'll get better performance than with Python as well. It's definitely a little less intuitive than Unity, though.


Not disagreeing with your main point, but it has historically been possible to run Mac OS in a VM. It might take some tinkering that is definitely not as straightforward and easy as some other OS's, and you might be breaking Apple's Terms of Service which require Mac OS to run only on Apple hardware, but it was usually possible.

Other operating systems apart from Mac OS should be much easier to run in a VM, and where they're not, I'd blame that more on the creators of the OS's themselves rather than on Python.


Ah, interesting. I did think of using a VM but my understanding was that Mac OS VMs were an absolute pain in the ass to set up ;)


This is one of the reasons golang has a growing share of popularity right now, a lot of people want to hit the linux compile target without a VM.

But hey, you can use VMs to hit a lot of these targets.


All but conda are pretty terrible. Conda seems to work, most of the time at least. The others seem to find interesting and novel ways to bork your system all the time. At one point a simple letsencrypt refresh ended up re-installing some critical python component eventually resulting in a full re-install of a server. That probably could have been avoided but the feeling of ease and convenience with which that simple operation resulted in a trashed server hasn't left me yet.


I know it doesn't address the root of the problem, but if you are running Python in production, you should always be using a virtualenv. Virtualenv is essentially a separate Python runtime with its own packages and interpreter. You can have as many virtualenvs as you like on the same machine and have them all configured completely separately.

> At one point a simple letsencrypt refresh ended up re-installing some critical python component eventually resulting in a full re-install of a server.

The letsencrypt developers explicitly recommend using a virtualenv to run the certbot script.

The best part of virtualenvs is that you can almost use them like little containers. For instance, let's say you have a script running a web interface and another script that performs data calculations both on the same server.

Your web application can have its own virtualenv and be listening for proxied requests from an nginx instance. The data processing script can be running in its own virtualenv as well and listening on a local unix socket for incoming data to process. The data processing script is 100% independent of the web application script and vice-versa. They each have their own interpreter and dependencies. Hell, you could even run your data processing script in Python 2 and your web application in Python 3 if you needed to.


> The letsencrypt developers explicitly recommend using a virtualenv to run the certbot script.

Yes, but virtualenv is not always available and updating a certificate should not require a large amount of software to be installed on the sly on a machine, it should just upgrade the certificate and be done with it.


How is virtualenv not always available?

    pip install virtualenv
    virtualenv -p python3 venv

If you have a heavily locked-down server or something, talk to the administrator.

> updating a certificate should not require a large amount of software to be installed on the sly on a machine, it should just upgrade the certificate and be done with it.

I respectfully disagree. First, updating a certificate can be done by hand without the use of any software. The point of letsencrypt is to automate the process. Automation requires software. If letsencrypt were written in C you would still need to ensure that the executable was compiled for your architecture and that you have the correct header files available in the correct locations.

I'm also not sure what you mean by "on the sly" here either. If we assume that you mean that the letsencrypt package automatically creates a virtualenv, how is this any different from postgres installing libxml2 as a dependency for example?


> pip install virtualenv; virtualenv -p python3 venv

Yes, if everything always worked as advertised, that is how you would do it. Unfortunately it doesn't.

> First, updating a certificate can be done by hand without the use of any software.

Yes, I'm aware of that.

> The point of letsencrypt is to automate the process. Automation requires software.

Exactly. So, how difficult can it be to upgrade a certificate that was already there, nothing on that machine needed 'upgrading' over and beyond the certificate, especially not without doing so in an irreversible way. All the software required to do the upgrade was in place because it worked 90 days before then.


> Yes, if everything always worked as advertised that is how you would do it. Unfortunately it isn't.

You'll have to explain. Any problems that arise would be problems that would arise with installing any package from any packaging system. I fail to see your point. Errors and bugs are always possible in any situation. This isn't really an argument against virtualenvs, it's an argument against software in general.

> Exactly. So, how difficult can it be to upgrade a certificate that was already there, nothing on that machine needed 'upgrading' over and beyond the certificate, especially not without doing so in an irreversible way. All the software required to do the upgrade was in place because it worked 90 days before then.

When dealing with security measures such as SSL, it's extremely important that all packages involved in the process are secure and up to date. Therefore, it makes sense to me that an SSL library would want to ensure that all of its dependencies have the latest bugfixes and security patches.


not op, but yes. all those things are what make python packaging more of a hassle than it should be.

virtualenv helped a lot when that came out but it still doesn't save you from shared system libraries and dependencies


I'd say a little bit of both.


> no nice async web framework

You should check out aiohttp: https://github.com/aio-libs/aiohttp

I've built a couple of tester projects with it in Python 3.5. It's actually quite pleasant.
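For a taste, a minimal aiohttp app looks roughly like this (a sketch from memory; handler and route names are arbitrary):

    from aiohttp import web

    async def hello(request):
        return web.Response(text="Hello, async world")

    app = web.Application()
    app.router.add_get("/", hello)
    web.run_app(app, port=8080)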


If your everyday life is working with Python, there really isn't a part of it that's not going to be repeated over and over.

For example, I slowly go insane dealing with the dumpster full of clownshoes that is python iteration. It's so dumb. I know it's a relatively small thing and only a few small tweaks would fix it, but after about 100 lines or so where I have to deal with it I'm ready to rage-eat my office chair.


There is certainly no perfect language out there. It would be easy to make a similar list of problems for just about any other platform. To avoid mentioning Python's advantages and other languages disadvantages when judging "favorite" seems misleading.


While you're right the article is unbalanced -- I think it's acceptable when judging "not favorite".


I don't know about that. Packaging and web frameworks are issues with a language ecosystem. Not having a strong type system is an issue with the language itself.


"no nice async web framework"

Tornado? http://www.tornadoweb.org/


Very low level.

There is nothing half as good as Django in the async world.


Can't you just run your app on an async wsgi server, like gunicorn w/ gevent workers? You could use whatever web framework you like so long as it supports WSGI (Flask, django etc).


"no way to know which exception you are gonna get"

Java much?


I don't know about OP but you don't even need to go to the level of checked exceptions, with their pain.

In Python, you are encouraged to follow Easier-to-Ask-Forgiveness-than-Permission (EAFP) and catch specific exceptions. Doing either / both requires you actually know what exceptions to catch so you don't hide bugs. The problem is that the docs rarely mention the expected exceptions. You have to resort to discovering them experimentally and hope that that exception was an intended part of the API contract and not an implementation detail.
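A quick sketch of the difference (int()'s ValueError is one of the few well-documented cases; the variable names are made up):

    user_input = "42x"

    # EAFP: try it, and catch only the specific failure you expect
    try:
        value = int(user_input)
    except ValueError:
        value = 0  # int() documents ValueError for malformed strings

    # Catching everything instead would also hide genuine bugs,
    # e.g. a NameError from a misspelled variable:
    try:
        value = int(user_input)  # typo on purpose
    except Exception:
        value = 0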


IMO package authors should document the exceptions that they throw.

The Python exception hierarchy is not as good as I'd like. It should be harder to match NameError, AttributeError, etc because there's rarely a sane way to handle those.

People often write code expecting to "handle" exceptions but usually end up masking design defects. Most of the time you can only sanely handle a handful of exceptions way back at the base of the stack. And the handling at that level is usually "log" and/or "limited backoff/retry mechanism". Most of the code you're writing shouldn't do much with exceptions other than perhaps add context and re-throw.
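For the "add context and re-throw" part, Python 3's exception chaining does exactly that (a sketch; load_config is a made-up example):

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError as e:
            # add context; the original exception stays attached as __cause__
            raise RuntimeError("could not load config %r" % path) from e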


Not to mention old cruft like old-style exceptions that for some reason hasn't been cleaned out. Python always chooses strict backwards compatibility instead of creating aliases to mimic the old behavior with new standard behavior, which makes certain old libraries a minefield.


Java too little?

Even if you're not required by the compiler to handle an Exception, knowing which ones are possible to be thrown in a part of the code is still handy.

The same way Optionals and Enums in language like Rust force you to handle all cases.


I was under the impression that every method had to declare which exceptions are thrown via the `throws` keyword. Is that not the case?


Only for checked exceptions; a RuntimeException does not need to be declared.


Correct, all checked exceptions are declared in the method signature.


That's the opposite end of the spectrum.
