Python Is Eating the World (zdnet.com)
879 points by gilad 39 days ago | 956 comments



If only its package management were as easy as its syntax...

I wish pip worked the same way as npm: -g flag installs it globally, otherwise it creates a local "python_modules" folder I can delete at any time. And optionally I can save the dependency versioning info to some package.json...

Instead, pip is a nightmarish experience where it fails half the time and I have no idea where anything is being installed to and I'm not sure if I'm supposed to use sudo or not and I'm not sure if I'm supposed to use pip or pip3, etc.


Here's a simple but less-than-completely-documented way to keep Python package management under control:

1. Don't install anything globally. Don't pip install --user, definitely don't sudo pip install.

2. For each project you want to work on, create a venv. Yes, there are tools for this, but the base venv tool is totally fine. (venv should be included in your Python, but a few distributors like Debian put it in a separate package - install it from them if needed.) Use python3 -m venv ~/some/directory to create a venv. From here on out:

3. As a first step, upgrade pip: ~/some/directory/bin/pip install -U pip.

4. Install things with ~/some/directory/bin/pip install.

5. Run Python with ~/some/directory/bin/python.

Slightly advanced move: make a requirements.txt file (you can use .../bin/pip freeze as a starting point) and use .../bin/pip install -r requirements.txt. That way, if you get any sort of package resolution error, you can just delete your venv and make a new one. (Downloads are cached, so this isn't super annoying to do.)
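
Concretely, the whole flow looks something like this (directory and package names are just examples):

    $ python3 -m venv ~/venvs/myproject
    $ ~/venvs/myproject/bin/pip install -U pip
    $ ~/venvs/myproject/bin/pip install requests
    $ ~/venvs/myproject/bin/python myscript.py
    $ ~/venvs/myproject/bin/pip freeze > requirements.txt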

A "project" can either be actual Python development, or just a place to install some Python programs and run them out of the resulting bin/.

(Edit: Yes, I know about activate, see the replies below for why I don't recommend it. With these rules, you get to say "Never ever type pip, only .../bin/pip", which is a good safety measure.)


Herein lies my problem. If I want to start a Node project I run `npm init` and then `npm install --save` to my heart's content. If I somehow manage to mess it up I just delete node_modules/ and install again.

If I want to start a Python project I have to set up venv and remember to put relative paths in front of every command or else it'll run the system version. Sounds simple, but it's still something to always remember.


Just source the activate script and it'll prepend the correct path so that you don't need to do anything else.


You have to use something like anaconda. It's basically the same as node_modules (sorta).


Use virtualenvwrapper.

$ mkproject foobar

$ workon foobar

You're set


Also: set PIP_REQUIRE_VIRTUALENV=true

You won't be able to install with pip unless you are in a virtualenv.
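
For example (the exact error wording may vary by pip version):

    $ export PIP_REQUIRE_VIRTUALENV=true
    $ pip install requests
    ERROR: Could not find an activated virtualenv (required).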


    1. pip install --user and sudo pip install are fine, actually; they will not interfere with a venv, they can co-exist just fine.
    2. Yes.
    3. Probably do "source bin/activate" first, then run 'pip install -U pip'.
    4. Just run pip install whatever; no need for the full path.
    5. Just run python directly; no need for the full path.
    6. Run 'deactivate' when you're done for now, and 'source bin/activate' when you want to continue/resume later.
In fact I like this better than node_modules: the venv layout of bin/include/lib is more natural than the odd "node_modules" name, in my opinion, and I don't need npx etc. to run commands under node_modules either; it's all taken care of by venv and its 'activate' script.
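
So the whole thing is roughly (names are just examples):

    $ python3 -m venv myenv
    $ source myenv/bin/activate
    (myenv) $ pip install -U pip
    (myenv) $ pip install requests
    (myenv) $ python myscript.py
    (myenv) $ deactivate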


pip install --user and sudo pip install won't break your venv. But they will break your system Python and any OS commands that depend upon system Python, perhaps including pip and virtualenv themselves, which is incredibly confusing. I've helped both friends and coworkers un-break it, and the symptoms aren't generally obvious. I wrote the patch to pip in Debian to prevent sudo pip install from removing files from Debian packages via upgrading packages. It's a huge pain, it's only worth running if you know exactly what you're doing, and as someone who does know exactly what they're doing I can attest that it's never necessary. After all, you can always just make a virtualenv.

One thing I did at my last job was to make a Nagios alert for machines with files in /usr/local/lib/pythonX.Y/site-packages, indicating that someone had run a sudo pip install, which was super helpful for "why is this machine behaving slightly differently from this other machine which should be identical". We had a supported workflow involving virtualenvs and also we had multiple members of the Debian Python team on staff if you needed systemwide packages, so not only was there always a better solution, there were people to help you with that better solution. :)

Re activate/deactivate, that's a matter of taste but I find it easier to avoid it completely too - see my reply in https://news.ycombinator.com/item?id=20672299 for why. Basically, you get the simple rule of "Never run bare pip" instead of "Remember which pip is your current pip and whether it's the one you meant."


> But they will break your system Python and any OS commands that depend upon system Python

Sudo pip install might on some distros (and I consider this to be a bug at the distro level, not a Python issue), but I've never heard of --user breaking anything.


Maybe I'm misremembering, but, isn't the point of pip install --user to get things onto your import path when running the base Python interpreter, just like sudo pip install would (except scoped to your user)? If so, wouldn't installing an incompatible newer version of some library (or worse, a broken version) break system commands that import that library, when that same user is running those commands?


$ source some/directory/bin/activate

And you don't need to keep typing out the full directory name. When done with the environment

$ deactivate


I've grown to dislike activate, because it breaks the simple rule of "never run pip, python, etc., only run your-venv/bin/pip, python, etc.". Now the rule is "Don't run pip, python, etc., unless you've previously run activate and not deactivate" - and it has the complicated special case of "make sure the command exists in your virtualenv." (For instance, it's definitely possible to have a Python 2 virtualenv where pip exists but not pip3, and now if you run pip3 install from "inside" your virtualenv you get the global pip3! Or you might have a Python 3.6 virtualenv and type python3.7 and wonder where your packages went, or several other scenarios.)
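
To illustrate the pip3 trap (a made-up session):

    $ virtualenv -p python2 env2
    $ source env2/bin/activate
    (env2) $ which pip
    /home/me/env2/bin/pip
    (env2) $ which pip3
    /usr/bin/pip3    <- the system pip3, so packages land outside the venv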

If you have the shell prompt integration to remind you whether you're in a virtualenv or not, it's fine, but I don't always have it, and I find it helpful to manually type out the full directory name (generally with the help of up-arrow or tab...) so I know exactly what I'm running.


For bash/fish it automatically prefixes the prompt with "(your-venv-name)", so it's obvious you're under some venv. Not sure about csh, but I would assume it does something similar. It looks like venv only ships activation scripts for bash/csh/fish by default, however.


This is good advice, but it illustrates the problem. There should be 2 steps (1. list your dependencies, 2. pip install). Not 5.


I agree. I'm optimistic about tools like Poetry https://poetry.eustace.io/docs/basic-usage/ for solving this. Unfortunately Python predates the realization that good packaging tools were a thing that needs to be solved in the core language and not externally (Go, Rust, Node, etc. postdate this realization; C, Perl, Java, etc. also predate it).
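
For a taste, the Poetry workflow goes roughly like this, per its docs (project and package names are just examples):

    $ poetry new myproject
    $ cd myproject
    $ poetry add requests    # records the dependency in pyproject.toml and installs it
    $ poetry install         # reproduces the environment from the lockfile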

The flip side is that decoupling the interpreter/compiler from the build system makes it more possible to write tools like Poetry (and, indeed, virtualenv) that explore new approaches to the problem. At my day job where we do hermetic in-house builds with no internet access and (ideally) everything built from checked-in sources, building C and Java and Python is straightforward, because we can just tell them to use our source tree for dependency discovery and nothing else, and we can set up CFLAGS / CLASSPATH / PYTHONPATH / etc. as needed. Building Go and Rust and Node is much more challenging, because idiomatic use of those languages requires using their build tools, which often want to do things like copy all their dependencies to a subdirectory or grab resources from the internet.

Of course, given that it's Python, there should be one - and preferably only one - obvious way to do it....


>Unfortunately Python predates the realization that good packaging tools were a thing that needs to be solved in the core language and not externally (Go, Rust, Node, etc. postdate this realization; C, Perl, Java, etc. also predate it).

Sure, but it's also a cultural thing. Ruby is nearly as old, and also predates this, but has nowhere near the insanity of Python. The community jumped on bundler and rvm/rbenv/etc super quickly, and rapidly improved them, while the Python community is barely even aware of pip-tools / pipenv AFAICT. Even virtualenv is really only a "real pythoners know" thing, it's rarely mentioned in guides, so newbies screw up their global environment frequently.


Ruby was exotic until someone translated the docs to English, but the whole ecosystem is indeed one reason I love Ruby. I really don't understand why python 2.7 is still a thing 11 years later. Sure, legacy systems, but if I install recently active open source on my machine I wouldn't expect it to use an outdated version of a programming language. Upgrading can't be that hard.


python 2.7 is still a thing because it was announced as the LTS* release over a decade ago.

* nobody called it that, but that's effectively what it meant to say "there will be no python 2.8. Python 2.7 will be supported until T. Python 3.x will be superseded in a year."


Use virtualenv. Always.

Python 3 comes with it as "python -m venv". Once in the virtualenv, you don't have to worry about the various pip forms and effects; you can just pip install.

You can get fancier than that of course, but that's what works with stock python, on all OS.


Meanwhile, all the python developers who have started doing any Js work are saying “I wish npm worked as easy as pip/virtualenv”...

It really isn’t that difficult, it’s just different. Different always seems wrong, at first.


I haven't seriously JavaScript'd in a couple of years but my problem with it then was different versions of node or npm. Nice thing about python virtual environments is that problem never exists (can make environments with whatever version you want).


pipenv really solves the Python version problem, IIRC. I don't actually use pipenv myself, since I haven't had time to thoroughly figure out the new magic and I prefer to know exactly what's going on with my Pythons.

npm doesn't solve the duplication of deps any better than python/pip, as far as I know. The react-native app I'm currently working on has a node_modules folder at 960 MB. That's probably bigger than nearly every virtualenv I've ever seen. A react-native node_modules on a bare project with `--template typescript` is at least 350 MB (created one a few minutes ago). I'm using nvm for node version management. No problems so far.


Exactly. NPM gets a lot of hate but lockfiles and local install by default is great. The default mode should not be global installation. Also imo virtual environments aren't amazing. Having some mode you have to remember to flip on that changes essentially the semantics of your pip commands seems a little brittle. Tools that work on top of pip and venv like Pipenv or Poetry seem a lot better.


The default should actually be a shared cache that is symlinked to local projects, like pnpm does.


Just use Poetry. https://poetry.eustace.io


Man that looks fantastic. It looks so useful that it ought to be an official part of Python.


This isn't even the start of the problems with pip and pypi. If I install pylibtiff, it embeds a copy of an old libtiff which is binary incompatible with the libraries on my system. I have to build it from source, then it works just fine. But I can't inflict this level of brokenness on my end users.

This applies to many packages embedding C libraries, including numpy, hdf5 and all the rest. There has been minimal thought put into binary compatibility here, meaning that it's a lottery if it works today, or will break in the future.


I couldn't agree more with this. I was forced into doing some UI coding, and although I could never fully embrace js, the package management aspects (esp. having the sane default of not installing packages globally) were definitely superior to python's.


That's what virtualenvs are for. Admittedly they're not a perfect solution but the tooling is quite good nowadays.


Use Conda envs!


I feel uncomfortable with the fact that people feel a third-party solution is the best way to solve this mess. It can also get messy when packages installed with pip, pip3, conda and apt are all entangled with one another in various ways.


It’s unfortunate that it’s third party, but conda has the unquestionable advantage of being the only Python-centric packaging system with a reasonable shared binary library story.


I'm curious, do you not find wheels + manylinux reasonable? I agree that until recently, Conda definitely had that advantage, but now that you can `pip install scipy` and have that get you a working library and not try to compile things on your machine what does Conda offer beyond that?

I guess one thing Conda has that the pip ecosystem doesn't is that it supports installing non-Python shared libraries like libcurl on their own. Is that an advantage? (We absolutely could replicate that in the pip ecosystem if that was worth doing, and it's even not totally unprecedented to have non-Python binaries on PyPI.)


I think it would definitely be great if pip could install non-python dependencies. One problem right now is that many projects will tell you to just pip install xyz. You execute that, things start building, and the process fails partway with some cryptic message because you're missing an external dependency. You figure out which one, you install it, start again, and another dependency is missing. Rinse and repeat. It's definitely not a turnkey solution, and this issue trips up newcomers all the time.

With respect to versioning, I think pip should be way more strict. It should force you to freeze dependency versions before uploading to PyPI: not accept "libxyz > 3.5", but require a fixed range or single version. That would make packages much less likely to break later because newer versions of their dependencies don't work the same way anymore.
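
In requirements-specifier terms, something like this (libxyz is made up, as above; the three lines show the loose spec versus the stricter alternatives):

    libxyz > 3.5           # too loose: breaks when libxyz 4.0 ships
    libxyz >= 3.5, < 4.0   # fixed range
    libxyz == 3.6.1        # single version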


Does pip allow version number dependencies? Conda is able to upgrade/downgrade packages to resolve conflicts, whereas pip just seems to check if a package exists and shrugs when there's a version conflict.


pip does handle versioned dependencies and ranges, and knows enough to upgrade existing packages when needed to resolve an upgrade. Its resolver isn't currently as complete as Conda's - see https://github.com/pypa/pip/issues/988 . (On the other hand, the fact that Conda uses a real constraint solver has apparently been causing trouble at my day job when it gets stuck exploring some area of the solution space and doesn't install your packages... so for both pip and conda you're probably better off not relying too hard on dependency resolution and specifying the versions of as many things as you can.)


... the same thing happens if you mix stuff into your /usr/bin directory that isn't managed by your system package manager.

The solution is: don't mix your package environments. Use a conda environment. Just like in Linux, you'd use a container. If you wait for the Python steering committee to fix pip you'll be waiting a long time.


Conda doesn't really solve the packaging problems of python. It can make them worse if conda doesn't have a package you need.


Conda supports pip installing into a conda environment if there is no conda package.


Yes, exactly my point: then you have to potentially deal with conflicting dependencies between pip and conda packages. This happens and it's a pain to deal with.


please no.

As someone who has to look after a bunch of servers used by researchers, conda is the biggest cause of things failing for everyone.

conda installs stuff all over the place and it's very easy to install conflicting stuff in the same place.

I just wish that either venv was automatic, or at least the second thing you learn in python.


Are you sure you're talking about Conda's virtual envs? Everything gets installed into the folder /envs/my-env-name.


I agree, but is it intentional? Seems like a lot of folks rail about how npm works (not just because of the "micro package" philosophy).


Is there any particular bottleneck towards getting this to work with Python? Would not a simple pip wrapper be enough for the task?


What you are describing is a virtualenv and a requirements.txt file. There are tons of other options for Python that do the same thing.


Packaging is definitely recognized as an issue; e.g. here: https://cecinestpasun.com/entries/where-do-you-see-python-in...


Holy Crap! What a lot of irrational, hyperbolic hate for Python.

I think everybody should spend their first couple of years working in Fortran IV on IBM TSO/ISPF. No dependency management because you had to write everything yourself. Or maybe [edit: early 90's] C or C++ development where dependency management meant getting packages off a Usenet archive, uudecoding and compiling them yourself after tweaking the configure script.

I'm not saying Python is perfect, but if it's causing your burnout/destroying your love of programming/ruining software development you seriously need some perspective.


Here's some rational "hate" for Python then.

I just returned to Python for the first time in a little while to collaborate on a side project and ran into a few tricky-to-debug errors that caused a fair bit of lost time. Know what the errors were?

In one case, I was iterating over the contents of what was supposed to be a list, but in some rare circumstances could instead be a string. Instead of throwing a type error, Python happily went along with it, and iterated over each character in the string individually. This threw a wrench into the works because the list being iterated over was patterns, and when you apply a single character as a pattern you of course match much more than you're expecting.
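
In miniature, it looked something like this (apply_pattern stands in for the real matching code):

    patterns = "abc"        # was supposed to be ["abc"]
    for p in patterns:
        apply_pattern(p)    # no error: applies "a", "b", "c" as separate patterns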

And another one, I typoed a variable name as groups_keys instead of group_keys (or vice-versa, I don't remember). Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.

There's entire classes of errors you can cause yourself in Python that aren't possible in stronger, statically-typed languages. For a large project, I'd pick the old and boring Java over Python every time.


Python is a dynamic language; that's what dynamic languages do. You don't have a type checker, but you get greater flexibility. You don't have to settle for that, though: you can use mypy and annotate types to get the best of both worlds.

> And another one, I typoed a variable name as groups_keys instead of group_keys (or vice-versa, I don't remember). Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.

This isn't what Python would do. If the variable was undefined, Python will throw an error, so you must have defined it with this name, or you're misremembering what happened.


It has nothing to do with static vs. dynamic. There's no reason a string has to be iterable over its own characters in either kind of language, and the proposal to change this was only rejected because it broke too many things[1] and couldn't be automatically fixed.

Point in the GP's favor: fixing it would definitely not be a problem in an early-binding language! In fact, the nigh-impossibility of automated refactoring puts the lie to the notion that late-binding languages are more "agile."

It's a design flaw, in the same way Python 2's allowing comparisons between different types was a flaw, e.g. "a" < 3 succeeds. Python 3 now, correctly, throws a TypeError because there's no sensible ordering between the two things.

(While I'm griping: another design flaw is conflating iterables and iterators, which makes generators almost useless. Say a generator is passed to a function expecting an iterable. If the function uses it twice, the second time it silently returns nothing!)
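
For example:

    gen = (n * n for n in range(3))
    print(sum(gen))  # 5
    print(sum(gen))  # 0 - the generator is exhausted, silently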

> This isn't what Python would do, if the variable was undefined Python will throw an error

I think GP must have assigned to the name, in which case Python will create a lexically bound name.

Python's rules for naming can make perfect sense or be quite surprising:

    try:
        x = four()
    except Thing:
        x = 5
    print(x)  # 4 or 5

    for a in [1, 2, 3]:
        pass
    print(a)  # 3 ?!
[1]: https://mail.python.org/pipermail/python-3000/2006-April/000...


It's only surprising if you expect it to behave like another language. Python variables are function (not block) scoped.


What’s supposed to be the surprising thing? Are you confusing pass and break and expecting it to print 1?


What's surprising, for people used to block semantics, is that `a` survives outside the for loop at all.


mypy is a great effort, but very experimental. Try using it on any large enough real-world project and it loses most of its value, as there are still a lot of unimplemented things, or because you'll depend on a third-party module that doesn't support it yet.


Case in point: Pandas, the foundation of data programming in Python, does not provide the Series or DataFrame (that's a table) types in a way that MyPy can use.


I'm pretty sure this is what they meant...

  def my_func():
      group_keys = 'My thing'
      while group_keys == 'My thing':
          if some_logic() == 42:
              groups_key = 'My other thing'  # typo here


I don't see how that's avoidable in any language that doesn't require explicit variable declaration


Yeah, that's the point. Python doesn't have it. It'd be better if it did.


compiler/IDE would complain about "unused local variable"


Your 2nd error isn't possible in Python, so I'm not sure what you did there. Regarding the first, sure, it is a bug that was annoying to catch. But, having an `Iterable` interface in Python is also really neat and useful if used responsibly. If you're programming regularly in Python, you are accustomed to the tradeoffs that come with a dynamic programming language and no static types, and you can still avoid issues like the one above.

Right off the top of my head, using type hints with a decent IDE or an assert statement would likely have caught the issue.

I'm not saying that Python doesn't have issues (all languages do), but I don't see the error noted above as any sort of deal breaker. On the other hand, if you're only ever going to use Python like a strongly typed language without taking any advantage of its dynamic characteristics, then I can see why it would seem as a total downgrade compared to languages like Java.


I didn't explain the second one well. Here's some exact code.

  group_keys = ...
  if not isinstance(group_keys, list):
    groups_keys = [ group_keys ]  # typo: creates groups_keys instead of re-assigning group_keys
So rather than listifying the non-list variable, it was creating a new variable. The cause of this bug is that Python doesn't distinguish between declaring new variables and overwriting existing ones.


Well, this should have been caught as an unused assignment in static analysis. A whole ton of languages allow this situation, so I'm not gonna ding Python too hard for that one.

However, here's a related but different python gotcha:

    if foo(a):
        v = list(bar(a))
    for i in v:
        print(i)
In this example, v is only assigned inside the if. Due to python's limited scoping, the name v is also visible outside the if, but it only has a value when foo(a) is True. When foo(a) is false, the for loop throws a NameError. And yes, a coworker wrote code that accidentally implemented this antipattern, albeit much more spread out in the code base.

This is clearly a bug in the code, yet no static analysis tools I've tried have successfully discovered it. There's a bug in pylint that's been marked WONTFIX because it requires a wildly different implementation. At a language level, it feels weird that if blocks aren't a scope level for new variables. If you want to reference v outside the if block, declare / assign it outside the block first.


Indeed, as another user mentions, mypy will detect this issue, as will pytype, even without any annotations.


Interesting, I had not looked at these because I'm not interested in volunteering to add type annotations to our code base.


mypy was able to detect these kinds of issues for me


I'm pretty sure PyCharm catches that 2nd one with a warning.


FYI, the google style guide (or maybe the internal-only version) suggests avoiding initialize-then-assign in favor of single-assignment form:

    unclear_type_thing = ...
    if isinstance(unclear_type_thing, list):
      group_keys = unclear_type_thing
    else:
      group_keys = [unclear_type_thing]
statically avoids this problem. In general, prefer immutable variables where possible. Single-assignment form is nice for a lot of reasons, not the least of which is that it avoids this particular gotcha.

And I should add that the "right" way to do this would be to factor this out to a function:

    group_keys = coerce_to_list(...)
is much clearer than either block, and avoids the possibility of the issue.


All of these things are true, but they require a non-trivial level of experience and discipline to avoid most potential gotchas. Your average Python project on the Web isn't written to this level of quality, and when people are learning programming using Python in school they certainly aren't there yet, and are gonna hit all kinds of problems related to this stuff.

But is there a way to force immutable variables in Python? You can easily still end up in the same situation when you typo something (easy to do when plurals are involved), and then end up reassigning something when you meant to create a new variable.


I don't think that's fair to be honest. If you had simply used Pycharm with default settings you would have easily caught the first bug due to the linting. It's a fair complaint, but this specific bug is easy to catch using any modern Python IDE.


I've never found the "Use this specific IDE" defense particularly valid, considering that many IDEs don't have these features and that in other languages the compiler itself protects you.

Needless to say, I was not using Pycharm for this development, nor am I likely to install an entire IDE just for a small change I'm making on a random project. It's a non-trivial burden to configure and learn an entire IDE, vs just using what I already know (which is often just emacs).


> a non-trivial burden to configure and learn

any new tool chain.

It's hard to take complaints like this seriously.


It's even harder to take "The IDE should make up for deficiencies in the language" seriously. In languages that handle this stuff well, you can edit in Notepad and still not make these mistakes. Why push it up several levels to a few specific IDEs that most people don't even use?


> Why push it up several levels to a few specific IDEs that most people don't even use?

Because those IDEs solve problems, so that you can close tickets on the project at hand, without having to port it to the best language evar.


> But is there a way to force immutable variables in Python? You can easily still end up in the same situation when you typo something (easy to do when plurals are involved), and then end up reassigning something when you meant to create a new variable.

Not always. Mypy has experimental support for `Final[T]` [0], and attrs/dataclasses support final/frozen instances, but that's opt in on a per-argument basis.

[0]: https://mypy.readthedocs.io/en/latest/final_attrs.html


I see this often and it is a bad pattern that people do.

Typically, type-checked languages wouldn't even allow you to do this. If you use mypy for type checking, it won't like it, because you're redefining the type of a variable. Best practice would be to use a different variable for the conversion if you must, but ideally you should just make the function accept a list as an argument. If you're really worried about being passed something other than a list, you should use type annotations to tell the type checker what it is. If you want to add an extra runtime check, then do:

assert isinstance(group_keys, list)

You can complain that Python allowed you to do something dangerous, but you have tools to help you avoid it, and this flexibility is what makes tools like SQLAlchemy so powerful.


I still don't think you quite understand what's going on here. Python wouldn't create a new variable in this case. It would re-assign the value represented by the variable you already assigned once. I agree that it would have been better if Python had explicit variable declarations (this is one of the few things I think Perl got right).

On the other hand, Ruby made this same mistake. And if you wrote this code in Javascript you wouldn't get an error either; with var, the inner declaration is hoisted, so it's still a single variable per function scope.

For instance, this code runs for me using node 8:

  var fun = (bool) => {
    var x = 1;
    if (bool) {
        var x = 2;
        x += 1;
    } else {
       x += 1;
    }
    console.log("x=" + x);
  }

  fun(1);
  fun(0);


> I still don't think you quite understand what's going on here. Python wouldn't create a new variable in this case. It would re-assign the value represented by the variable you already assigned once.

Uh, no. He typoed the reassignment, so it wouldn't re-assign the value.

> So, of three of the most popular dynamic languages, Python, Ruby, and Javascript, none of them would have helped you catch this kind of error at script-parsing time. So again, it seems like you have an irrational dislike for Python, all things considered.

Sure, but he's made it clear he likes Java. Fundamentally he's against dynamic typing, so of course he doesn't like any of the dynamic languages.


Ah my bad. I think I would have understood it more if he included the typo in his example.


He did. But that just illustrates why this kind of bug is so annoying: it's hard to spot.


I don't understand why you're accusing me of being irrational. These seem like very rational problems to have with Python. They literally caused me bugs that cost me time to deal with that I wouldn't have faced in other languages.

You're also assuming that I don't have the same problems with Ruby or JavaScript. I do. The exact same critique could be made of them as well, but they're not the subject of this thread; Python is.


You can't argue with someone who has chosen to overlook your viewpoint.

I've run into the same issues while writing python code. People newly picking up python are especially prone to these kinds of bugs. Also, with python I have to spend a lot of time figuring out what went wrong in my code compared to other languages.

People who have been using python for a long time have wired their brains to avoid such pitfalls, and now they happily defend it.


I don't think what you're saying is true. I already said I think it would have been better if Python and Ruby had explicit variable declarations. But if this is your biggest issue with a language and its ecosystem, then IMO that language is doing pretty well. I would rather, for instance, deal with implicit variable declarations in Python than the gigantic mess of Java frameworks that have been invented to "reduce boilerplate", such as Spring/Guice, AspectJ, Hibernate, etc.


My bad. I didn't realize you were against dynamic languages in general. FWIW I prefer Java and static types as well, but as far as scripting languages go, I think Python is pretty great.


I disbelieve. And I disbelieve despite being a fan of dynamic languages.

The tradeoff is that dynamic languages are faster to develop in and more concise, but more expensive in maintenance, exactly because of issues like this. The data I base this opinion on is an unpublished internal report from nearly a decade ago at Google quantifying the costs of projects of different sizes in their different languages - which were Java, C++, and Python. Python had the lowest initial development costs, and the highest maintenance costs. That is why Google then moved to Go as a replacement for Python. It was good for the same things that they used Python for, but being statically typed, its maintenance costs were lower.


I can believe that. But for a lot of people, the lower initial development time/cost aspect matters a lot. If I had Google resources, sure, I'd Go with other languages perhaps, but you can still write high-quality and capable software in Python. And while the batteries included aspect of Python is not everyone's cup of tea, I personally find it quite handy to have that so I don't have to waste a ton of time evaluating different libs to do fairly standard things.

To be clear - I'm not trying to say that Python is better in any objective way. Ultimately, I think people should use the tools they have available and prefer, to build what they want.


> But for a lot of people, the lower initial development time/cost aspect matters a lot.

As I said, I'm a fan of dynamic languages. :-)

One of the top ways that startups fail is failing to build what they need to build quickly enough. Maintenance costs only matter if you succeed in the first place. Using dynamic languages is therefore a good fit.

But, even if you're not Google, if you're writing software and have the luxury of paying attention to the lifetime costs of the project up front, you should choose a statically typed language.


Maybe they could write unit tests to make sure what's being passed is lists and strings. But that's probably crazy.


That would not catch the bug if the input is not under its control.

You could just as well say "just check if the object is a string" in the method, which would work, but the point was rather that it is difficult to notice if you did not think about it, compared to other languages that would crash or fail to compile instead.


Yeah, the input isn't really under control because it's coming from deserializing a YAML file. It worked for the exact type of input I was expecting, namely, when you configure a specific value as a list, but it wasn't working for anything else. And YAML has plenty of types it can spit out, so my naive fix still only handled lists and strings properly!


Yeah, YAML deserialization is the worst case scenario for dynamic typing. In most situations, types are pretty consistent, and assuming you run your code at least once, you'll find most errors. But with YAML deserialization all bets are off. YAML is even worse than JSON for this because seemingly minor changes in the YAML can change the shape of the data.

I've had success validating such data against a schema, so I know it had consistent type structure before working with it.
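
For example, with the jsonschema package - a minimal sketch, schema and file name made up:

    import yaml
    from jsonschema import validate

    schema = {
        "type": "object",
        "properties": {
            "patterns": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["patterns"],
    }

    with open("config.yml") as f:
        data = yaml.safe_load(f)
    validate(data, schema)  # raises ValidationError if "patterns" is a bare string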


You can write a unit test for anything you can think of, of course.

But a strongly-typed language will catch such errors automatically (and for free) at compile time, even if you didn't anticipate the failure case.


The values were coming from a YAML deserializer, for what it's worth.


> Iterable interface in Python is also really neat and useful if used responsibly.

Honestly this was a major attraction to python for me a decade plus ago as a student when I started learning--even when I used it irresponsibly. There are so many small tasks where you just kinda have to iterate over 100-1000 items that you're not worried about big-O or anything like that—you just want to iterate and work on a collection quickly for some task in the office.


>In one case, I was iterating over the contents of what was supposed to be a list, but in some rare circumstances could instead be a string. Instead of throwing a type error, Python happily went along with it, and iterated over each character in the string individually.

I've been using python for about 13 years professionally and I wrote up a list of "things I wish python would fix but I think probably never will" and treating strings as iterable lists of characters was on there.

I've seen this bug multiple times, and the fix is relatively easy - just make strings non-iterable by default, and use "string.chars" (or something) if you really want to iterate through the chars.

Nonetheless, I still love the language and wouldn't use anything else.

>Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.

This one gets caught by linters - unfortunately, 90% of what most python linters spit out is unimportant rule violations, which drowns out stuff like this in the noise.


> I wrote up a list of "things I wish python would fix but I think probably never will"

What else is on your list? I'd be interested to see what other parts of Python you would wish to change.


Among other things:

* Implicitly casting strings, integers, dates, etc. to boolean (e.g. "if x" being true if x is a non-empty string). Cause of more unexpected bugs than I can count, but fixing it would cause massive headaches, and memories of the 2-to-3 transition would scare anybody away from doing this, I think.

* Treating booleans as integers (True + True = 2). Fixing this probably wouldn't cause that many headaches, but everybody still seems to think it's a neat idea for some reason.

* Treating non-package dependencies of pip packages (e.g. C compilers, header files) as something that is either the package's problem or the OS's problem. Nobody looks at this problem and thinks "I should solve this".


Iterating over characters in a string is something that's done very often in introductory CS classes, but very little in the real world. Python has support for string finding and regexes; why in the world would I be individually iterating over characters? Generally, when you see that, it's a code smell.

So yeah, I totally agree with you, it'd be better if trying to iterate over a string were a flat-out error, and if you really want it, you should mean it. Though Python being dynamic still means that you'll only spot this error at runtime.

As for linters, how do they know if your intent was to reassign the value of an existing variable, or to define a new one? The language has no way to indicate which of these is intended.


For your first error, you can do some foot-shooting with a statically typed language too.

I remember a bug I made using C#, where I wanted to delete a collection of directories recursively. I got mixed up in the various inner loops and ended up iterating over the characters like you. But C# allows implicit conversions of char to string, so the compiler was OK with it, and since those were network drive directories (starting with "\\server\"), the first iteration started recursively deleting the directory "\", which in windows means the root directory of the active drive (c:\)... And SSDs are fast at deleting stuff.


The first problem is a valid problem in Python: it essentially doesn't have a "char" type; instead it only has strings of size 1.

This actually ruins type annotation in Python: you can't properly annotate a function to accept anything iterable except a string.

https://github.com/python/mypy/issues/4334 is a FR to at least check it with mypy.


It's not a perfect solution, but you could perhaps hack it with @overload to say that the "str" version returns NoReturn.
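
A rough sketch of that hack (function name made up):

    from typing import Iterable, List, NoReturn, overload

    @overload
    def as_list(xs: str) -> NoReturn: ...
    @overload
    def as_list(xs: Iterable[str]) -> List[str]: ...
    def as_list(xs):
        # at type-check time, passing a bare str makes the result unusable
        return list(xs)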


> And another one, I typoed a variable name as groups_keys instead of group_keys (or vice-versa, I don't remember). Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.

Python doesn't have uninitialized values, it throws NameError when you try to access a variable that hasn't been set. So I don't see how this could have happened.


Other way around, the typo was in the name of the variable being set, so it defined a new one instead of modifying the existing one.


Well, this is anything but a new complaint. I would assume a user who has worked in Python for some modest amount of time has made peace with this. One works in Python knowing that this can and will happen (well, one does have linters on steroids like mypy now to counter these).

Python code needs more testing and more run-time type checking of function arguments than a statically typed language. If that's a deal-breaker then one shouldn't be using Python in the first place. What you gain, though, is some instant gratification, and the ability to get something off the ground quickly without spending time placating the type checker. It's great where your workflow involves a lot of prototyping, exploration of the solution space and interactive use (ML comes to mind, but even there int32 vs int64 can byte, correction, bite). I see it as a trade-off - deferring one kind of work (ensuring type safety) in favor of another. Hopefully that deferral is not forever. I like my type safety, but sometimes I want it later.

What I typically do is, once I am happy with a module and do not need the extreme form of dynamism that Python offers (which is frequently the case), I take away that dynamism by compiling those parts with Cython.


Here is the counter-argument to everybody who thinks there is too much python in the world:

It could be javascript.


> In one case, I was iterating over the contents of what was supposed to be a list, but in some rare circumstances could instead be a string.

The creator of a well-known alternative to Python has a single-letter email address, and regularly receives email generated by Python scripts with this exact bug (which means instead of sending an email to "user", sends an email to "u", "s", "e", and "r"). So I’ve heard.


In my CS program, we learned Python as a convenient way to sketch a program. We also learned C++ for speed and OCaml for those functional feels. A programming language is a tool; Python has some great use cases, mostly focused around ease of programming.


The bugs you describe should both be easy to catch with unit tests. It sounds like the problem is not that you're using Python, it's that your project lacks tests. Sure, you can typo this sort of thing; but it should be apparent within seconds when your tests go red.

(And nowadays, you can also use type hints to give you a warning for this kind of thing, e.g. your IDE/mypy will complain about passing a string where the function signature specified a List.)
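
For instance, a hinted signature turns the string-for-list mixup into a check-time error (a sketch; names made up):

    from typing import List

    def apply_patterns(patterns: List[str]) -> None:
        for p in patterns:
            ...

    apply_patterns("abc")  # mypy: incompatible type "str"; expected "List[str]"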


Serious question: if you are writing unit tests to check types, why not just use a language with a compiler that does that for you? And if you are writing python with type hints, why not just use a language that puts the types you spend time adding to work making your program faster?

Python is great for sharing ideas / concepts, but under some circumstances it seems irresponsible to choose it over other viable options like Go (if you use Python because it's easy), or C# (If you use Python because it's a 'safe' enterprise choice). (Ecosystem specific things aside at least)


As the sibling comment said, I'm not proposing checking types in unit tests, I'm proposing checking that the behaviour is correct.

If there's a code path that passes in a bare string instead of a list, and your logic breaks, then that code path should have a failing test case. However, type hints can provide another opportunity to catch this kind of mismatch before they even get committed.

> under some circumstances it seems irresponsible to choose it over other viable options like Go (if you use Python because it's easy)

This is probably true, but I think people tend to overuse this argument (i.e. use an overly broad set of "some circumstances"). I build fintech apps with Python, for example, and don't find any of these issues to be a problem. In my experience, if you implement sound engineering practices (thorough testing at unit, integration, and system levels, code review, clear domain models, good encapsulation of concerns, etc.), then the sort of errors that bite you are not ones that a type checker would help with. I agree that the worst Python code is probably far more unsound than the worst Go code, but I don't think that's the correct comparison; you should be comparing the Python and Go code that _you_ (or your team) would write.

I think it's easy to be dogmatic about this kind of thing; in practice most people are substituting personal preference for technical suitability. Sure, there are cases where the performance or correctness characteristics of a particular language make it more suitable than another. But for most software, then whatever your team is expert in is the best choice.


The problem was caused because I didn't know that there was a code path that passed in a bare string instead of a list, though. It's hard to write tests for situations you aren't aware of.


Just because you have a Python function that has strict requirements for input doesn't mean every function you're writing has strict requirements.

Moreover, using a strongly typed language doesn't magically make you invulnerable to invalid input. Unit tests are useful in every language.


Because the unit tests are not to "check types", they are to check that incorrect values (e.g. a string instead of a list of strings) do not occur. They are no different from other kinds of incorrect values, like attempting to cluster an odd number of items into pairs.


> Python happily went along with it, used an uninitialized value

There is no such thing in Python. You should get NameError if a name doesn't refer to any object.

  >>> def f():
  ...     name
  ... 
  >>> f()
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "<string>", line 2, in f
  NameError: name 'name' is not defined


okay, so use type annotations and mypy --strict


It's not my project; I'm just a collaborator. My experience has been that a very tiny minority of Python code out there is written in this style, so unless you're only starting projects from scratch, you can't benefit from it.


you can gradually type (probably don't use --strict in that case). It might not be a ton of benefit if you aren't actually writing new code though.

There's a good document on this:

https://mypy.readthedocs.io/en/latest/existing_code.html


And that'd be fine if everyone were on board with it and that were the general direction of the project, but I don't think that's true.

I've never seen a strict, type-annotated Python project out there in the wild, and I've seen a decent amount of them. A random non-primary-contributor isn't going to have much luck stepping into an established project and getting everyone to go along with adding annotations to the whole thing.

And if I were starting a project from scratch, rather than coercing the language to do something it wasn't really designed for, I'd just use a language that has first-class support for types directly in the compiler, like Java or Go.


But at that point, why not get something for the time you spend adding types and just use a different language?


Are you blaming Python for shitty design? It's a dynamic language. All dynamic languages have those issues.


TBO, these are way too trivial compared to the argument of the post you're replying to.


Agreed. I really don't understand all these buckets of filth being poured on Python in this thread.

It's the first language I've worked with in my life that just clicked with my brain and doesn't drain me.

I would take a Python job over a Java/C/C++/Go/Rust any day. There's some languages that could pull me away from Python (Nim, Crystal) but they're nowhere popular enough to move wholesale to them.


> I would take a Python job over a Java/C/C++/Go/Rust any day

it's funny, I feel the exact opposite. I work on a team that maintains a digital catalog, and a lot of what we write is about taking in asset- and metadata files, asynchronously processing them, and then publishing that to a denormalized read-optimized data store. We often joke that we mostly take data from 'over here' and put it 'over there'.

All our stuff is in Java, and honestly, if you use Lombok to squeeze out the boilerplate, and a decent dependency injection framework like Guice or Dagger, modern Java really isn't so bad. Streams are clunky but they get the job done. We use Jackson a lot to serialize/deserialize Java pojos to JSON and XML, which is pretty seamless for us so far. The Optional class is again clunky, but it works well enough.

The thing for us though is that the problems we spend the most time solving are just not really related to the language we write in. The hard problems are much more around things like operations (CI/CD, metrics, alarms, canaries), performance (latency, load, etc.) and just the nuts and bolts of the business logic (what type should this new field be? what values can it take? how do we plumb this through to downstream system X owned by team Y? etc.)

I honestly wouldn't want to have to write this stuff in Python for a simple reason: I don't think I could live without static typing, which is a fantastic tool when you need to manage a large code base written by multiple people over multiple years. I can make a change in some package, do a dry-run compile of every system that uses it, and then see what needs updating. It gives me certain guarantees about data integrity right at compile time, which is super helpful when you're doing data conversion.

But hey, different jobs, different tools. Glad you found something you're happy with.


> I honestly wouldn't want to have to write this stuff in Python for a simple reason: I don't think I could live without static typing, which is a fantastic tool when you need to manage a large code base written by multiple people over multiple years. I can make a change in some package, do a dry-run compile of every system that uses it, and then see what needs updating. It gives me certain guarantees about data integrity right at compile time, which is super helpful when you're doing data conversion.

Programming in the large without type safety is a fool’s errand.

> But hey, different jobs, different tools.

Exactly. There’s a reason your kitchen drawer isn’t full of just sporks.


> Programming in the large without type safety is a fool’s errand.

Lol. Right. No big system has ever been built in an untyped or weakly typed language. Well, except just about every bit of software we all use everyday. But it does seem like some small startups can't get by without it.


>No big system has ever been built in an untyped or weakly typed language. Well, except just about every bit of software we all use everyday. But it does seem like some small startups can't get by without it.

Many have built models of the Eiffel tower with toothpicks too, so?

You can still build things with inadequate tools: inadequate != prohibitive. You just have more problems going forward.

Which is exactly the lesson people who write large scale software have found.

What is this "just about every bit of software we all use everyday" that you claim is written in weakly typed languages?

Most major software is still written in C/C++ (anything from operating systems, Photoshop, DAWs, NLEs, UNIX userland, MS and Open Office, databases, webservers, AAA games, what have you). One could use just that C/C++ software, and they'd have almost all bases covered.

The rest is e.g. Electron based software and online services. For the latter, most of the major ones (e.g. Gmail, Apple's iCloud services, Microsofts, online banks, online reservations, etc, etc) are not written in "weakly typed languages", only the client is.

And those that were initially written in a weakly typed language, e.g. Twitter with Ruby on Rails, others with Python, etc., have rewritten critical services (or rewritten entirely) in statically typed languages (e.g. Twitter went for Java/Scala, others for Go, etc).

And even for the client, most shops are now turning to Typescript (and FB to Flow) because they've found weak typing is not good enough for large scale. So?


Python is not weakly typed. It is strongly typed in that it forbids operations that are not well-defined (for example, adding a number to a string) rather than silently attempting to make sense of them. I agree wholeheartedly about weakly typed languages, though.
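
For example:

    >>> 1 + "2"
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +: 'int' and 'str'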


I believe that marketing Python as "strongly typed" has the potential to confuse rather than educate. Python still crashes at runtime with these errors. It has nice error messages, but it still crashes, potentially in production. If you want to create your own "types", you'll have to add your own runtime checks. It's much more sane than JavaScript, but it's not strongly typed like Haskell. Python does not automatically coerce some built-in runtime values, that's it.


Not automatically coercing values is all that strong typing means. Getting a type error before you run the program is static typing. They're separate axes, and both useful to talk about in a language.


> Not automatically coercing values is all that strong typing means.

It's at best a colloquial term and it's misleading to non-technical management.


You are describing static typing. There is a well defined difference between strongly typed and statically typed.


Could you elaborate or point to a resource? AFAIK, the term "strongly typed" is usually used to mean that a value's type cannot change, but I'm failing to find a well-defined definition or a comparison against statically typed.


Static typing means that types are figured out statically by looking at the source code, and type errors are detected then when it notices a mismatch. Dynamic typing means that types are worked out at runtime by looking at live objects when code operating on them executes.

Strong typing means that types cannot be substituted for other types. In C, you can write `int x = "one"` and the char * (address of) "one" is automatically converted to an int, or in Javascript you can write 1 + "2" and a string "1" is automatically created; depending who you're talking to, either or both of these qualify as weak typing.

They're both spectrums, and commonly confused with each other.


You're explaining static typing vs dynamic typing. I'm still failing to see how Strong differs from Static. If the only difference is that "Static" means "types are figured out statically by looking at the source code", do you mean it's possible to change the type, unlike strong typing? If not, can we say Static encapsulates Strong?


Static typing is not a superset of strong typing, they're on different axes. Strong vs weak typing (which I explained in the second paragraph) is about how strictly types need to match expected types before you get a type error. Static vs dynamic typing is about when you get a type error (during a static typechecking phase, or at runtime when you try to use a value as that type).

When you say the type cannot change, that's ambiguous: do you mean the type of the value a variable holds, or the type of the value itself? In C (a statically typed language), "int x" means that x will always hold an int, but you can still assign a pointer to it, it just turns into an int (weak typing). In Python (a dynamically typed language), the variable "x" wouldn't have a type (so it could hold an int at one point and a string later), but the value it holds does, and because it's strongly typed, it would throw a type error if you attempted to use it in a place where it wanted a different type (eg, `1 + "2"` does not turn 1 into a string or "2" into an int).


If I got this correct, you're saying strong can be compared to weak and static can be compared to dynamic. So there is no such thing as strong vs static typing comparison.


Right, they describe different aspects of how types work in a language.


Thanks. I appreciate the time you took for clarifying in detail.


"Dynamic typing" is really just case analysis at runtime. Every static language is capable of dynamic typing, it's not some feature that statically typed languages lack. A dynamic language is really just a static language with one type.


Why aren't statically typed programs really just dynamically typed programs where all the types happen to be statically inferable?


Because most statically typed languages allow us to define our own types, add type signatures to constrain etc. Dependently typed languages also allow types to depend on values. Inference is useful, but only one aspect of static typing.


The word "type" has a specific meaning in maths/logic, which is not the same as that used by the "dynamic" languages community.

Professor Bob Harper of CMU would refer to Python as unityped, i.e. having a single type: https://existentialtype.wordpress.com/2011/03/19/dynamic-lan...


My point is that your marketing is misleading. Use "strong dynamic types" if you must, but for Python, it would be more accurate to say "strongly tagged".


C's typing is so weak it might as well be an untyped language - not even a dynamically typed language. And that's what most of the software you run every day is built on.

Static typing was all the rage 20 years ago. C++ and Java were going to save us from the chaos of C. What people found was that the vast bulk of software defects are not problems that can be detected by static typing.

Static typing just created a constraining, inflexible code base that was no more reliable than C or Smalltalk or Lisp. Once your beautifully conceived collection of types was demolished by the cold hard reality of changing business requirements, the type system actively worked against you.

Python and ruby and javascript started gaining traction, and at first it seemed crazy to use a language that didn't have a static type checker. But after people started using them they realized they just didn't have the kinds of bugs that a static type checker would catch anyway - because those kinds of bugs are caught by the dynamic type checker (something C doesn't have, and C++ only sort of kind of has) at run time when you write tests. And writing tests also caught all kinds of other logic bugs that didn't have anything to do with types. They were writing software faster and more reliably in dynamically typed languages than they ever could in the old statically typed languages.

Of course no language is a silver bullet, and writing software is still hard. Combine that with the fact that our industry has no sense of history, and a fair number of programmers today have only used dynamically typed languages, and you can see why the static typing fad is coming back around.

It seems intuitive that catching these type errors at compile time rather than run time will make for a more reliable system. But history tells us otherwise. Unless you just don't run your code before pushing it to production, the dynamic type checker will catch them just as well when you run tests. And your types will drift away from the reality of the business requirements, grinding development to a halt.

The static typing fad has a 5 year shelf life. Just enough time for managers to force a new generation of programmers to re-write all their code in typescript or whatever and learn it is just as unreliable, and much harder to work with.


> Programming in the large without type safety is a fool’s errand.

Programming in the large without tests is a fool's errand. Type systems don't guarantee correctness.


You've got it backwards.

(Sound) Type systems guarantee correctness for the invariants encoded as types. If it compiles, you know it doesn't have any type related errors at all. With more evolved type systems even your program's logic (or large parts of it) is guaranteed.

Tests just allow you to test random invariants about your program. If it compiles and your add() method works when passed 2, 2 and gives 4, it still might not work for 5, 5... (contrived example: imagine it with much more complex functions, though even a simple e.g. "one line" time conversion can have similar issues).
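To make the contrived example concrete, here's a sketch of a buggy add() (hypothetical, of course) that satisfies the type checker and a 2, 2 unit test while still being wrong:

    def add(a: int, b: int) -> int:
        return a * b            # logic bug: the types check out, the behavior doesn't

    assert add(2, 2) == 4       # passes: 2 * 2 == 4, so the test is green
    assert add(5, 5) == 10      # fails: 5 * 5 == 25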


You need to test anyway. So, is it the case that type systems provide much value beyond what a proper set of tests, which are necessary, are going to provide anyway?

If you skimp on testing your system will be crap, but at least the type system can fool you into thinking otherwise because it still compiles.


>You need to test anyway.

Actually, if your type system is powerful enough, you don't need to test. That's the source of the "if it compiles, 99% of the time it works right" people mention about Haskell (and even more so languages like Idris etc).

Type systems are tests -- just formal and compiler-enforced, not ad-hoc "whatever I felt like testing" tests, like unit tests are.

From there on it's up to the power of the type system. But even a simple type system like Java's makes whole classes of tests irrelevant and automatically checked.

A programmer can also leverage a simpler type system to enforce invariants in hand crafted types -- e.g. your "executeSQL" function could be made to only accept a "SafeString" type, not a "string" type, and the SafeString type could be made to only be constructed by a method that properly escapes SQL strings and params. Or the same way an Optional type ensures no null dereferences.
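A minimal sketch of that executeSQL/SafeString idea, translated to Python with mypy and a hypothetical escape_sql helper (typing.NewType creates a distinct static type with zero runtime cost):

    from typing import NewType

    SafeString = NewType("SafeString", str)

    def escape_sql(raw: str) -> SafeString:
        # hypothetical escaping logic; what matters is the return type
        return SafeString(raw.replace("'", "''"))

    def execute_sql(query: SafeString) -> None:
        ...  # can only be reached with an escaped string, says mypy

    execute_sql(escape_sql("it's fine"))  # ok
    execute_sql("DROP TABLE users")       # mypy error: expected "SafeString", got "str"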


> Actually, if your type system is powerful enough, you don't need to test. That's the source of the "if it compiles, 99% of the time it works right" people mention about Haskell (and even more so languages like Idris etc).

Types only eliminate certain tests. You will always have system tests, acceptance tests and unit tests. One should use types to augment their system reliability.

Types will not catch logical errors in your code.


Haskell's type system most definitely does catch some of your logical errors. That's exactly why it is so revered.

An effective use of a type system such as Haskell's Hindley-Milner can result in a vastly smaller surface area for possible problems and thus can cut a big number of otherwise mandatory unit tests off your todo list.


>Types only eliminate certain tests. You will always have system tests, acceptance tests and unit tests.

Yes, so let's eliminate them with types, instead of doing them. "Acceptance tests" are not concerned with programming.

>Types will not catch logical errors in your code.

Actually, depending on the type system, it will.

That's how program logic is verified as a "proof", and how implementations of algorithms are determined to be logically correct in more exotic languages (but even in C, with some restrictions and the right static checking tooling, NASA/JPL-style projects do that).

https://en.wikipedia.org/wiki/Formal_verification


The question is not whether a type system will catch bugs. The question is whether a type system finds enough bugs that tests (sufficient to cover the things that the type system does not catch) would not also catch.

If you have to point to something like Idris I don't think you're making a real world argument yet.


Exactly! While tests, on the other hand, totally guarantee correctness.

I don't get why people try to use sophisticated types systems to prove software, when writing and maintaining tests is so superior, and funnier too!


Both static type systems and unit testing are just tools which are supposed to help programmers to deliver higher quality software.

Both static type systems and unit testing have their disadvantages. For static type systems, you sometimes need to bend over backwards to make them accept your code, and they're not very useful before the code grows large enough. For unit tests, even if you have 100% test coverage, it doesn't mean that you're safe - underlying libraries may behave in unexpected ways and the test data input won't ever cover the whole range of values that the code expects to work with. Integration tests have the same problem: the prepared input represents just a few cases, plus they are generally harder to run so they are run less frequently.

So, both tools are useful but they aren't solutions for all the problems in programming. Static type systems have the advantage of being checked without running any code, which should be much quicker than running the tests. Static type systems become more useful as you increase the precision of types and the amount of annotated code in the project. When used correctly, they provide certain guarantees about the code which you can rely on and they are used to restrict the domain (set of possible inputs) of type-checked procedures and classes. This means that you can write fewer unit tests because you don't have to worry about certain conditions which the type system guards against (static guarantee of something never being `null` is quite nice).

Anyway, I think that both static type systems and tests are great tools and they can and should be used together if you value the quality of the code you write. This is getting easier thanks to gradual type systems (optional type annotations like in Python or JS) which allow you to get some of the static guarantees without insisting on everything around being typed. With tests and mypy (in Python) you're much better off in terms of code quality than if you used just one of them. I see no reason not to use them both.


> For static type systems, you sometimes need to bend over backwards to make them accept your code, and they're not very useful before the code grows large enough.

How large does a program need to become before the advantage of being allowed to write fishy code is counterbalanced by the types becoming intractable and the code impossible to refactor in any meaningful way?

This is a serious question. Some years ago, apparently Guido van Rossum thought 200 lines would be already quite an achievement [0]. Based on my own experience, I feel that 99 out of 100 errors thrown at me at compile time are valid and would have caused a crash at runtime (i.e. when I do not expect it and have lost all the context of the code change). And I get about 50 such compilation errors in a day of work, so I guess I could write without the compiler safety net for about 10 minutes. That's my limit.

One could object that a 10 minute program written in python can accomplish much more than a 10 minute program written in Java. That's certainly true! But then we are no longer comparing the merits of compile time vs runtime type checking, but two completely different languages. Of course it is easier to write a powerful/abstract language with runtime type checks, while writing a compiler for a powerful language is much harder. Still, since (and even before) python/perl/php were invented, many powerful compiled languages have appeared thanks to PL research that are almost as expressive as scripting languages. So it would be unfair to equate runtime type checking with lack of expressive power.

Now of course tests are important too. Compile time type checking does not contradict testing, as you somewhat made it sound in your message. Actually, if anything, it helps with testing (because of test case generators that use type knowledge to exercise corner cases).

I'm sorry if all this sounds condescending. I am yet to decide whether I should allow myself to sound condescending as the only benefit of age :) But I'd not want to sound like I'm upset against anyone. Actually, I'm happy people have been using script languages since the 90s, for the same reason I have been happy that many smart people used Windows: my taste for independence gave me by chance a head start that I'm afraid would have been much tougher to get based on my intelligence alone.

And now that static type checking is fashionable again I'm both relieved and worried.

[0]: https://www.artima.com/intv/pyscaleP.html


> Some years ago, apparently Guido van Rossum thought 200 lines

I think it's better to measure the number of separate code entities (classes and functions and modules in Python) and how many different use-cases (ways of calling functions and object constructors) each entity is expected to cover... After converting to LOC, I'd say ~500 would be the limit. After that, it's a constant fight with TypeErrors, NameErrors, and AttributeErrors - it's just that everyone is already used to this, while not many know of any alternatives. Also, there are substantial differences between languages - in some 10 lines are enough to start complaining, while in some others I've seen and worked with ~2k loc code and it was manageable.

> many powerful compiled languages have appeared thanks to PL research, that are almost as expressive as script languages.

Yes, but on the other hand, some powerful static type systems for dynamic languages have also appeared, and some of them are close to Haskell in terms of expressivity. The particular example here would be Typed Racket, which has a state of the art type system built on top of untyped Racket. It supports incrementally moving your untyped code to the typed one (whether a module is statically or dynamically typed is decided when the module is created; as you can define many (sub)modules in a single file, you can just create a typed submodule, re-export everything that's inside, and move your code there one procedure at a time). Also, it automatically adds contracts based on static types, so that they still provide some guarantees when a typed function is imported and used in untyped code. There are many interesting papers on this, and Typed Racket is really worth looking into, if you have nothing against Lisps.

> Compile time type checking does not contradict testing, like you made it sound somewhat in your message.

Damn! I actually wanted to argue exactly this: that both tools are useful and both can be used together to cover their respective weaknesses. :) Looks like I need to work harder on my writing skills...

> I'm sorry if all this sounds condescending. I am yet to decide whether I should allow myself to sound condescending as the only benefit of age :)

Well, it didn't sound condescending to me, so no prob :) But, if you'd like some advice on this: please don't try to be condescending on the basis of age alone! It's totally ok to sound condescending if you have the knowledge, experience and skill to back it up... Well, at least in my book :)


What? Tests don't guarantee correctness. Sophisticated type systems can prove correctness. See Idris for instance.


he is joking


> Programming in the large without tests is a fool's errand. Type systems don't guarantee correctness.

I never said you do not need tests nor that static typing is a panacea. In my view it's a necessary, but not sufficient condition, when programming in the large.


No, but they help. You can find figures of a 15%-38% reduction in bugs for TypeScript versus JavaScript. And that does not consider the additional effect of strong versus weak typing.


I'm in agreement with you about Typescript, but JS has other deficiencies that contribute to typing issues.

Anecdotally, I'm frequently enough bitten by type issues in JavaScript, but I can't recall very many in Python. Certainly not 15-38%, perhaps 1%.

Which furthers my point (for my set of circumstances): I find the majority of my bugs when I'm writing tests.


It's a bit off-topic, but I wanted to comment on this:

> a simple reason: I don't think I could live without static typing

The gradual type system for Python (mypy at the moment) is actually a very good tool. It's as expressive as C# easily, despite some limitations. It fully supports generics and protocols (interfaces or traits in other languages), it allows the user to control the variance of generic arguments, it supports pretty accurate type inference (although not as powerful as OCaml), and so on. Just set up a CI where one of the build steps is running mypy and make the build crash if there's an untyped and not type-inferrable statement anywhere. This is what I've been doing for a year already and it really helps with the maintenance of the projects and with development once the codebase becomes large enough.
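For a flavor of what that buys you, here's a small sketch of generics and protocols as mypy checks them (assuming Python 3.8+ for typing.Protocol; older versions need typing_extensions):

    from typing import List, Protocol, TypeVar

    T = TypeVar("T")

    def first(xs: List[T]) -> T:      # generic function with inference
        return xs[0]

    class Closeable(Protocol):        # structural interface, like a trait
        def close(self) -> None: ...

    def shutdown(resource: Closeable) -> None:
        resource.close()

    n = first([1, 2, 3])              # mypy infers n: int
    shutdown(open("log.txt", "w"))    # ok: file objects have close()
    shutdown(n)                       # mypy error: int has no close()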

This may be as good a chance as any to say this: gradual type systems are here to stay. It's been more than 10 years since the original paper (the J. Siek paper was written in 2006; the PLT Scheme (Racket now) guys started working on what became Typed Racket around that time too) - as usual, the industry lags behind the research significantly, but it's bound to catch up at some point. Facebook's Flow and mypy are the first large scale industrial applications (if I remember correctly) of the theory, but I believe it won't be long before similar functionality pops up all over the place.

While there's still much to be done (like deriving run-time constructs from static type annotations and preserving at least some of the benefits of static typing when interacting with untyped code), these type systems are already powerful tools, and the fact that they are "optional" isn't really a problem for bigger projects, where they can be enforced by the build process. Currently, the lack of type annotations in external libraries poses a problem, but the number of annotated ones is bound to grow, because a static type system is an incredibly helpful tool if used correctly and consistently.

So, what I want to say is the distinction between statically and dynamically typed languages will continue to blur and, at some point, will become irrelevant. Especially when you notice that many statically typed langs started to also grow features from the "other side" like marking a variable `dynamic` and allowing the user to do whatever they want with it without complaining.


I know that this is not strictly static typing but in Python 3.5 they added an optional type system. See https://docs.python.org/3/library/typing.html
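For example, a quick sketch of those optional annotations; they are ignored at runtime, and a checker like mypy (run against yourfile.py, or wherever this lives) flags the mismatch before the code ever runs:

    def greet(name: str, times: int = 1) -> str:
        return ("Hello, " + name + "! ") * times

    greet("world")  # fine
    greet(42)       # still runs (and crashes on the +), but
                    # `mypy yourfile.py` reports it up front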


We have the typing enforced as mandatory for all new code in our codebase (and have progressively been retrofitting it to old code as we touch it). It's saved our asses many, many times.


Interesting that you mention the difficult problems being around CI/CD and operations. I had to get our Python application's CI/CD pipeline off the ground and it was much harder than it would have been in Go, for example. Notably, figuring out how to manage dependencies and run tests in a way that was reasonably performant was a massive challenge. We made the mistake of using pipenv, but downloading dependencies took half an hour. We should use something like Bazel to solve those problems, but it doesn't support Python 3 (allegedly some folks have hacked things together to get it working, but I haven't managed to reproduce it). Further, packing dependencies into a lambda function is tough because Python libs are often bloated and static analysis tools are lacking, making it hard to trim the tree. I'm sure there are solutions, but they're hard to find relative to Go. Not sure about Java or other languages.


>All our stuff is in Java, and honestly, if you use Lombok to squeeze out the boilerplate, and a decent dependency injection framework like Guice or Dagger, modern Java really isn't so bad.

So, basically, if you go out of your way not to use Java as is, Java is not so bad for the task?


Well, sure. Or just use Kotlin.


I've worked in so many languages and environments in my career, and Django/Python/virtualenv has to be one of the least painful. I tried Rails, which is very similar but feels "inside out"; a good friend of mine loves Rails, hates Django, and has the exact same feeling about Django.

That's kind of my point, you may like other environments better, such as React/Node/NPM but that doesn't mean Python is a horror show.

I'm quite enjoying Go though.


How stable is python to run a full trading / quantitative algorithm on?

I feel there are benefits for every language. I am just curious if super stable and scalable conditions can be met on python.


>How stable is python to run a full trading / quantitative algorithm on?

JP Morgan operates a ~30 million LOC Python platform for trading and analytics. (related talk: https://www.youtube.com/watch?v=ZYD9yyMh9Hk)

Yes, there are very, very large working python codebases in fields out there that demand correctness. I'm honestly getting tired of the static typing circlejerk that has entered the industry.


Btw, the JPMorgan dev team is ridiculous because support is so massive; the only way new projects get done is by hiring massive numbers of people / consultant firms and then doing layoffs.

Not saying there is a problem; I'm sure some people like to have their throat taken out when the trade doesn't execute at the marked-down price.

Anyways, I don't have a problem with that type of pressured environment. I'm more so pointing out people's need for comfort of solution rather than sustainability. Getting started is more difficult, so many are turned away.

Also some of the computation python can do is very powerful and I would trust it if I was no risk besides myself going balls deep.


That's my talk! Thank you for posting that :)

Python has been my main programming language since 2000, fwiw.


What do you mean by "stable"?

FORTRAN will give you super scalable conditions. I don't think you really meant that either... do you have a GPU cluster at work?


Sorry couldn’t respond due to fault segmentation.


Fortran (since 1990)


Python is the first language that clicked with my brain as well and in college I often used it to prototype homework algorithms before translating them into the language I needed to actually submit my work in. I have nothing but love for python as a language.

At the same time even when I used it heavily I never saw it as anything more than a scripting language to sit in front of some tool that was written in a language I couldn't be fucked to learn at that moment (numpy and scipy were used heavily throughout my college career).

If I'm being honest I don't understand how anyone could get as worked up about a language as the people in this thread have. At the end of the day most of us are still writing unportable imperative code that runs like shit. Maybe blaming language is how we cope with our own failure as engineers.


> At the end of the day most of us are still writing unportable imperative code that runs like shit. Maybe blaming language is how we cope with our own failure as engineers.

Sounds just about right ;)


Python is great; I picked it up back in the early 2.x days. My main problems with it are the brittle string handling/conversion code and the breakage of backward compatibility. But it's overall a great language.


And some people feel the opposite. I’m glad Python works for you. I had the same click-with-my-brain feeling with Ruby, whereas I find working with Python to be draining and demoralizing.


> I would take a Python job over a Java/C/C++/Go/Rust any day.

Why do you group those languages like they're similar but different from Nim and Crystal? They're wildly different in terms of their target domain, runtime models, etc. Go and Java are general purpose application languages and the others are more suited for systems or performance critical applications.


I had to learn Fortran IV for my first job. Am I allowed to hate Python?

Are you assuming everyone complaining here is young, and this is their first language? Consider that maybe they're complaining because they've used older languages they liked more.

Often, not having a feature is preferable to having a feature designed or implemented poorly.


Sure! I dislike all sorts of languages and environments.

That wasn't my point. My point is if you had to write Fortran IV using an IBM 32xx terminal you wouldn't be quite so hyperbolic about modern Python.

Unless you are claiming you would rather return to writing Fortran IV than use Python because you like Fortran IV better, in which case I'm very confused.


Just because languages were more of a PITA in the past, doesn’t mean we shouldn’t pick out faults of current languages and search for new/better solutions...


What does all this have to do with Fortran, terminals, or weaving your own core memory?

Python's competitors are Lisp, OCaml, Swift, C# etc.

I prefer at least Lisp and OCaml.


Python's actual competitors are Ruby, Perl, R, Shell, Visual Basic, Javascript, PHP, and Matlab.

Nobody's going to bother with OCaml or Lisp for web development, data science, or OS scripting, where Python is most often used.


Clojure has got pretty nice tools for web development (both backend and — with Clojurescript — frontend). It’s definitely out there in some places.


Well, the original context here was about comparing python to fortran, which also doesn't fit this "competitor" criteria. It's an apples to oranges comparison, sure, but that's the way this whole discussion started. At least Lisp is roughly the same age as fortran, which gets at the root assumption that python is an improvement over older languages.


Python and Lisps do directly compete as the preferred introductory language for university computer science classes.


In this decade?

Not even MIT teaches lisp anymore.


Yep. https://github.com/racket/racket/wiki/Courses-using-Racket

Lots of universities teach Scheme, particularly Racket. Although Python is more popular, even in that domain.


>Not even MIT teaches lisp anymore.

That's their loss.


Well, Facebook is trying to make OCaml happen for web development (ReasonML). Not sure if they're succeeding, though.


There's a difference between hating Python, and saying (I'm guessing is the comment that spurred this one) this: "I try to be a good sport about it, but every time I write python I want to quit software engineering.", like a top-level comment below says.

If you had to write Python, would you also want to quit software engineering? Would you go back to Fortran instead of Python?

Of course you're allowed to hate Python but someone saying "every time I write python I want to quit software" is either extreme hyperbole, some tangentially related issue like depression, or just no language at all would make them happy enough.


In my opinion, as a computational researcher, Python was not really meant to be a scientific computing programming environment. It was a big historical mistake to go in that direction. In the near future, hopefully, it will be replaced by a better alternative. And believe me, most people who do not speak highly of Fortran, when it comes to developing a new language for scientific computing, pretty much end up reinventing Fortran.


Well I actually feel betrayed...

Python 1.4 was an awesomely simple programming environment and I pretty much immediately fell in love with it. Then features were added. Now it is a whole home improvement store full of kitchen sinks.

I think that programming is a sort of theological process. Popular languages attract ideas. Unfortunately, in the case of Python, those ideas were not effectively filtered and now we have an expression of as many ideas as can possibly fit. The ultimate design by committee...

I suspect that the recent excitement about assignment expressions is really a kind of straw that broke the camel's back. The problem isn't just this one feature, it's the sum of them.


I write a lot of toy/hobby one-off scripts in Python and have since 1.5; what has significantly changed that prevents that type of usage for you?


>I think that programming is a sort of theological process. Popular languages attract ideas. Unfortunately, in the case of Python, those ideas were not effectively filtered and now we have an expression of as many ideas as can possibly fit. The ultimate design by committee...

It's funny, a lot of people hate on Elm for the exact opposite reasons: one person dictating the language's direction and removing features. I suppose a nice balance could be struck between the two ends of the spectrum.


>Python 1.4 was an awesomely simple programming environment and I pretty much immediately fell in love with it. Then features were added. Now it is a whole home improvement store full of kitchen sinks.

I used Python at the time (and up to now). It was a revelation compared to Perl, but it sucked compared to modern Python.

What exact features do you have a problem with?


I think it's pretty disappointing that most of the top comments don't talk about the interview with Guido himself over the history of Python. Tangentially related discussion is one of the appeals of HN but I think it's a bit out of control here.


Well, I'm just glad this is the top comment, as Python really is taking over the world for a reason.

And of all the bugs I have written in recent memory, not one came down to a lack of static typing. They were due simply to logic errors, flawed assumptions, misunderstood requirements, and good old race conditions. The static typing zealots like to think that if it compiles it must be perfect, but this is a mirage. Unit tests in Python can compensate quite well for the lack of static typing.


Have you ever worked in a large engineering organization full of engineers with varying degrees of experience all trying to accomplish the same goal?

I can't imagine anyone has ever tried to do engineering at scale (people wise) and did not find the value in static typing.

It's why startups eventually moved off RoR once they started scaling. It's why there is such a large push to type JavaScript (have you seen the rollbar article about the top 10 errors in JavaScript? All but one have to do with types: https://rollbar.com/blog/top-10-javascript-errors/), it's why Facebook created Hack, and outside of parentheses repulsion, it's probably why so few large projects have been written in a LISP or LISP descendant.

Python is great for small: small teams, small organizations, small projects with a few dedicated tasks, small scripting tasks. Most people aren't trying to take anything away from python here in the comments save a few irrational responses.

*again want to stress in my comment when I speak of scale I mean scaling people wise: more organizational structures in your company, more engineers, more collaboration between teams.


>I can't imagine anyone has ever tried to do engineering at scale (people wise) and did not find the value in static typing.

>why so few large projects have been written in a LISP or LISP descendant

The major dialect of Lisp, Common Lisp, is strongly typed, and many large projects have been written in it: for CAD/CAM, controlling a NASA spaceship, complete operating systems (Open Genera), the Mirai 3D graphics suite used for creating Gollum in "The Lord of the Rings", etc.


From the link:

> 1. Uncaught TypeError: Cannot read property If you’re a JavaScript developer, you’ve probably seen this error more than you care to admit. This one occurs in Chrome when you read a property or call a method on an undefined object.

Does typing stop null object errors in JS, Java or C for that matter? No. You need to continually check for null objects in all langs I use including Python. It seems most of the bugs on that page are of a similar vein.


Null reference errors in Java and C are due to The Billion Dollar Mistake, which is a specific deliberate weakening of a static type system. Statically-typed languages that do not commit The Billion Dollar Mistake do not have null reference errors.
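Gradually typed Python can opt out of the mistake too: with mypy, Optional makes the possibility of None explicit and the checker makes you handle it. A minimal sketch, assuming a hypothetical find_user lookup:

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # hypothetical lookup that may come up empty
        return "alice" if user_id == 1 else None

    name = find_user(2)
    print(name.upper())      # mypy error: "None" has no attribute "upper"

    if name is not None:
        print(name.upper())  # ok: the check has ruled None out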


And in the same way that folks are adding typing to JavaScript, it has been added to Python.

Python typing is quite similar to Flow, a JavaScript type checker.


> It's why startups eventually moved off RoR once they started scaling.

I thought it was because of Ruby's poor performance characteristics.


I assume we are all talking about Twitter, and that's what I thought too.


The counter-examples to this are GitHub, and obviously Basecamp, however.


Just wanted to mention that you can selectively statically type variables with Cython. Of course using Cython also changes other things and requires compilation, but I have found that it generally just works.
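For the record, a tiny sketch of what that selective typing looks like in Cython's .pyx syntax (this has to be compiled by Cython before it can be imported):

    # fib.pyx -- cdef gives chosen variables C-level static types
    def fib(int n):
        cdef int i
        cdef long a = 0, b = 1
        for i in range(n):
            a, b = b, a + b
        return a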


Possibly as the first few paras of the story are just so weird


I cut my teeth in FORTRAN IV (on RSX-11M). I lived through the archie days of uuencoded fragments to build my C environment, supplemented by DECUS tapes. Those were good old days.

I use Python 3 these days for a lot of stuff. It's pretty good.

These are better days. We all have complaints, but on the whole, things are not too bad.

I think for the most part that when nothing meets your expectations, it may be that your expectations need to be adjusted.


uuencode still holds a special place in my heart.


People don't realize what a revelation Python 1.x was back in the day. Around '97 or so I was tasked with porting a giant Mathematica program for calculating diffraction grating efficiencies into something which ran open source (there was no free Mathematica engine for running scripts back then). Back then, that meant either C or Fortran. Sure, stuff like Perl existed; nobody thought of it as a real interpreter that could be used to construct complicated things any more than Awk was. When I realized Hugunin over at Livermore had done LAPACK extensions for Python (whatever numpy was called back then) ... well, this massive job was done in a week and worked the first time.

The winning thing python had that nothing else had at the time was it was social, it was readable, and there was generally only one way to do things: the right way. It no longer has the latter quality, and the preferred coding style in it seems to be java-ish OO-spaghetti, but it's still pretty good.

That said, these days, I resent every damn time I have to use it. It's eating data science, more or less because pandas and scikit is ... mostly good enough, and because unlike R it's .... mostly good enough to deploy in an enterprise application. But if you're working on the exploratory side of data science, Python is shit compared to R. Doesn't have the tooling, doesn't have native facilities, and is vastly more long winded. All the attempts to make Python more ... X ... are probably a mistake also. You're taking a beautifully simple tool and making it more exotic and complex. It's like trying to use Matlab to build webservers.


> ... Perl existed; nobody thought of it as a real interpreter that could be used to construct complicated things any more than Awk was ...

I can't help but gently interject here. By 1997, I'd been programming for some time, and there certainly were people who considered Perl suitable for programming in the large, and there were certainly big projects so written.

While I disagree with the specific word 'nobody', I agree with the sentiment: Perl was widely considered to be only useful for 'small' things at the time. Widely, but not exclusively.


Famous one:

> There are two kind of languages: the ones everybody hates, and the ones nobody uses.


Slightly wrong. Here's the original:

"There are only two kinds of languages: the ones people complain about and the ones nobody uses."

Bjarne Stroustrup's FAQ: Did you really say that?. Retrieved on 2007-11-15.


This is spot on. In my current workplace we use Clojure, and Clojure has many of the same problems in package management as Python does (no lockfile, no easy way to create reproducible builds, no way to declare a range of dependencies unless you use version pinning, etc. etc.).

However, I never saw any complaints about Clojure package management in any topic about Clojure here.


Maybe because Clojure has a much smaller total number of possible dependencies? That is, "just as bad theoretically, but easier to wrangle by hand".


We use Clojure massively enough to have multiple issues with dependencies, including the fact that sometimes we need to build a new version of a library simply to build it with more recent versions of dependency X, for example.

It is not that bad, much as I also don't think Python packaging is bad. Other ecosystems have better solutions, though.


That's just a demonstration of the same underlying phenomenon: "nobody uses it".


aka People complain about things they use.


I'm a somewhat older programmer, and I've worked with a variety of languages (C, OCaml, C++, Scheme, Go, Java...). I think all of them are great in their own way and there's a lot to be learned with all of them.

I started to use Python quite recently and I really like it. It is a well-designed language with high-level abstractions that are really fun to use. I like the pervasive use of iterators, the 'everything is an object' philosophy, the minimalist syntax, the built-in datatypes...

That being said, I feel that the dynamic types show their limits when projects get big. I use linters and static type annotations, but I find refactoring very error-prone and there's a point where I don't really trust my programs.


You shouldn't trust your programs. That's why you test them mercilessly.


But should you trust them afterwards?


"It is a poor workman who blames his tools — the good man gets on with the job, given what he's got, and gets the best answer he can."

—Richard W. Hamming[0]

I have rarely "chosen" to use Python at work, but it has never failed to get the job done.

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2041981/


OT: Your cited NCBI ref to the paper "Ten Simple Rules for Doing Your Best Research, According to Hamming" is pretty neat in itself [0].

e.g. "Rule 1: Drop Modesty", "Rule 7: Believe and Doubt Your Hypothesis at the Same Time".

[0] - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2041981/


a programming language designer should be responsible for the mistakes that are made by the programmers using the language. [...]

It's very easy to persuade the customers of your language that everything that goes wrong is their fault and not yours.

- Tony Hoare


I always thought that sentiment to be too broadly applied. A good craftsman should be able to make use of the tools he is given, but nonetheless not shirk his duty to improve upon them.

Not to say I have any real complaints about Python.


I don't think Python is a bad language but that quote is a pretty ridiculous response to criticism when the discussion on this article is a far cry from people blaming failures on Python/Python tooling.


Are you saying that python is good for the 90s? VB6 was good for the 90s. But that's not really relevant to what language to use now.


No, not at all, I'm saying if you had been a C programmer in the 90's you would have some perspective on some of the complaints and comments about python in 2019.


Why are you complaining about C in the 90s? If you'd been using punchcards in the 50s you'd have some perspective on some of the complaints and comments about C in 1990.


I'm actually not complaining about C in the '90s, it was amazing compared to Fortran in the '80s.

Funny story: at my first co-op job (Fortran IV), my boss made me fix a bug using punch cards so I'd appreciate why the codebase wasn't as nice as it might be. THAT was perspective.


Fortran was designed for punch cards - sometimes you could even "write" new code by picking old cards from your desk drawer.


For what use case? for math and tech programming in the 90's I would have gone with Fortran.


Fortran IV? But yeah, Fortran was the go-to tool for mathematics, simulations, etc. for the same reason Python often is today: libraries and existing code. I had to convert a simulated annealing algorithm from Fortran to C in the '90s.


In the '90s it should have been F77 or later; if they were stuck on Fortran IV in the '90s, I am not surprised Python was invented.

Why did you convert the algorithm from Fortran to C? It seems a waste of time to me, unless it was a training exercise.


Abacus in the 0000's...


Why would you rationally compare something that exists and lives today, in the form it has today - with something from early 90s? In what world is that an objective comparison?


Back in my day we had to walk 20 miles to get to school -- uphill both ways!

As if the only way to gauge quality is to forever compare to the technologies of the past. Python was a great improvement on its peers in its heyday, but people, technologies and philosophies have changed since its inception and there is nothing wrong with wanting something more. That's how progress is made.


I find programming in Python dull and a little mind-numbing. Sure, maybe Python was an okay choice 20 years ago, but the world has left it behind at this point. Its lack of functional programming constructs, very few data structures in the stdlib, terrible performance, worthless statement vs expression semantics, no multithreading, no macros, etc.

Instead of evolving into something okay, Python is pretty much the same broken language as it’s always been, run by a guy who is openly hostile to PL theory, FP, and any major changes to Python whatsoever.

If you ever need/want to use Python, do yourself a favor and use Racket instead. Racket is better than Python in every single way (unless you are doing data science with Python, in which case fine keep using it).


Isn't it also just as much that Python is having its day? Granted, a day long in the waiting, but many langs go through this (Ruby, PHP) and then it tapers off and the next language has its day.

Probably Go will be the next hotness in 5 years.


> Probably Go will be the next hotness in 5 years.

I think it will be difficult to grow a large ecosystem for a language with very poor FFI performance [0] in the long run. Golang's poor FFI performance is the number 1 reason I wouldn't use it for my own projects.

[0]: https://github.com/dyu/ffi-overhead


It depends on how active the community is. Java has had the same issue, and people just knuckled down and rewrote stuff in Java.


People gripe about the strangest things.

After using Go almost exclusively for about 18 months I have had to interface with existing C libraries exactly zero times.


I think it's less an issue of the average programmer using FFI and more an issue of common libraries leveraging it.

With python, why is it so popular currently? In large part because of its very good data science and machine learning ecosystem. And why does that exist? Mostly because python libraries like numpy, theano, and scikit-learn were built on top of mature, high-performance C libraries like OpenBLAS, LAPACK, and CUDA.

I very much doubt that anything like scipy would exist if the developers had to reinvent the wheel of the underlying numerics libraries from scratch. C's been around a long time. There's a huge amount of high-quality, mature software that already exists in a C framework. A language's ability to easily "plug in" to the C ecosystem is a major leg-up when it comes to bootstrapping its own comprehensive library ecosystem.
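That low-friction C interop is visible even without numpy, via the standard library alone. A minimal ctypes sketch, assuming a Linux box where the C math library is loadable as libm.so.6:

    import ctypes

    libm = ctypes.CDLL("libm.so.6")         # load the C math library
    libm.sqrt.argtypes = [ctypes.c_double]  # declare the C signature
    libm.sqrt.restype = ctypes.c_double

    print(libm.sqrt(2.0))                   # 1.4142135623730951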


How is FFI remotely strange? Not everyone is doing webdev, and even then, you would be surprised how many libraries are leveraging C ones.


I've run a few semi popular open source projects, and it's surprisingly common to hear people tell me it's useless because it's lacking their specific pet feature. Now these comments just make me laugh, just like the parent's gripe with slow FFI


Calling the FFI a pet feature is nonsense.


I think you don't get the point. Most people don't create such interfaces, but the libraries/systems they work with use them a lot, at least in other languages.

This is for example the reason why there won't be a large mathy/scientific ecosystem in golang.


Python needs FFI, because its native performance is fairly poor [1]. In general, Go gets away with less FFI because it's fast enough that it doesn't need things to be implemented in C to run quickly. This is especially true if you consider "Go" as "Go + its ASM", which is probably what you'd want if you think of it in terms of science programming.

For another example of a similar effect, Rust has great FFI. Yet I would expect over time it will be necessary for fewer and fewer things, because Rust is already roughly on par with C, and over time, a native Rust API will still be preferable to a C API wrapped with a Rust access layer. It will always have great C FFI, by its nature, but the percentage of projects that won't need it is already pretty high and probably only going up over time.

[1]: People seem to misinterpret this statement a lot, as if I'm saying Python is bad or something. No; it is simply this: Python performance is not very good at a very raw level. It is merely one characteristic of a language out of many, many relevant ones, not a full assessment of the language. Python has many other dimensions in which it has superior capabilities. It just pays for that on the performance dimension. (Whether that's an essential or an accidental tradeoff, well, ask me again in ten years; the programming language community seems to be in the process of working that out right now.)


It's not just about speed. Great mature libraries have been written in C over the decades. Rewriting them in a new language is a herculean effort. Writing bindings to them lets you use the fruit of all that labor.


What you say is true, yet, I observe that people undertake that herculean effort with some frequency.

Whether that's a good idea, and why that is, those would be entirely separate conversations. But it's just an observable fact that languages pick up native implementations of core functionality over time, subject to certain performance restrictions (e.g., I'm sure that if it wouldn't be unusably slow, Python would have a native-Python image library... it's just Python doesn't really have that option).


The effort I'm familiar with is the many attempts to make a decent linear algebra library in Haskell. There are a number of libraries with huge work put in, but none has reached BLAS/LAPACK parity. Nothing's remotely comparable to numpy in ease of use yet. Haskell picks up more libraries as time goes on, and the existing ones mature, but progress is so slow that I'm skeptical they'll ever match the usefulness numpy had a decade ago.


I think it's really easy to focus on the big hitters and forget they are the exceptions. Yes, matching lapack, any GUI toolkit, a browser engine, and a handful of other things is a big challenge that takes its own community to overcome, not something any language community can do with an incidental fraction of the community's available firepower.

But those are the exceptions, and often you don't need a best-of-breed solution and may prefer the language-native one.

Again, I'm not theorizing about what could be here; I'm looking out in the world, where I see that most libraries tend to emerge out into a native version if the underlying language can possibly meet the basic requirements for performance and such. This is something that needs to be explained, not explained away.

Also... I love me some Haskell, and on a per capita basis the community is great, but if Rust's community isn't already several times larger and growing faster, I'd be stunned, just to pick one example. Haskell has some very interesting cases of best-of-breed libraries, but it doesn't exhibit the library profusion you get from sheer personpower. (Of course, it doesn't really have the problems you get when your libraries are generated by sheer personpower either.)


> if Rust's community isn't already several times larger and growing faster, I'd be stunned

I'd love to see the numbers on that if you know where to find them.


If the Go silo offers everything you need, then yes, there is no need for a good FFI in Go... but if you want access to the low-level ecosystems of C/C++/Rust without a heavy penalty, then it might not be an option.


> This is for example the reason why there won't be a large mathy/scientific ecosystem in golang.

That seems weird. Go has a pretty large (and growing) Data Science Community.

Lots of Math/Scientific stuff there, especially including many people from Python and Ruby backgrounds.


Actually, Golang isn't so great. Try to change something lower level, for example in their socket implementation. Also, it promises sane concurrency, yet all the code I've seen uses mutexes all over the place.


> Also, it's trying to promise a sane concurrency and all code I've seen use mutex all over the place.

I think that's a fault of the devs, more than the language. In many cases (not all, of course) Go gives you multiple ways to achieve concurrent safety, with channels being the big alternative. Yet that approach generally (in my experience) requires a very different implementation and has a lot of pitfalls. Overall I don't like Go these days, but I prefer it over Python (mostly due to at least having basic types).

I've switched everything to Rust though. Just as productive as Go (to me), with more tooling. Though, Rust will be much better in a couple years with some additional baking on new features (GATs, Futures, etc).


> I don't like Go these days, but I prefer it over Python (mostly due to at least having basic types)

You should try type annotations; with mypy and/or PyCharm they help find bugs before you run the code. They also make autocomplete and refactoring work correctly (I would say they work better in PyCharm than in GoLand).


The main thing I'm curious about is why so many developers would rather use premade packages than spend R&D on highly optimized solutions.


As someone with a tendency towards writing my own custom solutions (from scratch or forked), I'm learning the hard way why people prefer depending on premade packages: they're usually better documented, better tested (both written tests and real-life battle-tested), and in a team/collaborative environment, having some canonical packages that everyone is familiar with helps to have common understanding, rather than having to explain how a one-off custom solution works.


It really depends, I've seen many examples where:

1. The package offering the functionality I needed was much more complex than needed, and my solution implementing that functionality was much simpler (because it implemented only what I needed).

2. The package was not the best quality, because it was written by someone who was not necessarily better than me, or who didn't spend enough time understanding the problem they were trying to solve.

Having said that, generally popular packages are good quality, although even then #1 applies: if you need just a small piece of a specific package's functionality, try implementing it yourself. It might turn out that the problem was not as hard as you thought, and because you're implementing only what you need, it might be more elegant.


Yes and no. Go has its applications, but it is not built to replace Python but rather to be a better C. That overlaps with the "easy code" part and not much else. Glue code, application scripting, etc. are Python's strengths, and I don't see those going away.


Source on your claim? The best I was able to find is this:

> Go attempts to combine the development speed of working in a dynamic language like Python with the performance and safety of a compiled language like C or C++.

https://techcrunch.com/2009/11/10/google-go-language/


Maybe, I love Go but I'm not sure it's the next "hot" thing, all the recent "hot" languages have been scripting languages PHP, Ruby, Python, JS ... I'd say JS and Python are currently jockeying for that position.


> all the recent "hot" languages have been scripting languages

All the languages you list are over 20 years old. I think the popularity of Go and Rust show that statically-typed languages are having a resurgence.

As a long-time C programmer (with significant bits of bash and perl on the side), I've really enjoyed learning Go. But the responsiveness to developing new language features has been _sooo sloooow_. We'll have to see in 10 years if this turns out to have been a wise decision or not.


You could be right, especially as microservice architectures are very popular.


Isn't JS one of the most popular languages? It has a monopoly in the browser, so it (together with C++) already has the most impact on users. Hard to see how it can become more popular.


The server-side of things can start eating away at other languages. Node.JS is obviously already very popular but it can still grow massively.

I for one have started running TypeScript on my servers whenever I can, I love it. There's still room for improvement though.


> running TypeScript on my servers

I'm curious, do you mean compiling to JS and running it on the servers? Because I would love for the runtime itself to run TypeScript, something like Deno [0] but (maybe in the future) as mature as Node.js.

[0] https://deno.land/


> It has a monopoly in the browser

Well, there’s WebAssembly…


That's what I mean: Python and JS are jockeying for position as the hot language right now.


I'm sort of wondering if there might be a PHP resurgence. Things have gotten much better recently. The historical warts are easier to avoid now.


I was a PHP hater before, but at my last job I had to rewrite a big chunk of code from PHP 5.x to 7.2. I actually quite liked it. I would happily work with PHP 7.2+ code any day, but still wouldn't start a new project in it.


Sounds like the author of the article has never been involved in the Javascript community.


Yeh, I recall working on map/reduce using Fortran/PL1G in the 1980s; we had to build an entire suite of JCL programs to manage everything, including build/deploy.


I don't agree at all. Your comment basically says: "It used to suck badly, so don't complain that it sucks now." I think dynamic typing is the bane of good software, and we as an industry should actively discourage new code from being written in dynamically typed languages.

That's not to say that Python doesn't have its place. But I see it more as a programming language for small utilities no more than 2k loc in length.


No, what I'm saying is that it used to suck really badly and we survived just fine, so I find all the over-wrought hand wringing about the havoc and burnout caused by Python's flaws hyperbolic.


>and we survived just fine

No we damn well DID NOT. This is one of the most infuriating lines used all over for dismissing concerns about modern things and developments. "Oh why do we need vaccines/antibiotics, we survived just fine without them" except, you know, for all the hundreds of millions of deaths. Minor detail that. Mere "survival" through gross inefficiency is not a real yardstick.

By the same token, "it used to suck really badly" in programming and the result was awful practices, crashing, and security problems out the wazoo. We sort of "survived" that by a mixture of just plain eating the heavy losses and having them be somewhat mitigated by virtue of simply having less surface area for damage since less stuff was tech based or connected. Infrastructure was less built up. But times have changed and standards for, and value of, security/stability/supportability have increased dramatically.

Yes in a sufficiently large group I'm sure you'll be able to find individuals engaging in hyperbole about any such thing on the internet. But that doesn't in turn mean that there aren't very real, very serious concerns raised around the context of modern practice. Dismissiveness based on bad historic practice is not merely uncalled for, it's just plain weird.

Edit to add: another issue a lot of this dismissive comments tend to ignore is cost & skill. Yes, great things have been done with options we'd now consider subpar in the past, that's what they had to work with. But those things were done by the best of the best, with huge budgets, lots of experience and so on. A very important part of advancement is allowing the "same thing" to be done more cheaply by more people, and in turn be used for wider array of applications.


All the languages you mentioned are statically typed, though, so they already have a large advantage compared to Python. To be honest, if I had to choose to inherit a 1M LoC code base, I would even choose COBOL over Python.


To understand the hate, you have to realise that no one likes being forced into using a particular technology. Especially one that is more of a lowest common denominator and ignores much of the progress in programming language research over the last 40-50 years (e.g. expressive static type systems, and Python still markets itself as "strongly typed").


> and Python still markets itself as "strongly typed"

What's wrong with that? It is strongly typed (doesn't coerce everything like Javascript), just not statically typed (objects have types but variables don't).


Type means statically typed unless you qualify it with your own nomenclature "dynamic". Please don't hijack the mathematical definition of type.


I go with what helps me get work done. I have a feeling many people are the same. Python lets me be productive in a crazy variety of tasks and mostly gets out of my way when I do so.


In my industry, the companies that have standardised on Python and consequently now have large Python codebases are not very productive environments anymore. Perhaps they were once, for the first few people, in the first few months. It's no surprise that the Python community has now started to try and retrofit types.


Why do you think Python should be compared to Fortran? Why not Rust or Julia? If we compared everything to something worse, humanity would not progress at all. I have spent 10 years on Python and I sympathize with much of the criticism here. I hope Rust or Julia will replace it for data crunching soon.


Fortran is not worse. It is far better than you think it is, really. Just try the Fortran 2018 standard and you will see.


I love Python as a scratch pad for playing around with code, but I don't think I would put anything written in it into production. It's just too hard to debug, maintain, and deploy once it reaches even a medium amount of complexity.

Interactive Python and Jupyter notebooks are an absolute joy to work with though.


Could you please elaborate on what exactly gave you such a negative experience? Which stack and tools, and why was it hard to debug?


Don't forget COBOL


A hundred thousand years from now, when the Terran Empire's Dyson Swarms are ubiquitous throughout the Orion Arm, the relativistic generation-ships of the Andromeda Colonization Fleet have set out on their multi-million-year journey across the intergalactic deeps, and the World Computers housing the Great Intelligences serve the daily needs of quadrillions of citizens, there will still be job ads for COBOL programmers.


And they'd still bitch about how everything else still runs on JavaScript.


Many of us are trying very hard to forget COBOL.


Better yet, punch cards.


I hate hammers with soft grip handles, but my current job is forcing me to use one. They're so mean. I prefer hammers with wood handles.

Oooh look, there's a hammer over there with a modern handle. I want that. I don't like wood anymore.


https://conan.io/

https://vcpkg.readthedocs.io/en/latest/

You were saying about C/C++ package management?


Yes, pls, go on and tell us how it's the default package management tool for C/C++...

It is a step forward, sure, but it's a far cry from Python's pip or Rust's Cargo or even Dlang's dub.


Python's PIP? Or do you mean Anaconda? Or VirtualEnv? Or Poetry, which someone here has pointed to?

Also - you're moving the rhetorical goalposts. First you claimed there was no package management system; now you're complaining about the lack of a default one.


Technically he's wrong, and you are right, in that there are package management systems for C++. Practically, your being technically correct does not matter.

No package manager for C++ is popular enough to let C++ programmers avoid all the hard work of managing packages manually, because there are always some (or most) packages you rely on that are not in these package management systems.

pip, for all its shortcomings, does include almost all of the Python packages you will ever need. That is a significant difference in practice.


Given that OP was talking about getting C/C++ packages "off a usenet archive", I think it's clear that they were talking about what coding C and C++ was like two or more decades ago.


Exactly -- one of the blog posts on the page they link to has a "3 Month Anniversary Survey"


Historically, that is a relative newcomer to the C/C++ world. You completely missed the parent's point.


I'm pretty sure this didn't exist in the 80's and 90's ...


What about pkgconf + autoconf? It might not be the grand unified package manager you're conditioned to look for, but it works well and is pretty much universally used. Besides, C and C++ aren't about language ecosystem lock-in, but about creating standardized artifacts (shared libs, static libs, and executables) designed to work and link well in a polyglot environment (well, C++ maybe less so with its symbol mangling/typesafe linkage).
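For concreteness, the classic workflow looks something like this, assuming the library ships a .pc file (zlib does on most distros):

    $ pkg-config --cflags --libs zlib
    -lz                                  # typical output: no extra cflags when headers are on the default path
    $ cc main.c $(pkg-config --cflags --libs zlib) -o main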


I don't think it's hyperbolic. If so many people are complaining, then we have an issue. I have tried to learn Python, but every time I did, some things kept turning me off.

- First, indentation was an issue for me, but I looked past it and went ahead to give Python another go.

- Even best-in-class IDEs struggle to give any kind of insight into Python code.

- Two versions: at my workplace we use Python 2. I would prefer to learn the newer version but don't have a choice. When Python 3 came out they should have initiated the deprecation of Python 2, but that does not seem to have been the case. Tonnes of libraries are still Python 2 only. They should have kept the language backward compatible instead of fragmenting the entire community.

- So many ways to do a thing (see the snippet below). This is subjective, but I need a language which gives predictable performance for a given piece of code. Go does this: there's usually one way of doing things, and only one. There's no need to know the nooks and crannies of the language, nor does Go even have such nooks and crannies.
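To make the "so many ways" point concrete, here are, for instance, three equivalent spellings you'll meet in real Python code:

    squares = [x * x for x in range(10)]              # list comprehension
    squares = list(map(lambda x: x * x, range(10)))   # functional style
    squares = []                                      # explicit loop
    for x in range(10):
        squares.append(x * x)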


> At my workplace we use Python 2.

You're probably aware of this, but on the off chance you aren't, Python 2's end of life is 2020. You really should be moving to Python 3.
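If it helps, the standard library ships a mechanical starting point for the port; on a hypothetical legacy_module.py:

    $ 2to3 -w legacy_module.py   # rewrites to Python 3 in place, leaving a .bak backup

It won't catch everything (bytes/str handling in particular needs human judgment), but it automates the rote print-statement-style churn.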

> Go does this.

There is a lot of love for Go and Rust on HN, but unless your region of the world is significantly different from mine, chances are that there won't be a Go or Rust job in your lifetime. I've only ever seen one of them mentioned in a job opening in my entire country, and that was as "nice to know" for a C++ job at Google.

I’m sure Rust and Go are truly excellent languages, but that doesn’t really matter for most people, if they never manage to see any adoption outside of Silicon Valley.


> "nice to know"

So true. There are tonnes of Python, Java, and other legacy software jobs compared to Go.

I am probably going to have to learn Python (which I think is better than the existing legacy technologies) because of this, though I am not really interested.


Place your bets now; I predict the day they turn off pip2 will be the real Y2K.


This is how I predict it will happen.

On January 1, 2020, ABSOLUTELY NOTHING will happen.

It will be gradual: as you keep developing, more and more packages will refuse to work. If you realize there's a bug in one of your dependencies and the fix is in a version that doesn't work on Python 2, tough luck: you will either have to backport the fix or migrate the code.

As time progresses it will become more and more work to deal with.

And guess what: some packages didn't even want to wait; they have already dropped Python 2 support: https://python3statement.org/ (look at the Projects Timeline), so it is starting to happen right now. I'm wondering whether pip itself will decide to drop support in 2020; that might end up being the biggest hit.
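In the meantime, the usual survival tactic is pinning dependencies in requirements.txt to their last Python-2-compatible major versions, e.g.:

    # last release lines that still support Python 2
    Django<2.0     # Django 2.0 went Python 3 only
    ipython<6.0    # IPython 6.0 dropped Python 2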


Python 2 support ends in less than 5 months. They extended the deadline 5 years ago.

I’d say deprecation has been initiated.


Complaining isn't the problem. Yes, there are package management problems, and virtualenv solves most of them; I'm not saying there aren't any. Python 2 really stopped being a problem for me last year; I haven't hit an issue in a while.

I'm talking about the comments saying it was causing their burnout "x100" and other such hyperbolic statements.

