Drawbacks of Python (quora.com)
97 points by iflywithbook 33 days ago | 116 comments



I agree with the response that there are a few "gotchas" with python, like with any language! I have gotten caught up with a few type problems when dealing with sending raw binary data over serial and things like that - but luckily python has great unit testing frameworks to help with this.

My main beef with python is that it is harder to deploy projects to other people, without them setting up a virtual environment or things like that. I know that there are some tools to help compile but they aren't easy to use.

Also, it is hard to create GUI applications vs languages like C#!

Other than that I love using python and I think it is a great tool to use in embedded development along with c/c++.


I wish it had a better async story. We've had production outages that were difficult to debug because some third party library made a sync call deep in the call stack and starved the event loop. APM showed performance degradation in unrelated endpoints. Eventually health checks began failing and containers were killed, putting the load on other containers which inevitably fell over and so on. We've seen similar issues with CPU starvation due to CPU-intensive tasks as well, though they were more straightforward to debug. We also continue to see runtime type errors because someone forgot to await an async function: `rsp = aiohttp.get() # oops, rsp is the promise, not the response!`.
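That forgotten-await failure mode can be sketched with the stdlib alone (using `asyncio.sleep` as a stand-in for the aiohttp call; `fetch` is a made-up name):

```python
import asyncio

async def fetch():
    # stand-in for an HTTP request made by an async client
    await asyncio.sleep(0)
    return "response"

async def main():
    rsp = fetch()              # oops: rsp is a coroutine object, not the response
    print(type(rsp).__name__)  # coroutine
    rsp.close()                # silence the "never awaited" RuntimeWarning
    rsp = await fetch()        # correct
    print(rsp)                 # response

asyncio.run(main())
```

Nothing fails loudly at the call site; the bug only surfaces later, when code tries to use `rsp` as a response.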

As far as deployment goes, we've had good luck with pex files (executable zip files containing everything but the interpreter). https://github.com/pantsbuild/pex. Your deployment target still needs the right version of the Python interpreter and .so files that your dependencies might link against.

Personally, I've found that Go solves most/all Python issues without introducing too many of its own--and anyone writing Python (sans mypy) doesn't get to chastise Go for lacking generics! :)


> anyone writing Python (sans mypy) doesn't get to chastise Go for lacking generics! :)

Why do you say this point in particular? Python's duck typing makes generics very easy, no? i.e., you can write an algorithm in python that just assumes a particular method is callable and you don't have to code gen individual implementations for different types. plus the dunder (e.g., `__iadd__`) methods give a lot of useful basic functionality
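For instance, a hypothetical `total` function is "generic" purely by duck typing - it works for anything whose elements support `+`:

```python
def total(items):
    # no type parameters, no codegen: + dispatches via __add__/__radd__
    result = items[0]
    for item in items[1:]:
        result = result + item
    return result

print(total([1, 2, 3]))        # 6
print(total(["a", "b", "c"]))  # abc
print(total([[1], [2]]))       # [1, 2]
```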


Mostly I was just amusing myself, but here's what I meant:

Usually when people refer to generics they're referring to a static type system that allows for expressing generic algorithms. (Untyped) Python inherently has no static type system, so necessarily can't support generics by that definition. The other definition of generics that _is_ satisfiable by (untyped) Python--the definition you presumably had in mind--is equally satisfiable by Go via its `interface{}` type. So regardless of which definition you adhere to, "Go lacks generics" is no more valid a criticism for Go than for Python.

Practically speaking, there's some syntax ceremony involved in using `interface{}` in Go; coming from Python or another dynamic language, this ceremony will probably offend your sensibilities--why is it so hard to opt out of the type system!? But that's by design--Go generally tries to keep you on the path of correctness without being overbearing (as many find stricter languages--e.g., Haskell--to be).


>The other definition of generics that _is_ satisfiable by (untyped) Python--the definition you presumably had in mind--is equally satisfiable by Go via its `interface{}` type. So regardless of which definition you adhere to, "Go lacks generics" is no more valid a criticism for Go than for Python.

Well, that's not exactly the case. `interface{}` might have the same tradeoffs as Python's dynamic typing (plus more ceremony), but that's not the real issue.

Python is inherently a dynamically typed language, so not having types and generics is expected and idiomatic.

For Go, you have a type system and static type checking, but you can't properly use it when you resort to interface{} to make generic algorithms.

So while for Python not having types/generics is business as usual (and that's part of the very promise of the language), for Go not having generics means losing two of its main promises: static type checking and speed.


For whatever it's worth, I'm a proponent of generics in Go.

"Static type checking" does not mean "every conceivable program is safely expressible by the type system". Haskell doesn't make this promise and neither does Go.

Anyway, Go is as slow and as unsafe as Python in the ~1-5% of code that is generic. It's still a pretty good deal even if you were mistakenly expecting 0%.


>"Static type checking" does not mean "every conceivable program is safely expressible by the type system". Haskell doesn't make this promise and neither does Go.

No, and I didn't make this claim or demand this either.

But it should mean: "any algorithm variation where just the type changes should be expressible by the type system, without me having to rewrite, e.g., a list sort for int64 lists, int32 lists, float lists, etc. It's 2019 already".


That's not what static typing means. If you want it to mean that, go right ahead. I agree that Go should support that feature, but it has no bearing on the definition of static typing nor on Go's position relative to Python with respect to speed or safety.


Trio is my favorite async library. It's simple and extremely powerful. https://trio.readthedocs.io


PEX looks interesting! Thanks for the link. I have this same issue


I have the same two complaints. I’ve automated some large tasks at my job with Python, and avoided external dependencies because, while I’m sure I could deal with it, nobody else here is a programmer and I don’t think it would go well if someone had to try and figure that out later. “Install python (3.6 or later), point script at the csv file” avoids that complexity.

It’d be better if it were a self contained executable with a GUI but not worth the time to figure that out.

I messed around with TkInter a while back, not sure if anything's new since then. For executables there was py2exe, cx_freeze, and PyInstaller, but the situation with packaging DLLs into an executable that would actually work on someone else's computer was iffy last I checked.


If you just want to make a self-contained executable for Windows users, I've used py2exe a few times and it always works well. It's basically one command, and it gives you an exe of your .py file.

https://pypi.org/project/py2exe/


py2exe and anything else that uses Joachim Bauch's memory loader is loading DLLs in a broken fashion that is bound to bite you sooner or later, assuming it works in the first place.

Reimplementing the Windows loader from scratch - in a buggy, incompatible way - which is what that code is doing, is a gross hack and should not be used to build anything that's meant to be reliable.

Note that pyinstaller and cx_freeze do not suffer from these issues. Do not use py2exe.


I used to be the type of developer who would always make executables and installers, in years gone by. I find now, in all cases, I whip up what I want to do as a script and then embed it in Flask as a webapp. The only thing I don't do this with is compiler tools that I can bring in with cmake etc. If the app needs data I just add an upload button or have it point at S3. YMMV.


I believe fbs might have addressed some of your GUI packaging problems. I haven't used it myself, but a coworker who builds a Windows GUI application swears by it. From him: "It's the package manager python doesn't deserve"

https://build-system.fman.io/


Looks interesting, though the license situation seems less than ideal.

If it's for an internal only tool, does that fit under all the GPL / LGPL requirements?


> If it's for an internal only tool, does that fit under all the GPL / LGPL requirements?

Yes.


I've found that the built-in tkinter lib is good enough for basic GUIs that you might use on a utility.


That's the one I've tried a bit before, but haven't spent enough time to be comfortable with it. The crux of it for this project is that if I were making a UI, I'd like to move the data validation into that (and add some more sanity checking), with visual indications if there's a bad row or something looks off. Currently it just yells at you and terminates if there's invalid data.

So a UI rewrite would make it much friendlier to other users, but it'd be a time investment I can't justify over the "just don't give it bad data" strategy.

The inputs and outputs are all handled manually so reliability isn't a huge focus as long as someone can poke around for a bit and make it work. Only user is me for now, so ¯\_(ツ)_/¯

    # TODO: make it less shitty


All true, but I don't think python's really supposed to be perfect for making desktop apps; its lack of static typing means a large-scale app would end up becoming a pain to maintain.

That's why it is best used in small teams for data science and scripting; run once and never look back. Most complaints about python IMO fail to appreciate its design philosophy.


Just my $0.02:

I've maintained and/or written more than a few large desktop apps in both python and C++. The python ones are a hell of a lot easier to maintain, i.m.o. Python _is_ strongly typed, unlike JS, and I don't feel like the dynamic typing aspects have ever really bitten me from a maintenance perspective. At least in my experience, I've seen a lot of either verbose/repetitive or highly cryptic C++ written in situations that dynamic typing makes straightforward and, i.m.o, easier to maintain.

To be fair, though, I have a pretty accurate mental model of what python's doing under the hood, and I still regularly find myself mystified by certain C++ language features and what the compiler actually does in slightly ambiguous situations. I've nominally been using C++ longer, but I've used python a lot more frequently. (I'm primarily in scientific computing.)

As a whole, I've always found python to be a nice language from a maintenance perspective, though I'll grant that people have more room to do bizarre things in some circumstances. I've definitely seen some unmaintainable and unreadable python, but I've seen that every bit as frequently in static-typed languages.

I feel that python is quite well-suited to desktop application development. Part of this boils down to Qt being a nice library regardless of whether you're working in C++ or python. However, python codebases tend to be more succinct and readable. In the end, I've consistently found that to be a larger advantage in maintenance than static typing.


I find any one block of code more enjoyable and succinct to read in Python, but with very large implementations I can't figure out a good methodology to maintain it all. How do you deal with arbitrary objects being passed around without knowing what's potentially coming from where? With something like C#, without having to know the whole solution I can easily figure out what's coming into a function and manipulate that to get the desired result. With Python, is it just a matter of getting a model of the whole system? Or do you take advantage of your IDE to hopefully supply all that information to you?


> How do you deal with arbitrary objects being passed around and not know what's potentially coming from where?

Honestly, by trusting that those objects conform to what things are documented to expect and using operations that will naturally raise an error for unexpected cases.

Basically, you don't depend on compile-time or IDE-time checks, you depend more heavily on tests.

It allows for much more of a direct approach. E.g. if something's documented to accept a sequence, I don't care whether it's a list, tuple, some custom object, etc, I just need it to have certain properties like iteration and ability to index.

You write things to expect certain abstract categories of types and do things that will raise errors if they're not in the category you expect. The downside is that nothing is enforcing that in an IDE or at compile time. As a result, you need more integration tests.

In some ways, yes, you're more at the mercy of other developers behaving well. The flip side to that is more flexibility and less duplication.
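As a sketch of that style, a hypothetical `head_tail` is written against an abstract category ("indexable and sliceable") and lets wrong inputs fail on their own:

```python
def head_tail(seq):
    # works for any indexable, sliceable sequence; a non-sequence
    # (e.g. an int) raises TypeError on its own
    return seq[0], seq[1:]

print(head_tail([1, 2, 3]))  # (1, [2, 3])
print(head_tail("abc"))      # ('a', 'bc')
print(head_tail((9, 8)))     # (9, (8,))
```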


> Honestly, by trusting that those objects conform to what things are documented to expect and using operations that will naturally raise an error for unexpected cases.

Which is really hit or miss in the wild. Even the standard library documentation doesn't reliably document types. There are abundant references to "file-like objects" as though that is a type. It is (or at least was) totally unclear if a "file-like object" supports seek() or just read() and write(). Our internal Python code is a mess as well; we can't keep documentation up to date with function signatures (even though it's part of our code review process).

As someone who develops primarily in Python, I think you're missing out on quite a lot if you think C++ is a good characterization of the state of static typing. Having used Python and Go and a few other languages, I'm convinced that static typing is more productive when it's done well.


Ah, OK, so you build the "checking" in via process and practice. It requires more discipline, but you end up evolving to kind of a similar place, but with more flexibility.

Thanks!


> My main beef with python is that it is harder to deploy projects to other people

On the other hand, I think python has more "batteries included" than other scripting languages and for many projects you can usually figure out how to eliminate dependencies.


I know you mentioned the build tools but I didn't know if you knew they can work with Qt.

Qt for Python and the fman build system would allow you, in some cases, to deploy a single executable. https://wiki.python.org/moin/PyQt/Deploying_PyQt_Application...

That being said, I had issues getting everything (fman and pyside2) installed correctly with Pipenv and then locking the specific versions that are compatible.

A GUI builder (qtdesigner.exe, I think) comes with the PySide2 package.


C# has been more focused on Windows, but there it has been very easy to create GUI applications. In fact there are probably more (Windows) GUI applications made with C# than with any other language. Maybe Electron and JS have taken over now.

For C# on Linux Avalonia looks promising. https://avaloniaui.net/


This project - https://github.com/unixtreme/D3Edit - has an interesting approach of bundling the virtual environment for Linux/Mac/Windows and activating the corresponding one when you launch the script.


> Also, it is hard to create GUI applications vs languages like C#

While this is a problem, for many projects, I have overcome this by including a local web-server, that accepts requests from 127.0.0.1 and serves a web app as a GUI.

It's not a perfect solution, but it worked for me in many projects.
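A minimal stdlib sketch of that pattern (no framework; `AppHandler` and the HTML are made up), bound to 127.0.0.1 only:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # serve the web-app "GUI" to the local browser
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>My tool</h1><p>buttons and forms here</p>")

if __name__ == "__main__":
    # binding to 127.0.0.1 means it is not reachable from other machines
    HTTPServer(("127.0.0.1", 8000), AppHandler).serve_forever()
```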


I also run up against the deploying problem quite a bit. For the most part, only Windows has been a real issue for me. Lately I've used Cython and found that adds an even tougher layer of difficulty to the problem.


These are minor annoyances to me. The major annoyance I have with Python is that most Python code makes such heavy use of classes, even when it only needs lists, tuples, and dicts.

Why do I really need a class, when:

- Instance fields can be added and removed at runtime

- Private fields and methods are not actually private ("we're all consenting adults here")

- No pattern matching on types and `if isinstance(foo, Bar):` is an anti-pattern because it's a duck-typed language

- Interfaces are neither available nor necessary due to late binding. Abstract Base Classes are the closest thing to an interface, but they're also unnecessary because again, it's duck-typed.

- Inheritance is widely considered to be a mistake ("Prefer composition over inheritance" etc)

- Classes tend to complicate testing

I can use classes to write custom data structures, I guess, but in a language as high-level as Python, I can just model trees and graphs with dicts. If I need a more performant data structure, I'm not writing it in Python in the first place.

I've seen some very experienced Pythonistas recommend using namedtuple instead of classes for most use cases, and I tend to agree with them. That gets you immutability as well as dot notation for accessing fields.

Clojure programmers seem perfectly happy with just lists and maps, and after experiencing that, classes feel like a bolted-on misfeature in Python.
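For illustration, the namedtuple version of a small record:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

p = Point(x=1, y=2)
print(p.x + p.y)   # 3 -- dot access, no class boilerplate
try:
    p.x = 5        # immutable: assignment raises AttributeError
except AttributeError as e:
    print("immutable:", e)
```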


You can even subclass namedtuples to get immutable value objects with helpful methods.

I like classes over dictionaries because I can look at the class definition to see what variables my function gets, and what functions (methods) exist to work with them.

I used to work with a code base that passed dictionaries around, and there were a lot of hacks where people had some data somewhere that they needed somewhere else, and the dict was available both places, so why not add it to that... Different pieces of code added different things and you were never quite sure which version of the data was passed this time for the "layers" parameter. Somehow it doesn't happen as much with classes.

It was like duck typing to the extreme. I call it smurf typing, if it smurfs like a smurf, smurfs like a smurf and smurfs like a smurf, nobody knows what's going on anymore.

Immutable value classes, those are good.
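A sketch of that subclassing trick (the `Point`/`norm` names are made up):

```python
from collections import namedtuple

class Point(namedtuple("Point", ["x", "y"])):
    def norm(self):
        # a helpful method on an otherwise immutable record
        return (self.x ** 2 + self.y ** 2) ** 0.5

print(Point(3, 4).norm())  # 5.0
```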


Isn't the reason to use classes the dispatch aspect of it? If I take in a duck-typed object and call obj.foo(), then I get an appropriate class-specific version of foo().

Like you said, isinstance() is an anti-pattern - I don't want to define a function that behaves differently based on explicit testing of input type, because that wrecks the duck typing. Right?
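That dispatch can be sketched as (hypothetical classes):

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

def greet(animal):
    # no isinstance checks: obj.speak() picks the class-specific version
    return animal.speak().upper()

print(greet(Dog()), greet(Cat()))  # WOOF MEOW
```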


Yes, exactly. Nail on the head.

When I do have a datatype that needs to be handled differently based on some condition, I tend to use ABCs with only one level of concrete subclasses, just to make that implicit method interface explicit.


> Why do I really need a class,

Among other cases, you need a class where natural, convenient, consumer-friendly use of a data structure should leverage features like operators and standard protocols that rely on the existence of particular methods (often special methods.)
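A sketch of what that means in practice - a hypothetical `Money` class whose special methods enable operators for consumers:

```python
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):   # enables m1 + m2
        return Money(self.cents + other.cents)

    def __eq__(self, other):    # enables ==
        return self.cents == other.cents

    def __repr__(self):
        return f"Money({self.cents})"

print(Money(150) + Money(250))   # Money(400)
print(Money(100) == Money(100))  # True
```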

> If I need a more performant data structure, I'm not writing it in Python in the first place.

While Python is linearly slow, that’s no excuse to use data structures with poor big-O performance (for small n, maybe, but that's no more or less true of Python than of a more performant language.)

> I've seen some very experienced Pythonistas recommend using namedtuple instead of classes for most use cases

Namedtuple is just a function that returns a fresh class; you can't use “namedtuple instead of classes”, as it's a convenience mechanism for creating a class with certain commonly-useful features.

> Clojure programmers seem perfectly happy with just lists and maps

And vectors. And sets. And records. And a bunch of other common data structures.

But, yes, the use of multimethods for type-based dispatch in Clojure replaces classes. However, Python doesn't have multimethods built-in, and the core language and stdlib don't use them (they are available in a non-standard library).


> you need a class where natural, convenient, consumer-friendly use of a data structure should leverage features like operators and standard protocols that rely on the existence of particular methods (often special methods.)

I understand and I can't disagree, this just seems much, much narrower in scope than what classes are used for in typical Python code I see every day. Which is perhaps due to influence from other languages like Java/C++ etc.


My solution to this is http://attrs.org or dataclasses. They make a huge difference in readability and type safety.


I feel like PEP 557 missed an opportunity when it named data classes. The syntax is such an improvement that there is very little reason not to use them over ordinary classes for most purposes.


Python probably should've just used attrs directly, like it does with setuptools, rather than making a whole new package.


Yeah, namedtuple is the standard thing now, but it is much heavier and more boilerplatey compared to, e.g., an interface in TypeScript.


I like Python because of the Zen of Python [1]. It fits my brain and has been very helpful. Indeed, in 2004 I chose Django instead of Ruby on Rails just because explicit is better than implicit. Overall Python is productive and good. Yes, it has some warts and corner cases, but they're evident. Among programming languages, Python has a humble and very approachable community. Also, most Python programmers will not hesitate to use a low-level language and interface it with Python when necessary; it's not considered bad.

    >>> import this
    The Zen of Python, by Tim Peters

    Beautiful is better than ugly.
    Explicit is better than implicit.
    Simple is better than complex.
    Complex is better than complicated.
    Flat is better than nested.
    Sparse is better than dense.
    Readability counts.
    Special cases aren't special enough to break the rules.
    Although practicality beats purity.
    Errors should never pass silently.
    Unless explicitly silenced.
    In the face of ambiguity, refuse the temptation to guess.
    There should be one-- and preferably only one --obvious way to do it.
    Although that way may not be obvious at first unless you're Dutch.
    Now is better than never.
    Although never is often better than right now.
    If the implementation is hard to explain, it's a bad idea.
    If the implementation is easy to explain, it may be a good idea.
    Namespaces are one honking great idea -- let's do more of those!

[1] https://www.python.org/dev/peps/pep-0020/


I agree with a lot of these things, but actually disagree that Python is good at exemplifying some of them.

For example, semantic whitespace is arguably implicit, hard to read, invites density (compared to an extra line for braces), and (again arguably) is ugly.

Yes, some of these things are subjective, but it's hard for me to confidently recommend Python to someone with these values.


I can't even tell you how many hours I've lost due to someone committing an incorrectly indented line in python. This often happens during merge conflicts, for example.

Because logic in python is interpreted via indentation, it's very difficult for python to realize when a line is indented incorrectly. It just assumes that if a line is indented a certain way, it was intentional.

In programming languages like Ruby or Javascript, indentation is stylistic and has no effect on the logic. You can actually use the "auto-indent" feature of many editors, and you can use linting tools to ensure formatting. If you have an extra "end" or an extra ending bracket, your IDE can immediately warn you. Not so with python!
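A sketch of how a mis-indented line - say, from a bad merge - stays syntactically valid but changes the logic:

```python
# both versions parse; only one is what the author meant
def process(items):
    total = 0
    for x in items:
        total += x
    return total          # returns after the loop finishes

def process_buggy(items):
    total = 0
    for x in items:
        total += x
        return total      # mis-indented: returns on the first iteration

print(process([1, 2, 3]))        # 6
print(process_buggy([1, 2, 3]))  # 1
```

Neither the interpreter nor a linter can flag the second version, because it is perfectly legal code.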


I've been using Python for about 5 years or more now and have never had an issue with the indentation.

If you are using notepad as an editor then maybe this could be an issue?

Every other editor I've used validates the syntax and quickly catches misaligned statements. Auto formatting works too.


> very difficult for python to realize when a line is indented incorrectly. It just assumes that if a line is indented a certain way, it was intentional.

How is this different from some language not realising that a statement should be inside of the curly brackets instead of outside?


No compiler can read your mind. But in a language with brackets, the compiler can at least determine scope even when things are poorly-indented. In Python it can't in a lot of cases.

It's like static type checking: The class of errors the compiler can catch is larger when it's present. Likewise, with visible scoping characters, the class of errors the compiler can catch is larger than when they are invisible.


> But in a language with brackets, the compiler can at least determine scope even when things are poorly-indented.

Yes, but can it determine scope if things are poorly-bracketed? Because that's the fair comparison.


Poor bracketing almost always causes syntax or compiler errors. As long as you have a language that's block-scoped (which is true of modern JavaScript and most other popular languages), you'll get an error if you make a bracketing mistake.

That's not true of indent mistakes. Semantic whitespace is lossy -- there's data contained in those brackets that some people think is so low-value that it's expendable for the sake of saving keystrokes, but it does have value in practice.


With Python, generally speaking, if you are more than a few indentations deep you are doing something wrong.

Flat is better than nested


That's a good heuristic for a programmer but it's no help to a compiler.


I've lost approximately zero hours to indenting or formatting issues in Python or other languages. Unit tests usually do a good job of catching unexpected behavior quickly.


For some of those issues, you could type brackets instead of unit tests. Brackets are a lot less work.


For all of these same issues, using a decent editor and a linter saves approximately the same amount of work.

Every time this discussion comes up, it's clear that people simply aren't willing to accept that this is a solvable problem, despite the many people who always come out of the woodwork to insist that we've solved it. Is it really that hard to accept that someone out there may have solved something you haven't?


As people have said in other comments, no static analysis can read your mind. You can have valid Python that contains logic errors due to indentation mistakes, and you can only catch it at runtime. Period.

That said, everything is a trade-off. Semantic whitespace has downsides, but if the upsides outweigh, then it's a great idea.

At this point, I have no idea what the upsides are, other than saving a few keystrokes (which truly is solvable with a good editor).

It's like removing the ending semicolon in JavaScript. Sure, it almost never causes a problem, but sometimes it does, so why do it?


gotofail (https://www.imperialviolet.org/2014/02/22/applebug.html) was written in C with brackets and everything.


>For example, semantic whitespace is arguably implicit, hard to read

How is it implicit and hard to read? If anything it's very explicit, instead of just a single { and } to denote "the next part is the body", it explicitly requires visibly indenting the body.

That's easier to read, and is the expected "good practice" in C and similar "brace" languages anyway (you're not supposed to use { and } without also indenting your code there either).

Python's semantic whitespace is more visible than having a { and } in C e.g. and not indenting, or having a for/if/etc without braces with a single statement attached and not indenting.

Not sure what's ugly about:

  def double(n):
      return n*2
compared to:

  function double(int n) {
      return n*2;   
  }
or worse:

  function double(int n) {return n*2;}
(especially with larger bodies -- in fact this non-required whitespace property is where 90% of the "obfuscated-C" ability is based on)


Maybe different people have different experiences with Python. After C and Perl, I have been using it for over 15 years, and among the languages I have seen, Python code is one of the most readable so far. Again, subjective, as I like sparse and empty spaces in architecture with minimalist design. For me, whitespace plays the same role in Python code.

Whitespace forces the programmer to make code readable. Obviously today, with tools like gofmt and rustfmt, it seems easy, and those were inspired by Python.

The Python PEP process has been adopted by many programming languages. Try the JCP process and see what I am talking about.


I'd be fine with python requiring indentation; what bugs me is the lack of an end delimiter. IMO it makes things like auto-indentation, auto-formatting, copy-pasting chunks of python code etc. a worse experience.


If C and Perl are your comparison points, obviously python will seem better...


I have tried golang, Java, C++, JS and recently Rust. I can still confidently say Python code is the most readable of them all.

Indeed, so much so that Python style was adopted by the Swift language, and it does look cleaner than others.

Obviously I like concise Perl and Haskell code too - they have a beauty of their own - but that does not make Python's whitespace bad.

It's just people who feel it's a burden to explicitly follow whitespace who hate Python indentation, based on my observations. Since suddenly, instead of braces or statement ends, one needs to use a visual sense of beauty along with logic.


In what way has Swift adopted Python style?


Having a bit of experience with Python, I’d say it doesn’t exemplify

> Special cases aren't special enough to break the rules.

> There should be one-- and preferably only one --obvious way to do it.

Python is complicated enough that the language and standard library often have multiple ways of doing the same thing. And there are still a number of “special cases” in the language, or at least non-intuitive results. Certain things are mutable (probably for performance?) when they shouldn’t be, some things behave in odd ways, and certain distinctions are made blurry but then it comes back to bite you when you do something non-trivial and the differences become apparent.


This is the way I feel too. Python is both redundant and inconsistent at a basic level. The APIs for `map`, `filter` and `reduce`, the inversion between `split` and `join`, `sort` vs `sorted` -- it's all very weird. And these aren't advanced features.

When I hear people describe Python as simple and elegant, it feels like they're using a different language than the one I know.


Note: in practice, almost no one uses map, filter, and reduce. Those are almost 100% replaced by list and generator comprehensions in code written in the last decade or so.

The difference between split and join is kind of weird, I'll grant. I'm not sure why it was done that way.

`sort` is in place. You're telling an object to order itself: "hey you! Sort yourself!" `sorted` returns a copy of an object, but... sorted. You're asking for the sorted version of that thing.
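For reference, a quick sketch of the in-place/copy distinction, plus a comprehension standing in for map/filter:

```python
xs = [3, 1, 2]

ys = sorted(xs)   # sorted(): returns a new sorted list
xs.sort()         # .sort(): sorts in place, returns None
print(ys, xs)     # [1, 2, 3] [1, 2, 3]

# a comprehension replacing map + filter:
evens_doubled = [x * 2 for x in range(10) if x % 2 == 0]
print(evens_doubled)  # [0, 4, 8, 12, 16]
```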


So this just links to someone's opinion on Python 2 vs 3 instead of an actual discussion on the drawbacks of Python, right?


To be fair, it also highlights a benefit of Python 2.2 over 2.1. (I.e., improvements made in the language eighteen years ago.)


Dunno if Ruby locked my mind into a certain mode, but I've never felt good doing anything with Python :( Two different projects I tried participating in, with a lot of years in between, still felt the same. Granted, I can't say I've really studied it properly.


Quora click bait. A dull and benighted article that adds nothing to the discussion.


"No language is perfect" is the perfect reply to asking about the drawbacks. The ability to get things done in a language is what allows someone to deliver VALUE to a client (or for their own projects). A workable solution now trumps a perfect solution that will take too long to implement.


JavaScript is a pretty bad language in my opinion, but does have the universal deployment problem solved a lot better than any other.


JavaScript is the perfect example of give hackers a bit and they'll take a damned terabyte.


JavaScript is like Buckley's cold medicine - It's awful, but it works.


Unrelated to this comment, based on your earlier comment - check this out:

http://www.classicshell.net/

This transforms the start menu on Windows into any version of it you like.


I actually expected some elaborate work about the imperfection of Python.

What I got was a list that started with at least three points that are already obsolete, because they're only valid for Python 2, which has already been declared obsolete.

Every language with some history had its humble start and keeps evolving. Holding the errors of the past against it is just not fair. You could of course drag out the first Java version and conclude that Java is bad -- but you would really only invalidate your own opinion.


The main drawback of Python for me, compared to Node.js, is the whole business of sharing projects across machines with virtualenv or whatever. package.json and node_modules just work by default in a much cleaner way. You can use npm or yarn or just a zip of the node_modules to share the environment with a colleague or to deploy.


I feel like this is a common argument, but a bit disingenuous. One of the major reasons venvs are more complex and less portable than node_modules is it's fairly rare for node to bind to c/c++, but very very common for python to do so.


conda solves that for Python at least as well as npm/yarn does. I don't understand why it is not in wider use (except for lack of PR).


If you want to add your own project to Conda, you need to use conda forge, because Anaconda itself is a curated distribution.

Maintaining a conda forge package has been, for us, a complete nightmare. If you depend on another conda-forge package you can have the issue that the library is configured to suit whoever first wrote the conda-forge package - i.e. by disabling parallelism or providing only shared/static libraries, and have that maintainer be completely unresponsive or unwilling to change it.


That is a downside of community maintenance, for sure. Conda-forge probably needs official policies and suggested workflows to have multiple varieties of recipes where feasible. There is never a one-size-fits-all solution when a community serves so many different people with so many different use cases. Spack fits the "my way and only my way" mentality better. Conda is more "close enough, and already ready for me to use."


If you let us know about the package, we can work with you to create a variant.


conda / miniconda has definitely solved most of my grievances with deploying python applications


How does conda solve this? Will conda fall back to pip if the conda repository doesn't contain one of your dependencies?


With your conda environment activated, you can run `pip install -r requirements.txt`. You don't have to use any of the conda commands outside of creating the environment.


What is wrong with virtualenv, pip and pip freeze?


They work, but there's much more friction and many more gotchas than with Node. The requirements file, for example, doesn't distinguish between direct and transitive dependencies, it has to be explicitly passed to pip, you need to have created and activated your virtual environment correctly, and many Python libs have fragile C-based builds. Node and nvm just work.


I use separate requirements-to-freeze.txt and requirements.txt files, so I maintain only the list of direct dependencies (no transitive ones) and only pin versions when I know the code won't work with the latest release. `pip freeze` then creates a package-lock-style file with every dependency versioned, so I can recreate the exact environment again without needing to store a Docker image of it or something like that. See https://www.kennethreitz.org/essays/a-better-pip-workflow

Needing to name the requirements file and needing to activate venv are not pain points I feel.

Wrapping native libraries and not invoking a C compiler at package install time means some trade-offs. https://cffi.readthedocs.io/en/latest/cdef.html
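Sketched as shell steps (the two file names are just the conventions from that essay; the network-touching pip commands are shown commented out):

```shell
# Direct dependencies only, mostly unpinned:
printf 'requests\nflask\n' > requirements-to-freeze.txt

# In a fresh virtualenv you would then run:
#   pip install -r requirements-to-freeze.txt
#   pip freeze > requirements.txt   # pins everything, transitive deps included

cat requirements-to-freeze.txt
```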


I have to deploy web applications written in Python on Windows... no uWSGI, no Gunicorn and no easy multiprocessing because no fork(). It's painful quite often and it makes me loathe the GIL, but I still enjoy writing Python code.


I'm curious what your approach to this is. Relying heavily on Twisted seems to be the obvious choice?


Maybe I'm naive, but in my mind, besides their ecosystems and superficial syntax differences, there isn't that much functionally different between Python and Ruby, with the major exception of blocks in Ruby.

In which case the decision between the two seems to be between a thing and a slightly better version of itself. I totally respect that many people see this differently and probably exactly opposite but to me it always feels really hard to justify using python for this reason.

Just for fairness sake I'm not trying to trash python. I think it's a great language.


Back when I wrote a lot of Python, something I disliked was how one could easily reload a module, but existing instances of classes wouldn't get updated. You could implement the machinery to do so for your own stuff, but of course that could be arbitrarily flaky.

Performance was definitely a concern, but for the sort of programming we were doing it normally wasn't a big issue.

Duck typing could definitely be an issue — one had little confidence that code would run as desired.

Overall, I really liked Python, but these days I'd sooner use Go or Lisp.
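The reload gotcha the parent describes is easy to reproduce (a sketch using a throwaway module written to a temp directory):

```python
import importlib, os, sys, tempfile

SRC_V1 = "class Greeter:\n    def hi(self):\n        return 'v1'\n"
SRC_V2 = SRC_V1.replace("v1", "v2")

tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)
sys.dont_write_bytecode = True      # avoid a stale .pyc masking the reload

path = os.path.join(tmp, "mymod.py")
with open(path, "w") as f:
    f.write(SRC_V1)
import mymod

g = mymod.Greeter()                 # instance created before the "edit"

with open(path, "w") as f:
    f.write(SRC_V2)
importlib.invalidate_caches()
mymod = importlib.reload(mymod)

print(mymod.Greeter().hi())         # v2 -- new instances see the change
print(g.hi())                       # v1 -- old instance still runs old code
```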


Almost every discussion of programming languages falls into bike-shedding[1] and this is no exception. The only thing he mentions worth talking about is the GIL. Syntax and small API gotchas can be learned by a beginner programmer in a matter of days of using the language, and are fairly irrelevant to your overall experience. If whitespace and using the division operator are big problems for you, significant software development probably isn't within your abilities.

In addition to the GIL, other problems I'd point out with Python:

1. At one point, Python was "batteries included". This is largely no longer the case any more--most projects are largely dependent on PyPI, and there's been a growing sentiment in the last few years that "the standard library is where modules go to die". PyPI libraries are of inconsistent quality and have a high turnover rate; the "standard" for what to use changes fairly rapidly. And if you manage to make good choices of mature libraries and not have to change them every year or so, you still have to manage dependencies, which complicates deployments.

2. Fracturing community: there are now a bunch of different non-standard ways to install Python and Python dependencies, which conflict with each other, and none of which fit all use cases, so you have to use all of them and deal with conflicts. This exacerbates issues with 1.

3. Increasing dependency on non-pure-Python dependencies which don't build trivially. There are reasons for this: Python often isn't performant enough for certain tasks, and other languages have some very good tools. However, this means you can't just `pip install` a library and expect it to work--even very common libraries like Pillow don't without fiddling. Again, this exacerbates problems with 1.

4. Lack of a good desktop UI framework. It's apparent that not as many people are using Python for native UIs any more. Tk is kludgy and doesn't result in pretty UIs, and despite being part of the standard library, it doesn't work out of the box on MacOS. wxWidgets doesn't build on MacOS without significant work, and the documentation is for the C++ version of the library. This is sort of the intersection of 1, 2, and 3, but in desktop UIs they combine to form a perfect storm. If you're developing a desktop UI, there are better languages, but I'm not going to post them here because I'd rather keep this as a constructive criticism of Python than a language competition.

5. Introspection being used to change language syntax without solving the problems the language syntax solves. I'm really looking at Django here, but they're not the only guilty ones. When you call a function, i.e. route_on_http_method(GET=get_handler, POST=post_handler), functions can fairly easily check arguments and throw an exception from a logical spot--i.e. route_on_http_method(GIT=get_handler) throws an exception immediately. But when you have `class MyView(GenericView): def git(self, request): ...` you get no exception until you try to call the view, in which case you get a wonderful cornucopia of meaningless line numbers in a useless stack trace. Sure, Django code looks nicer and more organized, but as I said, syntax isn't that important. Debugging IS important. Using the bikeshedding example, this is breaking the nuclear power plant to fix a problem with the bike shed. And this is a generous example: it's fairly simple and Django is one of the libraries that does a better job of this. If you really want configuration-oriented programming, you'd be better off parsing in JSON files and doing explicit error checking when you parse them in, rather than introspecting out a syntax that's not really code and not really configuration. Introspection hasn't played out well as a Python feature.

You'll note that all these problems are CULTURAL problems, rather than problems with the language itself. And that's my point, because those are the problems you can't easily work around, and it's the problems you can't easily work around that make or break the language.

I say all this because I love Python. I've worked in Python for the last 6ish years, and don't see that changing any time soon.

[1] https://en.wiktionary.org/wiki/bikeshedding
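The contrast in point 5 can be sketched without Django at all (hypothetical names; this is an illustration of the idea, not the real Django API):

```python
def route_on_http_method(**handlers):
    """Plain-function style: a typo'd method name fails at the call site."""
    allowed = {"GET", "POST", "PUT", "DELETE"}
    for name in handlers:
        if name not in allowed:
            raise TypeError("unknown HTTP method: " + name)
    return handlers

try:
    route_on_http_method(GIT=lambda request: "ok")   # typo caught immediately
except TypeError as e:
    print(e)  # unknown HTTP method: GIT

class GenericView:
    def dispatch(self, request, method):
        # Introspection style: look the handler up by name at request time.
        handler = getattr(self, method.lower(), None)
        if handler is None:
            return "405 Method Not Allowed"
        return handler(request)

class MyView(GenericView):
    def git(self, request):   # meant "get" -- silently becomes dead code
        return "ok"

print(MyView().dispatch(None, "GET"))  # 405 Method Not Allowed
```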


> Syntax and small API gotchas can be learned by a beginner programmer in a matter of days of using the language, and are fairly irrelevant to your overall experience.

I agree with the first part but not the second. One can only write

  arraylist.add(arraylist.size() - 1, arraylist.get(arraylist.size() - 1) + 1)
so many times before wondering if there’s a better way to do this.


Even if your language doesn't support a better syntax, it probably supports functions, so you don't have to write this over and over again. Just pull out into `appendLastIncremented(ArrayList a)` and call that.

You'll have forgotten the pain of writing that function within an hour. I'll never forget the pain of debugging a {bar: 1}.baz returning undefined (failing silently) and then going into a future inside a minified 0.0.2 versioned library that doesn't do any constraint checking before finally throwing an exception. Nor will I forgive JavaScript for this.


I don’t want to write a random free function for one line of code. I want nice syntax:

  list[-1] += 1
  array.last += 1
  ++vector.back();
Having reasonable syntax and a strong type system are not mutually exclusive.


> I don’t want to write a random free function for one line of code. I want nice syntax

shrug Sure, me too, everybody wants nice syntax.

What I'm saying is that syntax is never the most important problem with a language. People talk about problems with syntax because they're easy to understand, and the more important problems with a language are much harder to understand and talk about.

If a genie told me I could wish for three problems with Python to be instantly fixed, none of them would be syntax. I can't think of a language I've used where syntax would be on the list.


Sorry, but it just seemed odd that I didn't see a mention of the GIL (global interpreter lock).

Because of this mechanism, which ensures memory safety inside the interpreter, Python threads cannot execute bytecode in parallel. I am wondering if the rest of the HN community finds this a major disadvantage.
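In practice that means CPU-bound work gains nothing from threads; the usual workaround is separate processes, each with its own interpreter and GIL (a minimal stdlib-only sketch):

```python
import multiprocessing

def count(n):
    # CPU-bound loop: a thread running this holds the GIL the whole time,
    # so two such threads in one process just take turns on one core.
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    # Separate processes each get their own interpreter (and their own GIL),
    # so this actually uses two cores.
    with multiprocessing.Pool(2) as pool:
        results = pool.map(count, [1_000_000, 1_000_000])
    print(results)  # [499999500000, 499999500000]
```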


I love python and it is my main language for almost everything.

However, yesterday I was playing with python’s async/await and I came to the conclusion that it is a bit... useless.

Not a big deal, because i can use other stuff to accomplish the same I was trying to do.

Or maybe I was doing it wrong.


Python is the simplest language I know of that does operator overloading. If I had to write my pet maths project in C++ I would not even have got started. Sometimes abstract "drawbacks" are irrelevant!
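For instance, a toy 2-D vector class needs only a couple of dunder methods to get natural arithmetic syntax (a minimal sketch):

```python
class Vec2:
    """Toy 2-D vector: a few dunder methods buy natural arithmetic syntax."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vec2(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):
        return Vec2(self.x * scalar, self.y * scalar)

    def __repr__(self):
        return f"Vec2({self.x}, {self.y})"

v = Vec2(1, 2) + Vec2(3, 4) * 2   # * binds tighter, then +
print(v)  # Vec2(7, 10)
```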


Simple is better than complex.

There should be one-- and preferably only one --obvious way to do it.

https://xkcd.com/1987/


Also, here are 4 different ways to do string formatting
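Namely (a quick illustration; all four produce the same string here):

```python
from string import Template

name, count = "spam", 3

a = "%s x%d" % (name, count)              # printf-style
b = "{} x{}".format(name, count)          # str.format
c = f"{name} x{count}"                    # f-strings (3.6+)
d = Template("$name x$count").substitute(name=name, count=count)  # string.Template

print(a, b, c, d)  # all "spam x3"
```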


From the article:

> Strings were just a sequence of bytes in Python 2, but now are Unicode by default. Much better.

whelp, not trusting this guy's judgement anymore


Python also has a bytes type that’s the same as python 2 strings. Why would you want your strings to not be Unicode?
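The str/bytes split in practice (a minimal example):

```python
s = "café"               # str: a sequence of Unicode code points
b = s.encode("utf-8")    # bytes: what actually goes over the wire / to disk

print(len(s))            # 4 code points
print(len(b))            # 5 bytes ("é" encodes as two bytes in UTF-8)
print(b)                 # b'caf\xc3\xa9'
print(b.decode("utf-8")) # back to "café"
```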


Obviously nobody cares what I think, so perhaps you would prefer the words of Armin Ronacher, the creator of Flask.

http://lucumr.pocoo.org/2014/5/12/everything-about-unicode/


I've been using python for years now, I don't know the last time I had to use `sys.stdin`. I (like most people) deal with `open`, and unicode-only file apis (or the very nice things like flask that mitsuhiko and others have built). For most people, this isn't a problem.

I've migrated literally tens of thousands of files from py2 to 3. I've yet to encounter an issue with surrogateescapes or sys.stdin. Unicode issues, yes, but mostly because we were previously being fast and loose with unicode vs. bytes and mishandling, for example, emojis.


I certainly wouldn't trust Python to open the standard streams correctly, explicitly check the mode/encoding or reopen the descriptor.


I have a lot of respect for Armin as a smart engineer and all around good guy, but I think he was and is wrong about this one.


While the sentence is a bit weird, along the lines of "That word is made up." "All words are made up." strings ARE potentially harder to deal with in 2 than 3.


in my experience, most people want bytestrings most of the time, and the main effect of changing the str type has been to drive a lot of traffic to SO when they have encoding errors trying to deal with strings that used to "just work" in 2.7


The thing about "just works" in 2.7 is that it does ... until it doesn't, and when it doesn't, it's often way harder to resolve.

If you're only using the original ASCII (ordinals 0-127), all is well, and the py3 distinction between strings and bytes seems to get in the way. But as soon as you have a single character in your input or data which is not ASCII, a lot of things stop working and it isn't even clear why, or how to fix it -- often, the error comes from a library-within-a-library-within-a-library, which did some manipulation that (wrongly) assumed some encoding or its properties.


Maybe you're right. Maybe there are applications where this really matters.

All I can say is that I have never once had that problem since I started using Python (circa version 2.4). It has always been fine. For example, when dealing with .csv files created with MS Excel, you get strange characters because Excel uses CP1252 encoding, not ASCII. They never tripped up my app in python 2.7. It happily passed them along. Even regular expressions worked. Python3 died repeatedly until I went through and forced everything to be bytestrings.


> Even regular expressions worked.

Did you try doing regular expressions for non-ASCII characters? For example, given a file "latin1file" containing the string "L£", encoded in Latin-1, Python 2 will fail silently:

  $ python2 -c 'import re; s = open("latin1file").read(); print(re.search("L", s));'
  <_sre.SRE_Match object at 0x7fb992a44bf8>
  $ python2 -c 'import re; s = open("latin1file").read(); print(re.search("£", s));'
  None
Python3 forces you to deal with the problem to handle the first case, but if you do, the second case will work as well:

  $ python3 -c 'import re; s = open("latin1file").read(); print(re.search("L", s));'
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/usr/lib/python3.7/codecs.py", line 322, in decode
      (result, consumed) = self._buffer_decode(data, self.errors, final)
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 1: invalid start byte

  $ python3 -c 'import re; s = open("latin1file", encoding="latin1").read(); print(re.search("L", s));'
  <re.Match object; span=(0, 1), match='L'>
  $ python3 -c 'import re; s = open("latin1file", encoding="latin1").read(); print(re.search("£", s));'
  <re.Match object; span=(1, 2), match='£'>


I've never had to do that. Maybe because I live in the U.S.


You're reading non-Unicode into strings that expect Unicode, and since there's no way to auto-detect CP1252, you get bad results. If you want to stay in CP1252 you should clearly use bytestrings. You could convert to Unicode, I expect that:

text = file.read().decode('cp1252')

would work (haven't tried it, though).

I frequently work with CJK text, and Python 2 was a real pain. You had to do things one way to output to a terminal, but a different way to output to a pipe, and all your code had to remember to encode('utf8') every time you printed a string (unless you were using pipes, if I recall correctly). The situation is much better with Python 3, especially if you keep your files in UTF-8. Unfortunately, I think Windows is still stuck in their own world, so it might be harder to get UTF-8 by default. In Unix pretty much everything writes UTF-8 files by default, so everything has pretty much just worked for me (macOS/Linux).


That's funny, as I had the opposite experience - pulling non-ASCII characters into Python 2.7 caused me trouble. I ended up having to mark everything as Unicode to solve the issues.


I’m not quite sure about this. While Python 3 has a tendency to blow up on IO, I still think that's preferable to more cryptic problems. Python 3 did seem to handicap working in bytes, though I think that has improved in the last few versions with the path objects and better str/bytes handling.



