Why Python is Important for You (haard.se)
391 points by zzzeek on Feb 11, 2012 | 225 comments



I like to call this cognitive compatibility - the amount of effort required from an uninitiated English-speaking observer to understand what a program does. Most people who are deeply familiar with a language will have a tendency to discount this effort, but really it is essential because it minimizes the translation layer that your brain has to use any time you read code.

Python encourages code which is readable in English and reasonably brief. Having readability, consistency, and the "principle of least surprise" as primary language and community principles ultimately saves developers time and makes them more efficient.

And the maturing of PyPy is incredibly exciting.


I want to make what I think is an important qualification which conflicts with a reading of your comment: Python is readable to people who are familiar with conventional Algol-based programming, i.e. every modern programmer. That does not make it readable to an arbitrary English-speaker, which is an assertion I sometimes hear: "Oh, Python is so much like English, people who aren't even programmers can read some of it!" Which I can say from experience is not the case in the slightest. If it were, my job—which involves teaching Python to non-programmers—would be inestimably easier.

Python is simple and easy because it takes everything you know from other languages and systems and makes those things simple and easy and pleasant to do. I've even seen Python used as a kind of runnable pseudocode for projects that would eventually be written in other object-oriented Algol-based imperative languages, which is a testament to how well it distills those concepts down and makes programming simple. But it is not "readable in English." It is readable in pseudo-Algol.


Thank you! So many programmers seem to forget they have an enormous base of syntax and logic which is not at all like natural language. They see Python and say: oh wow, this is so easy to read, just like English! But it's only an easier way to express a syntax that really is nothing like what a non-programmer would consider English.


It varies.

When I was helping @limedaring learn Python, I'd read source with her and explain what it does. Quite often I'd feel somewhat embarrassed when explaining a line simply by reading verbatim what it says.

"for user in users: ..."

"if user.is_admin: ..."

It was especially nice that the topic of "what code block does this condition apply to?" did not even occur to either of us. The indentation-based blocks made it completely organic. (Try explaining Bash to someone and when to use do/done, if/else/fi, case/esac.)

But sure, things get weird with decorators and generators and other advanced features.


IMHO decorators, although very useful, are a big departure from Python's "simplicity" in syntax. I remember having lots of headaches several years ago while trying to fit them in my brain. Even after I realized it is something as simple as JavaScript's myfun(function(){}), my brain somehow kept refusing to absorb it.


Agreed that they can be tricky initially and hard to debug, but I think they're totally worth it for eliminating repetitive code. And IMO as long as there isn't a bug in the decorator itself, code with decorators can be just as readable (if not more so) to someone else (e.g. decorating your functions with @authenticated instead of if self.user.is_authenticated: # blah, blah blah)
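For concreteness, a minimal sketch of what such an @authenticated decorator might look like (the handler, user class, and all names here are hypothetical, not from any particular framework):

```python
import functools

def authenticated(method):
    """Reject the request up front instead of repeating the check in every handler."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if not self.user.is_authenticated:
            raise PermissionError("login required")
        return method(self, *args, **kwargs)
    return wrapper

class User:
    def __init__(self, is_authenticated):
        self.is_authenticated = is_authenticated

class Handler:
    def __init__(self, user):
        self.user = user

    @authenticated
    def dashboard(self):
        # The auth check never has to appear in the handler body itself.
        return "secret stats"
```

The repeated `if self.user.is_authenticated:` boilerplate lives in exactly one place, which is the whole point being made above.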


If Python had better function syntax and anonymous functions, decorators wouldn't even be a 'thing'.


Can you elaborate on that? The more I use Python, the more I find it to be a well thought out language with a very consistent feel, allowing me to write code without thinking. Well, except when it comes to decorators. How would you propose to improve the syntax?


> if Python had better function syntax and anonymous functions, decorators wouldn't even be a 'thing'

Better function syntax? What's wrong with Python's function syntax, and how does it relate to decorators?

Python does have anonymous functions, though they are limited: you can have only one expression, no statements, and no local variables. The reasoning is that lambdas should be used for small operations; for everything else, named functions should be used.
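A small illustration of the limitation: a lambda is a single expression, so anything that needs statements has to become a named function (the example names are mine):

```python
# Fine as a lambda: a single expression.
double = lambda x: x * 2

# Not expressible as a lambda: the assignment statement forces a def.
def describe(n):
    label = "even" if n % 2 == 0 else "odd"
    return "%d is %s" % (n, label)
```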

The reasoning behind lambdas is debatable, but it still doesn't explain how it affects decorators.

Consider:

    from functools import wraps

    def memoize(fn):
        @wraps(fn)
        def memoized_fn(*args):
            if args not in memoized_fn.cache:
                memoized_fn.cache[args] = fn(*args)
            return memoized_fn.cache[args]
        memoized_fn.cache = {}
        return memoized_fn

    @memoize
    def fib(n):
        if n < 2: return n
        return fib(n-1) + fib(n-2)

    print(fib(100))

What would a better function syntax and anonymous functions do so that memoize decorator won't be a thing?


Less restricted anonymous functions would allow stuff like this:

    fib = memoize(function...)

So the @ shortcut for:

    def fib():
        ...
    fib = memoize(fib)

wouldn't be necessary to 'clean up' the decoration of the function.


Even if it were possible, I would stick with

    @memoize
    def fib(n):
        pass

fib = memoize(function ...) has unnecessary nesting, and @memoize is cleaner IMO.

I am curious: do you find @memoize confusing? Personally, for me and anyone I have talked to, @memoize is vanilla. The confusing part is the decorator itself, especially one that takes arguments; that, coupled with the fact that classes (which override __call__) and functions are both used as decorators.
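For what it's worth, the two confusing forms can be sketched side by side; a minimal, illustrative example (all names are made up):

```python
import functools

# Form 1: a decorator that takes arguments is really a factory.
# repeat(3) runs first and returns the actual decorator.
def repeat(times):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = fn(*args, **kwargs)
            return result
        return wrapper
    return decorator

# Form 2: a class works as a decorator because its instances are callable.
# The decorated name is rebound to a CountCalls instance wrapping the function.
class CountCalls:
    def __init__(self, fn):
        self.fn = fn
        self.calls = 0

    def __call__(self, *args, **kwargs):
        self.calls += 1
        return self.fn(*args, **kwargs)

@repeat(3)
def beep():
    return "beep"

@CountCalls
def square(x):
    return x * x
```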


No, I don't find it confusing. But I read craigyk as mostly talking about the special decorator syntax, not function decoration in general.


Yes - I agree with you in principle. Python is not the perfect combination of computer language design and cognitive compatibility - it's just that I think it's doing better at it than anything else that has traction right now. Hopefully something better will come along and improve on it!


Why, I have a feeling that teaching Python to non-programmers is easy, very easy, when they want to learn. For instance, a friend of mine just sent me a pong game his son and daughter wrote last Sunday. Apart from maybe Logo, I can't imagine using anything other than Python.


Python code is superficially easy to read, but knowing what a piece of code really does can be extremely hard, especially when metaclasses, generators and the like are involved.

Also, there is plenty of Python code that is not readable even superficially; particularly convoluted uses of list comprehensions come to mind.

And I don't think 'consistency' is one of python's main qualities, when even in the stdlib there are plenty of very different naming styles. (Yes, this is improving in Python 3, but has been a big issue for a very long time.)


+1 for the metaclasses example... list comprehensions don't seem that big a deal, but then I am biased way too much... Also agree with the consistency part... but personally, Python is beginning to lose some of its charm with the move towards consistency...


This is the kind of reasoning that lies behind LFMs: that it is better to cater to someone who knows nothing about the language (but maybe has a few mainstream languages under their belt already) than to someone who has put in the work to understand it. I'd rather pay to learn a difficult but more abstract language feature just one time, than pay all the time for its absence.


A programming language being similar to spoken English isn't necessarily a good thing; SQL is a prime example.


I could not agree more with this comment and just wanted to thank you for expressing this so clearly.


My biggest problem with Python is that projects larger than a given size tend to become unmaintainable rather quickly.

This is in large part because of the lack of strong typing and type annotations; if you aren't the only author or can't keep the codebase in your head, it takes real effort to figure out what a function does. Even the type annotations provided in a language like Java or C++ make this task much easier, not to mention languages with real strong type systems like Haskell.

That's not to say that building large systems in Python is impossible, but it takes a lot more effort and documentation than it would in other languages.


This is a significant problem for me as well, coming from C++ and Haskell. Even just going through and making a few changes here and there, there’s no way for me to know without extensive testing whether my edit was even remotely correct beyond a cursory syntax check.

I’ve never had to deal with testing all that heavily in statically typed languages, because I can rely on the type system to do a lot of very helpful static analysis. The lack of basic static knowledge about a Python program is frustrating and discomfiting, and ultimately discourages me from using the language for anything serious.


I find the focus on dynamic typing a bit fascinating: you don't see people focus so much on it on a discussion about LISP. [EDIT: I meant to say that this most likely reveals something else is at work in the particular case of python].

When a C program compiles, it really does not tell you much about whether it will work or not. Maybe it is a matter of application domains, but I tend to spend more absolute time testing the C parts of the code than the Python parts. Proportionally, though, I spend more time testing in Python than in C, simply because everything else takes so much less time in Python.

Haskell is different, because types can be used to enforce constraints in a much better fashion than in C/C++/Java/etc…

The one area where simple type systems are useful is refactoring. Doing so in a dynamically typed language is difficult (but possible, as shown by Smalltalk, which is supposedly even more dynamic than Python).


Lisp’s other useful features just make the particulars of the type system less of an issue. When writing prototypes, I move things around and refactor a lot. Python makes this easy at the cost of certainty about whether it was done right. I don’t write a lot of Lisp, but in my experience refactoring there is quite different: I’m not editing methods and classes, I’m factoring functions and macros. If I were one for supposing, I’d suppose the OO philosophy of Python forces you to create interfaces before you necessarily know that they’re correct, so you end up refactoring them, and there’s poor support for that when no static type system has your back.


This is already a more interesting discussion. I cannot comment on Lisp, as I have too little experience with it. When you say Python forces you to create interfaces, I am not sure I understand. For me, programming is mostly about creating interfaces (as in the interface part of API), Python or not. When I refactor some C code, static typing does not help me that much, as what's difficult in C is object lifetimes (e.g. ref counting in C extensions), things like that.

What is true is that it is easy to write code that is impossible to refactor, because it is full of globals, or "stringified API" (e.g. using a dict instead of objects when the set of keys is supposed to be fixed). But I am afraid people writing this kind of code would write bad code in any language, or are at least very inexperienced in the language.


"I find the focus on dynamic typing a bit fascinating: you don't see people focus so much on it on a discussion about LISP. "

Interesting point. Thinking back on the Clojure discussions here over the last two years, I don't recall many issues with readability/maintainability due to the lack of a static type system.


Are there any really big multi-programmer Clojure codebases yet, though?


+1 on the static type checking problem... I am beginning to feel very annoyed by the same... have found that writing and maintaining unit tests helps... but they are a PITA nevertheless...


Well, as I see it, more or less every error should be expressible as a type error. In order to actually do that, you’d need full dependent types, and that’s a huge language-design “now you have two problems” feature—compiler authors are understandably uncomfortable with undecidable, non-inferable type systems.

Anyway, Haskell and even C++ used intelligently can tell you a lot about your program at the outset, so ultimately your tests can be more focused.


Yep, I agree with you - although every error should be expressible as a type error, languages like Coq where this is actually possible make writing usable code pretty difficult. I think there's a pretty wide range of middle ground, ranging from C++ to Haskell, where you get the benefit of static types without making it really difficult to write working code.


> more or less every error should be expressible as a type error

Am not sure I agree... I would like to believe it's true though...

> Haskell

Have just begun to poke around, like the currying + function composition... will keep poking...


> I’ve never had to deal with testing all that heavily in statically typed languages,

Lack of tests means bugs now, or later when you're refactoring. They're pretty essential in static languages, too, although you can certainly lean on the compiler for many things.

Anecdotally, my bugs usually aren't caused by having passed in the wrong kind of an object.


Well yeah, to be fair, neither are mine. It’s just irritating to be running a prototype thinking everything is more or less fine, then SPOOM goes a runtime error because you didn’t properly change a method invocation, which could have been caught by a static check and saved you some time. The more tools I have to help me attain my actual goal of writing software that does stuff without spooming, the better.


Python 3 actually does let you annotate function arguments, although it's with an arbitrary expression rather than explicitly being a type. So this is valid:

    def haul(item: Haulable, *vargs: PackAnimal) -> Distance:
        ...

As is this:

    def thing(something: "A string!") -> "does stuff":
        ...

It's also not explicitly checked, although if you felt like it you could probably write a decorator to enforce type checking. (Though I imagine doing so would be rather un-Pythonic.)
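As a sketch of how such checking could be bolted on, a decorator can read the annotations at call time (illustrative only, with made-up function names; real libraries handle far more cases):

```python
import functools

def typechecked(fn):
    """Raise TypeError when a positional argument doesn't match its annotation."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        hints = fn.__annotations__
        # Positional parameter names, in declaration order.
        names = fn.__code__.co_varnames[:fn.__code__.co_argcount]
        for name, value in zip(names, args):
            expected = hints.get(name)
            # Only enforce annotations that are actually types;
            # string annotations like "A string!" are left alone.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError("%s should be %s" % (name, expected.__name__))
        return fn(*args, **kwargs)
    return wrapper

@typechecked
def repeat_greeting(name: str, times: int) -> str:
    return ", ".join(["hello " + name] * times)
```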


Just to be clear, Python is strongly typed but not statically typed. But I definitively agree with your comment. The startup I'm working for is Python based, but I'm currently learning Haskell, because I'd like to take advantage of a good type system for my larger efforts.


Aren't you concerned about unpredictable memory requirements?


To be honest I don't have enough experience with lazily evaluated languages to be concerned about it yet.


I must respectfully disagree. I work with web-based systems and find the dynamic typing of Python to be its biggest strength.

The ability to just throw another property onto an object before sending it to the template engine saves so much engineering work that it totally overcomes the downsides of not annotating required types.

I can see where it may be an issue, but I think the documentation should focus on what calls need to be supported, rather than on type-checking, i.e. "This function takes an object with read() and update() methods."
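That documentation style might look something like this (a hypothetical function; any objects with the right methods will do):

```python
def sync(source, store):
    """Copy records from source into store.

    source must support read() -> iterable of records;
    store must support update(records).
    Any objects with those methods will do; no particular class is required.
    """
    store.update(source.read())

# Two unrelated classes that happen to satisfy the documented contract:
class ListSource:
    def read(self):
        return [1, 2, 3]

class MemoryStore:
    def __init__(self):
        self.records = []

    def update(self, records):
        self.records.extend(records)
```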


"I must respectfully disagree. I work with web-based systems"

Web-based systems tend toward smallness and/or flatness, with fairly straight execution paths through the system. You're not going to run into the limitations being discussed.


Yeah and big systems architects should really learn from that. A bunch of small services connected by well defined interfaces is the flipping bomb!


When possible, I totally agree. It's great when you can get it. Most of the work I do is just like what you describe. Sometimes it's not, and that's when I really feel the utility of types.


I think the proper way to do this is through the use of interfaces (Java), abstract superclasses (C++), or typeclasses (Haskell, etc.). These give you the benefits of polymorphism, e.g. you can pass anything that implements "read" to a function that processes input in a given way, but also gives you the compile time typechecking that helps you avoid bugs.


Using interfaces is nice, in theory, but I find them cumbersome in practice.

Say you have a function that takes a IAmReadable object, with the method read(). You implement it, and all is well with the world.

Then you encounter a function that also uses read(), but requires an IAmAlsoReadable object. Again, this function only uses read(), but IAmAlsoReadable requires you to implement random(), silliness() and four other methods that you'll never use.

Java's interfaces require you to implement all of those methods, even if it's just to stub them out (or return 0, 0f, or null). Blame poorly designed interfaces, but ultimately we have to live with them. And yes, you can get your IDE to generate them, but that's just working round the basic problem which is that the type system sucks.

Personally, I think the answer, if you're using a static language, is to analyse the code to see what methods are actually required. This would reduce the interface system to merely providing "hints", but would keep you from cluttering up your code with meaningless method definitions.

That's my two cents on the matter, at least. I'm sure someone more qualified will point out a glaring hole in my logic :-)


That's essentially how Go handles polymorphism. http://golang.org/doc/go_spec.html#Interface_types
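Incidentally, Python later grew a rough analogue of Go's structural interfaces in typing.Protocol (added in Python 3.8, well after this thread); a sketch:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Readable(Protocol):
    def read(self) -> str: ...

def shout(r: Readable) -> str:
    return r.read().upper()

class Greeting:
    # No inheritance from Readable: the match is purely structural,
    # just like satisfying a Go interface.
    def read(self) -> str:
        return "hello"
```

Static checkers verify the structural match, and @runtime_checkable additionally lets isinstance() test for the method's presence at runtime.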


Yup. This is also the case with Ruby, and is one of the challenges when trying to dive into a large unfamiliar project. For example, when something in Rails doesn't behave correctly and you dig deep into a source file, it's not always easy to track down exactly what kind of arguments a method expects. Often you'll have to set a breakpoint inside the method just to inspect what's around.


I find that it's a problem with most languages. Beyond a certain point a project will start to need infrastructure, documentation, code style guidelines and tests - especially when there's more than one developer.

My personal preference is for building a large project as multiple smaller ones, or else leaning on other libraries as much as possible (e.g. NumPy, SciPy, Pygame).


An alternative viewpoint is that Python makes it so you GET to that point a lot quicker, since your iterative development cycle is so much faster: no compiles, no need to refactor a bunch of stuff to accommodate minor data structure changes...


I agree, and I was worried I was the only one who would say it :)


Working on a big legacy codebase without proper unit testing and strong style checks is probably hard in any case. With Python, the idea is that we are grownups. If someone accesses private methods or changes the type of an argument without proper checking, it's his problem.


I do love Python, but I often wish there was more rigor for some things. I currently work on a large project (>500K LOC) and have started to see it fall apart with a bigger team. Lack of enforced typing in function arguments, inability to create strict interfaces, inconsistency in standard library conventions, and package management would probably be my biggest gripes.


Curious: what is this 500K LOC project (if you can say), and how much of that is Python? If it's an in-house codebase, that's a really neat data point, because it would rank in size with the largest open-source Python projects.

Also, Traits (from Enthought) is an attempt to deal with some of the issues of typing and interfaces. It's led to some great software (the Chaco plotting package is fantastic, as is Mayavi), but I've yet to figure out how to debug a Traits error and stay sane. There is a lot of auto-configured magic that happens, and when something goes wrong the backtrace is often pages long and doesn't actually show the offending line.


Project is automated testing for an enterprise private cloud platform. 500K of tests and test library is all python, the application itself is probably 2M+ LOC, Java + C running on Linux.


On most projects I have seen where Python was seen as problematic, I think that having types à la C++/Java would not have improved much. The real problem is that the function is not documented, nor are its expectations. Typing in C++/Java does not replace that (more advanced type systems do much better, though).

Package management, and deployment generally, is certainly a pain in Python. I think it is one of the main issues in typical "corporate environments".


In most C++/Java code, the function is not documented either. The nice thing about manifest types is that the source code becomes the documentation (shitty documentation, yes, but usually good enough that you can figure out how to use it by looking at the source). Type inference with a doc generator would work too, as long as you run the doc generator regularly and everybody knows where to find the docs.

History has shown that programmers are too lazy/busy to write good documentation, and when they do, they don't keep it in sync with what the code does. If it's not checked by the compiler or automated tests, it's probably wrong.


It is not my experience that you can figure out how to call functions correctly just from the function signature. Certainly that's not true for C++, where you need to figure out lifetime issues, etc… It may be possible to figure out from the source code, if the code is written well enough, but that's true independently of the language IMO. Developers not writing decent docs is an institutional problem, and if this cannot be enforced, then the institution is most likely not capable of producing very good code either.

I think it is more interesting to talk from the POV of how to write code that is actually readable: types may help, but in my experience they are a relatively unimportant part of the equation.


C++ with good coding conventions can encode pretty much all the information you need in the type. If a parameter is read-only, pass by const reference. If it's writable but ownership is not transferred, pass by pointer. If ownership is transferred, pass by unique_ptr. If it's owned within an object or function lifetime, use a scoped_ptr. If it can't be copied, inherit from boost::noncopyable.


This, this, and this. We document our stuff using ReST and Sphinx. You can optionally specify data types in the docstrings, which does a ton to help with this. It isn't time-consuming, and it makes your team not hate reading your stuff.


500K LOC in Python is what, 2.5M->5M in a statically typed language? At that size I would expect any project to require explicit convention and communication management, simply because no one person can hold the whole thing in their mind at a resolution that reveals the code's meaningful features.


"500K LOC in Python is what, 2.5M->5M in a statically typed language?"

Depends if the statically typed language is Java or Haskell.


And it's probably 200k lines in Haskell, which is very statically typed :)

(For reference, GHC--a very complicated compiler with clever optimizations, language extensions, multiple backends...etc--is something like 125k lines of Haskell.)

I've spent a significant amount of time with Python, both at work and in my classes and yet my Haskell code is usually 2-3x shorter for very similar programs. I think it's also easier to read, but that's indubitably a matter of taste.


That is interesting. According to the Computer Language Benchmarks Game, Python programs always have fewer LOCs:

http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

I know most programs there are written in an unusual way, but that at least means something.


There are a couple of things to note about that particular comparison.

First, Haskell--particularly on GHC--gives you a ton of options for speeding your code up, including unsafe* functions and pragmas. So the fastest code is going to use things like {-# INLINE #-} and unsafeAt as opposed to ! (the array index operator). However, you would only use these things in performance critical bits of your code.

Secondly, the length of the identifier (! vs unsafeAt) actually matters because the benchmarks do not measure the number of lines--they measure the compressed size of the code. [1]

[1]: http://shootout.alioth.debian.org/help.php#gzbytes

Finally, these are all small algorithmic problems. The goal is to write the fastest code possible, not something readable or maintainable. Some of the biggest reductions in Haskell come from generalizing and reusing functions in different contexts; I expect this does not happen in the benchmarks because of limited size and the focus on performance.


Compilers are programs for working with type systems. It's not surprising that a language that is all about working with type systems expresses them concisely.


I've seen projects that use decorators for type checking. You could turn it into a noop for production.
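A sketch of that idea (names are made up): a decorator factory that returns the function untouched when checking is disabled, so production code pays nothing:

```python
import functools

# Flip this off in production and the decorator becomes a no-op.
TYPECHECK_ENABLED = True

def expects(*types):
    """Check positional argument types at call time, when enabled."""
    def decorate(fn):
        if not TYPECHECK_ENABLED:
            return fn  # zero overhead when checking is disabled
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for value, expected in zip(args, types):
                if not isinstance(value, expected):
                    raise TypeError("expected %s, got %s"
                                    % (expected.__name__, type(value).__name__))
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@expects(int, int)
def add(a, b):
    return a + b
```

Because the flag is read at decoration time, the disabled path never even wraps the function.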


I have used Python for a rather long time (I think 1997, when I was 17, was when I first used it). As the writer of the article stated, Python is awesome for writing "tools", and throughout my career I have used it for such.

However, I have also used Python for big projects, and that is where it gets a little messy. I know I am going to get down-voted to kingdom come, but I think Java is a better language for big projects, mostly because of the static typing (try renaming a class in Python compared to Java).

Modern Java is for sure more verbose than Python and more complicated, but the languages are extremely similar (Guido has even taken some of the design consistency from Java for Python 3.0).

The reason I bring it up is that Java is very often, in various circles, crapped on to no end. The reality is that it's basically just a more verbose Python. Not to mention Java offers an introduction to parametric/algebraic typing (generics) and more sophisticated concurrency (futures... yes, actors are better, but Python really has neither).


Just replace Java with Haskell, and you've got yourself a nicer Python replacement.

Less code than either Java or Python, far more compile-time guarantees than either, and nearly as fast as Java when speed matters (speed takes a bit more effort in Haskell than in Java, but compared to Python, it is a no-brainer).


In my mind--probably not the majority opinion, but so be it--it's really the opposite: Python is a dynamically typed Java. So it's not that Java has been underrated but rather that Python has been overrated. I do not like working with either.

Also, I'm not quite sure what generics have to do with algebraic types.


As a commenter on the article wrote: what about Ruby?

I say this as a happy Python user. Ruby seems very similar but I'm reminded of a pg essay on language power: looking up the power curve, you see '$Language plus a bit of weird stuff that is probably irrelevant.' So I don't trust myself.

As someone who loves Python and doesn't know any Ruby beyond a few bits of syntax and the obvious bits that are common to most languages, what am I missing?


Many years ago I decided I should learn a new scripting language because PHP wasn't general-purpose enough. I decided it would be either Python, Ruby or Perl. For some reason Perl was dropped early, I forget why (don't worry, I learned some Perl later).

So I started Googling for Python vs Ruby. After checking some language basics tutorials, I decided that really, for my purposes they were about equivalent in power (didn't know much about library support yet).

The one thing that pushed me towards Python was that most articles I could find comparing Ruby and Python were written by a happy user of one of these languages. However, all Python users wrote something like "I guess you can do everything in either of them, it's just a matter of preference and I happen to have picked Python", while the Ruby users tended to be somewhat more flaming and arrogant about it, especially in the comments sections.

In the end I went with Python because the community seemed more friendly. However, I stayed for the library support. Especially after that one time I installed pyTwitter and I could just do `import twitter`, I was pretty impressed by that (I assume Ruby has an excellent Twitter lib too, btw).

Also, just a little disclaimer: This was a couple of years back (I forget how many, 5 or so) maybe the Ruby community changed, and it's also totally possible that I just happened to land on a rather "particular" region of the Ruby blogosphere, so take this cum grano salis. I had to make a choice and was about this close to just flipping a coin over it :)


Also, just a little disclaimer: This was a couple of years back (I forget how many, 5 or so) maybe the Ruby community changed, and it's also totally possible that I just happened to land on a rather "particular" region of the Ruby blogosphere, so take this cum grano salis. I had to make a choice and was about this close to just flipping a coin over it :)

ruby-talk used to have, as one of a number of perma-threads, the question of "Why Ruby over Python". The discussion was almost always entirely civil, with most people suggesting that the questioner try both for a bit and pick the one that clicked best.

I used to host (on ruby-doc.org) a page called "Ruby Eye for the Python Guy" that tried to summarise the comments and offer links to help folks make a decision.

The community on ruby-talk at that time (probably 5 years ago) was extremely polite and helpful, and I'm sorry you landed on some opinionated blog that likely only reflected a small group of developers.

I don't follow ruby-talk much any more, so I can't say how it is, but there has always been a pretty strong tradition of MINSWAN: "matz is nice so we are nice." I expect that's still encouraged. There are, of course, various sub-groups around this or that library or framework that may have zero exposure to that, think they're the shit, etc., but I hope people considering Ruby don't a priori dismiss the majority of Rubyists as rude because of bad reports about a handful of people.

OK, end of Ruby PSA.

Side note: I'm often puzzled by people who think a language or framework or whatever geek thing has to "win." Be happy for happy people.


Well, one issue is that with a language as popular as Python (or Ruby or Java...) it's relatively hard to avoid working with them. So it's actually very beneficial if a technology one likes becomes popular.


Call me a cynic, but I'm weirded out by the idea that the only reason for civility is to emulate an idol who is civil.


Call me a cynic, but I'm weirded out by the idea that the only reason for civility is to emulate an idol who is civil.

Idol? He's a nice guy who does admirable work and many people think it ungracious to be rude when publicly discussing his language.

Only reason? Geeks get into stupid pissing matches. Sometimes a little reminder is a good thing, that's all.


Library support.

For example, there really is no Ruby equivalent to SciPy, NumPy, Matplotlib or NLTK (http://news.ycombinator.com/item?id=3179370). There has been some effort in the Ruby community to begin developing some of these libraries (see SciRuby: http://news.ycombinator.com/item?id=3180369), but the Python libraries have been in development for years, so this will be no easy feat.


As a user of Sage (http://sagemath.org/) I can appreciate that. Nice to have what you need at your fingertips.


>> Library support.

Isn't that rather an argument for Java or Perl?

What I don't get is the cheerleading for Python. Ruby has/had something similar going with Rails. I mean, the article even notes that it is a middle-of-the-road choice, optimised to not be extreme.


I suppose it depends on the libraries and what you want to do with them. I personally find the Python REPL in combination with matplotlib, NumPy and SciPy to be extremely handy for poking around in small/medium-sized data sets and debugging numerical apps. Python "stays out of my way" a lot better than Java or C++ would for this ad-hoc kind of exploration.

I'm sure there are probably tons of nice alternatives to Python for this kind of work, and there are tasks that other people need to do for which trying to use Python and some readily-available libs just wouldn't be sufficient. I only suggest Python to people whom I know are doing work where it would be helpful; there's no point in pushing it as a one-size-fits-all solution.


Yes, Java and Perl have large libraries, but wging asked what he would be missing going from Python to Ruby -- missing libraries is the big one.


It's an argument for any language that has libraries available, weighted by your need for those particular libraries and their quality.


Right, in addition to the usual efficiency concerns (can you afford that 0.5-1s of JVM spinup? do you need mature multiprocessing libraries?)


> Isn't that rather an argument for Java or Perl?

No, because then you'd have to be able to stand either Java or Perl. If you are looking for a language to learn beyond a quick hack, then the language matters more than its libraries. I know Perl, and everybody knows what a boon CPAN is, but still for scripting tasks I have switched to Python. It would have made me feel sick to keep programming in Perl.

As I had to choose between Python and Ruby too, besides the library thing, I picked Python because Ruby seemed like it was trying too hard to be powerful and expressive, to the detriment of clarity. I didn't want to invest too much time in learning a scripting language. It's just for scripting, you know, it's not meant to accomplish much. If I need a powerful and expressive language, then I just reach for the big guns.


Isn't that rather an argument for Java or Perl?

Or JRuby.


No, it's an argument for python over ruby.


Ruby's blocks and built in regular-expressions are two examples of things you're missing.

    if result =~ /regex/

That kind of thing is awesome for readability, but it can also be pretty darn slow.

There's also some interesting things about the ruby object model and interpreter that are different from python's approach. Whereas in the python REPL you type help(object) to get the docs on that object, in ruby you type object.methods or object.public_methods, etc. It's less documentation oriented, and more code-exploration oriented, but the two end up delivering the same sort of information.

Hope that's helpful! Ruby's a lot of fun to tool around with, and there are (in my opinion) fewer things about it to trip you up completely than there are when first learning python. But once you use it for a while and get beyond the basics, it becomes clear that there's a lot of unfocused cruft floating about, with no rhyme or reason. Much more than Python has.

(cautionary note: I'm not an expert with either language)


ruby does have its share of cruft, but it also has some inspired features that make a far bigger difference than you'd think they would. blocks are the biggest (specifically, syntactic support for attaching a lexical closure to every method call, which the object can choose to use or not), but here's another cute one: the case statement

    case object   
      when matcher1; ...   
      when matcher2; ...   
    end
performs a method call (Ruby's === "case equality" operator) on the matchers, with the object as an argument, rather than vice versa. So you can say things like

    case name
      when "John Smith"; puts "hello again"
      when / Smith$/; puts "are you related to john?"
    end
the string "John Smith" and the regexp / Smith$/ will each call a different matcher to see if they match your name - a far cleaner design than the string class having to provide matchers for everything you might want to use in a case statement.


Have a look at IPython for a more code-exploration oriented experience.

http://en.wikipedia.org/wiki/Ipython (Sadly the homepage doesn't get the simpler selling points across quite as well: http://ipython.org/)


As a programmer who knows both languages: while I like regex as a native type with its own notation, I don't miss ruby blocks at all. Most Ruby code I see abuses blocks pretty badly instead of refactoring behaviour into a function, and soon you have a code base littered with the same algorithm over and over again. I think Guido has a point in making lambdas one-liners exclusively.

Also, Ruby programmers talk about how readable Ruby is, but I disagree. The lack of consistency in using parentheses or not, how to separate arguments, or even string formatting syntax makes reading Ruby code written by someone else much more prone to "WTF" moments than Python.


It sounds like object.public_methods and object.methods are a lot like dir(object) in python.
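A quick sketch of the Python side of that comparison (the filter is only there to approximate Ruby's public_methods, since dir() also lists the underscored names):

```python
# dir() returns a sorted list of every attribute name, dunders included;
# filtering out underscored names gets close to Ruby's public_methods.
public = [name for name in dir("hello") if not name.startswith("_")]

print(public[:3])         # a few public str methods
print("upper" in public)  # True
```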


I'm an extremely happy ruby user but have nothing against python, it's just that the languages operate at such a similar level of abstraction that it seems like a bit of a waste to spend time learning python when I could be learning C or Haskell.

FWIW though, I get serious library envy, especially with regard to NLTK. Ruby has web development cornered but is decades behind python when it comes to anything around machine learning and scientific computing.


> Ruby has web development cornered but is decades behind python when it comes to anything around machine learning and scientific computing.

The irony of this is that many folks in the Python scientific computing space pine for better syntax extension facilities so that they could do more "natural" DSLs for data analysis and embedding math inside Python code.

The object lesson here is that communities and passionate users matter. Python took root in some scientific computing environments and certain dedicated individuals built up a software library and invested in growing a community that is now a force to be reckoned with. If the same people had all chosen Ruby and built a community around that, there is no obvious reason why Ruby shouldn't be the one with the great scientific libraries.


The downside of Ruby is the third-party packages, especially advanced/industrial-strength stuff (NumPy/SciPy for instance).

The main upside of Ruby is blocks: however limited (compared to Smalltalk's), the ability to craft your own control structures is incredibly powerful and empowering. (You can't really do that in Python; you can abuse decorators for it nowadays, but it's not exactly sexy or readable, or generally a good idea.)

Ruby also makes metaprogramming easier and more common, though the jury's still out on whether it's a good thing or not.


I'd also say that ruby has done a better job of moving forward. Compare ruby 1.9 to python 3. I think JRuby is more mature than Jython, and I know this will start the flames, but I think ruby is clearly better for web stuff; not just rails, but the whole ecosystem. Edit: I also think bundler is better than what is going on in python land.


I would say that the policy of rapid movement has its upsides and downsides; Python code is generally very stable between versions and breakage is well-documented. So far I haven't seen the same thing going on in Ruby land.

JRuby is clearly more advanced than Jython. I'm guessing that with the advance of PyPy, other Python interpreters just aren't getting that much attention, which is a bit unfortunate.


rapid movement definitely has its downsides. However, I don't know what you mean when you say that ruby code isn't stable between versions. Do you mean between versions of ruby? If so, again, the transition from ruby 1.8 to ruby 1.9 was much, much better than the transition from python 2 to python 3.


Python 2 to Python 3 is a big jump, once in a lifetime. What about ruby 1.9 to ruby 2.0?


ruby 2.0 will likely be just as easy as 1.8 to 1.9, if not easier.


So, it's not as easy as python 2.x to 2.(x+1) and python 3.x to 3.(x+1). Will every version update of Ruby suffer as much as Ruby 1.8 to 1.9?


> Compare ruby 1.9 to python 3.

They're not comparable, so no.

> I think JRuby is more mature than jpython

Pretty different situations, from 2006 to 2009 two lead JRuby developers were hired specifically for that by Sun (later left for EngineYard) and a third was hired by ThoughtWorks for the same.

For Jython this only happened in 2008, after the project had pretty much gone on freeze due to the original founder leaving to work on the Pypy project.

Compare IronPython to IronRuby instead, and the situation is reversed. Same if you compare Rubinius and Pypy.

> and I know this will start the flames, but I think ruby is clearly better for web stuff

That's your personal judgement, I don't agree.


You dodged the first two issues. Python 3 is causing serious pain, does django work with it yet?

Second, who cares why JRuby is more mature, it is.


> You dodged the first two issues.

No (and screw you for your unwarranted downvote you ass)

> Python 3 is causing serious pain, does django work with it yet?

I did not deny that, I denied your equation of Python 3 with Ruby 1.9 when they have absolutely nothing in common and had very different goals.

> Second, who cares why JRuby is more mature, it is.

That's not the issue with your question, the issue is that you cherrypick a single alternative implementation which supports whatever claim you're trying to make and ignore all others. That makes your claim both dishonest and irrelevant.

IronPython is more mature than IronRuby, Pypy is more mature than Rubinius, by your comment that makes 2 for 1 therefore Python is superior to Ruby right? That's nonsense, because your original assertion is nonsense.

But hey, at the end of the day I couldn't expect much more from somebody who decided to turn a discussion on relative merits into a dick-size contest. I guess the western ruby community has not matured as much as I thought it did. Thanks for bringing me back to reality.


I didn't downvote you; it seems HN doesn't allow me to downvote in threads I'm participating in. If it did, I'd downvote your last comment.

I think the JVM is the more significant platform compared to the CLR and Rubinius/PyPy. Therefore I think its maturity is more significant.


The PyPy team is amazing; for example, Quora is now running on PyPy. And it's easy for a Python programmer to run PyPy as an alternative (and hopefully faster) implementation.


It's ironic that you would point the finger at communities for not being mature.

Yours is by far the least pleasant and most counterproductive thread in this entire discussion.


Ruby/Rails seems to have a more developer-friendly approach to the web development stack than Python/Django. (I am but an egg, more comfortable in Python/Django but learning Ruby/Rails, I wouldn't argue with more experienced folks about any of the following.)

Ruby/Rails seems easier for developing and testing the full-stack application, from client JavaScript to server Ruby response and back. It supports Ajax through injection of JavaScript into HTTP responses served up, based on specifications in server-side code. The test architectures seem to reach across the client/server divide, for example checking server-side responses to triggering client-side JavaScript.

Python/Django are trying, with projects like lettuce and splinter, but these don't yet seem to be quite up to the same standard.

Also, because RoR has a lead on web development, new developments in that space seem generally better integrated. For example, Backbone is a young JavaScript MVC-style framework for client-side development. I've just been fooling with using it with Django, and discovered that its stringification of JSON objects for POST doesn't work with Django's parsing of the client request. Maybe more experienced developers spotted and fixed this easily -- my fix is only a couple of lines -- but it's the sort of thing that makes me wonder just how many folks have tried to integrate Django w/ Backbone. I'm guessing that the Backbone community has already spotted and fixed any such issues with RoR.

Now I'm not sure how I feel about RoR; it often feels like magic, and I'm a little suspicious of stuff like super-convenient JavaScript injection. Maybe when I understand it better it will seem less magical and more convenient. But it isn't hard to see why the web development community likes it.


RE: javascript injection, it's totally optional and in years of doing rails dev work I've never used it (I presume you're talking about the 'remote' option).

Actually, the vast majority of my rails stuff recently has been RESTish (or at least Backbone compatible) json endpoints with a CoffeeScript frontend.

The front and back end in that case are totally decoupled (though rails does do some nice things to help you organize your frontend code).

This would have been hard a few years ago, but I've found that as time has worn on, rails has become quite a bit more forgiving when you disagree with DHH; e.g. you can use whatever templating language, testing framework or ORM you like, and there will be others who made the same choice you can look to for support. It's still opinionated about defaults, but you can switch out whatever libraries you like these days, for the most part.


One of the differences between Django and Rails is that the former makes almost no assumptions about how you are going to write your front end. I don't know much about Rails, but it seems to me that it is much more opinionated in that regard.

Re: Django & Backbone, you should check out djangbone. It makes it dead simple to create basic Backbone-compatible APIs in Django, and is easy to extend if you need more advanced functionality (full disclosure: I'm its author): https://github.com/af/djangbone


I used Python for a really long time and switched to Ruby several months ago. Benefits of Ruby are:

-no distinction between expression and statement (everything has a value)

-lambdas can thus contain any expression

-much bigger range of built-in methods and classes (all permutations of a list, removing duplicates from a list, Rational class, Regexp class, Range class, etc.)

-more flexible syntax and "control operators" like each_with_index

-almost zero distinction between built-in classes and methods and user-written ones: you can open up Hash (dict) or Array (list) and extend it straightforwardly.

Downsides are:

-generally uglier and more ambiguous syntax

-distinction between functions and methods (b/c "all you can do is send a message") leads to a lot of unnecessary duplication of concepts

-so-called "blocks" which are really just a hack to avoid writing the word "lambda" a bunch of times (maybe if the more concise Ruby 1.9 syntax for lambdas had existed earlier blocks wouldn't have been implemented). I think they are based on the idea that most HOFs only require one functional argument, and they encourage functional programming, but they (along with the last bullet point) discourage the general case. They provide some of the functionality of macros.

-you can't define a class inside a method (still haven't quite figured out the reasoning behind this one).

In general Ruby is definitely more powerful than Python, but in its higher ambitions suffers more obviously from clashes between functional and OO programming. Python's dogmatic insistence on "readability" makes it feel like an old grandma in comparison to Ruby.


> -lambdas can thus contain any expression

why would you want to use lambdas if you have generator expressions? eg:

    map(lambda c: c.strip(), countries.split(','))

vs

    [c.strip() for c in countries.split(',')]

IMHO the latter one is way more readable, and generators cover almost all the use cases for lambdas. If you could not do it with a generator expression and chose to use an anonymous function, it should have been a method anyway.
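One everyday case a comprehension can't express is a sort key, where a short lambda (hypothetical example) still earns its keep:

```python
# key= expects a callable, so a comprehension can't substitute here
words = ["banana", "Apple", "cherry"]
print(sorted(words, key=lambda w: w.lower()))  # ['Apple', 'banana', 'cherry']
```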

> -much bigger range of built-in methods and classes (all permutations of a list, removing duplicates from a list, Rational class, Regexp class, Range class, etc.)

you mean sets, dictionaries(hashtables) etc.? those are built in as well in Python.

Remove duplicates: mylist = list(set(mylist))

Range class: [i for i in range(10)]

you can import the rational class via a simple from fractions import Fraction

same for regex

> -more flexible syntax and "control operators" like each_with_index

you can iterate over dictionaries with for k,v in dict.items()

if you need the index of a list you can use for i,v in enumerate(list)

personally I don't think there is much of a difference between Python and Ruby. One is more mature, the other is way hipper, and it comes down to personal taste which one you choose.
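The equivalents listed above, collected into one runnable sketch (variable names are just illustrative):

```python
from fractions import Fraction
import re

# Deduplicate a list (note: going through set() loses the original order)
unique = list(set([3, 1, 2, 3, 1]))

# Exact rational arithmetic instead of floats
assert Fraction(1, 3) * 3 == 1

# Regexes via the re module rather than literal /.../ syntax
assert re.search(r"Smith$", "John Smith") is not None

# Index-aware iteration, no manual counter needed
for i, country in enumerate(["se", "us"]):
    print(i, country)
```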


> why would you want to use lambdas if you have generator expressions?

Lambdas (blocks) in Ruby are used for way, way more than just building or manipulating lists. For example, there is no need for the "with" keyword in Ruby, because you can just use blocks for that. As another example, you can specify a block to the Hash constructor to specify code that runs when a key is accessed that is not present.

> Range class: [i for i in range(10)]

That is not a range class, that is an expression for constructing a contiguous list of numbers. The Range class in Ruby is an actual class that you can instantiate and call methods on.

  ("a".."z").cover?("c")    #=> true
  ("a".."z").cover?("5")    #=> false
> personally I don't think there is much of a difference between Python and Ruby. One is more mature the other is way hipper and it comes down personal taste which one you choose.

They have similar capabilities but make very different choices in a lot of ways.


Thanks for mentioning sets - this is another thing that I would like to be built-in in Ruby. But "list(set(mylist))" seems like kind of a hack.

> you can import the rational class via a simple from fractions import Fraction

Sure, you can write a library for anything and by now Python has libraries for most everything. But it still makes a difference to have things in the core language, plus a nice syntax for them.

> why would you want to use lambdas if you have generator expressions?

This is a case of making up for odd semantic choices with special syntax, which then becomes an unnecessarily fixed-in-stone part of the language. Range objects in Ruby are actually like slice objects in Python. But Ruby separates the syntax for ranges from the get-item method, since there are plenty of other places where you would want to use ranges (then you can iterate and map over them, etc.).

Ruby uses blocks to implement things like an each method that fulfills the same function as Python's for statement. (Ruby has a for statement but it is hardly ever used.) In Ruby your example would be

    countries.split(',').map &:strip

which is obviously more concise and no less readable. Compare with

    countries.split(',').each &:foo

where foo is some side-effectful method.

IMO such a small change in behavior should result in a similarly small change in how the program is written.


Ruby does have sets:

    require 'set'
    Set.new([1,2,3])


>Range class: [i for i in range(10)]

How does this differ from list(range(10)) (besides being highly suggestive of the power of list comprehensions)?

>IMHO the latter one is way more readable and generators cover almost all the use cases for lambdas. If you could not do it with a generator expression and choose to use a anonymous function it should have been a method anyway.

This is actually why Guido van Rossum wanted to remove lambda from the core language: http://www.artima.com/weblogs/viewpost.jsp?thread=98196 (He didn't, thankfully. I still want reduce back in 3.x, though.)
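For what it's worth, reduce wasn't dropped entirely in 3.x; it just moved into the functools module:

```python
from functools import reduce
import operator

# 4! as a left fold: ((1*2)*3)*4
factorial = reduce(operator.mul, range(1, 5))
print(factorial)  # 24
```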


I did a switch in a reverse direction. From Ruby to Python. People actually shouldn't compare both languages in terms of language constructs and what or what not they got right. They are similar enough that it doesn't matter. What matters is library support and the community.

I got into Ruby by pure accident (No Rails-bait). I needed a scripting language to automate some tasks and one of my friends was playing with it back then and suggested it.

Ruby's weakness (and its strength at the same time) is that it got popularized by Rails. As a result a huge community has grown around it - but it brought a tunnel-vision effect along with it. People don't recognize that there are things besides Rails, and it's reflected in weak (non-web-dev) library support.

I didn't like the fact that there's a big push in Ruby-land for metaprogramming/DSLs and monkey patching. Ruby's not inherently that great at it. It just happens that you can do it, but that doesn't mean you should. It has always been in my eyes mostly a big abuse of *_eval, method_missing, etc. that leads to performance issues, maintainability issues, and so on. All that just to save a few keystrokes. Couple that with the very implicit nature of Ruby and you get a disaster. How often did I want to use libraries that turned out to be monkey-patching the same built-in classes and clashing with each other? How often did I want to delve into some code base to customize something, make a fix, etc., only to find such coding practices and be unable to reason about the code in the regular object-oriented fashion that Ruby's foundations are built on?

The more I wanted to build serious things in Ruby the more often I would hit such problems.

This all brings up a question: what would happen if Rails disappeared tomorrow? What would all these people do? What could they use instead?

It was an obvious choice not to stick with Ruby because it has nothing to offer in the long term over Python. Learning python is a very good investment - I can do web programming, gui programming, scientific computing. I can use it to script blender, gimp, you name it. And unlike Ruby - I have excellent alternatives for each task.


Ruby is Python's slutty sister. She's just as hot, but she'll let you do things to her that Python would dump you for even trying.


Blocks.

Ruby gets a lot of things wrong, but blocks are one thing it more or less does well. Everything good that has been built from/for Ruby owes a large measure of its success to Ruby's block implementation.

Python programmers are definitely looking up the power curve at Ruby.


I think you could pretty much substitute Python and Django with Ruby and Rails, and the article would still ring true. As a Ruby developer I've only had brief encounters with Python, but to me it seems like a language I would love for the same reasons I like Ruby.


As a Python user, I'll give my personal reason for using Python over Ruby. Upfront: I think Ruby is absolutely fantastic. It's a very elegant language. In a lot of ways I think it's cleaner than python. I'd love to use it.

Except...

Every interpreter I've tried is just too slow. I recently ported over a networking library I wrote in python to ruby, and the requests per second (CPU limited in this case, since it was talking over a loopback socket -- this was just a test of how fast data could be serialized/deserialized) dropped from 7000 requests/sec in python to 120 in ruby. And that was just with standard CPython. Under PyPy, I was able to process about 20,000 requests/sec after letting the JIT warm up.

I realize it's sort of silly to choose a dynamically typed interpreted language on the grounds of performance when you're already giving up so much performance regardless, but in my mind python hits the sweet spot of "just fast enough for most things I need to do", and with all other things being roughly equal (which, in my opinion, they roughly are), being an order of magnitude faster wins out.


Did you give JRuby a try?


I didn't. Perhaps I should, but it seemed like it would have to be at least 10x faster than YARV to be worth it, which looking at the benchmarks seemed unlikely (but who knows? maybe I was hitting some sort of interpreter specific bottleneck)


For some tasks, JRuby is faster than python and for others, it is slower, but the difference is not big. PyPy is significantly faster in most cases than both.


Is this code public?


In my opinion Ruby has a slightly more obscure syntax with custom symbols, and its C API is not as clean and well put together as Python's.


Which is more obscure?

Reverse a string in ruby: string.reverse()

Reverse a string in python: string[::-1]


Slice syntax in Python always bothered me. This would be slightly more readable:

    ''.join(reversed(string))
Except it requires you to have an idea of how iteration works (beginners don’t) and why that “join” needs to be there, and it’s still an order of magnitude slower. But hey. Ruby is prettier in some ways, Python in others, and beggars, as they say, can’t be choosers.
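A quick sanity check that the two spellings agree, plus a rough timing comparison (absolute numbers vary by machine; the slice is the faster one because the copy happens in C rather than through the iterator protocol):

```python
import timeit

s = "hello world"
assert s[::-1] == "".join(reversed(s))  # both give "dlrow olleh"

slice_t = timeit.timeit("s[::-1]", globals={"s": s}, number=100_000)
join_t = timeit.timeit("''.join(reversed(s))", globals={"s": s}, number=100_000)
print(slice_t < join_t)  # typically True
```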


Way way slower than the slicing.


What an odd example. Other than perhaps a palindrome detector, I'm having a really hard time coming up with a use for reversed strings. Python DOES provide a reverse for lists.


> As someone who loves Python and doesn't know any Ruby beyond a few bits of syntax and the obvious bits that are common to most languages, what am I missing?

Nothing. Everyone jokes about turing completeness, but Ruby and Python are pretty much exactly equivalent for their users. Python's got the upper hand in some areas, Ruby in others, but you'll get it done.

If Python works for you, use it. I didn't like it, the object model/runtime never felt right[1]. Ruby's did, it's internally consistent.

[1] The classic len(str), the underscored methods, recently decorators…


That depends on your taste buds. Mine prefer consistency. Ruby is a slippery slope when it comes to consistency, and it requires the whole community to change certain attitudes.


Ruby seems more consistent to me, as it follows the "everything is an object" design principle quite rigorously. Python does not have any consistent design principle like that at its core, as far as I can tell.


This is something that I struggle a lot between Ruby, Python, and JavaScript.

Ruby/JS's "everything is an object" is definitely consistent as a design principle. But somehow I just can't grok it or make it stick in my head every time I read Ruby/JS code.

I suppose as a user, I'm more concerned with readability (a.k.a. coding style) than with "everything is an object".


A key difference is that Ruby picks one paradigm (object oriented programming) and sticks with it (to a fault, potentially) whereas Python takes a multiple paradigm approach and various styles tend to be mixed together even in the best of conditions (consider len(str) vs str.length). You can attempt to ignore OO in Ruby but it's not recommended and under the hood, everything you're doing is forced into an OO paradigm anyway.
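Worth noting that len() is less un-OO than it looks: it simply delegates to the object's __len__ method, so it's still method dispatch underneath (Playlist is a made-up class for illustration):

```python
class Playlist:
    def __init__(self, tracks):
        self.tracks = tracks

    def __len__(self):
        # len(playlist) ends up calling this method
        return len(self.tracks)

print(len(Playlist(["a", "b", "c"])))  # 3
```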


As the author points out, a huge standard library. Sure, Ruby has everything you need if you pull in this gem and that gem, but you better hope the gem you need is working that week or you're going to waste a lot of time reading Github logs and back-tracking revisions.


If the distance between Ruby and Python is that between the Moon and the Earth, then the distance between Ruby/Python and any other common language is that between the Earth and the Sun.

You aren't missing much, comparatively.


I've used both. What you're missing out on is a better ecosystem of libraries and a community that communicates more and better than Python's imho.


I should probably point out that the bit I put in single quotes is not a verbatim quote.


I am kinda afraid of getting down voted, but I have used Python and I feel like I get all this and more (CPAN) when using Perl. The major difference in my opinion is that Perl gives you even more freedoms, which causes you to need some self discipline, but you get stuff done even quicker that way.

It feels a bit like Python is better for programming beginners or people that tend to be too lazy sometimes and Perl is for people who want to be able to be lazier since they know what they are doing.

Also, I think Python (and most other language projects) should clone CPAN (including CPAN testers). Seriously it's a major deficit.

I also think Python is the easiest to grasp for people coming from static languages. Perl and Ruby are way harder.


Like the author said,

you’ll spend a lot more time reading code than writing it, and [python] focuses on guiding developers to write readable code.

As someone who reads a reasonable amount of source, I'm grateful for that.


It feels a bit like Python is better for programming beginners or people that tend to be too lazy sometimes and Perl is for people who want to be able to be lazier since they know what they are doing.

I'm sorry, I don't understand this at all. What does this mean?

Also, I think Python (and most other language projects) should clone CPAN (including CPAN testers). Seriously it's a major deficit.

The Python Package Index (http://pypi.python.org/pypi) is no CPAN, but I've never had a problem with it.


The point about static languages is definitely well-made. Python's two key aspects are its lack of dynamism, its structural rigidity, and... oh, THREE key aspects!, because I forgot its famously rigorous static type-checking system.

Coming from a C/C++ background myself, I found this very reassuring. I don't like languages that keep you constantly on your toes due to providing insufficient structure and too much dangerous freedom and flexibility.


Ok, just as a disclaimer: I recently read Larry Wall's post on Perl design principles + natural language and am reconsidering Perl. But honestly all my previous experience poking around with Perl has been horrible... its syntax is pretty messed up to read... I guess it may not be so hard for a Perl expert.


Most Perl tutorials do a terrible job of explaining the two or three things you really need to understand to learn Perl syntax. I wrote Modern Perl to rectify that:

http://onyxneon.com/books/modern_perl/


Perl really has a problem here. Most people don't learn Perl as their first language. Perl is very flexible (There Is More Than One Way To Do It). I often come across code where I can see that it is from a C programmer. Especially when reading documentation of language bindings, which often is written by C programmers. Of course you could replace C with any other language, but really, one can use Perl in various ways. This is awesome when you know how to use it and can create short and readable code, like when you want to describe something in a natural language; when you really understand something you can make it easy. In fact Perl is heavily inspired by natural languages. Larry is a linguist.

So there are two ways. One is to read the Modern Perl book, which indeed is awesome. It's better than any other Perl book/tutorial, and really, there are ones that truly suck. What's awesome is that it's that good while a free version is available; in fact the source code can be found on GitHub.

It's really nice, because you will actually know how to do things, not just what the language is like. You will be ready to start your first project. It's also really nice, because it's written in a way that neither bores people who know how to program nor makes it impossible for someone who never programmed to understand it, even though it's probably pretty hard if you have no idea about programming at all.

An alternative is looking for stuff on CPAN or GitHub. I actually think that's often a better approach (at least it is for me). Learning to code by seeing actual code and having a reference. It's just kinda hard to grasp things.

Another approach is of course programming something and looking up how it is done. In this case it's maybe a good idea to look out for best practices.

Oh this is something else that's nice about Perl, but beginners sometimes don't know about.

  use strict; # Well, everyone should use that
  use warnings; # For more semantic errors
  use diagnostics; # For explaining errors/warnings in plain English
  use Perl::Critic; # A CPAN module (usually run via the perlcritic tool) that tells you what probably isn't a good thing to do
And then there is an awesome community, called Perl Monks, which maybe is a bit like what Stackoverflow is these days, but way older.

Anyway, all of this is covered in the Modern Perl Book. So go for that book if you want to get into Perl (again).

Also, Perl folks are usually not too angry about other languages. This is also a great thing. They actually talk about problems and fix them instead of being dogmatic. Perl developers usually just want to get stuff done. If you can't do something easily in Perl itself, then you will usually find a solution on CPAN (see Moose for -very- advanced OOP).

I would like to see a lot of these things in other communities, but a lot of what makes Perl so awesome is that it isn't the new thing, but stuff that already had a lot of evolution going on. Sometimes you don't want that, but Perl folks usually are friendly towards other languages. Well, besides PHP, but that's more a cultural thing. Perl culture alone is a reason for me to stick with it. It makes boring things more fun.

In other words: Give it a serious try. If you don't like it you can still go to another language and have probably learned a lot.


Python doesn't have the glyph noise that brace-based languages do, and well-written Python is beautiful and easy to read.

But lately I have been having an internal debate on whether using autodoc and inline docstrings detracts from its beauty and makes it harder to read because it reduces the amount of code you can see on the screen (see Steve Yegge's rant on this http://steve-yegge.blogspot.com/2008/02/portrait-of-n00b.htm... from a few years back).

If you use Sphinx to document a method's description, params, param types, and return type, then it can easily turn a three-line method into a 12-line method, and then you can only see ~5 methods on the screen at a time.
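For illustration, here is a hypothetical three-line method next to the same method with Sphinx-style info fields (the names are mine, not from the thread):

```python
def scale(values, factor):
    """Multiply each element of values by factor."""
    return [v * factor for v in values]


def scale_documented(values, factor):
    """Multiply each element of values by factor.

    :param values: sequence of numbers to scale
    :type values: list
    :param factor: multiplier applied to each element
    :type factor: float
    :returns: a new list with each element scaled
    :rtype: list
    """
    return [v * factor for v in values]
```

Same behaviour, four times the vertical space.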

So if the code is already readable, would it not be better to put the docstring comments in an external file to keep the code density?

Django does this, and I'm starting to think it's the right balance.


Docstrings should already be easy to detect by editors, so it would be trivial to add a show/hide functionality. Incidentally, one could also use Sphinx to create a plugin for that.


But not all editors have this, nor is it widely used in editors like Emacs.

And GitHub doesn't have it so it can make it harder to get a big-picture view and see the structure of a module when evaluating different libraries online.


I just noticed Sublime Text 2 has the ability to fold multiline docstrings. If only there was a way to do this automatically.


One man's "glyph noise" is another man's "taking advantage of the symbol characters to add expressive power". Of course, the end of that road is APL, but I'd argue that, for instance, /foo/ is more readable in code than regexp("foo") because it interrupts the flow of the line a lot less.


I generally write docstrings like "Return the n-fold frobulation around the center (x,y)." That's one line that way and about a half dozen in Javadoc style, without the latter adding much to pay its freight.

What would be worth putting inline but with auto-folding, to me, would be examples.


I work with Python daily. When I first started, I avoided a lot of the "fancy" constructs such as list comprehensions, dynamic arguments, and decorators, as well as metaprogramming.

The great thing is the bar to being effective in the language is so low. You can pretty much pick up most python code and just read it and get a general feel for what it does. Not only that but it is much easier to read because of whitespace delimiting.

I can generally tell the quality of a Python program by how nicely indented it is. If there are lots of indents, like 5 or 6 levels deep, I know something is wrong. For a newbie: if your code is hard to understand and it is in Python, you are writing it wrong.
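For anyone new to those "fancy" constructs, they look roughly like this (toy names, purely illustrative):

```python
from functools import wraps

# List comprehension: build a list in one readable expression.
squares = [n * n for n in range(5)]

# Dynamic arguments: *args/**kwargs accept whatever is passed in.
def log_call(*args, **kwargs):
    return "called with %r %r" % (args, kwargs)

# Decorator: wrap a function to add behaviour around it.
def shout(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, " + name
```

None of these are required to write effective Python, which is exactly why the bar to entry is so low.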


I find a great way to avoid deep nesting is to move code to functions, and bail out when some conditions are not met.

As an example, this deeply nested code:

    def frob(someargs):
        if something > 2:
            ... # do some processing
            if that == "CONNECT":
                ... # some more processing
                if verdict in blessedsolutions:
                    ... # even more processing
                    print "Happy birthday!"
I tend to write like this:

    def frob(someargs):
        if something <= 2: return # Not our concern.
        ... # do some processing
        if that != "CONNECT": return # Not handled here.
        ... # some more processing
        if verdict not in blessedsolutions: return
        ... # even more processing
        print "Happy birthday!"

I do the same thing in loops, using 'continue'. The second example above looks rather flat, but in practice the 'do some processing' statements often need indentations due to conditions and loops.
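The same bail-out style inside a loop might look like this (a sketch with made-up dict-based requests):

```python
def frob_all(requests, blessedsolutions):
    """Collect the requests that pass every check, skipping the rest."""
    handled = []
    for req in requests:
        if req.get("something", 0) <= 2:
            continue  # Not our concern.
        if req.get("that") != "CONNECT":
            continue  # Not handled here.
        if req.get("verdict") not in blessedsolutions:
            continue
        handled.append(req)
    return handled
```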

I added the processing to demonstrate that it's not always possible to combine conditions. And of course this way of 'bailing out early' vs. 'nested ifs and processing' is not Python specific. The dogma that a function should have only one exit point (as proposed by 'structured programming' adepts and Pascal aficionados) often leads to deeply nested control structures and convoluted testing. Brian Kernighan has written a paper on this[1].

[1] http://www.cs.virginia.edu/~cs655/readings/bwk-on-pascal.htm...


Neither solution is great. Excessive use of return and continue is an incredibly fast way of making code completely incomprehensible.

Such complicated control flows should be treated as a major, fundamental problem. Keeping them in place will cause bugs and prevent future contributors from having confidence that their changes are correct. The problem of complicated control flow is too deep to be solved with a band-aid like this, although perhaps syntactically it looks a bit nicer. In fact, this is exactly what people did before the so-called "structured programming adepts" came around, except using gotos instead of continues. It didn't work so well back then, and it doesn't work so well now.


It works fine as long as the early returns are for error-checking or default values. Generally, you want each function to do one thing and do it well; one very effective way to structure this is to bail out early if that one thing can't be accomplished, and then have the body of the function actually do it.

If the actual logic requires many branching paths, then usually I'd reach for a more structured solution like a chain-of-responsibility pattern, fanout, or FSM. Also, you can get pretty far with a bunch of predicate functions composed together with 'and'.
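The predicate-composition idea might look like this (hypothetical predicates; 'and' short-circuits on the first failure):

```python
def is_positive(n):
    return n > 0

def is_even(n):
    return n % 2 == 0

def is_small(n):
    return n < 100

def acceptable(n):
    # Compose the checks; evaluation stops at the first False.
    return is_positive(n) and is_even(n) and is_small(n)
```

Each predicate stays trivially testable on its own, and the composition reads like the business rule it encodes.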


Hmmm, you have a point there, but what is your proposal to solve it in a correct way, then?

Some problems need a deeply nested control flow and arcane business rules, you have to implement that somehow, and in such a case the method I use looks like the lesser of two evils. I'd rather have some flat ecosystem of meaningful functions than one monolithic deeply nested control structure.

From time to time I like defensive programming, and when I write a function 'frob' that promises to frob something, that function first makes sure that the thing can indeed be 'frobbed', and if not, bails out. This also works well for programs that should be idempotent (although that is a whole other issue).


The way I would do it is to push the processing out to functions, too. Something like:

  verdict = get_verdict(the_thing)
  if verdict not in blessedsolutions:
      return
I'd advise against running returns onto the same line - I find it makes it harder to read, particularly when you have a lot of checks like this.


"The way I would do it is to push the processing out to functions, too."

This is indeed a good idea to isolate code away, but it wouldn't alleviate a deeply nested control structure. It would make it somewhat easier to look at, though.

"I'd advise against running returns onto the same line "

I usually follow PEP8 and place returns and continues on the next line, but the code examples I posted earlier had to be a bit condensed to conserve screen space, that's why ;-)


Sometimes your problem space is really just that complex, in which case there's not much you can do, short of writing some sort of business rules engine.


You should try inverting that code with guard statements:

    def frob(someargs):
        if something <= 2: return

        # do some processing
        if that != "CONNECT": return
        
        if verdict not in blessedsolutions: return
                    
        print "Happy birthday!"
Code Complete 2 suggests this for many types of logic when overly deep nesting bites you.


A less-known technique for dealing with nested ifs is the use of a one-time loop. Here's an example (in JavaScript):

    function oneTimeLoopMethod(x) {
      var result, temp = -1, a, b;
      do {
        ... extra processing code ...
        a = func_a(x);
        if (!a) break;
        ... extra processing code ...
        b = func_b(a);
        if (!b) break;
        ... extra processing code ...
        temp = b;
      } while (false);
      result = postProcess1(postProcess2(postProcess3(temp))); // This step can be more complicated.
      return result;
    }
It's useful when you want to do additional processing before returning the final result to the caller, because otherwise you'd have to duplicate the processing code in multiple places. When I first read about this technique, its description cautioned about the lack of clarity. So, use it at your own risk!


Functions that need to employ this trick are often overly complicated and could use splitting into more manageable ones. For one, you could remove the need for do-while-false itself by moving its content into a separate function and replacing break with return.


Take this code snippet for example:

    function func(x) {
      var a, b, c, result = -1;
      a = getA(x);
      if (a) {
        b = getB(a);
        if (b) {
          c = getC(b);
          if (c) {
            result = calc(c);
            release(c);
          }
          release(b);
        }
        release(a);
      }
      return result;
    }
What would you do? Would you create a function for each of the nested block?

    function func(x) {
      return _calc1(x, -1);
    }

    function _calc1(x, fallback) {
      var a = getA(x);
      if (!a)
        return fallback;
      var result = _calc2(a, fallback);
      release(a);
      return result;
    }

    function _calc2(a, fallback) {
      var b = getB(a);
      if (!b)
        return fallback;
      var result = _calc3(b, fallback);
      release(b);
      return result;
    }

    function _calc3(b, fallback) {
      var c = getC(b);
      if (!c)
        return fallback;
      var result = calc(c);
      release(c);
      return result;
    }
Or, use guard clauses and write your code like this:

    function func(x) {
      var a, b, c, result = -1;
      a = getA(x);
      if (!a)
        return result;
      b = getB(a);
      if (!b) {
        release(a);
        return result;
      }
      c = getC(b);
      if (!c) {
        release(b);
        release(a);
        return result;
      }
      result = calc(c);
      release(c);
      release(b);
      release(a);
      return result;
    }
Instead of doing that, with a do-while-false loop, you can write your code like this:

    function func(x) {
      var a, b, c, result = -1;
      do {
        a = getA(x);
        if (!a) break;
        b = getB(a);
        if (!b) break;
        c = getC(b);
        if (!c) break;
        result = calc(c);
      } while(false);

      if (c) release(c);
      if (b) release(b);
      if (a) release(a);

      return result;
    }
Note that this type of deep nesting is pretty common with Windows-based COM programming, and it usually goes much deeper. With the do-while-false loop technique, you 1.) avoid creating one-time-use helper functions, 2.) consolidate post-processing/clean-up code, 3.) have only one exit point.

Can you think of a better way to tackle this problem?


Deeply nested code is usually a symptom of not thinking the problem through in advance and solving it with data structures, and instead relying on ad-hoc conditionals.

When I find myself testing against too many conditions, I usually refactor code to use a map between values and behaviour (e.g., a dict mapping regexs to function objects). This way you remove business logic from code, and are free to make it more algorithmic, like a simple mapping problem.
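A minimal sketch of that refactoring, with made-up patterns and handler functions:

```python
import re

# Hypothetical dispatch table: regex pattern -> handler function.
HANDLERS = {
    r"^GET\b": lambda line: ("get", line),
    r"^POST\b": lambda line: ("post", line),
}

def dispatch(line):
    """Find the first pattern matching the line and run its handler."""
    for pattern, handler in HANDLERS.items():
        if re.match(pattern, line):
            return handler(line)
    return ("unknown", line)
```

Adding a new case is now a one-line change to the table instead of another branch in a conditional tree.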


If there are lots of indents like 5 or 6 levels deep I know something is wrong.

"Flat is better than nested"

I keep a print-out of PEP-20 above my desk. It serves as an incentive to clean up my messy code ;)


It's important because it reminds me I should keep looking.

I should keep looking instead of turning it into my Blub. I love Python but its boundaries have been hitting me frequently for years. There probably never will be a total sweet spot between functionality, syntax, compatibility, support, and platform-crossing but I know I have to keep looking for a language that's more sophisticated yet more practical.


When I was at Google there was a paper titled "Please don't use Python for large programs." Can someone post that? It would be an interesting counterpoint.


I don't think it's public. In any case, its argument basically boiled down to "Python doesn't have static types, without static types it's too hard to communicate API information between programmers, Google has thousands of programmers that need to communicate, so please don't use Python."


what type of paper, Google's coding policy? there is a mail-list post, https://groups.google.com/forum/?fromgroups#!topic/unladen-s...


What language do they recommend for large programs?


don't know for sure, it might be C++, Java, and now they have Go.


I like Python the language, but the standard library can often annoy me as there is no unified style. thereAreMethodsLikeThis or_there_are_methods_like_this orevenworsefullylowercase. Gah.



That's a case where the standardization came after those libraries. It doesn't bother me that much, but I have a coworker it bothers a fair bit. I poke fun at him for this, but what he does is just take 15 minutes and write a "proxy" module (I think he was actually able to automate it...). If `baz.fooBar` is bugging him, he'll just create a `mybaz` module where `mybaz.foo_bar` either calls or is just a new name for `baz.fooBar`, and then he just does an `import mybaz as baz`.


My biggest gripe too. Hopefully the stdlib has gotten more love lately in Python 3.


Many years ago I decided it was time to pick up a modern programming language. In the past I had written lots of (Turbo) Pascal code, some x86 (and more exotic) assembler, and a bit of C, but then for several years I only did shell/awk scripts and ported C software to IRIX and SINIX/RU.

So I sat down and decided to find a nice general-purpose language that I would focus on learning. I wasn't quite sure what for yet, but I decided I was going to pick one and stick with it. I didn't have the time to learn multiple languages and I didn't have a lot of time to try out different ones, so I just looked around a little bit. Some I ruled out early: C and its relatives were just too verbose; I didn't want to spend the time writing all the error handling necessary just to read a damn file. I was doing a lot of sysadmin-type work at the time and had therefore developed an intense dislike of Java. This was based on too much badly packaged Java software that, even if you got it running at all, was sluggish, hogged way too much memory, and crashed with arcane error messages. I did like the quick & dirty way I could make things work in a shell script, so I looked mostly at scripting languages.

I looked at Perl, which was the standard scripting language at the time and had a huge library (CPAN) of modules, which was great. I hated the syntax though and found the code often unreadable and there being too many ways of doing the same thing.

I looked at Pike, which had the benefit of using a C syntax (which I already knew, so that was a plus), but it seemed not widely supported and nobody was using it (this hasn't changed since, it seems).

I don't remember what else I looked at, but then there was Python. The syntax was a revelation. Here someone had finally made the most of creating a language from scratch and spent the time thinking about syntax and how to do away with largely pointless stuff (brackets), replacing it with things that everyone did anyways (indentation). It also came with a sizeable library of stuff that did almost everything I could dream of at the time. It was love at first sight.

Recently I've spent £35 on a stainless steel mill that grinds salt when you twist it one way and pepper when you twist it the other way, just because I really love well thought out and engineered things that make stuff that's been around forever just a little bit better. That's how I feel about Python. It's not doing anything fantastically new or unique, but it's doing the stuff that others do just a little bit better.

Today I'm doing lots of web stuff, some sysadmin stuff and random hacking of various things on the side. I'm still doing almost everything in Python and whenever Javascript bitches at me about a missed semicolon I do think: "All these fantastic things you can do in a web browser these days and you couldn't figure out that this command in an entirely different line isn't part of the previous line?".

The only thing that I can't do in Python is write software for my iPhone, which bugs me a bit. But every time I try to get into Objective C I just get annoyed by the syntax and amount of cruft I have to write to do anything.


I'm still doing almost everything in Python and whenever Javascript bitches at me about a missed semicolon I do think: "All these fantastic things you can do in a web browser these days and you couldn't figure out that this command in an entirely different line isn't part of the previous line?".

I hear you about Python, but JavaScript doesn't require semicolons either, as far as I know. The main reason it's advisable to put them is to avoid ambiguity when 2 statements find themselves on the same line, for example, in a minified js file. Python uses semicolons for the same reasons, but minifying Python files is quite uncommon (and would probably serve no real purpose).


JS semicolons are actually encouraged; omitting them, while syntactically valid, can lead to obscure bugs. http://bonsaiden.github.com/JavaScript-Garden/#core.semicolo...


If you are following Crockford (JavaScript, the Good Parts), it is encouraged. However, some, like the current node.js maintainer think otherwise; see http://blog.izs.me/post/2353458699/an-open-letter-to-javascr...

Personally, I still follow Crockford's advice though.


And you guys just demonstrated yet another reason why I choose Python as my go-to scripting language. Semicolons are completely redundant in Python, and the language is so clean and the style guide PEP 8 so natural that you rarely hear the meaningless formatting-style bickering you get in other languages. Saves so much time from arguing.


You can write in Python for Android, in case you're considering changing your phone so you can write in Python for it:

http://google-opensource.blogspot.com/2009/06/introducing-an...

http://code.google.com/p/android-scripting/

https://github.com/kivy/python-for-android


Python does have the strength in all the powerful libraries it makes available to you, and that may be a determining factor in the relative popularity of Python v. Ruby in the future.

I've used Python for many years but recently took a serious look at Ruby to see what it has. I haven't looked back - I found Ruby to be less verbose and still easy to read, and very enjoyable to code in. Python was enjoyable too, but for me, Ruby much more so.


Mildly offtopic, but do you have a link to that mill? Or, if it is patented, can you copy the patent number off the side? I love mechanisms like that.



Syntactically, JS is funny in a lot of ways.

I highly encourage you to investigate CoffeeScript.


As someone with lots of Python experience, having used CoffeeScript/JS for ~three months makes Python look somewhat old-fashioned when I switch back. If Coffee/JS had some way of overloading array-access operations, and maybe a numpy equivalent, I don't think I would have any reason to go back to Python. It's also just so much faster than CPython...


I actually quite like JavaScript, especially for its speed. However, the major obstacle for me is that JS is locked into web development. It doesn't even have a standardized module to read files line by line, let alone other system-level stuff. One of the (small) reasons that Python is not used in web browsers is the lack of speed.


The upcoming version of JavaScript--some engines already support this, I think--includes support for "proxies" which basically let you overload both array access (a['b']) and . access (a.b) for an object.


Please share your data showing that Coffeescript is faster than CPython


Sounds like Apples to Oranges...


Yes, but it's an Apple with multiline lambdas vs. an Orange with only single-line lambdas.


"Weirder languages (e.g: Haskell)" sounds like something a Blub programmer would say.

How is Haskell "weird"?

I've used Python for many years, but only recently have I migrated my last Python niche (simple scripts) from Python to Haskell. I've found that even there, Haskell has become more productive.


> How is Haskell "weird"?

Functional when the world (of widespread computing languages) is imperative, lazy when the world is eager, object-less when the world is object-oriented, curried when the world is uncurried, pattern-matched when the world is conditional, and it brings up "strange" concepts from mathematical bowels like "monads", "functors" or "zippers" when the world barely iterates over arrays.

How is Haskell not weird from the point of view of 99% of the developers population? Knowing, liking or using Haskell is not even relevant there, Haskell is weird because it is an alien to the majority in the same way Smalltalk, Erlang or Clojure are weird (but for different reasons, for the most part).


The world is mostly parallel and asynchronous, yet most languages are sequential and synchronous. And the object-orientedness claim is overstated: you drink from a cup rather than think of it in terms of cup.spill(); signals propagate, but 'return' is nowhere to be found... unless of course you are just talking about programming, but that's a bit of a circular argument.


I understand "different". Not "weird".


They reify to the same thing, something different looks weird, and something weird is different.


You seem to be attaching a non-existent negative connotation to weird. Weird means different. Weird is "more" different.


I believe that sentence was meant to be a joke by addressing common generalizations about other languages.


Python is great in every way except one -- it isn't Javascript.

(to clarify: I'm not saying JS is a superior language. I'm just saying it's going to win out in the end because of the history of its implementation. There is a serious benefit to having one language everywhere, and JS is the only viable candidate because of the investment in client-side sandboxing and performance -- two features which turn out to matter almost as much on the server as well.)


I wasn't aware that Python and JavaScript were in competition. Or for that matter, that JS is by any stretch of the imagination "invested in performance".

People don't use Python because of its merits as a programming language. They use it because of the excellent standard library, Scipy, Sage, etc. While it may be that in a decade JavaScript has built an ecosystem to match, I frankly don't see why anybody would bother. We already have Python, why would we invest such effort into a less suitable language?

JavaScript is tolerated by web developers, because they're forced to use it on the client, and so there's a benefit to also using it on the server to prevent duplication of effort. But there is absolutely no reason why non-web developers would or will ever use it outside of that niche.


> non-web developers

Sorry, not sure what that means </sarcasm>

OK maybe not everything will be a web app, but pretty much everything that isn't some deep driver-level boot code or embedded firmware is already almost always part of some sort of web app, and the stuff that isn't is pretty much all written in C/C++.

When is the last time you wrote a Python program that didn't interact over TCP or equivalent with some other process? And how often was that other process not something at least nominally webby?


    > When is the last time you wrote a Python program that
    > didn't interact over TCP or equivalent with some other
    > process?
My last python program was a set of scripts to help me analyze music harmony. Entirely CLI based.

Before that? objcfix - a refactoring tool. Again, CLI based.

That's not counting the multitude of small scripts I write on a day to day basis. I've written a large amount of Python code, and very little of it interacts with a network.

Put it this way: Python was invented in 1991, the same time as the web. It didn't become a good tool for making websites until after 2005 when Django arrived. So, what did people do with it before then, if not make websites?

    > but pretty much everything that isn't some deep driver-
    > level boot code or embedded firmware is already almost
    > always part of some sort of web app, and the stuff that
    > isn't is pretty much all written in C/C++.
This is not the reality we live in. There's:

* Games (either written in C++, Objective-C or Java nowadays, plus a bit of Lua)

* Big Software: MS Office, Photoshop, Avid, Logic, InDesign, etc. Written in C++

* Scientific Computing: Written in C++, Matlab, Mathematica, Python

* Financial Computing: C, C++, Java, and a tiny spot of OCaml.

* iPhone/Android Apps: again, Objective-C or Java.

* "Enterprise Software", whatever that is. Java.

And as you point out, embedded software. Everything around us runs Embedded C.

The future is the same as the present: full of a rich and diverse selection of languages.


Mostly true 'cept for Python's built-in pasta makers:

http://www.python.org/dev/peps/pep-0318/


Decorators can be misused, but they can also succinctly express an aspect of the program that is repeated many times in different contexts.

My own rule is that a decorator should do one thing, and that thing should be described by its name.

Being able to look at a method that has the @has_permission decorator on it and know that it is getting checked for against a package wide permission model is more clear than having boilerplate to check permissions on every method that needs that check.

Any language construct can be misused. And people can hide stupid things in decorators and name them in misleading ways. That doesn't mean that it's the fault of the syntax.
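A guess at what such a permission decorator might look like; the permission model here (a set of allowed actions stored on the user) is entirely made up for illustration:

```python
from functools import wraps

def has_permission(action):
    """Only run the wrapped function if the user may perform `action`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if action not in user.get("permissions", set()):
                raise PermissionError("%s lacks %r" % (user.get("name"), action))
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@has_permission("delete")
def delete_record(user, record_id):
    # The permission check already happened in the decorator,
    # so the body contains no boilerplate.
    return "deleted %s" % record_id
```

The check is stated once, at the definition site, instead of being repeated inside every method that needs it.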


Yeah, that's fun ;-) I wish there was a

    if main:
        ... # do some processing
instead of having to write:

    if __name__ == "__main__":
        ... # do some processing
That'd make DHH happy, too ;-)


I find that the best way to fix this is to write your program as a library, then the part that you're supposed to run as a script which imports and runs it. No __underscore__magic__ required.
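A sketch of that split, with hypothetical module names:

```python
# mylib.py (hypothetical): all the real logic lives in importable functions.
def main(argv):
    """Entry point; returns an exit status."""
    print("got %d arguments" % len(argv))
    return 0

# run.py (hypothetical) is then just a thin launcher, with no
# __name__ check needed because it is only ever executed directly:
#
#     import sys
#     import mylib
#     sys.exit(mylib.main(sys.argv[1:]))
```

The library stays importable (and testable) on its own, and the script carries no logic worth testing.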


I could not agree more. After learning the python idioms, I realized how much time I have wasted doing similar things in PHP. Python has gotten lot of things right. I don't feel cramped by a language as I have with Java.

Python idioms : http://python.net/~goodger/projects/pycon/2007/idiomatic/han...


True. The real magic: the basic constructs are so simple, so how is the output so robust? And picking up old or library code is always easy!


Executable whiteboard.


I have a list I wrote up once about Python's problems. Some of these are more exotic than others. Most of them are kludgearoundable. A couple are fixed in Python 3. Some are simply design choices that are exactly different from my mental model of the world, and due to the TOOWTDI Python world, it is frustrating to work with them.

Some Things Wrong With Python

* Immutable strings[0]

* Everything a reference[1]

* Environment copied on loop entrance (implying assignments often break when done in loops)

* Lack of braces[2]

* Lack of enums[3]

* Standard debugger pdb reminds me of the first debuggers (used on the PDP-1 and EDSAC) in its feature list.

* Exception-oriented design, which clutters code with "try/catches" everywhere.

* Exceptions aren't just exceptions, they are also overloaded to provide signals.

* Two forms of objects (old style vs. new-style)

* Object inheritance is pointer to parent. Metaprogramming becomes an exercise in hackery.

* Objects are actually dictionaries with some pointers behind the scenes.

* Duck typing will automagically cast certain things[4]

* As an interpreted language, code that is not tested can be assumed to be syntactically correct but in error (this is a horrible problem when testing for rare error conditions)[4.5]

* When Python is fast, it's because it's calling out to C.

* Python objects are gigantic. An empty string is 40 bytes, for example. This adds up.

* Python can not meaningfully multithread, due to the global interpreter lock.

* Python suffers from many different versions being installed on different versions of Linux[5]

* Lambdas are totally broken[6]

* Large-scale module importing is a half-baked job[7]

* Python 3 routes around some these issues by using generators for everything[8].

[0] Ever try to step through a string and modify it in-memory? Well, you can't. Sorry. ;-)
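(For the record, the usual workaround: copy into a mutable container, modify, and join back; bytearray exists for the bytes case.)

```python
# Strings can't be changed in place, but a list of characters can.
s = "hello"
chars = list(s)
chars[0] = "H"
result = "".join(chars)  # a new string, "Hello"

# For bytes, bytearray is mutable in place.
buf = bytearray(b"hello")
buf[0] = ord("H")
```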

[1] I.e., a function can modify something unexpectedly.

[2] Which means block commenting or commenting out tops of loops means a complete reindenting of the block, that is, editors can't do the smart thing, they don't have enough context. This is a waste of the engineer's time. Delimiters were figured out in Lisp, a very long time ago.

[3] This was figured out long ago as well.

[4] But not others. Object design is a bit funky.

[4.5] This is a general design 'con' in dynamic languages. It's partially solvable with a sufficiently smart compiler, most compilers aren't that smart.

[5] This is why serious Python programs (that don't come bundled with their own version of Python) are written to target Python 2.4, released in 2004.

[6] A 'lambda' function in Python can not have statements. Most interesting functions have statements.

[7] Python (similar to Java) relies on a very specific directory structure for a given program to be able to import libraries. This means that if you are doing anything remotely exotic with layouts (e.g., libraries are in a peer directory, not a child directory), you have to commit hackery.

[8]This avoids the fact that the end result has to be emitted in some fashion.


You forgot mutable default arguments!

    def fn(ls=[]):
      ls += [1]
      return len(ls)
(My syntax might be a little off, but you get the idea.)

This will return an ever-increasing number if called repeatedly with no arguments.

Also, the reliance on tuples, particularly tuples of one item always gets me because (1) and (1,) are different.

Also, the scoping for list comprehensions doesn't make sense:

    x = 11
    [x for x in [1,2,3]]
    print x # 3!
This has also gotten me in the past.


Just for the record, list comp scoping is fixed in Python 3:

    $ python3
    Python 3.2.1 (default, Aug  1 2011, 14:47:14) 
    [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> x = 11
    >>> [x for x in [1,2,3]]
    [1, 2, 3]
    >>> x
    11


The problem is not that the argument is mutable; it's that the default argument expression is evaluated once, at definition time. Mutable arguments are quite useful, and it's consistent to be able to use default arguments in this way; the argument just shouldn't outlive the function execution, is all.


yes, those are nasty little gotchas.


I stopped when I found "Lack of braces" in your list. If you don't like it, fine, but in my opinion it's a major plus for Python. Also, "everything is a reference" is 1) not true for everything and 2) a smart thing to do. Many languages do that.


You think immutable strings is a bad thing?


Yeah, I goggled a bit at that one too. Particularly when it's immediately followed by a complaint about things being modified unexpectedly.

Some of the complaints are not really issues; e.g., you can comment out loops by adding an "if 0:" after commenting out the top line, or by surrounding the block with triple quotes """...""".
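
For example, the "if 0:" trick (a sketch with made-up names):

```python
items = [1, 2, 3]
out = []

# To disable the loop, comment out its header and put "if 0:" in its
# place -- the body's indentation stays valid and nothing needs re-indenting:
# for item in items:
if 0:
    out.append(item * 2)

print(out)   # [] -- the block is skipped
```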

[5] is an outright lie. Major Python apps support 2.4 onwards, true, but virtualenv (a common piece of Python infrastructure) can support arbitrary libraries and versions of python: http://stackoverflow.com/questions/1534210/use-different-pyt.... And I've never heard of duck typing 'auto casting' anything - code example?

Most of the rest is either a style thing, e.g. "lambdas are broken!" (just define a function first and then use it, or use a decorator; you can't write Lisp in Python), or else broken in other languages too. [4.5] is true for any language, including compiled statically typed languages, not just Python.


Python mixes immutability and mutability in ways that require careful thinking about the problem if you want to write unbuggy code. E.g., does a tuple holding lists allow modification of the member lists? Intuitively, it would seem to me that immutability should cascade, but thinking about tuples as holding a bag of pointers suggests that the lists are still mutable. Anyway.
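
For instance:

```python
t = ([1, 2], [3])
try:
    t[0] = [9]           # TypeError: tuples don't support item assignment
except TypeError:
    pass

t[0].append(99)          # ...but the lists inside are still mutable
print(t)                 # ([1, 2, 99], [3])
```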

I generally take the position that if you have to work around something in your language, it's broken, e.g. the virtualenv hack. It's nice to deploy an exe and not require a whole ecosystem to be installed in order to bootstrap it up.

Happy hacking.


Virtualenv is a pragmatic approach to solving actual problems during both development and deployment. As long as you have a python executable (it doesn't have to be installed system-wide) and the virtualenv.py module, you can create a virtualenv. How is that "a whole ecosystem"?


Virtualenv isn't a hack. How else would you run multiple different versions of Python or a library?


I think immutable strings are in general a good thing, but for some applications, having mutable strings will make my life much easier.


Besides PDB, you are really just complaining about design aspects.


Perfect. But please tell me ONE language which fixes all of the above while keeping the merits of Python.


It is a general rule that no language makes everyone happy all the time. Seeking the ideal, however, is fine in my books.

I am not into purity in languages, as a personal rule of thumb. I favor Common Lisp for applications, Perl for scripts, and C (or C++) for driver layer code. Those are considered pits of impurity quite often.


"It is a general rule that no language makes everyone happy all the time"

That's the point and you knew that.


Scheme? (Something like Racket.)


+1 for the exhaustive list.



Haskell is not weird.



