Learning to Program: Why Python? (udacity.blogspot.ca)
94 points by camlinke on May 8, 2012 | 84 comments



>The similarities between these languages are as complete as the similarities between driving in Europe and America: In the same way an experienced driver from one continent can improvise with a reasonable degree of safety on the other, an experienced Python programmer can be up-and-running with C# in a day or two.

Absolutely love the analogy. It's a great display of leveraging the English language to convey complex concepts in a down-to-earth fashion. Even as engineers (or perhaps because we are engineers), getting our ideas across in understandable, layman terms is a potent weapon.

(though admittedly, the latter part of the excerpt could perhaps be simplified)


But that doesn't explain why Python and not C#. Can't a C# programmer be up-and-running with Python in a day or two too?


Python has less syntactic overhead: no semicolon-terminated statements, no braces, etc. That doesn't make Python "better," but to a newb, the less the better. If less comes in a real language, so much the better.

Variable declarations ... you could go either way. I like that Python is closer to a language that "knows what you mean" even when you don't declare a variable.

You can learn discipline later. Learning to program is a chicken-and-egg problem: you have to learn two things, a language (any language) and programming concepts, and unfortunately you have to learn one to do the other. Python's forgiving nature makes that a bit easier than some other languages.


Yes, probably. The point is that Python has a lower barrier to entry, mainly because it doesn't force you to learn anything about OO first. That is why Python and not C#.

The point being made is that learning one language makes it vastly easier to learn another, so you may as well start by learning one that is designed to be learnable rather than one that is necessarily in demand.


Probably because not everyone is learning to program on Windows.


You can use Mono to program C# on Linux or Mac.


Yeah, but chances are your Linux or Mac users will already have python installed via their OS.

The biggest challenge for Windows users is remembering to go to python.org and not python.com to obtain their version.


And if you look at any of the udacity office-hours videos, when you see laptops, they're almost always Macs :-)


This is for people learning to program, remember? If they have to install Mono, that's just an extra hurdle, and doesn't get them any closer to their goal vs. just using Python.


Either way you're going to have to install something: either a Python runtime or a Mono/.NET runtime.

Plus you're going to at least want to install a better text editor and probably set some environment variables.

The installation process for all of these things shouldn't be a big hurdle since all you'd have to do is follow a set of steps that can be easily demonstrated.


Thought the article was going to be a question as to why. But I wholeheartedly agree as to why Python is a great first language.

The simple fact that it's syntax-light will always be the #1 reason it's a great introduction. Worry about the syntactical hangups once you have the core concepts of programming down.
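
For what it's worth, "syntax-light" is easy to demonstrate: a complete, runnable Python program needs no includes, no class, and no main() before a beginner sees output.

```python
# A complete Python program -- no boilerplate to explain first.
greeting = "Hello, world"
print(greeting)
```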

I never understood those CS courses that have you diving in blind with C++


>I never understood those CS courses that have you diving in blind with C++

C++ made more sense to me than all the CS programs that were switching to Java for their intro to CS courses when I was coming up a decade ago.

C/C++ may have overhead that's confusing to a beginner out of the gate, but Java just amplifies that.


Everyone learns differently, so hey, to each their own. I dabbled for years throughout my HS life in PHP, but when it came time to actually learn, I was completely put off by my first intro CS course, which used the aforementioned 'dive head first into C++!' method. I left the course, and was one of 18 to do so.

Started learning Python on my own around age 19 and everything just skyrocketed from there. I'm much more comfortable using (saying that lightly) C++ these days, but I can't imagine ever recommending it to even an enemy as their first language.

An acquaintance on a board somewhere put it best: "C++ is like a hurricane. Gorgeous but incredibly destructive." Personally I think it all looks hideous, but I get the point now.


Oh, I agree that C++ doesn't make much sense as a first language in 2012. I was just pointing out that a few years ago, the trend was toward teaching Java which is, in my opinion, even worse as a first language because you're forcing people to immediately dive into dealing with classes before they even understand what a class is.

Today, I'd agree that Python -- and possibly Ruby -- is a better first language, but when this was happening they weren't particularly mature. You just didn't have the kind of mature, syntactically-clean, object-oriented languages available that you do now. Even so, I never understood why you'd choose Java with all its excess syntax to teach a beginner.


Probably the age-old reason: it's what most programmers end up using throughout their business life. Always been my guess, anyway.


Agreed. I may not be a fan of Python in general, but it is a good language for instruction for precisely that reason: less incidental complication so you can focus on inherent complexity.

As someone who’s well versed in C++, and who had to tutor CS students in it, I certainly wouldn’t recommend anyone use C++ unless they already know it and can use it effectively.


I never quite understood why people think that teaching beginners programming through languages like Python and C# is a good idea. It's really hard. Think about it.

It may seem easy to start writing programs in those languages, but understanding what is actually going on is sort of difficult for someone who has no idea how things work. When you're implementing a linked list or a binary search tree for the first time, it's important to understand how your data structure is laid out in memory, otherwise it's a pointless exercise. Higher-level languages are convenient, but they hide a lot of what is going on, and that, in my opinion, is bad, if you are learning.

I think the best language to start with is C (and not C++, of course). One must understand pointers and bytes and bits and memory management and all that shit that leads to segfaults and many hours of debugging to be a programmer. I would even go as far as to say that one should start with an assembler and not C, but given the immense complexity of contemporary hardware, I understand that such approach is infeasible. However, teaching kids programming with a physical (not emulated), really simple, toy CPU is probably a good idea. Too bad I don't know any such CPUs.


IMHO, this is bad advice. It is like saying if you want to teach how to change oil in the car, teach the entire automotive engineering curriculum.

Introducing extreme complexity so early in the process turns off people; they give up easily and (falsely, in many cases) start believing that programming is not for them.

The first introduction should have enough fundamental programming concepts to enable the students to make something useful and tangible, by themselves. As the interests and the objectives grow, they can delve deeper into the domain.

I think with Khan Academy, Udacity and others, we are entering a new paradigm of "just-in-time learning". Schools should focus on teaching smart learning techniques, critical thinking, etc., instead of cramming more and more topics into their curricula.


Is it true? It's easy to spout platitudes about what learners should and should not have to learn and why, but have they been tested? A lot of people who went to my school would have preferred a curriculum based on Python and writing games, but the ones that graduated seem pretty glad they instead endured a curriculum based on C and culminating in compiler writing.

I'm happy to live in a post-Alan Kay world, but I think maybe it's time to dust off that sculpture of Dijkstra we keep in storage and move it out into the foyer. Programming is hard; glossing over what makes it hard in the name of doing stuff sooner may or may not help, but as long as we debate it without testing it empirically we're sure to get nowhere.


That is indeed a good question. One of the key challenges in reforming education is that the feedback cycle is long; it is very time-consuming to track experiments. So I should add that what I stated was my hypothesis, reinforced by the small dataset of my experience teaching 'Computing & Internet' to middle school kids.

I have to clarify that my comment was targeted at current Udacity (and other MOOC) participants who want to learn programming as a value-added skill or hobby.

If you want to graduate with a CS major and/or want to pick CS as a profession, I do believe you need to have a good understanding of the entire stack. There are no shortcuts to being a good professional, and achieving mastery in any field requires committed hard work.


I completely agree with this. Nothing is more frustrating as a beginner than getting confused and not even knowing what questions to ask.

When you introduce concepts such as addressing/pointers (which are typically in hexadecimal) and memory management, you add many more moving parts to the mix. In my opinion, abstracting a lot of this stuff away helps lower the number of opportunities for a beginner to get caught up.


> When you're implementing a linked list or a binary search tree for the first time...

You're about 30 levels above an actual beginner. You have to think at a much lower level of capability, like this:

> When you're trying to print "Hello World" you have to know all this hard stuff like how to type "python" into PowerShell, or even just install Python on Windows and fix its broken PATH settings.

That's where you need to be thinking if you're thinking about what a beginner struggles with.


The canonical IT answer applies here: it depends.

Learning C is good if you want to learn most of the canonical CS stuff - linked lists, bla, bla, bla.

If you want to recommend it to someone who is learning how to program so that they can get their web startup off the ground (where Python or Ruby would be a better fit), or someone who is trying to parse CSV files for work (C# or Python), then you can, politely, fuck off and die.

> One must understand pointers and bytes and bits and memory management and all that shit that leads to segfaults and many hours of debugging to be a programmer.

No, no really you don't. Perhaps to be some sort of archetypal "Real Programmer", but that's different from learning to program. There are all sorts of other factors at play: motivation, being able to get something working relatively quickly and iterate, being able to show friends/colleagues what you're working on, accessibility of libraries, etc.

Disclosure: I wrote a book on how to program in Python: http://manning.com/briggs/


> No, no really you don't. Perhaps to be some sort of archetypal "Real Programmer",

So, are you saying that if you asked a candidate during a job interview what malloc is, or why inserting things into a linked list is faster than inserting things into an array, and he gave you a blank stare, you'd still hire him?


Who says inserting into a linked list is faster than inserting into an array? If the array is pre-allocated, or, even crazier, a C++ vector or C# List<> with an amortizing growth function that only needs to be invoked once or twice, it'll be much faster than inserting a new node into a linked list.

The array/list can be pre-allocated to the size you need and have good cache locality and minimal container overhead -- the linked list will have the overhead of the pointer to the next node (2x for a doubly-linked list) plus the type descriptor (if you're using a language like C#, Java, Python, etc.) per node, and data located all over the heap with cache misses everywhere.

<sameCondescendingToneAsParent> If I were asked during an interview why inserting things into a linked list is faster than inserting things into an array, and I provided an answer that made him stare blankly back at me, would I want to take the job? </sameCondescendingToneAsParent>

Perhaps you're getting a guy switching careers or fields of expertise (web -> systems, or accounting -> embedded), or someone who just has that gap in their knowledge, or someone who's just having a plain ol' brainfart at that point in time. No need to be as rude as I was in my third paragraph.


> Who says inserting into a linked list is faster than inserting things into an array? If the array is pre-allocated, or even crazier a c++ vector or c# list<> with an amortizing growth function that only needs to be invoked once or twice, it'll be much faster than inserting a new node in a linked list.

I said insert, not append. What made you assume the question was about inserting at the end? Even if you have a big pre-allocated array, inserting an element somewhere in the middle will still cost you the price of shifting the "tail" of the array to make space for the new element, while the list will not have that cost.
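
A quick sketch of that distinction (the `Node` class and `insert_after` helper here are invented for illustration): a mid-sequence insert is two pointer assignments in a linked list, but a tail-shift in an array-backed list like Python's.

```python
# Minimal singly-linked list node, purely for demonstration.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # O(1): rewire two pointers; nothing else moves.
    node.next = Node(value, node.next)

# Build 1 -> 3, then insert 2 after the head.
head = Node(1, Node(3))
insert_after(head, 2)
assert [head.value, head.next.value, head.next.next.value] == [1, 2, 3]

# Python's list is array-backed: a middle insert shifts the tail.
arr = [1, 3]
arr.insert(1, 2)   # every element after index 1 moves one slot right: O(n)
assert arr == [1, 2, 3]
```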


Now you're just being an ass.

What makes you think insert is important anyway? The typical use cases for an array/list are pop, append and indexing. Python's list implementation uses arrays, not linked lists, internally for exactly this reason: indexing being O(1) is more important for most purposes than fast inserts. (http://docs.python.org/faq/design.html#how-are-lists-impleme...)


The purpose of asking this question is to find whether the person who answers understands the subject matter well enough to reason about it and arrive at the correct answer. It doesn't even matter if insert is a typical use case or not; if the candidate knows data structures he won't even think about it.

And what's with the name-calling? Aren't you capable of having a normal, polite discussion?


I know lots of really good programmers who wouldn't necessarily be able to describe the details of linked list algorithms or malloc. They're doing high level stuff with numpy/scipy, or working on web frameworks, and whether they know malloc or not is irrelevant.

You keep talking about "the subject matter" and "programming" like it's one topic. It's not, hence the "you're an ass" comment - you're just repeating your argument ad nauseam and ignoring everything everyone else has to say.

It's possible to be a good programmer without knowing how linked lists work in excruciating detail - get over it.


> *It's possible to be a good programmer without knowing how linked lists work in excruciating detail - get over it.*

It looks like our definitions of "good" and "programmer" are somewhat different.


People used to make similar arguments in favour of assembly language, i.e. you can't be a programmer unless you can drill down to the bare metal and make it do backflips / squeeze every last bit of performance out. Now computers are faster, compilers are smarter at optimising, and that's only really the case if you're doing hardcore kernel programming or writing a game engine. C, pointers and linked lists are going the same way as computers get faster still: they're not very relevant unless you're coding something in a specific niche.

And I've seen people who were the opposite of what you're saying - people who knew lots about low level C/Assembly stuff, but who couldn't write clean, maintainable high level code to save their life.

But to get back on the original topic - none of that is relevant to learning how to program. Loops, variables, data structures, managing state, functions and classes are more what you want to teach to start with, and are far more relevant to most programmers than malloc or linked lists.


>what malloc is

There is no reason someone who never programmed C would (or should) know what function is used to allocate memory in C.


Fair enough, but don't you think they should at least know that memory is a resource that needs to be allocated and deallocated, and wonder why they never have to deallocate memory themselves when writing Python?


Not all languages are (or should be) like C, and not all languages require manual memory management.

If they are going to be programming in Python, there's probably no need for them to be familiar with memory management, or the underlying details of the language implementation - the language is there to insulate you (to a greater or lesser extent) from the machine. Later, if it becomes useful to learn about memory management in other languages, they can do so if they are moderately intelligent; it's not rocket science, or something that must be learned first or not at all.
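
For the curious, CPython's automatic reclamation can even be glimpsed with `sys.getrefcount` (a rough sketch; the exact counts are an implementation detail):

```python
# CPython frees an object once the last reference to it disappears,
# so the programmer never calls anything like free() by hand.
import sys

data = [0] * 1000
alias = data                     # a second reference to the same list
count = sys.getrefcount(data)    # includes the temporary argument reference
assert count >= 3                # data, alias, and the call's own reference

del alias                        # drop one reference; the object lives on
assert data[0] == 0
del data                         # dropping the last reference lets the
                                 # runtime reclaim the memory automatically
```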


That is not the same thing as asking about malloc.


I don't think understanding memory mapping has anything to do with data structures. I have no solid understanding of malloc or how it might be implemented, but RedBlackTrees, TernarySearchTrees, LinkedLists, Hashtables, etc are all pretty easy to grasp.

Even the nature of a Struct, and how it might be more efficient than an Object, and the difference between a Stack and a Heap... All that's pretty accessible without sitting through a course on the finer workings of malloc.

The memory is there. It provides my byte buckets. I don't need to care much beyond that to be a good web-developer.

Asking most developers about malloc is probably about as relevant as asking about video frame-buffers. For some spaces it's make or break, but for most it's just useless trivia.


I think understanding roughly how malloc works can be very helpful in pretty much any area where performance becomes important. Without knowing a bit about malloc it's hard to understand the benefits of slab allocators and similar (which are used everywhere from game engines to OS kernels to DBMSes).

In its most basic form, malloc basically is a linked list.
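
A minimal sketch of that idea: a toy first-fit allocator whose free memory is a linked list of blocks. (This is illustrative only, not real malloc; `ToyHeap` and `Block` are invented names.)

```python
# Free blocks live on a linked list; "malloc" walks it for the first fit.
class Block:
    def __init__(self, start, size, next=None):
        self.start, self.size, self.next = start, size, next

class ToyHeap:
    def __init__(self, size):
        self.free = Block(0, size)   # one big free block to begin with

    def malloc(self, size):
        cur = self.free
        while cur is not None:       # first-fit walk down the free list
            if cur.size >= size:
                addr = cur.start
                cur.start += size    # carve the allocation off the front
                cur.size -= size
                return addr
            cur = cur.next
        return None                  # no block big enough: out of memory

heap = ToyHeap(100)
assert heap.malloc(30) == 0      # first allocation starts at address 0
assert heap.malloc(30) == 30     # next one follows it
assert heap.malloc(50) is None   # only 40 "bytes" remain
```

A real allocator also has to handle free() and coalescing adjacent blocks, which is exactly where the linked-list bookkeeping gets interesting.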


There are legitimate reasons for people to learn to program that don't involve being interviewed by you for a job, just as there are legitimate reasons for learning to write that don't involve writing sonnets.

Similarly, there are different ways to come to the knowledge of what malloc is, or why inserting things into a linked list is faster than inserting things into an array.


> Similarly, there are different ways to come to the knowledge of what malloc is, or why inserting things into a linked list is faster than inserting things into an array.

I completely agree with that. It doesn't matter, as long as the knowledge is acquired somehow. I was just pointing out one of such ways.


Not because he couldn't answer these things, but because he'd only done the one intro Python course.

Or the one intro C course, for that matter.

Jobs where those things matter depend on much more than having gotten your feet wet with C and malloc in one intro course. You probably wouldn't even get this guy to a phone screen in the first place. Right?


Depends entirely on what the job is. If I'm hiring a programmer who is expected to write C for an embedded platform, then I probably wouldn't hire them. If I'm hiring a 3D artist who is expected to write MEL scripts, then I wouldn't care about those things and wouldn't even be asking those questions.


True, but we're talking about hiring a software engineer here.


No, you're talking about hiring a software engineer. The rest of us are talking about: "Learning to program: Why Python?"


Those questions would be stupid to ask for a Python position, unless there was some need for working with C APIs. Why waste your time and the developer's time?

Much better to ask about relevant things: some common Python library, or why they'd use lists vs. dictionaries vs. sets, or what a generator expression is.
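
Those topics are easy to probe in a few lines (a hypothetical sketch of the kind of answers you'd hope for):

```python
# list: ordered, allows duplicates, O(n) membership test
langs = ["python", "c", "python"]
assert langs.count("python") == 2

# set: unique elements, O(1) average membership test
unique = set(langs)
assert unique == {"python", "c"}

# dict: key -> value mapping, O(1) average lookup
year = {"python": 1991, "c": 1972}
assert year["python"] == 1991

# generator expression: lazy; values are produced on demand
squares = (n * n for n in range(4))
assert list(squares) == [0, 1, 4, 9]
```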


Allow me to disagree. Tomorrow you're going to have a project that uses a different technology; what are you going to do then? Fire everyone and start hiring again? Of course you need at least a few people on your team who know your current technology in depth, but you probably want to make sure they understand the basics.

And please note that those questions are not intricate, C-specific questions. Memory management is a basic thing (I'm not talking about actual memory-allocation algorithms, just things like how to make sure your program doesn't leak). Linked lists are also basic data structures. It's not like asking what the "volatile" keyword means and when to use it.


So tomorrow we're all going to go back and start programming our web apps in C? Give me a break. Ask the right questions for the position.

Python also does not have manual memory management or allocation, linked lists, or "volatile". Memory management is basic for C and C++, but that's about it.


How about this: you need your people to make an iOS 4.x-compatible app. Oops, suddenly they need Objective-C, including all its memory management nasties. That is going to be so much harder if they have never seen pure C.


Well then obviously you'd want someone who can deal with C. "DeepDuh" indeed.

But you're talking hypotheticals - I can do that too: if you suddenly need big data then "Oops", you're going to want someone who knows Hadoop or Erlang. If you need to launch a rocket then "Oops" you're going to want an aerospace engineer. But if you're looking for a web dev, then C/Hadoop/Erlang/Aerospace questions are largely a waste of time, and might disqualify candidates who are otherwise fine or who could figure out C given a couple of days.

Bear in mind that the original topic is talking about languages for learning how to program, not languages for iOS development. If iOS apps float the beginner's boat then by all means learn Objective C; their enthusiasm for the field will probably overcome their frustration with C. But I suspect they'd be better off learning Python or similar first, then starting on C.

Hence my original response: it depends. Python is a pretty good choice because it can do well in lots of different fields and is easy to pick up.


Why not just start with some embedded development then?

8-bit PIC and AVR microcontrollers have very simple instruction sets and can be connected to simple I/O devices like switches and LEDs, but they both use Harvard architectures, so you have different address spaces for code and data. The newer 32-bit PIC micros use a MIPS core, but the development tools aren't very open, despite being based on GCC.

The 16-bit MSP430 is fairly simple, has good open-source development tools, cheap development hardware (check out the Launchpad), and a more conventional von-Neumann architecture. Check out some of Michael Kohn's sample code for the msp430 assembler he wrote:

http://www.mikekohn.net/micro/naken430asm_msp430_assembler.p...


this is great, thanks


I disagree completely. I would say everyone should start with Smalltalk. Start with something beautiful and clean and then learn the messy stuff that makes it possible later.


I learned C++ by accident: a friend of my mom's had an old programming book lying around and gave it to me. Next thing I know, I'm "wasting" my vacation reading the book from cover to cover. I never actually used C++, though. Not once in my admittedly very short time as a hobby and/or employed web dev have I ever actually needed to know C++ or even memory management. So I don't think it's as important to know as you claim it to be.

I'm still very glad I learned C++, though. I think it best equates to learning Latin in school: you're probably never going to need it, but it gives you great insight into how languages work at a lower level and where they come from. While this won't necessarily help with your vocabulary, it will help with autodidactic learning. For example, if I didn't know about pointers, it's likely I would never have wondered about how parameters get passed to a function: by reference or by value.
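
In Python's case the answer is "call by object reference": the reference itself is passed by value, so rebinding a parameter is invisible to the caller while mutating the object is not. A small sketch (the function names are invented):

```python
def rebind(x):
    x = [99]          # rebinds the local name only; caller unaffected

def mutate(x):
    x.append(99)      # mutates the shared object; caller sees it

nums = [1, 2]
rebind(nums)
assert nums == [1, 2]        # rebinding did nothing to the caller's list
mutate(nums)
assert nums == [1, 2, 99]    # mutation is visible to the caller
```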


> I would even go as far as to say that one should start with an assembler and not C, but given the immense complexity of contemporary hardware, I understand that such approach is infeasible.

Someone could make a similar argument about starting in C. And make similar arguments that you made for the benefits.

You have to start teaching somewhere in the stack. I actually disagree that it's infeasible to start with assembly. I don't think it's wise, but it's certainly feasible. I think people tend to have a bias that beginners should start at the same level of abstraction that they did. But I think that in online discussions, we place too much emphasis on where to start. I doubt it matters as much as the other factors, such as motivation of the student and the patience and care of the instructor.


Well, I think some kids care about the abstract and fundamental workings of computers, but a lot are initially interested in learning programming because they see cool stuff and want to make it themselves. Teaching them the "just in time" simple concepts that lead to concrete results without making them struggle with syntax to start out is a good way to keep them motivated initially. Then after you hook them, you can start teaching them the fundamental concepts of how stuff works.


I've been playing with Codea on the iPad for a few days. I'm a big Python fan, but having never used Lua before I have to say it's a great little language and Codea is a lovely learning environment.

My daughter (8) was watching me play with it on the couch at the weekend and I let her change some of the variable values, pick colours, etc and see the difference it made when we ran the program.


Here's one reason why not: no anonymous functions longer than one line. Which is a pretty lame limitation for a modern high-level language, all due to the dumb whitespace-as-scope design.


Just enclose the anonymous function in "(" and ")". Then you can use an anonymous function longer than one line.

    >>> f = lambda x: (x
    ...                * x
    ...                * x)
    >>> f
    <function <lambda> at 0x108cf3230>


Yes, but unfortunately it can only be a single expression.


Honest question: If you are making an anonymous function longer than one line, shouldn't you just make a normal function? What are the use cases for large anonymous functions?


Especially in a learning context, I have no problem with requiring decent names for functions.

It's a learning context, not a professional one. This strikes me as like complaining that your carpentry class won't be covering a particularly exotic tool only favored by a quarter of professional carpenters anyhow. No, it's not a fatal objection to the class. If the tool is so wonderful they'll find out about it later. After they've become familiar with things like "hammers" and "nails" and the pounding thereof, which is the real problem they face today.


I wrote a nonblocking library (for pipes/processes/signals) in Python like node.js. So many of the callbacks end up being inner functions (they don't when you decide you can make all the necessary state object-level, and thus make the callback a method).

The anonymous syntax actually reads better than defining an inner function and then passing it.

I think the real reason why they were never added is syntax, and honestly I can't think of a good syntax myself.


That makes sense. Callback methods. Thanks.


Simplest case: custom control structures.

In JavaScript, a broadly comparable language, I mainly use anonymous functions for event handling and map/each/fold. They are also useful for implementing more advanced code like arrows cleanly.

In a lot of ways the anonymous function in these cases is more akin to a block than a named function--it's only used once and in a very clear place. Having to name my click and onload handlers would make the code more confusing.

Coincidentally, this brings me to something that annoys me in both languages: lambdas are too verbose. Having much simpler and more concise lambda syntax (I'm personally a fan of Haskell's \ x -> x because it looks like a lambda if you squint) would make a lot of common idioms more readable and would encourage wider use of custom control abstractions, which can significantly improve code.

Amusingly I don't find myself using lambda syntax nearly as much in Haskell despite the fact that it looks really nice. A surprising amount of the time simple point-free code is all that's needed, and anything too complex gets moved into a where clause. But Haskell is not really comparable with JavaScript/Python at all.
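
Translated to Python, the map/filter/fold uses mentioned above look like this (a plain standard-library sketch):

```python
from functools import reduce

nums = [1, 2, 3, 4]

# map: apply a one-expression lambda to each element
doubled = list(map(lambda n: n * 2, nums))
assert doubled == [2, 4, 6, 8]

# filter: keep elements matching a predicate
evens = list(filter(lambda n: n % 2 == 0, nums))
assert evens == [2, 4]

# fold (reduce): accumulate a single value
total = reduce(lambda acc, n: acc + n, nums, 0)
assert total == 10
```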


It's perfectly possible to have multi-line lambdas with indentation blocks, just look at Haskell or CoffeeScript.


It's too bad JavaScript's faults are so grievous. The sane subset of JS plus a web browser makes a pretty good beginner's environment.

But when you have to blow so much time on screwy things like "this" scoping and unwanted implicit conversions it kind of tips the balance.


You can always do something like this: http://www.skulpt.org/

None of the disadvantages of such an approach matter in a learning context. Sure, it's slow relative to a real implementation... but so what?

(I'm also not saying that particular link is a drop-in solution, as I've never heard of it until I went googling on the theory that something like it must exist. But if you were building a business on this, it looks far enough along that you could adapt it as needed.)


I can't help thinking that almost everything he lists applies more to Scheme than Python. Scheme also has the advantage of being much simpler, smaller, more concise, more flexible and more elegant than Python. It actually does functional programming well rather than half-heartedly, and is rather nice for OOP as well.

Additionally, Scheme has one very important property that Python lacks--there is almost no magic. Even definitions and assignments (define and set! respectively) look like normal function calls. It also makes it clear that very little is fundamental to a language--with Java and Python things like classes and control structures are special and built into the language; with Scheme they do not differ superficially from normal functions. With Java I thought classes and objects were somehow the only way to do it; Scheme clearly showed me that they do not even have to be part of the core language.

Now, Scheme does have some complicated ideas like macros and continuations, but there is no reason to teach those to beginners--it's an eminently useful language as is, and extremely simple. The very first CS course I took in college used Scheme; we spent something like just two lectures on the language itself. Now that the professor retired they've changed the course over to Python; I think they're still learning the language half-way through the semester.

Scheme is also great because, being very simple, it is much easier to implement than Python. And once you have a basic Scheme implementation, you can modify it to see other potential languages. For example, in our intro CS course, we talked about writing a memoizing interpreter and even a lazy interpreter. It's hard to imagine a lazy Python! We even reused Scheme syntax (but not really semantics) to cover logic programming à la Prolog.

That course was one of the most enlightening semesters I have ever spent. I really think that Scheme is the perfect introductory didactic language.


I never really accepted the argument that Scheme/Lisp is simpler than a language like Python, simply because everything in the language is reduced to constants/terminals (what's the right terminology here?) and functions.

Many languages differentiate between statements and expressions, between compile time and runtime, and lots of other things.

Saying that we can express C structs in a language by reducing them to functions is not a simplification; instead, I would argue it makes things worse when trying to grok C. Coming from Python it would be easier, since we already have classes, and C structs are simply compile-time-final classes without inheritance.

I agree though that learning Scheme can teach a lot, just not necessarily how to grok every other language well.


It makes sense to use a language that is close to what people will be using later just so they will be able to transfer their learned knowledge more easily.

Lisps are very interesting and important, but their syntax does not transfer easily to most popular languages.


>Python’s type system is more flexible than C#’s (for better and worse). It is important to keep in mind a few concepts from Python that are not directly translated to C#:...

>Dynamic typing, a.k.a “duck-typing” (this, too, can be accomplished using libraries built around Delegate.DynamicInvoke).

I'm not super familiar with the keyword, but I'm fairly sure this is built right into C# 4.0 via the "dynamic" keyword, is it not?


From what I understand, yes. I believe the point the author is trying to make, however, is that Python allows you to think more about the structure and function of your code rather than worrying about whether a variable should be an integer, a float, or a double, making it easier for new programmers to "jump in".
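A quick sketch of what that looks like in practice (illustrative names only):

```python
def total_area(shapes):
    # No type declarations anywhere: anything with an .area() method works
    return sum(s.area() for s in shapes)

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

print(total_area([Square(2), Circle(1)]))  # mixes unrelated types freely
```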


The author mentions that C# has a nice IDE (Visual Studio). I'd like to mention that a couple of OSS & Python enthusiasts at msft have produced one for Python as well (free/apache 2.0).

It supports both CPython and IronPython, along with intellisense, debugging, profiling, etc. : http://pytools.codeplex.com


Liked the fair simplicity of the post, but I am disappointed by the Wikipedia links. Like many design patterns, they look like special Java idioms used to overcome its limitations. So dependency injection is just "if x: import y"? Like any other conditional, it must be used only when necessary, but does it need to be a Design Pattern?


Design patterns tend to arise due to linguistic limitations, but not always. They’re mainly a shared vocabulary for talking about software architecture and design, regardless of paradigm.

As for dependency injection: DI ⊆ inversion of control ⊆ decoupling ⊆ factoring, where ⊆ = “is a kind of”. Roughly, DI means externalising dependencies and giving them as parameters in order to reduce coupling. Which, when you put it that way, is obvious, and something we (should) do often in any language.
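In Python the whole pattern can be as small as this (hypothetical classes, just to show the shape):

```python
class MemoryStore:
    # One possible backend; anything with a save() method will do
    def __init__(self):
        self.saved = []
    def save(self, data):
        self.saved.append(data)

class Report:
    def __init__(self, store):
        # The dependency is handed in rather than constructed here,
        # so tests can inject a fake and production can inject a real store
        self.store = store
    def publish(self, text):
        self.store.save(text)

store = MemoryStore()
Report(store).publish("quarterly numbers")
print(store.saved)  # → ['quarterly numbers']
```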


Title is misleading. The article discusses why learn Python instead of C# as a first language.

Missing: Python being cross platform (arguably more so than C#.)


I just want to say that I've been learning Python as my first language. I've found it to be fairly intuitive and a lot of fun. Of course, learning any language, especially your first, is going to be difficult, though.

One of the drawbacks I've heard is that Python subscribes to the idea of doing things 'the right way,' although I've found a few times where I did something the 'not right way' and it was still functional (although maybe a little longer or not as clean as it could have been).

Edit: I've also been involved a little bit with Ruby lately, and it seems there are a few similarities between the two. Though that could just be that there are many similarities between any two languages, and I'm just not familiar enough with multiple languages to realize that :P

Still, it's an excellent first language, IMO


A lack of freedom can be a good thing in a first language. Of course, Python doesn't always have a "right way" or they wouldn't be on their fourth command line argument parsing library. ;)

Ruby and Python will seem more similar after knowing a lot of languages. But they are different enough too. If it's not intimidating you it's probably a good second language.


Why do I keep reading about this? I don't see how any command line argument parsing library is relevant other than argparse. Do you have some concrete complaint about argparse? I am seriously wondering what I am missing, it seems so simple to me yet this seems to be a burning perennial complaint.


Why shouldn't they be relevant? Getopt is still very useful for short scripts and optparse does everything it was meant to do, even if it's not as full of features as argparse. Maybe argparse really should be the canonical choice, but I certainly wouldn't want to see the others gone.

As for a minor complaint about argparse - setting up a custom help formatter [1] is not straightforward, not in the slightest in comparison to optparse [2] (which is much simpler, implementation wise).

[1] http://hg.python.org/cpython/file/default/Lib/argparse.py#l1...

[2] http://hg.python.org/cpython/file/2.7/Lib/optparse.py#l155


Forgive me, I started with 1.5.2. Back then, you used getopt. Then came optparse. Then came argparse. So I guess there have only been three, unless there's a popular non-built-in option I don't know about.

Now, I haven't had to maintain anything I wrote back then (not sure I ever even did use getopt), but I'm sure if we looked we could find someone who has upgraded their option parsing twice.

I'm being a little snarky. I'm really just trying to point out to the new user that the much-vaunted "TOOWTDI" concept is at best a policy decision and is to be taken with a grain of salt rather than treated like a straitjacket. There's always more than one way to do something. Python just doesn't endorse the kind of free-for-all on every line that Perl does.
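For the curious, here are the three stdlib generations parsing the same flag side by side (a rough sketch; error handling omitted):

```python
import getopt
import optparse
import argparse

# getopt: thin wrapper over the C-style API, returns raw pairs
opts, rest = getopt.getopt(["-n", "Ada"], "n:")
# opts == [('-n', 'Ada')]

# optparse: declarative, but deprecated since 2.7
op = optparse.OptionParser()
op.add_option("-n", dest="name")
o, _ = op.parse_args(["-n", "Ada"])
# o.name == 'Ada'

# argparse: the current canonical choice
ap = argparse.ArgumentParser()
ap.add_argument("-n", dest="name")
a = ap.parse_args(["-n", "Ada"])
# a.name == 'Ada'
```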


"The right way" is not always enforced by the language. It's often just conventions agreed to by the dominant community. "Pythonic" is a word that you'll hear a lot, if you haven't already.


It doesn't seem to mean much of anything consistent, though, because it is used as a sort of generic positive adjective and used to promote even things of dubious quality.


And that's sometimes what the "right" way is, just a way.


Right idea, but bad execution. This could have been a great post had the OA expanded on his examples instead of just linking to Wikipedia.

Here is where he failed. We need to ask: who is this article for? Programmers who already understand some of the patterns and paradigms the OA touched upon, or newbies?

Given that he seems to be responding to a new programmer, it would have been more effective if the OA had shown the C# implementation vs. the Python one, at least for a few of the more interesting examples.

From a personal perspective, the first language I taught myself was BASIC. Remember line numbers and goto? I later learned enough C to understand how things work behind the scenes. So I personally like the high-level-first approach, especially with Python. It's like a Swiss Army knife.



