Absolutely love the analogy. It's a great display of leveraging the English language to convey complex concepts in a down-to-earth fashion. Even as engineers (or perhaps because we are engineers), getting our ideas across in understandable, layman's terms is a potent weapon.
(though admittedly, the latter part of the excerpt could perhaps be simplified)
Variable declarations ... you could go either way. I like that Python is closer to a language that "knows what you mean" even when you don't declare a variable.
You can learn discipline later. Learning to program is a chicken-and-egg problem: you have to learn two things, a language (any language) and programming concepts, and unfortunately you have to learn one to do the other. Python's forgiving nature makes that a bit easier than some other languages.
The point is being made that learning one language makes it vastly easier to learn another, so you may as well start by learning one that is designed to be learnable rather than necessarily one that is in demand.
The biggest challenge for Windows users is remembering to go to python.org and not python.com to obtain their version.
Plus you're going to at least want to install a better text editor and probably set some environment variables.
The installation process for all of these things shouldn't be a big hurdle since all you'd have to do is follow a set of steps that can be easily demonstrated.
The simple fact that it's syntax-light will always be the #1 reason it's a great introduction. Worry about the syntactical hangups once you have the core concepts of programming down.
I never understood those CS courses that have you diving in blind with C++.
C++ made more sense to me than all the CS programs that were switching to Java for their intro to CS courses when I was coming up a decade ago.
C/C++ may have overhead that's confusing to a beginner out of the gate, but Java just amplifies that.
Started learning Python on my own around age 19 and everything just skyrocketed from there. I'm much more comfortable using (saying that lightly) C++ these days, but I can't imagine ever recommending it to even an enemy as their first language.
An acquaintance on a board somewhere put it best: "C++ is like a hurricane. Gorgeous but incredibly destructive". Personally I think it all looks hideous, but I get the point now.
Today, I'd agree that Python -- and possibly Ruby -- is a better first language, but when this was happening they weren't particularly mature. You just didn't have the kind of mature, syntactically-clean, object-oriented languages available that you do now. Even so, I never understood why you'd choose Java with all its excess syntax to teach a beginner.
As someone who’s well versed in C++, and who had to tutor CS students in it, I certainly wouldn’t recommend anyone use C++ unless they already know it and can use it effectively.
It may seem easy to start writing programs in those languages, but understanding what is actually going on is sort of difficult for someone who has no idea how things work. When you're implementing a linked list or a binary search tree for the first time, it's important to understand how your data structure is laid out in memory, otherwise it's a pointless exercise. Higher-level languages are convenient, but they hide a lot of what is going on, and that, in my opinion, is bad, if you are learning.
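To make that concrete: the linked list in question, at its most minimal, looks like this in Python. In C you would build the same shape out of explicitly allocated structs and next pointers, which is exactly the memory-layout knowledge being argued for (the class and names here are just illustrative):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next  # in C this would be a raw pointer

    head = Node(1, Node(2, Node(3)))  # 1 -> 2 -> 3

    # Traversal follows the chain of references node by node:
    cur = head
    while cur:
        print(cur.value)
        cur = cur.next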
I think the best language to start with is C (and not C++, of course). One must understand pointers and bytes and bits and memory management and all that shit that leads to segfaults and many hours of debugging to be a programmer. I would even go as far as to say that one should start with an assembler and not C, but given the immense complexity of contemporary hardware, I understand that such an approach is infeasible. However, teaching kids programming with a physical (not emulated), really simple, toy CPU is probably a good idea. Too bad I don't know any such CPUs.
Introducing extreme complexity so early in the process turns people off; they give up easily and (falsely, in many cases) start believing that programming is not for them.
The first introduction should have enough fundamental programming concepts to enable the students to make something useful and tangible, by themselves. As the interests and the objectives grow, they can delve deeper into the domain.
I think with Khan Academy, Udacity and others, we are entering a new paradigm of "Just in time learning". Schools should focus on teaching smart learning techniques, critical thinking etc instead of cramming more and more topics in their curriculums.
I'm happy to live in a post-Alan Kay world, but I think maybe it's time to dust off that sculpture of Dijkstra we keep in storage and move it out into the foyer. Programming is hard; glossing over what makes it hard in the name of doing stuff sooner may or may not help, but as long as we debate it without testing it empirically we're sure to get nowhere.
I have to clarify that my comment was targeted towards current Udacity (or other MOOC) participants who want to learn programming as a value-added skill or hobby.
If you want to graduate with a CS major and/or want to pick CS as a profession, I do believe you need to have a good understanding of the entire stack. There are no shortcuts to being a good professional, and achieving mastery in any field requires committed hard work.
When you introduce concepts such as addressing/pointers (which are typically in hexadecimal) and memory management, you add many more moving parts to the mix. In my opinion, abstracting a lot of this stuff away reduces the number of places where a beginner can get caught up.
You're about 30 levels above an actual beginner. You have to think at an even lower capability level, like this:
> When you're trying to print "Hello World" you have to know all this hard stuff like how to type "python" into PowerShell, or even just install Python on Windows and fix its broken PATH settings.
That's where you need to be thinking if you're thinking about what a beginner struggles with.
Learning C is good if you want to learn most of the canonical CS stuff - linked lists, bla, bla, bla.
If you want to recommend it to someone who is learning how to program so that they can get their web startup off the ground (where Python or Ruby would be a better fit), or someone who is trying to parse CSV files for work (C# or Python), then you can, politely, fuck off and die.
> One must understand pointers and bytes and bits and memory management and all that shit that leads to segfaults and many hours of debugging to be a programmer.
No, no, really you don't. Perhaps to be some sort of archetypal "Real Programmer", but that's different from learning to program. There are all sorts of other factors at play: motivation, being able to get something working relatively quickly and iterate, being able to show friends/colleagues what you're working on, accessibility of libraries, etc.
Disclosure: I wrote a book on how to program in Python: http://manning.com/briggs/
So, are you saying that if you asked a candidate during a job interview what malloc is or why inserting things into a linked list is faster than inserting things into an array, and he gave you a blank stare, you'd still hire him?
The array/list can be pre-allocated to the size you need and have good cache locality and minimal container overhead -- the linked list will have the overhead of the pointer to the next node (2x for a doubly-linked list) + the type descriptor (if you're using a language like c#, java, python, etc.) per node and data located all over the heap with cache misses everywhere.
If I was asked during an interview why inserting things into a linked list is faster than inserting things into an array, and I provided an answer that made him blankly stare back at me, would I want to take the job?
Perhaps you're getting a guy switching careers or fields of expertise (web -> system or accounting -> embedded), or someone that just has that gap in their knowledge, or someone that's just having a plain ol' brainfart at that point in time. No need to be as rude as I responded in my third paragraph.
I said insert, not append. What made you assume the question was about inserting at the end? Even if you have a big pre-allocated array, inserting an element somewhere in the middle will still cost you the price of shifting the "tail" of the array to make space for the new element, while the list will not have that cost.
What makes you think insert is important anyway? The typical use cases for an array/list are pop, append and indexing. Python's list implementation uses arrays, not linked lists, internally for exactly this reason: indexing being O(1) is more important for most purposes than fast inserts. (http://docs.python.org/faq/design.html#how-are-lists-impleme...)
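For anyone following along at home, the trade-off is easy to demonstrate in Python itself. A rough sketch (timings will vary by machine) comparing front-insertion into a list, which is an array internally, against collections.deque, which is a linked structure of blocks:

    from collections import deque
    from timeit import timeit

    def front_inserts_list(n):
        xs = []
        for i in range(n):
            xs.insert(0, i)   # shifts the whole tail every time: O(n)

    def front_inserts_deque(n):
        xs = deque()
        for i in range(n):
            xs.appendleft(i)  # no shifting: O(1)

    print(timeit(lambda: front_inserts_list(50000), number=1))   # seconds
    print(timeit(lambda: front_inserts_deque(50000), number=1))  # milliseconds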
And what's with the name-calling? Aren't you capable of having a normal, polite discussion?
You keep talking about "the subject matter" and "programming" like it's one topic. It's not, hence the "you're an ass" comment - you're just repeating your argument ad nauseam and ignoring everything everyone else has to say.
It's possible to be a good programmer without knowing how linked lists work in excruciating detail - get over it.
It looks like our definitions of "good" and "programmer" are somewhat different.
And I've seen people who were the opposite of what you're saying - people who knew lots about low level C/Assembly stuff, but who couldn't write clean, maintainable high level code to save their life.
But to get back on the original topic - none of that is relevant to learning how to program. Loops, variables, data structures, managing state, functions and classes are more what you want to teach to start with, and are far more relevant to most programmers than malloc or linked lists.
There is no reason someone who has never programmed in C would (or should) know what function is used to allocate memory in C.
If they are going to be programming in Python, there's probably no need for them to be familiar with memory management, or the underlying details of the language implementation; the language is there to insulate you (to a greater or lesser extent) from the machine. Later, if it becomes useful to learn about memory management in other languages, they could do so if they are moderately intelligent; it's not rocket science or something that you must learn first or not at all.
Even the nature of a Struct, and how it might be more efficient than an Object, and the difference between a Stack and a Heap... All that's pretty accessible without sitting through a course on the finer workings of malloc.
The memory is there. It provides my byte buckets. I don't need to care much beyond that to be a good web-developer.
Asking most developers about malloc is probably about as relevant as asking about video frame-buffers. For some spaces it's make or break, but for most it's just useless trivia.
In its most basic form, malloc basically is a linked list.
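To unpack that a little: a minimal malloc walks a linked list of free blocks, carves the allocation off the first block that fits, and free pushes blocks back onto the list. A toy sketch in Python (purely illustrative: first fit, no coalescing, no alignment):

    class Block:
        def __init__(self, start, size):
            self.start, self.size = start, size
            self.next = None  # the linked-list part

    class FreeList:
        def __init__(self, heap_size):
            self.head = Block(0, heap_size)  # one big free block to start

        def malloc(self, size):
            prev, cur = None, self.head
            while cur:  # first-fit walk down the free list
                if cur.size >= size:
                    addr = cur.start
                    cur.start += size  # carve the allocation off the front
                    cur.size -= size
                    if cur.size == 0:  # unlink fully-used blocks
                        if prev:
                            prev.next = cur.next
                        else:
                            self.head = cur.next
                    return addr
                prev, cur = cur, cur.next
            return None  # out of memory

        def free(self, addr, size):
            blk = Block(addr, size)  # naive: push on front, no coalescing
            blk.next, self.head = self.head, blk

    heap = FreeList(1024)
    a = heap.malloc(100)  # -> 0
    b = heap.malloc(200)  # -> 100
    heap.free(a, 100)     # the block goes back on the free list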
Similarly, there are different ways to come to the knowledge of what malloc is, or why inserting things into a linked list is faster than inserting things into an array.
I completely agree with that. It doesn't matter, as long as the knowledge is acquired somehow. I was just pointing out one of such ways.
Or the one intro C course, for that matter.
Jobs where those things matter depend on much more than having got your feet wet with C and malloc learned in one intro course. You probably wouldn't get this guy to the phone screen in the first place. Right?
Much better to ask about relevant things: some common Python library, or why they'd use lists vs. dictionaries vs. sets, or what a generator expression is.
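For the record, the sort of thing those questions probe:

    words = ["spam", "eggs", "spam", "ham"]

    unique = set(words)                    # set: fast membership, no duplicates
    lengths = {w: len(w) for w in unique}  # dict: key -> value lookup

    # A generator expression produces items lazily, one at a time,
    # instead of building a whole intermediate list in memory:
    total_chars = sum(len(w) for w in words)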
And please note that those questions are not some intricate C specific questions. Memory management is a basic thing (I'm not talking about actual memory allocation algorithms, just things like how to make sure your program doesn't leak). Linked lists are also basic data structures. It's not asking what "volatile" keyword means and when to use it.
Python also does not have manual memory management or allocation, linked lists or "volatile". Memory management is basic for C and C++, but that's about it.
But you're talking hypotheticals - I can do that too: if you suddenly need big data then "Oops", you're going to want someone who knows Hadoop or Erlang. If you need to launch a rocket then "Oops" you're going to want an aerospace engineer. But if you're looking for a web dev, then C/Hadoop/Erlang/Aerospace questions are largely a waste of time, and might disqualify candidates who are otherwise fine or who could figure out C given a couple of days.
Bear in mind that the original topic is talking about languages for learning how to program, not languages for iOS development. If iOS apps float the beginner's boat then by all means learn Objective-C; their enthusiasm for the field will probably overcome their frustration with C. But I suspect they'd be better off learning Python or similar first, then starting on C.
Hence my original response: it depends. Python is a pretty good choice because it can do well in lots of different fields and is easy to pick up.
8-bit PIC and AVR microcontrollers have very simple instruction sets and can be connected to simple I/O devices like switches and LEDs, but they both use Harvard architectures, so you have different address spaces for code and data. The newer 32-bit PIC micros use a MIPS core, but the development tools aren't very open, despite being based on GCC.
The 16-bit MSP430 is fairly simple, has good open-source development tools, cheap development hardware (check out the Launchpad), and a more conventional von Neumann architecture. Check out some of Michael Kohn's sample code for the msp430 assembler he wrote:
I'm still very glad I learned C++, though. I think it best equates to learning Latin in school: you're probably never going to need it, but it gives you great insight into how languages work on a lower level and where they come from. While this won't necessarily help with your vocabulary, it will help with autodidactic learning. For example, if I didn't know about pointers, it's likely I would never have wondered about how parameters get passed to a function: by reference or by value.
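Python, as it happens, is a nice place to watch that exact question play out: the reference to the object is passed by value, so mutating a shared object is visible to the caller, but rebinding the parameter is not.

    def mutate(lst):
        lst.append(4)  # mutates the shared object: visible to the caller

    def rebind(lst):
        lst = [0]      # rebinds only the local name: invisible to the caller

    nums = [1, 2, 3]
    mutate(nums)       # nums is now [1, 2, 3, 4]
    rebind(nums)       # nums is unchanged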
Someone could make a similar argument about starting in C. And make similar arguments that you made for the benefits.
You have to start teaching somewhere in the stack. I actually disagree that it's infeasible to start with assembly. I don't think it's wise, but it's certainly feasible. I think people tend to have a bias that beginners should start at the same level of abstraction that they did. But I think that in online discussions, we place too much emphasis on where to start. I doubt it matters as much as the other factors, such as motivation of the student and the patience and care of the instructor.
My daughter (8) was watching me play with it on the couch at the weekend and I let her change some of the variable values, pick colours, etc and see the difference it made when we ran the program.
>>> f = lambda x: (x)
>>> f
<function <lambda> at 0x108cf3230>
It's a learning context, not a professional one. This strikes me as like complaining that your carpentry class won't be covering a particularly exotic tool only favored by a quarter of professional carpenters anyhow. No, it's not a fatal objection to the class. If the tool is so wonderful they'll find out about it later. After they've become familiar with things like "hammers" and "nails" and the pounding thereof, which is the real problem they face today.
The anonymous syntax actually reads better than defining an inner function and then calling it.
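For example, with a hypothetical one-off sort key:

    names = ["Bob", "alice", "Carol"]

    # Inline and anonymous: the one-off function sits right where it's used.
    names.sort(key=lambda s: s.lower())

    # Named first, used once: the same thing with an extra indirection.
    def lower_key(s):
        return s.lower()

    names.sort(key=lower_key)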
I think the real reason why they were never added is syntax, and honestly I can't think of a good syntax myself.
In a lot of ways the anonymous function in these cases is more akin to a block than a named function--it's only used once and in a very clear place. Having to name my click and onload handlers would make the code more confusing.
Coincidentally, this brings me to something that annoys me in both languages: lambdas are too verbose. Having much simpler and more concise lambda syntax (I'm personally a fan of Haskell's \ x -> x because it looks like a lambda if you squint) would make a lot of common idioms more readable and would encourage wider use of custom control abstractions, which can significantly improve code.
But when you have to blow so much time on screwy things like "this" scoping and unwanted implicit conversions it kind of tips the balance.
None of the disadvantages of such an approach matter in a learning context. Sure, it's slow relative to a real implementation... but so what?
(I'm also not saying that particular link is a drop-in solution, as I'd never heard of it until I went googling on the theory that something like it must exist. But if you were building a business on this, it looks far enough along that you could adapt it as needed.)
Additionally, Scheme has one very important property that Python lacks--there is almost no magic. Even definitions and assignments (define and set! respectively) look like normal function calls. It also makes it clear that very little is fundamental to a language--with Java and Python things like classes and control structures are special and built into the language; with Scheme they do not differ superficially from normal functions. With Java I thought classes and objects were somehow the only way to do it; Scheme clearly showed me that they do not even have to be part of the core language.
Now, Scheme does have some complicated ideas like macros and continuations, but there is no reason to teach those to beginners--it's an eminently useful language as is, and extremely simple. The very first CS course I took in college used Scheme; we spent something like just two lectures on the language itself. Now that the professor retired they've changed the course over to Python; I think they're still learning the language half-way through the semester.
Scheme is also great because, being very simple, it is much easier to implement than Python. And once you have a basic Scheme implementation, you can modify it to see other potential languages. For example, in our intro CS course, we talked about writing a memoizing interpreter and even a lazy interpreter. It's hard to imagine a lazy Python! We even reused Scheme syntax (but not really semantics) to cover logic programming à la Prolog.
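(Memoization itself, the thing that interpreter automated, is easy to sketch in plain Python, for anyone who hasn't seen it:)

    from functools import wraps

    def memoize(fn):
        cache = {}
        @wraps(fn)
        def wrapper(*args):
            if args not in cache:        # compute each argument tuple once
                cache[args] = fn(*args)
            return cache[args]
        return wrapper

    @memoize
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(100))  # instant, thanks to the cache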
That course was one of the most enlightening semesters I have ever spent. I really think that Scheme is the perfect introductory didactic language.
Many languages differentiate between statements and expressions, between compile time and runtime, and lots of other stuff.
Saying that we can express C structs in a language by reducing them to functions is not a simplification; instead, I would argue it's worse when trying to grok C. Coming from Python it would be easier, since we already have classes, and C structs are simply compile-time final classes without inheritance.
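For anyone who missed that thread of the argument, the reduction being debated is the classic closures-as-data trick, which transplants to Python directly:

    # A pair ("cons cell") built from nothing but functions.
    def cons(a, b):
        return lambda pick: a if pick == 0 else b

    def car(p):
        return p(0)

    def cdr(p):
        return p(1)

    p = cons(1, 2)
    print(car(p), cdr(p))  # 1 2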
I agree though that learning Scheme can teach a lot, just not necessarily how to grok every other language well.
Lisps are very interesting and important, but their syntax does not transfer easily to most popular languages.
>Dynamic typing, a.k.a “duck-typing” (this, too, can be accomplished using libraries built around Delegate.DynamicInvoke).
I'm not super familiar with the keyword, but I'm fairly sure this is built right into C# 4.0 via the "dynamic" keyword, is it not?
It supports both CPython and IronPython, along with IntelliSense, debugging, profiling, etc.: http://pytools.codeplex.com
As for dependency injection: DI ⊆ inversion of control ⊆ decoupling ⊆ factoring, where ⊆ = “is a kind of”. Roughly, DI means externalising dependencies and giving them as parameters in order to reduce coupling. Which, when you put it that way, is obvious, and something we (should) do often in any language.
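In Python that boils down to something like the following minimal sketch (the Greeter/clock example is made up for illustration):

    import time

    class Greeter:
        def __init__(self, clock=time.time):
            self.clock = clock  # the dependency is injected, not hard-coded

        def greet(self):
            return "Hello, the time is %d" % self.clock()

    # Tests can inject a deterministic fake instead of the real clock:
    g = Greeter(clock=lambda: 0)
    assert g.greet() == "Hello, the time is 0"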
Missing: Python being cross-platform (arguably more so than C#).
One of the drawbacks I've heard is that Python subscribes to the idea of doing something 'the right way,' although I've found a few times where I did something the 'not right way' and it was still functional (although maybe a little longer or not as clean as it could have been).
Edit: I've also been involved a little bit with Ruby lately, and it seems there are a few similarities between the two. Though that could just be that there are many similarities between any two languages, and I'm just not familiar enough with multiple languages to realize that :P
Still, it's an excellent first language, IMO
Ruby and Python will seem more similar after knowing a lot of languages. But they are different enough too. If it's not intimidating you it's probably a good second language.
As for a minor complaint about argparse: setting up a custom help formatter is not straightforward, not in the slightest in comparison to optparse (which is much simpler, implementation-wise).
Now, I haven't had to maintain anything I wrote back then (not sure I ever even did use getopt), but I'm sure if we looked we could find someone who has upgraded their option parsing twice.
I'm being a little snarky. I'm really just trying to point out to the new user that the much-vaunted "TOOWTDI" concept is at best a policy decision and is to be taken with a grain of salt rather than treated like a straitjacket. There's always more than one way to do something. Python just doesn't endorse the kind of free-for-all on every line that Perl does.
Here is where he failed. We need to ask: who is this article for? Programmers who already understand some of the patterns and paradigms the OA touched upon, or newbies?
Given he seems to be responding to a new programmer, it would have been more effective if the OA had displayed the C# implementation vs. the Python one, at least for a few of the more interesting examples.
From a personal perspective, the first language I taught myself was BASIC. Remember line numbers and goto?
I later learned enough C to understand how things work behind the scenes. So I personally like the high-level-first approach, especially Python. It's like a Swiss Army knife.