Hacker News
Joe Armstrong on Programmer Productivity (groups.google.com)
259 points by mdevilliers on Sept 19, 2013 | 89 comments



FTA (favorite part for me)

Most time isn't spent programming anyway - programmer time is spent:

    a) fixing broken stuff that should not be broken
    b) trying to figure out what problem the customer actually wants solving
    c) writing experimental code to test some idea
    d) googling for some obscure fact that is needed to solve a) or b)
    e) writing and testing production code
e) is actually pretty easy once a) - d) are fixed. But most measurements of productivity only measure lines of code in e) and man hours.

For me, b) is a bottleneck. c) is far and away my favorite part. Nothing like green-fielding . . .


That's the core of why outsourcing coding and fixed-price projects so often fail - it's overestimating the role of (e), underestimating the difficulty of (a)-(d), and discounting the communication friction when (a)-(d) and (e) are not done by the same person.


I had a paid internship in college, many years ago, where I basically spent three months twiddling my thumbs waiting for a project, and when I finally got one, it was creating an MS Access database to count SLOCs. If they were that concerned with productivity, perhaps they could have given me something useful to do.


At the larger software companies, at least, do they really only measure LOC and man hours? It seems like you could develop decent estimates if you tracked every asset on a job, its performance, and the overall conditions of the job itself.


The problem is that the assets are dynamic. If you give me a single complete requirement for some little feature I can give you a pretty accurate estimate. But most of my requirements are things like "Write an app to do X." The only part of that I can estimate is the part I've done before, but if the app were just like all the apps that came before there would be no need for a new app. But the juicy candy center is an abject mystery and there is nothing to compare it to from which to derive an accurate estimate. Once you've dug halfway into it, you'll be tossing out new requirements. Maybe you can make good estimates on those, but your a priori estimate is now probably really off.


For me there is another time-consuming step:

c') tooling: learning/debugging the latest test tool, test runner, deploy tool, source management system, CI system, etc. etc.


Indeed, now there are so many tools that you can't just expect a reasonably experienced programmer to just know the tools that you're using.

You have to either:

a. stick to old and/or boring and/or obviously inferior tools and programming languages, or

b. expect to pay the price of the time lost by programmers joining your team to learn the tools that you have chosen or, "worse", to actually survey all the options and pick the best tools for the project

...I know, the evolution of technology needs diversity in everything, including tools, so that some form of "selection" can actually push forward the best alternatives, but just as in nature and biology, "evolution" has huge prices to pay (in biology the most tragic prices are mortality and cancer, in software engineering I don't know what their analogues are, but I dread to think that we're already close to facing these "prices"...).


> Indeed, now there are so many tools that you can't just expect a reasonably experienced programmer to just know the tools that you're using.

I can think of at least two further difficulties that come with the explosion of available tools:

1. It’s hard to know whether a tool is actually worth using at all. Many tools sound appealing because you’re familiar with the problem they aim to solve but not yet familiar with all the quirks and edge cases and hidden costs of using the tool instead.

2. Many tools look promising in their early days, when they are prime examples of enthusiasm driven development. It’s another thing to ask whether the tool is still going to be in active development five years down the line, when there are newer and shinier things to work on. Even if there are, your 100,000 line code base might still need that security fix or one of the 10% of missing features that were on the roadmap when you first started integration.

I’m all for using the right tool for each job, and I’m certainly in favour of using good tools rather than doing everything the hard way, but figuring out which tools those are gets harder as tools proliferate.


I'm feeling this way about JavaScript now. So much so I almost don't want to use JavaScript at all. There are so many JS frameworks and libraries right now; they can't possibly all survive but who knows which ones will?


True, this is why I prefer old tried and true languages and tools, as much as I'd love to live on the bleeding edge. The day is just too short to use CoffeeScript/Dart/Haxe when there's JavaScript. Completing projects that seem "easy" is hard enough to begin with.


This one sentence sums up the sociological and psychological problems that retard progress in the tech industry:

"Experiments that show that Erlang is N times better than "something else" won't be believed if N is too high."

That is exactly it. I've had those arguments where I was forced to tell a manager about another project I did for another company, a similar project where things went quickly and smoothly, using technology the manager was not familiar with. And every word I spoke was met with disbelief.


"And I hadn't even told him the truth. Actually, the shit coming out of Basco's pipes was a hundred thousand times more concentrated than was legally allowed. ... That kind of thing goes on all the time. But no matter how many diplomas are tacked to your wall, give people a figure like that and they'll pass you off as a flake. You can't get most people to believe how wildly the eco-laws get broken, but if I say "More than twice the legal limit," they get comfortably outraged."

-- Neal Stephenson, Zodiac (1988) http://en.wikiquote.org/wiki/Neal_Stephenson#Zodiac_.281988....

(I'd include the previous paragraphs in the quote, to show the conversation the protagonist actually has about this, if my copy wasn't on loan to a friend right now, dammit! If anyone else can post that, that'd be cool.)


So tell a big enough lie and everybody believes it but a big enough truth and nobody does...


Did they disbelieve it went smoothly as you described, or did they lack confidence in your conclusion that the _reason_ it went smoothly was because of the particular tools chosen?


Partial remedy: stick to easily measurable facts: number of people, budget, time to completion, number of detected bugs…

Then, when your manager inevitably does not believe you, ask him if he thinks you missed something (no), then ask him if he trusts you (yes), then contemplate the face of cognitive dissonance.


Ya, I had a manager laugh at me when I told him I had learned Erlang over the weekend (I am the resident "language lawyer", and Erlang's really not that complex). He didn't seem to be able to grok that a language could be so simple…


Languages are simple syntactically, but I'd have a hard time believing that anyone could pick up the set of idioms necessary to effectively use a new language in a few days.

Norvig covers this well in his Teach Yourself Programming in Ten Years (http://norvig.com/21-days.html) essay.


Without knowing any other language? Sure.

Coming from a strong background of similar languages? Sorry, you're wrong there. Perhaps you aren't familiar with Erlang; it is the "everyman" of functional languages. The things I think you'd call "idioms" (not really sure what you mean as that's a very loose term), let's say recursive programming, pattern matching, carry over as-is from any number of other functional languages. Pattern matching? It's a mix of OCaml and Prolog. Terms? Imagine Scheme had tuples in addition to lists. Etc. The only "new" concept is the message system, which if you know anything about networking, is pretty straightforward.

In other words, if you know pretty much any other functional language, it's almost trivial to translate basic programs to & from into Erlang knowing little more than its syntax, which as you concede, is simple.

OTOH, if you're coming from, say, PHP, sure, learning Erlang will be difficult as first you have to understand functional programming. But you're asserting impossibility, so counter-examples are moot.

Ten Years… I've been coding over twenty years in more languages than I can count… I have a good idea how long it takes to learn a language well. Maybe I'm biased because it's the latest language I've studied, but Erlang was the first non-toy language where I skimmed the reference manual, said "huh, that was all unsurprising", and started coding effectively.


Thanks for the link, it expresses part of what I struggle to explain quite well.

One problem I have, which Norvig doesn't talk about, is that I'm currently moving from one ecosystem (Microsoft) to another (the Java world), and it's not just about the language, it's everything, all the toolchain is different and unfamiliar.

The way of doing things is quite different too.


You can learn Objective-C in a couple hours if you have experience with similar languages.

Learning Cocoa or Cocoa Touch, otoh...


I picked up C# within a day or two with a working knowledge of Java. Some languages are easy to pick up given the right background.

I'll have to check Erlang out. I only hear good things.


What's your definition of "picking up" a language?

A) Writing a "Hello, World"?

B) being able to fix a simple bug in a program?

C) Writing a small program that solves some domain problem

D) Writing a program that solves a "real world" problem, making use of the programming languages' strengths and conforming to standards (something that other people would let you commit into a repository unchallenged :) )

I'm certain A) takes only a few hours, and B) can be done in a day.

As Norvig says, you can do C) if you write in the new language like you would in a language you already know (not much of a problem if we're talking C# vs Java).

But D) IMO takes weeks/months.


This roughly corresponds to my experience with Erlang; keeping in mind that "D" in Erlang relies heavily on its OTP ecosystem, which is fairly complex and different from most other languages.

Ignoring OTP, my ability to use the language proper was pretty much complete within a week, owing to its simplicity and my prior background in functional languages.

This is in contrast to C, which I have been using for around 15 years now, yet am still learning nuances of the language. (Things like: which integer operations are undefined on negative numbers; how integer promotion works with shift operators; how "restrict" interacts with scope.)

C is almost a fractal of nuance that does take years of experience to comprehend; Erlang has no nuance. (The closest thing to nuance I can think of is the relationship between integers and floats; and even then the takeaway is "it just works; don't worry about it". The only language in which I've seen the number hierarchy handled more cleanly is Racket.)


Erlang has a few syntactic conventions that will probably seem familiar only if you've used Prolog. But, like any other language syntax, you learn it pretty quickly and within itself it starts to make sense.


This was actually the biggest syntactic stumbling block for me, but that's because I know Prolog. It's definitely not Prolog semantically, and differs syntactically as well (e.g. clauses are separated by "." in Prolog but ";" in Erlang); to this day I sometimes find myself writing Prolog in Erlang.


The corollary to a) that I deal with all the time is third party integrations that are poorly documented and that after some time X start magically working even though the third party changed nothing on their side (according to them). Management always thinks "We've integrated things with this vendor before, it will be a snap to do again on a totally different endpoint" and this is never the case.


Then there's Facebook.

"We integrated with Facebook a few months ago, it should be easy to do it again."

Famous last words. I've lost count of how many times I've become an "expert" at some aspect of Facebook integration to have it be completely different just a few months later. Google is also really bad about this, at least in the past year or so. It almost makes me want to quit webdev and be a dba or something.


Feel ya. So many APIs and yet it seems like there is no interface - since they change all the time. Might as well not have them at all.

DBA it is then.


This is done on purpose. While others are trying to play catch up with their API, they're busy advancing things like robot cars with no competition.


My last job was in an agency that used to build websites for a fixed cost. After a few years of doing it we changed the policy so that integrations were charged as time and materials. You just can't judge how long it's going to take when you have to deal with an external organisation. Hell, one of the projects I was working on before I left (a year ago) was started 2 years ago - and it's still going on. It's just a simple authentication integration but they're at the mercy of the other party. No actual code has been written but plenty of developer hours have been wasted - including mine.


Integration is one of the biggest problems because there are the most unknown unknowns, to quote Rumsfeld. You don't know what spontaneous changes will happen, and you frequently can't control the resources on the other side.


Exactly. Same for non-commodity hardware. Come to think of it, even true for commodity hardware.


This is partly why I advocate a slow code movement similar to the slow food movement. Instead of sprinting I would like to walk to the destination, and avoid the pains of broken ankles and repairing shoes whilst running in them.

I would like to explore to find a sane and reasonable approach and not be driven by artificial deadlines guessed at three months ago, but by today's business need.

I would like to actually be measured on business value generated, not lines of code written.

I would like to improve and simplify, deliver real value and savour the joy of actually creating.

This may of course explain why I feel totally unproductive at times.


I think that is entirely in line with the true values of agile, which haven't really gotten through.

Slow food isn't about slowness for the sake of slowness. It's just about being realistic and interested in the long-term. It's about saving energy and mental stress.

One beautiful expression of this ideal is in "Domain-Driven Design."


Agile in the sense you are describing really has the wrong name. Agile implies fast; from Apple's Dictionary:

able to move quickly and easily; able to think and understand quickly

Agile should have been called Adaptable or some other name that has less implication of speed and more implication of accommodation of unknown or changing requirements.

No putting that toothpaste back in the tube at this point though.


Though this has the same problem as "cheap". People always equate "fast" or "quick" with "quick right now" and "cheap" with "cheap right now". There are enough examples in long-term projects that this is not true (though the converse is true as well, and those problems which transition from short-term to long-term are just nasty).


To me, agility has connotations of being quick (reflexes) and adaptable. It's probably from the RPGs that I've played.


I'm not disagreeing - just it's pretty clear the main concept that got through was "sprint".

had a rant recently on


Just as "fast food" is not unhealthy because of the "fast" part (there's actually a ton of healthy "fast food" if you actually look for it) but because moving fast is hard and cool, so people cut corners instead of accepting that "slow is how they go" (like re-re-...-re-frying in the same oil as a way to get "faster profit" in the fast food industry, or writing code with no/bad tests/documentation).

People should try and find their "natural speed" when coding, and figure out other personal special abilities they have instead of "speed", instead of trying to keep up with the fastest guy in the room and being afraid to give estimates that are x times longer than his, and managers should accept that people work and think at different speeds and that "slow" != "stupid", as most Americans tend to use "slow" in everyday lingo.


I too am a slow programmer.

I have seen so many furious sprints towards the 'finished' system with so little knowledge of what actually needs to be built. The only part that has changed recently is that there is now typically a unit test suite alongside that correctly tests the wrong thing.


I think the PostgreSQL community gets this right. The only deadlines are if you want your feature in the upcoming release, or if you can wait another year.

Seems to be paying off quite well. What other database can ship reliably every year with major new features and all kinds of improvements on many axes? And people still say the code is readable and the reliability is solid.


Related: I've stopped using the term "sprint" as it is, by definition, a non-sustainable pace.


Yes, where can I sign on?


yes yes yes yes yes! Great to hear this sentiment being expressed.


I rewrote a (~10k lines) C++ app in Erlang back when I was learning Erlang, and saw a ~75% reduction in lines of code.

http://www.metabrew.com/article/rewriting-playdar-c-to-erlan...

I expect the 'N' value varies wildly depending on what you are building, and whichever language you are comparing against.


At a previous job I rewrote a (~9k lines) Java app in Java and ended up with a 77% reduction in lines of code! So it would appear that the "smart programmer effect" is likely larger than the effect of most languages.


In this case, I was re-implementing the C++ app I had written - so it was the same programmer authoring both codebases.

I expect if I rewrote either codebase again, it might shrink even more.


You can compress a lot of Ruby code by a few percentage points by mashing the bottoms of the functions into "end;end;end;end;", same with C code that has curly braces on their own lines.

So this is similar to benchmarks: if you don't write a long paragraph about methodology, SLOCs and benchmarks are misleading.
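To make that concrete, here's a minimal, made-up sketch in JavaScript (to match the code elsewhere in the thread): the same function counts as five physical lines or one, purely depending on brace style, which is why a SLOC figure without a note on methodology says very little.

    // Expanded style: 5 physical lines.
    function add(a, b)
    {
      var sum = a + b;
      return sum;
    }

    // "Mashed" style: 1 physical line, identical behaviour.
    function add(a, b) { var sum = a + b; return sum; }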


Did you try rewriting the app in C++ and looking at how much of a reduction you get from that second try? I think it's fair to compare that second C++ app with the rewrite in another language, since a rewrite will likely always be shorter, independent of the language.


I understood the problem pretty well before writing the C++ version, having done some prototypes beforehand. The C++ version went through some reasonable refactoring too, so it's not like that code was just the first working version that g++ would accept.

I didn't do a full C++ rewrite of course. I think it's fair to say that whenever you are doing lots of concurrency/multithreading/evented stuff in a language like C++, the savings will be enormous when you switch to a language that has a nice concurrency model (ie, not just threads and shared mem and mutexes).


Good point, but it's also not quite a fair comparison to go back and forth between implementation languages.

Changing the language changes your thinking about the problem. If you translate a program from erlang to C++, the result might be shorter and more reliable than if the first version is in C++ and you rewrite it in C++. It also might not be, of course, but the fact that it could be muddles the comparison.


In my own experience, programmers vary more than languages do.

I worked with someone who produced very elaborate designs for the most trivial tasks. Unfortunately, his code was more clever than he was, so it often didn't actually work. Whenever I re-wrote something he'd written in the same language, I managed to do it in 1/10th to 1/15th the number of lines, while adding the "actually works" feature.


One of the big differences I've seen between better and worse programmers is that the better ones have much greater abilities to create useful abstractions. I've also seen massive decreases in the number of lines of code when rewriting other peoples' code in the same language, and it frequently comes down to the original programmer not having seen that the dozen cases that the code needs to handle are really just the same case with minor differences, and the common behavior can be factored out. The extreme case of this kind of redundancy is copy-and-paste programming, where absolutely identical code is replicated all over the place.


It's not that programmers vary - it's that the same mistakes are made across the board:

1. Memory management is hard - this is mostly a solved problem, because the difference between C and $GARBAGECOLLECTEDLANG was great enough that it outweighed most differences in programmer ability. The mass move to web servers put paid to the need to develop for Windows APIs and Linux syscalls, and the average software project actually got better. (well ... worked)

1.a. Business rules work better in languages that treat functions as first-class objects, even better in languages that are logic based, and really well in DSLs. Most programmers don't reach the first, so ... well, we await the next mass change in languages.

2. Choice of algorithm. For loops are nice, just not everywhere. O(n^2) hurts. You can see the same in unindexed table-scanning queries (see the sketch after this list).

3. Metrics. Measure your own programs. Write dynamic docs that update profile information in them. If you are not measuring, then ... how do you know you improved? Passing another hundred tests does not tell you if those tests measure what is valuable. Tests are regression; metrics are progression.

4. I'm ranting too much today, blood pressure is rising :-)
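A minimal JavaScript sketch of the algorithm point in 2 (the data and function names are invented): the nested loop does a comparison for every pair, roughly like an unindexed table scan per row, while building an index first means one cheap lookup per item.

    // Invented example: find which userIds also appear in bannedIds.

    // O(n*m): a loop inside a loop.
    function bannedUsersSlow(userIds, bannedIds) {
      var result = [];
      for (var i = 0; i < userIds.length; i++) {
        for (var j = 0; j < bannedIds.length; j++) {
          if (userIds[i] === bannedIds[j]) {
            result.push(userIds[i]);
          }
        }
      }
      return result;
    }

    // O(n + m): build an index (a hash) once, then one lookup per user.
    function bannedUsersFast(userIds, bannedIds) {
      var banned = {};
      for (var i = 0; i < bannedIds.length; i++) {
        banned[bannedIds[i]] = true;
      }
      var result = [];
      for (var k = 0; k < userIds.length; k++) {
        if (banned[userIds[k]]) {
          result.push(userIds[k]);
        }
      }
      return result;
    }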


I've often wondered what the software development world would be like if we used prototypes. Experimental hacking seems like a cut-down version of this concept.

But what if we actually built things that we already agree in advance to throw away? Maybe even start from two or three different plausible designs, and push on each one for a while until one seems to be winning? Then rewrite it, learning from the other contenders and from the prototype itself, all before shipping.

The obvious answer is that it would take too long. But I'm not so sure. They say designing software takes too long, also, but when I spend a few weeks designing, the implementation ends up going smoothly and hitting the target. Usually the features that are designed thoroughly at the start of the release hit the target, and other "quick" features that are added in later, sans design, take longer than the designed features and end up pushing the release out. Any overruns or missteps in the implementation of the well-designed features seem insignificant in comparison to things that skimped on design work.


Dr. Winston Royce came to this conclusion over 40 years ago. Unfortunately people read his paper and developed the Waterfall methodology instead. See Step 3:

http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/wate...


This is a well-discussed concept already - see the concept of "Spikes" in Agile development, or to a lesser extent, the idea of "Tracer Bullets" from The Pragmatic Programmer.

Of course, what's done in practice tends to differ, unfortunately.


"Of course, what's done in practice tends to differ, unfortunately."

Exactly... I've never really seen this happen beyond what I would consider "experimental hacking". Maybe it happens somewhere.


I like Fred George's developer anarchy in this regard - especially the concept of micro-web-services. They are the equivalent of a Unix command - do one thing (well) and join up with MQ. If you make a service small enough you can be confident of putting it up as a prototype and rewriting it next weekend in node.js.


I always did this, it's just how I code. Trick is to keep them short and fixed duration under a day, preferably under half. a) that's where all the value is (longer and you're just going down the rabbit hole), and b) gives you time to quickly write production code based on the spike/prototype.


It could be argued that rapid prototyping is modern software development.

In the late 80's and early 90's, rapid prototyping was very fashionable in the 'anti-waterfall' software engineering schools of thought. The idea was to use dynamic languages with good development tools (various Lisps, Tcl/Tk, Smalltalk) to rough out the idea, which would then be rewritten in 'real' languages.

Of course this tactic failed successfully! We shipped the prototype. Nowadays, it would seem absurd to develop a web application (for example) written in a 'real' language like C++ or PL/1.


I'm doing this at the moment. We have a gnarly ball of legacy code that is burning out devs. It actively resists any attempt to disentangle its parts, and after years of attempts at refactoring its gnarliness is barely reduced. I'm of the opinion that the actual problem it is solving is not nearly as complex as the code. We've extracted a smaller problem that contains all the hardest parts and are solving it with 2-3 different approaches. A completely naive implementation of the core functionality took only a few days and is already faster than the original.

The trouble at the moment is convincing the management that a) this is productive work and b) it really is time to (incrementally) rewrite this code - it's beyond saving. From their point of view it looks like we want to scrap something that sort-of works most of time and do all the work from scratch. It's difficult to convey just how much psychic damage the current code is causing to someone who isn't buried in it day-to-day.


The problem is that any prototype that provides, say, 75% of the functionality will end up being pushed to production as soon as someone above developer level gets wind of it.


I throw away prototypes pretty often. Spending a short time thinking through a problem by experimenting with it is more in line with my kind of thinking than writing detailed design documents or UML diagrams.

The key is to not be ashamed of it.


There's a problem with language productivity comparisons that Armstrong mentions: it's impossible to write the same program in two different languages. If you have the same team write it, their second try will benefit from everything they learned the first time, and that is so huge a part of programming that it is sure to distort the outcome and may even dwarf any language effects. But if you use different teams instead, you've traded one confounding variable for another—the effect of switching teams—which is also hugely influential. Thus it's impossible to do an apples-to-apples comparison, and most such experiments deserve high skepticism. It's too easy to consciously or unconsciously engineer the outcome you expect, which is presumably why we nearly always hear that the experimenter's pet language won the day. Has the experimenter's pet language ever not won the day?

That makes me think of a more modest way to do these experiments that might return more reliable results: use the same team twice, but have them solve the problem in their favorite language first. That is, if A is the pet language and you want to compare A to B, write the program first in A and then in B. This biases the test in B's favour, because A will get penalized for all the time it took to learn about the problem while B will get all that benefit for free. Since there's already a major bias in favor of A, this levels the playing field some.

Here's why I think this might be more reliable. If you run the experiment this way and A comes out much better, you now have an answer to the charge that the second time was easier: all that benefit went to B and B still lost. Conversely, if A doesn't come out much better, you now have evidence that the language effect isn't so great once you account for the learning effect.


This approach strikes me as the most realistic. If there is an existing codebase in the preferred language A, the obvious question, if B wins, is "Do we rewrite the code in B?"

The only downside, and I think adding the language C tries to defuse it, is that whatever the team writes first will always stick. For example: if I wrote a version with dynamic typing (say in Ruby), then redid it with static typing (Haskell), of course I'm going to try to reuse types. The extreme example (Greenspun's rule) is if my pet language A is Lisp: regardless of what B is, the team could try to write a half-baked lisp runtime on top of B. The style carries over, and sometimes it doesn't translate exactly. I don't know how to solve this.


Good point—there are more effects than just "learning about the problem" that carry over to the next time you write the program. Once your brain has imprinted on a particular design for solving the problem, you'll probably carry that over to the next implementation. It may not be the design you'd have come up with if you were thinking in B in the first place and, short of erasing your memory and starting over, there's no way to test that.


If you want to compare languages A and B, have them write it in language C first for the domain knowledge, then the two languages. That seems like a better option than giving one a bias on purpose.


But that doesn't account for the pet language effect.

Also, whichever of A or B goes third would still have an advantage over the one that went second.


That doesn't take into account how good a language is for prototyping and experimenting. Even if I have to write in $blublang, I'll probably still prototype it in $petlang for the increased productivity and then port it once I have the architecture nailed down.


Write the program several times, alternating languages. Eventually, both the programs in both languages should converge on their respective optimal lengths.

(I say "optimal" rather than "shortest" so we're not tempted to sacrifice clarity for concision.)


For some reason I assumed that the experiment was to measure development time rather than program length. Obviously that wasn't specified, since you assumed the opposite.


Uh, heh, development length is probably a better thing to measure. They are somewhat correlated, and you can measure them both at the same time.


I'm not sure the growth in the amount of software in the environment (contributor to A) is really that much of a problem. After all, he says we might have "thousands" of times more software, but we certainly aren't spending thousands of times more fixing it. That's because today's software is significantly less broken than the software of yore.

We're standing on the shoulders of giants. Underlying most of our environments is Unix (or Linux, same diff), which is basically the same as it was 20, 30 years ago. We're also running on http, an astoundingly good design. There are lots of other basics, none of which are all that complex, and all of which are finely tuned.

More to the point, a) represents a practical limit. If our environments get too flaky due to poorly understood configuration or bugs, we ultimately can't get programming done. But not pushing to where it hurts some means not using the latest, greatest tools that can amplify our power as programmers - the same tools that let us have thousands of times more software than we used to, without losing any more time to configuration/bugs than we did decades ago.

And beyond that, a lot of the business opportunity in the industry lies with running along just behind the bleeding edge, being firstest-with-mostest to the new and powerful technologies. So unless you're in a safe business relatively independent of new tech, you're going to bleed a bit.


>might have "thousands" of times more software

He said thousands of times more lines in each piece of software, not thousands of times more software.

>That's because today's software is significantly less broken than the software of yore.

As a per-line measurement, yes. As a per-program measurement, not even close. Nor as a per-feature measurement, because those 1000x locs are largely going into abstraction layers at the bottom of the stack and chrome at the top.


> 30 years ago there was far less software, but the software there was usually worked without any problems - the code was a lot smaller and consequently easier to understand

Pardon the intermission, but this is one of the areas where node.js shines IMO. I can read the complete source code for a very complex application, or at least know that each module has a reasonably-sized source and is readable if the need arises. Very small modules and using composition is encouraged, not restricted to a fringe community, and you also get to share tooling and libraries with the browser. All that on top of a friendly, functional language. It really feels like a step forward.


Never used nodejs before - is node's packaging system different from Python's or Ruby's?


Yes. The biggest difference is that there is no global namespace for modules.

    require('my-dependency')
returns a value, rather than aliasing identifiers into the current scope. So you end up with code like:

    var myDep = require('my-dependency')
    exports.doSomething = function (x) {
      myDep.doSomethingToAnX(x);
      // more code here
    }
The Python import (and Go and probably others) works in a similar way, except that in node, even the module names (the argument to require) are local to each module. That is: `require('my-dependency')` can return different values depending on the location of the file that called it. This means your project can depend on two different libraries that both depend on conflicting versions of "my-dependency". The mechanism by which this is accomplished is simple and straightforward. Another nice side-effect (as compared to Ruby) is that you can easily isolate a copy of any given dependency to monkey-patch or otherwise modify it, without affecting anybody else. (Not true of built-in globals such as String or Function, but that's a shortcoming of JavaScript, not Node)
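A small illustration of the conflicting-versions point, keeping 'my-dependency' from above and inventing two libraries, lib-a and lib-b: because require() resolves against the nearest enclosing node_modules of the file doing the requiring, each library gets its own copy.

    // Hypothetical layout (lib-a and lib-b are invented names):
    //   node_modules/lib-a/node_modules/my-dependency   (a 1.x copy)
    //   node_modules/lib-b/node_modules/my-dependency   (a 2.x copy)

    // Inside node_modules/lib-a/index.js:
    var dep = require('my-dependency');   // resolves to lib-a's 1.x copy

    // Inside node_modules/lib-b/index.js:
    var dep = require('my-dependency');   // resolves to lib-b's 2.x copy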

Of course, the event+callback architecture and preference for writing everything in a manual continuation-passing-style makes working in Node suck for a host of other reasons, but the module system is really quite nice.
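To show what that manual continuation-passing style looks like, here's a tiny sketch with entirely hypothetical functions (readUser, readPosts, render, done): every asynchronous step takes a callback, so straight-line logic ends up nested.

    // All of these names are made up for illustration.
    readUser(userId, function (err, user) {
      if (err) return done(err);
      readPosts(user, function (err, posts) {
        if (err) return done(err);
        render(posts, function (err, html) {
          if (err) return done(err);
          done(null, html);
        });
      });
    });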


Modified globals are the fault of bad coding, not node. There are valid reasons for using globals, but it's well known to be bad practice to modify standard classes these days.

If someone wants not-yet-released features, they can compile to JS/node with source maps (e.g. Traceur, CoffeeScript). This also solves the CPS/callback issue (e.g. Iced CoffeeScript).

Npm is also incredibly easy to publish to (one line in the CLI). That leads to a ton of packages released that would otherwise hit friction in release process in other package managers.


node's packaging system is easily one of its best features, IMO. They've clearly learned from the mistakes of others.

Now if only npm could shadow repositories...


Nice post although I disagree that 30 years ago stuff mostly worked. That's not how I remember it but I guess his main point was that there was a lot less software then.


  The problem is we don't do similar things over and over again. Each new unsolved
  problem is precisely that, a new unsolved problem.
And if tasks are similar, and they can be made mechanical (or at least, commonalities can be factored out), they should be turned over to the machine itself.


> I've been in this game for many years now, and I have the impression that a) is taking a larger and larger percentage of my time.

Have the same feeling.


And it also depends on the editors, debuggers and other tools they use/know how to use.


Tools are definitely important, however proficiency with those tools and most importantly, proficiency with the language are the largest determining factors. It takes months (if not years) to get to the point where you're really comfortable in a new language and can crank out volumes of code relatively quickly.

If you made me write with Eclipse, I'd probably be horribly inefficient. It's not that Eclipse is a bad tool, it's just that it lends itself to languages that I'm not super fond of writing in (ie. Java, ActionScript, etc.) and it's much different than the way I'm used to working (with vi, git and a command line). If you transplant a C# programmer into the vi and Python world, they'll usually run away screaming. Neither is better, they're just different.


Investments in complexity bring fewer and fewer benefits, until maintenance alone consumes all resources. http://t.co/CL87QjJNjg


Nothing kills programmer productivity faster than management.


How about the perspective of Joe Armstrong (Green Day)?


He's sleeping right now. Try again in twelve days.



