Hacker News

Never heard of the two people you mention, but I'll share what I've observed:

The two biggest time sinks in everyday programming are 1.) communicating with everyone else and 2.) looking things up. If you can avoid these and have a quick analytical mind, you can basically program as quickly as you can type. So the secret to being insanely productive is to work as a solo programmer in a domain where you are thoroughly familiar with all the tools, enough that you have your API calls completely committed to memory.

If you look at most programmers who have a reputation for being insanely productive, they all follow this pattern. Fabrice Bellard works pretty much exclusively in C, mostly on signal-processing or low-level emulation. Rob Pike works all in Go using the plan9 stack. John Carmack is all games and 3D graphics, using OpenGL. Jeff Dean has two major interests - distributed systems and large-scale machine learning - and works basically exclusively in C++.

I read an interview with Guido van Rossum, Jeff Dean, and Craig Silverstein within Google where they were asked basically the same question, and the answer all of them gave was "The ability to hold the entire program in your head."




"looking things up" To expand on this point, it's not the time it takes to look things up. It's the loops of trying and failing.

I've recently switched from C# to Ruby. In C# I'd consider myself as close to an expert as an ordinary man can get. I never had to look anything up, but more importantly, it was rare that I was confused about why something didn't work. In fact, most of the time when I saw something not work, I knew exactly what I had forgotten.

In Ruby, where I'm less familiar, I'm surprised every now and then. It's not the "go to the internet and search" part that loses time; it's the "let's try this and see if it works" part. I might spend an hour writing 3 lines of code. In C# I might forget the 3 lines, but I know roughly what they are, and if I had to look them up I could confirm they're the right 3 lines in minutes.


It's both - think of having to look something up on the Internet as an L1 cache miss, while having to look something up and then try multiple variations as an L2 cache miss (having the API in muscle memory so you don't have to think about it is a register hit). It may take you 3 seconds to write out an API call from memory, 3 minutes to search for it on the Internet, and 3 hours to diagnose a problem through trial and error. Occasionally even as much as 3 weeks - I've had bugs that lasted that long when it was a bug in the underlying framework, because you never expect that the bug will be in someone else's code.

Obviously avoiding the 3-week bugs will save you more time than the 3-minute lookups, but you can continue to improve significantly by turning many of those 3-minute lookups into 3-second typing bursts.


I am not a programmer, but I am able to do some "things". I started with Unix in the '80s (and tried to do C) back when you had a book or two and nobody to ask any questions. There were many times I had to work four hours or longer just to figure out, by iterating to see what would actually work syntax-wise, what I could easily search for today. One book or two, nobody to ask, total trial and error. I always tried to buy as many books as I could (which I typically had to find at a college bookstore an hour away) so I could at least triangulate an answer from different authors' discussions of the same point.

Also had those plastic syntax cheat sheets which I picked up.


Same here; it's been tough breaking the 'buy a new book' habit. If there's something I find that I want to save, I either save the complete page or just copy/paste the relevant text.


> I read an interview with Guido van Rossum, Jeff Dean, and Craig Silverstein within Google where they were asked basically the same question, and the answer all of them gave was "The ability to hold the entire program in your head."

There are memory improvement techniques, developed over centuries, which can help. Think of it as latency reduction in a source code memory palace.

http://mt.artofmemory.com/start


On the other hand, if you can do this and your co-workers can't, you may be tempted to write complex and entangled code, because the implications of each line are perfectly obvious to you.


Or you end up with something like Git, which is easy to use if you have the exact same background/education/philosophy as the person who wrote it, but has terrible usability for pretty much everybody else.


I don't have the same background as the "person who wrote it," and I consistently find the git command-line tools easier to work with than other tools (SourceTree, magit-mode, vim-fugitive, etc. [1]). I think that in this case, it's just the mental model of a DAG [2] of commits that people have trouble wrapping their minds around. The tools are just there to help you slice and dice the DAG.

[1] I haven't used TortoiseGit, but I do imagine that having file-browser menu items for files maintained by git could be useful from time to time. I would still be using the CLI most of the time, though.

[2] Directed Acyclic Graph
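As an illustration of that mental model, here's a sketch using a throwaway repo (the /tmp path and branch names are just for the demo, not anything standard) whose fork-and-merge history git will draw as a DAG:

```shell
# Build a tiny repo whose history forks and merges, then draw it as a DAG.
# /tmp/dag-demo and the branch names are illustrative.
rm -rf /tmp/dag-demo && git init -q /tmp/dag-demo && cd /tmp/dag-demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "root"
git branch -m trunk                       # pin the branch name for the demo
git checkout -q -b feature
git commit -q --allow-empty -m "feature work"
git checkout -q trunk
git merge -q --no-ff -m "merge feature" feature
# Each commit is a node; --graph draws the parent edges between them.
git log --graph --oneline
```

gitk, magit, and friends render this same node-and-edge structure, which is why familiarity with the DAG transfers between the tools.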


There are several DSCM implementations based on a DAG (for example git, Mercurial, bzr, Monotone, Codeville, Fossil).

The only one that draws serious usability complaints all the time is git.


To borrow a turn of phrase from Bjarne Stroustrup, there are two kinds of software: software people complain about, and software nobody uses.


A couple of years ago the others were in use as well.

git won for various reasons, but the UI complaints certainly aren't because people don't understand DAGs.


Yeah, the reason everyone complains about Git is because everyone uses git. SVN was the horse to beat 10 years ago, because everyone used it.


People complained about git's UI 10 years ago. They didn't complain (as much, by far) about the UI of the other DAG-based DSCMs.


To be fair, some of the usability complaints aren't about the interface so much as things like:

* Unease with the idea that commits/history can be rewritten, and therefore "nothing is safe."

* Arguments over how some git commands are named similarly to svn commands yet don't do the same thing. The svn way is the 'right' way and git is 'doing it wrong,' but really it's just an argument about familiarity (sharing many traits with arguments over Mac vs. Windows keyboard shortcuts, for example).

* Complaining about a recoverable error because they don't know about the reflog.

* etc.

I'm sure there are legitimate complaints, but most of the complaints that I see are around things like that.
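On the reflog point specifically, here's a sketch of the recovery path in a throwaway repo (the /tmp path and commit messages are illustrative):

```shell
# Simulate "losing" a commit with reset --hard, then recover it via the reflog.
# /tmp/reflog-demo is an illustrative throwaway path.
rm -rf /tmp/reflog-demo && git init -q /tmp/reflog-demo && cd /tmp/reflog-demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "keep me"
git commit -q --allow-empty -m "precious work"
git reset -q --hard HEAD~1     # "precious work" disappears from `git log`...
git reflog -2                  # ...but the reflog still records where HEAD was
git reset -q --hard "HEAD@{1}" # jump back one reflog entry: commit recovered
git log --oneline
```

The "lost" commit was never deleted, only unreferenced; the reflog keeps a per-ref journal of where HEAD has been, which is what makes this class of error recoverable.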


> I consistently find the git command-line tools easier to work with than other tools

Yes, but how many weeks and months did you spend learning all the quirks? http://git-man-page-generator.lokaltog.net

> that people have trouble wrapping their minds around.

If you create a product and just tell people "sorry, you're not smart enough to understand it," was it a good idea?


When git was still one of many competing DVCSes, I tried darcs, and the CLI was pretty great. Spoiled forever.


I like to joke git's motto would be "made by people smarter than you, for people smarter than you". It still rocks.


I realize these things are subjective, but git is not difficult to understand or use. Branching is a breeze. Merging is a breeze.


Oh, but you just don't. You just don't. Six months from now, the thing will have escaped your brain and you will have to look at it.

You are generally also your own cow-orker.


If you invest the time into acquiring a faster and larger mental cache, you are unlikely to handicap your hard-earned capability with complex code. In fact, the new visibility can increase simplification and reuse.

The same restraint doesn't apply to someone naturally gifted with an exceptional memory, since they never went through the acculturation phase :)


One thing I never quite understood: would these exercises also improve your memory when you're not using the techniques explicitly? Somehow having the information organized in your mind in a more cohesive manner, so that recall is always possible by traversing the graph...


Yes. It's like learning a new language, which changes how concepts are represented. Or increasing your vocabulary and improving your writing, which then influences thinking. Moving beyond the alphabet to additional symbols opens up new options for both representation/storage and pointers/indexing.

Eventually, images come to mind without awareness of the retrieval process. The slow part then becomes translating the images back into alphabet-based language, for communicating with other people.


Could you share link to the interview?


I would agree with this, just from the taste of it that I've gotten. The first time I did a CS assignment without looking up any java syntax felt like the first time achieving full speed on a bike at the age of 10, with all the freedom and possibilities that come with that.


PG wrote in one of his essays that he used to sleep in the morning, do stuff involving others (like meetings) in the afternoon, and code in the evening and at night to avoid interruption.

Ben Horowitz wrote in his book that the most productive management style is to assign each task to exactly one person and allow them to make decisions on their own, without forcing them to discuss everything with a bunch of people.

IMHO the key to productivity is managing your tasks/team in a way that avoids interruptions and reduces communication to the minimum.


Isn't this an argument in favor of IDEs vs editors?

As much as I prefer the philosophy of Emacs/vim/sublime + cmdline tools, code completion is never great (see Yegge's "grok" rants).

I really enjoyed a project I wrote in Java inside Eclipse, because code completion really helped me not open Google all the time. But I hate everything else about Java...

Contrast with Go, where achieving flow is hard because you can't even f* compile with an unused import. And I love everything else about Go (fast compiles, easy deploys).

(One could argue that if I haven't memorized the smallish Go standard libraries I'm not smart enough to work in the field...)

Notch livecoding a "minecraft clone" for Ludum Dare is a great display of what you talked about: familiar problem, familiar environment, great speed. Unfortunately we can't do the same with Fabrice or Jeff...


The way I view editors vs. IDEs: an editor is a completely unconfigured IDE.

For example, your Go problem can be solved using goimports: http://godoc.org/golang.org/x/tools/cmd/goimports With this tool you don't have to worry about those imports, and it can be configured to run automatically on save in most editors.


Code completion is great, and it certainly helps me go a lot faster.

But beyond this, it's an argument in favor of smallness. Java needs code completion because the standard libraries are huge; a tech stack + domain where your interface with the outside world is much smaller (e.g. DSP in C) is much easier to memorize than one where the interface surface is huge (e.g. Swing and Java).


Never heard of this "The ability to hold the entire program in your head." quote, but it's so true.

In addition to memorizing APIs, etc., I like memorizing where I've done "function X" in past projects.

That way when I encounter something similar (that's probably not on Google/StackOverflow), I can grab it from my previous project. Then copy/paste/tweak.


http://had.co.nz/ - Hadley Wickham, creator of many R packages. Taylor Otwell single-handedly created the awesome Laravel PHP framework.


> If you can avoid these and have a quick analytical mind, you can basically program as quickly as you can type.

Without stopping to think about design? I can't imagine that.


The domains where most of these programmers work are ones where the problems are fairly well-defined. For example, what's the design of ffmpeg? It takes in a well-specified video codec and outputs a video stream. What's the design of protobufs, MapReduce, or GFS? In each case they had several motivating examples built using earlier technologies, and knew exactly which key metrics they wanted to optimize and where the bottlenecks were likely to occur.

That doesn't mean there's no design work involved - far from it, I recall reading through the design notes for both GFS and MapReduce while I was at Google and being amazed at the possibilities they'd considered. But the design work is largely up-front: it's picking among alternative high-level architectures and figuring out what the consequences of that choice will be. Once the choice has been made, you don't need to make a whole lot of follow-on decisions; a lot of the code follows strictly from the choices you made up-front, and you aren't writing a line and thinking "Oh, was that a good idea? I better do it some other way". It also helps that these high-productivity programmers are highly experienced, and they specialize in a domain, so they've seen many of the low-level pitfalls before and avoid them instinctively.

It's a very different experience from writing end-user code. I'll work on a webapp, get some data on the screen, and then discover "No, the flow is wrong; we should present this and this widget independently on another screen and alter these other widgets based on the values there", and then that will have a cascading effect throughout the program that requires a bunch of other changes. By contrast, when I wrote an HTML parser, the behavior was already fully specified by the HTML5 spec. I had to make a few judgment calls regarding "What's the ideal API for client code? What are the boundaries of responsibility for this parser? What data representations should I use?", and I ended up having to revise them significantly, but that was largely because it was my first major C library, and a more experienced C programmer would know instinctively what the right choice was. Much of the time spent on the parser was straight-line implementing the spec and then tracking down bugs in the implementation.


That's a nice description, but up-front or not, in my view the design work should be counted as part of the programming. Maybe that's pedantic, but the idea of 'programming at the speed of typing' sounds positively dystopian to me.


I'm just not sure how it's possible to have any API committed to memory these days, though I agree that if one picks a small number of dependencies as core tech, one stands a far better chance of doing it and will realise better productivity outcomes than someone who compulsively includes every 30-line $PITHYNAME.js novelty to hit HN and GitHub.

Still, I grew up writing backend C in Linux environments, focusing heavily on socket programming, protocols and systems generally. I cannot help but feel that rapid development was a lot easier back then, and coding felt more rewarding. Part of that is definitely age; I'm 29 now, been writing C since I was 10, and started to feel consciously "burnt out" on coding around 20. (Basically running on motivational fumes ever since.)

However, there's more to it than that. C had a small and generally static standard library that one committed to memory easily with a little experience. Sure, one had to consult man pages from time to time for system calls, but in the grand scheme of things, writing code was a very original exercise, since so little could be taken for granted. That's why, even though I had to write 10x the lines of code, with my own lists and hash tables (notwithstanding GLIB etc.), I felt massively more productive after a coding marathon in those days.

I really feel that the practice of programming has shifted radically in its intellectual content since then. Primarily, we're wiring together prefabricated Lego blocks nowadays; when I write code in modern languages, I've got 27 browser tabs open and seem to spend 95% of my time looking up the fine points of how this Lego block connects to that one. Java is the archetype for this, but it happens even in more terse, expressive languages. Clearly, there are some productivity benefits from all this, being able to develop at a higher level while taking for granted many data structure primitives and wrappers, and I'm not blind to that. Still, it seems like the real art these days has shifted to adroitly and dexterously figuring out new APIs and libraries.

And there's so many of them! Verily a diarrhoeal explosion of APIs, libraries and dependencies. Even if you're impervious to the latest fads and fashions, the technology is shifting rapidly from year to year, and with it, entirely new documentation, methodologies, reference manuals -- an entirely new skill set, practically.

So, I'm hard-pressed to imagine how I'm supposed to ingrain APIs into motor memory when they are so numerous, expansive, and ever-shifting. Overall, I feel that my productivity as a programmer has declined considerably, even if the overall efficiency of lines of code has gone up. Pragmatically speaking, I'm not sure I'd trade in the overall productivity gains of all the abstraction for the halcyon era of C programming, but I definitely find it challenging to motivate myself to code when the primary skill set seems to be in looking things up.


I think you've captured well the spirit of my comment.

The point that I didn't mention, and that may partially answer your question, is that the other thing all of these "highly productive programmers" did is narrowly specialize in a particular niche where there weren't good existing solutions but there were a number of potential users. There are still C libraries that remain unbuilt! Actually, with the decline of good C programmers and the rise of scripting languages, the relative demand for quality C libraries has probably gone up, if anything. Usually these programmers (along with more recent ones like Zed Shaw and Mongrel, Brad Fitzpatrick and memcached, or Salvatore Sanfilippo and Redis) identified a specific need, solved it quickly and efficiently, and then leveraged the fame & reputation from that successful open-source project into a good job at a company that lets them do what they want, or a series of consulting engagements supporting that software.


IIRC Brad's approach was more of a social hack/MVP: wrote memcached in perl, convinced people of its obvious importance, let other folks write the C version (which I'm pretty sure he could have written, judging by his Go code).


> identified a specific need, solved it quickly and efficiently, and then leveraged the fame & reputation from that successful open-source project into a good job at a company that lets them do what they want or a series of consulting engagements supporting that software.

That's what I need to do. I almost did a few years ago, but unfortunately made the mistake of exclusively licencing the project to the customer, for a very cut-rate price. It was a gamble, as it was skunkworks, so it could have paid really well. Instead, it paid crappily, while demand for it elsewhere was abundant. Alas, IP...


> 2.) looking things up.

Well, shit. Is this even possible in web development these days? With the exception of jQuery, there hasn't been a single JS or backend framework that I've used from one year to the next.


So don't use them. Virtually all my work at Google was with vanilla JS, and I had the DOM APIs memorized.


So how do you know you're not missing out? Or, more precisely, how do you know that the productivity you gain from restricting yourself to the subset of tools you fully master compensates for what you're missing out on? I'm thinking of frameworks like Angular or React. I ask as someone who doesn't use those frameworks and sticks with vanilla JS and the DOM APIs.

If you were at Google I suppose that you had lots of technical conversations with colleagues so you more or less knew about the latest trends and their benefits, but it's harder at a smaller company or working on your own.


That's the question everyone asks, right? People tend to be really insecure about the possibility that someone else might be doing better than they are.

The short answer is that you don't - there is always the possibility that someone has invented something that lets someone else do your job way more effectively. But you accept the risk that you're missing out for the certainty that you're getting features done and code written quickly. And periodically, maybe look around and try some of the new inventions that seem to be getting traction to see if they really do make your job better.

The key point is to rely on your own data and observations rather than the opinions of others.


Anybody who adopted Angular right out of the gate is kicking themselves now because everything they wrote in the last year is now obsolete. Hopping on the latest thing might be fun and you might learn a few things, but I'd prefer to wait for things to be fully battle tested and have a good history of support before I put it into production.

Another good reason is speed. It's rare for a framework to be faster than the native DOM APIs. And the app loads faster!


I'd love nothing more than to just use the latest DOM APIs, but every company I've spoken to in the last 5 years wants a framework. They feel more comfortable working with an app structure that everybody else is using to supposedly great effect.



