Hacker News
Holding a program in one's head (paulgraham.com)
142 points by eposts on Aug 23, 2007 | hide | past | favorite | 131 comments


One possible alternative organization pattern is the "band." Music, like software, is made of ideas. As the band Genesis has proven, you simply cannot replace Phil Collins with Ray Wilson and expect to have a mega-band again. Ray's music with the band is nice, but the style was markedly different from what most people think of as "Genesis." The same thing happened earlier in the band's history, too, when Peter Gabriel left and Phil Collins replaced him as the front-man. Everything changed: the songwriting, the style of play, everything. It took half a decade for people to get used to it.

Writing software is kind of the same way. Just look at the impossible made possible by demo coders on old-school 8-bit computers. These programs would never have been successfully coded in a commercial organization. But, -bands- of coders wrote them successfully. They promoted their software as bands, included self-written music, artwork, etc.

It would be interesting to see how a band-style organization would apply to more practical software products. Software so produced would come in boxes with the band's logo, but more importantly, a _list of credits_, anecdotes about the software's creation, etc. That is, to make the delivery of the software more _human_.

Back in the day, when credits on software were more commonplace, it was possible to judge the quality of a product (to some extent) based on who was involved with it. Some people became renowned coders, renowned technical writers, etc. I think it gave two incentives: first, your name is going on the box of that package -- this gave prestige in the community post-sale; second, it allowed the customers to predict the overall feel of the software prior to actually purchasing it, based on their experiences with software written by the same or similar authors.

Literature is another example. People flock to this blog because of the name, Paul Graham, just as much as they do for the information contained therein. People buy books from famous authors because the authors are well known to produce good work. People often subscribe to magazines only to read one or two columns by well-known authors. So, in a very real sense, tacking your name on something is a seal of authenticity and a seal of quality all rolled into one. And, people like that.

I know I do.


The idea of tacking your name onto software seems to be gaining ground lately. Until recently, every Facebook page had "A Mark Zuckerberg production" on the bottom, though he didn't really share the credit with the rest of the team. I guess that makes him much more Roger Waters than Mike Portnoy. The nonprofit I spent much of college working on has a list of all the team members that make it happen (http://www.fictionalley.org/houseelves.html). And Xobni has full bios for everyone on the team (http://www.xobni.com/team).


Very nice analogy, and I think it captures why some "agile" teams I've worked on felt so productive - a small group could jam on something and have a shared understanding of the problem as we were working. However, as with a band, there's definitely an upper bound. 2, 4, 6 programmers - awesome. 12 programmers - starts to fall apart pretty quickly.


Interesting point. I noticed that lately, start-up names sound like band names: Infinity Box, I've got a Fang, etc.


Ha! I've got a Fang is from a They Might Be Giants song.


Hmmm... maybe I should name my band, er, startup We Threw Gasoline On The Fire And Now We Have Stumps For Arms And No Eyebrows (a NoFX song) :~).


If people want to use Now Form A Band to name their startups, I'm absolutely fine with that.

http://www.nowformaband.com/


Heh, fun site. I just submitted "ultraviolet catastrophe". The first time I saw that term in a physics book, my first thought was "that would make a great band name".


(Just as long as the startup also records music.)


Please List This Corp on the Stock Exchange

(...right about this time, some shithead will be drawing a fat fucking line, over the precis on our business plan...)


> Oddly enough, scheduled distractions may be worse than unscheduled ones. If you know you have a meeting in an hour, you don't even start working on something hard.

God, how I wish more people understood this. It might seem like programmers overreact to even the most minor demands on their time---"What's the big deal? It's only a half-hour meeting!"---but a half-hour meeting can easily kill several hours of productivity.


Just like Marc Andreessen's "don't keep a schedule": http://blog.pmarca.com/2007/06/the_pmarca_guid.html


When I do have enough time to do some work, but I know it will get cut short by a scheduled event, it causes an almost non-stop sense of anxiety in the back of my mind. It's seriously distracting and feels horrible.


Same here! Especially when you're in the zone, churning out code faster than Eminem can bust a rhyme. I love it when that happens, and really hate it when someone pokes in and requests some idiotic update form so that your boss/manager can feel better about himself....


It's about time PHBs noticed this tiny little fact: they are the only ones looking forward to any sort of meeting. Keeping a "this year I will do this and that" schedule is about as much schedule as was ever required.


Thank you Paul! AFAIC, this was your best essay ever. (And that's saying a lot.) You have just described what has been in my head for most of the past year, but I didn't have the words to describe it.

A good friend of mine is an artist. He claims his secret is, "I paint every day." I tried that. It didn't work for me.

I have tried every combination of pens, tablets, paper, sticky notes, and electronic approaches to distill my thinking, and none of it has ever made much difference.

Only when I get the entire program in my head (level 0 only), do things get cooking. I rewrite everything 3 or 4 times. Sometimes I rewrite just to understand.

This approach reminds me of Jessica Livingston's chapter on Steve Wozniak in "Founders at Work" (required reading). Only when he could get the entire Apple II into his head did it become the breakthrough that it was.

I used to be afraid to exercise because it took time away from programming. What a mistake. I walk up and down steps for 35 minutes every morning WITH MY PROGRAM LOADED INTO MY HEAD. THAT'S when I do my best design work. I didn't realize what was happening until just now. Thank you!


"... Only when I get the entire program in my head (level 0 only), do things get cooking ...This approach reminds me of Jessica Livingston's chapter on Steve Wozniak in "Founders at Work" (required reading). Only when he could get the entire Apple II into his head did it become the breakthrough that it was ..."

The common theme here seems to be succinctness and efficiency in design. The Woz example is a classic here. The design process of refining toward clarity of purpose through reduction, until it is as simple as you can get it, has merit.

So should the speed of iteration toward a "high efficiency design" be a goal as well?


I have an alternative explanation on why taking breaks can be beneficial. I'm less of a fan of the subconscious theory--it can certainly happen but I think it's much less common than people estimate.

What I think is more likely is that you get fresh insight when returning to a problem after a break, because the mental model that was constructed in your head was subtly wrong. It's happened to me a bunch of times: "OK, where was I?... the grommit plugs into the foobar, and the whatzit sends a message to the widget... wait a minute, no it doesn't, it sends to the frobnosticator first! WHY on earth was I thinking that for all that time before the break??"

Sometimes it's as simple as the famous case of getting somebody else to look at your code and immediately point out what's wrong that two hours of staring at the screen couldn't accomplish.

Other times it's a lot more subtle, high-level and abstract. Since, in those cases, it's often difficult to discuss the problem with somebody else, you don't get a) the outside feedback to speed things along and b) the humiliation factor that makes you remember how the insight came about. ;) So the meta-revelation of how you came to that revelation might be lacking, and it's attributed to the subconscious instead.


I used to write my research papers (in linguistics) this way, I'd make piles of useless notes, writing and rewriting sections of it while never having anything to show for it. Then when I understood all of it, I'd write the paper (say 10-45 pages) as fast as I could type, usually in a single sitting. I'd have to go back and edit it and such, but the bulk of the work was getting all of the problem and my solution to it into my head.

Interestingly, as I headed to grad school, this became more and more difficult as the problems became harder and harder. Eventually I had to devise a new system for writing papers (which I can't describe adequately) because the problems and their solutions became too large to hold in my head at once.

To me, this is the interesting case, how do you solve problems that are too large to hold the solutions to in your head at once? The obvious answer is "break it into smaller pieces", but that is frequently very difficult to do. Probably the answer lies in "go back to the basics", and make sure that you have the foundations cold so that they aren't occupying stack space.


If you have the foundations cold, then you can make a word for each foundational concept, and a syntax for each way the foundational concepts interact. Then you've started the kind of bottom-up programming pg describes.
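A toy sketch of that bottom-up style, in Python rather than Forth or Lisp (the payroll domain and every name here are invented purely for illustration): each foundational concept gets its own short word, and the top level is composed from them.

```python
# Bottom-up vocabulary for a tiny payroll domain: one short
# "word" (function) per foundational concept, composed upward.

def hours(worked, overtime=0):
    """Paid hours, with overtime counted at 1.5x."""
    return worked + 1.5 * overtime

def gross(rate, worked, overtime=0):
    """Gross pay: the rate applied to paid hours."""
    return rate * hours(worked, overtime)

def net(rate, worked, overtime=0, tax=0.2):
    """Net pay: gross composed with a flat tax."""
    return gross(rate, worked, overtime) * (1 - tax)

# Once the vocabulary exists, the top level reads almost like
# a sentence in the domain's own language.
paycheck = net(rate=20, worked=40, overtime=2)
```

Each layer only has to be held in your head once; after that you think in the new words instead of their definitions.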


Yes, if you're writing in Forth. I'm less sure that the bottom-up approach that Forth and Lisp allow translates as readily into generalized research.

This is one of the ways in which programming is different from other research. In programming, when I define a term(/function/word), it bloody well means that. In linguistics, if I define a notion, well, whether or not it means anything at all is exactly the point of the research.


I've found that a ramification of holding the current program in my head is that I have a greater emotional bond with whatever problem I've most recently worked on.

If I work on several projects concurrently, there's always one of them that I am more passionate about at any given time, causing me to think about it at nearly all hours of the day. Interestingly, the one I am more passionate about oscillates -- and is nearly always the project I have most recently worked on for a large chunk of time. Therefore, it is the one in which I am holding the most context in my head.


> Sometimes when you return to a problem after a rest, you find your unconscious mind has left an answer waiting for you.

This happens to me on so many occasions, in many different areas. While programming, if I have a problem I'll sleep on it, wake up, and see the problem in a whole new light.

Also, before writing an essay for class, I will read the prompt thoroughly. This way my brain starts to form sentences without conscious thought.


I totally agree, and oftentimes, I don't even need a whole night. Just now, in fact, I left my laptop frustrated, thinking, "Shoot, what the heck is going on with this bug?!". I went to the bathroom and "forgot" about my problem during my pee. Then, one minute later, walking back toward my laptop, I paged the problem back in, and realized my sub-conscious (I guess) had already solved the problem -- I knew exactly what was going on with the bug.

It startles me how often and well this works. It seems like cheating, too, because it feels like I'm not doing the work! :)


I've had the same experience with taking a bath. Just forgetting about the problem (whether it is a bath or a pee) seems to help.


Don't forget you can combine the two. This is often done for you already if you use the pool or hot tub at your apartment complex that kids were playing in all day.


To me, sleeping is the greatest problem-solving technique ever invented. I have lost count of how many times this has worked for me. If companies had any sense, they would allow employees to take cat naps.


I second catnaps. I've written before on news.yc about the nap-success I've had. Certainly any company I run will have a quiet place where people can crash for 15 or 20 minutes in the afternoon. Of course, so far the only such company's world headquarters was located in my apartment, and that kind of feels like cheating...


Absolutely! I've gone on record saying "sleep solves all problems." Sleep itself doesn't--but waking up with a fresh, calm mind is much more useful than the chaotic, depressed mind that results when you are very frustrated with something.


In 1985, Peter Naur wrote a similar article presenting programming as theory building. In essence, a programmer primarily constructs a mental model of the problem, and its solution; secondarily writes code; and incidentally documents. Naur discusses how this view affects program life and modification, system development methods, and the professional status of programmers.

Naur, P. 1985. Programming as theory building. Microprocessing and Microprogramming 15, 5, 253--261. http://www.zafar.se/bkz/Articles/NaurProgrammingTheory


Very good essay.

The only part I don't really agree with is the implicit condemnation of programming done by large companies. Yes, their methods result in mediocre software, but that's often what you want. There's a reason they try to treat programmers as interchangeable cogs, and resist having an entire program in one person's head.

To use the tired "building a house" metaphor - you can get a renowned architect to design the next landmark in a city, or you can get mediocre, interchangeable architects to design a row of townhouses. Both approaches are valid and have their own place, but there's no point asking the famous architect to build townhouses.


A poorly laid-out neighborhood of shoddily-built slums? I think that's a rather good analogy for corporate IT.


Is ClutterMe a townhouse or a landmark?


From footnote 3 here: http://www.gigamonkeys.com/book/introduction-why-lisp.html

Psychologists have identified a state of mind called flow in which we're capable of incredible concentration and productivity. The importance of flow to programming has been recognized for nearly two decades since it was discussed in the classic book about human factors in programming Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister (Dorset House, 1987). The two key facts about flow are that it takes around 15 minutes to get into a state of flow and that even brief interruptions can break you right out of it, requiring another 15-minute immersion to reenter. DeMarco and Lister, like most subsequent authors, concerned themselves mostly with flow-destroying interruptions such as ringing telephones and inopportune visits from the boss. Less frequently considered but probably just as important to programmers are the interruptions caused by our tools. Languages that require, for instance, a lengthy compilation before you can try your latest code can be just as inimical to flow as a noisy phone or a nosy boss. So, one way to look at Lisp is as a language designed to keep you in a state of flow.


> Thanks to Sam Altman, David Greenspan, Aaron Iba, Jessica Livingston, Robert Morris, Peter Norvig, Lisa Randall, Emmett Shear, Sergei Tsarev, and Stephen Wolfram for reading drafts of this.

PG, you got the Stephen Wolfram to read this? If so, perhaps you can have a conversation with him about making Mathematica open source :~)?


Speaking of Wolfram...

He gave a great talk on simplicity in nature's algorithms at MIT. I could see a lot of application to programming with his take on simplicity vs. complexity.

http://mitworld.mit.edu/play/147/

Also, he spoke at Startup School in 2005 - really excellent talk you should definitely check out if you get a chance.

http://feeds.feedburner.com/~r/Ycombinator-StartupSchool/~3/...

(For anyone new to the site, you can get all the available Startup School talk mp3's at this podcast: http://feeds.feedburner.com/ycombinator-startupschool )


PG, you just got me wondering about feedback. When you let people you trust or respect read drafts of your essays, what kind of feedback do you expect? How does the feedback affect your essays?


I don't have any specific expectations. Usually I just say "please let me know if I got anything wrong, or missed anything important." If I'm writing about something I don't understand well enough, I ask domain experts. I always ask Hutch Fishman about startup funding, for example.

I take responses pretty seriously. I've killed whole essays friends thought were bad. Usually I just have to rewrite a sentence or two.


Not just Wolfram... also Google's Director of Research Peter Norvig, and as usual, the historic hacker Robert Morris.


The only part I slightly disagree with is number seven, "Don't have multiple people editing the same code". It is best to have clear ownership of the design of a component, and consultation before major changes. But if you design for readability (number five) editing and even radically redesigning someone else's code should be normal. I find I write the best code when I assume someone as smart as me but ignorant of the problem is going to have to rewrite this tomorrow. (And this is true, even if it is me.)

By Paul's account, the different components of ViaWeb were all written in different languages. That enforces vertical silos to a degree that I'm not sure would be healthy in many projects. Just yesterday I saved my colleague a lot of time by pointing out that he was basically recreating a library function I'd already done.

Projects that have clear vertical components, and one team member per component, do move really quickly and it's tempting to think that all projects should work like that. I'm not sure that is really true of all worthwhile projects. However, it might be true for startups.


I must say I recognize all the elements here; I have been working almost exactly this way since I started in 1982. I would say the key element is thinking in visual terms, or visualizing the problem and solution. However, just a tip - I discovered two useful tools that help me now that my memory is getting weaker (from age? :) ): a dictaphone and a whiteboard. The dictaphone allows me to record ideas, snippets, reminders and such, even practice presentations. This is very useful as I don't have to write it down anywhere. I also use the whiteboard either to draw the bigger lines to confirm my idea, or to write down details and have them displayed in front of me at all times, so that I don't forget important but perhaps subtle parts. Just my 2 cents. Thanks to Mr. Graham for an excellent essay!


I usually lurk but I just can't with this one. Great article. What apt timing... This has made me rethink taking a recent position with a large company.

Multitasking is a myth. Editing code on demand is possible, but designing a program is a process that requires all cylinders of the analytic and creative mind. To be in the zone is almost like a trance, where I could start speaking in tongues at any moment.


I'm trying to write a novel, and find that all eight points are equally instructive if you replace 'program' with 'story,' 'programming' with 'writing,' and so on. Number 8 is particularly helpful in this context, at least for me.


It is amazing how difficult it is to get non-programmers to understand the effect interruptions have on the art of creating a program. A 10-second interruption really means 30 minutes of vastly decreased productivity, as it can easily take that long to reload a program's universe back into your head.

More often than not, the reload is imperfect and parts of the universe are not restored, leading to more lost productivity as the programmer must recreate the solution they already had worked out. Another subtle side effect of this imperfect reload is bugs caused by a mismatch between the pre- and post-interruption program state.


I wonder about the kind of cognitive access a programmer has to their program once it's loaded. Descriptions of walking through a building imply that moment-by-moment the programmer is only dealing with a subset of the problem, although the whole thing is readily available in long-term memory. He's thinking about the contents of a particular room and how it connects with the other rooms, not conceptualizing the entire house and all its relationships at the same instant. I imagine this is necessarily the case, since short-term memory is limited. If true, this imposes limitations on the topology of the program, since the connections between different parts are localized and factorizable: when you walk out of the bedroom you don't immediately find yourself in the foyer. Consequently, problems that can't be broken down (or haven't been broken down) into pieces with local interactions of sufficiently limited scope to be contained in short-term memory will not be soluble. Does this make any sense?


Directly from IM in response to an interruption from my boss about an error message:

"I suspect there is an issue with the actual input from the database. (Note: we're working with a test database that's sketchy in spots) What do you want me to work on? Do you want me to spend the time tracking down that error or do you want me to work on "X" (that should have been done three weeks previous)? Because I was thinking about "X" and my entire train of thought is derailed and now I'm trying to work on "X" and wondering what is wrong with that particular chunk of data."

I suppose for the non-programmer it sounds like I'm just being nasty over nothing, but that little panic attack over a minor error message cost me HOURS in trying to get back to the original program in my head so I could finish it. In fact, I actually had to go and FIX the error to get it the hell out so I could fully focus on what I needed to be doing.

Once I got there, around 1am, the code flowed like water and it's done save for minor debugging. I was in the zone enough that if my eyes hadn't been closing by themselves, I'd have finished that, too.

Management, I think, has a double edged sword to deal with. My immediate boss wants to let me do what I do because I am lucky enough to have the ability to put someone else's code in my head in the same way described in this blog entry. It means that I can go in and fix it and if there's a bug I know why it's doing it and I know just where to push on it and where it needs shoring up--in short, after a while, it's like I wrote it myself.

But he's also dealing with HIS boss, who is dealing with the bottom line, and his boss would probably have apoplexy if he saw me playing spider solitaire as I let the problem I'm addressing work itself into my brain, and would have NO idea what I was talking about if I said I needed to get the application "into my head".

Wonderful blog entry.


I'd like to have this essay printed on bronze plaques and hand them out to everyone I ever have or will work with, or perhaps just to random people at the top of the escalator at the mall.


I am NOT a SW guy per se - I do ASIC/System level emulation - my way. It is a task that requires HW, different pieces of SW, scripting and work in the lab - but boy'o'boy - did U hit the nail RIGHT on the head. I worked for 4 years in a small but fiery start up and developed all the necessary pieces to emulate our system of 40M gates for about 1/10 of the $$ of commercial solutions - exactly the way U described it - in my head and in the (scarce) off hours.... Anyway, the startup didn't make it for the same lame reasons hundreds don't make it - abysmal (mis)management decisions that no technology can defeat :( Now for a year I live in the quiet HELL of a bigger (and BIGGER) company where I HAVE to push the 2 buttons that I am assigned to and constantly reminded to shut up and watch the buttons I am assigned to push.... You touched a RAW nerve! vess


Awesome essay, Paul. You really nailed this whole phenomenon. It's the first time I've read all these things in one place and it is so very true in my experiences.

At my old company, one comment you'd always hear from team members with a new idea was "Hey, I thought of this in the shower this morning, and I think we should ..."


Management has decided the sprinkler system in the dev room will be activated and remain on until further notice.


It has been decided that no thinking will be permitted at home in the shower. Management is concerned that this practice may result in the possibility of an intellectual property dispute.


The most interesting article I've read from PG in the past year.

Holding as much of the program in your head as possible gives you the power to identify what will give you the best optimization. This may also be the short explanation of why good software is written by teams of at most 2-3 people!


Paul,

There is also another issue that seems to be overlooked: people create their own disruptions. When I wrote about Gloria Mark's finding on working spheres ( http://nuit-blanche.blogspot.com/2006/07/designing-collabora... ) I was surprised by that finding, as it would seem counterproductive to the need to load up the "code" into the brain's RAM. Another explanation is that what she described is a typical cubicle/large-organization workflow that would be counterproductive to the hacker's brain (replace hacker with researcher and you have the same symptoms).

Igor.


When I was doing a 1 year web consulting project in Toronto around 2000, I did all my work from home and went into the office only for meetings. I'd get up in the morning, jog to the gym, pump some iron, jog back home, make a nutritious meal, shower, grab a quick nap if I felt tired, and then I was ready to hammer out some serious code (ASP/SQL Server 2000, Javascript, VB6 (eeew!)). I was so effective at churning out well-organized code because I already saw solutions to problems - perhaps while jogging to/from the gym, or maybe just doing a set of this or that exercise and THINKING subconsciously on the way to the next machine or the water fountain. My roommate was at work (office job) so the apartment was QUIET and I never got distracted. It was never ever a problem for me to get into the ZONE. Heck, I also remember times when I'd get up at 2AM because ideas/solutions were just running through my head and I felt re-energized, so I'd code for about 2 or 3 hours and go back to sleep feeling that I ACCOMPLISHED something. Those were the good ole days. LOL!! My point? A strong body provides fuel to the mind so that it can solve problems effectively. Doing something outside of programming that provides solace will allow your unconscious mind to find solutions. Thirdly, SILENCE is GOLDEN. Try to avoid distractions at all costs.


PG -

Holding the program in your head "scales" - at least on well-constituted teams. I was a chief/architect (based on Fred Brooks' chief programmer team concept, long time ago) and was able to utilize some "ordinary" programmers and a couple gifted programmers by doing the highest-level abstracting of the problem so they could each hold a sub-problem in their heads. Made the bosses happy, since there was less reliance on any one person, and I was largely replaceable by one of the more gifted team members.

(BTW - greetings from another Gateway-survivor)


One missing point: preparation

I've been trying to get into a small program I need to write and am a bit stuck. After glancing at the (great) list in the article and realizing I can roughly control most of those elements, I think, 'now why am I still stuck?' Oh, I need to get this data structure from X, figure out how to call that sub-element from Y. Grunt work, yet details that easily fit into the gaps between all the other various day-to-day distractions. So get all the junk together before that 12-36 hour marathon.


I consider program brevity key to holding the problem in one's head, and the key to running a successful IT operation, particularly a small IT business. However, it seems to me that the mainstream has been going in precisely the opposite direction.

In my opinion, there have been only two significant developments in mainstream languages in the last 50 years - structured programming (if..then..else) and object orientation. The operative words are MAINSTREAM and LANGUAGES, as there have been some very innovative and terse non-mainstream languages.

In the beginning, there was FORTRAN. You forced your problem to fit in a series of matrices and got the job done, usually quite efficiently. Add structured concepts to Fortran, and sooner or later you get Pascal, PL/I, and C. C introduced pointers - for better or worse, programmers finally had time to get really creative in solving problems by inventing all sorts of data structures - some legitimate, some a distraction from the problem. Bigger code. Add to this rampant object orientation, one method per class, and the ability to really go overboard with form vs. content. Form is winning. Bigger programs.

Although not part of a language, these days with Java, .Net, and so on, it is not possible to get by without a heavy-duty IDE - Eclipse or Visual Stupido. These tools help you navigate a problem which you can no longer possibly load into your head; hopefully you can get a little part of it.

To me, the place where there has been the most progress, and outside of programming language design proper, is in the field of memory management and garbage collection.

Maybe everything I know is wrong. But in the last 20 or so years, I have never seen a programming language which stressed clarity through conciseness. It's all starting to be more like COBOL. Is this what the world really likes?


I had a similar line of thought in a blog post a while back at http://compoundedthought.blogspot.com/2007/02/people-factor-...

The idea here is that organizations are inherently anti-individual. My essay takes the idea that organizations are anti-individual because of their history. I think there is some merit in both points of view.


I agree, with two objections:

I disagree with rewriting one's program. If I rewrite a program to solve the same problem, then I'll end up having exactly the same code. This is useless. If I am solving a different problem, then rewriting is usually in order.

In regards to writing (re)readable code, I feel that the word "readable" leaves the door open to a lot of abuse, and doesn't shut the door firmly on those who encourage literate code, Joycean code, commented code, etc.! "Readable" has connotations that one can read it at leisure, like a novel. It allows for some excess, some baggage. If there were an adjective which meant "the uncompromising naked terseness of mathematics, complete absence of comments except in pathological cases (to be avoided!), and short identifiers, especially in inner scopes," then I'd use that adjective.

I usually think in terms of Occam's Razor: what minimal program description reproduces the subjective effect that a user desires? I feel that any other approach would be based on ideology. "X-Acto blade like code," perhaps, rather than "readable code?"
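A toy contrast (in Python; both functions and their names are invented here) between code written "at leisure" and the naked terseness the comment argues for. Both compute the same dot product.

```python
# The same dot product, written twice: once "at leisure," once
# with short identifiers and no baggage.

def dot_product_of_two_vectors(first_vector, second_vector):
    """Leisurely version: long names, explicit loop, commentary."""
    accumulated_total = 0
    for index in range(len(first_vector)):
        # multiply corresponding components and accumulate
        accumulated_total += first_vector[index] * second_vector[index]
    return accumulated_total

def dot(xs, ys):
    # terse version: the whole idea fits on one line
    return sum(x * y for x, y in zip(xs, ys))
```

Whether the second is less "readable" than the first, or more, is exactly the point in dispute.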


I agree. As a hobbyist (semi-professional) I have been programming in assembler for 30 years, and in the past few years in C for embedded PIC microcontrollers. A difficulty arose when I decided to produce a book incorporating several of my projects, including the source code. Of course, I never did comment each line of code as I was taught by my tutors - no time for that if I am carrying a program in my head and eager to translate the algorithms in my brain into source code. When it came to writing comments for the source code in my book, I was completely stumped - it meant I had to reverse-engineer my own code that was created several months or years ago! No way was I prepared to do this - the contents of my brain at the conception of my software could never be replicated.


I suspected that my former employer was not unique in its organizational misfits, and reading your essay confirmed that it was indeed part of a larger picture. Of course. An organization cannot really tolerate individuality as a basis for its development. Everything must be coordinated -> politics -> bad decisions. After working in a large organization for 8 years I quit, but not before I sent a letter to the man at the top. I concluded that my problems were due to his (bad) decisions:

1. The leader of 130 IT workers was not educated in IT. None of the top leaders were into IT. Not even the lead architect!

2. The economists think of everything as a "factory". Therefore, IT is produced the same way. I call this the "factory-view". If you add more money and more developers, you will get this done faster. If you need something, go buy it. It's always better to outsource.

There were other things as well, but these factors are probably more widespread than I like to think. Finally I left for a small firm. What a relief!!


Wow... this is a great description of what's happening inside a good programmer. It may even be good enough to help non-programmers understand what's happening and therefore what to do about it. This should be on the short-list of reading for managers and others who are responsible for the software development working environment.

On a separate tangent, Paul referred to some mental techniques (e.g. 'black box', 'solve a subset') and I can think of others (e.g. hold this portion 'constant' and change a different portion).

What if we had a catalog of those mental techniques - would that be interesting? Useful? Who would use it?

Thirdly - my experience of loading the whole program in my head extends beyond programming - when I've participated in the business side, along with programming, then the scope of what I consider becomes larger (and involves some new elements) but the underlying approach and way of thinking about problems/solutions appears to be the same. Have other folks had a similar experience?


I would add a ninth point. "Leave distinctive footprints." I have written good code, that worked completely as desired, that I knew inside and out--that I then revisited a year later and couldn't make heads or tails of it. I learned. Now when I do something new, something elegant, or something complex, I document it immediately well enough to re-grasp it in short order--and document it idiosyncratically. When I come back to the code--or more frequently, come to another conundrum that will need a similar or transmogrified approach--that idiosyncratic documentation is there, and I can search for it, and grasp the code ideas it embodied. It helps immensely with those disruptions that invariably occur. I still vividly recall when I first grasped the immense power of Select Case True because of the kitschy commenting I wrote for it--and I called up that commenting and that code for quite some time!


Point #3 ('Use succinct languages') deserves special mention. I've noticed time and again that writing small helper functions (especially new predicates) at my Scheme REPL leads to being able to express the problem at hand with ever more clarity. Also, a REPL with the ability to dynamically reload code is an invaluable weapon...
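In Python terms (Scheme at a REPL would read much the same way), the effect this comment describes might look like the sketch below. The function names are hypothetical, made up purely for illustration: each small predicate is the kind of helper you grow incrementally at a REPL, and the final function reads almost like the problem statement.

```python
# Small helper predicates, built one at a time, so the final
# expression states the problem rather than its mechanics.

def is_vowel(ch):
    """Predicate: is this single character a vowel?"""
    return ch.lower() in "aeiou"

def starts_with_vowel(word):
    """Predicate composed from another predicate."""
    return bool(word) and is_vowel(word[0])

def words_needing_an(words):
    """The problem at hand, expressed in terms of the helpers."""
    return [w for w in words if starts_with_vowel(w)]

print(words_needing_an(["apple", "pear", "orange"]))  # ['apple', 'orange']
```

Each helper is trivially testable in isolation at the REPL, which is what makes the dynamic-reload workflow the comment mentions so effective.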


One aspect of this article that hasn't been mentioned is that this also points out one reason that big companies have a hard time keeping good people unless they have some way to let people do this kind of work.

Companies that allow interruptions or distractions that don't let employees get into "flow" or "the zone" and create/execute at their capacity are going to find that it's difficult to retain truly great employees because most great creators/makers are only happy when doing their best work - anything less is demeaning, insulting, boring and maddening.

The points made by other comments that some companies fear the gifted individual because that individual is not a cog that can be exchanged for others is correct - I've seen this fear before. So what's the answer? Have some of both kinds of coders? Accept a loss of control and the associated risks? Have an Advanced Technology Group (ATG, like Apple once did), let magic happen there, and then toss it to the production group to put it into shippable products? How do you keep the envy between those in the group and the rest of the engineers under control? One thought was to have rotating positions in the group: you did some time in production and then you rotated into the ATG. I like the sound of the Google 20%, but don't know how well it would work for me - I want a couple of weeks per idea to really get it to the prototype/proof-of-concept stage. Then I can set it aside.

At another company, I had the software engineers in a separate building across town from the sales, marketing, and support groups. And everyone who was not a programmer was directed to contact me and not to contact programmers directly unless the programmer had requested it. Not ideal in all ways, but it did help with many of these issues.

anyway, truly great post. I sent it to the VP of engineering at my company because it explains my frustration with working there so well....

This topic is worth a book on how to run a software engineering company. Take the insights from this blog, and then figure out a way to create a medium to large company work environment that supports what software engineers need to be truly productive and innovative.


Having a context for the problem or solution helps to solve this problem. I have used MindMaps for abstract things so that the context that you are working in can be quickly loaded back into your brain. For code, I have used the idea of clubbing together files related to a problem as a quick way to get the context back in your head. Take a look at Eclipse Mylyn at http://www.eclipse.org/mylyn/. It lets you save context, i.e. files, resources, etc., for the bug/feature/enhancement that you are working on. You can also save this context in your defect tracking system as an attachment and share it with your co-workers. Invaluable.


I think you're right, except when it comes to what you say about brevity.

I don't think it's important for a program to have a small text-footprint in order for it to be easily loaded into your brain. Why? Because it's the CONCEPTS behind the application code we're loading into our heads. It's not the text itself. So... if I have a program where I've formulated a class tree, it doesn't matter to the ease with which I can grasp the program whether the classes are formulated in Python, C#, C++ or Pascal. It's the concept of the class tree which we load into our head. Not the code itself.

So, on that point you're actually wrong. Brevity isn't important.

That said, a lot of your other points actually DO make sense.


I'm pretty sure what you described is what he meant by brevity. It's not about writing less code per se. It's about adding layers of abstraction, which classes are, to shrink the problem space.

Comparing python to assembly, python has basically added in another level of abstraction. It's easier to translate a concept into python because of this. A good hacker can build up these abstractions himself no matter the language, but it's just easier and nicer to use a language that has the type of abstractions you'll be using already built in.

Once you fully trust your classes then they become like DSLs and you build your program out of them rather than the raw bits of whatever language you're using.


"... So, on that point you're actually wrong. Brevity isn't important. ..."

I don't think of it as just brevity. I like to think of it as "high efficiency" in design: fewer, simpler moving parts. The fewer the parts, the less the overhead in maintaining and understanding.

Having concepts of what happens is not the same as having actual understanding of what happens. So does this mean I ditch all my other tools and use say "lisp" exclusively for brevity's sake?

No, I make a pragmatic choice of python (or perl, php, etc). I get similar succinctness but access to tried and tested tools others have created. I don't want to re-invent web servers, databases or RSS, for instance.


This may be an advantage of design patterns: that they allow for a sort of brevity of class structure. You can think "ok, that's just the Composite Pattern there", and store it as one token in your short-term memory. Or at least fewer tokens. If your language let you, for example, use the Composite Pattern as a first class feature without having to code it every time, I'd bet your program would be easier to load into your head than one where you've got your own, new and possibly buggy, implementation of the pattern.
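A minimal sketch of the Composite Pattern this comment refers to (class and variable names hypothetical): a group and a leaf share one interface, so an arbitrarily deep tree collapses to a single token - "that's a Shape" - in your head.

```python
# Composite Pattern: a Group *is* a Shape, so groups nest freely
# and client code never needs to distinguish leaves from trees.
import math

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    """Leaf node."""
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Group(Shape):
    """Composite node: treats any mix of leaves and sub-groups uniformly."""
    def __init__(self, *children):
        self.children = children
    def area(self):
        return sum(c.area() for c in self.children)

# A nested tree handled exactly like a single shape:
drawing = Group(Circle(1), Group(Circle(2), Circle(3)))
print(round(drawing.area(), 2))  # pi * (1 + 4 + 9) = 43.98
```

The point of the comment stands out here: once you trust `Group`, the whole `drawing` tree occupies one slot in short-term memory, not five.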


Does it matter if you formulate your classes in assembly?


Paul, if the parts of a program define its succinctness, then my hypothesis is that succinctness is efficacy, not power. Imagine a useful machine with just one part to understand my chain of thought.

PS: IMO there is a difference in power and efficacy.


Lisa Randall the Harvard physicist read this too?


Very insightful. In my long corporate experience, corporate management refuses to acknowledge that writing software is a creative act of a creative brain. In fact, before my early retirement, they came in with something called methodology, which treated programming as something that you could script so that programmers could be interchangeable. The result, which I predicted, was that both morale and productivity sank to new lows. Just another sign that the beancounters just do not trust creative thinkers. When I programmed, my best work was done at night when there were no interruptions.


Thank you. I have been married to a programmer for almost 20 years - this article was very enlightening. I always wondered what was going on with my "mad scientist" when he retreated deep into his work and I was not to disturb him. After hours of his being sequestered in the basement glued to the computer - I would dare to ask him the question - "Would you like something to eat?" He would stare at me blankly unable to answer at best, and at worst this question would start a war. It all makes sense now - thank you!


This is a great essay, makes me feel more on track than disorganised. Point 2, work in long stretches and the fixed cost of re-visiting a program - I think I'll show this to all those who interrupt me and who don't understand why I don't really seem like I'm listening! This essay also explains part of why so many big companies are producing code that makes a user who can program think "I could do better myself..."


I used to use the evening walks with our dog to solve programming problems. We live in a very, very quiet suburb with large properties; there is only local traffic and nothing else. So it was possible for me to visualise the solution to a problem in the dark in front of me (like an internal blackboard). Many times I came home after about an hour with a happy dog and a way to solve the problem, or at least a great way toward it. I just had to make a short note of it to pick it up next morning, the note releasing all the saved information in my brain. Worked a treat for me.


"... Take on the kind of problems that have to be solved in one big brain ..."

That's one sentence that resonates, and is probably a clearer definition and advantage of what working on a "hard problem" is really about.


Most of this appears to be old coder wisdom and I fully agree with it. However, there is one point I am at odds with. I dislike code ownership and I think a small team of good people should be encouraged to review and rewrite any portion of the system they deem needs it. As Paul suggests, rewriting is core to understanding. Allowing and encouraging team members to rewrite improves both understanding and code. I realize this isn't old coder wisdom but I believe it is critical to making a small team behave more closely to the ideal of a single programmer.

-sdk


This is true for all kinds of work you do with your head - even if, like me, you are a surgeon! I guess I am "programming" my hands to do the work! Your essay gave me a better understanding - thanks


Great essay. I've been working like this on my own for about 7 years and I think the end results speak for themselves. It's a shame that management types can't accept this method of working and just want code monkeys.

Kujoy


Thoughtful essay. You fail to mention the divide and conquer strategy that good mathematicians and programmers use to manage the complexity. Using your "loading" analogy, it would be like carving a problem into pages that exhibit locality of reference.

When mentoring smart rookies, this is the bit some are last to grasp. Partitioning a problem means that there isn't as much to remember. This scales, fractal-like to the design of larger systems.

I find as I get older, this is more and more important and necessary.


One useful technique for design/debugging is to read through the code, and explain it, to somebody else. (Whether or not they're actually there is often surprisingly immaterial)


I'm pretty sure that a lot of programmers, engineers and designers of many things get the message. A lot of them will understand the issue along additional axes too.

The problem is the people who impact problems and haven't got a clue. There seem to be a lot of them and they are dragging mankind backwards, slowing and even reversing progress. These are the turkeys who blame programmers for a majority failure rate of projects.

It's important to deturkify these guys. What can you do to help that happen?


Great essay Paul. Some echoes of your Lisp book and Code Complete!

The only downside to working on your own is that you don't see the bugs that you create through not seeing the problem? Others seem to see what you don't? That solo work needs tempering with some level of review.

I chuckled over the effort spent on 'homers', work done (perhaps even for the company) in a person's own time. Boy does that work get some deep concentration and produce some good results!

Thanks Paul. DaveP


This is a collection of points PG has made before in other essays and comments. If the ideas are new to you, you may want to check out the others: http://www.paulgraham.com/articles.html

> Perhaps the optimal solution is for big companies not even to try to develop ideas in house, but simply to buy them.

What's good for YC is good for the country!


> What's good for YC is good for the country!

It's not that unlikely. Since we had the luxury of doing whatever we wanted by the time we started YC, it stands to reason we'd do something beneficial instead of merely money-making.


It reads like a marketing pitch to acquirers -- "don't build, buy! Look at this one, a 2.0 webby thingamajig, ain't she a beaut? Only nine ninety nine nine nine nine ninety-nine. Hardly any miles!"

Google sure seems to have a lot of success developing apps in-house, and due to integration requirements this makes a lot more sense in many cases. An outside party would simply be unable to provide an optimal solution in those instances. E.g., Kiko.


> Google sure seems to have a lot of success developing apps in-house

Bad choice of examples. Google is by far the biggest buyer of small startups. They just don't publicize it.


Everyone wants to get acquired by Google, and yet it doesn't stop them doing their own stuff in-house. Unless AdSense, GMail, Calendar, News, Froogle, Video, Finance, etc. were bought instead of built.

Not to mention all the work that happened after they bought Keyhole et al.


AdSense was at least partially an acquisition. Video became YouTube. Froogle is largely a failure. Considering how many programmers Google has your argument that they're good at producing stuff in-house is pretty weak. They're great at scaling and running stuff, but so far they don't seem to be very much better at creating new great things than any big company is.


Everything they buy they do a huge amount of work on, and they don't merely scale; they add features. Google Video was already easily superior to YouTube for the player alone (actual random access); they bought YouTube for its audience. YouTube got so big because unlike GV, it was very lax at policing commercial content. Its poor video quality, lack of download facility, lack of fast-forward, and short clip length limits should not be confused with the killer feature of giving other people's stuff away.

Froogle a failure? It's a search engine for product prices.


"... they add features ..."

Reading FOW, Ch12, p164 (Paul Buchheit), it sounds more like trying to fit new functionality into the search engine infrastructure ("we only do web search"). Though to be fair, pb does suggest this has changed.


I didn't claim YouTube's success was due to any technical features like supporting seeking. I agree a large part of why they were so successful was because of the copyrighted content. They took a risk that Google wouldn't. That proves my point exactly. Startups do many things that big companies can't or won't do. Sometimes that takes the form of working in a gray area of the law.


If someone wants to claim "very small companies are more likely to try something legally dubious because they have very little to lose," that's a completely different argument. Not to mention, YouTube and then Google had to pay up in order to square things away and neutralize the risk. Big companies are more comfortable playing fast and loose with the rights of individuals, while individuals and very small companies are more comfortable antagonizing powerful corporations.

Start-ups are NOT the best way to develop software. Start-ups are inherently inefficient. On top of doing the development, you have to take care of all the paperwork and technicalities of running a business. YC makes you move, which is a big interruption (you should only be moving if you NEED that interruption to leave behind all the distraction; some people are already coding like crazy and having to rearrange their lives like that just gets them off-track). You are trying to market yourself. The list goes on and on; I can't find the thread but somewhere PG said he only spent 20% of the time actually coding (my memory may be faulty, it might be 50%).

By contrast, working at Google you don't even have to cook -- they try to take care of all the minutiae. It's really no surprise that whatever they buy, the vast majority of the work is done post-acquisition.

The real purpose a start-up serves is as an advertisement that these people are very dedicated on that project -- when they are bought they are paid EXTRA to work on whatever they already wanted to work on anyway. [1]

THAT is the key difference. Most employees are hired to do some job they don't care about for some crummy wage, so no surprise productivity isn't high. If Google bought up start-ups and then reassigned everyone to other projects, nothing would get done. You don't get dedicated "in general", you are dedicated ONLY to what you're very interested in.

Hiring isn't obsolete; hiring people based on generality to do some unspecified thing for average pay is obsolete. It's simply a quirk that there are currently two models -- the useless traditional one and the "find people at another company who are already doing something and then pay them MORE to keep doing what they want to do ANYWAY" one.

It has always been obvious that letting people choose what they want to do and paying them more money for it yields better results than telling people to do something they probably have little interest in and paying them less money to do it.

Start-ups are a bad model for actual development because of the overhead. They are a good model for picking a certain type of people. I think PG just has this reversed.

[1] As a caveat, not everyone is dedicated to their technical ideas; they are dedicated to making a lot of money. Once they make the money, they stop working on the idea or anything like it. This seems to be what happened with ViaWeb, and would explain a couple of things: One, why PG is so insistent on "flexibility" and steering a lot of people away from their initial idea (because that's what he did -- except he wasn't passionate about the art gallery thing in the first place, so it's a lot easier to switch gears). Two, having steered applicants away from their initial idea, if they were dedicated to that, you've now turned them into people who are dedicated to the new thing for money only, and this could explain why YC's picks haven't been more successful. Your spirits also flag more when you're working on something you don't really believe in, which is why having a co-founder to lean on is more important. Plus there's the overhead -- how many of these start-up groups have a "business guy" as one of the co-founders? It's an inefficient ratio; imagine if every time Google hired a programmer they also had to hire a "business guy" for him.

I think it's pretty clear YC is much more compelling to PG than Yahoo! Store ever was. He could make a lot more money at it, too, if he doesn't steer the dedicated people away from what they are dedicated to.


I'm not claiming startups are efficient. I'm just claiming they're better at producing new great stuff and that big companies suck at it. You don't seem to really disagree about that. Google and Microsoft have many thousands of above-average programmers with all the resources in the world, and they're producing very little new great stuff, as defined by the market. That's why they buy startups as seeds of success, which their companies are capable of growing into big money-making trees.

Microsoft has historically always been this way, even in the very early days. Almost every single one of their significantly successful projects was bought as a seed from a startup and grown from within. They do add a lot of hard work, but the irreplaceable ingredients are done by startups.

Google and Microsoft are in at least tacit agreement that startups are the best way anyone has found to develop great new software.


I didn't mention Microsoft -- I would not hold them up as a shining example of software development.

>startups are the best way anyone has found to develop great new software

YC's program is 10 weeks. The time ALONE means they aren't developing great new software that other people can't make. Back to your Microsoft example, they were notorious for having people demo for them, then stealing the idea and developing it in-house. (Remember how start-ups were all scared of going into a market segment Microsoft might want?) A start-up isn't even a good way of getting an audience, which is what Reddit, YouTube, etc. were bought for, because chances are your start-up will fail. The reason start-ups collectively work is there are so many of them, a big company can just acquire the winners.

Focusing on only the hits and ignoring the misses is a hallmark of pseudoscience, so if we're going to trumpet the winners let's compare that to the vastly larger deadpool. Start-ups are 10x more effective in producing stuff that goes nowhere. Big companies are automatically better at producing stuff people use than 90% of start-ups for that reason alone.

> I'm just claiming they're better at producing new great stuff and that big companies suck at it.

I think this is a case of "the overwhelming majority of great developers do not work at YOUR company". Numbers alone mean that more great ideas will come from outside your company than from within. More new stuff will always come from everyone not at company X than at company X.

Working at Google doesn't seem to stop people, hired or acquired, from coming up with good stuff, so once again we're back to start-ups being an advertisement of who's dedicated rather than a superior development methodology. I mean there are only so many ways you can just sit there and code, there's nothing magical about doing it in a cramped apartment. And it's demonstrably worse to do it on ramen than on free gourmet health food.

The reason Joe Kraus and others romanticize it is because they were young, it was new and exciting, and they were doing it with a bunch of their friends. That's a good recipe for getting something done, but the code's going to be a mess if you're inexperienced, and worse if you're rushing to a demo. Out of large numbers of people getting something done, some of those will be a lot better than others. It's a numbers game. VCs fund, what, way less than 1%?


> ...chances are your start-up will fail.

You seem to believe a startup's success is a game of pure chance. Of course there's an element of luck in any complicated process, but the startup game is survival of the fittest. The number of failed runners doesn't detract from the results of the winners.

> ...hallmark of pseudoscience, so if we're going to trumpet the winners let's compare that to the vastly larger deadpool

The statistical odds are only really interesting in an academic context. The fact is that the startup process is able to do what big companies are simply incapable of doing. Even with virtually unlimited selection of the top people in the world and more money than some countries. This is the crux of my point.

> ...once again we're back to start-ups being an advertisement of who's dedicated rather than a superior development methodology.

Developing great new stuff is not about creating the most amazing source code. It's about the results of the work. Intense dedication is certainly a huge factor in creating great stuff and it's something most employees of big companies lack and successful startups have. Why doesn't intense determination to develop something great qualify as a methodology? Sounds like one to me.

> ...some of those will be a lot better than others. It's a numbers game.

Can you spot the contradiction?


> The fact is that the startup process is able to do what big companies are simply incapable of doing.

They aren't incapable of it, they do it too. They just have different priorities. A company's goal is to grow as large as possible. They want to shut out competition.

Both of those reasons explain why they acquire start-ups -- one, to feed the beast. Two, to stop any threat to their gluttonous expansion.

Microsoft's core is still Windows, Word, and Excel. Google's is still the search they started with when they were a start-up (every big company was once a start-up), not any of the start-ups they've since acquired. The core products pay the bills; the rest is filling up the cracks.

The numbers game is of crucial importance because large companies externalize the cost of the selection process. Google will pay a little for dedicated people and a lot for a big audience to sell ads to. Everyone's just still stuck on the old paradigm of either some time-wasting interview process ("what are you interviewing for?" "A job!" "To do what?" "I don't know exactly!") or buying a start-up with a self-selected people/problem combination that already seems to work.

> Why doesn't intense determination to develop something great qualify as a methodology?

I think we're getting a little carried away here with hyperbole. There are few "great" ones, that's not a methodology, although self-deception might qualify. Steve Jobs had an intense determination to develop something great -- the Mac. Hardly anyone falls into that category. Most start-ups now are excited kids who mainly want to make some quick bucks.

And I note the Mac was created BY a gigantic company, by someone who was already rich (and a bunch of engineers on minimum salary -- hey, it's Jobs). It's WHO, not WHERE, and I note design-by-committee-with-endless-meetings has been recognized as a mistake for a long time (Apple III).

And yes, the code does matter.

> Can you spot the contradiction?

No, it's 5 am, I'm up because my neighbor was crashing around, I don't even know what I'm writing. I seem to be talking myself into agreeing with PG.


IIRC a lot of it comes from the ITConversations interviews and wasn't actually in writing.


Excellent points, except it's clear you've never spent much time pair programming in a small team. There, the rules are a little different, as the design is a collaborative process.

Although there's more inertia than working solo, I like team pair programming better. You get third-draft code in first-draft time when pairing, and having to continuously collaborate on the design sharpens your thinking a lot.


I completely agree with this. However, I have disdain for the word "programmer" to describe what I do. I am a "software developer", which describes it much better and indicates much of what you've said here. A "programmer" is a monkey that writes x+2=y; a developer has domain knowledge of the problem he is solving. Let's change the terminology to fit the job.


Blah, I hate the term "software developer", not so much because it's inaccurate but because it's a symptom of the greatest evil ever to face computer science: software engineering. Software engineering's relationship to computer science is like accountancy's to mathematics. Sure, they both use maths, but a thousand accountants aren't likely to come up with any great proofs (unless there's a genius amongst them who does the hard work, which is generally how I think large projects ever have successes).


I think of the relationship between software engineering and computer science more like the relationship between architecture and physics. Software engineering describes the application of the theories of computer science to solve real-world problems.


PG isn't speaking about programmers as in code monkeys, but about programmers as in hackers. Using other terminology to speak about programmers is not necessary, as programming shouldn't be disdained as an activity; you're probably better off going back to using the word "programming" without fear.


I'll grant you that.

I suppose I wrote that response as a knee-jerk reaction to the twinge I get in my left shoulder every time I hear the word "programmer". I program and I code but I also paint and write but painting does not make me an artist and writing does not make me an author. Regardless, I agree with you after taking a breath and composing myself enough to slip back into regularity of terminology fearlessness.


So you think painter and writer are derogatory too?


I love your point - you already wrote about this topic in your article "love.html" - how to do what you love...

For me the circumstances for working on my theory must be similar to the crystals that grow especially well at zero gravity - the best place for me to think is on the train, where I have 6 hours without being disturbed!!!



Great article. I seem to have no problem getting a program into my head; it's getting it out that's the tough part. It's amazing how many times I've had to go somewhere else after work but end up driving the regular route home and get out of the car wondering how it was I got there. Scary.


One of the best articles I've ever read on the DNA of successful technologies, ideas and companies. Anyone involved in a technology company should read it, including friends and family. A possible reason some larger companies are finding it hard to compete.


This thought is only half-formed, but perhaps holding entire programs in one's head is the defining characteristic of a hacker (as opposed to a programmer, developer, software engineer or anything else people call someone who writes code).


In case anyone is interested: I posted a Russian translation of the article here: http://www.developers.org.ua/archives/max/2007/09/04/pg-head...


WOW, what great insight. I have been telling my boss that I work much better at home with fewer distractions. Now you have pointed out why that is true. My boss is really against telecommuting. I am going to show them this article.


I agree, it's an awesome essay and dead on. Now I can finally give my friends a good answer when they ask about my computer habits. Glad to know I'm not the only person who works like this. Great job.


I think some high-class programmers don't really need to rewrite code to understand the system or to make changes in it. High-class programmers usually put in minimal effort to achieve maximum results.


Great points. They make me wonder what PG thinks about unit tests: they seem to be like an exoskeleton, allowing you to keep just a part of the program in your head. Or are they just a distraction?


In my case I found the latter.

I used to be excited about unit tests because of the promise of being able to make bolder changes to the code with certain confidence. And I feel the practice of making my code more testable improved my design skills.

But I found they are a maintainability issue too. They increased the cost of making the kind of changes that would require me to change the tests, and normally, changes that don't break interfaces don't worry me a lot even if I don't have tests. Also, relying on the tests for a sense of confidence made me lazier about trying to really understand why the code worked. In general, it put my head in the mode of seeing the trees rather than the forest. YMMV.

Nowadays I write the code as if I were going to unit test it, and then I don't write the tests. When I do, I don't make much of a point of keeping them around; I use them basically as shortcuts for poking around in the REPL while I debug something. I try to keep things simple enough that not much can go wrong, I try to make sure I really understand how and why the code works, and I try out the most representative use cases I can think of.

That's enough to keep my debugging time quite low. When I do have bugs, I treat them as learning experiences: I try to understand why the bug happened and what I can do so I don't get more of those.
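The workflow described above — a throwaway test used as a shortcut for REPL poking rather than as a kept artifact — might look like the following minimal sketch in Python. Here `normalize_path` is a hypothetical function standing in for whatever code is being debugged; the assertions cover the representative use cases, and the file would be deleted once the behavior is understood.

```python
# A disposable debugging script, not a permanent test suite: exercise the
# suspect code path with representative cases, then throw the file away.

def normalize_path(path):
    """Collapse duplicate slashes and strip any trailing slash.

    Hypothetical function under investigation, used here only to
    illustrate the throwaway-test workflow.
    """
    while "//" in path:
        path = path.replace("//", "/")
    return path.rstrip("/") or "/"

def poke():
    # Representative use cases, asserted directly instead of run
    # through a test framework.
    assert normalize_path("a//b///c/") == "a/b/c"
    assert normalize_path("/") == "/"
    assert normalize_path("///") == "/"
    print("all cases pass")

if __name__ == "__main__":
    poke()
```

The point of keeping it framework-free is that it runs instantly (`python poke.py`) and carries no maintenance obligation once the bug is fixed.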


Joel on Software makes a similar point:

http://joelonsoftware.com/articles/fog0000000022.html


Wow, what an amazing, entertaining, and well-written article! Hit the nail on the head. I will make my boss read this! Ha.


The points mentioned in the article work for small and medium-sized projects. But given the scale of projects that are evolving today, these points need to take a back seat. The programmer as owner of the project no longer exists; there are multiple hierarchies, and work flows from one level to another.


I 100% agree. But there is a dark side of the force in programmers too.

Programming is fun. Bug fixing is not. Selfish programmers, however talented, don't fix bugs.

Filtering out selfish talented programmers is not easy. Yet they probably are the reason why organizations get paranoid.

An extreme solution is to make sure bugs are fixed (somehow) BEFORE new code is created; selfish programmers won't stand the heat very long.

Fortunately, test suites and automated regression testing are becoming popular.


This essay was great; it really gave me insight into a programmer's mind and actions!


Hi... read your essay... I'm a programmer who started off great but am at a critical stage of giving up programming because of working in adverse conditions similar to the ones you mentioned. Your essay, I think, might help me come out of the situation... Thanks anyway...

Rajiv.


Agree with most parts, except for the use of Perforce. Most good programmers I know hate it, because it slows you down (as does Subversion). Git is the SCM/VCS to use for doing feature experiments.


perforce (adverb):

    of necessity; necessarily; by force of circumstance.

http://dictionary.reference.com/browse/perforce


It's always true that if we keep the thing in our mind, we can do anything efficiently... all that's needed is our full concentration and ability to understand...


I completely agree with the points.


Wow. Nothing short of epiphanic.


I agree.


It's like he knows me.



