
Holding a Program in One's Head (2007) - mmphosis
http://paulgraham.com/head.html
======
mtraven
It's actually an indictment of our programming tools that they require one to
hold so much of the design context in one's head. If they were better (more
expressive, easier to interact with) they would help to solve the problems
rather than require superhuman feats of endurance.

pg does mention that succinct programming languages help, which is true, but
they don't go nearly as far as they could. Like him, I use Lisp for that very
reason, but Common Lisp is 30 years old -- why hasn't the state of the art
advanced since then? It's really quite shameful.

~~~
dualogy
Sounds like you have a number of tangible, palpable, feasible ideas for such
super-smarter next-gen programming tools -- care to share?

(Just hoping what you have in mind isn't UML + SOAP + some unintelligible
über-abstracted-meta-code-gen...)

~~~
ehsanu1
I have one: natural language programming.

I have some ideas about how it would work, and have looked a little into how
it might be implemented and have some seemingly feasible ideas there too. But
I'm not an expert in either NLP or programming language development, so what
do I know.

A good first step in this direction is Inform 7, the natural-language
programming language for creating interactive fiction. The problem is that
it's not general purpose. But you can try reading about it if you think
natural language is too ambiguous to ever possibly be a programming language.

I want to write down all my thoughts about this idea, but I thought I'd get
some more meat on it before I do that. Also, I haven't mentioned here why I
think natural language programming can help, but I'll leave that to your
imagination for now.

~~~
sillysaurus
_I have one: natural language programming._

This exists. It's "Write each step of an algorithm in comments before writing
any code, and (only when you're finished writing each step in English in
comments) fill in the code each comment represents".

It's impossible to convince anyone to try that, though, even though it gives
all the benefits you are imagining NLP would give.

The mentality "Don't repeat yourself under any circumstances, even when
repeating yourself adds no complexity and increases clarity!" is unfortunately
why this stays undiscovered. People also tend to go overboard with the
"commenting" part -- the purpose is to be pseudocode in the way Python is
executable pseudocode, while retaining the ambiguity that natural language
needs in order to be natural. So the technique gets a bad reputation, and its
power stays hidden.
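
As an illustration of the comments-first approach (a hypothetical sketch --
the function and its names are mine, not from the comment):

```python
# Step 1: split the text into lowercase words.
# Step 2: count how often each word occurs.
# Step 3: return the n most frequent words, in order.
#
# The three comments above were written first, in plain English,
# before any code existed; the code below just fills them in.

from collections import Counter

def top_words(text, n):
    # Step 1: split the text into lowercase words.
    words = text.lower().split()
    # Step 2: count how often each word occurs.
    counts = Counter(words)
    # Step 3: return the n most frequent words, in order.
    return [word for word, _ in counts.most_common(n)]
```

The English came first; the code reads as a near-literal translation of it.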

~~~
tspike
The closest experience I've had to this is using RSpec and Capybara to do
test-driven development.

Here's an actual snippet of a test I wrote before writing the corresponding
code for an app that's now in production:

    
    
      describe "adding participants" do
        context "when user has created a new event" do
          before :each do
            visit root_path
            click_link 'Get Started'
            fill_in 'Name', :with => 'Tres'
            fill_in 'E-mail', :with => 'tres@sogetthis.com'
            click_button 'Next'
          end

          it "should prompt for more participant info" do
            page.should have_field 'Name'
            page.should have_field 'E-mail'
            page.should have_button 'Add Participant'
          end

          it "shouldn't let you enter an invalid e-mail" do
            fill_in 'Name', :with => 'Tres'
            fill_in 'E-mail', :with => '#)(*)($*#)($*'
            click_button 'Add Participant'
            page.should have_content 'invalid'
          end
        end
      end

It's been an absolute joy to work this way. I'd love to see this level of
abstraction make its way into the mainstream in other languages and
environments.

------
kubrickslair
In my experience, this is PG's most helpful/reassuring technical essay.

Recently I was working on some pretty complex algorithms with a gargantuan
cobweb of edge cases. Not only did I have to work for 15+ straight hours, but
in order to maintain productivity I fasted (bar caffeine, a few nuts & water)
every alternate day to keep the steam going.

Incredible times, and rather close to reaping the fruits now, but there is a
pretty good chance I would not have been here had I not read and internalized
this essay. I was grappling with several decades of CS research, though in a
much less theoretical setting.

~~~
andrewcooke
i'm not bashing you - i've been there myself. but that same shared experience
requires me to ask: dude, how are you going to maintain that?

you need to find a simpler way... if that's how you feel now, a year down the
line you're going to hate that code.

~~~
DenisM
Long stretches of uninterrupted thought are required to correctly partition
the problem, which is hard because the number of possible partitions grows
roughly exponentially with the number of moving parts. Once the problem is
optimally partitioned, understanding the solution is a linear effort. Hence,
the effort required to understand the problem is not indicative of the effort
required to understand the solution. Now, for a new person to understand _why_
this partitioning was preferred to all others requires going all the way
back, and I worry that can never actually be replicated, which is why old
software projects often stagnate after the original architects depart.
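
For concreteness (an editorial aside, not DenisM's): the number of ways to
partition a set of n parts is the Bell number B(n), which in fact grows even
faster than exponentially. A small sketch using the Bell triangle:

```python
def bell(n):
    # Bell triangle: each row starts with the last entry of the
    # previous row; each subsequent entry is the previous entry in
    # this row plus the entry above it in the previous row.
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]
        for above in row:
            new_row.append(new_row[-1] + above)
        row = new_row
    return row[0]  # B(n): partitions of an n-element set
```

Five moving parts already admit bell(5) == 52 partitions; fifteen admit over
a billion, which is why the partitioning step is where the uninterrupted
thought goes.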

------
Spearchucker
Warning: controversy.

The text misses the point in that having to hold a program in one's mind is
not required to begin with.

The real problem is complexity. There's just so much going on that it's
difficult to remember. And complexity is solved through abstraction -
separating the problem out into manageable components. That way the programmer
works at a system level, at which components are composed into a solution, or
at the component level, where one specific aspect is dealt with.

That minimizes the amount of stuff the programmer needs to hold in his head.
If you find yourself oscillating between higher and lower levels of
abstraction you should probably be revisiting your architecture.

It's called separation of concerns.

No tool improvement is going to solve that. Neither will a rethink of ways of
working.

~~~
chubot
That's begging the question. To come up with the abstractions in the first
place requires understanding the whole problem. There's no getting around
this.

I can't tell you how many programs I've seen where the wrong abstraction is
never perturbed, and as a result all feature development pays a huge cost in
reliability and functionality.

To create or change an abstraction, you have to get your mind around all sides
of it. That requires A LOT of context. You should always be "revisiting your
architecture". It's basically never true that the first architecture works,
unless you've written an extremely similar program before. The worst programs
result when people are afraid to shift boundaries. That happens a lot in big
company development that PG talks about -- each team just accumulates crud on
their side of the abstraction boundary, but they never talk to each other so
they can globally simplify.

It's true that abstraction reduces complexity, and thus is the only way you
can build big programs, but it takes a lot of hard work to get there.

~~~
Spearchucker
The "whole problem" is simply a process. Processes in the real world are
composed of 1-* sub-processes, and ultimately distill down into
transformations (input -> transformation -> output).

You don't need to understand every low-level process to make a start.

You raise two interesting points though. Yes, revisiting an architecture is
required when you've not solved a similar problem before. Consider though that
in this community 99.9% of the time others have solved your problem before.
You could reinvent the wheel (which is arrogant), or you could do some
research to see which architecture has been successful for others.

Sure, that means doing some work before writing code which seems decidedly
unpopular these days. It's a free world.

You also say that it's a lot of hard work. Yes. Nothing worth doing is easy.

[Edit] There's a really good book (there always is) that describes complexity
and problem solving better than any other I've found.
<http://www.amazon.com/dp/0787967653>

~~~
ambrop7
> Consider though that in this community 99.9% of the time others have solved
> your problem before. You could reinvent the wheel (which is arrogant), or
> you could do some research to see which architecture has been successful for
> others.

The best way to learn something is to come up with it yourself. If you base
your architecture on what you read in books or elsewhere, you probably only
have a limited understanding of it. And that's especially bad when you're
talking about the _architecture_ of a program.

So maybe a better idea in terms of the end result is to first come up with
your own solution, implement at least a functioning prototype, and _then_ do
some research, to improve upon your idea based on what you have discovered (or
throw it away, in the worst case, but of course keep what you've learned doing
it).

~~~
Spearchucker
_The best way to learn something is to come up with it yourself._

That's been bugging me, and it's been bugging me because it's not entirely
right. It's not entirely wrong either, though.

Given an undirected graph, in which you need to calculate the shortest path
between two vertices, would you slog it out or would you just find a shortest
path algorithm?

Now move that up a level, from feature implementation to feature design -

If you had to federate identities between two directories, would you hack
something together, or would you have a look to see if someone else has done
that before? Assuming the latter, you'd probably discover OpenId, OAuth, WS-
Federation, and SAML. Would you then use one of these or try to roll something
that models their behaviour?

Take that up all the way until you reach the age-old build vs. buy debate,
the connected enterprise, and so on. Building something just to understand it
doesn't make much sense to me.

~~~
ambrop7
> Given an undirected graph, in which you need to calculate the shortest path
> between two vertices, would you slog it out or would you just find a
> shortest path algorithm?

Probably just find an algorithm. This is because my goal was _not_ to learn
about shortest path algorithms, rather it was to find the shortest path.
Whichever algorithm I chose and however I implemented it is of minimal concern
- if it turns out to be bad, I can probably redo it without significantly
affecting any other part of the program. I claim that, if, on the other hand,
you wanted to _learn_ about shortest path algorithms, it would be better to
first at least try to think of a solution or two, then go for the books.
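
The "just find an algorithm" route lands on something like Dijkstra's
algorithm. A minimal sketch (mine, for illustration; the graph encoding is an
assumption):

```python
import heapq

def shortest_path_length(graph, start, goal):
    # graph: {node: [(neighbor, weight), ...]}; for an undirected
    # graph, list each edge in both directions. Weights must be
    # non-negative for Dijkstra's algorithm to be correct.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return None  # goal unreachable from start
```

The point stands either way: copying this from a book gets you the shortest
path; deriving it yourself is what teaches you shortest-path algorithms.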

Speaking of software architecture, given the profound impact a solution will
have and how disastrous bad choices can get, it _should_ be your goal to learn
about whatever you're dealing with.

------
stcredzero
Holding a large program in your head is like holding a city in your head. You
can't entirely do it. What you can do is know the broad outlines and
principles, so you can navigate to the relevant points and negotiate difficult
parts. When you see the details at the relevant places, then you can deal with
them. It's like being a cab driver. You don't memorize every cobblestone and
pothole, but you know how to get where you need to go, and you can deal with
them as you're driving.

~~~
JoeAltmaier
You can visit the neighborhoods often enough that they are quickly familiar
when you happen upon them again. That's called a domain expert.

In fact, London cab drivers DO hold the city in their head. They are required
to, before they get their license.

~~~
stcredzero
Yes, I was thinking about what I read about London cabbies. However, they
don't necessarily have recall of every pothole and cobblestone. They remember
enough of a map so they can get to where they need to be, and deal with what's
there on the way.

------
jrogers65
For years now I have been arguing with other programmers about the merits of
IDE features such as intellisense. My position has always been that they are
completely unnecessary. Autocomplete is useful, sure, since it saves some
typing, but that is its primary goal - to save keystrokes, not to help you
remember what a class is capable of doing.

Many programmers appear to get caught up in the small picture way of thinking,
where all that they consider important is the code they are currently working
on. The fact of the matter is that a piece of software is an ecosystem. Every
part of it is directly or indirectly tied to every other part. It is only when
we consider the system as a whole that we can create an elegant architecture.
This is simply not possible if things are always seen as units and their
relationships are an afterthought.

Thank you for this link. I shall be using it to add fuel to any future
arguments along these lines.

EDIT:

Clearly, it takes a lot of effort and skill to pull this off. The bottleneck
becomes the human. The logical question that follows is 'how do we make the
human more efficient and capable of remembering more?'. The answer is diet,
nootropics, meditation, exercise and knowledge of techniques (e.g. how to
memorise facts rapidly). I won't go into specifics but suffice it to say that
most people are undernourished and are mentally impaired because of it.

~~~
charlieflowers
>>>> "The answer is diet, nootropics, meditation, exercise and knowledge of
techniques (e.g. how to memorise facts rapidly). I won't go into specifics but
suffice it to say that most people are undernourished and are mentally
impaired because of it."

Dude, if you have some concrete insights in these areas, you _definitely_
should _go into specifics_. A lot of us would be very interested.

~~~
jrogers65
My apologies, I thought that it was outside of the context of the
conversation.

A surprisingly high percentage of people have nutritional deficiencies. For
example, an alarmingly high percentage of people in the US are deficient in
Magnesium, an essential nutrient (meaning that your body cannot manufacture
it).

[http://en.wikipedia.org/wiki/Magnesium_deficiency_%28medicin...](http://en.wikipedia.org/wiki/Magnesium_deficiency_%28medicine%29)

> 57% of the US population does not meet the US RDA for levels of magnesium

And we're talking about a first-world country here! Magnesium is essential for
a healthy stress response, and, as we probably know from experience, many
people struggle with the stresses of day to day life. Anybody who regularly
consumes alcohol, tobacco or caffeine is likely to be deficient as those drugs
rapidly deplete Mg reserves in the body.

That is just one example. Another is Calcium, which 75% of people are
deficient in ([http://www.livestrong.com/article/365193-heart-disease-
cause...](http://www.livestrong.com/article/365193-heart-disease-caused-by-
calcium-deficiency/)).

The body also becomes less efficient as time goes on. Have you ever wondered
why old people tend to be more grumpy than everybody else? Because our
serotonin (a neurotransmitter implicated in mood and irritability) levels
fall as we age - <http://www.pslgroup.com/dg/4098E.htm>

One way to offset the brain's natural decline of neurotransmitters is to
supplement with the precursor amino acids which are used to manufacture those
neurotransmitters. For example, L-Tryptophan, an essential (again, meaning
that you cannot make it) amino acid is rapidly absorbed by the body and is
used to make more serotonin (among other things). The difference between
protein and straight amino acids is that the latter are ready for use, whereas
proteins, which are composed of amino acids, need to be broken down to their
constituent parts before the body can use them.

Another age-related neurotransmitter decline - dopamine. A lot of older people
have had some success with using D,L-Phenylalanine and L-Tyrosine to boost
dopamine levels and reclaim their sex drive.

Healthy acetylcholine levels are also essential for cognition and can be
boosted with things containing Choline (eggs are an excellent source).

GABA levels are very important for short term memory and focus. For example,
anyone who drinks coffee will have experienced that overexcited state where
you have a mountain of motivation but lack proper focus. Taking L-Theanine
(Green Tea also contains this), an amino acid which increases GABA levels,
will restore the focus. I never drink coffee without L-Theanine for this
reason.

Taking L-Glutamine prior to drinking alcohol will prevent a hangover. It will
also stop alcohol and sugar cravings. This is because the body is capable of
using L-Glutamine as a source of energy.

But the most important thing is how all of these neurotransmitters interact
with each other. Some of them are polar opposites, some modulate the
release/inhibition of others. The brain is always striving for balance
(homeostasis). All of these neurotransmitters, nutrients and minerals are
vectors which pull it in one direction or another. Keeping them in balance is
the key to feeling good. Having a model of their interactions makes it
considerably easier to debug problems and fix them. This comes with
experimenting with your body and gaining an intuitive understanding of what's
what. I think that it would be difficult to come up with a generalised
approach due to people's baseline levels of each neurotransmitter being
different.

Note that even popular multivitamin brands sometimes contain too little of a
given nutrient. I recently bought a doctor-recommended one and it barely
contains any magnesium or calcium. It pays to educate yourself and not to
completely defer your health to someone else. It's your body and it's
essentially your problem.

After addressing deficiencies, I found that I felt a lot better and that my
rate of recovery from stress, a night of drinking or strenuous physical
activity had improved considerably.

Another important factor is removing things which cause harm. E.g. alcohol, in
sufficiently high doses (high enough to be drunk), is neurotoxic, meaning that
it directly harms the brain. One of the primary mechanisms of alcoholism is
that the damage caused by alcohol directly contributes to future alcohol
cravings. A lot of people have problems with anxiety - they would be wise to
discontinue caffeine intake altogether as it is a major risk factor in anxiety
disorders.

Meditation has a host of benefits and structural changes have been observed in
the brains of practitioners -
[http://www.scientificamerican.com/podcast/episode.cfm?id=med...](http://www.scientificamerican.com/podcast/episode.cfm?id=mediation-
correlated-with-structura-11-01-22)

Exercise is absolutely essential and, frankly, should be a first-line
treatment for things such as depression.

There are also a few new and interesting nootropic substances such as
Piracetam, Aniracetam and Noopept, the last one being the most effective for
me personally.

All the wise folk say that the body should be treated like a temple. I
completely agree with them since the mind and the body are but one and the
same. Treat the body right and the mind will follow. Treat the mind right and
you will feel right.

~~~
charlieflowers
Interesting. Where have you learned this information? Are there some books /
studies / other sources you can point to? (Not a demand for evidence ... just
that I'm curious to look more into this).

~~~
jrogers65
I've mostly been googling for studies/information which explain things I do
not understand. Most of the studies tend to be on Pubmed
(www.ncbi.nlm.nih.gov). A lot of references can be found on wikipedia, which
is a great resource in itself.

This is paired with playing with neurotransmitter levels using the methods
described (and occasionally some less legal ones) and trying to relate feeling
to thought. After a while, an intuitive understanding of the terrain that is
the body, thought and emotion begins to emerge. Sometimes you feel something
new, notice an interaction, make a prediction that neurotransmitter X has
relationship Y with neurotransmitter Z, google for studies and are surprised
that said relationship has been observed. Intuition is as accurate as the
information it's working with - it can sometimes be trusted and other times
cannot. I'm a programmer and to me this feels identical to debugging in a
messy monolithic legacy codebase. I use the exact same techniques to try to
figure my mind out.

I think that it is important to be picky about sources. I do not trust
anything which cannot be backed up by a study, though less accurate
information can sometimes point you somewhere interesting. I also find it
important to tread away from the mainstream with caution, since I'm not an
expert in this field. In other words stuff like Reiki is out of the question -
it needs to at least be feasible.

------
alcuadrado
This is totally misleading, and may make people feel they are not good at
programming.

To quote one of the masters of CS:

The competent programmer is fully aware of the strictly limited size of his
own skull; therefore he approaches the programming task in full humility, and
among other things he avoids clever tricks like the plague.

    
    
        Dijkstra (1972), The Humble Programmer (EWD340)

------
mpweiher
To me, this goes back to the "single page program": all programs should have a
top level that can be expressed as a single page of executable code and
comprehensibly captures the essence of the program.

If our programming language(s) don't allow for this, see what's wrong and fix
it. Lather, rinse, repeat.
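
A toy sketch of what such a top level might look like (entirely hypothetical;
the phase names are mine): the single page names every phase, and the details
live in the functions below it.

```python
def run_report(lines):
    # The whole program on one "page": each phase is a named
    # function, and this top level reads as its table of contents.
    records = load_records(lines)
    cleaned = drop_blank(records)
    summary = summarize(cleaned)
    return render(summary)

# The phases themselves (deliberately trivial stand-ins):

def load_records(lines):
    return [line.strip() for line in lines]

def drop_blank(records):
    return [r for r in records if r]

def summarize(records):
    return {"count": len(records)}

def render(summary):
    return f"{summary['count']} records"
```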

------
barking
_So if you need to write a big, complex program, the best way to begin may not
be to write a spec for it, but to write a prototype that solves a subset of
the problem._

This is so true for me. And not just for big complex programs but also for
anything not completely trivial.

The one (applicable to me) part of the joel test
(<http://www.joelonsoftware.com/articles/fog0000000043.html>) that I never
made any progress on was having a spec. I just couldn't do it whether through
laziness, inability to concentrate or whatever. Actually it's been a bit of a
guilty secret of mine.

Generally there'll be some part of the program that I'll be able to solve
right away and while I'm doing that I'll be having ideas about how to do
something else. Later it might become obvious that a certain part would have
been better done another way and if there's a serious benefit to changing
things I can do it at that stage.

It's only after a lot of work's already been done that I'd be able to produce
some kind of spec for the program.

Having a spec that lays everything out beforehand is to me analogous to a
mathematician writing the final proof of a theorem before doing all the
thinking.

------
nnq
You should _never need to hold the entire program in your head_! (except when
you actually begin coding it - for this early stage of development I agree
with PG). Needing to hold a whole mature program in your head every time you
work on it is a code smell that tells you your solution ended up in the shape
of the most popular software architecture of all time: the "big ball of mud"
(<http://www.laputan.org/mud/>). Now it's obvious why having a big working
memory makes you a great programmer - everything usually starts as a BBOM or
ends up as one, but the point is to fight this tendency...

Once a program grows, you should architect it so that you only need to keep
the piece you're working on in your head, i.e. it should be a network of
black boxes and you should only need to open the one you're currently working
on. Even when you do things like large scale refactoring, you should be able
to selectively and partially open only some of the boxes to do your job - and
this is what programming languages and patterns should help you do!

~~~
manmal
To avoid "memory overload", I have taken to literally ignoring parts of the
system which I currently don't need. I.e., I don't even look at other classes'
code until I need it. Otherwise, I would have to spend 3 days understanding it
all. That's for very big programs, and of course I do look at the overall
structure and what patterns have been used to couple classes together. But if
a method promises something with a contract, I won't read through it; I'll
treat it as a black box. My theory is that cognitive power is like money spent
during the day. I can recharge it after a few hours by taking a 20-minute
nap, but I had better watch what I spend my cognitive credits on. And spending
them all on reading other people's code (which has probably already been
revised 10 times) is not worth it.

~~~
nnq
...this is what we all try to do, I guess. You said "I don't even look at
other classes' code", but I always find it much easier to not look at a
function's code, or at the code of a method of an immutable or "predicatively
mutable" object, than at classes of highly mutable objects.

...that's why I'm currently investigating functional programming as a way to
make it easier to hold larger parts of programs in your head.

------
sanxiyn
I agree with the power of holding a program in one's head, but I also
consider this a (the?) serious bottleneck in software engineering.

I hope we discover a scalable alternative to holding a program in one's head.
It doesn't need to be as good as holding a program in one's head, it just
needs to approximate it.

(Edit: see also Design Beyond Human Abilities by Richard P. Gabriel.)

~~~
d2vid
I would argue TDD is a workaround which allows you to do meaningful work in a
team context without holding a complete program in one's head.

It gets everyone to put their thoughts about what the program _should_ do in
one place. Every time you run the test suite, you are outsourcing to the test
suite the task of running through your mental model of the program and
thinking "what did I break".
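
In miniature, the "outsourced mental model" is just expectations written down
as assertions (a hypothetical example; the function and the rule it encodes
are mine):

```python
# The team's shared belief -- "slugs are lowercase, hyphen-joined,
# and never empty" -- lives in the tests, so every run of the suite
# re-checks it instead of someone re-deriving it in their head.

def slugify(title):
    words = title.lower().split()
    return "-".join(words) or "untitled"

def test_slug_is_lowercase_and_hyphenated():
    assert slugify("Hello World") == "hello-world"

def test_slug_is_never_empty():
    assert slugify("   ") == "untitled"
```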

The limitation of TDD is that it blocks rapid iteration in what the program
_should_ do. When you load the entire model of the program into your head,
requirements change at the speed of thought. With TDD, they change at the
speed of a lot of reading and typing.

~~~
charlieflowers
>>> "The limitation of TDD is that it blocks rapid iteration in what the
program should do. When you load the entire model of the program into your
head, requirements change at the speed of thought. With TDD, they change at
the speed of a lot of reading and typing."

I agree 1000% with that. You articulated very well what I see as the single
biggest drawback of TDD.

It's as if you're still working with clay, but you put metal around it every 2
seconds, so you lose the benefit of working with clay.

------
mamontazeri
This is very similar to how a lot of writers who do split point-of-view write
their stories. "Load" all the information needed for a particular point of
view, write all of their chapters, unload, and repeat.

------
gfosco
Rings true...

I definitely work best in 4-10 hour focused sessions, and can easily do a
week's worth of work in one go. I can't always make those sessions happen,
though. Coding block? I guess I should write. [1] :)

Around the time this was written, I was working somewhere you had to break all
the rules to get anything accomplished. The un-sanctioned stuff was of much
higher quality and function.

[1] [http://tommy.authpad.com/understanding-and-combatting-
coder-...](http://tommy.authpad.com/understanding-and-combatting-coder-s-
block)

------
Munksgaard
Peter Naur wrote on this subject in '85 and came to much the same
conclusions.

Here is the paper, for anyone interested:
[http://www.google.dk/url?sa=t&source=web&cd=1&ve...](http://www.google.dk/url?sa=t&source=web&cd=1&ved=0CCoQFjAA&url=http%3A%2F%2Fwww.itu.dk%2Fcourses%2FSASU%2FF2010%2Ffiles%2FNaur%2520from%2520Cockburn.pdf&ei=cEjDUJbyFsPj4QS4kYH4Ag&usg=AFQjCNGsZkpSqx_WTO7E-lm_bGSiv-L5Gg&sig2=bdpslbjKCzqdiIVEF262kA)

------
slant
I find it interesting to read this again after a few years, only to realize
that Paul's second point puts him in direct contradiction with those who
would have us take a break every 25 minutes. It's kinda refreshing to hear
this point of view again. I worked 5 solid hours last night, starting that
particular stretch around 7pm, and got around 3-4 normal work days' worth of
work done.

~~~
zipdog
Yesterday a friend explained the physiological reasons behind 25 minute
breaks: basically the muscles (including eye muscles) go into a different
state of relaxed tension, and it's generally not good for them to stay in
that state while the body is sitting and looking at a screen.

There's also the issue of the effort required to concentrate on a task, which
gives diminishing returns as you extend it beyond ~25 minutes without a
break.

However a mental flow state (what you seem to describe) requires little to no
concentration effort and so can yield great returns over the time spent. The
tradeoff is that the body is in a less-than-desirable muscular situation for a
prolonged time - but I'm guessing the damage there is not so significant if 8
hour flow sessions are not a daily habit.

tl;dr: the 25 minute break thing is probably a good general guide, but
shouldn't stop you pulling an 8 hour session if you are absorbed and
productive

------
nnq
> Maybe we could define a new kind of organization that combined the efforts
> of individuals without requiring them to be interchangeable.

...reread the whole thing and this got my imagination "high" ...I feel
there's some deep wisdom in here ...maybe we do things wrong in most of our
organizations by requiring complex components like people to be
interchangeable, instead of accepting the fact that they are not, and that
the "personality" of an organization should be allowed to radically change as
people (or bigger "unique" subsystems) become part of it or leave it... maybe
we could even end up with what Nassim Taleb calls "antifragile"
organizations...

~~~
saraid216
It's not just the "personality", though. It's also the idea that if your
sysadmin gets hit by a bus, your site doesn't become unusable.

~~~
nnq
yep, the compromise is probably to separate _maintenance_ from _creative_
tasks, and accept that maintenance tasks need replaceable components. But
then again, maybe it's ok to accept things like having a certain software
component written by a "lone wolf eccentric genius" in a dialect of lisp he
alone can understand (replace with your fav equivalent phrase) as the price
for having really unique features and performance for that component that no
competitor can match, and make contingencies for the risk of having to throw
away that codebase and rewrite from scratch if he gets hit by a bus... maybe
if the APIs and interfaces are properly designed (or processes, or however
else you may call them in peopleware land), you can accept working with
unique and not easily replaceable components and somehow design systems that
are architected to embrace the "hit by a bus" type of risk

...and if I think further along these line, the organization that can best
afford these types of risks are big software corporations (think Google,
Microsoft) that could afford to scrap entire codebases and projects (if they
ever got over the "mind brakes" that make the managers consider such things
insane), or start things in completely new directions when they bring new
"genius visionaries" ...these types of innovation based on risky and
irreplaceable components/people would be prohibitively expensive or impossible
for start-ups, but may bring us new breakthroughs in things like general
purpose AI or god knows what

...maybe true progress really is the work of unique individuals and our whole
focus on "team work" and "replaceable peopleware" is what suffocates and kills
innovation

------
juliangamble
Rich Hickey made some very similar points in his Strange Loop presentation
"Simple Made Easy" <http://www.infoq.com/presentations/Simple-Made-Easy>

------
skrebbel
I'm not so sure about the "don't touch other people's code, and don't allow
them to touch yours" bit. What if someone gets hit by a truck? Or leaves the
company? Or _shrug_ gets a promotion? I've been pretty comfortable with the
"collective code ownership" approach, within small enough teams, in the last 4
years. You can talk issues over with a colleague and he knows what you're on
about, code reviews become effective, and getting something important changed
does not mean you have to wait for the 'owner' to have time for it. What's
wrong with that?

~~~
almog
Exactly what I thought - I only 'own' the code while it's on a topic branch;
once my peers have reviewed it, they should know it as well as I do.

------
dcalacci
Not sure about working for long uninterrupted sessions (I tend to believe that
short, self-scheduled breaks increase my productivity, as per the pomodoro
technique), but I think the notion of loading a 'context' into your memory is
spot-on. I know that, at least for me, it is difficult to even begin working
on a problem unless I have that initial context in my head; without a full
understanding of the problem at hand, I feel like I can't do anything well.

------
w_t_payne
Perhaps the key to a productive software industry is - paradoxically - to de-
industrialize. Model the organization and communications patterns on a guild
of independent workshops, each workshop comprising one artisan developer with
one or two apprentice developers, with each workshop supplying hyper-
specialized services based around a library of existing, re-deployable
functionality.

------
biggs
Are there any applications (mobile, web or whatever) whose sole purpose is to
help a programmer capture the program that is in his/her head?

The first thing that comes to mind is UML-based applications like Rational
Rose. But those seem to have such high barriers to entry and are more geared
towards communicating the program to other developers or different
stakeholders.

------
hayksaakian
One good test for this is if you can explain in abstract terms, and preferably
to a non programmer, how the program works.

------
8ig8
I just came across this article in a HN post last week that was discussing a
similar theme. I think it does a great job explaining the challenges of being
a programmer...

<http://alexthunder.livejournal.com/309815.html>

~~~
irishcoffee
This is a fantastic summary, and I thank you for sharing.

------
gruseom
I wonder what the most complex system built to date is that can still fit
entirely in one head.

A recent example, though not a program, is Mochizuki's proposed proof of the
ABC conjecture. I wonder what program would be comparable to that in
complexity.

------
njx
I do it. I build mockups in my mind, run through clicks and interactions and
finally the code flows.

------
monsterix
Wow, this is particularly true when you're not rapid prototyping. An aha
moment happens when you're able to reverse-think a complete chain of events
leading up to a catastrophic failure because you saw a tiny UI/UX bug on the
surface.

Paul has written splendid pieces like this that should resurface from time to time.
This thread has happened [1] to YC community before.

[1] <http://news.ycombinator.com/item?id=2988835>

------
Roybatty
_work in long stretches_

Nope, those idiotic open offices aren't conducive to that.

