
Gradual Programming - wcrichton
http://willcrichton.net/notes/gradual-programming/
======
gravypod

        > A basic observation about the human programming process is 
        > that it is incremental. No one writes the entirety of their 
        > program in a single go, clicks compile + publish, and never 
        > looks at the code again. Programming is a long slog of 
        > trial and error, where the length of the trial and the 
        > severity of the error depend heavily on the domain and the 
        > tooling. 
    

I don't know about others but I've done this quite a few times. When I have to
write some code for a personal project of moderate size (10k LOC) I like to
just sleep on it for a while, stay up very late, chug some coffee, and write
the entire program. After that I click compile and run a few times, fix any
variable names that I've undoubtedly misspelled, and then maybe one or two bugs.
It's basically delaying the start of a project until you have a perfect mental
image of what the code needs to do, how it looks, and every tradeoff you're
making. I wish I could do this with larger code bases
but unfortunately, from my own experience, planning time is exponentially
related to the projected lines of code. A 5K to 10k LOC project usually takes
me on the order of 2 weeks of thinking. If you go higher than that you're
talking about months.

If you wanted to change the world of programming I would think of a way to
abstractly describe dataflows in the vein of unix pipes. Define every part of
the transforms you need to do on incoming data, where those responsibilities
should be, and then link them together. Current abstraction methods are
subpar: they add inherent complexity, are language-specific, and are not easy
to change after you've started using them (unless you have adapters on every
part of the code base, which in its own right becomes a mess).
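
For what it's worth, here's a minimal sketch of the kind of thing I have in
mind, in plain Python (the "pipeline" helper and the stage names are made up
for illustration, not an existing library):

    from functools import reduce

    def pipeline(*stages):
        """Compose stages left-to-right, unix-pipe style."""
        def run(data):
            return reduce(lambda acc, stage: stage(acc), stages, data)
        return run

    # Each stage is an independent transform on a stream of records.
    parse   = lambda lines: (line.split(",") for line in lines)
    clean   = lambda rows: (row for row in rows if len(row) == 3)
    project = lambda rows: (row[1] for row in rows)

    process = pipeline(parse, clean, project)
    print(list(process(["a,b,c", "bad", "x,y,z"])))  # ['b', 'y']

Each stage stays oblivious to the others, so swapping one out doesn't ripple
through the rest.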

~~~
Groxx
Yeah, I'd call this the difference between "discovering how to program a
thing" and "programming something you already understand".

Discovering as you go is totally fine, and dominates what I write. It's also
very nearly your only option when learning something new. Sitting on millions
of lines of code, integrating with dozens of other systems... you often simply
don't know what you need to do up-front, so you iterate and explore until you
do. Gradually tightening the constraints of the system fits this kind of thing
_perfectly_ and I _love_ when it's easy to achieve. It gets you both a faster
start and a safer finish.

In contrast, entirely agreed - I've written a few low-kLOC projects start-to-
finish with basically no problems. But I knew exactly what I was going to do
before I started actually coding. The (significant) time leading up to that
was spent more thoroughly understanding what I wanted to do and how, and at
some point it all clicked in an "oh. so that's how. OK, might as well build it
now" moment.

~~~
mcphage
> It's also very nearly your only option when learning something new.

Also your only option when working with other developers.

~~~
Groxx
Making it so this _isn't_ true with multiple developers is one of the
benefits of waterfall-style programming. Design an API up-front, divvy up
work, build it. As long as the design is good and you don't do stupid shit,
it's entirely possible to work in parallel without getting in each other's
way.

But yes, it absolutely compounds the issues. Everyone involved has to
thoroughly understand it all / their role in it, and getting to that point
likely takes longer than discovering it as you go :)

------
tathougies
I take _huge_ issue with this statement

> For a multitude of reasons, straight-line, sequential imperative code comes
> more naturally to programmers than functional/declarative code in their
> conceptual program model

On the contrary, I think most people are 'configured' to think declaratively
and functionally. Many programmers on the other hand are somewhat mentally
atypical in that they think quite imperatively. This leads them to pursue
technical hobbies, like programming. However, we do ourselves a huge
disservice by assuming that 'humans' are better at imperative stuff. In my
experience, there's a significant subset of people who find declarative stuff
significantly easier, but who drop programming when they take their first OOP
course and are turned off.

For me at least, I played around a bit with programming when young, but it
never clicked for me until I started exploring the declarative and functional
languages. These just came way easier to me than everything else, and had I
not been exposed to them, I doubt I would have continued in this as a
profession.

~~~
erikpukinskis
You're right that people's first thought tends to be declarative.

But the problem is, programming isn't about just putting down a first order
encoding of your needs. It's about bridging the gap between that encoding and
the libraries actually available to you.

That's where declarative programming becomes unintuitive to people. How do you
bridge the gap between your declaration and the declarations provided by
Rails/Vue/Webpack/etc? You need to somehow imagine the transformation that
will get you there.

Imperative programs, on the other hand, always have a ready axis for
decomposition: time. You break your program up into tiny time slices, and then
work backwards to find procedures that get you to the first one, then the
second one, etc, until you hit the end of your program.

With declarative programs, you need to slice the program in an abstract space.
In my experience it takes 6-12 months to get up to speed with a new
declarative programming landscape like Rails or Ember.
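
A toy contrast, in Python I just made up for illustration (neither version is
from the article): the imperative one reads as a sequence of time slices,
while the declarative one has to be decomposed in that more abstract space.

    # Imperative: decompose along time -- do step 1, then step 2, mutating as you go.
    def total_imperative(prices, tax_rate):
        total = 0.0
        for p in prices:            # time slice 1: accumulate
            total += p
        total *= (1 + tax_rate)     # time slice 2: apply tax
        return total

    # Declarative: describe the result; the "slicing" is conceptual, not temporal.
    def total_declarative(prices, tax_rate):
        return sum(prices) * (1 + tax_rate)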

~~~
ak39
Thanks. I’ve been following this sub-thread with interest. I have very little
theoretical understanding of the differences between imperative and
declarative (functional) paradigms.

I used this video to get a better understanding - I hope it helps others too:

[https://m.youtube.com/watch?v=sqV3pL5x8PI](https://m.youtube.com/watch?v=sqV3pL5x8PI)

------
gcommer
Gradual programming certainly strikes me as the future of programming. I see
the end goal as being able to use a single, unified language for every
application. Certain sections of a large codebase might need their own special
jargon or abstractions; but that shouldn't necessitate a totally separate
language and toolchain.

This fits real life: we can use English for casual gossip, highly specific
technical documentation, and everything in between.

Another important dimension for gradual languages is difficulty: languages
with advanced verification capabilities (e.g., Rust) would be much easier to
pick up and get started with if you could trivially have sections of code
where you didn't even need to know about Rust's memory management features
(it would be way easier to choose Rust as a primary language for a company
without having to worry nearly as much about new-engineer onboarding cost).

Also, since he phrased this as an HCI problem: tooling strikes me as just as
important as language design. A language that was gradual between some sort of
highly constrained drag&drop interface and text-based programming would be
great.

~~~
psyc
It seems to me that the languages we write programs in today are generally at
the wrong level of abstraction in one way or another. Imagine if we wrote
long-lived programs in a sort of lingua franca pseudocode, and then compilers
competed with each other over the years to produce ever more efficient machine
code from that well-established pseudocode. In this fantasy world, for
example, Adobe Photoshop isn't written in C++. It's written in logic. They
keep writing new features in logic-language, and at the same time their
compiler team keeps working to make a better compiler that produces a better
compilation target.

~~~
Izkata
> Imagine if we wrote [..] in a sort of lingua franca pseudocode, and then
> compilers competed with each other over the years to produce ever more
> efficient machine code from that well-established pseudocode. In this
> fantasy world [..]

This seems to, to an extent, describe SQL.

------
tom_mellior
The initial part of the article:

> Gradual programming is the programmer tracking the co-evolution of two
> things: 1) the syntactic representation of the program, as expressed to the
> computer via a programming language, and 2) a conceptual representation of
> the program, inside the mind. In the beginning of the process, the
> programmer starts with no syntax (an empty file) and usually a fuzzy idea of
> how the final program should work. From this point, she takes small steps in
> building components of the program until the final version is complete.

calls to mind things like the B-Method
([https://en.wikipedia.org/wiki/B-Method](https://en.wikipedia.org/wiki/B-Method)).

Here you first specify your program at a very high level in logic; then you
_refine_ the model, still in logic, proving that each refinement is valid;
you iterate more refinement steps, i.e., you gradually make parts more
concrete by actually implementing individual operations; and at the end you
have a complete, executable program. A program that, since you proved all the
refinement relations along the way, is also formally verified against its
spec.
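
The flavor of the idea, very loosely translated into Python (this is not B,
just a hypothetical sketch of the spec-vs-refinement relationship; the
assertion stands in for what B would make you prove):

    # Abstract spec: what the operation must achieve (a predicate, not an algorithm).
    def spec_sorted(xs, ys):
        return sorted(xs) == ys

    # Concrete refinement: an actual implementation of the operation.
    def insertion_sort(xs):
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    # In B you would *prove* the refinement relation; here we can only test it.
    assert all(spec_sorted(xs, insertion_sort(xs))
               for xs in ([], [3, 1, 2], [5, 5, 0]))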

Of course this is too much work for most domains, and many domains don't even
have any useful notion of formal specs. Also, the _actual_ B system is a
horrible thing with a horrible logic and horrible proof tools. But the general
idea is appealing, for some things.

------
jimbokun
Best I can tell, pretty much everything he describes is already part of Common
Lisp. “Gradual Typing”, for example, has a long history in Lisp, and is pretty
much the default way to write programs.

~~~
catnaroek
Not every optionally typed system is gradually typed. The purpose of gradual
typing is to allow typed and untyped code to interact

(0) without destroying the safety guarantees of typed code, and

(1) while correctly assigning blame, if a runtime type error happens, to the
specific untyped part of the program that caused it.

Common Lisp's optional types don't quite fit the bill.
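
Roughly what "assigning blame" means at a typed/untyped boundary, as a Python
sketch with made-up names (not any real gradual-typing runtime):

    # Hypothetical contract wrapper at the typed/untyped boundary. If the untyped
    # caller passes a bad value, blame points at that boundary, not at some line
    # deep inside the typed code.
    def typed_boundary(expected_type, label):
        def wrap(fn):
            def checked(x):
                if not isinstance(x, expected_type):
                    raise TypeError(f"blame {label}: expected "
                                    f"{expected_type.__name__}, got {type(x).__name__}")
                return fn(x)
            return checked
        return wrap

    @typed_boundary(str, label="untyped caller of shout()")
    def shout(s):        # typed code gets to assume s really is a str
        return s.upper() + "!"

    shout("hi")          # fine
    # shout(42)          # TypeError: blame untyped caller of shout(): expected str, got int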

~~~
fiddlerwoaroof

       (check-type foo string)
    

It does: it asserts the type of the place and signals a (restartable) error,
giving the user a chance to fix the problematic value rather than bailing. I
guess that doesn't really solve (1), but the stack trace that arises is a start.

~~~
catnaroek
Stack traces contain a lot of information. Most of it is useless garbage. Yet
at the same time sometimes stack traces don't contain the information you
need. Making them even more useless garbage.

~~~
tehwalrus
Hogwash. Stack traces in multi-threaded applications are much much more useful
than just an exception name, or even just the site of the exception.

~~~
catnaroek
What I need is the proof that the program is correct. Bugs will show up as
gaps or inconsistencies in the proof. But, in the absence of even an attempt
at a proof, all I can do is throw the program to the dogs.

~~~
tehwalrus
I've never yet had the luxury of working in a language that admitted formal
proofs of correctness. Do you mean informal "proofs" like unit tests?

When a bug occurs (perhaps because your tests did not anticipate every edge
case) how do you determine where the problem code is located without a stack
trace?

I know that it doesn't usually point to the line of code that is problematic,
but it usually gives you a starting point for your deductions...

~~~
catnaroek
> I've never yet had the luxury of working in a language that admitted formal
> proofs of correctness.

You don't need much from your language, except for a formal semantics. But,
even if your language of choice doesn't have one, you can stick to a subset of
it that is easy to formalize: no first-class procedures (or things that can
simulate them, such as dynamic method dispatch), no non-local control flow
operators (e.g. raising and handling exceptions), no non-lexical variables,
just statically dispatched procedures and structured control flow constructs
(selection and repetition, i.e., pattern matching and recursion).

> When a bug occurs (perhaps because your tests did not anticipate every edge
> case) how do you determine where the problem code is located without a stack
> trace?

By reasoning about every statement's intended and actual preconditions and
postconditions, i.e., predicates on the program's free variables. At some
point, the intended and actual preconditions of a statement-postcondition pair
won't agree. But this is unreasonably hard if you need to reconstruct these
preconditions and postconditions completely from scratch.
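
Concretely, as a toy Python sketch of the pre/postcondition style (made up for
illustration, with the conditions written as runtime assertions instead of a
proof):

    def integer_sqrt(n):
        assert n >= 0                            # intended precondition
        r = 0
        while (r + 1) * (r + 1) <= n:            # invariant: r * r <= n
            r += 1
        assert r * r <= n < (r + 1) * (r + 1)    # intended postcondition
        return r

    # If a caller's *actual* precondition (what it can guarantee about n) doesn't
    # imply the intended one above, that mismatch is where the bug lives.
    print(integer_sqrt(10))   # 3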

~~~
tehwalrus
I've never had the luxury of that much control over a codebase in the real
world of work.

------
svilen_dobrev
someone said "software is made of decisions". IMO this is _the_ problem. All
those listed axises, and another dozen i could imagine (esp. moving away from
machine and closer to reality), they all require decisions to be taken. Some
of these decisions are predefined - from the language, OS, hardware,
employment, audience, customer, tie-color, whatever. Other are obvious. But a
lot are not even understandable unless one has been-there, done-that.

Most people do not want to make/take many decisions, especially about things
they don't care about (in that moment), so they go for something that has
those predefined. As a +/-, usually that something comes with lots of other
extra decisions predefined too.. and the pain starts. Sooner or later you want
full control of some aspect, and in order to get yourself free on that aspect,
... you have to either do horrible hacks, or re-make everything yourself. And
there is the other extreme: next time, you just start there, with no gravity
defined - whatever that brings (like a very long schedule. Or a very loose
language).

Usually people take magnitude(s) fewer decisions-per-minute than what one
needs in programming. Same for alternativity. Hence the difficulty of
programming..

And yes, the all-or-nothing approach that is all over software has to be
anathema-ed. In 30+ years, I have not seen anything good come of it. I want to
be able to choose-and-mix (which is, decide) by myself. And so do my new
pupils.. and all the intermediate levels. Just the set of aspects, and the
details thereof, is very different for me and for them. Which calls for even
greater graduality-flexibility-alternativity..

Myself, I have ended up doing everything in Python, for it stops/chains me the
least, without getting too unstructured. Usually making little "languages" for
anything that I might need, all on top of each other, "gradually" changing the
semantics; away from what the language predefines, into the target domain. And
generating everything else that cannot go this way.

have fun

------
jon49
I find F#'s type inference to be really nice. You feel like you are almost
writing in a scripting language but whenever you mess something up it will
tell you. And if you say, "No F# you are wrong!" F# always wins. And the
ability to write small pieces of the code in F# interactive is tremendously
useful. I find I like T-SQL for the same reason. I can write small programs,
scripting as I go, and bring everything together into a bigger program, all
without having to worry about types too much. The tooling isn't quite as nice
with T-SQL as it is with F# (granted C# blows F# away with its tooling, but F#
is catching up slowly).

What's nice about T-SQL is that you can do a lot of your programming in it and
then use generators to generate all your C# code (Query-First) and even your
client-side code (I'm working on a console app that will do that for me). But
the tooling for this is not very mature. I don't understand why programmers
don't embrace it wholesale. I feel like I'm always writing so much cruft in my
day-to-day job and screaming at the computer that I could do what takes an
hour in C# in 5 minutes in SQL.

SQL is a bit harder to learn at first. But the more you work with it the
easier it gets. And it is much easier to understand since there isn't much to
it, unless you are doing something really complex. But then, I don't know if
C# is that much clearer.

I just wish the tooling were as nice for PostgreSQL as it is for SQL Server. I
don't understand how you can pay $200 for a product that can't even figure out
what fields my alias refers to.

------
ponderatul
It's amazing it took so long to get here. Obviously, less smart people like
myself figure this out on our own. My brain doesn't have a lot of processing
power, and I always try to shorten the feedback loop between what I write and
the response I get from the programme itself, instead of throwing myself into
layers and layers of abstraction. But to me, and I think many others, this is
the only way. We have to make the language, and the flow of the code, conform
more to human intuition.

~~~
ndh2
Actually I believe abstraction is a tool with the express purpose of limiting
the amount of stuff you have to keep in your head. Abstraction lets you omit
the nasty details, so you can work with a representation of the problem space
that better fits in your brain cache.

------
chubot
FWIW, the implementation of Oil is following a "gradual" trajectory. It's
amazing how long various "wrong" things were in the codebase!

I was already 8-9 months into coding with over 10K lines of code before I
added ASDL schemas for the AST [1].

I believe this was a good thing, because I had running code from the start,
and the only way to understand shell is over many months. It's impossible to
just "think about it" and come up with the correct schema. The schema follows
not just from understanding the language, but from the algorithms you need to
implement on the language. [2]

So I see the value in both -- starting out with a very loose notion of types,
then making it stricter.

I'm taking the next step now and am looking to make that detailed schema
static rather than dynamic [3].

[1] Success with ASDL
[http://www.oilshell.org/blog/2017/01/04.html](http://www.oilshell.org/blog/2017/01/04.html)

[2] From AST to Lossless Syntax Tree
[http://www.oilshell.org/blog/2017/02/11.html](http://www.oilshell.org/blog/2017/02/11.html)

[3] Building Oil with the OPy Bytecode Compiler
[http://www.oilshell.org/blog/2018/03/04.html](http://www.oilshell.org/blog/2018/03/04.html)

(copy of Reddit comment)

------
jondgoodwin
> Several of the axes mentioned (memory management, language specialization)
> lack any documented attempts to systematize their treatment at the language
> level.

This is not entirely true. The author of the Gradual Memory Management paper
you cite is currently building [1] a systems programming language called Cone
[2] that incorporates these mechanisms. Although much work remains to be done,
the online reference documentation does go into some detail about the syntax
and semantics for references, allocators and permissions.

[1] [http://github.com/jondgoodwin/cone](http://github.com/jondgoodwin/cone)
[2] [http://cone.jondgoodwin.com](http://cone.jondgoodwin.com)

~~~
wcrichton
Sorry Jon, I didn't mean to imply you haven't been making progress on your
language. I was thinking more in terms of published research.

~~~
jondgoodwin
I appreciate the clarification, as I did not catch that from the context. I
just assumed maybe you were not aware of my follow-up work with Cone.

I do think there are good research opportunities in these ideas, e.g.:

* Type-theoretic work on formally proving the soundness of the mechanics (as a follow-on to the proof work done with Pony and RustBelt).

* Cross-allocator comparative studies of performance and other factors (although many such studies have been conducted in the past, the fact that you can control for the language/runtime and even test hybrid memory management strategies would make a new study a worthy addition to prior work).

* Assessing the impact of borrowed references and lexical memory management on tracing GC's "generational hypothesis"

In your role at Stanford, perhaps you can inspire students to pursue such
projects. My focus is devoted to real-world results, but I would be happy to
offer support, if that would be of any help.

------
FractalLP
Even though Perl6 isn't production-ready yet, I'm surprised not to see it
mentioned in the group of languages with some gradual typing support, as what
I would subjectively consider idiomatic Perl6 takes advantage of quite a few
such things, including gradual typing.

------
speedplane
I wonder where recursive programming fits into this.

For any newbie programmer, recursive programming is a difficult concept to
grasp. Then, during an intermediate programmer's stage, you realize recursive
programming's power and lean on it because of its theoretical beauty and
frequent simplicity. As you advance more, you realize the computational
disadvantages (exploding stack) and the softer disadvantages (lack of
readability).
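
A toy Python illustration of that tradeoff (made up here, not from the
article): the recursive version is arguably prettier, but it hits the default
recursion limit long before the iterative loop breaks a sweat.

    import sys

    def length_recursive(xs):
        # elegant, but every element costs a stack frame
        return 0 if not xs else 1 + length_recursive(xs[1:])

    def length_iterative(xs):
        # less pretty, constant stack depth
        n = 0
        for _ in xs:
            n += 1
        return n

    big = list(range(100_000))
    print(length_iterative(big))     # 100000
    # length_recursive(big)          # RecursionError: maximum recursion depth exceeded
    print(sys.getrecursionlimit())   # typically 1000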

Would "gradual" programming ban recursive programming?

------
skybrian
> no new big ideas since Prolog

Given that Prolog dates back to 1972, I wonder if spreadsheets count as a big
new idea? How about programmer notebooks like Jupyter?

~~~
FractalLP
Mathematica has had pretty advanced programmer notebooks for something like 20
years, I think. Jupyter definitely isn't new. The only new part would be a lot
of languages (Python, Julia, etc.) starting to use it.

------
jack9
No mention of [https://quorumlanguage.com/](https://quorumlanguage.com/) ?

------
alexashka
I'm surprised to disagree so thoroughly with someone who teaches a programming
languages course at Stanford.

It's a problem of education, and business needs affecting education.

What do schools teach? Python? Java? C? A little lisp maybe? Javascript (oh
god)?

Why? Why are we pretending those languages have anything at all to do with
learning computer SCIENCE? More like job-training for big companies, so that
graduates are 'job-ready' when they graduate.

You don't need studies, because users don't KNOW what's going to make their
lives better, other than incrementally better. Imagine asking people who rode
horses what they'd like to have improved in terms of transportation. Who in
the world would say a car? A self driving car?

Cars require massive infrastructure investments. Programming language research
and development requires serious investment, and serious collaboration. When
people who don't do research decide on the allocation of funds and demand X
number of papers published that rot behind paywalls, etc. etc., why is anyone
wondering why we're still using C for critical world-wide infrastructure?

For someone who expresses this far better than I can, watch a couple of videos
of Alan Kay on youtube.

~~~
wcrichton
I'm not sure why our two viewpoints have to be at odds. I wholeheartedly agree
that improving education and encouraging students to focus more on
understanding the fundamentals instead of the syntax is important. But our
languages are still in need of improvement!

~~~
alexashka
I went off on a rant rather than address your post, I'm sorry.

Let me try and do a better job this time around:

 _I believe, then, that viewing programming languages (PL) through a lens of
human-computer interaction (HCI) is the most critical meta-problem in the
field today. More than ever, we need surveys, interviews, user studies,
sociologists, psychologists, and so on to provide data-driven hypotheses about
the hard parts of programming._

I agree that tooling for programmers is awful - I disagree that it is because
we haven't done studies. My point was that studies will absolutely not help,
for anything but incremental improvements that hardly matter. I also don't
think it's the most critical problem - I'd agree that it is the lowest hanging
fruit, however.

Meaning, I think there can be significant improvements to programmer tooling
within the next 5 years, and I also think the problem again lies in funding,
in programmers not being willing to pay for better tooling (lack of
imagination), and in the lack of imagination of the big companies.

One glaring example of lack of imagination is our current state of using the
internet - html/css/javascript in a web browser is a completely broken
paradigm, and yet no big company has done anything to actually fix it; some
spend millions of dollars each year to keep it going, in fact! Thousands upon
thousands of people have spent years optimizing, writing libraries for, and
using a fundamentally mediocre language. That does not give me much hope,
frankly.

I am being quite negative, so let me propose what I think is the future of
programming languages: a dependently typed language with incredible
documentation and standard library, and the ability to drop down into a
language like Rust for pieces of code that need performance.

The current trend of Swift/Kotlin is imho a syntactic and minor improvement
over Java/C#, and moving in the direction you've mentioned - I think
object-oriented programming, dynamic languages and mutable state by default,
and even having to think about managing memory, are ideas that will only be
used in niche circumstances 30-40 years from now: OO for UI, dynamic languages
for what bash scripts are currently used for, managing memory via Rust for
performance-critical apps.

This is all assuming we wake up and start reading scientific papers from 30-40
years ago and actually using those ideas, rather than cherry-picking ideas to
glue onto a fundamentally broken existing paradigm.

One last thought - these are all thoughts of someone who's frankly quite
uneducated. I'd love to know what people who actually design
languages/compilers/proof assistants for a living think.

~~~
icebraining
_no big company has done anything to actually fix it_

Sure they have. Google wrote Dart and tried to get others on board. Then
asm.js arose, allowing other languages to be compiled and run reasonably
efficiently. Nowadays there's some consensus around Webassembly.

Regarding HTML/CSS, canvas and WebGL both bypass them.

~~~
alexashka
Everything you've mentioned completely misses the point - it is not just
performance that stinks, it is not just javascript as a language that stinks -
all these tools are incremental improvements that completely lack imagination.

It is the platform itself - meant to display static text content, being
coerced into some frankenstein that requires never-ending libraries and
tooling to do what desktop apps did 20 years ago.

It is hard to see any other world, so let me link you to a guy to whom it was
obvious decades ago that web browsers stink.

[https://www.youtube.com/watch?v=En_2T7KH6RA](https://www.youtube.com/watch?v=En_2T7KH6RA)

[https://en.wikipedia.org/wiki/Project_Xanadu](https://en.wikipedia.org/wiki/Project_Xanadu)

~~~
icebraining
I didn't miss the point, but I may have bungled mine.

Let me start by addressing Xanadu; you obviously get the web's folly, so I
don't understand why you'd link to a criticism of the web that has been
outdated for twenty years. Yes, as you point out, the web is a pile of hacks,
but those hacks haven't been bolted on to add two-way links or transclusions,
but - as you point out - to do what desktop apps do. Had Xanadu won over the
web, someone would have created JavaScript and a bunch of hacks on top of that
just the same.

A vision closer to what we've been trying to achieve is Alan Kay's, which said
that browsers should have been essentially a basic layer - not an application
- to run third-party code safely.

And in my opinion, webassembly and canvas+webgl are essentially that - a
simple bytecode VM and two slim layers over core graphic APIs. Sure, you need
some HTML to load them for historical reasons (although surely a "manifest"
file format would rapidly arise to replace it), but they are essentially
standalone and quite basic. All the layers of hacks are in HTML's advanced
features, in CSS and in JS. Once you have applications that don't use those,
you can build a browser which is just a very basic HTML parser tied to a
bytecode VM.

------
Capaverde
> Is imperative programming more intuitive to people than functional
> programming? If so, is it because it matches the way our brains are
> configured, or because it’s simply the most common form of programming?

(I think) It's not because of how brains are configured or how common
imperative code is; it's because of human language.

> How far should we go to match people’s natural processes versus trying to
> change the way people think about programming?

Economically, it is your time (of one person only) versus the time of
potentially thousands of users. What do you think?

> How impactful are comments in understanding a program? Variable names?
> Types? Control flow?

Ideally, I think, comments would not be contained in the source code,
polluting the general vision of its structure, but would be placed outside, as
documentation.

For variable names, as long as a variable's meaning/use is made immediately
clear at first sight, without needing to figure out its usage pattern or see
its declaration to work it out (maybe placing it in a separate place from the
code, again), the name shouldn't matter much (as long as it's not purposefully
unrelated or obscure).

I think it would be useful to have something like "live documentation" in a
REPL (man-like commands documenting each part of the language (and program),
each word an entry, non-sequentially accessible). That is something I would do
if I were to design a programming language: a structure called a "dictionary"
documenting each and every reserved/defined word in the environment.
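
A tiny sketch of the idea in Python (the "doc" dictionary, "define", and "man"
here are made up; Python's built-in help() is the closest existing analogue):

    # Hypothetical "live dictionary": every defined word maps to its documentation,
    # browsable non-sequentially from the REPL.
    doc = {}

    def define(name, entry):
        """Register a word and its documentation in the environment's dictionary."""
        doc[name] = entry

    def man(name):
        """Look up one word, like a man page for the language/program."""
        print(doc.get(name, f"no entry for {name!r}"))

    define("map", "map(f, xs): apply f to every element of xs, lazily.")
    define("fold", "fold(f, init, xs): combine xs into one value, left to right.")

    man("fold")   # prints the entry for 'fold'
    man("zip")    # prints: no entry for 'zip'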

~~~
nopinsight
> Ideally, I think, comments would not be contained in the source code,
> polluting the general vision of its structure, but would be placed outside,
> as documentation.

I believe Donald Knuth, who introduced Literate Programming, would disagree
with this.

"According to Knuth, literate programming provides higher-quality programs,
since it forces programmers to explicitly state the thoughts behind the
program, making poorly thought-out design decisions more obvious. Knuth also
claims that literate programming provides a first-rate documentation system,
which is not an add-on, but is grown naturally in the process of exposition of
one's thoughts during a program's creation."

[https://en.wikipedia.org/wiki/Literate_programming](https://en.wikipedia.org/wiki/Literate_programming)

[https://www-cs-faculty.stanford.edu/~knuth/lp.html](https://www-cs-faculty.stanford.edu/~knuth/lp.html)

I have not used full-fledged Literate Programming, but I find that having part
of the documentation interleaved with code helps improve both, and reduces the
cognitive load of searching for relevant documentation elsewhere.

~~~
mikekchar
I've done it a few times. I'm now firmly of the opinion that ideally the only
language you should see in code is a programming language. If you cannot
express the intent clearly in the programming language, then you should work
at it until you can. It's not always possible and I write comments when I have
to. Some languages are not very expressive, either, which makes it difficult.

The main problems I've had with literate programming are:

- It has a very high risk of being very slow going. If you have to change
something, then you will often find that you need to change a lot of prose.

- Because of the previous point, there is a tendency to do a lot of up-front
design and spike a lot of code to "get it right" before you commit to writing
the prose. There's no particular problem with that except that it places a
_very_ high cost on change. There is considerable incentive to accept a sub-
optimal situation because changing it will be extremely expensive.

- It is quite difficult to write prose in a reference style. Generally
speaking, one is encouraged in literate programming to write your code in
"presentation order". In other words, the order in which you would discuss
things. This is fantastic if someone is sitting down and reading your code
from scratch. It makes it difficult to find the wheat among the chaff when you
are debugging, though.

- Because of that, I find that I'm more often reading the undocumented code
rather than the documented code -- because once I understand what's going on
it's far easier and faster to just read the source code. Jumping around
between the generated code and the presented code is often frustrating.

Having said that, I've found that I really like programming in a literate
style when I'm writing a blog post, or the like. Because I'm thinking more in
prose than in code, it works extremely well.

My personal suspicion is that Knuth had great success using literate
programming for TeX precisely because he was more concerned about describing
algorithms for typesetting than he was about writing a great typesetting
system. I love TeX (and I'm one of the few people who actually has
considerable experience using TeX without LaTeX), but it is a horrible
language that works exceptionally well. In many ways I think that it
demonstrates the issues that I've had with literate programming.

