Programming Bottom-Up (1993) (paulgraham.com)
75 points by tzury on Dec 9, 2023 | hide | past | favorite | 33 comments


If you want to see this principle in action, I’ve been updating Arc for the last five years. If you clone https://github.com/shawwn/sparc, you should be able to run bin/arc news.arc and see a clone of hacker news running on http://localhost:8080. No need to install anything if you’re on Linux or Mac; it downloads a minimal racket locally. Windows users just need to install racket normally.

I use it as a bookmark aggregator, since I can submit things and leave myself notes. The first account you create will become an admin, and any future accounts are regular users. Passwords are bcrypted rather than sha1’d. Windows works too, but for some reason the repl seems to block all other threads from making progress, so Microsoft users will have to run it with DEV=0 to disable the news.arc repl on startup.

It’s not ready to show yet in general, but there are a lot of advances. My design goal is that if pg ever sees it, he’d want to use it himself.

It incorporates some ideas from Bel, too. I’ll be doing a big write up of all the changes.

The biggest advance from a language standpoint is probably keyword arguments. If there’s any interest, I’ll go into more detail.

The biggest advance from a usability standpoint is that arc is now a thin wrapper over racket. There’s no FFI and no need to translate anything. Arc lists are racket lists. Ditto for hash tables and everything else. You can write racket code in arc by prefixing code #'(like this), which the arc compiler turns into a racket expression (like this). So #'(require (rename-in racket/system [system racket-system])) will do what you’d expect it to do in racket. You can mix racket code and arc in two ways: (#'foo bar baz) will compile bar and baz as arc expressions, but will call foo in functional position. So (#'begin0 (+ "x" 'y) 'z) will give "xy", because begin0 is a racket special form that returns its leftmost expression.

The other way is an equivalent of quasiquoting: #`(let ((x #,(obj a: 1))) x) will return a hash table with ‘a set to 1.

The unit tests in test.arc are probably the best way to see all the language features, but it’s hard to read at times. (The writeup will introduce each concept in a bel-like fashion, but it’s not ready yet.)

news.arc is where most of the power is on display. Making a new endpoint is extremely easy. And prompt.arc shows how you can write a dynamic application without using databases or needing to store any state. The state is in the closures.


Speaking of closures, remember how HN used to run out of RAM because each page refresh would create new ones? That doesn't happen here, because no new closure is created if both the lexical environment and the compiled code are identical. The number of fns is displayed at the bottom when you're logged in as an admin, so you can verify nothing new is created when you refresh.

Dan has a good writeup about why HN got rid of them, but unfortunately I can't seem to find it. There are still reasons to avoid them (e.g. if you need a stable url), but I'm happy that performance is no longer one of them.


The late Stan Kelly-Bootle, who wrote a witty humor column in UNIX Review for years, famously said (paraphrasing here): "Everyone's talking about top-down or bottom-up programming, but from what I see, it looks like it's mostly bottom-down."


bottoms-up


For me the key thing is that if you are doing "bottom-up" programming, then those macros/functions/etc. should be abstract and not tied to business rules. Otherwise you are doing premature abstraction, and you'll end up creating big rocks you have to carry on your back whenever you want to make a change.

As a consequence, I personally practice bottom-up programming sparingly and carefully.


I don't think the bottom-up model denies the importance of top-down architecture. In practice, you have pressures from both sides. You need to explore what's possible at the bottom but also respect what's required from the top. And they are complementary: you explore what's possible from the bottom, and then you adjust how you organize those possibilities according to the requirements from the top. At the end of the day, as with everything else in the world, it's all compromise.


100%


I've heard this about Lisp, but I don't really get how this doesn't apply to other languages. How is writing special operators in Lisp different from writing special functions in any other language?


> I've heard this about Lisp, but I don't really get how this doesn't apply to other languages.

It can be, and he even says as much:

>> Bottom-up design is possible to a certain degree in languages other than Lisp. Whenever you see library functions, bottom-up design is happening.

I'll also point to kazinator's quote of Ken Thompson in the previous discussion linked by dang:

>> It is the way I think. I am a very bottom-up thinker. If you give me the right kind of Tinker Toys, I can imagine the building. I can sit there and see primitives and recognize their power to build structures a half mile high, if only I had just one more to make it functionally complete. I can see those kinds of things. The converse is true, too, I think. I can’t— from the building—imagine the Tinker Toys. When I see a top-down description of a system or language that has infinite libraries described by layers and layers, all I just see is a morass. I can’t get a feel for it. I can’t understand how the pieces fit; I can’t understand something presented to me that’s very complex. Maybe I do what I do because if I built anything more complicated, I couldn’t understand it. I really must break it down into little pieces. - Ken Thompson, Unix and Beyond: An Interview with Ken Thompson, 1999

And if you use a TDD style of development in particular (but test-heavy development in general, with frequent use of unit and integration tests below the full end-to-end level), you'll also likely stumble onto a similar bottom-up style of development.

I think that, circa 1993, his emphasis on Lisp and bottom-up development made a lot more sense than it does today, given the increasing availability of interactive development environments (essentially every dynamically typed language, but even many others like Java and C#) and the increased emphasis on test-heavy development methodologies.


A function call has runtime overhead that a macro may not. There are also things that can be difficult to express as a function in the language type system but that are straightforward to implement as a macro. Lisp isn't the only language with a strong macro system but it disproportionately attracts programmers who value the power and expressivity that a strong macro system provides.
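Python has no macros, so we can only simulate the point about expressivity: a function evaluates all of its arguments before the call, while a macro controls whether (and when) its body is evaluated at all. A rough sketch, with invented names:

```python
# Simulating the classic macro motivation: an 'unless' that should not
# evaluate its body when the condition is true. A plain function can't
# avoid that evaluation; a thunk (or, in Lisp, a macro) can.

calls = []

def expensive():
    calls.append(1)
    return "computed"

def eager_unless(cond, value):
    # As a plain function: 'value' was already evaluated by the caller.
    return None if cond else value

def lazy_unless(cond, thunk):
    # What a macro gives you for free, simulated with an explicit lambda:
    # the body is evaluated only when the condition is false.
    return None if cond else thunk()

eager_unless(True, expensive())   # expensive() runs even though cond is True
lazy_unless(True, expensive)      # expensive() is never called
assert calls == [1]
```

A macro also removes the thunk-wrapping boilerplate at the call site, which is part of the expressivity being described.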

There are a number of reasons that bottom up programming in the style PG seems to be advocating for is not popular in industry. Industrial programming favors the median developer while bottom up is a technique that works best with small teams of skilled programmers. Big cos obviously prefer "safe" languages, but even startups are generally conservative due to the vc induced pressure towards hypergrowth, which also favors the median developer since hypergrowth requires hiring.

Writing good programs from the bottom up requires more skill[1], but can lead to substantially better programs by certain criteria. But it may also be considered inscrutable by the median developer. The median developer can also misuse the power afforded by the bottom up language and write something much worse than they would have been able to in a more conservative language. This isn't a knock on the median developer, btw, just my perspective on why this style is rarely found in industry.

[1] neither LLMs nor stack overflow are of much help when you have essentially written a new language to solve your problem.


> There are a number of reasons that bottom up programming in the style PG seems to be advocating for is not popular in industry.

I think another reason is that it's just more work for another programmer to grok a bottom-up program written in its own DSL than one bent to fit the existing (more rigid) language, since they already know the existing language.


Extreme reinforcement of your point: Read Leo Brodie's classics "Starting Forth" and "Thinking Forth."


Chuck Moore, inventor of Forth, is also an advocate of bottom-up development.

> You mentioned that a lot of Forth programs you’ve seen look like C programs. How do you design a better Forth program?

> Chuck: Bottom-up.

> First, you presumably have some I/O signals that you have to generate, so you generate them. Then you write some code that controls the generation of those signals. Then you work your way up until finally you have the highest-level word, and you call it go and you type go and everything happens.

> I have very little faith in systems analysts who work top-down. They decide what the problem is and then they factor it in such a way that it can be very difficult to implement.

https://www.oreilly.com/library/view/masterminds-of-programm...


PG has other essays where he claims it's because the lisp version of operators can just do more things.

And that's true, lisp macros are way more powerful than what most languages support. The newer ones (and some "progressive" old ones) got template systems that are almost as powerful, but the most used languages simply don't have anything comparable.

That said, nowadays using a lot of that kind of feature is normally frowned upon.


I have a theory that it is frowned upon because there are not a lot of small shops left. Everyone wants to be a cog at a big tech co, where it's leetcode interviews and code that large groups can (somewhat) easily maintain.

In these cases the languages used are generally uniform, so your skills transfer easily and you aren't suffering as many 200% problems by learning both the language and the author's peculiar add-ons.

Of course it still happens, just not as easily with rigid languages such as Go.

It's all tradeoffs.


As a rule, by the time something deserves to be called a "shop", it is already too large for heavy use of macros. In fact, I'm not sure it scales beyond a single developer.

That is, unless you treat them as another language, with tooling, documentation, and stability. Then they scale again, but it requires quite a lot of work.


Is this website an example? Dang has been talking about a set of performance changes for a long time and I always wonder if it is slow because it is so bespoke.


I've heard this about Lisp, but I don't really get how this doesn't apply to other languages.

My no. 1 suspect is OOP or, more specifically, languages like Java that won't let you write a standalone function. You can bypass that limitation by creating a class whose only purpose is to hold stand-alone functions. I don't do that, and lately I don't write much Java at all, but I've seen huge Util.java classes.

In sane languages I used to start exploratory programs with two files: one for the main flow, the other for utility functions. Later the former would become different windows, the latter different libraries. Classes grew slowly, one functionality aspect at a time.

Forced OOP nudges you into designing the class structure a priori, before understanding the domain. The bottom-up advantage is that, having the primitives early, you can test with real data from the start and show the results to the domain experts.


> Whenever you see library functions, bottom-up design is happening.

I'm glad he acknowledged that library functions are the most common implementation of "bottom-up design". In that case, extracting primitive API functions, and layers composed of those functions, is effective at reusing logic and creating appropriate levels of abstraction.

That being said, many frameworks do not offer these layers of abstraction and so create a sort of "complexity lock-in", where one has to adopt the entire framework in order to use a few useful parts of it. I find this lock-in unfortunate: it couples the programmer to the entire framework even when most of it is not useful, and it creates noise in the conversation, making it harder to search for solutions at the appropriate level of abstraction.

There is a cost to integrating other libraries into the framework.

Frameworks also tend to become an institution in search of increasing market share. This means there are changes done to the framework. Some of these changes are breaking changes that don't necessarily improve the core development experience in meaningful ways.


A much better approach is outside-in. Start with the protocol a client (command line/browser/desktop) will use to communicate with your system and then implement each req/rpy in logical usage order.

And use automated tests to verify functionality. Having a test program run a full end-to-end req/rpy session checking all functionality makes the bug count approach zero.
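A hypothetical sketch of that outside-in shape (in Python rather than C++, with invented names): fix the client-facing request/reply protocol first, then drive the implementation with a single end-to-end session test.

```python
# Outside-in sketch: the protocol (the req/rpy pairs) is defined before
# the implementation, and one end-to-end session exercises all of it.

def handle(request):
    # Implementation filled in only after the protocol below was fixed.
    if request == ("PING",):
        return ("PONG",)
    if request[0] == "ADD":
        return ("OK", request[1] + request[2])
    return ("ERR", "unknown request")

# One end-to-end session covering every req/rpy pair in logical order.
session = [
    (("PING",), ("PONG",)),
    (("ADD", 2, 3), ("OK", 5)),
    (("NOPE",), ("ERR", "unknown request")),
]
for request, expected in session:
    assert handle(request) == expected
```

The session list is the spec: any change to the system either passes it unchanged or forces an explicit, visible edit to the protocol.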

I am maintaining large scale mission critical C++ applications using this approach. With zero production bugs for 5+ years.

And yes of course split the implementation into reusable parts as much possible. That is basic Software/System Engineering.

Claims that it can't be done in languages other than Lisp are absolute BS. Linux (for example) is mostly a huge collection of reusable components written in C.

It works for any language/environment and is therefore a more fundamental technique.


I really like On Lisp's message of building large things from small things. But when I read it these days it feels dated. The examples in the first part of the book are things that you'd expect in your standard library these days and the rest of the book solves problems you can import a library for. Macros can be simple solutions to problems that would otherwise be difficult. But the first section of the book mostly makes macros for things we'd use a higher order function for these days.
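For instance, two utilities of the sort On Lisp builds early on, written here as plain higher-order functions in Python, which is how most languages would express them today instead of reaching for a macro:

```python
# Function composition and memoization as ordinary higher-order
# functions, no macros required.

from functools import reduce

def compose(*fns):
    # Apply fns right-to-left, like Lisp's compose.
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

def memoize(fn):
    cache = {}
    def wrapper(arg):
        if arg not in cache:
            cache[arg] = fn(arg)
        return cache[arg]
    return wrapper

inc_then_double = compose(lambda x: x * 2, lambda x: x + 1)
assert inc_then_double(3) == 8  # (3 + 1) * 2
```

First-class functions and closures cover much of what the book's early macros were doing, which is exactly why that part reads as dated now.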


Related:

Programming Bottom-Up (1993) - https://news.ycombinator.com/item?id=16933030 - April 2018 (64 comments)


You can enjoy the luxury of developing top-down, bottom-up, or sideways or whatever mainly because of the tools and environments that you have, and those were mainly developed bottom up.

Code you re-use is often ready-made bottom pieces.

Some code you reuse calls your code, like frameworks. They are middle.

You might be writing a plugin or extension to an application; then the top is decided and you're making some middle or bottom thing.


Interestingly, Feynman applauds the bottom-up design of the shuttle software[1].

[1] https://history.nasa.gov/rogersrep/v2appf.htm


I used to approach problems bottom-up, and I often prefer it when I'm working on my own because I do feel I have a much deeper understanding of the system as a whole that way. But on larger teams, and especially teams with junior developers or communication boundaries, I find a top-down approach to be much easier to get everyone on board with.


top-down is impossible because real systems have no top.

https://softwarequotes.com/author/bertrand-meyer


https://bertrandmeyer.com/OOSC2/ - The book being quoted is now available for free, the quote you're referencing comes from page 108. (PDF page 138)


Thanks. I had read most of it many years ago, the print version, but will read this online one again.

IIRC, there's an Easter egg of a kind at the end of the book.

Looked it up. It seems the Notation section of the book's Wikipedia page mentions two Easter eggs (sort of).

https://en.m.wikipedia.org/wiki/Object-Oriented_Software_Con...

I had tried out Eiffel at that time. Very systematically designed language, I thought, though I'm not a language lawyer.


Actually, seven, not two.

Six times throughout the book, and once in the Epilogue. Again, see the Notation section.


I remember nodding excitedly along with this when I first read it maybe twenty years ago, and less so now, so I decided to go into my mental time machine and see if I could remember what was so exciting about it.

At the outset let me say that I know that Graham highly values Lisp's ability to transform itself into the language you need, especially by using macros. And that's nice, too, but it wasn't what excited me. For me, it is summed up in this quote:

> As you're writing a program you may think "I wish Lisp had such-and-such an operator." So you go and write it

I remember mentally screaming "Yes! Yes!" when I read this the first time, and wishing I could explain it to some of my coworkers. But who could possibly disagree with such a simple statement? When I read it, why did I think it would be hard to explain?

Thinking back, the conflict was between two perspectives on designing the modules or classes of a codebase: one says that you are implementing a Platonic ideal of the domain, another says you are implementing operations that support the functionality of the system.

In the eyes of many of my coworkers at the time, good domain code would have no traces of the specific needs of the program it was written for. It would have its own logical consistency, dictated by the reality of the domain. A Car class should behave like a car, and it should serve the needs of any future car-related programs our company wrote.

This was obviously an incomplete picture. For example, a Car class in a ride-share system probably doesn't have a coefficient of drag, even though a real car does. But some people felt that the naive, incomplete picture captured such a profound and helpful ideal that it was better than a complete picture that acknowledged the demands of the program (kind of like it might be better to tell a four year old "lying is bad" rather than tell them that it's super complicated and mommy and daddy lie all the time but only in the right ways for the right reasons.)

I hated the way this ideal glossed over the obvious truth that the supposedly neutral domain model was in fact very selective. Any Car class would specify a tiny and very specific subset of the behavior of a real car, and it would do so at a specific level of fidelity, which needed to support the functionality of the larger system. Because I hated pretending that we weren't writing the low-level code for specific high-level needs, I found it refreshing that Paul Graham would so clearly put the needs of the program first. He wrote domain functionality on demand! And if he left coherence and maintainability of the low-level code as an unspoken exercise for the reader, that was no worse (and better in my mind) than leaving unspoken the connection between high-level functionality and low-level design.

Anyway, I was always confused that Graham called this "bottom-up programming," because he explicitly goes top-down. He discovers a need at a higher level ("As you're writing a program you may think 'I wish Lisp had such-and-such an operator'") and then he drops down to the lower level to provide it ("So you go and write it"). To me this is opposite of the approach that says you start at the bottom and implement a Platonic ideal of the domain, without first explicitly considering the needs of the program.


There’s an interesting thread of truth in this, and you’ve touched on it. Paul always goes top-down first, and bottom-up second. I don’t think he explained that very well.

Doing bottom-up first is fraught with danger, since you don’t know what you’ll need until you start writing an app. That’s why it’s so important to let the bottom-up approach be guided by the needs of an actual application. It rarely goes well any other way.

The problem isn’t starting top-down, but that people never do the bottom-up work that you uncover. They get around it in other ways, mostly by suffering. A lot of devs don’t even realize they’re suffering; they think it’s part of the work. Whereas if you pull apart the elements of your top-down program into their component parts, you’ll almost always end up with shorter programs that are more powerful. Sometimes exponentially so.


Getting around the problem in other ways often means they reach for third-party libraries, adding dependencies and just gluing half-fitting parts together, solving the problem now but suffering in the long term. (Or whoever comes afterwards to maintain the project suffers, if the original devs left after 2 years for the next gig.)


> Anyway, I was always confused that Graham called this "bottom-up programming," because he explicitly goes top-down. He discovers a need at a higher level ("As you're writing a program you may think 'I wish Lisp had such-and-such an operator'") and then he drops down to the lower level to provide it ("So you go and write it"). To me this is opposite of the approach that says you start at the bottom and implement a Platonic ideal of the domain, without first explicitly considering the needs of the program.

This sums up why I've never really understood this essay. I agreed with the technique but was mystified by why it was called bottom up programming when you first had to approach the problem top down to discover what you needed to implement.



