The Forth Methodology of Charles Moore (2001) (ultratechnology.com)
93 points by todsacerdoti 9 months ago | 61 comments



Most of Moore's Forth methodology comes down to extracting the optimal solution by:

    1.  Iterating your understanding until you have the deep core of the problem.
    2.  Experimenting with possible solutions until you find the best approach.
    3.  Not coding the final production code until the first two steps are complete.  (Maybe this is a variant of Fred Brooks's "Plan to Throw One Away"?  see https://course.ccs.neu.edu/cs5500f14/Notes/Prototyping1/planToThrowOneAway.html)
    4.  Writing code and documentation that is simple, direct, and easy-to-understand.


This works well when the problem stays the same for long enough, and you're not in a particular hurry. Good for radio telescope control (the original application of Forth), good for spacecraft design (some of Moore's CPUs have flown in space). Harder in a typical business setting, alas.


What you're saying is completely right once you realize that the problem with business software is not the software side; it is the illogical requirements of businesses. The issue is that different parts of the business throw out requirements that are contradictory and generate unmaintainable systems in the long run. Thus the need to constantly rewrite and patch. In an "enterprise" setting, you don't want a solution-oriented language, you want a language that can be twisted to produce whatever the business wants in a short period of time.


The problem with business requirements is that there are no systems analysts anymore; that work has largely been given to programmers. A systems analyst could determine what information each part of the business needs and what information it can provide to whom. With that information in hand, designing an information system becomes a relatively straightforward process; programming it even more so.

The way we do things today is basically guesswork. Which suits programmers just fine, as it lets them write a lot of code. But that's not the best of all possible worlds, only the default world.


Everything that involves planning and long term thinking has been eliminated from modern businesses, under the guise of "fast iteration", "just-in-time", or whatever new name they find. It's not a software-only phenomenon.


the problem is that the requirements take 6 months to elucidate from the incoherent ramblings of the users, and by the time you've figured it out, the business has moved on and has new and incompatible requirements. so your software is late, doesn't meet the requirements and doesn't even work.


> the problem is that the requirements take 6 months to elucidate from the incoherent ramblings of the users

This is really only an issue if you appoint programmers to serve as systems analysts. Sadly, this is all too common among businesses today.

Remember, programmers are detailists who must speak the language of the computer, and do not see the big picture, only one small part of a much bigger puzzle. A good systems analyst is a generalist who understands the business and can empathize with the users. They have strong communication skills, and can take a big-picture perspective and articulate their perspective and solutions clearly to management. They most certainly would not dismiss user requests and complaints as "incoherent ramblings". Programming and systems analysis are separate professions that require vastly different personality types, with the consequence that programmers make very poor systems analysts.


> and by the time you've figured it out, the business has moved on and has new and incompatible requirements. so your software is late, doesn't meet the requirements and doesn't even work.

There's a reason we don't have them anymore.


You're putting the cart before the horse.

Again, gathering requirements is a time-consuming and confusing process, making the software late and often wrong, because most companies assign programmers to the systems analysis process, not people skilled in actual systems analysis. A good systems analyst has the empathy and high-level insight it takes to understand the existing business systems and the true needs of the users, and design a new system to better meet those needs -- traits sorely lacking in programmers. Note that a business system is not the same as software. It incorporates all of the procedures, whether performed by humans or machine, necessary for business operations.

As a consequence, we tend to declare that nothing can be done about gathering requirements and implement "Agile" methodologies, which are nothing more than institutionalized guesswork: in the words of Milt Bryce, "do a superficial feasibility study, do some quick and dirty systems design, spend a lot of time in programming, install prematurely so you can irritate the users sooner, and then keep working on it till you get something accomplished." And the software is still late and wrong!


someone online mentioned (i'll link if i find it again) a quote similar to this, something like:

    - go crazy in understanding the problem
    - break it into small words
    - repeat above steps until it's easy to write in forth
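
for a sense of what "break it into small words" looks like in practice, here's a rough sketch (not from the quote; the problem and the word names are invented):

    \ hypothetical example: convert and report a temperature
    : c>f       ( celsius -- fahrenheit )  9 5 */ 32 + ;
    : freezing? ( celsius -- flag )        0 < ;
    : .temp     ( celsius -- )
        dup freezing? if ." below freezing, " then
        c>f . ." F" ;

each word does one small thing, and the top word reads close to the problem statement.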


Charles Moore definitely believes in modifying the original problem so that it fits Forth while he modifies Forth to fit the problem.

I have used a similar approach when I needed to get a PoC out in 6 weeks; we made it, but not without drastically modifying "the spec". In one instance the PM wanted continuous queries over geospatial data, millions of points and arbitrary locations along with a time dimension. I changed the spec to use 3 buckets: near, medium, and far. That feature took 2 hours; the continuous query itself would have taken weeks or longer.
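
For illustration only, a loose Forth sketch of the three-bucket idea (the original wasn't necessarily done in Forth; the thresholds and word names here are invented):

    \ hypothetical sketch: classify a distance into near/medium/far
    : bucket  ( distance -- n )   \ 0 = near, 1 = medium, 2 = far
        dup 100 <  if  drop 0 exit  then
           1000 <  if       1 exit  then
                            2 ;

The point is the shape of the change rather than the code: a fixed, coarse classification is a couple of lines, whereas a general continuous query would have been a project in itself.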

Successful projects have a malleability to them.


> Charles Moore definitely believes in modifying the original problem so that it fits Forth while he modifies Forth to fit the problem.

This philosophy is often described by 80s Lispers. They're neither top-down nor bottom-up; they lie in the middle, trying to make a comfy intermediate layer so they can experiment with various upper layers to fit the problem and adjust it rapidly in case of changes. Similar to an IR in compilers, I guess.


The main weakness of Forth is that modern programming is basically connecting pieces of APIs from different systems. This is true at the OS level, and much more so at the distributed level (web). So you're stuck with languages that help with this type of interfacing, such as Python and Java, while languages that excel in expressibility and logical development like Forth provide little benefit to the type of programming we're forced to do every day.


One other weakness that I sometimes wonder about in relation to Forth is that modern processors are just much more complicated to reason about internally. I am not sure how much efficiency I can pull out of them at reasonable cost, considering they likely have:

    1) Out-of-order execution
    2) Multiple levels of caches
    3) Branch predictors

Forth has several neat advantages. I think it might be a better way to reason about a complex problem. It can be bootstrapped on almost any piece of hardware. It can be made to be very memory efficient in its runtime.

Those things were true in the past and they are still true now. But I do wonder whether the performance part of it is still true. Not because Forth has changed, but simply due to the fact that modern hardware (+ compilers) can take mediocre code and make it into pretty damn performant code.


That's correct, but Forth can always be made to run faster with enough additional work (a more robust compiler, for example). That's what commercial Forths still do nowadays.


You hit the nail on the head. My ideal world would involve me having an extended period of time to research a problem and then artfully craft a perfect, beautifully documented solution.

The world I live in is mostly crappy software products involving an infinite mess of APIs glued together and poor understanding of the core problem which leads to a huge mess. It's worse for the organization too, but good luck arguing that.


> The world I live in is mostly crappy software products involving an infinite mess of APIs glued together and poor understanding of the core problem which leads to a huge mess.

And now we get to add frequent supply chain attacks to the list.

I recently tried NextJS even though I consider myself an ardent framework minimalist. Create app (or whatever) proudly proclaimed it had installed 666 packages. I took that as a hint and bailed…


An infinite tower of abstractions lol.


This is why Perl stays attractive to me — not because of the language, but because of CPAN and all that accumulated API interfacing knowledge expressed therein.


> 3. Carefully construct a well thought out solution.

I think he might be on to something here ...


Everything I read about Forth, and about people's reaction to it, sounds amazing.

So why don't we use Forth for everything? Does it have some ceiling that makes it not great outside things like making firmware?


The Forth philosophy generates specialized and incompatible systems that are only suitable for one person or small teams. For those teams who develop the system (or are willing to put significant time into learning a bespoke system), it's amazing--a 10x effect. But if you want to share code between teams or on the Internet, or if you want fungible engineers (or have rapid turnover, as is common in the industry these days), it's not workable.

Also, it's incredibly efficient, which is necessary when you have kilobytes of RAM. When you have gigabytes of RAM you can cobble together a Rube Goldberg machine without having to think too much, and time-to-market wins out over system coherence and quality.


Indeed, any language that encourages you to program by building tailored DSLs for every problem domain has the same issue (Lisps, Racket…).

It sounds very productive to always be able to express solutions with the perfect language for that type of problem. But you end up with myriad bespoke languages that are very impractical to document and learn, and are a constant friction for collaboration (even with your future self).


It can work if you think carefully about module boundaries so that the dsls do not leak out of their containing scope. Most of the time, dsls are not well bounded and even if they are, they are likely to confuse newcomers to the codebase. Of course, whether or not dsls are used, every program does have custom dsls in the form of functions that newcomers also aren't going to understand so it isn't as though programs written in a vanilla host language don't have many of the same comprehensibility problems as those written in a dsl.

That being said, not every piece of code needs to be accessible to everyone. Dsls allow you to create a hermetic world inside of a programming environment in which certain properties always hold, even if the environment does not guarantee these things for you. Programs written in the right dsl can be more productive to write and have much lower overall operational costs. But the price is that the rules of the dsl are not necessarily the same as the rules of the host language and this leads to confusion. For the truly ambitious programmer, it can be worth sacrificing accessibility to the median coder to build systems that the median coder can't even conceive of. Of course your reward is to be derided for not expressing the solution in a language they can already understand.

Wrt Forth itself, Chuck Moore has pointed out that Forth is a multiplier: it makes good programmers better and bad programmers worse. I personally care more about the former property and frankly don't want to work with bad programmers anyway.


To me it is mainly an issue of documentation; smart programmers cannot read minds either. There are standard and effort-efficient ways to document function APIs, whereas DSLs need full user guides. They have complex rules about what are valid expressions and what they do, hard to guess without a thorough explanation, and they are incompatible with modern dev tooling that makes it easy to quickly discover how to use an API without much reading or trial-and-error.

I do agree there is room for such approaches and they can have 10x effects in some situations. But more often than not, smart programmers can get wrapped up in the logical beauty of it all, distracted away from the core problem, feeling extremely productive but barely getting anything real done (like Vim can make you feel so much faster when you really aren't, once you measure and compare properly).

Creating new languages is useful: config, markup, query… But it is hard and laborious to do well, and it is usually only successful if creating the language is your core goal, rather than a one-off auxiliary part of solving a problem, as languages like Forth or Lisp constantly encourage by their design. And, again, a non-trivial language without a manual is unusable (or it is trivial).

Limits to flexibility can be very useful: they make you focus on solving the problem with the tools you have, rather than making the perfect tools to solve the problem, tools that you will probably never use again because they are too tailored to that one problem.


You can use tailored DSLs in any language. Not just Lisp.

I do it all the time. It is a huge productivity boost.

An example is a biz application I recently delivered to a large international customer. More than 90% of the C++ and TypeScript code was auto-generated from a simple declarative spec. The only custom code was the customer-specific biz logic and custom interface logic to external tools.

Yes, it isn't generated with Lisp-style macros, but it works equally well.


Marketing. There was a recent article that hit the nail on the head. The author was quite prolific within his field, and always used Forth. He would be called in as a consultant to do a certain kind of work (mostly DSP programming), and could adapt his existing Forth code to the new problems. His coworkers invariably thought that he was _cheating_ by using Forth. They were Real Engineers writing code in a Proper systems language like C. Don’t you know all of C’s advantages?

The fact was that he could run rings around whole teams of C programmers. His implementation would be more efficient, more reliable, and cheaper to mass produce (lower ram requirements, lower power requirements, etc). But he could never stay anywhere very long because all the Real Engineers knew that C was the Right Language to implement things in. His better solutions were seen as aberrations rather than a condemnation of their own mediocrity.

That’s the power of good marketing. Forth never had an industry giant spending billions a year on marketing to convince all of the engineers that it was the best language ever. C, C++, and Java all did. All the VPs have read articles and attended conferences that touted C++ or Java as the best thing ever, but they’ve never even heard of Forth.


I've seen this in part. The influence of mediocre smooth talkers is hard to believe. I also believe that non-tech social groups are more impressed by large teams than by one or two geniuses. Lastly, large companies asking loads of money to operate will also have cushions in case of problems; the efficient lone wolf will not.

People with deep non-mainstream knowledge should avoid the mainstream by wide margins and try to assemble in small groups of like-minded hackers.


I hear the “marketing” argument all the time and I don’t buy it. Lisp (for example) has been promoted heavily for 60+ years by generation after generation of enthusiasts. And it hasn’t convinced the mainstream to switch.

And meanwhile C and C++ code runs the world. Without any commercial marketing behind it.


Wow, ok. No commercial marketing?

How about conferences? I’m sure none of the corporate sponsors have any marketing reason to contribute. <https://isocpp.org/wiki/faq/conferences-worldwide>, <https://javaconferences.org/>

Or magazines? Surely they were entirely supported by their subscribers, not their advertisers. <https://en.wikipedia.org/wiki/Category:Computer_magazines_pu...>, <https://en.wikipedia.org/wiki/Category:Defunct_computer_maga...>


The main reason comes from how we were taught to write programs. It is the same problem as with human languages: if you learn to speak English as a kid, you'll hardly have the need to learn German. Going from something like Python to Forth is similar to learning a foreign language; the way things are expressed in Forth is sometimes radically different. So you need to relearn how to program, which is a painful and expensive process for companies. The languages that become successful are a direct translation of the first successful programming languages like Fortran and C. Everything else is too alien to succeed.


I know this is hard to accept for people enthusiastic about a language X (Forth/Lisp/Haskell/..) that hasn't entered the mainstream despite years of salesmanship and promotion from X people:

There is a reason why it hasn’t happened (obviously).

It might not be for reasons X people agree with or value. However, those reasons are important and valuable enough for non-X enthusiasts to bounce off X.

And to make matters worse, the typical reaction from X enthusiasts is to then claim that non-X developers are not smart enough to “get it” or that somehow people using X are smarter than non-X developers. Claims that are so obviously ridiculous that it makes non-X people bounce even harder off the X community.

I personally love learning new languages, and have written compilers for a few of my own. However, that's all fun and games, not something I would use in anger for my day job, where I have to work in a team and deliver highly complex software for a large corporation that relies on my code to run their business.

However, having said all that, Forth is really cool and fun to play with. I love minimal languages! Lisp is another fun one. Or the lambda calculus. All great fun languages!


Something like:

https://yosefk.com/blog/my-history-with-forth-stack-machines...

...might recalibrate your expectations.


His methodology has a lot of planning. I'm, in general, terrible at planning how long a program will take to write. But there is one class of programs I am pretty good at: very short, very simple programs.

Good news is that every problem you'll ever face can be broken down into sub-problems which are easier to solve.

Bad news is that planning a software project in that much detail, down to the granularity needed for me to get accurate time estimates, takes as long as actually doing the programming would.

And even at that, there will still be many problems with the plan, because all that time we were not, say, writing unit tests, or building prototypes large enough to expose problems.

And yet, here he is, and his methodology certainly worked for him. I wonder how much Forth itself contributed to that success, or whether he could have used any interpreted language.

Any pointers on how to better estimate programming effort would be greatly appreciated.


Make up a number that sounds good. Check if anyone has done this before in your setting. Multiply this estimate by three. Keep people updated on your progress if you are late.


> multiply this estimate by three

https://en.wikipedia.org/wiki/Hofstadter%27s_law


I wonder if a programming methodology could work by: (1) writing the solution in Forth and creating a prototype that will be 100% thrown away. (2) making sure that it works. (3) re-writing the solution in a language that will be accepted by the business people. In other words, using Forth as a fast prototype system.


I have done that with Python: coded up a prototype in Python, wrote tests against the service interface, then translated the Python to Java in a second version until the tests passed. It actually did save us a bunch of time, especially since the front-end devs could use the Python prototype to do their work.


Compare it to the process of the Shuttle Software Group, detailed in the FastCompany article, "They Write the Right Stuff" (https://archive.is/HX7n4)

(Previously discussed many times on HN; the most popular post: https://news.ycombinator.com/item?id=23537530)

To me, the Forth process is more rigorous and less error-prone. It may not be the 'industry standard', but for certain very specialized and intelligent teams (such as the Shuttle Software Group), I'm surprised it hasn't taken off (excuse the pun)...


> First consider the options that everyone else would reject first, that is where the biggest algorithmic improvement is likely to be found.

This is excellent advice for humanity in general.


Many of these aphorisms are useful in a non-programming context as well.


It's a nice philosophy, but good luck doing this in a strict Scrum process.


Well, there is the question of how much quality we want or need. This process is by someone who created a very terse programming language. Indeed, it probably is a way to create such a thing. Most of us are not writing programming languages, let alone very terse ones. I am not entirely sure it is optimal for that use either. The triply nested loop might be a bit excessive. Of course scrum is, in theory, very flexible. One could have a 'definition of done' that includes the design of at least two alternative solutions, benchmarks for all of these and more stuff like that....


you can write "UNTIL" in bold capital letters, but your boss or customer can equally write "right now" in bold capital letters. worse, they can hold on to your paycheck or give it to somebody else.


in pre-c programming practice it was common to write programming language keywords and variables in all upper case, among other reasons because teletypes and computer character codes often omitted lower case. in prose this rather rebarbative convention serves to distinguish them from natural-language words, much as we often use typewriter typefaces today. it is not intended as shouting, which i think is your reading. the loop we write in c as

    do {
      x();
      y();
    } while (!z());
is written in forth as

    begin x y z until
and that is what the all-caps untils in this text refer to: three nested loops
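
a hedged sketch of that structure as forth pseudocode (the word names are invented here, not taken from the article):

    \ hypothetical words standing in for the article's three loops
    : methodology
      begin
        begin
          begin  refine-understanding  problem-understood? until
          try-an-approach  best-approach? until
        write-production-code  done? until ;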

the thesis of this document is that you get faster results by taking the time up front to think your problem through until you understand it, and throwing away your code and hardware designs is not something to be afraid of

i'm not sure jeff and chuck's results at itv bore this out; hardware kept evolving out from under them too fast, other people's code became more useful (because of shitty hardware, because of the free software movement, because efficiency became less critical, and because of higher-level languages with better reusability), and they ran out of money

but simplifying a problem is still immensely valuable when you can do it. the problem with programming continues to be overcomplicating things


If you read the bio of Charles Moore [1], you will see for yourself that he is well aware of business constraints. Among other things, he co-founded Forth, Inc. (which is still alive, as you can see) and was a Forth freelancer. I grant you in advance the point that it was 30-50 years ago and that the situation can be different today. It is however still worth following this methodology when you can afford it. There's often a life after deadlines.

But that's actually not something specific to Forth. Every programmer does that to some degree. Sometimes it is included in big and small so-called "refactoring" steps. Moreover, one could say at first glance that the core of this methodology is the well known "if you spend more time on design you will spend less time on code".

The other thing is that, indeed, in this context it is less of a problem to throw away what you've done, because of steps 1 and 2. The phrasing is a bit too abstract; perhaps clearer is what he said in an earlier interview: "Don't anticipate, solve the problem you've got." [2].

This piece of advice is so beneficial that it is almost a motto for me. It is also harder to do than one may think, because it is so easy to FUD oneself with "what if" and "what about", and sometimes you have to fight against your own hubris too (e.g. "I'll build something powerful"). So it requires some training; after all these years programming in Forth I still catch myself anticipating (e.g. "I think it could be useful to have that").

[1] https://www.forth.com/resources/forth-programming-language

[2] https://www.ultratechnology.com/1xforth.htm : Don't leave openings in which you are going to insert code at some future date when the problem changes because inevitably the problem will change in a way that you didn't anticipate. Whatever the cost it's wasted. Don't anticipate, solve the problem you've got.


And yet it is wise to write code in a way that will lend itself well to additions later on. It is bad advice to forgo thinking about making one's code flexible and extensible; to simply spit out some code and without further thought accept the first barely working version of it. This will lead to obstacles, which in businesses will be interpreted as the cost of changing something in the future, which in turn will lead to the decision to simply never make this or that improvement. You will not be given the chance to revise your design later, in many cases, so you had better make it work well the first time.

One might not be able to anticipate future requirements, but there are situations in which the engineer, provided they even get the right idea, can make their code reusable with little to no additional time needed, simply because they know what they are doing and have experience with the matter. The worst that can happen to a product is when less experienced engineers think they are doing well by telling the experienced engineer that this can be done in the future, when actually that future never materializes. Whole new directions of products die because of this. Code might even grow larger because of not anticipating things and having to build workarounds later, because one does not get time to refactor and has feature pressure from management.

It can all be pretty shortsighted and can limit the output that skilled engineers can provide. Bad management is gonna manage badly.


> It is bad advice to forgo thinking about making one's code flexible and extensible.

In my experience, the best way to make code flexible and extensible is to make it simple. No hooks, no architecture, no pattern, just the smallest thing that does the job. And failing that, the most modular.

That way, when unforeseen requirements do come your way you can just modify your code. The simpler it is, the easier it will be. And it works for any future requirement, not just the ones you had the experience or luck to foresee.

The flip side though is that simplicity is not straightforward. One does not simply "spit out some [simple] code". Simple code requires time and effort. I would like to be able to just say "forget about extensibility, just make it simple", but it wouldn’t convey the difficulty of making things simple.


> To simply spit out some code and without further thought accept the first barely working version of it

That's not exactly what TFA describes, though.

> there are situations in which the engineer, provided they even get the right idea, can make their code reusable with little to no additional time needed, simply because they know what they are doing and have experience with the matter

Maybe. Problem is, how do you know when you or someone else has enough experience to make those calls? I guess the only way is to try and try again. But if you go that route, you should really keep the score, that is, note down when you made a call and check later if you were right or wrong. It's easy to trick oneself though because the future is virtually infinite, so a decision to make e.g. something more reusable at "this little extra cost" can be indefinitely neither right nor wrong.


That is: there are more dimensions to real-world solutions than purely technical.


that's jeff's main point here


And yet we watch organizations overlook the fullness of the problem on essentially a daily basis.


muh shareholder value


Sure, but that is purely tactical, getting one to the next quarterly earnings statement.

The strategic question is why going long (or, a better strategic/tactical balance) seems so impossible.


it's easy to convince yourself you're playing the long game when you're just screwing up. even in retrospect, it can be hard to tell if you were making a reasonable bet that happened not to pay off, rather than just a dumb bet—or, if you won, to what extent you were making an unreasonable bet that happened to pay off, rather than a savvy one. multiply this by the inevitable fact that, acting in a complex and unpredictable world, everyone is mostly acting out of profound ignorance

finally, aside from unpredictability, rapid innovation and capital growth implies a high discount rate: any new innovation must compete with the risk-free return on capital. if we assume 5% per year, we must discount a return 20 years in the future by 64% (≈1-1/e)

under these circumstances, it's sensible to focus on short-term gains with a rapid feedback loop, rather than long-term gains which might turn out to be chimerical


I'm looking for an 80/20 answer here.

The 20% long(er)-term plays are going to require wisdom to sniff out.


little is scarcer


is this a dead drop?


every open comment thread is, but this one is so far unused


Fnord


<a href="https://bagas31.pro/sketchup-pro-unduh-pra-aktivasi/">Sketch... Pro 2024 v24.0.553 Unduh Pra-Aktivasi | Bagas31</a>



