I would like to find the time and motivation to analyze how Forth and lambda-calculus functional languages compare and connect. Each Forth word is effectively a lambda that must be applied to a correctly-typed stack, and words can themselves manipulate the stack as a data structure in its own right.
Forth includes more than just a stack machine: a return stack, vocabularies, definitions, memory, etc. This means the simple correspondence above will not hold for a complete Forth. But those parts can also be represented as Church-encoded data, so they remain representable in the lambda calculus.
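A minimal Python sketch of that correspondence (all names here are illustrative, not from any real Forth system): each word is a function from stack to stack, and concatenating words is function composition.

```python
# Illustrative sketch: Forth words modeled as pure functions from
# stack to stack; concatenation of words is function composition.

def dup(s):  return s + [s[-1]]
def swap(s): return s[:-2] + [s[-1], s[-2]]
def mul(s):  return s[:-2] + [s[-2] * s[-1]]

def concat(*words):
    """Compose words in left-to-right (Forth) order."""
    def composed(stack):
        for w in words:
            stack = w(stack)
        return stack
    return composed

# The Forth definition ": square dup * ;" becomes:
square = concat(dup, mul)
print(square([2, 3]))  # -> [2, 9]
```

Currying each word over the rest of the stack is how one would start the Church-encoding exercise.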
I keep reading papers on stack-based vs register-based architectures, and it's clear there's no "conclusive" right-answer. But I think it's pretty telling that very little looks like a B5000, and an awful lot looks like a PDP-11/S360. Maybe the Forth faithful are right and the rest of us are too dumb to 'get' stack-based, but it's pretty clear what the market decided.
The B5000 does not closely resemble a Forth machine. It is stack-oriented, but does not contain separate parameter and return stacks which may grow and shrink independently. You could perhaps argue that the JVM and its peers are the modern-day successors to the B5000.
That wasn't the point. The point is that most of the field abandoned stack hardware for register hardware, upon which all sorts of software paradigms are implemented.
I've read Koopman; fascinating stuff. Really solidified my understanding of the use of multiple stacks in Forth-like environments.
But I think it's instructive that none of the commercial architectures he describes (from 1989) have a contemporary descendant (although I think you can still get the RTX 2000 as a space-rated special order at an astronomical unit price). I think that tends to support my premise.
The 8087 was basically obsoleted by register-based SSE2 and SSE3, and as I understand it, on x86_64 the vestigial 8087 opcodes are decoded into microinstructions executed by the SSEx unit. And the descendant of the B5000 (now called Unisys ClearPath) is a VM running on Xeons; they stopped making stack-based MCP processors in 2015 or so.
These types of books have always been a bit frustrating to me. Coders at Work is another one.
Moore is a genius, and his inclusion is absolutely for the better.
However, if it had come just 5 years earlier, it could have had so much more value. Falkoff's inclusion was great; Falkoff contributed greatly to the APL ecosystem. But interviews with Iverson are scarce and hard to come across, and he had an amazing view of the big picture for these things. His books are probably the most valuable I have on my shelves.
Also, it leaned far too hard on ALGOL's many derivatives. Roger Hui and Arthur Whitney would have been more valuable than most of the people included. Even Forth got three people interviewed, which is admittedly pretty cool! (PostScript is Forth.)
Last year I got into Forth, then PostScript, which seems like a kind of dumbed-down, simplified Forth. On PostScript's Wikipedia page, Forth is not mentioned as an influence. So I went to change that, and in the page HTML there's a comment saying that if you came to add Forth, see the talk page. I looked into it a bit. The previous couple of languages by (one of?) the guys who wrote PostScript were based on Forth (Wikipedia admits this), but then PostScript itself was not so much as influenced by it?! That actually made me very angry! The PostScript reference manual reads as if it were written by lawyers, which maybe sheds some light on it. They don't admit it's influenced by Forth, but say of course it has influences, and the very next sentence is about Forth. As if they couldn't avoid mentioning Forth, but legally didn't want to spell anything out. Pretty disgusting treatment of Chuck Moore, it seems to me. HN, help me right the wrong!
PostScript is really a lot more like Lisp or Smalltalk than Forth. It has GC, blocks for control flow, local variable binding, symbols, mandatory bounds-checking, type-safety, and exception handling. Forth conscientiously abjures such things, and uses compile-time metaprogramming instead in some cases, which is possible in PostScript (and used routinely for raster image data) but less flexible.
However, PostScript not only passes arguments on the stack, it exposes things like the compiler machinery in ways that are more typical of Forth than of the high-level languages it draws so much from. So I feel like it has some Forth inspiration, in the same way Python is influenced by SNOBOL or BASIC.
Nowadays I've mostly given up PostScript in favor of SVG and Reportlab, sad to say.
Yeah, but the canvas API sucks bad enough (mostly due to the constraints of JS) that PostScript is an appealing alternative. Canvas is super verbose and can't even produce a PDF; if you can print it at all, it's all pixelated.
Kragen is right that PostScript is a lot more like Lisp or Smalltalk than Forth, especially when you use Owen Densmore's object oriented PostScript programming system (which NeWS was based on). PostScript is semantically very different from and much higher level than Forth, and syntactically similar to Forth but uses totally different names (exch instead of swap, pop instead of drop, etc).
But its most obvious similarities with Forth are based on generic Computer Science concepts like stacks, which are not particularly unique to Forth itself, so saying PostScript was influenced by Forth ignores other much more influential languages like Interpress, JaM, and the Design System, which directly influenced PostScript and were designed by the same people. Also, PostScript is not threaded like Forth. But it's homoiconic, unlike Forth and Lisp.
PostScript was a direct descendant of and heavily influenced by Xerox PARC's Interpress, by John Warnock and Charles Geschke, which itself was a direct descendant of JaM, which stands for "John and Martin", by John Warnock and Martin Newell. And JaM was a descendant of Evans and Sutherland's Design System, by John Warnock and John Gaffney.
Brian Reid (whose brother is Glenn Reid, author of several books on PostScript from Adobe) wrote up an excellent historical summary in 1985 on the laser-lovers mailing list of the influences and evolution of PostScript.
>Brian Reid wrote about page independence, comparing Interpress' and PostScript's different approaches. Adobe's later voluntary Document Structuring Conventions actually used PostScript comments to make declarations and delimit different parts of the file -- it wasn't actually a part of the PostScript language, while Interpress defined pages as independent so they couldn't possibly affect each other:
>By now you can probably see the fundamental philosophical difference between PostScript and Interpress. Interpress takes the stance that the language system must guarantee certain useful properties, while PostScript takes the stance that the language system must provide the user with the means to achieve those properties if he wants them. With very few exceptions, both languages provide the same facilities, but in Interpress the protection mechanisms are mandatory and in PostScript they are optional. Debates over the relative merits of mandatory and optional protection systems have raged for years not only in the programming language community but also among owners of motorcycle helmets. While the Interpress language mandates a particular organization, the PostScript language provides the tools (structuring conventions and SAVE/RESTORE) to duplicate that organization exactly, with all of the attendant benefits. However, the PostScript user need not employ those tools.
There's more interesting discussion about the relationship between PostScript and Forth on c2:
Thank you so much!! What an amazing effort. I've only read a few of your linked comments so far, but look forward to exploring all those. Thanks again, all very much appreciated.
You're welcome! OOPS (Object Oriented PostScript ;), I meant to say that PostScript and Lisp are homoiconic, but Forth is not. The PSIBER paper on medium goes into that (but doesn't mention the word homoiconic, just describes how PS data structures are PS code, so a data editor is a code editor too).
Also, here is a metacircular PostScript interpreter, ps.ps: a PostScript interpreter written in PostScript! Since PostScript is homoiconic and so much like Lisp, it was as easy as writing a metacircular Lisp interpreter (but quite different in how it works, since PostScript and Lisp have very different execution models).
Obvious Question:
Why would anybody ever write a PostScript interpreter in PostScript?
Possible Answers:
To use as a debugging tool.
To trace and single step through the execution of PostScript code.
To serve as a basis for PostScript algorithm animation.
To gain a deeper understanding of how PostScript works.
To try out some ideas from Structure and Interpretation.
To experiment with extensions to the PostScript language.
To demonstrate that PostScript isn't just for breakfast any more.
To make PostScript run even slower (but thicker).
To avoid programming in C (the portable assembly language of the 80's).
To use to interpret itself.
To have something nerdish to talk about at parties.
[...]
Some Forth systems like Mitch Bradley's OpenFirmware have a metacompiler that can compile the same or different versions of Forth for the same or different architectures (or even word sizes or threading methods), which is a totally different thing than a metacircular interpreter, but also very cool and useful.
Also, here's some stuff about LispScript, which is a Lisp-to-PostScript compiler developed by David Singer and Rafael Bracho at Schlumberger around 1987, and some discussions about the joys of using the same scripting language on the client and on the server, and sending JSON-like data structures (polymorphic PostScript arrays and dictionaries) back and forth over the network as events. What we were trying to do with NeWS PostScript between the NeWS client and server eventually happened with JavaScript/JSON between the web browser and server with node.js, and is now called "AJAX".
>PostScript is often compared to Forth, but what it lacks in relation to Forth is a user-extensible compiler. You can write your own PostScript control structures and whatnot, like case and cond, to which you pass procedures as arguments on the stack, but the PostScript scanner is not smart -- there is no preprocessing done to the PostScript text being read in, like the way immediate words in Forth can take control and manipulate the contents of the dictionary and stack while the text source is being read in and compiled, or like the way Lisp macros work. This is one of the things I would like to be able to do with something like LispScript.
Date: Sun, 7 Jan 90 10:36:44 PST
From: Russell Brand <wuthel!brand@lll-crg.llnl.gov>
To: don@cs.UMD.EDU
Subject: wuthel wisp klote
>Somebody quoted you as saying something along the lines of "Lisp is
>the language for people who want everything, and are willing to pay for
>it." Is that how you put it?
The quote is exactly right. It is from a panel I led on the
differences between Lisp, C, Ada, Pascal, Forth and Rex.
>You might get a kick out of Sun's proposal for "NFS 3.0" aka "NeFS". The basic idea was to put a PostScript interpreter in the kernel for extensibly and efficiently executing distributed and even local file system operations. It not only cuts down on network transactions for the same reason NeWS and AJAX do, but even locally you can avoid billions of context switches by executing "find" and tasks like that in the kernel, for example.
The Network Extensible File System Protocol Specification 2/12/90
From: brent@terra.Sun.COM (Brent Callaghan)
Newsgroups: comp.protocols.nfs
Subject: NeFS Protocol Spec Available
Keywords: NFS version 3
Date: 13 Feb 90 01:57:31 GMT
NeFS is not NFS. It started out as an NFS protocol rev
(NFS version 3) but its protocol is a radical departure
from the NFS's remote procedure call model. In order
to avoid confusion with NFS we're calling it NeFS for
now - Network Extensible File System (the similarity of
the name to a window system protocol is entirely intentional).
And later on, around 1990-1993, Arthur van Hoff at the Turing Institute in Glasgow wrote PdB (for "Pure dead Brilliant"), an object-oriented C-to-PostScript compiler, which is kind of like TypeScript, conceptually. We used PdB to develop HyperLook (a networked PostScript-based HyperCard for NeWS), which I used to port SimCity to NeWS on Unix. (A few years after that, Arthur wrote the Java compiler in Java at Sun in 1995, and a lot of other cool stuff since then!)
>Arthur van Hoff wrote PdB, and we used it to develop HyperLook (nee HyperNeWS, nee GoodNeWS). You could actually subclass PostScript classes in C, and vice versa!
>Subject: PDB -- ANSI-C to PostScript compiler arthur@turing.ac.uk (1993-01-21)
>From: arthur@turing.ac.uk (Arthur van Hoff)
>Organization: The Turing Institute Ltd., Glasgow, Scotland
>Date: Thu, 21 Jan 1993 12:52:14 GMT
>Hi PostScript Hackers, [...]
>There is no more need to write PostScript! Start using PdB right now! PdB is an optimizing compiler to compile ANSI-C (like) code into Adobe compatible PostScript. It includes executables, examples and many useful header files. Note that it is not dependent on NeWS.
It's considerably less impressive than some of the examples you've linked, but quite a long time ago I wrote a simple Forth VM as a printable PostScript document. Might be topical: https://github.com/JohnEarnest/Four.Ps
Considering the current lawsuits regarding APIs, I think it's probably safer for Wikimedia to defer to whatever the corporate PR department believes to be correct, lest they be compelled to testify in court.
Can anyone elaborate on the bottom-up method with a small but concrete example? Say I want to parse a special format of strings, read part of each line, and dump it into, say, CSV. How would a Forth programmer approach the problem?
It's been ages since I did any Forth programming, but at the start of my career I spent about two years writing Forth code. I have always found that bottom-up programming is very similar in nature to test-first TDD. You start implementing one specific part, then you implement another specific part. You check to see if there is a better representation for it and refactor if you can. Rinse and repeat.
This is very different from outside-in GOOS-style development, though. You start at the lowest level of abstraction and work your way up. It requires you to do some analysis of the problem up front so that you can see that low level of abstraction.
WRT your example, I can't really work through it without a real problem, because the devil is in the details. But generally, if I have to parse some strings, I'll start with parsing an example of a string in the simplest way I can. Then I'll expand that code to include another string, refactoring the code as I go. Once I have some idea what the parsing interface is going to look like, I'll write some code to read a line from a file. Then I'll work my way up to call the code that reads the line and sends it to the parser. Then I'll take a look at the representation for the parsed string and implement a single CSV output. Then back up to hook that up. Finally, I'll move back down the abstraction layers and work on more strings. You want to move from that low level gradually up, refactoring as you go, and then dive back down repeatedly. Or at least that's how I do it. I don't pretend to be as skilled at it as Chuck Moore, who is famous for discovering amazing abstractions.
I should note that not all problems fit this style of development. Sometimes you have to work from top down. Normally you can discover this fairly quickly if you start chasing your tail. You'll implement something, work your way up, then move down only to discover that you have to change something fundamental. You do it, moving back up, only to discover that the first bit doesn't work any more. However, once you discover the top level abstraction you need, you can go back to a bottom up technique because the lower level abstractions will be more obvious.
Thanks, this is really interesting. I'm definitely not experienced enough to classify problems into top-down or bottom-up, though; I guess that takes years of experience.
One of the things about Forth is that its techniques have not proven to be amenable to generalization. Chuck Moore himself says[0]:
> I wish I knew what to tell you that would lead you to write good Forth. I can demonstrate. I have demonstrated in the past, ad nauseam, applications where I can reduce the amount of code by 90 percent and in some cases 99 percent. It can be done, but on a case-by-case basis. The general principle still eludes me.
---
I would have to start by asking, what is the essence of what has to happen to parse the string? How I would approach the problem depends a great deal on the specifics of the actual problem. So, your "concrete" example problem would have to be a lot more concrete before a Forth programmer could even start tackling it. That's the nature of a bottom-up method.
In Dercuano there is a simple program compared in C, assembly, Python 2, and ANS Forth, in the note “Forth assembling”, p. 3326 of the PDF. Is that what you're looking for?
Myself, I'd probably try writing a recursive-descent parser before anything else in that case. For generating CSV I might have a word quote? which tells me if a string will need quoting, a word quoted which emits a quoted field, a word unquoted which emits an unquoted one, a word comma which emits a comma, and a word record which emits a CRLF.
But I'm not a competent Forth programmer, and I think neither is anybody else commenting here.
I'm willing to try the exercise if you want to give me a text format.
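For what it's worth, the CSV-emitting words sketched a couple of comments up (quote?, quoted, unquoted, comma, record) might transliterate into Python like this; the word names come from that comment, everything else is an assumption:

```python
# Hypothetical transliteration of the Forth-style CSV words into Python.
import sys

def needs_quote(field):          # "quote?" -- does this field need quoting?
    return any(c in field for c in ',"\r\n')

def quoted(field, out):          # emit a quoted field, doubling inner quotes
    out.write('"' + field.replace('"', '""') + '"')

def unquoted(field, out):        # emit a bare field
    out.write(field)

def comma(out):                  # field separator
    out.write(',')

def record(out):                 # end of record: CRLF
    out.write('\r\n')

def emit_row(fields, out=sys.stdout):
    for i, f in enumerate(fields):
        if i:
            comma(out)
        (quoted if needs_quote(f) else unquoted)(f, out)
    record(out)

emit_row(['a', 'b,c', 'say "hi"'])  # a,"b,c","say ""hi"""
```

The Forth version would build each of these as a separate tested word and compose them the same way, bottom-up.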
I wonder if GPU core arrays can share ~high-level code like the GA144 does. Passing bits of Forth thunks onto neighbors felt immensely powerful (and like that much more rope to tie yourself into knots).
If you actually look at the way Forth works you'll see that every stack manipulation wastes code space and real-time. Since there is only one data stack there are a lot of stack manipulations going on. Forth programmers are aware of this and do their best to minimize them, which tends to make their incredibly cryptic code even more cryptic.
If the definition of a low-level language is one that bedevils the programmer with minutiae, then Forth is the lowest of the low. I don't understand the fascination others have for it, and don't understand how anyone can like it after actually programming with it. It's horrible.
It teaches you to set things up so that the code doesn't have to do stack manipulations and other minutiae in the typical case. Most other languages seem to encourage modules with general APIs and hard boundaries so that the caller has to unpack/repack/rearrange the data as it enters and leaves. Forth very deliberately encourages developing a holistic system, and it discourages wholesale code reuse from other projects and systems, which gives you the power and flexibility to refactor relentlessly, until only the essence of the computational solution remains.
Forth is definitely a difficult language to work with, particularly in a professional environment where managing turnover is massively important. When I dive in to some Forth code that I've written, to make even the smallest of changes, my brain has to be fully engaged, and that's a non-starter in most environments (Chuck probably thinks this is a good thing; why are we making changes to code we don't understand?). But I am still an avid proponent of learning and applying the principles of Forth, because of the results that it makes possible. It is quite eye-opening to see directly how a system can become 10x as powerful, with 1/10th of the code, if you are willing to do the work and embrace the "minimalist" (I would call it "essentialist") mindset.
One of the Elbrus supercomputers was a stack machine that translated stack operations into out-of-order register operations, quite successfully. There was also an ALGOL-family language for it called El-76.
(see more about Pentkovsky for more interesting stories)
This means that you do not need to sacrifice speed for compactness.
Also, zero-operand ISAs (stack machines are zero-operand) have what can be called a normalizing property: given a stack layout for inputs and outputs, there is only one optimal opcode sequence to achieve it. E.g., the sequence "( b a -- x ) swap -" will be identical in every place where you have to compute a-b (in the typical three-operand RISC case there are 32^3 register combinations for the same computation). This means that you can use something like the Sequitur algorithm [1] to make the code more compact than six bits per opcode.
This is, actually, what Forth programmers often do manually, and in an untyped language that will not complain about stack layout violations after refactoring.
But this does not mean that you cannot use, say, dependent types for Forth programs for better programming safety. And it does not mean that you can't benefit from single (or double) stacks or their alternatives. For example, the Applicative class from Haskell's Prelude is good at expressing concatenative programs, just like those in Forth, I think.
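A toy zero-operand machine in Python can illustrate the normalizing property: the same input/output stack layout always compiles to the same opcode phrase, so repeated phrases are literal repeats that a grammar-based compressor like Sequitur can factor out. The opcode set here is an assumption chosen for illustration.

```python
# Toy stack machine: zero-operand opcodes, so a given stack effect has
# one canonical shortest phrase (unlike a 3-operand RISC, where many
# different register assignments encode the same computation).

OPS = {
    'swap': lambda s: s[:-2] + [s[-1], s[-2]],
    'dup':  lambda s: s + [s[-1]],
    '-':    lambda s: s[:-2] + [s[-2] - s[-1]],
}

def run(program, stack):
    for op in program:
        stack = OPS[op](stack)
    return stack

# ( b a -- a-b ) is always the literal phrase "swap -" wherever it occurs:
print(run(['swap', '-'], [10, 3]))  # b=10, a=3 -> [-7]
```

Because every occurrence of the stack effect is byte-identical, a dictionary or grammar compressor sees exact repeats instead of near-misses.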
I have a hard time resisting the temptation of ROT >R myself at times, and it's true that I never pass arguments to the wrong function in C, while in Forth I do. Even assembly is less bug-prone in my experience. But I think there's probably a there there that I don't fully understand, and I want to.
Sometimes I wonder if Forth would be better off with no stack manipulation words. If you really need to swap, after all, you can X ! Y ! X @ Y @. Chuck left SWAP out of the x18 in the end.
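The variable-based swap above ("X ! Y ! X @ Y @") can be simulated with an explicit stack; a sketch (the two-variable trick follows the comment, the Python framing is mine):

```python
# Simulating the Forth phrase "X ! Y ! X @ Y @" on a list-as-stack:
# store the top two cells into variables, fetch them back in the
# opposite order, and the stack ( a b ) becomes ( b a ).

def swap_via_vars(stack):
    X = stack.pop()   # X !  (top of stack)
    Y = stack.pop()   # Y !
    stack.append(X)   # X @
    stack.append(Y)   # Y @
    return stack

print(swap_via_vars([1, 2]))  # -> [2, 1]
```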
In a stack-oriented language, the stack manipulations are the logic. You could make a more complicated compiler to optimize them, or use local variables, but most people/projects don't bother.
>Forth programmers are aware of this and do their best to minimize them, ...
This has never been true in my experience. The effort goes into ensuring that the stack manipulations are correct.