Many things were taken from Algol, but it's also important to distinguish Algol-60 from Algol-68. Although some features of the former survived into the latter, the two have little in common.
Algol-60 gave us:
- semicolons as statement terminators
- begin/end
- chain assignment (a := b := c)
- conditional expressions (like ?: in C)
- the "type name;" variable declaration syntax.
- while loops (technically both "while" and "for" were syntactic elements of a single loop structure, and could be combined)
Algol-68 mostly influenced other languages via C, which appropriated quite a few of its keywords: void, struct, union, long, and short. (In A68 you could chain arbitrarily many "long"s and "short"s before "int" or "real" - implementations were required to distinguish short int / int / long int, but additional modifiers could be used in an implementation-defined way to designate larger or smaller types if an implementation chose to provide them; if there was a need for 128-bit ints, for example, they'd be "long long long int".) Abbreviated type names come from A68 as well: int, char, bool. So do compound assignments like += and -= (except they were spelled +:= and -:=), and the notion of assignment as an expression yielding the assigned value.
Go arguably takes the consistently left-to-right composite type syntax from A68: arrays were declared as e.g. [0:9]int, although variable name still followed type. Something like "ref []bool x" would mean "x is a pointer to an array of Booleans".
Regarding := specifically, it was the assignment operator in Algol-60 (and thence in the Pascal family). In Algol-68, you would use = to initialize an immutable variable in a declaration, and := to assign to a mutable one - while Go is the reverse: := declares and initializes, = assigns.
Bourne shell took reversed-spelling terminator keywords like "fi", "od", "esac" etc from A68.
C++ seems to have gotten its operator overloading from A68.
It's very interesting to read both specs today. Even Algol-60 was amazingly powerful in some respects - it already had nested functions closing over outer function's variables, and functions passed as arguments, for example (although they weren't first-class, since you couldn't return functions). Then there was crazy stuff like call-by-name, computed gotos, and passing labels as arguments (to which the callee could then goto across stack frames! basically, setjmp/longjmp as an intrinsic language feature). It can be an interesting exercise to try to devise an efficient implementation of all this stuff.
And Algol-68 pretty much reads as a modern programming language, except for some weird terminology invented just for it that the industry never adopted - e.g. "transput" for input and output, or "mode" for types. From a modern perspective, it's hard to understand why it was considered overcomplicated in its day, but it makes more sense when you look at its competitors at the time. The funkiest thing about it from a modern perspective is that pointers were implicitly dereferenced, so it needed a bunch of extra operators to disambiguate things (e.g. "=" for value equality with implicit dereferencing, and "is" for pointer equality).
Here are the actual reports, for those curious. This one is for Algol-60, as revised in 1963 (there was a later "modified report" in 1975, which codified a bunch of extensions and features standardized elsewhere - but reading the original reports is more interesting given the historical context):
> Bourne shell took reversed-spelling terminator keywords like "fi", "od", "esac" etc from A68.
One difference between Bourne shell and Algol 68 is that Bourne uses “done” instead of “od”, because “od” was already used by the octal dump command! The Bournegol source used DO-OD though - https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd...
> ... pointers were implicitly dereferenced, and so it needed to have a bunch of extra operators to disambiguate things
Funny, as it has always annoyed me that static objects and dynamic objects use different syntax (a.b vs a->b), so that if you ever change one, you have to go edit ALL the uses. It would IMO have made sense to have a->b work for both (better still "a.b", if the syntax could be worked out).
I didn't mean implicit dereference on member access, but rather implicit dereference in most non-pointer contexts. For example, suppose you have:
INT x := 0
REF INT p := x
p is a pointer here ("name" in Algol-68 parlance). Now if I want to read x via p, I can just write something like:
p + 1
No dereference operator - because + is not defined for names, the implicit dereferencing is applied (it works for multiple levels, too - REF REF INT would have been dereferenced twice).
On the other hand, the left side of := does not implicitly dereference; instead, it requires the operand to be a REF, and rebinds it. So I could do:
INT y := 1
p := y
and now p references y instead of x. If I wanted to actually mutate x, I'd have to dereference explicitly on the left side, which you do by casting, removing as many REFs as needed.
It all is actually somewhat more consistent than C, in that in C, they had to come up with that whole "value category" thing for expressions - lvalue, rvalue etc - that is orthogonal to its type. E.g. if you have "int x", the reason why you can use x on the left side of assignment is because it's an lvalue, while something like 123 is an rvalue; but the type is just int for both. In Algol, this is reflected in the type - when you say:
INT x := 1
this is actually syntactic sugar for:
LOC INT x := 1
which is in turn sugar for:
REF INT x = (LOC INT := 1)
where LOC is an operator that's kinda like a typed version of alloca() - i.e. LOC INT returns a value of type REF INT that is allocated on the stack (there's also HEAP, which is the equivalent of C++ "new", but garbage-collected). So all mutable variables actually have type REF T, and := requires the left operand to be a REF (and changes what it refers to).
OTOH if you write something like this:
INT x = 1
then x is just a binding, not a name; its type is just INT, and therefore it does not have a valid type for the left operand of assignment. Which means that technically all variables in Algol-68 are immutable - it's just that some of them can be bound to values that are names, and the name can then be made to reference a different value.
I'd say that Algol 68's greatest influence is probably "void", given how often C, C++, Java and C# developers have to write that keyword to this day.
I'm surprised Pascal wasn't mentioned. Its syntax is very similar to Algol and Wirth was part of the Algol X development team. His (rejected) proposals were the genesis of Pascal.
It is interesting that B. Stroustrup first looked at Algol 68 as a potential base for introducing classes into a statically compiled programming language, and then turned to C - probably because Algol 68 was already as complex as C++ is now.
Classes were introduced into a statically compiled programming language in Simula-67. C++ has a lot of Simula legacy, even down to some keywords (e.g. "virtual" is from there, although it meant "abstract" originally).
Actually, modern C and Algol-68 (which doesn't have objects) are roughly equivalent in complexity... probably the only thing C doesn't have is the type-case (Algol-68 unions have a hidden small enumerator that describes the actual type stored within the union).
What it does have is the Algol-60 idea of nested scopes.
For example, if proc A contains a variable X and declares a subroutine B within it, then B can access X, even if you take a pointer to B and call it from elsewhere (effectively that pointer to B also carries info about which instance of A it came from).
These days we've largely forgotten about this idea of how variables are accessed and inherited between subroutines. Hardware designers used to build support for these language features (display registers in Burroughs mainframes). To be fair, as a feature it honestly doesn't add a lot of practical functionality considering what it costs.
We even have the legacy of that in x86 - the ENTER opcode, the second argument for which is "nesting level of the function":
"If the nesting level is 0, the processor pushes the frame pointer from the BP/EBP/RBP register onto the stack, copies the current stack pointer from the SP/ESP/RSP register into the BP/EBP/RBP register, and loads the SP/ESP/RSP register with the current stack-pointer value minus the value in the size operand. For nesting levels of 1 or greater, the processor pushes additional frame pointers on the stack before adjusting the stack pointer. These additional frame pointers provide the called procedure with access points to other nested frames on the stack. See “Procedure Calls for Block-Structured Languages” in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information about the actions of the ENTER instruction."
(Although, curiously, Free Pascal - which also implements this feature - doesn't use ENTER in assembly, last I checked.)
The issue is that if the local variables of A, such as X are allocated on the stack, once you return from A, the stack frame will disappear. So the example you cite can only work as long as A is active (not returned from). If A itself returns a pointer to B, at least the locals of A captured by B must be allocated on the heap and GCed once all references to a particular B disappear. Note that a number of modern languages provide full closures including Go, Haskell, JavaScript, Python, Scheme, Swift.
The nice thing about Algol-68 is that you're in control of where the variables are actually allocated - you'd just need to ensure that anything that the closure might use after the outer scope exits is a HEAP name (i.e. declare it as "HEAP INT x")
exactly .... which kind of makes those proc pointers not as useful because you need to carefully manage lifetimes/etc yourself and failures are, well, not defined
It's the compiler's job to do so, not the programmer's! The programmer can do whatever makes sense. For example, in Scheme
(define (make-counter)
  (let ((n 0))
    (lambda () (set! n (+ n 1)) n)))
is guaranteed to work in any context where make-counter is called. Same in Go, though the syntax is a bit unwieldy due to having to specify types:
func makeCounter() func() int {
n := 0
return func () int { n++; return n }
}
Google's Go compiler is smart enough to allocate only the captured variable (n in this case) for such returned functions on the heap. There should be no failures.
IIRC some algol-68 compiler did remove the restriction and allowed such full closures.
I think that's a language centric point of view - more one would say "it's the language's job to do so" and then write a compiler for the language.
Algol-60 and -68 were created at a time when people were still trying to figure out how computer languages ought to work (in a sense we still are), and it's likely that the idea that procedure-local storage should (or could) outlive the procedure being called wasn't an obvious choice (remember that Algol-68 nests are not just the locals of a particular subroutine, but also the locals of every subroutine in its declaration [not calling] path, as well as globals at level 0).
Algol68 (and Algol60) gave us lots of things we take for granted today but there are also features that have fallen out of fashion
(I'm old enough to have seen language features like objects and functional programming come and go, several times ... it's all fashion .... and I'm cynical enough to know that there's no correct way to do any of this stuff)
"Here is a language so far ahead of its time, that it was not only an improvement on its predecessors, but also on nearly all its successors. "