The most common "system" as they describe it is probably a database. Since it's not feasible to drop all your data and start over whenever you want to make a schema change, we instead have "ALTER TABLE" statements to modify the schema in place. Often, the only practical changes you can make once you go live are those that can be expressed as database migrations.
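A minimal sketch of that constraint, using Python's built-in sqlite3 module (the table and column names here are made up for illustration): the data already in the table has to survive the schema change, so we evolve it in place rather than dropping and recreating it.

```python
import sqlite3

# A hypothetical "live" table with data we can't afford to lose.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# The live-system constraint: no drop-and-reload, so the schema is
# evolved in place with ALTER TABLE; existing rows get the default.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT DEFAULT ''")

rows = conn.execute("SELECT id, name, email FROM users").fetchall()
print(rows)  # → [(1, 'alice', '')]
```

Anything not expressible this way (e.g. in older SQLite versions, dropping a column) forces the awkward copy-to-new-table dance, which is exactly the "restart from scratch" move a live system tries to avoid.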
Traditional, image-based Smalltalk worked this way. Often debuggers and repl-based languages will support some changes without restarting, but the intention is typically to support debugging and exploration, not to preserve data over the long term. Being able to restart and reload is so taken for granted that every web browser supports it and it's exposed to end users.
A compiler is free to optimize non-persistent data structures to the point where the objects you read about in the code might not actually exist. A persistent system requires that the runtime actually has the data described by the language stored in a way that you can inspect it and write new code that uses it.
> Traditional, image-based Smalltalk worked this way.
The first Lisp implementation worked as a "live system": one would boot an image from a tape drive, load and run Lisp code from cards, and could then dump a new image back to tape. That was implemented at the end of the 1950s.
It's funny. Last night I was just reading the chapter in Taleb's "Antifragile" where he argues the same thing as the author of this paper, i.e. that tinkering is at the root of most technological innovations, whereas the common belief is that science precedes all innovations. Coincidence in timing!
> It's funny. Last night I was just reading the chapter in Taleb's "Antifragile" where he argues the same thing as the author of this paper, i.e. that tinkering is at the root of most technological innovations, whereas the common belief is that science precedes all innovations. Coincidence in timing!
I personally believe that you can find examples in both directions: where engineering precedes science and where scientific progress leads to new engineering.
I don’t think Dick would disagree with you; his point was that it’s not inherently a unidirectional path with the scientists as vanguard and engineers in their wake.
> But for them, modularity was the result of careful design, not magic enforced by a compiler or a system. Any underlying programming system can, at best, ease creating good designs.
This was still at a time when programming language research was pointed towards increasing programmers' power: FORTRAN, LISP, BASIC, PL/1 were all attempts at increasing expressive power, automating some bookkeeping, and letting programmers do new and interesting things faster.
That sprang from a variety of factors, but one (especially with BASIC) was the democratization of programming.
About 25 years ago we went the other way: Java, especially Go, and to some extent the GoF went in the direction of restricting what tools programmers would have in their toolboxes, with the aim of making it harder for them to get into trouble, again in the name of democratizing programming.
Method combinations are a powerful mechanism I’ve missed since I stopped using Flavors.
Objective-C had a partial implementation via run_super (an idea inherited from Smalltalk, I believe, but I can’t remember), which gave you control of before and after behavior, but it was backwards in that you had to code it explicitly, as opposed to having your mixin provide an addendum or suffix. This violated abstraction and encapsulation.
Also, combinators other than :before and :after often led to spaghetti code. I used :progn (useful!), :or, and :and (:or in particular was quite finicky), which allowed my users a lot of power and flexibility but mostly allowed them, out of confusion, to mess up their own code. I don’t remember whether these constructs survived into CLOS.
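The :before/:after arrangement described above can be roughly approximated in Python with cooperative mixins, where the mixin wraps the primary method via super() rather than the base class having to call up explicitly. This is only an analogue, not Flavors itself, and all class and method names here are hypothetical:

```python
# Rough Python analogue of Flavors-style :before/:after method
# combination. The mixin supplies the addendum and suffix; the
# primary class stays unaware of it.
class LoggingMixin:
    def save(self):
        self.log = getattr(self, "log", [])
        self.log.append("before save")   # :before behavior
        super().save()                   # invoke the primary method
        self.log.append("after save")    # :after behavior

class Record:
    def save(self):
        self.log = getattr(self, "log", [])
        self.log.append("primary save")  # the primary method

class AuditedRecord(LoggingMixin, Record):
    pass

r = AuditedRecord()
r.save()
print(r.log)  # → ['before save', 'primary save', 'after save']
```

The inversion the grandparent comment complains about is visible here in reverse: the primary class (`Record`) never mentions the mixin, so the before/after behavior composes without the base code being edited, which is closer to what Flavors provided than the explicit-super style.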
By the way, these terms come from ice cream. There was a popular ice cream place called “Emack & Bolio’s,” often referred to as Emack’s. So once there was an Emacs editor, naturally there was a typesetting package called Bolio; the “flavors” system’s standard base class was si:vanilla-flavor; and of course “mixins” came from the practice of mixing in sprinkles, crushed cookies, etc. (as opposed to just using them as toppings).
Gabriel really has his finger on something in his identification of the chasm between the systems people and the language people, one that goes right down to the vocabulary.
The dynamic versus static debate is basically systems versus theory, and in that debate, the word "type" often means something else: meta-data describing an object, available at run-time, versus logical property of a program node.