First, some personal background: When I was an intern at Harlequin, I was one of the first users of Harlequin Dylan. Later, when CMU stopped work on Gwydion Dylan, I helped start an open source project to maintain it. And when Harlequin wanted to start open sourcing their Dylan (which later became https://opendylan.org/ ), I met with them to help hammer out licensing details. Harlequin released a lot of cool code.
What was cool about Dylan? Well, it was basically a relative of Common Lisp, but with an infix syntax, and it had been simplified to improve code performance. It was a more ambitious language than Java, with full closures, basic macros, and generic functions. Dylan had static typing if you wanted it, or you could leave code untyped. (Unfortunately, collection and function types were fairly weak due to the lack of standardized generic types.)
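To give a flavor in Common Lisp terms - Dylan's object system was essentially CLOS, so this is a rough CL sketch of the style, not actual Dylan, which used infix syntax:

  ;; Generic functions dispatch on the classes of their arguments;
  ;; Dylan inherited this model from CLOS.
  (defclass shape () ())
  (defclass circle (shape) ((radius :initarg :radius :reader radius)))

  (defgeneric area (shape))
  (defmethod area ((c circle))
    (* pi (radius c) (radius c)))

  ;; Optional typing: declarations are hints the compiler may use, or
  ;; can be omitted entirely (Dylan wrote them as  x :: <double-float>).
  (defun norm (x y)
    (declare (double-float x y))
    (sqrt (+ (* x x) (* y y))))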
In the end, Apple's abandonment of Dylan and the rise of Java united to make Dylan irrelevant. But it was a fun language, and I had a lot of fun hacking in it. I think that the closest popular language at the moment, design-wise, is probably Julia.
I still remember one 20-hour day where a friend and I set up tower computers in a cozy basement room at Dartmouth, and I eventually convinced the Gwydion FFI tool to parse 10,000 lines of Linux headers.
The implementation was open-sourced and had a fair amount of work put into modernising it. Things seem to have gone quiet the last couple of years, but it's out there if anyone wants to try it:
I think you're right. Moon and I exchanged a little mail about it a few years ago. He wrote to tell me he liked Bard, my Dylan-influenced Lisp, and I think he said PLOT wasn't implemented.
Interestingly, he also said he had a solution to class linearization that was superior to C3, but that he couldn't remember the details. I asked for more information, but he said that if he still had the notes, they were in the attic or something.
The wiki page makes a valid point: seemingly, heavy Lisp users will argue against non-s-expression syntax, but happily use prefix notation for quote, and infix for pairs...
Pretty much. Lisp programmers seem to think writing the ast with parentheses is somehow superior to other syntax (except when it comes to quote and dot) - while what you really need is a good way to manipulate the ast (for good/easy macro support).
The Common Lisp variant is pretty broadly supported afaik - and there's a Scheme variant and a #lang for Racket:
Still, if what you want is "sane" (IMNHO) syntax - you're probably better off using Julia. I don't know why a Dylan-like subsetting of Common Lisp/CLOS hasn't been more successful. Maybe people just love different corners of the Common Lisp spec too much?
> is a good way to manipulate the ast (for good/easy macro support).
S-expressions were not initially used in Lisp because of macros. Early Lisp had no macros, but it did have s-expressions, which were used as a syntax and representation for lists and tree-like data.
S-expressions were first used as a data representation, and people had to use them because programs were represented as s-expressions. The programs were written as m-expressions in some algebraic notation, but had to be translated to s-expressions to be executed by Lisp. The expectation was that s-expressions were only an implementation detail and that programmers would use some other syntax for their programs. There were some larger efforts to design a successor to Lisp 1.5 (called Lisp 2) with such a syntax and have that as the default/main Lisp dialect. Eventually this project was cancelled.
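For example, the Lisp 1.5 manual wrote functions in M-expression notation, which the programmer then hand-translated into the S-expression form the machine actually read - roughly:

  M-expression (paper notation):
    ff[x] = [atom[x] -> x; T -> ff[car[x]]]

  S-expression (what was actually entered):
    (DEFINE ((FF (LAMBDA (X) (COND ((ATOM X) X) (T (FF (CAR X))))))))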
The advantages of s-expressions in the context of Lisp were/are:
* not dependent on line-based data entry and not sensitive to whitespace
* one could easily feed code as s-expressions to an interpreter; this could be a Lisp interpreter, or an interpreter for some other language using s-expressions.
* the parsing/scanning/printing/formatting of s-expressions is reusable in different contexts. This can be done with or without knowing the contents of the s-expressions. One can easily format s-expressions based on general rules, but one can also write special-purpose code formatters for Lisp. Lisp has this built in with the pretty-printer.
* generally, writing code transformations (not just Lisp macros) is a bit easier with richer data structures than with strings. For example, write a bunch of rules as s-expressions and transform/interpret them. Thus different interpreters/compilers can easily be written, not just a Lisp interpreter/compiler.
* interactive manipulation of s-expressions and s-expression-based code is relatively easy. This helps a lot with REPL-style programming. Marking (sub-)expressions, transposing them, replacing/moving them then gets relatively easy. You get a poor man's syntax-directed editing very easily. If you want an IDE to make a Java expression available broken up into its subexpressions, it has to use a much more complicated parser, which then has very detailed knowledge of the language - which is both good and bad. With s-expressions you still get some structure editing, while on that level there is very little commitment about the syntax of the actual contents (is it a Lisp program? Is it a bunch of rules for a firewall? Is it a list of diagnosis rules?). A small sketch of that last point follows below.
Syntax-directed editing is available in many tools, but rarely is it as interactive and basic as in Lisp with s-expression syntax, which is also very robust against accidental reformatting (omitted newlines, added whitespace, ...). Enter an s-expression and it's always easy to reconstruct the visual structure.
Since (sub-)expressions are delimited left and right with the characters ( and ), it's always clear where one starts and ends when displayed or read. With a syntax that has less visual structure, or one which needs to be reconstructed with a parser, this is much more difficult and less intuitive to use.
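To make the firewall-rules point concrete: READ gives you the parser for free, and the result is ordinary list data. A minimal Common Lisp sketch, with a made-up rule format:

  ;; Hypothetical rule format; READ parses it with no extra machinery.
  (defparameter *rules*
    (with-input-from-string (in "((allow tcp 22) (allow tcp 443) (deny udp 666))")
      (read in)))

  ;; Transforming/interpreting the rules is plain list processing.
  (dolist (rule *rules*)
    (destructuring-bind (action protocol port) rule
      (format t "~A: ~A port ~A~%" action protocol port)))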
These are all things that can be done with (a) rich editor(s) with access to the ast - see things like Colorforth and most Smalltalks. The latter is of course the canonical example, where representing an array of numbers as a spreadsheet-like widget becomes natural and integrated.
I also think most of these points fall under the "code is data" idea - where text manipulation (reformatting, pretty printing) is somehow seen as free/easy - as opposed to working with an ast.
My point is that s-expressions might be more simplistic, but not really that much simpler, than having access to the ast.
The fact that you have to choose between not being able to represent text (ASCII) and a variable-length encoding also makes s-expressions less appealing today than they once were.
> These are all things that can be done with (a) rich editor(s) with access to the ast - see things like Colorforth and most
Sure, see what I wrote. It's just less easy/basic to use.
> Smalltalks
Quite a bit more complex and very different to use. Just compare Xerox' Interlisp-D with their Smalltalk 80. Smalltalk 80 usage is very different from Interlisp-D.
> I also think most of these points fall into the "code is data" ide
That was my point. Lisp s-expressions are used in Lisp not just because one can implement macros with them.
> The fact that you have to choose between not being able to represent text (ASCII) and a variable-length encoding also makes s-expressions less appealing today than they once were.
It's not clear to me what you mean with that. Can you give an example?
>> I also think most of these points fall into the "code is data" ide[a]
> That was my point. Lisp s-expressions are used in Lisp not just because one can implement macros with them.
Right, and I held up macros as an example of manipulating the ast.
I may be wrong, but for many types of data you would not typically use s-expressions other than as an intermediary representation - your SQL is "compiled" to binary data structures, as are your images - and the machine won't be "running" s-expressions directly either. I think acknowledging the complex nature of our systems can be a benefit - and insisting on a simplistic abstraction can do the users (programmers) a disservice.
>> The fact that you have to choose between not being able to represent text (ASCII) and a variable-length encoding also makes s-expressions less appealing today than they once were.
> It's not clear to me what you mean with that. Can you give an example?
ASCII can't represent international text - but it is a simple encoding to work with.
We've pretty much landed on a complex (by necessity) representation: variable length Unicode with support for different reading directions.
While at the high level the parsing rules are similar ("give me the parenthesis-like grapheme, parse numbers based on various locale representations"), Unicode s-expressions aren't really simple any more - they are opaque binary streams that need parsing into reasonable abstract syntax tree structures. Even UTF-32 doesn't really change this.
And I think languages like Nim (or even, to a degree, Python) show that such structures can be made pleasant, intuitive and powerful to work with. They don't limit us arbitrarily to simple symbolic data (e.g. the unpleasantness of working with a megapixel bitmap represented as s-expressions) - and they help lift our minds from "we can/should force everything to be text" to "we should be able to work with our code and data in many representations", accepting the underlying tree structure. And (subjectively, obviously) I think that on the program/dsl/logic side, s-expressions are less human-friendly than something more like Python.
It's one important reason why many of us don't prefer s-expressions for pseudocode.
> I may be wrong, but for many types of data you would not typically use s-expressions other than as an intermediary representation - your SQL is "compiled" to binary data structures, as are your images - and the machine won't be "running" s-expressions directly either.
S-expressions are both an internal representation and an external textual format. Lists will be parsed into cons cell trees.
See the longer example below. There is nothing which prevents me from working with structures/records and arrays as s-expression-based text.
There is also nothing which prevents us from providing alternative displays of those, which are not printed as text, but as mixed text and graphics, or graphics only.
In text I would typically give the data symbol names, but in a graphic display I might display them as images.
Symbolics did add a presentation-based user interface for their Lisp Machine operating system with the release of Genera 7.0 in 1987, where the I/O can be done over rich streams and where the system remembers the complete correspondence between graphical display things and the underlying data. Thus the user interacts with various representations (called presentations) of the data. For example, if one has a list of musical notes, you could interact with them as symbolic notes, or as a graphical display of notes.
Typically I would store larger images outside of text files (either in binary Lisp files called fasls or in special image files), but for some purposes it might be okay. For example if my application needs some icons I can store them directly in textual code as s-expressions.
Note also that some Lisp applications use textual s-expressions with serialized data, but those files are not supposed to be edited with a text editor, but with some other tools.
There is also no reason that Lisp could not use Unicode (or other encodings) in s-expressions...
> S-expressions are both an internal representation and an external textual format. Lists will be parsed into cons cell trees.
But at that point, the "internal" and "external" s-expressions are no more tightly coupled than, say, Smalltalk syntax and its binary representation.
And I think (again, subjective) that s-expressions as a user-facing, textual syntax, might be good for some subset of problems - but I don't really see the benefit of forcing all users to work at the level of a compiler internal representation - the simplicity is somewhat artificial and superficial.
I guess I think we can do much better than "smart typewriter" as a way to interact with executable data.
> But at that point, the "internal" and "external" s-expressions are no more tightly coupled than, say, Smalltalk syntax and its binary representation.
Less. S-expressions are not a representation for Lisp syntax. They are a data format.
Lisp syntax is defined on top of s-expressions.
We can enter an s-expression and Lisp will execute it. Here it is actually an interpreter, which walks the s-expression structure according to the evaluation rules.
CL-USER 47 > ((lambda (a b) (+ (expt a 2) (expt a 2))) 10 (* 20 3))
200
We can also compute the same expression:
CL-USER 48 > (list (list 'lambda '(a b) '(+ (expt a 2) (expt a 2)))
                   10
                   (list '* (* 10 2) (+ 1 2)))
((LAMBDA (A B) (+ (EXPT A 2) (EXPT A 2))) 10 (* 20 3))
We can then let Lisp run that computed expression. Note that I refer to the expression via a variable (Lisp sets * to the last value). It is data, not text.
CL-USER 49 > (eval *)
200
> s-expressions as a user-facing, textual syntax
Lisp does not use s-expressions as a user-facing, textual syntax. It uses s-expressions as a user-facing textual representation for a data syntax. The Lisp syntax is defined as a data syntax.
For example the function call form is: function args*
Thus we can compute lists which have a function name at the first place and the correct number of arguments follow:
CL-USER 55 > (describe (list 'member ''pi ''(pi 3.14)))
(MEMBER (QUOTE PI) (QUOTE (PI 3.14))) is a LIST
0 MEMBER
1 (QUOTE PI)
2 (QUOTE (PI 3.14))
>> I guess I think we can do much better than "smart typewriter" as a way to interact with executable data.
> The interface provided by Lisp is way different from Smalltalk and opens up a fully new perspective: computing of code.
I suppose my point is that the s-expressions are an illusion - they've been a powerful tool to let people see you can, as you say, "compute code", which I see as a proxy for "compute logic" (where "code" is a string, and logic is executable).
Perhaps we'll have to agree to disagree - but I think the surface syntax allows "logic" to masquerade as "text representation" - while really being opaque binary streams - and that little is gained by holding on to them on general principle.
Not nothing; they certainly remain a useful representation - but as Lisp shows with prefix and infix notation (quote and dot/pair) - there's really not that much to be gained by such ungainly syntax.
I know it's still used in several companies, one of which is quite busy this time of year. One of Smalltalk's big problems was that the companies using it aren't very talkative about what they use. Smalltalk is in a lot of back ends keeping things moving. Cincom and others still make a lot of cash off it.
> the integrated IDE/program experience, which Ruby, Python, Elixir do not give.
True, but be aware that Java IDEs give you ten times more features and refactorings than Smalltalk IDEs can ever give you, because of the simple fact that Smalltalk is dynamically typed (which means it's pretty much impossible for them to provide automated refactorings: most of these need to be supervised by a human).
I am aware; my point still stands: the refactorings available in Java IDEs are more powerful, safer and more numerous than the Smalltalk IDE ever offered.
Which shouldn't come as a surprise since Smalltalk is dynamically typed, which means automated refactorings are impossible to achieve without human supervision.
A Smalltalk image/IDE being dynamically typed doesn't mean anything, as it can have a full live-above-AST-level knowledge of everything in the program -- including the dynamic types of everything.
Think of it as ten times the power of Java's reflection.
What you can do in such a system (without human intervention) makes IntelliJ/Eclipse look like Pico.
It's not the same. With a Smalltalk system, there isn't really an operating system. All activity we would normally ascribe to an OS is simply implemented in the Smalltalk environment, using Smalltalk objects, running in a live image. I recommend checking out the Smalltalk-80 "Blue Book", which is beautifully written. You can also play with Pharo, just be sure to understand that it's adapted for modern personal computers and therefore has to run as a virtual machine. A system that could run Smalltalk natively would be much better.
They didn't survive for good reasons. They had interesting ideas and concepts that were reused in later languages, but at the end of the day, the strength of a language is not just its syntax and semantics.
Smalltalk single-handedly invented "the tooling", thanks to the various "browsers" and refactoring and testing tools, all made possible because of the dynamic nature of the language, which made the code almost its own runtime representation.
> the performance
Smalltalk pioneered many JIT techniques in the early '80s.
> the VM, the GC
All Smalltalks, as far as I can tell, are garbage-collected and run in VMs.
> the libraries
As a sibling comment states, that's what you get after decades of usage; can you tell with any certainty that Smalltalk would have less libraries, were it popular for longer?
(Actually, there are objective problems with distributing libraries in Smalltalk, stemming from its "everything in global namespace" nature; there were many systems designed to alleviate this problem.)
> is dynamically typed so there are very few refactorings that can be performed safely without the supervision of a human
I'm afraid you don't understand Smalltalk environment in full.
In an ordinary language, the code you write is just text. The type system adds certain information to the bits of text on your screen and prevents you - via compiler errors - from arranging the pieces of text in a way which would probably make the program explode when run.
It works most of the time, except when it doesn't and you end up passing void pointers and Object references. In effect, instead of only dealing with the type system, you have to deal with both static types and runtime behavior. I guess you just like it.
When working with Smalltalk, however, you work in a live environment. In Smalltalk, there is no source. Whatever you write is not just text - it's alive, then and there, and the system can trace each identifier and get its runtime type as you type, including concrete types of polymorphic types and other constructs frequently impossible to check with a static type checker in most languages.
In effect, Smalltalk gives you more contextual information to work with than most statically typed languages. Were you ever puzzled about which overload of a method (of polymorphic classes) will be called in a particular scenario? In Smalltalk, you can just ask the system and it will tell you.
Coupled with other features of the language, this makes it really easy to perform automated refactorings. As others noted, the very notion of automated refactorings comes from Smalltalk. I'm not very familiar with Java tooling, but I'd be really surprised if it enabled anything more than what Smalltalks offer.
In summary: static type systems are not the only way of adding contextual data to the source. There are other, arguably more powerful (access to the "real" runtime types) ways of doing it, and Smalltalk implements one of them.
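(For what it's worth, a live Common Lisp image gives you the same kind of information - you ask the running system instead of reading annotations. A trivial illustration; exact printed output varies by implementation:)

  CL-USER > (type-of 3.14d0)
  DOUBLE-FLOAT

  CL-USER > (class-of (make-instance 'standard-object))
  #<STANDARD-CLASS STANDARD-OBJECT>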
Don't worry, though - the misconceptions you hold are very popular and it's not your fault for believing them. The problem with programming is that it evolves so rapidly, with many approaches failing along the way, that it's rather impossible to provide a comprehensive summary of the best ideas; and without such a summary we rely mostly on gossip. Moreover, the benefits of some approaches are very hard to grasp without experiencing them first-hand, and who has the time for that?
> I'm afraid you don't understand Smalltalk environment in full.
I'm afraid you don't understand the difference between a statically and dynamically typed language.
I am well familiar with Smalltalk and its image system; I was using it on Sparc and NeXT stations more than twenty years ago.
What you don't understand is that the absence of type annotations (Smalltalk) makes it impossible for automatic refactorings to execute safely. It's mathematically impossible.
Smalltalk's refactoring browser required a lot of human supervision to perform its refactorings. You could turn the prompts off (maybe that's what you did) so you'll have the illusion of an automatic refactoring, but the code you get in the end is unsafe and might be different from the one you started with.
You don't have this problem with a statically typed language, where the automatic refactorings are guaranteed safe.
> Don't worry, though - the misconceptions you hold are very popular and it's not your fault for believing them
Please stop the patronizing tone, I'm pretty sure I know more about this subject than you do, having worked in this field for 40+ years.
You just don't know what you don't know. It's okay, but just don't act superior to random people on the Internet or you'll just look like a fool.
> What you don't understand is that the absence of type annotations (Smalltalk) makes it impossible for automatic refactorings to execute safely. It's mathematically impossible.
What you don't understand - I write this jokingly and let's not escalate this further - is that there are more ways of getting the type information than explicit annotations in the source code.
But, thinking about this, I'm inclined to agree that there are, in principle, fewer safe refactorings possible in a language which relies solely on these "other ways". I of course completely disagree that there are no such refactorings at all, and - additionally - there is no reason that Smalltalk could not allow for some annotations.
Have you seen NewSpeak, a Smalltalk dialect by Gilad Bracha? I admit I was thinking more about it and similar systems than about pure Smalltalk of the '80s. Sorry about the confusion.
> but the code you get in the end is unsafe and might be different from the one you started with.
> You don't have this problem with a statically typed language, where the automatic refactorings are guaranteed safe.
Are you sure that all the refactorings in a statically typed language are safe? Even if the type system is unsound - which it almost always is? I don't believe it to be true, but I can be persuaded otherwise if you present a proof.
> Please stop the patronizing tone
Then please stop stating unconditional superiority of static type systems.
> You just don't know what you don't know.
Indeed. That's normal, though. Still, I'm learning. Please help me in this and provide arguments backed by facts instead of writing in broad generalizations.
> that there are more ways of getting the type information than explicit annotations in the source code
The only way that's been studied is running the code and inferring types based on the runtime behavior of the program.
This is still extremely error-prone, since you never have a guarantee that these runs are exhaustive enough to present the entire type spectrum that certain objects can receive. Besides, it's also obviously completely impractical (and defeats the purpose) to actually have to run your code before you can gain some type information about it.
No, there is really only one way to acquire the type information necessary for safe automatic refactorings, and it's to have type annotations in your source.
> Have you seen NewSpeak, a Smalltalk dialect by Gilad Bracha? I admit I was thinking more about it and similar systems than about pure Smalltalk of the '80s. Sorry about the confusion.
Yes, I followed NewSpeak when Gilad was working on it but I had little interest in it since Gilad persevered in the dynamically typed approach that is flawed on so many levels. StrongTalk was a more interesting experiment, in my opinion.
> I of course completely disagree that there are no such refactorings at all
There are very few of them, such as renaming a local variable. That's pretty much it. Anything else that goes beyond a very tiny lexical scope is impossible to automate safely.
> Then please stop stating unconditional superiority of static type systems.
It's pretty much a fact today. A decade ago, you could have argued that dynamically typed languages let you prototype faster and are less verbose than statically typed languages.
With today's generation of statically typed languages, these two advantages no longer apply and there is really no good reason to start a large project with a dynamically typed language.
> With today's generation of statically typed languages, these two advantages no longer apply and there is really no good reason to start a large project with a dynamically typed language.
There is a lack of evidence for that, especially given that 'today's generation of statically typed languages' is mostly unused and there is very little practical experience with it. Most of the industry hasn't heard of it and doesn't use it.
That was definitely the intention. It was intended as a systems and application programming language for the Newton.
The original Apple implementation compiled to native ARM code. The runtime was intended to be competitive with C, but by the time we approached that target, large parts of the toolbox had already been written in C, and Walter Smith had created NewtonScript as a scripting language that worked as an alternative for non-performance-critical code. At that point the Cambridge team re-targeted the implementation to build Macintosh applications, but that wasn't a sufficiently compelling (to Apple management) use, and we had lost our executive sponsor when the director of the Apple Cambridge lab was promoted to a position in Cupertino.
(I'm the “Oliver Steele” mentioned on that page. I went to Apple Dylan from Skia – later released as QuickDraw GX – another technology that missed the Newton boat.)
Everything you say is right. Some details look maybe a little different when viewed from the Newton team in Cupertino.
Yes, as the Dylan runtime got better, the Newton team wrote an OS in C++ with NewtonScript for scripting. There's more back story there, though.
Before any of that work started, an earlier iteration of the OS had already been written in the early Dylan, back when it was still called Ralph. There seemed to be some level of discontent in a few different quarters with that OS. Different people expressed different criticisms (that I heard). Different people assigned blame for their dissatisfactions differently.
It was all pretty good-natured, from what I remember.
But then John Sculley told Larry Tesler: Let There Be an OS Written in C++. And there was. There is various gossip about exactly how that came to pass, but it did come to pass.
Larry asked me and a couple of other people to see what we could do with Dylan. We took that ball and ran with it--maybe too far. Like maybe Larry wanted us to run it down to the end zone and instead we ran it across the border and into Patagonia somewhere. We, the five of us or so, wrote a whole OS in Dylan, essentially competing with the 60 or so people working on Sculley's mandated C++ OS.
I don't know why we did it exactly, except that we could, and it was really interesting. More to the point, I don't know why Apple management let us keep going with it so long. Morbid curiosity maybe? It did work pretty well, and I think there were some pretty interesting ideas in it.
From a business point of view, though, it was silly. Obviously Apple was never going to ship 2 Newton OSes. Equally obviously, it wasn't going to choose to develop our weirdo Lisp OS instead of Capps' C++ OS that was developed by almost the whole Newton team in response to an order from on high.
The period you describe, when Dylan was getting pretty good and the Capps OS had a lot of features and NewtonScript was pretty well working was well after the inflection point where the main group started working on the C++ OS. Our smaller team started on the second Dylan OS about the same time the other team started on their OS, and we made good (though pointless) headway. We used the same microkernel they did, and the same graphics infrastructure. Everything else in ours was written in Dylan. Dylan worked great.
In fact, that version of Dylan remains my favorite general-purpose programming language ever. I pretty much lost interest in Dylan when it stopped looking and working like a Lisp, but I've never liked anything else as much for day-to-day programming--not even Common Lisp, which is my go-to nowadays.
Creator of NewtonScript here… Hi Mikel! I'm glad to hear you say it was good-natured, because I loved Dylan—having been the Newton team's Cambridge liaison and Ralph cheerleader since the Cambridge team joined Apple. I was just head-down trying to build a small language that had at least some of the goodness of Dylan so we didn't ship something that was just a C++ app in a box!
I know you loved Dylan, and I know what you were up to with NewtonScript.
I did a little work in NewtonScript for the shipping Newton before I migrated to NeXT. It worked. Or it would have, if Newton had been a better fit for the market of the time.
The Cambridge team (or big chunks of it) later became Clozure Associates, Lisp consultants and maintainers of Clozure Common Lisp. I've worked for them off and on over the years on some interesting stuff.
Well, I ported Skia's memory manager to the Newton, and added some tricks to make it more friendly to the Newton's virtual memory system. So some of Skia made it over ... :_)
The dropping of Dylan was a little more nuanced and maybe somewhat nasty. The decision was made to ship "junior" (the handheld unit that became the MessagePad 100) using C++, while work was still being done in Dylan on "senior", the tablet unit. Eventually the Junior project grew in importance ("Holy crap, we have to ship this in a year") and it was all hands on deck to get it out the door. For a while there were a bunch of Dylan programmers roaming the hallways, clutching copies of the C++ Annotated Reference Manual and looking really unhappy. The writing was on the wall; Senior got canceled a few months later and a bunch of the Dylan folks quit.
I was one of the Dylan programmers working on Senior (and Cadillac). I used to have a couple of the hardware prototypes, but I don't anymore.
I don't remember things being nasty, exactly, although I do remember that Larry Tesler authorized the switch to C++ because he was told to, rather than because he wanted to. I remember he looked kind of disappointed. The reason that I and about four other people kept working in Dylan is that Larry asked us to see what Dylan could be used for on Newton. Our answer was "pretty much everything--see? Here's a working OS."
Realistically, that was the wrong answer, and we should have known it. I think we just got caught up in the really interesting intellectual challenge of inventing a novel OS around a novel Lisp, and nobody stopped us. As I said in the other comment, I don't know why management let us keep going as long as they did, but I'm grateful.
I might well have been one of the unhappy faces you refer to, but C++ had nothing to do with that. By that point I had been using C++ regularly for about 4 years or so, ever since Apple first got AT&T's CFront, and in those days I still actually liked it.
Admittedly, I liked Lisp a lot better.
But no, C++ was not the source of my unhappiness. My unhappiness was because I had just poured heart and soul into a serious piece of work for about two years and I was having to say goodbye to it.
That's when I went to NeXT. A friend of mine had gone to work for them managing part of the software group and convinced Steve that he should hire me. So I went to NeXT and talked about Newton (because after two years of 100-hour weeks it was all I could talk about anymore) and Steve pitched me his hard sell about how I should be at NeXT.
The development of Newton became famously stressful. I mentioned 100-hour weeks; I wasn't exaggerating. The roughly 60-person main OS team on Newton (that's a ballpark figure; I could be exaggerating or lowballing it) were putting in serious hours to meet ship targets that were gradually solidifying and growing more fraught. Our little team (Matt Maclaurin christened our project "bauhaus") worked maybe even harder because, I think, we knew on some level that our work wasn't likely to be used, and we were trying as hard as we could to beat that fate through sheer effort.
Eventually management got us together and told us that we had met and exceeded every criterion for success, but that it simply didn't make sense to continue developing two OSes. We would need to archive the project and shut it down, and prepare for reassignment to other duties in Newton.
None of us was surprised. We had been aware of the likely outcome for months, and had been trying increasingly long-shot efforts to make the project appealing to someone in Apple. In those days (and perhaps nowadays, too, for all I know), Apple didn't really invent products by having some visionary leader invent a goal and directing engineers to fulfill the vision. Instead, it tolerated skunkworks projects hither and yon, and the visionary leader(s) cherry-picked the ones they thought most promising. Newton itself had been created as a means to keep Steve Sakoman from leaving Apple, rather than as a product vision. So you can perhaps forgive us for imagining that if we just made our project good enough, Apple would find a place for it.
It didn't happen.
As I said, none of us was surprised. We were let down, though, by the sheer intensity of our efforts followed by their sudden end.
I worked on the shipping Newton for a while, writing user-facing code mostly in NewtonScript.
A friend of mine whom I had met at Apple while working on another project had gone to work for NeXT. He was managing part of their software group. We socialized occasionally, and he noticed a change in my demeanor. I explained that Apple had finally shut us down and that I was feeling a little directionless. He said I should come to work for NeXT.
If I remember right, I sort of put him off. I had no ambition at that moment. I was still grieving for our lost project, and for the best programming language I had ever used (it's still my favorite to this day).
He persisted. A little while later I was invited by Steve Jobs to go up to the NeXT offices in Redwood City.
Perry was a big Steve fan. I wasn't particularly, but I did like the NeXT hardware and OS, and Perry knew it. He sold the visit on the basis of the cool hardware and software and getting to see how it worked at NeXT, where all the hardware was NeXT boxes. I bit.
My meeting with Steve was memorable. He carved out a fairly large chunk of time to give me the hard sell about how I should give up on Apple and come change the world with NeXT. I hadn't met him before. His famous charisma was real enough. It was a little odd, though, too. He tried a bunch of different angles to convince me, and he could tell really quickly when it wasn't working. In fact, nothing worked in that meeting. I was skeptical that NeXT would survive, and all of Steve's pitches were pegging my BS meters. It was kind of cool to see him start a pitch, recognize that it wasn't working, and instantly flip to a different one, as if changing channels on a TV. Eventually he gave up and ended the meeting.
I don't remember how long it was. Usually my sense of time is pretty good, but I guess I was too wrapped up in the experience to keep track.
Later (I think it was later) I was invited to be interviewed by several people from their software team. That was kind of weird. Some of those people tried to make it into the kind of hazing ritual that is currently popular in tech interviewing. I performed poorly in that part of it, as I always do, regardless of whether I'm any good for the job. The manager guy running the interview made them stop--I'm guessing because it didn't matter what those results were, Steve wanted to hire me.
I went home and thought it over. They offered me more money than I was getting at Apple. I would get to work on NeXT boxes. I liked NeXT boxes. I had a cube of my own that I had sprung for out of my own pocket. My friend had been right about networked NeXT boxes being cool. You could walk into any office on the NeXT campus and log in, and you would get your own familiar desktop environment. That wasn't possible at Apple. I liked Objective-C. I liked Allegro Common Lisp, which came with my NeXT box. I liked Interface Builder--I had learned to like it when it started out as an add-on for Expertelligence Common Lisp.
So, despite my misgivings about Steve's reputation as a jerk, and my mixed feelings about his high-BS sales pitch, and my estimation that NeXT was likely to wither away once Steve ran out of billionaires to soak for "investments", I ultimately decided that the technology was cool enough and NeXT's runway was still long enough that I could go to work for them for a while and have a good experience that would be educational and fun.
And I did.
I didn't stay very long. The thinking was that I would support BSD commands and libraries and several of the apps that shipped with Nextstep for a while, as I decompressed from two years of Newton. Then, when I was feeling perky again, they would let me design a new filesystem. They were intrigued by some of the thinking we did about storage for Newton, and they wanted to see if I could transfer some of it to make a filesystem for NeXT that could be simpler and more accessible for non-techie users while preserving the semantics of UNIX filesystems. That sounded challenging (especially since I was no filesystem expert) and fun.
But, alas, it was not to be. A colleague from the bauhaus project kept bugging me to go to work for his startup. It was a tiny little thing based in Santa Cruz doing contract development of multimedia projects for several clients. It was a bad idea to go to work for him. I had worked for him before and hated it. He was offering me a little more than half what I was getting at NeXT. I lived with my wife and small children in Santa Clara. His office was in Santa Cruz. He promised I wouldn't have to commute, that I could work from home, but he changed his mind soon after I started.
So why did I leave NeXT? One of my colleagues there was a project manager who later told me that Steve blamed her for my leaving, claiming that she had given me stuff to work on that was beneath me. That wasn't it. The truth is that I liked the Santa Cruz guy and wanted to make a collaboration with him work (that was a bad idea), and I was pretty sure NeXT was going to end up petering out, the way bauhaus had. It had nothing to do with the work I was given at NeXT, which I enjoyed tremendously.
In fact, I enjoyed pretty much everything about being at NeXT, except for the constant atmosphere of terror that surrounded Steve.
I'm not familiar with Newton's filesystem. What was different about it? I was too young to have one when it was out, and the one I was given a few years ago died.
Newton's frames were much simpler than full-blown frame systems designed for knowledge representation--just mutable finite maps, really. Basically, soups were persistent graphs of maps (or dictionaries, or hashes, or objects, depending on your language background). Newton apps were also soups.
I didn't actually work on the shipping soups. I worked on bauhaus, the Lisp-based Newton OS, which had its own frame system that was a little more like what the Wikipedia article describes, though still simplified when compared to a full-blown knowledge representation system.
Apple had one of those, too, by the way. It was called MacFrames, and it was invented by Ruben Kleiman. Both Matt Maclaurin and I had been MacFrames users before getting involved with Newton. MacFrames later evolved into SK8, the HyperCard-on-steroids project that never made it past management gatekeepers into the wild. I worked on SK8, too, after I returned to Apple post-NeXT, and when I ultimately left Apple it was to go to work for folks that I had met working on SK8.
That's yet another story. Those folks founded a startup named Reactivity, where I worked for another 7 years. Some of them are famous now--for example, one of the founders was John Lilly, formerly of Mozilla and now with Greylock; and another was Mike Schroepfer (everyone called him "Schrep") who is now Facebook's CTO.
I loved Reactivity, too, but I left it because of a health catastrophe. It was later acquired by Cisco, where, coincidentally, I'm doing some work now.
I'm compressing several more tomes worth of stories into the last couple of paragraphs there. I figure I can only ramble so much before it becomes a bore.
They're called "frames" because Bill Luciw used them like KR frames when he built the Intelligent [sic] Assistant. Built-in dictionaries/hashes weren't a common feature of languages at the time, so I had no better idea what to call them. :) I thought the minimum set of data types you'd need for a usable language is primitives, arrays, and records; frames are how you do records.
Right; I remember your discussions with Capps about them in the Living Room.
I agree that built-in finite maps were not a common feature of the languages at the time, and that "frames" was a decent word for them.
After Newton I worked on Ruben Kleiman's SK8 for a while (I had used SK8 when it was still called MacFrames for a personal UI-system project, and then later I worked on Matt Maclaurin's GATE, which also used MacFrames). SK8 was more of a full-blown frame system (plus other things; if Newton's Dylan is my all-time favorite working language, SK8 is my all-time favorite IDE).
The bauhaus frame system was somewhere in-between. It tried to provide a foundation for more frame-system features without actually implementing the more complicated and esoteric ones, like multiple parallel and extensible inheritance systems and truth-maintenance systems and so on. Larry Tesler wrote the initial version of it during his sabbatical, and then I took it over when he, as he put it, "put his executive hat back on."
1. SK8 was implemented in Macintosh Common Lisp (MCL), which was a product of Coral Software in Cambridge, MA. This was one of the justifications for Apple's acquisition of Coral in 1988.
Coral became Apple Cambridge, and went on to create Dylan, which was implemented in MCL.
2. There was a period of time where I had moved to California but continued to work on Dylan. Whenever I returned to MA to work out of the Cambridge office, I stayed with my father-in-law, who had originally proposed “frames”.
"This system is key to the Newton information architecture. The object storage system provides persistent storage for data.
Newton uses a unified data model. This means that all data stored by all applications uses a common format. Data can easily be shared among different applications, with no translation necessary. This allows seamless integration of applications with each other and with system services.
Data is stored using a database-like model. Objects are stored as frames, which are like database records. A frame contains named slots, which hold individual pieces of data, like database fields. For example, an address card in the Names application is stored as a frame that contains a slot for each item on the card: name, address, city, state, zip code, phone number, and so on.
Frames are flexible and can represent a wide variety of structures. Slots in a single frame can contain any kind of NewtonScript object, including other frames, and slots can be added or removed from frames dynamically. For a description of NewtonScript objects, refer to The NewtonScript Programming Language.
Groups of related frames are stored in soups, which are like databases. For example, all the address cards used by the Names application are stored in the Names soup, and all the notes on the Notepad are stored in the Notes soup. All the frames stored in a soup need not contain identical slots. For example, some frames representing address cards may contain a phone number slot and others may not.
Soups are automatically indexed, and applications can create additional indexes on slots that will be used as keys to find data items. You retrieve items from a soup by performing a query on the soup. Queries can be based on an index value or can search for a string, and can include additional constraints. A query results in a cursor—an object representing a position in the set of soup entries that satisfy the query. The cursor can be moved back and forth, and can return the current entry.
Soups are stored in physical repositories, called stores. Stores are akin to disk volumes on personal computers. The Newton always has at least one store—the internal store. Additional stores reside on PCMCIA cards.
The object storage system interface seamlessly merges soups that have the same name on internal and external stores in a union soup. This is a virtual soup that provides an interface similar to a real soup. For example, some of the address cards on a Newton may be stored in the internal Names soup and some may be stored in another Names soup on a PCMCIA card. When the card is installed, those names in the card soup are automatically merged with the existing internal names so the user, or an application, need not do any extra work to access those additional names. When the card is removed, the names simply disappear from the card file union soup."
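Since frames are just mutable maps and soups are collections of them, the data model is easy to mimic. A toy Common Lisp sketch - made-up names, and none of the real indexing, cursors, or persistence:

  ;; Toy model: a frame is a hash table of slots; a soup is a list of frames.
  (defun make-frame (&rest slots)
    (let ((frame (make-hash-table)))
      (loop for (name value) on slots by #'cddr
            do (setf (gethash name frame) value))
      frame))

  (defparameter *names-soup*
    (list (make-frame :name "Ada" :phone "555-0100")
          (make-frame :name "Alan")))   ; frames need not have identical slots

  ;; A "query" here is just a filter; real soups used indexes and cursors.
  (defun query (soup slot value)
    (remove-if-not (lambda (frame) (equal (gethash slot frame) value)) soup))

  (query *names-soup* :name "Ada")   ; => list with the one matching frame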
Liking someone does not necessarily mean that working with them will make for a good experience. Human relationships are complicated.
I had some experiences working with the guy that were good in some ways and bad in others. The bad parts might have been his fault, or my fault, or both, or might have been because of environmental things that neither of us controlled. There's a good chance that they were caused by a combination of such factors. As always, it's hard to be sure.
He approached me when I was at NeXT. He wanted me to work for his startup. I had reservations. I thought there was a fair chance that it would go badly because of the bad things from before. On the other hand, I still liked him and I wanted my reservations to be unfounded.
They weren't. In fact, it was worse than I feared.
The language I was referring to was the version of Dylan that the bauhaus project used for our OS work. It was basically Scheme plus CLOS plus some functional-programming extensions, and a type system in which all datatypes were CLOS classes.
The development environment, named "Leibniz", was a greatly-extended version of Macintosh Common Lisp that included both the Common Lisp compiler (whose output ran on the Mac 68K-family processors and which was used to implement the whole development environment, including the Dylan compiler) and also the Dylan compiler and runtime, which was initially written for AT&T's Hobbit chips and later ran on ARM.
When bauhaus started, our development machines were Mac IIfx boxes with great big NuBus cards sticking out the top. Later they were actual working Newton hardware ribbon-cabled to daughterboards in the NuBus slots.
Leibniz had two sets of everything: one for Common Lisp and one for Dylan. You'd think it would be confusing, but it wasn't, not after the first day or so. There were Common Lisp editor windows and Dylan editor windows. There were Common Lisp Listener windows and Dylan Listener windows. And so on.
That's still my favorite programming language.
I liked the development environment, too, but my favorite development environment ever was SK8, Apple's HyperCard-on-steroids-with-a-built-in-knowledge-representation-system.
If I won the lottery, I'd hire some smart friends to make a new SK8 with an updated version of the old Dylan and a 3D game engine designed for building immersive environments by modifying them as they ran.
You are right of course. We were involved very early on with Newton development and were asked to start looking at Dylan pretty early on. I still have the original DRM which featured s-expression style syntax.
Soon after, though, we got an early release of the Newton Toolkit (NTK), which was written in Common Lisp AFAIK.
Maybe that is why I remember the tools being heavyweight. The tools, not so much the compiled results.
Dylan was a slimmed-down Common Lisp. I thought it was a pretty good idea at the time but when they dumbed down the syntax (removed the parentheses) I lost interest. It's ironic that full-blown Common Lisp could easily run on iphones today but Apple won't let it.
Mocl is a subset of Common Lisp and cannot do the same things on iOS that a full CL (like SBCL or CCL) can do on macOS. It's intended to be a "waterfall model" code generator where code is compiled strictly in advance. Obviously this is how most other languages work, but one of the distinguishing features of CL is the ability to compile new code opportunistically at run time. This is impossible on iOS because Apple won't allow it. There are a few other issues with mocl that make it seem like a second-class language to me, coming from using first-class CLs on macOS. I'm sure mocl is fine if you're comfortable with these compromises; I'm not.
Of course it's a second-class language, so are React Native and everything else that isn't Objective-C or Swift. But that's not what you said, you said they wouldn't allow it. If you want them to actually support it and make it a native option that's entirely different.
I'm not sure you understand. Common Lisp has lots of dynamic features that conflict with Apple's guidelines. For example, the ability to compile code at runtime is part of the language: http://phoe.tymoon.eu/clus/doku.php?id=cl:functions:compile
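For reference, that really is a one-liner in standard CL - build a function at runtime, compile it to native code, call it:

  CL-USER > (funcall (compile nil '(lambda (x) (* x x))) 12)
  144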
React Native is only able to get around this by using Webkit's JavaScript engine.
I couldn't find any documentation of mocl features, but it seems like they neuter parts of the language that wouldn't pass the app store review: https://wukix.com/support/ticket/84
As you can see in that support ticket, this means that standard-conforming CL code doesn't run unmodified.
> For example, the ability to compile code at runtime is part of the language
An actual Common Lisp implementation does not need to support COMPILE for the application when it is not needed at runtime. Actually, if the CL implementation is only an interpreter, then COMPILE is a no-op.
Take LispWorks, for example: they have a compiler which creates an iOS library that is linked into an Xcode iOS application. When you 'deliver' (meaning: create shippable code), the Lisp system removes the compiler (and potentially all kinds of other stuff). Generally, the Common Lisp compiler/runtime targeting iOS/ARM supports the whole language, minus the runtime compiler.
This 'Delivery' tool is also used on other platforms, not just for iOS.
You must abstract a bit from what the language standard describes and what actual implementations do or support. If somebody wants to deliver static/restricted applications with Common Lisp, then he/she would just use a Common Lisp system which can do that, regardless of what the standard says or requires.
> You must abstract a bit from what the language standard describes and what actual implementations do or support.
Fair enough. I don't actually use #'compile all that much in my code. But I've been burned by these compromised pseudo-Lisps so much in the past that I tend not to trust them. And I wasn't aware that LispWorks had an iOS solution; I'll check it out. Thanks.
>As you can see in that support ticket, this means that standard-conforming CL code doesn't run unmodified.
Perhaps not in Mocl, but my point was not that Mocl specifically was a good idea and fully functional, but that implementations for these languages do exist and do work at least to some extent. I found Mocl by googling "Common Lisp iOS." I am not a Common Lisp programmer. I just know that a lot of non-native languages support compiling for iOS these days, and I honestly would have been a bit surprised if Common Lisp didn't. Plus, the child comment even seems to suggest that "standard-conforming CL code [can't] run unmodified" is not true.
Most performant CL implementations require the ability to compile to executable code at run-time. iOS disallows executing from the heap for security purposes, so porting those implementations is a non-starter.
Others have said this already, but mocl is a (large) subset of Common Lisp that requires all code for which performance matters to be AOT-compiled. It's better than nothing, but it lacks the level of dynamism that Lispers have come to expect.
> Most performant CL implementations require the ability to compile to executable code at run-time.
Not really. Most performant CL implementations make sure that at runtime nothing needs an actual compiler, unless the application actually needs it.
For example the LispWorks on iOS implementation supports the whole Common Lisp, minus the runtime compiler. It's still possible to use the interpreter, though - AFAIK.
> LW introducing a version that supports iOS? That's huge!
That's a delivery compiler with a relatively complete LispWorks runtime, minus the runtime compiler. It's a bit expensive. It lacks any GUI and is supposed to be used inside an Xcode iOS application.
There is also one for Android...
They are working on the 64-bit ARM version. The 32-bit ARM version has been out for some time.
I don't know much about Common Lisp, all I know is that some people still think Apple only allows apps written in directly supported languages, which hasn't been the policy in years.
They apparently loosened the restrictions on developer tools. For example, I used to have a Scheme interpreter on my iPhone. The problem is that you can't set the executable bit on a memory page, so you can't compile a form to native code and then execute it. You can still have an interpreter, or a byte-code compiler plus a byte-code interpreter, but you lose lisp's ability to compile and run native code. As others have said in this thread, there are workarounds for that, since most lisp applications don't need it, but there's no real reason, other than politics, and a misguided security policy, to forbid it.
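(One classic workaround is to "compile" to a tree of closures instead of machine code - no new executable pages, just data. A minimal sketch of the idea:)

  ;; "Compile" an expression to nested closures; handles only numbers,
  ;; variables, and binary +/* - purely illustrative.
  (defun compile-to-closure (form)
    (cond ((numberp form) (lambda (env) (declare (ignore env)) form))
          ((symbolp form) (lambda (env) (cdr (assoc form env))))
          (t (destructuring-bind (op a b) form
               (let ((fa (compile-to-closure a))
                     (fb (compile-to-closure b))
                     (fn (ecase op (+ #'+) (* #'*))))
                 (lambda (env)
                   (funcall fn (funcall fa env) (funcall fb env))))))))

  (funcall (compile-to-closure '(+ x (* 2 3))) '((x . 4)))   ; => 10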