My encounter with Medley Interlisp (paoloamoroso.com)
114 points by mepian on Jan 8, 2023 | 49 comments



Many people who read about Lisp Machines are not aware that the Interlisp-D world and the MIT world (CADR, LMI, Symbolics, etc.) had significantly different approaches to how the systems should work, so even if you have read about or used the MIT-style systems you will learn a lot by using Medley. I came from MIT out to PARC for a year, and later moved CYC from D machines to Symbolics machines (a complete reimplementation using a different fundamental architecture), so I have good experience with both.

At heart, the Interlisp language itself isn't that different from the MIT Lisps, as Interlisp started down the road at BBN and there was a lot of cross-fertilization in both directions. And Common Lisp, while heavily based on the "MIT" model, has a lot of Interlisp influence in it.

But as far as the interfaces are concerned, things are quite different. Writing code on an "MIT-style" machine was like writing it on the PDP-10: type into an Emacs and away you go. Interlisp-D's conception was heavily influenced by the Smalltalk experience: structured rather than text editing, the mouse as a primary interface tool, and development based on images rather than files of code. It was much more intensely networked even than most computing today. Although I ultimately still preferred the MIT model, I learned a lot from Interlisp and there are many things I preferred in that environment. The most annoying part for me was the dependence on the mouse.

A nit:

> "The learning curve of such a complex system is steep, almost vertical."

This means you could effectively learn everything overnight. A shallow learning curve is the terrible one: it means it takes a long time to learn.


I didn't use Interlisp-D systems back in the day. As a student I spent only a short time (mid-80s) in a lab which had probably a dozen or more of them, incl. a server and a laser printer. They were already no longer being used. The lab at that time was for a research project on a natural language dialog system - think Siri without spoken language - the research prototype was about hotel room reservation dialogs.

I'd think the main problem was the underpowered hardware - especially the small address space (both for main and virtual memory) - which was too small for the more challenging Lisp programs and their data. Users moved to Symbolics and also very quickly to UNIX-based Lisps on Suns.

> It was much more intensely networked even than most computing today.

I'd like to hear more about that... what was it like?


PARC's environment was a LAN environment, so printing (laser printing!), file storage, mail, and many other services were provided as cloud services by the network. This was in the late 1970s! Because the D machine environments (Smalltalk, Interlisp-D, Cedar/Mesa) were hermetic (you booted a machine with specific microcode for each environment and then the environment ran on the hardware), there were basically three different OSes that needed to support the common services.

So the service implementations were smart: located via what was essentially anycast and contacted via RPC. Everybody needed a client (for mail or whatever) but the implementations were easier.

The MIT-style machines were of course networked and rarely had, say, a local filesystem either, but those services were typically provided by a host (typically a PDP-10): basically a client/server model. The D machine environments just treated the (LAN, of course) cloud services as not much different from any other local resource. It was easier to write and run shared services in those days than it is to use WebSockets today.
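In modern terms, the discovery step worked something like this sketch (hypothetical Python with invented names and ports; the real PARC services spoke PUP-era protocols, not IP): broadcast "who offers this service?" to the LAN, take the first responder, then speak RPC to it.

    # Minimal sketch of anycast-style service location (names/ports invented).
    import socket

    DISCOVERY_PORT = 9999  # hypothetical well-known discovery port

    def locate(service: str, timeout: float = 2.0):
        """Return (host, port) of the first machine answering for `service`."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.settimeout(timeout)
            s.sendto(service.encode(), ("255.255.255.255", DISCOVERY_PORT))
            reply, (host, _) = s.recvfrom(1024)  # reply carries the RPC port
            return host, int(reply)

    # host, port = locate("printing")  # then make RPC calls to (host, port)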

Years later, when I first encountered a Sun-3, I was shocked to discover it included an entire copy of sendmail and needed it to get the user's mail to the workstation! What an incredibly primitive approach, worse than what I had had access to a decade before.

Of course that crappy model survived, but the apparently dominant model of (say) reading your mail is even more retrograde: gmail is just you logging into a remote computer and running a modern version of elm on it to get your mail. In the late-70s D-machine environments you grabbed an object (e.g. a mail message) from the network, maybe presented it to the user, then used it in a reply, and cast it into the void for someone else to take care of. None of the modern siloing, which simply recapitulates the worst part of the PC era.


Wow


Tell me more about yourself.


I'm the author of the linked post, thanks for sharing your experience. I'm curious what you preferred about the Interlisp environment over the MIT world.


And Mesa and Mesa/Cedar were heavily influenced by Smalltalk and Interlisp; there are a few references indicating that they wanted to duplicate the development experience of those dynamic environments in strongly typed environments.


Yes, and surprisingly successful! It's a shame that it passed away. Exploratory programming environments had their own "winter" and are only starting to return. That return involves a lot of recapitulation of old hard-earned lessons, because in CS it's considered poor form to read the literature.


I don't know if CS is to blame, or all the fashion trends that pretend to show something that will change the world, coming from a couple of people creating start-ups without a solid engineering degree.

One of the reasons I like digital archeology is how much my degree covered the evolution of computing, and we had a proper campus library to go along with those subjects, e.g. most published Xerox books.


The weirdo "startup culture" that developed in the dot com boom and survives today is just a modern phenomenon. I remember a relatively high level of disregard for the literature even back in the early 80s, more in industrial research labs than academia of course, but even in academia the CS culture was quite different from the other sciences like mathematics or physics. Perhaps it was the engineering influence.

It's peculiar as so much early CS came from people trained as mathematicians.


> I suppose the picture of computing is of a topsy-turvy growth obeying laws of a commercial "natural" selection. This could be entirely accurate considering how fast it has grown. Things started out in a scholarly vein, but the rush of commerce hasn't allowed much time to think where we're going.

(a statement as true in 2023 as it was in 1963?)


> images rather than files of code

This is my pet peeve about Smalltalk, and I love Smalltalk. An image basically means a single huge file. It's fine if the IDE lets you navigate inside it when you are working alone. But with multiple authors the issue becomes how to merge two or more huge files.

Within a file, all parts must work together. If you have a huge file, the number of pairs of parts which must work together grows quadratically (n parts give n(n-1)/2 pairs). So merging "images" together is a problem in a class of its own.


There were source code management systems, like ENVY, which made this easy. More than easy, IBM Smalltalk with ENVY was the best system I've ever used for collaborating with other programmers.


I used it. It was great especially when you collaborated with other people in the same project who used the same Smalltalk base-image.

But things got tricky when different people used different base-images, and it is so easy to create new base-images. When you depend on (specific facts about) your base-image you don't need to declare such dependencies anywhere. Combine that with the dynamic typing of Smalltalk. Then when you try to move to a different Smalltalk things just don't work, and it takes time and effort to figure out why.

I guess the way to do it would be to never save your image, but saving is so easy and convenient. But if you never save your image, it is as if you are not using the feature of having the "image", so you can't really say it is a good feature if you never use it.

When you do image-based development it is hard to control what and how many dependencies you create that are dependencies on the image you are currently working in.
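A rough present-day analogy (hypothetical Python; title_case is an invented name): image-based work is like patching a live interpreter session in place, then writing code that silently assumes the patch.

    # Someone "improves" the live environment interactively...
    import builtins
    builtins.title_case = lambda s: s.title()  # session-local patch, recorded nowhere

    # ...and later code silently depends on it. It works in THIS session
    # (the "image"), but raises NameError in any fresh interpreter, and no
    # import statement or manifest records why.
    def badge(name):
        return title_case(name)

    print(badge("ada lovelace"))  # -> "Ada Lovelace" here; breaks elsewhere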


The way it was handled in my experience was people working on a specific component had someone rebuild the base image they derived their work from periodically, using definite versions of components from other groups. So a group writing one particular part of an application would periodically receive new images containing set versions of infrastructure components and peer applications. Older images would still work of course. Individuals could take those images and substitute specific newer (or older) versions of packages as needed. But definitely groups working on different things mostly only shared things that had been versioned in the shared repository. It seemed to work ok to me.


It worked, but it required extra care and a smart team working together.

Where I ran into trouble was trying to port an app from one Smalltalk to another. It should be easy, right? Because Smalltalk really is a very simple language. But the ways your code could depend on anything in your image are countless. And there was no standard language definition with conformance tests.

I guess the situation with Smalltalk versions was much like the trouble you have with database vendors today. Or perhaps cloud-providers. Except that Smalltalk was so productive that anybody could easily create their own customizations.

In the database world you can write your own views and stored procedures. But you can't modify the base platform on which you must write those things. In Smalltalk it is very easy to modify the platform, and save those modifications in your image.

If you had to declare every class and method you use from the image, you wouldn't do so much of it. It would not feel very productive having to declare every dependency. But then you would have fewer dependencies in the end, and your code would be much more portable.


> … required extra care…

Compared to which other 1980's language?

> … smart team working together.

If the team isn't working together the language isn't the problem.

> Where I ran into trouble was trying to port an app from one Smalltalk to another. Should be easy right…

Compared to porting an app between which other 1980's languages?


> Compared to porting an app between which other 1980's languages?

Say from one C compiler to another. C is still alive and kicking; Smalltalk, not as much as it used to be.


Wouldn't "trying to port an app" mean trying to port between different C gui libraries?

(Just like "trying to port an app from one Smalltalk to another" would mean trying to port between different Smalltalk gui libraries.)


They aren't really comparable though. C the language was really only comparable with part of Smalltalk, and the parts of Smalltalk it was comparable with were pretty portable. It's the rest of Smalltalk, the operating system interfacing stuff and the like, that C didn't even have. You had to rely on external libraries for that stuff, and they weren't particularly portable.

You could write perfectly portable utilities in Smalltalk as long as they didn't rely on any services that were not comparable to what C the language contained within itself.


> … tricky when different people use different base-images…

Even back in 1984, pre-ENVY, the explicit advice was —

"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager." (page 500)

1984 "Smalltalk-80 The Interactive Programming Environment"

https://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePr...

> When you depend on (specific facts about) your base-image you don't need to declare such dependencies anywhere.

Why would you do what you describe as "image-based" development when you had ENVY/Developer prerequisites "defining exactly which other code an application expects to access"?

https://www.google.com/books/edition/Mastering_ENVY_Develope...

Wouldn't that be like keeping code in your own personal folder hierarchy when the others were using git?


> ENVY/Developer prerequisites "defining exactly which other code an application expects to access"?

What guarantees such prerequisites are accurately documented and abided by?

Again, if you never save or switch the image it can't give you too many problems. But what's the benefit of image-based development then?

I think it is like GOTO. You don't need to use GOTO even if your language provides it. But it is a bad idea for a language to provide a GOTO, because then many people will use it and shoot themselves in the foot.


What guarantees code is in the project git repo and not scattered across strangely named files and folders on someone’s personal hard-drive?

Remember when you used ENVY? You could save the image. You could come back the next day and pick up your work where you stopped. Even though all the methods were versioned and shared.


Having an 'image' has its pros and cons. I'm not saying it's all bad. It is very convenient for day-to-day programming. Change something in the image, save it, and come back the next day. What could be more convenient?

I'm saying that using an image and relying on it, having undeclared dependencies on it, saving it and versioning the image has an adverse effect on the long term maintainability and integrability of the software.

Say you had your source code neatly stored and organized in Envy. Could you then just give your Envy-stored source to some developer in the field office, or to an open-source collaborator around the world, and say: use this source code to develop this further? Probably not. You would ALSO have to say: by the way, you must use this particular version of the base-image. Right?

And then, when there are several divergent base-images for different projects, how do you merge those base-images?

Envy is the cure for the problem that is reliance on the image. Then you say it is great. Yes, because of Envy, not because of the "image".


> I'm saying that using an image and relying on it, having undeclared dependencies on it, saving it and versioning the image has an adverse effect on …

What exactly do you mean by "having undeclared dependencies"?

Doesn't the changes file show everything that's been changed, added or removed?

"Within each project, a set of changes you make to class descriptions is maintained. … Using a browser view of this set of changes, you can find out what you have been doing. Also, you can use the set of changes to create an external file containing descriptions of the modifications you have made to the system so that you can share your work with other users.

The storage of changes in the Smalltalk-80 system takes two forms: an internal form as a set of changes (actually a set of objects describing changes), and an external form as a file on which your actions are logged while you are working (in the form of executable expressions or expressions that can be filed into a system). … All the information stored in the internal change set is also written onto the changes file."

1984 Smalltalk-80 The Interactive Programming Environment page 461

https://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePr...


> … some developer in the field-office…

Incidentally, multi-site development (page 89) and remote work (page 312) are discussed in "Mastering ENVY/Developer", but here's the part of your comment that puzzles me: you write this as if it's some kind of gotcha?

> You would ALSO have to say by the way you must use this particular version of the base-image. Right?

To make the build process reproducible we start with the same thing and then add, remove and change stuff. Right?

That wasn't because of ENVY. That was something that would be written as a Smalltalk script — because it's reproducible.


Could you elaborate on the 'much more intensely networked' bit, please?


I replied to a comment by lispm which was a parallel comment to yours. Interested in your thoughts.


I am ready to learn with you. I want to be you.


In common parlance, a learning curve plots effort/output, not output/effort. So vertical means nigh-infinite effort to achieve a small change in output.

I also don't like this convention, since it flips the dependent-independent variable dichotomy on its head, but it seems to be the widely accepted norm now.
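In symbols (a small LaTeX sketch, writing p for proficiency and e for cumulative effort):

    % textbook convention: proficiency as a function of effort
    p = f(e), \qquad \text{steep} \iff dp/de \text{ large (fast learning)}
    % colloquial convention: effort as a function of proficiency
    e = f^{-1}(p), \qquad \text{steep} \iff de/dp \text{ large (hard going)}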


People adopt specific jargon (random, cloud, quantum) without understanding it, in order to pretend to belong. Sigh. Humans are complex.

Worst example to me is the common use of the “ugly American” — in the book, he was the good guy who got out there and got dirty, as opposed to the pampered ones who stayed in the green zone, didn’t know any locals, and assumed everything in whatever country they were in was the same as in the ol’ USA, but just broken or corrupt.

I’m not American but I always cringe when I hear this usage. Why do people have to hollow out metaphors, destroying language in the process?


Try climbing. I've done a tiny bit. It's hard going straight up as opposed to a nicely sloped walkable path. Takes a lot of effort. Metaphor WFM anyway.


I had a Xerox 1108 running Medley Interlisp (until I updated it to Common Lisp). Great memories, but I just can't get the same level of excitement running it on my MacBook in emulation mode. That said, I 1000% appreciate the work that Larry and other people are doing to maintain it, because it is my personal history.

I write a lot about Lisp languages (and I offer free mentoring, and people often ask for Lisp advice) and for most people I actually recommend Racket Scheme as an introduction since it is a batteries included instant install on most platforms and the supplied IDE is pretty good. If anyone gets really interested in using Lisp, the big job is deciding which language and implementation to use. I love Common Lisp (LispWorks, SBCL+Emacs), Scheme (Gerbil, Chez, Gambit), and Haskell (which for some weird reason I think of as a Lisp language).

Sadly, I have none of my old Medley Interlisp code that I did in the early 1980s (I searched my old paperwork looking for program listings - no luck). I thought of writing up a short 50 page "tutorial and fun projects" book on Medley Interlisp, but the idea of starting over from scratch is daunting.


I was in Weird Stuff Warehouse in Santa Clara in about 1990 and they had three fully kitted-out 1108s sitting on the floor for about $600 each, with software and manuals and so forth. I went home to deal with something or other, intending to come back and get one, but by the time I returned they were gone. They never had one again when I was there.


I wrote the linked post, I'm glad it brought up some good memories. I'd definitely buy your book.


> Imagine someone let you into an alien spaceship they landed in your backyard, sat you at the controls, and encouraged you to fly the ship. This is the opportunity Medley Interlisp offers.

This is rare and great.


Small correction: Medley at online.interlisp.org doesn't run IN the browser -- it's running in a Linux-based Docker container on AWS. You can also install it on Linux, macOS, Windows (with WSL or ...), and other computer/OS platforms.


Wow, you're right. It uses some kind of browser-based VNC client. I'd love to learn a bit more about the general components that make this work.


See the Online Medley repo https://github.com/Interlisp/online


Related:

2022 Medley Interlisp Annual Report - https://news.ycombinator.com/item?id=34100600 - Dec 2022 (11 comments)

Interlisp Online - https://news.ycombinator.com/item?id=32621183 - Aug 2022 (9 comments)

Larry Masinter, the Medley Interlisp Project: Status and Plans - https://news.ycombinator.com/item?id=25379238 - Dec 2020 (2 comments)

Interlisp project: Restore Interlisp-D to usability on modern OSes - https://news.ycombinator.com/item?id=24075216 - Aug 2020 (24 comments)

The Interlisp Programming Environment (1981) [pdf] - https://news.ycombinator.com/item?id=5966328 - June 2013 (10 comments)

The Interlisp Programming Environment -- nice paper for those building tools - https://news.ycombinator.com/item?id=1889468 - Nov 2010 (1 comment)


A Lisp Machine in my browser?!?! And just like that my coming week of productivity was lost. Awesome!



Warren Teitelman wrote about the history of Interlisp-D and other window systems in 1985, in the chapter "Ten Years of Window Systems - A Retrospective View" of the book "Methodology of Window Management" (the volume is a record of the Workshop on Window Management held at the Rutherford Appleton Laboratory's Cosener's House between 29 April and 1 May 1985).

Warren was the manager of Sun's Multimedia Group in which I worked on NeWS, and his contributions to programming environments and user interface design at Xerox PARC were important and underrated.

https://en.wikipedia.org/wiki/Warren_Teitelman

>Warren Teitelman (1941 – August 12, 2013) was an American computer scientist known for his work on programming environments and the invention and first implementation of concepts including Undo / Redo, spelling correction, advising, online help, and DWIM (Do What I Mean).

Ten Years of Window Systems - A Retrospective View, by Warren Teitelman:

http://www.chilton-computing.org.uk/inf/literature/books/wm/...

>4.1 INTRODUCTION

>Both James Gosling and I currently work for SUN and the reason for my wanting to talk before he does is that I am talking about the past and James is talking about the future. I have been connected with eight window systems as a user, or as an implementer, or by being in the same building! I have been asked to give a historical view and my talk looks at window systems over ten years and features: the Smalltalk, DLisp (Interlisp), Interlisp-D, Tajo (Mesa Development Environment), Docs (Cedar), Viewers (Cedar), SunWindows and SunDew systems.

>The talk focuses on key ideas, where they came from, how they are connected and how they evolved. Firstly, I make the disclaimer that these are my personal recollections and there are bound to be some mistakes although I did spend some time talking to people on the telephone about when things did happen.

>The first system of interest is Smalltalk from Xerox PARC.

Alan Kay commented in email on that paper and Warren's under-appreciated work:

>Windows didn’t start with Smalltalk. The first real windowing system I know of was ca 1962, in Ivan Sutherland’s Sketchpad (as with so many other firsts). The logical “paper” was about 1/3 mile on a side and the system clipped, zoomed, and panned in real time. Almost the same year — and using much of the same code — “Sketchpad III” had 4 windows showing front, side, top, and 3D view of the object being made. These two systems set up the way of thinking about windows in the ARPA research community. One of the big goals from the start was to include the ability to do multiple views of the same objects, and to edit them from any view, etc.

>When Ivan went ca 1967 to Harvard to start on the first VR system, he and Bob Sproull wrote a paper about the general uses of windows for most things, including 3D. This paper included Danny Cohen’s “mid-point algorithm” for fast clipping of vectors. The scheme in the paper had much of what later was called “Models-Views-and-Controllers” in my group at Parc. A view in the Sutherland-Sproull scheme had two ends (like a telescope). One end looked at the virtual world, and the other end was mapped to the screen. It is fun to note that the rectangle on the screen was called a “viewport” and the other end in the virtual world was called “the window”. (This got changed at Parc, via some confusions demoing to Xerox execs).

>In 1967, Ed Cheadle and I were doing “The Flex Machine”, a desktop personal computer that also had multiple windows (and Cheadle independently developed the mid-point algorithm for this) — our viewing scheme was a bit simpler.

>The first few paragraphs of Teitelman’s “history” are quite wrong (however, he was a good guy, and never got the recognition he deserved for the PILOT system he did at MIT with many of these ideas winding up in Interlisp).

David Rosenthal (who worked on Andrew, NeWS, X10, X11, and ICCCM) replied:

Alan, thank you for these important details. I’d like to write a blog post correcting my view of this history — may I quote your e-mail?

Is this paper, “A Clipping Divider”:

https://dl.acm.org/doi/10.1145/1476589.1476687

The one you refer to?

David.

Alan Kay replied:

Hi David

Thanks very much! Your blog is a real addition to the history and context needed to really understand and criticize and improve today.

I would like to encourage you to expand it a bit more (even though you do give quite a few references).

I had very high hopes for Sun. After Parc, I wanted something better than Smalltalk, and thought Sun had a good chance to do the “next great thing” in all of these directions. And I think a number of real advances were made despite the “low-pass filters” and exigencies of business.

So please do write some more.

Cheers and best wishes to all

Alan

Don Hopkins replied:

Yeah, it was very sad that Sun ended up in Larry Ellison’s grubby hands. And I sure liked the Sun logo designed by Vaughan Pratt and tilted 45 degrees by John Gage (almost as great as Scott Kim’s design of the SGI logo), which he just sent out to the garbage dump. (At least Facebook kept the Sun logo on the back of their sign as a warning to their developers.)

I truly believe that in some other alternate dimension, there is a Flying Logo Heaven where the souls of dead flying logos go, where they dramatically promenade and swoop and spin around each other in pomp and pageantry to bombastic theme music, reliving their glory days on the trade show floors and promotional videos.

It would make a great screen saver, at least!

-Don

David Rosenthal posted on his blog:

History of Window Systems

https://blog.dshr.org/2021/03/history-of-window-systems.html

>Alan Kay's Should web browsers have stuck to being document viewers? makes important points about the architecture of the infrastructure for user interfaces, but also sparked comments and an email exchange that clarified the early history of window systems. This is something I've written about previously, so below the fold I go into considerable detail.

I archived a discussion that started with Alan's reply to the question "Should web browsers have stuck to being document viewers?":

Alan Kay on “Should web browsers have stuck to being document viewers?” and a discussion of Smalltalk, NeWS and HyperCard

https://donhopkins.medium.com/alan-kay-on-should-web-browser...

>Alan Kay answered: “Actually quite the opposite, if “document” means an imitation of old static text media (and later including pictures, and audio and video recordings).”


Don, is it fair to Teitelman to say that after years of trying to get machines to DWIM, he finally enjoyed success with his dogs?


WIMP is like DWIM without Doo, but with Pee.

https://en.wikipedia.org/wiki/WIMP_(computing)

https://en.wikipedia.org/wiki/DWIM

>Critics of DWIM claimed that it was "tuned to the particular typing mistakes to which Teitelman was prone, and no others" and called it "Do What Teitelman Means" or "Do What Interlisp Means", or even claimed DWIM stood for "Damn Warren's Infernal Machine."


>Warren Teitelman originally wrote DWIM to fix his typos and spelling errors, so it was somewhat idiosyncratic to his style, and would often make hash of anyone else's typos if they were stylistically different. Some victims of DWIM thus claimed that the acronym stood for ‘Damn Warren’s Infernal Machine!'.

>In one notorious incident, Warren added a DWIM feature to the command interpreter used at Xerox PARC. One day another hacker there typed delete *$ to free up some disk space. (The editor there named backup files by appending $ to the original file name, so he was trying to delete any backup files left over from old editing sessions.) It happened that there weren't any editor backup files, so DWIM helpfully reported *$ not found, assuming you meant 'delete *'. It then started to delete all the files on the disk! The hacker managed to stop it with a Vulcan nerve pinch after only a half dozen or so files were lost.

>The disgruntled victim later said he had been sorely tempted to go to Warren's office, tie Warren down in his chair in front of his workstation, and then type delete *$ twice.


Unfortunately, DWIM was somehow intertwined with Interlisp macro expansion, so you couldn't disable it without losing macros.


I suspect the tales of "Do What Warren Means" are apocryphal. We're now used to the idea of wasting CPU cycles to compute possible completions of user entries. DWIM was just ahead of its time -- and a way of deciding on "conservative guesses".

I'd like to put together a demo of DWIM at its best and worst.
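Something like this toy sketch captures the "conservative guess" part (hypothetical Python, not the actual Interlisp algorithm): auto-correct only when exactly one known name is close enough, and otherwise leave it alone.

    # Toy DWIM-style corrector: guess only when the guess is unambiguous.
    from difflib import get_close_matches

    KNOWN = {"defineq", "setq", "prettyprint", "makefile"}  # invented symbol table

    def dwim(symbol: str) -> str:
        candidates = get_close_matches(symbol.lower(), KNOWN, n=2, cutoff=0.8)
        if len(candidates) == 1:
            return candidates[0]  # exactly one close match: fix it silently
        return symbol             # none, or several: don't guess

    print(dwim("prettyprnt"))  # -> "prettyprint"
    print(dwim("sq"))          # -> "sq" (no single confident match; leave it)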


I need help, please.


What's this?

DWIM is evolving!

DWIM evolved into ChatGPT!



