What have we lost? [video] (ccc.de)
220 points by gbrown_ 10 days ago | 83 comments





This talk looks pretty epic. I'm glad this discussion is being injected into the mainstream. I wish everyone had an opportunity to use a network computing environment such as MIT Athena once in their lives. The original plan wasn't simply to grant blanket licenses for science and simulation software to users in the academic community, but to create a true edtech platform that could be used to learn anything: languages, design, entrepreneurship, etc.

We've been talking about doing a follow-on talk this year focusing on network computing environments. Plan 9 and Apollo DomainOS are likely to make appearances, but more ideas are welcome.

That's awesome! Looking forward to it.

What about the 'PLATO system'?

"Long before Facebook's and Google's founders were born, and before Microsoft and Apple were founded. Before Xerox PARC. Before the Web. AOL. Bulletin boards and CompuServe. Before the Internet. Long before MOOCs (massively open online courses). Before pretty much everything we take for granted today, there was the PLATO system: home of not only computer-based education but, surprisingly, the first online community, and the original incubator for social computing: instant messaging, chat rooms, message forums, the world's first online newspaper, interactive fiction, emoticons, animations, virtual goods and virtual economies, a thriving developer community, MUDs (multi-user dungeons), personal publishing, screen savers. PLATO is where flat-panel gas plasma displays come from, and was one of the first systems with touch panels built-in to the screen. Countless other innovations." [1]

The book 'The Friendly Orange Glow' by Brian Dear tells the whole story. He says it's "a book in the works for more than two decades. Based on extensive research, including interviews with hundreds of key individuals who designed, built, managed, sold, and used the PLATO system."

[1] http://friendlyorangeglow.com/

PLATO on Wikipedia: https://en.wikipedia.org/wiki/PLATO_(computer_system)

Book on Amazon: https://www.amazon.com/Friendly-Orange-Glow-Untold-Cybercult...


KeyKOS (and relatives) somehow got my attention a little while back, and seem like they might be somewhere in the space of things you'd find interesting.

I'm not sure how available/installable they are nowadays, but some of their ideas live on in L4 microkernels, apparently.

(enjoying the talk; thanks!)


AFS, Zephyr, Kerberos, BatMail in emacs--all mind-expanding stuff to me way back when!

I never really explored AFS sufficiently to understand why this was the case, but I was always surprised it (or a similar mechanism) was not available in corporate offices.

"Clear everything off of SharePoint you don't need, we're running out of room."

"The shared drive is full, we're going to purge everything not in one of the special directories. See the list."

AFS, as used at my university in the 00s, nicely reduced this problem by greatly expanding the amount of storage capacity available. It also helped speed up access to data by moving it to the system you were working on (assuming it was also an AFS node and not your personal device, in which case it was exactly like accessing a remote disk). It also helped with the "SharePoint is down again because even though we're a multibillion dollar corporation we're too cheap to hire real IT staff and Bob the Build Engineer tasked with keeping it up is on vacation" problem: local copies of data remained accessible.


> I was always surprised [AFS] (or a similar mechanism) was not available in corporate offices.

In some ways it was self-fulfilling - corporate offices didn't use it, so there wasn't a great ecosystem, so corporate offices did not use it.

There were a handful of vendors, all with drawbacks. On a large network, AFS (or particular cells or filesystems) could get into a weird state. It was not like Solaris NFS, where most senior sysadmins understood the filesystem error states.

You want to feel you are putting data into something safe, permanent and always accessible, and AFS never felt like that in corporate environments I was in.


Haven’t heard AFS mentioned in years. I was co-admin of a SunSITE back in the early 2000s and we used AFS for mirroring. Whilst it was quite neat, it was also complex, especially due to the underlying intricacies of Kerberos. It would be great to have a more modern version of AFS with its distributed nature, central namespaces etc., but with a truly open, horizontally scalable distributed file system.

Why mainstream? But yes, I agree we should definitely take a hard look at any software business more than 10 years old.

Pharo, a Smalltalk, is like these OSes: an environment where everything is an object that you can inspect, modify, duplicate, ...

Here's a demo of a Minesweeper clone I once made to discover Pharo: https://files.catbox.moe/jggff7.webm

It was really amazing, but I didn't go further than that; using Pharo quickly felt like being on a secluded island.

More about Pharo: https://pharo.org/features


I was actually going to do a segment on Pharo, because it really does fit in with the theme, but time limitations prevented it. If you found the stuff in the talk interesting, I second the recommendation to check it out.

I feel like D-Bus and systems built around it (e.g. GNOME) are at least a glimpse at that.

How does it compare? D-Bus is just an inter-process communication mechanism; how would it bring us closer to copying a Minesweeper window?

A tiny, tiny glimpse.

It's a bit more than IPC, it's a discoverable object system.

Very far away from copying UI elements.


Did we really “lose” the Lisp Machine, or did we just realize that compiling Lisp to a general-purpose instruction set like x86 and then having someone else in charge of creating hardware to efficiently execute that is much more clever?

We lost the environment where the whole OS was a debugger, and everything could be inspected and modified as the system was running.

Except, that’s what the browser is, and increasingly that’s where software lives.

The browser is only half (the client-facing side) of the equation.

GraphQL endpoints are introspectable.
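
For a concrete example (a sketch only: the endpoint URL below is made up, but the introspection query is standard GraphQL), a few lines of Python are enough to dump a server's type catalogue:

    # Minimal GraphQL introspection probe. The endpoint is hypothetical;
    # "requests" is the usual third-party HTTP package.
    import requests

    INTROSPECTION_QUERY = "{ __schema { types { name } } }"

    resp = requests.post(
        "https://api.example.com/graphql",   # hypothetical endpoint
        json={"query": INTROSPECTION_QUERY},
        timeout=10,
    )
    resp.raise_for_status()
    for t in resp.json()["data"]["__schema"]["types"]:
        print(t["name"])

That said, many production deployments disable introspection, for exactly the security reasons raised below.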

A model with unrestricted introspectability wouldn't have worked as a web replacement anyway; it is not secure enough!

There are even servers like Servant for Haskell that make REST a first-class citizen of the server programming language. So we have the best of both worlds now: security and convenience.


The browser? Oh, you mean the WebAssembly Virtual Machine?

Can we point to the S-expression syntax for Wasm and say we've come full circle? :)

This was an intentional move even if no one else wants to admit it.

You can't hide implementation from the user in such an environment, at least not to the degree you can with precompiled programs with debug symbols stripped. One requires you to infer function from raw assembly; the other lets you step through areas of code that vendors want to keep out of the reach of user understanding.

90% of established security practices around code delivery are around keeping your product from being too easily reversed or "owned" by those who use it.

Thus the popularity of -as-a-Service business models. You can't invoice for something a user just uses.


Operating systems that are actually in use show that the real difficulty of computer science isn't deep algorithmic complexity (although OSes have algorithms and data structures in spades).

The real problem is the mundane management / organization / interface / documentation work that plain old enterprisey software fails on.

A lot of what is lost was handling a much smaller breadth of hardware, applications, and as you pointed out, a much simpler security exposure profile.


True enough. Thankfully we still have Emacs.

The linked video seems mostly about UI usage, and one rarely sees nice live action UI demos for machines that (mostly) no longer exist. They should do a sequel that shows these other very powerful system features you mention.

One practical downside is that the live editing / code & continue / source code-free operation (mentioned in the Interlisp-D segment of the video) could lead to needing "just the right image" with all the installed definitions you wanted. Sometimes "source code" might not even exist which could lead to coordination issues.


Sadly the deep dive got cut due to time limitations (both time during the talk and discovering, as I started preparing that segment, that all of my Alphas were in various states of non-working).

Had I had time, I would have patched the NFS code to run in "permissionless" mode, where it asks what creds are necessary to access a file, then just sends those creds. Hopefully somebody will be inspired to explore and blog about it though :-D


How does that work when you drop to the debugger in the middle of an interrupt handler? A page fault? A task switch?

How does the privilege separation work for that environment?

(mostly rhetorical questions)


You'd think I was an expert, given that I gave that segment of the talk, but I haven't yet dived that deep into Genera's internals.

As I understand it, though, you don't have the concept of page faults at the image level; IIRC, Genera doesn't have any form of virtual memory, or if it does, it's all in microcode. Task switches are, IIRC, handled in the image, but in the same way that it's done in Smalltalk: there's a function in the "kernel" to decide what task to switch to next, but the actual context switch is handled in microcode.

Also, there's no such thing as privilege separation; everything's running in a single address space and namespacing is handled using Common Lisp's package system.
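
To make that policy/mechanism split concrete, here's a toy Python sketch (not Genera code): the scheduling policy lives in an ordinary, redefinable function, while the switch mechanism stays fixed; generator resumption stands in for the microcode here.

    from collections import deque

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # cooperative yield point

    ready = deque([task("reader", 2), task("gc", 3)])

    def choose_next(queue):
        # The redefinable "kernel" function: decide which task runs next.
        return queue.popleft()         # plain round-robin; swap in any policy

    while ready:
        t = choose_next(ready)
        try:
            next(t)                    # the fixed switch mechanism
            ready.append(t)
        except StopIteration:
            pass                       # task finished; drop it

Redefining choose_next while the system runs is the Genera-style move: the policy is just another function in the image.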


The closest modern equivalent is probably a paravirtualized unikernel.

It was intentionally designed as a single-user machine, wasn’t it?

Absolutely. There was no isolation between programs. In fact, you could write new programs by reusing parts of existing programs.

None of those actions were visible to the virtual machine that contained the debugger. The CPU didn't even have the concept of an interrupt.

The MIT Lisp Machine ran a simple kernel, written in assembler, that also included an interpreter for the user-visible instruction set; it would periodically check whether devices had raised an interrupt, then return back into Lisp to handle the event.
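
Structurally, something like this sketch (illustrative Python only; the real kernel was assembler, and the device interface here is invented):

    def run_kernel(interpreter_step, devices, lisp_handle_event, burst=1000):
        # Interpret user-visible instructions in bursts, then poll devices;
        # pending events are handed back up to a Lisp-level handler.
        while True:
            for _ in range(burst):
                interpreter_step()
            for dev in devices:
                event = dev.poll()     # hypothetical device interface
                if event is not None:
                    lisp_handle_event(event)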


We still have Smalltalk.

I know what you mean, but in reality we don't.

Yet there are likely an order of magnitude more Smalltalk programmers today than there were at Smalltalk's "peak".

I'd be curious to know how you quantified this. I was curious myself, so I dug into where Smalltalk landed in Stack Overflow's developer survey (which, of course, is not a complete overview of the entire programming industry!) - it seems that Smalltalk was the second most Loved language in 2017 [0], but it completely disappears off the chart in 2018 [1]. I would imagine that we've lost quite a few Smalltalk programmers...

[0] https://insights.stackoverflow.com/survey/2017

[1] https://insights.stackoverflow.com/survey/2018


More likely, they changed the conditions.

Well, did we? You can still do that! Those environments were extremely primitive and it doesn't take much work to match or vastly exceed their capabilities. This is so much the case we often don't even recognize such environments staring us in the face. Consider another story on the HN front page right now: IntelliJ 2021 is out. Load IntelliJ and you have something that can be arbitrarily scripted, altered and extended as it runs, and which appears to match/exceed Genera and the other demo'd systems.

Here are some features of IntelliJ that are reminiscent of a single-address-space fully reflective operating system:

• A multi-language scripting console that lets you access arbitrary parts of the IDE and give it interactive commands, either via its real API or (using reflection) anything else:

https://www.jetbrains.com/help/idea/ide-scripting-console.ht...

• An interactive debugger that can debug any app, including the IDE itself. Of course it can be risky to use breakpoints as you might suspend the debugger you're using and get stuck, but that's a conceptual issue, not a technical one.

• An interactive, auto-completing command-line-like thing for commands, files, symbols, menu items etc. Double-shift opens it for me.

• Real-time CPU, memory and task monitors. Try opening the command palette, type CPU and then pick "CPU and memory live charts", or search for "activity monitor" and open it.

• A fully virtualized filesystem layer with sophisticated file explorer. You can explore 'into' files at the structural/symbolic level.

• Obviously it's the JVM so everything is fully garbage collected.

• A whole UI framework that supports both tiling and floating windows.

• The ability to load, unload and upgrade apps (plugins) on the fly without restarts. They can alter the behaviour of the environment and other plugins, either via real APIs or by the big hammer of loading a JVM agent on the fly. JVM agents can rewrite code in any arbitrary way and swap it into the app as it's running.

And you can program it with Lisp if you really want to, via Clojure. I watched the talk, it was a nice talk, but I couldn't help feeling that the answer to "What have we lost?" is actually - cynically - not much. Those environments had some interesting ideas and the good ones seem to have made it into production, just not via custom hardware and operating systems.

Edit: interesting how quick this hit -1. I think a lot of people like the idea of lost treasures in computing. I used to like it too. But I can't identify something these machines or operating systems do that is strictly superior to modern technology or large-scale 'operating environments' like IntelliJ.


IntelliJ IDEA is, in many ways, more advanced than Genera, yes. I use it every day. But I stand behind my assertion that we've lost something there: Genera had most of the features you listed (even though I didn't have time to show them), and in the case of debugging the debugger, AFAIK, you can't get IntelliJ into a state where the debugger is debugging itself, because in order to attach a debugger to an instance of IDEA, you need to run a second instance.

Adding plugins without restarts in IDEA is a feature that's only a couple of months old now, and even then, making your own plugin is a very involved process. You can't just copy a single function into your own code, modify it, load it, and be done. In Genera, that would take 1-5 minutes; quick enough that if something bothers you, you don't really need to think about whether it's worth taking the time to fix it. By the time you've done a cost/benefit analysis, you'd have been done with the fix.


I think you're right that the UI will block you from debugging itself, but, this is just a safety check that could be disabled. It might be a fun project to try and turn IntelliJ into a fully Genera-like environment.

I think as part of that you'd need to teach the debugger how to attach to itself without risk, probably by defining some notion of a protected system thread and then preventing breakpoints from affecting them. The JVM is certainly capable of doing this, although the debugger UI would have to be extended. I guess Genera either didn't have breakpoints or had some equivalent notion of un-breakable operations.

It's true that there's no way to just edit a single function and replace it, at least not out of the box. This is partly conceptual - Symbolics Lisp was pretty much a dynamically typed language if I remember correctly, as are most Lisps, and didn't have any real notion of version control or a compiler checking things for you ahead of time. You wouldn't want to edit only a single function in isolation if editing the internals of IntelliJ because a lot of the value of the tooling is putting the code in context via type checking, static analysis, folding and so on, which requires the rest of the code to be available and ideally visible.

Still, there's nothing missing from the core technology. To make things editable like that you could have a plugin that grabs the matching source zip for a selected symbol or bit of UI and opens it in a new project window, configured in such a way that compiling it causes it to be reloaded into the IDE. Either via the plugin reload mechanism or the bigger hammer of something like DCEVM, which lets you arbitrarily redefine classes at runtime even if their field layout changes. It's more of a workflow issue than anything else. The additional overhead is mostly buying you things you'd want anyway, like a notion of sharing and version control for your changes.


Great. So, can you hook in to a running browser the next time a tab crashes, fix it, and continue viewing the website, with no prior setup? How?

Well, yes, unless you count installing IntelliJ as prior setup:

https://www.jetbrains.com/help/idea/debugging-javascript-in-...

It'd be easier to do it with the Chrome debugger though. I sometimes 'fix' websites that have annoying popups or similar with the dev tools.

With complex apps you have the problem that the owners don't want you modifying them, so they're minified and have other techniques applied that debuggers don't like. But that's not something a Lisp machine would have fixed. You can obfuscate any program.

Also, debuggers aren't the same thing as code editors so of course, you'll lose your changes when the page reloads. Making them persistent means throwing together a browser extension.

Chrome is not like IntelliJ though, because Chrome is a compiled C++ app. It's not written in JavaScript and it isn't a reflective environment like IntelliJ is. With IntelliJ you actually can do things like just open up an editor tab and invoke arbitrary internal functionality. With Chrome, not so much.


I'm not understanding how you propose to resurrect and continue the process after the tab segfaults.

Ah, by "crash" you mean if the renderer process crashes, not website JS code.

You can't easily do that with a C++ process indeed, which is why my example was about IntelliJ, not Chrome or any other C++ app. It's much easier to debug and change code on the fly when it's running on a VM like the JVM. I'm not arguing that something like Genera can be matched by any modern app, just that a few operating-system-like apps with similar properties do exist. We just don't think of them as operating systems because they pose as something else. Sort of like how emacs is a Symbolics Lisp-like environment posing as a text editor.

If you're going to argue that it's not the same unless it goes all the way down to the metal, well, people have built Java/JVM based operating systems before like JNode. It's technologically possible. Just not worth it. Being able to edit a function in the middle of your filesystem driver without reloading it is neat in theory but I wouldn't want to actually develop a filesystem that way.


My takeaway from the first segment wasn't the Lisp part, but the OS and user interface, especially that any "object" can provide handlers that then provide deep and highly dynamic integration with the rest of the system, in a React-like immediate-mode style to boot. And a very slick merging of CLI and GUI. All back in the 80s!

I've found that the best explanation of what we really lost is in some of the comments by Kent Pitman in some comp.lang.lisp discussions, like this one: [1] and this one [2]

[1] https://groups.google.com/g/comp.lang.lisp/c/QzKZCbf-S6g/m/K...

[2] https://groups.google.com/g/comp.lang.lisp/c/XpvUwF2xKbk/m/o...


Fitting Lisp onto a stack-based machine (or vice versa) is mostly a compromise, not exactly a "realisation" of a "clever" method, because Lisp makes extensive use of list appends/concats, environments and garbage collection, which are not an easy fit for C's heap/stack abstractions.

I challenge you to create a piece of hardware that can execute Lisp faster than a modern x86 CPU. This compromise was chosen because you can’t without spending billions of dollars.

Which does not imply that we haven't lost the architecture and hardware innovation associated with it.

On Youtube: https://www.youtube.com/watch?v=7RNbIEJvjUA

(The media.ccc.de server was a bit slow for me.)



Yes, but I'm actually located in Germany myself. I assumed that there was just too much traffic caused by HN.

Relevant: "Why KeyKOS is fascinating" - https://github.com/void4/notes/issues/41

Interesting... it somewhat ties to what people are doing in programming languages with linear/affine types, representing resources that can be passed around but not duplicated.

I was recently thinking a bit about how one could create a typed Forth-like system by adding a third parallel stack that would hold the types of the values on the data stack and be used for type checking.

I really wanted to like Forth (which is very similar to Lisp/Smalltalk systems, even possibly more elegant in some respects), because of the system extensibility, but the stack errors drove me crazy. So if the system had the ability to show the types of things on the stack (or name them) while programming, that would be a big help.

Such a Forth seems to be surprisingly close to Haskell (or another lambda-calculus-like language), which I've really been enjoying lately (I wouldn't go back to Lisp), but written in reverse.
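
A minimal sketch of the parallel-type-stack idea in Python (toy words and made-up type names, nothing more):

    data, types = [], []

    def push(value, ty):
        data.append(value)
        types.append(ty)

    def expect(*wanted):
        # Dynamic type check: the top of the type stack must match "wanted".
        got = tuple(types[-len(wanted):])
        if got != wanted:
            raise TypeError(f"stack holds {got}, word wants {wanted}")

    def add():                         # Forth's +  ( n n -- n )
        expect("int", "int")
        b, a = data.pop(), data.pop()
        types.pop(); types.pop()
        push(a + b, "int")

    push(2, "int"); push(3, "int"); add()
    print(data, types)                 # [5] ['int']

    push("oops", "str")
    add()                              # raises TypeError, not a silent stack error

Run the same bookkeeping at compile time over a word's body and you get stack-effect checking/inference, which is where the Haskell comparison starts to bite.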


A third stack is one more thing to keep track of.

The simpler solution to this in many languages is type tagging. E.g. Ruby (MRI at least) uses a few bits to indicate type to avoid creating full-blown objects for small-ish integers, symbols, etc., and you'll find the same approach in many strongly typed dynamic languages.
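
For illustration, MRI's Fixnum trick in a few lines of Python (simplified; on the real VM the word is a machine pointer, and heap objects are aligned so their low bit is always 0):

    def tag_fixnum(n):
        return (n << 1) | 1       # value shifted up, low "Fixnum" bit set

    def is_fixnum(word):
        return word & 1 == 1      # object pointers are even, so this is unambiguous

    def untag_fixnum(word):
        return word >> 1          # arithmetic shift recovers the value

    w = tag_fixnum(21)
    assert is_fixnum(w) and untag_fixnum(w) == 21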


I think you misunderstand. The 3rd stack would be parallel to the data stack, and would be used during interpretation for dynamic type checking (and also completion), or during compilation to do static type checking (and also type inference).

I got that. The point I was making is that type tagging is an alternative to this that saves you from keeping the extra stack in sync, as well as avoids sacrificing another register as a stack pointer.

It's also well trodden ground in the form of tagged unions of various forms going back at least to ALGOL 68 so there's plenty of literature on the subject.


Franck has already done this for you:

http://www.oforth.com/

Also look at another next-generation concatenative language: https://8th-dev.com/ Not object-oriented per se, but a very comprehensive protected high-level language/environment with "Forthy" origins.


That sounds really cool, and I'd love to see the result if/when you build it

Larry Masinter gave a presentation recently about Interlisp. https://youtu.be/x6-b_hazcyk

Seems like IBM's problem with iSeries is that they aren't encouraging new usage. They aren't working with universities to offer students any kind of courses, and don't seem to care whether or not the system is adopted. Not sure how sustainable it is, even considering the positive qualities of the system (security and simplicity of command structure, for example).

My university used to have researchers offer to pay students for testing out new OS ideas, such as 3D visualization of file systems. Some ideas looked super brilliant, but I had to look over at the researcher and smile because I didn't know how to use it. Neither would my grandmother. Sometimes the simplest ideas win.

> 3D visualization of file systems

> I didn't know how to use it

Skateboard through virtual file systems like "Plague" in the movie Hackers [1][2]

[1] https://youtu.be/kV_i8AefT8I

[2] https://youtu.be/LkqKFamTkME


An actual working implementation was psdoom, in which one could visualize processes in the Doom game environment, then wander around killing them :)

https://www.youtube.com/watch?v=JSpHc945G38


If you are into Lisp Machines, this talk on Symbolics by Kalman Reti (who worked there) is worth watching (1h):

https://www.youtube.com/watch?v=OBfB2MJw3qg

If you only have 15 minutes, he gives another demo here:

https://www.youtube.com/watch?v=o4-YnLpLgtk

He has another video where he does some actual hacking (smooth scrolling of sheet music for display while playing an instrument):

https://www.youtube.com/watch?v=sfgjL7EUHZ8


What else comes to my mind: OS/2 Warp, NeXT OS, SGI IRIX

I still miss Warp. It was actually a pretty great OS. BeOS was fantastic as well.

OS/2 Warp was nice to work in and fully object oriented too. It was just too resource intensive and by the time they shipped, Windows was the default.

At the time, if you had a powerful system, it felt so much faster than Windows. OS/2 Warp had much better multi-tasking support at the time as well and it felt much more crisp. Windows was the default, but it was also really mediocre at the time.

NeXTSTEP is pretty much alive.

For some definitions of ‘alive.’

macOS still uses Objective-C, and in spite of Swift love, some key frameworks (like Metal) make use of it.

The new C++ userspace driver framework is called DriverKit, most likely in homage to NeXT's Objective-C-based DriverKit.

Display PostScript has evolved into PDF.

WebObjects has transitioned to a Java-based framework that is only used by Apple, and was the basis for the J2EE design. Other than that, even fewer people used NeXT for servers than workstations.

EOF evolved into CoreData.

If you want Renderman it is still available.

The UNIX underpinnings never really mattered, in a Microsoft kind of way; their main purpose was to bring UNIX software into NeXTSTEP and be a checkbox on contracts.

So for me it is pretty much alive.


AS/400 is still going strong for many companies!

Am in my first IBM i class literally right now. It seems completely different from the Windows/Linux systems I've used for the last 25 years.

There are techs from 2 other companies in attendance. Apparently IBM i is used widely in banking.

It was the only thing NOT totaled when we got hit with ransomware, and is what holds our PII for our clients/workers.


> It was the only thing NOT totaled when we got hit with ransomware, and is what holds our PII for our clients/workers.

Because it's actually more secure, or because it's so obscure that the malware didn't know what to do with it?


Some combination of the two. IBM i has its vulnerabilities, but because of the way that the OS is built, getting code execution accidentally is extremely difficult, and even getting native code to run given full access to the system requires mucking about with the objects on disk.

However, with IBM i comes PASE, which provides an AIX-like environment that can run most software built for AIX directly, and that's going to have the same sort of vulnerabilities as any other UNIX system. As far as I know, that's mostly used for new development, though, because developers don't like learning unfamiliar platforms.


Indeed. A lot of banks overhauled their core banking platforms to run on IBM i in the early-to-mid '90s. It scales well and is considered secure.

I’m curious about the IBM i. Anyone here who can comment on how to get started? You can’t exactly buy one of these systems off the shelf.

There's an IBM i hobbyist discord[1] full of friendly people who will be happy to help you out with learning or acquiring your own iron, or you can use any of a number of public-access systems. The main public access system is pub400[2], but there are at least three that are run by people in the discord. I run one of them, albeit with the worst uptime of the bunch; PM for details. And finally, you can find all of the docs for the system on IBM's site[3], though they can take some getting used to.

[1] https://discord.gg/MzYzSvzN, or #ibmi on freenode

[2] https://pub400.com/

[3] https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/rza...


Get in touch with a 3rd party vendor. They can offer classes/training or point you to someone who can.

You could also take a look at IBM's redbooks. They're guides written by 3rd parties. http://redbooks.ibm.com/

And here are the IBM technical docs: https://www.ibm.com/docs/en

You can get used IBM Power servers on ebay, but they're pretty pricey. Not sure about licensing and such also, so you should look into that before buying.


Reinventing QNX will be cutting edge for decades.

RISC OS and the ROX Desktop under BSD/Linux.


