> Anyway, my take on all this hyper-crap is that it's useless without a good scripting language. I think that's why Emacs was so successful, why HyperCard was so important, what made NeWS so interesting, why HyperLook was so powerful, why Director has been so successful, how it's possible for you to read this discussion board served by Frontier, and what made the World Wide Web what it is today: they all had extension languages built into them.
I'm wondering what your wiser, older self has to say about this 20 years on. Isn't it useful that documents you wrote 20 years ago can still be read?
From my memories, the Web craze started well before JavaScript, and JavaScript really only jumped on the bandwagon; so how could it be the critical success factor for the Web?
The success of the Web and JavaScript in the last two decades speaks for itself; but in 2018, JavaScript and the procedural Web could very well be its undoing when considering the original goals of the Web, couldn't it?
I don't think Don meant that JS was the critical success factor for the Web. But that extensible scripting is crucial to the kind of Web Ted Nelson wanted in the first place.
From my lived experience, the Web craze would be better termed the Modem craze. And the critical success factor that turned it into the Web, was NSF removing the restrictions on commerce in 1995.
JavaScript is just what got HTML closer to some ideals of Xanadu. Not close enough for Ted's vision, but that is a broad sociopolitical vision.
Server side scripting languages were critical to the success of the web, before browser side JavaScript was available and matured.
Simple stateless Perl CGI scripts forked from Apache that talked to text databases or MySQL were the first, simplest step, but things got much more interesting with long-running stateful application servers like Zope (Python), Java, Radio UserLand, HyperCard, node, etc.
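For the record, the whole stateless CGI model fits in a handful of lines. Here's a rough sketch in node (the era's scripts were Perl, and the names here are invented for illustration, but the shape is the same): parse the query string handed over in the environment, print a header block and some HTML, and exit — a fresh process per request, so there's no state to manage.

```javascript
// Minimal CGI-style request handler: parse the query string, emit an
// HTTP header block and an HTML body. Every request forks a fresh
// process, so there is no state to manage.
function handleCgiRequest(queryString) {
  const params = {};
  for (const pair of (queryString || "").split("&")) {
    if (!pair) continue;
    const [k, v = ""] = pair.split("=");
    params[decodeURIComponent(k)] = decodeURIComponent(v.replace(/\+/g, " "));
  }
  const name = params.name || "stranger";
  return [
    "Content-Type: text/html",
    "",
    `<html><body><h1>Hello, ${name}!</h1></body></html>`,
  ].join("\r\n");
}

// A CGI server sets QUERY_STRING in the environment before forking us.
console.log(handleCgiRequest(process.env.QUERY_STRING));
```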
My favorite thing about node is that it lets you use the same language and libraries and data on both the client and server side. That's an enormous advantage that far outweighs JavaScript's disadvantages. But some people just can't see or believe that, for whatever reason, and they're fine with flipping and flopping back and forth between different languages, and hiring different people to write multiple subtly divergent versions of everything in different languages.
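To make that concrete, here's a minimal sketch (function and module names are just illustrative) of the kind of shared module I mean: the browser calls it for instant feedback as the user types, the server calls the very same function before touching the database, and the two sets of rules can't drift apart.

```javascript
// A single validation module usable from both browser and server code.
// Because both sides import the same function, the rules cannot diverge
// into subtly different client and server versions.
function validateUsername(name) {
  const errors = [];
  if (typeof name !== "string" || name.length < 3) {
    errors.push("must be at least 3 characters");
  }
  if (!/^[a-z0-9_]*$/i.test(String(name))) {
    errors.push("may only contain letters, digits, and underscores");
  }
  return { valid: errors.length === 0, errors };
}

// Client side: run on every keystroke for instant feedback.
// Server side: call the same function before writing to the database.
module.exports = { validateUsername };
```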
Face it: for all its faults, JavaScript won. I will always have a place in my heart for FORTH, PostScript, MockLisp, ScriptX, TCL, Python, HyperTalk, UserTalk, CFML, Java, and all those other weird obsolete scripting languages, but it's soooo much easier to program in one language without switching context all the time, even if it's not the best language in the universe. And TypeScript is a pretty darn good way of writing JavaScript.
You're right, the web was held back until it was finally considered "ok" to use it for commercial activity!
I'd say JavaScript is just what got HTML closer to implementing any ideal you want, and there's no reason Xanadu couldn't be implemented on top of current web technologies (except that Ted doesn't want to). But I don't think extensibility and scripting itself was part of Ted's original vision or implementation.
Just as so much has happened since MVC was invented (yet it's still religiously applied by cargo-cult programmers), so much has also happened since Xanadu was invented (like distributed source code control, for example), which requires a total rethinking from basic principles. We also have the benefit of a lot of really terrible examples and disastrous experiments to learn from (Wikipedia markup language, WordPress, etc). Many of Ted's principles should be among those basic principles considered, but they're not the only ones.
Hmm, HyperCard in the same list as Zope and node? Interesting. :-)
The idea that JavaScript "won" is a little controversial to me. I think it's huge and important, but the world is still changing. Embedded Python goes places that Node still can't. I absolutely see the value you describe in sticking to one ecosystem, but I don't think JavaScript/TypeScript/Node is the only way to get those benefits. (See also: Transcrypt) I really enjoyed the PyCon 2014 talk on the general subject: https://www.destroyallsoftware.com/talks/the-birth-and-death...
The most recent conversation I had with Ted was after someone had just demonstrated the HoloLens for him and a few others. Ted had some feedback for the UI developer, and it didn't have anything to do with JavaScript or that level of implementation detail at all. It was all about the user experience. I don't want to put words into his mouth, but like he says in this recent interview, this is all hard to talk about because it really has changed so quickly.
I do think you're right that a lot of what Ted wanted to see could be implemented today in JavaScript and Git. But I think of the technical meat of that vision as being about data-driven interfaces. I am simply not old enough to really understand how notions of "scripting" changed between the 60s and the 80s. But the fact that Xanadu was started in Smalltalk suggests to me that scripting was part of the vision, even if a notion like "browser extensions" might not have been in mind.
Completely agree that there are other voices to learn from, and other important mistakes that have been made since Xanadu! (I think Ted would agree, too.)
Reading documents from 20 years ago is a mixed bag. Links usually fail horribly, which was something Xanadu was trying to solve, but I'm not convinced they could have solved it so well that 20-year-old links would still actually work in practice.
I've always tried to write documents in a simple format that's easy to translate to newer formats, and minimizes noise and scaffolding and boilerplate.
When we were developing the HyperTIES hypermedia browser in 1988 [1] at the UMD HCIL, we considered using SGML as the markup language, but decided against it, because we were focusing on designing a system that made it easy for normal people to author documents, and working with SGML took a lot of tooling at the time. (It was great for publishing Boeing's 747 reference manual, but not for publishing poetry and cat pictures.) So we designed our own markup language. [2]
It's not which scripting language you have, it's that you have a scripting language at all that's important. HyperTIES was actually implemented in C, plus 3 different scripting languages: FORTH for the markup language interpreter and formatter [3], PostScript for the user interface and display driver and embedded applets [4], and Emacs MockLisp for the authoring tool [5].
When you try to design something from the start without a scripting language, like a hypermedia browser or authoring tool, or even a window system or user interface toolkit, you end up getting fucked by Greenspun's Tenth Rule [6].
[6] Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
But when you start from day one with a scripting language, you can relegate all the flexible scripty stuff to that language, and don't have to implement a bunch of incoherent lobotomized almost-but-not-quite-Turing-complete kludgy mechanisms (like using X Resources for event handler bindings and state machines, or the abomination that is XSLT, etc).
TCL/Tk really hit the nail on the head in that respect. TCL isn't a great language design (although it does have its virtues: clean simple C API, excellent for string processing, and a well written implementation of a mediocre language design), but its ubiquitous presence made the design of the Tk user interface toolkit MUCH simpler yet MUCH more extensible, by orders of magnitude compared to all existing X11 toolkits of the time, since it can just seamlessly call back into TCL with strings as event handlers and data, and there is no need for any of the ridiculous useless brittle contraptions that the X Toolkit Intrinsics tried to provide.
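To illustrate the callback-as-script pattern Tk nailed (a toy sketch in JavaScript with a made-up stand-in "interpreter," not how Tk is actually implemented in C): the toolkit stores event handlers as plain strings of scripting-language code and evaluates them when the event fires, so the compiled core needs no per-widget binding machinery at all.

```javascript
// Sketch of Tk's callback-as-script pattern: the toolkit stores event
// handlers as strings of scripting-language code and hands them to the
// embedded interpreter when the event fires.
class ScriptableButton {
  constructor(interp, label) {
    this.interp = interp;   // the embedded interpreter (a "Tcl" stand-in)
    this.label = label;
    this.commands = {};     // event name -> script string
  }
  bind(event, script) {
    this.commands[event] = script;
  }
  fire(event) {
    const script = this.commands[event];
    if (script) this.interp.eval(script);
  }
}

// A toy "interpreter": just Function compiled over a shared scope.
const scope = { clicks: 0 };
const interp = { eval: (src) => Function("scope", src)(scope) };

const button = new ScriptableButton(interp, "OK");
button.bind("click", "scope.clicks += 1;");
button.fire("click");
```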
The web was pretty crippled before JavaScript and DHTML came along. Before there was client side JavaScript, there were server side scripting languages, like Perl, PHP, Python, Frontier (Radio Userland) [7], HyperTalk, etc.
Frontier / Manila / Radio Userland was a programmable authoring tool, content management system, and web server, with a built-in scripting language (UserTalk, integrated with an outliner and object database). That scriptability enabled Dave Winer and others to rapidly prototype and pioneer technologies such as blogging, RSS, podcasting, XML-RPC, SOAP, OPML, serving dynamic web sites and services, exporting static web sites and content, etc.
One of the coolest early applications of server side scripting was integrating HyperCard with MacHTTP/WebStar, such that you could publish live interactive HyperCard stacks on the web! Since it was based on good old HyperCard, it was one of the first scriptable web authoring tools that normal people and even children could actually use! [8]
I guess it's a matter of perspective whether you like the procedural Web (the developer/creative perspective) or not (the perspective of the consumer who gets all kinds of scripts for tracking, mining, phishing, and other nefarious purposes, all the while not being able to save something for later reading).
I have no doubt JavaScript was absolutely necessary to develop the Web to the point it is today. But I had hoped that development of HTML (the markup language) would keep up to eventually provide declarative means to achieve some of what only JavaScript can do, by sort of consolidating UI idioms and practices based on experience gained from JavaScript. But by and large this hasn't happened.
What has happened instead is that JavaScript-first development has taken over the Web since about 2010 (I like React myself when it's a good fit, so I'm not saying this as a grumpy old man or something). And today there's no coherent vision as to what the Web should be; there's no initiative left to drive the Web forward, except for very few parties/monopolies who benefit from the Web's shortcomings (in terms of privacy, lack of security, its requirement of a Turing-complete scripting environment for even the most basic UI tasks, etc).
> the abomination that is XSLT
Not trying to defend XSLT (which I find to be a mixed bag), but you're aware that its precursor was DSSSL (Scheme), with pretty much a one-to-one correspondence of language constructs and symbol names, aren't you?
In the ideal world we would all be using s-expressions and Lisp, but now XML and JSON fill the need of language-independent data formats.
>Not trying to defend XSLT (which I find to be a mixed bag), but you're aware that its precursor was DSSSL (Scheme), with pretty much a one-to-one correspondence of language constructs and symbol names, aren't you?
The mighty programmer James Clark wrote the de-facto reference SGML parser and DSSSL implementation, was technical lead of the XML working group, and also helped design and implement XSLT and XPath (not to mention expat, TREX / RELAX NG, etc)! DSSSL was totally flexible and incredibly powerful, but massively complicated, and you had to know Scheme, which blew a lot of people's minds. But the major factor that killed SGML and DSSSL was the emergence of HTML, XML and XSLT, which were orders of magnitude simpler.
There's a wonderful DDJ interview with James Clark called "A Triumph of Simplicity: James Clark on Markup Languages and XML" where he explains how a standard has failed if everyone just uses the reference implementation, because the point of a standard is to be crisp and simple enough that many different implementations can interoperate perfectly.
I think it's safe to say that SGML and DSSSL fell short of that sought-after simplicity, and XML and XSLT were the answer to that.
"The standard has to be sufficiently simple that it makes sense to have multiple implementations." -James Clark
My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
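That "nice flexible XML transformation library" really is about a dozen lines once you have a real language: walk a tree, match nodes against templates, emit output — XSLT's match/apply model without XSLT. A toy sketch (plain objects standing in for DOM nodes; all the names are invented):

```javascript
// XSLT's template/apply-templates model in plain JavaScript: a template
// table maps tag names to functions, and transform() recurses via the
// "apply" callback, like <xsl:apply-templates/>.
function transform(node, templates) {
  const rule = templates[node.tag] || templates["*"];
  return rule(node, (child) => transform(child, templates));
}

const doc = {
  tag: "article",
  children: [
    { tag: "title", text: "Greenspun's Tenth Rule" },
    { tag: "para", text: "Any sufficiently complicated C program..." },
  ],
};

// Each entry plays the role of an <xsl:template match="..."> rule.
const templates = {
  article: (n, apply) => `<div>${n.children.map(apply).join("")}</div>`,
  title: (n) => `<h1>${n.text}</h1>`,
  para: (n) => `<p>${n.text}</p>`,
  "*": () => "",   // default rule: drop unknown elements
};

const html = transform(doc, templates);
```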
Excerpts from the DDJ interview (it's fascinating -- read the whole thing!):
>DDJ: You're well known for writing very good reference implementations for SGML and XML Standards. How important is it for these reference implementations to be good implementations as opposed to just something that works?
>JC: Having a reference implementation that's too good can actually be a negative in some ways.
>DDJ: Why is that?
>JC: Well, because it discourages other people from implementing it. If you've got a standard, and you have only one real implementation, then you might as well not have bothered having a standard. You could have just defined the language by its implementation. The point of standards is that you can have multiple implementations, and they can all interoperate.
>You want to make the standard sufficiently easy to implement so that it's not so much work to do an implementation that people are discouraged by the presence of a good reference implementation from doing their own implementation.
>DDJ: Is that necessarily a bad thing? If you have a single implementation that's good enough so that other people don't feel like they have to write another implementation, don't you achieve what you want with a standard in that all implementations — in this case, there's only one of them — work the same?
>JC: For any standard that's really useful, there are different kinds of usage scenarios and different classes of users, and you can't have one implementation that fits all. Take SGML, for example. Sometimes you want a really heavy-weight implementation that does validation and provides lots of information about a document. Sometimes you'd like a much lighter weight implementation that just runs as fast as possible, doesn't validate, and doesn't provide much information about a document apart from elements and attributes and data. But because it's so much work to write an SGML parser, you end up having one SGML parser that supports everything needed for a huge variety of applications, which makes it a lot more complicated. It would be much nicer if you had one SGML parser that is perfect for this application, and another SGML parser that is perfect for this other application. To make that possible, the standard has to be sufficiently simple that it makes sense to have multiple implementations.
>DDJ: Is there any markup software out there that you like to use and that you haven't written yourself?
>JC: The software I probably use most often that I haven't written myself is Microsoft's XML parser and XSLT implementation. Their current version does a pretty credible job of doing both XML and XSLT. It's remarkable, really. If you said, back when I was doing SGML and DSSSL, that one day, you'd find as a standard part of Windows this DLL that did pretty much the same thing as SGML and DSSSL, I'd think you were dreaming. That's one thing I feel very happy about, that this formerly niche thing is now available to everybody.
> But the major factor that killed SGML and DSSSL was the emergence of HTML, XML and XSLT, which were orders of magnitude simpler.
That interview is wonderful, but in 2018, while XML has been successful in lots of fields, it has failed on the Web. SGML remains the only standardized and broadly applicable technique to parse HTML (short of ad-hoc HTML parser libraries) [1]. HTML isn't really simple; it requires full SGML tag inference (as in, you can leave out many tags, and HTML or SGML will infer their presence), SGML attribute minimization (as in `<option selected>`), and other forms of minimization only possible in the presence of a DTD (e.g. declarations for the markup to parse).
> JC: [...] But because it's so much work to write an SGML parser, you end up having one SGML parser that supports everything needed for a huge variety of applications.
Well, I've got news: there's a new implementation of SGML (mine) at [2].
> But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT
My thoughts exactly. Though I've done pretty complicated XSLTs (and occasionally still do), JavaScript was designed for DOM manipulation, and given XSLT is Turing-complete anyway, there's not that much benefit in using it over JavaScript except for XML literals and, if we're being generous, maybe as a target language for code generation, it being itself based on XML. Ironically, the newest Web frameworks all have invented their own HTML-in-JavaScript notation, e.g. React's JSX to drive virtual DOM creation, even though JavaScript started from day one with the principal design goal of being a DOM manipulation language.
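And all those HTML-in-JavaScript notations compile down to roughly the same call: JSX turns markup into a function call, conventionally named h() or createElement(), producing a virtual DOM node. A minimal sketch (names are the usual hyperscript convention, the details are illustrative):

```javascript
// Minimal hyperscript function: roughly what JSX compiles to.
// <a href="...">Xanadu</a> becomes h("a", { href: "..." }, "Xanadu"),
// producing a plain-object virtual DOM node.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children: children.flat() };
}

// Render a virtual node to an HTML string (elements and text only,
// no attribute escaping -- just enough to show the idea).
function renderToString(node) {
  if (typeof node === "string") return node;
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = node.children.map(renderToString).join("");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

// What <a href="http://xanadu.com">Xanadu</a> would compile to:
const vnode = h("a", { href: "http://xanadu.com" }, "Xanadu");
```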
> My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT
+1. Though to be fair, XSLT has worked well for the things I did with it, and version 1 at least is very portable. These days XSLT at W3C seems more like a one-man show where Michael Kay is both the language specification lead and the provider of the only implementation (I'm wondering what has happened to W3C's stance on at least two interoperable implementations). The user audience (publishing houses, mostly), however, seems OK with it, as I witnessed at a conference last year; and there's no doubt Michael really provides tons of benefit to the community.