Anyone know where to find the full length interview? This is just a clip.
"We thought computing would be artisanal. We did not imagine great monopolies."
In a sense, it still is, and always will be. I feel that the core of programming, just making the computer do things other people want it to for money, will always lead us back to a highly individualized workflow, where a single programmer has complete mastery over the system. It's just that the programmer-artisan must find work in the context of these great, monopolistic organizations. Such organizations inevitably apply pressure to reduce the amount of skilled labor required to perform a given business function, and thereby increase their leverage in salary negotiations. The result is hyper-specialized assembly-line processes that eventually just stop working, because no one has the broad systems-level knowledge required to trace a bug to its source anymore. It would be great if these monopolies could just accept the artisanal nature of software development and pay people what they're worth.
Yet Ted Nelson killed what he created because he wanted a monopoly for himself.
From the start he covered ZigZag and zzstructure with patents and could not get his own software going (I don't think he was a very good manager). Several decades later, when some enthusiasts reached out and tried to recreate ZigZag as the open-source project GZigZag, he eventually screwed over the people involved.
I think it's fascinating to listen to baby boomers — not just in tech but in almost every realm — when they talk about the naive optimism of the '60s and '70s. Free love, hippy communes, drug use, tech-as-democratizing force and widespread anti-war sentiment eventually gave way to The War on Drugs, terrorism-fueled paranoia, #MeToo, income inequality and hypercapitalist tech monopolies.
They weren't just profoundly wrong about the future; a lot of them unwittingly helped to create what we have now. And to this day none of the boomers I've talked to seem to understand their role in that series of cultural and economic shifts. I'm generalizing a lot here, but it's a pattern I keep seeing when it comes to the legacy of that particular generation.
The problem was ceding the technical culture to people who only want TV 2.0. Wanting a new form of entertainment is fundamentally at odds with people wanting to build bicycles for the mind.
The first time you build anything it's artisanal. Look at the process of building the first doll, or the first car, etc., of a multi-million-unit run of the same object. The difference in software is once you're done with the first one, there's nothing else to do.
That's only if the company doesn't know how to run a software project.
Most of the value extracted from software is in the maintenance phase, not the development phase.
When the principal developer on a successful project leaves, it multiplies the time and effort involved in maintaining the code, and then what happens? No one does the maintenance, and two years later the new team decides to rewrite it. Sunk cost fallacy is not a fallacy when you're talking about years of fully tested work lost, and you can't even find people who know what the requirements were.
"The difference in software is once you're done with the first one, there's nothing else to do."
I'm not sure if I'm parsing what you wrote correctly. Currently we can't get enough software written. And solutions, patterns (but not in the god-awful Gang of Four sense), are transferable between isomorphic problem spaces.
Following the doll metaphor: a police doll, a firefighter doll, a child doll, a werewolf doll, a gorilla doll... *a bicycle!?* Sometimes you truly come up with something totally new, and adapt.
And a large part of the reason we can't get enough software written is that our teams are adversarially pitched against one another. One set of Great Minds is writing advertising, code, another ad-blockers, another HFT and financial derivatives crud that's quite likely to precipitate yet another financial speculation-bubble-crash cycle.
And basic needs and wants go unanswered, in large part because those with the needs and wants have no financial means to pay for them.
For numerous reasons, the information which should be more available and standardised ... is instead obfuscated and/or sliced-and-diced behind ad-ridden, paginated, metadata- and classification-free CMSes.
You can dig in to find various economic, power, political, psychological, sociological, and other reasons for all of this. But the upshot is that the frenetic code-generating that's going on is doing little to move the needle on improving either quality-of-life or overall systemic health. Frequently the opposite.
The problem is our tools and languages for expressing thought to a computer are terrible. And we're terrible at developing better ones, terrible at evaluating their fitness, and terrible at adopting better ones once they come along.
It's a limitation of our brains. The cost of re-learning how to think is too damn high.
The solution space of ways to solve problems with computers is highly dimensional. We are climbing a local maximum, and we're not going to be visiting a different part of that space any time soon.
What's even worse is we can't even iterate quickly because it takes us so long to evaluate large changes, and the costs of making small changes are too high to be worth it.
We're still using Algol syntax 50 years later! It isn't better, it isn't well constructed. It's arcane and unnecessary, but the tech costs of moving away from it are too high. Someone could improve Algol syntax in a few days at most, and every Algol-derived language (C/C++/Java/JS/C#/...) would benefit. But nobody in hell is gonna update every Algol-derived compiler ever written and update all the code ever written to match. It'd be ludicrous!
But here, exactly the type of incremental improvement you'd make in a development cycle if it were a library being written now is impossible to make because of change costs. Think of rapid prototyping and small iterative improvements; this is exactly the opposite.
When the engineering overhead to fix a bug is minimal ( "I'll code it in 10, run through automated tests and CI, then deploy") you fix every small bug you see. When the overhead is huge ("10 mins to code. But it'll reset our multiweek manual test pass, and force us to not physically ship until next product cycle") you don't bother and code rots.
Your comment reminds me of someone's essay on programming languages, and of the improved efficiency of adopting same. There was machine language, and then assembler, and then the first set of third-generation languages, and then ... C. And for all that's come along after C, there's been some progress, but in general, relatively little.
Each variant has its advantages, and disadvantages. C itself has major problems (it's insecure as hell), but ... works well enough. And so we're caught in a bit of a hairpin-bend effect.
I'd have to see a particularly good argument that this is a fundamental limitation of human minds, or of technological sociology (that is: the net effects of programmer skill and education), or something like that.
It stays artisanal because you can modify and craft your product even after it is shipped and used, now in real time with web apps.
It’s a quite unique mix of constant craft and creative changes with world scale instant distribution of these artisanal changes.
There is right now a programmer-artisan crafting some Google/Facebook/Startup code and this code will directly change the product. No intermediate.
It’s a lot more like radio and talk tv –where talents have to recreate each day their entire show, and creating the show is producing it– than the toy industry, where creation and production are entirely separate processes.
As a researcher, this is kind of what scares me about becoming a professional SE. In research, every time I start a new project, there are lots of unknowns and challenges. Hopefully the real world isn't so dull!
It kind of is. A typical software job is about doing pretty much the same thing you do in every other software job; the primary variable is the client and their idiosyncrasies.
(Lucky are those who get a job doing something more interesting than CRUD webapps or mobile apps.)
> I'm so glad you found the interview of interest. Unfortunately, as with our print interviews, the full unedited interview is not available. However if we get a number of similar requests, we may consider releasing a longer edit,
Please message him; I'd be interested in an uncut version as well.
I'm the editor (Stephen Cass). I really should have added "independent" to "similar" in my response there, but oh well ;). Anyhoo instead of emailing me, you can just let me know here if this thread has inspired you to want to see a longer version!
We're unlikely to just drop the raw footage (for the same reasons that journalists very rarely publish scans of the notes they take during interview, or the raw audio of any recordings, because part of our whole raison d'etre is editing things), but as I indicated, if there's enough interest to justify the resources, I could argue for a longer cut.
(Also, ProTip re: the poster below, yes, my personal email is publicly available with a google, but no, it's not a good idea to send a lobbying email to a journalist's personal account when there's a known-good work address available, not least because who knows when that email is going to be seen, and depending on the journalist's filters, it might never be seen at all.)
I saw a great talk from Alan Kay where he gives an argument that studying historical systems is important for innovation: Imagine all the possible directions computing could move toward over the next 50-60 years. Some will be good (near-optimal), but others will probably be a waste of time and merely pursued out of fashion, while some of today's good ideas will be dropped and forgotten.
Thus in the future, when programmers find they've reached a "dead end" / local maximum, they might study ideas from 2018 and find a way forward. In the same way, someone who wants to innovate today could be advised to explore some of the "paths not taken" from the 1950s and '60s.
Not every interview situation works this way, but I very much appreciated that behind the scenes look. Sound bites that can seem incredibly arrogant come off a lot more measured in the context of him repeating some things he's said before, and deciding he'd rather emphasize a different aspect.
I wouldn't make this argument generally; it's a journalist's job to not just let a subject present their own views unchallenged. In that 1990 interview, I enjoyed hearing some of the back-and-forth like when the interviewer wanted Ted to sound less like an advertisement at one point.
If you did manage to squeeze more sense out of Ted, I'd love to see it. Most especially on his "artisanal" point.
I've been digging at the trope of technology as decentralising, and am coming to suspect that it's anything but. I've found support for this in a few places, including Ed Glaeser of Harvard (particularly his Triumph of the City and many YouTube talks), and Evgeny Morozov and Zeynep Tufekci as regards the Internet and communications, among others.
It's starting to hit me that we had a locally-oriented, peer-to-peer, decentralised world at one time, and it was decidedly lo-tech.
> It's starting to hit me that we had a locally-oriented, peer-to-peer, decentralised world at one time, and it was decidedly lo-tech.
These are almost exactly the words I am looking for. We had paradise, and then we threw it away with the same artifice we used to build it in the first place.
I'm not sure I'd describe it as "paradise", though I'm also not sure I'd describe it as "not paradise". Reserved judgement there.
But I will note that preindustrial life had many of the characteristics that decentralisation and peer-to-peer advocates seem to promote. Which gives a few opportunities for consideration:
1. Can technology provide that same level of decentralisation? Evidence is suggesting it does precisely the opposite, at least to date.
2. What are the other characteristics of decentralised / p2p systems? Are they inherently feudalistic? If not, what are the other possible structures? Are they inherently anti-technological?
3. Is "bad hygiene" necessary for decentralisation? That is, explicitly creating dysfunction in systems which are too large? I'm still developing that thought, though there's a Mastodon tootstorm here: https://mastodon.cloud/@dredmorbius/36518392
4. Are there ways of introducing necessary degrees of "bad hygiene" without bringing the entire high-technology regime crashing down?
5. Is p2p / decentralisation good, and if so, why? Or why not?
I was referring more to the decentralized, p2p phase of technology that we woke up from about 10-15 years ago. That world only existed because bandwidth to the net was so constrained. Before that world we had mainframes, and had to deal with exactly the same kind of centralized control bullshit as before.
I am learning that technology is only empowering when the only entity capable of wielding influence over it is its user. In all other cases you're just somebody else's pawn. When I was young, technology was my escape from that reality of pawnship, and I'm furious that this is no longer the case.
The only programming I want to do is artisanal. (That sounds really...something, but I'll leave it there.)
I'm glad to work with other people, but it feels like the past ten years or so have strains of collectivism trying to crowd out the artisanal aspect because social or whatever. I'm sure part of it is the enormous incidental complexity brought on by the web.
I know it's wrong, but I'm imagining a Hoxton-based startup offering an artisanal computer bureau running on a restored IBM 360, where the punch persons craft each punch card by hand using an 18th-century bradawl.
Top of the line would be a leased-line, cyberpunked-up ASR-33 (actually, they are quite cyberpunk as is).
I think productization of software use, for specific purposes, was inevitable... and therefore monopolies because of economies of scale.
However, I also thought this would happen to libraries/components, because the same dynamic is in play. But it's only happened to very large "libraries" that are in effect a platform, putting the programmer at a higher level of abstraction. These are often called "engines": databases, game engines, etc.
Yet... for the vast majority of library product categories, open source has won...
It might be as simple as whether the programmer-user has the expertise to make that engine themselves, or whether there's a gap (as for non-programmer users). The coding imperative: if I can write it, I must write it. How can a business compete with that?
Another factor is companies "commoditizing their complements", so e.g. we got Java, to sell Sun boxes. It became impossible for a non-free language to win.
In a sense, it[programming] still is[artisanal], and always will be.
Certainly, programs will continue to be made essentially "by hand". But as Nelson goes on in the discussion, he expands his statement to mean something like universal programming literacy, where "citizen programmers" produce programs for each other. Despite some stabs at putting programming in the elementary school curriculum, we're fairly far from that.
I understand what you are getting at, but aren't we some way towards the "citizen programmer", especially when you consider the number of private individuals contributing to open-source software? I would also class many of the independent phone developers as citizen programmers. I would say that we have never had more independent developers writing software that other individuals want.
> I feel that the core of programming, just making the computer do things other people want it to for money, will always lead us back to a highly individualized workflow
I would be willing to bet against that. In fact, it's safe to bet that "always X" is false regardless of X, especially when humans are in that equation. "Always" is a very long time :)
I'm not a betting man, but history tends to repeat itself. If you read some of what was written in the 70s and 80s about programming, you often find certainty that programming is going to be commoditized and standardized within our lifetimes. I read one book where the author (who's very well-known in the CS world) was sure that by year 2000, we would have a standard catalog of programming tasks with labor estimates similar to what auto mechanics have. If anything, I think we're even further away from this now than we were in the 80s. It would be hard to find anyone who can even imagine such a catalog, much less create one.
I heard an anecdote that in the late '60s someone put a bunch of interns in a room with a few computers and a digital camera to solve the object-detection problem; he figured they'd be done by the end of the summer. Thirty years later, that looked very much impossible to do. But fifty years later, it was eventually solved. Like I said, "always" and "never" are very long time periods...
In the late 1970s, I was the only teacher with programming experience in a school system. I will never forget the words of my superintendent, coming back from a conference in Arizona. The experts at the conference told everyone that the world wouldn't need many programmers in a few years. And so, he had concluded, the school wouldn't need more than the two Apple IIs it had borrowed.
Except that most computers now are closed and don't allow control of the system. In fact, most computers now don't even have adequate programming tools besides those meant for children.
I'm not sure that is generally true. The median software developer salary in the US is $102,280[1]. An income of $100,000 puts you in the top 9.15%[1]. A top 5% income writing software isn't completely unheard of, but it does not seem to be represented by any kind of majority. There are always outliers.
Thanks for fact checking me, I should have done my homework first.
I still think that the 9.15% stat supports my claim, though. I hang out with mostly non-engineers, so it's a bit comical to be shamed by Hacker Newsians for proposing that we get paid enough. Most studies on the matter have demonstrated that income over $75k has no correlation with happiness, so for me I guess I feel more concerned about finding meaningful work than a higher salary.
On another note, I am not sure why all of these people claiming to provide value equal to 10x their salary don't simply work for themselves and see if they can really rake in that much without the support of their company.
I understand where they're coming from. Pay, of course, is a function of supply and demand. Nothing more, nothing less. The greater the demand, relative to the supply, the higher the price. We're constantly told that developers are in high demand and that companies cannot find anyone to hire, to the point that the government has even pushed 'learn to code' programs to try to bring more people into this 'future industry', so prices should be rising rapidly to close that gap... but it's not happening for the most part.
So we have this disparity where the market is not reflecting what the media (et al.) are reporting. I can see how that is leading to much confusion for those in the industry.
In 2014, the median income for software developer, applications was $103,054.85[1] (in 2017 dollars). Today, the median income for software developer, applications is $102,212.06[2] (again, in 2017 dollars). Developer incomes are on the decline.
It appears that we've reached equilibrium. If the industry were honest and said "go away, the industry is full", people would have less belief that they're worth more, but that's not the message we continue to hear.
The true programmer-artisan (often called 10xer, guru, ninja, whatever) builds systems that cause money to flow into bank accounts to the tune of 6, 7, 8 figures. Yet if this person wants a raise, they will never be able to get one. They will have to change companies and abandon their code. What do you think a software industry looks like that's full of abandoned code? Much like it does now.
I was once told by a recruiter that "industry standard" was to only expect a 5% raise with a new job. But I once tripled my salary in a single job offer, no lie.
I'm just saying that the way compensation is determined currently (primarily per tech stack and years of experience) doesn't seem to have a good effect on the industry as a whole. It encourages rushing into "hot" technologies, and discourages sticking around with a single company and project for more than a couple of years. I think companies have to start taking into account the possibility that the value of certain classes of employees (like software developers, for one) to the organization might drastically increase without that person learning any new skills.
If I'm making $50,000 as a developer at Company X, but I know that other developers using the same tech I do at other companies are making $75,000, what are the chances I'm going to get a $25,000 raise from Company X? Probably zero. But the chances of me getting myself a raise by switching jobs are pretty good. Likewise, if I know that developers using Tech Y are making $100,000, I'm going to try to get my company to adopt Tech Y, regardless of whether it's good for the company or not. Then, once I know enough about it to impress an interviewer, I'm gone. I have a theory that this dynamic is behind the phenomenon of new techs coming out of Silicon Valley suddenly becoming extremely popular and then causing huge waves of buyer's remorse.
My observation from working in this industry for 10 years, is that the true potential of software is not unlocked until it's been in production for at least 3 years. This is a bit of an oddball view of mine but I believe it's backed by research. 3-4 years is around the time when the developers are familiar enough with the requirements and the code to make big, bold plans and execute on them. But usually that doesn't happen -- usually, the developers who created it leave before then for greener pastures, and the new folks who come in have zero context on the decisions behind the original system. So they try to rewrite it, repeat all the original mistakes, and maybe even add some new ones.
The easiest thing companies can do to fix this is just get more aggressive with counteroffers. If a developer quits, ask them how much they're getting and then match it. Counteroffers happen all the time with new hires, but rarely with existing employees. It should be flipped; companies should try harder to retain than to hire new.
Not in all countries. The train driver who takes me to London has a basic salary of 60k, for a four-day week! With OT that's 75 to 80k, and they have a DB pension.
Why do you feel that they are not being paid what they are worth?
Because they perceive that there are other people in their organizations making far more money who don't contribute much value... or at least certainly not 2x, 3x, 4x, 10x, whatever, more value.
Whether that is actually the case or not is tough to say, exactly because it's tough to directly measure the value created by an individual developer (in most cases).
"All I can say is: close your eyes and think what it might be. My first software designs were largely done with my eyes closed. Thinking if I hit that key what should happen if I hit that key..."
In my experience, this is a common characteristic of the 10x programmer. It is not about frameworks, it is not about patterns; both things help, but it is really about being able to run the system in your head.
> this is a common characteristic of the 10x programmer. It is not about frameworks, it is not about patterns
IMO, the hope is that tooling can somehow obviate the need for 10x programmers (and their annoying salary requirements) by putting enough guard rails in place. But the reality is additional layers of abstraction all come at a cost (computational/cognitive), systems thinking is still required (the core trait of effective programmers), and, most importantly, great programs aren't made by distilling current best practices into tools.
As much as the industry tries, it can't replicate hard-won experience.
In a similar vein, the real task of debugging is not to make the program stop doing the wrong thing. It is to create in your mind an understanding of why the program as written necessarily behaves in the way it does.
Once you have that understanding, actually fixing the problem is straightforward.
> "In my experience, this is a common characteristic of the 10x programmer. It is not about frameworks, it is not about patterns; both things help, but it is really about being able to run the system in your head."
I think this extends to users as well. Well before I knew how to program I'd approach new software as if I'd designed it myself, unconsciously putting myself in the mind of the designer. The applications I struggled to learn were those where I couldn't build a mental model of what it was doing and why.
Current real-world systems are too complex to simulate accurately in your head. I've learned to assume that I don't really understand a system until I've collected sufficient trace output from the real thing.
Are they? I "run" systems in my head all the time. I'm familiar with the code base, from that I'll know roughly where the bug is. I can tell you which function a bug will probably be in and how it's probably happening, just from having the symptoms described to me.
Hell, even when I'm not familiar I can guess what's happening based on past experience.
It's some sort of spatial ability, like navigating a map (though I recently found out I'm terrible at visualizing and almost completely lack a 'mind's eye'). I also find I rarely get lost in computer games and very rapidly learn FPS maps.
Maybe it's one of those weird skills you don't even realise some of the population lack.
Personally, I've always thought using traces is a massive crutch that's too expensive in cost-versus-benefit terms; I don't get the point. Far too many extra lines of code for too little gain.
We are literally weeks into Meltdown/Spectre. The code you write runs on a complex programming language you didn't write, that may be running on a VM, that is running on an OS, that is running on a multicore CISC processor with 100s of publicly known instructions, that can only be built due to modern developments in quantum mechanics.
You run idealized versions of systems in your head.
"You run idealized versions of systems in your head."
Definitely true but running idealized approximations of systems [1] are also sufficient, at least in my experience, to get a rough idea of where a bug might be a majority of the time.
Certainly. I guess my main objection was to the parent questioning whether modern systems are "too complex to simulate accurately in your head". Effective programming, as you note, requires acknowledging that you are approximating, and understanding the scopes and ramifications of those approximations.
> "You run idealized versions of systems in your head."
Yes, they're idealised versions, but that still has value. The point at which the idealised model doesn't match up with reality is the point at which further investigation is required.
It's an effective way of filtering out the parts of the system that you already have a reasonable grasp on so that you can focus on parts that you don't.
On a more general point, almost all models are idealised approximations. The whole idea with modelling a system is to abstract away the rough edges by encapsulating complex behaviour in discrete groups. By working with these simplified models, the idea is that you gain a better intuition of the activity found within a system.
In short, the whole purpose of models is to guide intuition.
Maybe I'm really lucky, but in my entire programming career (12 years) I can only remember 2 or 3 bugs that weren't caused by errors in the top-level, most abstract, version of the code.
i.e. bugs that were due to compiler/bare-metal/etc. problems. I remember a bug in old-skool ASP that would only surface due to some weird code condition, a bug with an IMAP server we made that was used by Outlook, caused by Outlook using an int16 to store IDs, and a race-condition bug.
I've worked almost entirely in web dev with a mix of enterprise, B2B and B2C systems, some of those having millions of users and some of them being SQL-intensive.
Yeah, this is exactly how my brain works, and it has led to my specialty being finding the difficult bugs that have eluded others. I've got a loose-leaf notebook that I use to help, but all that's really in it is rough drawings of circles with labels scrawled next to them, arranged in a shape with maybe some lines between them. All it's for is helping me imagine the system better in my head, to connect the dots.
It's affected my design philosophy too: I always try to design software such that I could draw it as a series of discrete components with input/output arrows between them (and, of course, the assumption that you could theoretically zoom in to a component to see what's inside of it).
I work this way to a large extent. I have pretty good recall of the whole codebase, sufficient to step through in my head and spot actual bugs while AFK. I really dislike it when people make assertions about what others supposedly cannot do.
I believe this is called "being mechanically minded". If you understand the physical rules and the components, you understand the inner workings. Good mechanics have this. Bad mechanics don't. Sadly being an auto mechanic pays little, but the skills are obvious in good programmers too.
That is still thinking within the boundaries of programming logic. The point that Ted Nelson makes is that you should never stop thinking beyond the immediacy of that, and always remember what the code is ultimately hoping to achieve in the end:
> "Thinking if I hit that key what should happen if I hit that key..."
The answer to what should happen is not rooted in thinking about the code. Programming is taking meaningful thoughts and turning them into thoughtless instructions that can be executed by a machine. And that is hard!
We take it for granted now, but something as basic as the idea to use on- and off-switches (which is what bits are) as representations of numbers, let alone strings of letters, is a ridiculous leap of thought. Then we built more complex software on top of that with more, similarly ridiculous leaps of thought. None of those leaps came directly from the code itself.
This implies you are trying to simulate fully. Just as most programmers don't have to simulate the rather complex circuitry in an adder, you probably don't have to simulate the complexities of most of your system. Or, when you do, you can take the simple parts for granted.
Similar to plotting a trip to the grocery store. You don't think of the effort your body is making to stay balanced, nor of reading labels.
Yet you probably have a decent idea how long the trip will take.
Maybe "simulate" is the wrong word, then. Especially accurate simulation. Plotting to shop groceries is not a simulation. You create a model of the problem and roughly base your solution on that. The model is based on a very large number of assumptions and a great deal of ignorance of the entirety of the process it represents.
You can relatively accurately predict that when you press "OK", the dialog will close. You can maybe assume and envision a future where calling "closeDialogBox(box)" will close the dialog box. Maybe the lines in it, which simply call "box->close()" and return "STATUS_OK", can be assumed to do what you think they will do. The underlying machine code, too. The CPU, sure.
In the end the whole system is beyond your control and is subject to random input like cosmic rays flipping bits, power outages, leaking caps. Before that you have the operating system, the compiler, the overall codebase, perhaps more likely sources of mental model mismatches. I certainly wouldn't draw the line of "accurate simulation" at the source code base.
IMO one of our strengths as humans is that we can easily create these simplified models, based on rough probabilities founded on previous experience, "gut feelings" and weighed risks, and go to great lengths without accurate simulation.
I'm sympathetic to your point, but I suggest you have the wrong take here. It might have been better if people referred to modeling the system, but that ship sailed, as it were.
That is, we are primarily discussing how people use that phrase in informal communication. And you'd be surprised just how far many can take the simulations in their head.
It's not like "simulation" is commonly used to describe one's thoughts. So your idea is that we're just arguing about some sort of hypothetical colloquial sense of the word, even though it's obviously used here in the context of comparing humans to computers?
I wish IEEE would put up the whole interview. After all, I own the copyright on those words. I am making inquiries about it.
Rather than reply individually to the misimpressions of my work on this Reddit page, let me refer anyone who wants the true picture of my software deliveries, design variations and former ambitions, to
hyperland.com/HTbrew-D11
Or you might look directly at two of my productions, Cosmicbook (2003, with Ian Heath)
xanadu.com/cosmicbook
with visible links between pages-- you can make them too--
and
OpenXanadu (2014, with Nicholas Levin).
xanadu.com/xanademos/MoeJusteOrigins.html
showing transclusions (origins) only, no links. See above paper for further explanation.
I am not a programmer, but a designer-director of software (like Doug Engelbart and Steve Jobs, but with far fewer resources). Every delivery is a negotiated reduction of a vision against resources, technical restrictions, and time. (Not to mention misunderstandings and disagreements.) It's like directing movies, but with more options, more gotchas and less fun.
I am proud of the working stuff I've delivered and grateful to those collaborators, and I haven't given up yet.
"How to see the possibilities when there are so many things around you that are a certain way".
So much this, and this is something where trawling the old papers really, really helps. So much there that we have either (a) forgotten or (b) discarded, but forgot why we discarded it.
Now spending a lot of time trying to tell (a) from (b). Fun!
I am constantly amazed by the productivity of top computing experts in the 60s and 70s. They managed to accomplish so much in such a short time, while working in small teams, on ridiculously slow hardware and having next to no libraries or tooling.
In the corporate world today it often takes a dozen people and a few years to create what's effectively a website that records user inputs to a database and occasionally sends emails.
I understand that I'm comparing top programmers of the past to average programmers today, but I think modern tooling and the accidental complexity of our computing environments play a large part here as well.
Incredible. I agree with the top comment on that video, the interviewer was being pessimistic and not open minded but it served to make Ted Nelson expand and give very comprehensive answers.
Wow, that is not the best quality. I haven't read this one yet (def plan on at least glancing through it now). I found this: http://www.newmediareader.com/book_samples/nmr-21-nelson.pdf It's just a sample showing chapter 21, but it is much nicer looking.
I've got a copy that I bought new while in grad school (back in the '70s). I read and reread it; it's a lot of fun. Unfortunately, I kept it in my grad student office in the CS building, so to protect it and my other CS books from wandering off I stamped my name with bright ink on a number of the pages. The name stamps probably ruined its value as a collector's item :( ...
One thing I remember from it was his fascination with APL and with the even more off-beat language TRAC. TRAC is a Turing complete macro language first implemented as an interactive programming environment[1].
It sounds like a lot, but consider it a fine piece of art in its own right, and I don't think you'll lose your investment over time... The reprint by M$ doesn't match this original large-format version, which is crisp and sharp.
It's a beautiful book. He also signed it for me, in case that's a thing you value.
I'd love to know how different his reprints are from the originals. I've seen one from the original run, and they looked the same to me. (But it had been a while.)
Many university libraries will offer access to non-affiliated individuals for a non-crazy amount. Even if you don't want to pay, if the library is open to the public, you can walk in, find the book on the shelf and read it at a desk.
OMG. I thought this guy's mannerisms were some kind of affectation acquired with fame. He was talking like that in the '60s. No wonder nobody listened! Perversely, I now have far more respect for him.
"As Kay himself has said so many times, this question is entirely about funding. With ARPA/IPTO-style funding, you get Kays (and Engelbarts, Minskys, McCarthys, Corbatós, etc) With VC and NSF, you don't. Even Kay hasn't been able to "be Kay" since the 80s: http://worrydream.com/2017-12-30-alan/ "
Another problem, I think, is that many people are no longer interested in this kind of research. Two popular lines of thought are:
- Our infrastructural computing is already awesome or at least good enough, so we do not need anything simpler or different, just more tooling and automation.
- Machine learning will fix everything in computing.
I'm very interested in doing this kind of research, but I can tell you it's incredibly difficult in academia, for instance, to publish research about programming languages, or wild new designs. It seems to me the academic world has moved to a model where everything has to be incremental, and you have to immediately have positive results that beat prior work on a number of metrics. Anything else is unpublishable.
As for machine learning fixing everything: I work in machine learning now, and I'm incredibly skeptical. Especially with the way companies always rush things to market and seek to eliminate competition. There are going to be a lot of bugs, hacks, and malicious uses of machine learning. I will also mention that the quality of the code I'm seeing in ML right now is absolutely terrible. Everything is so fast-moving. People don't take time to polish anything; they've already moved on to another project, and nothing is maintained. The code is largely written by academics who are rushing to write papers and look down on programming as some kind of unfortunate necessity, and the quality of the code they produce reflects that. Oh, and the machine learning world thinks Python is the best programming language ever... Sigh.
I'm in industry, but have similar leanings of wanting to work on this stuff. For a few years I was convinced academia was the way to do this; now I'm not certain. It's as if the collective capacity of dreaming about the possibilities of computing was lost when we realized we could make lots of money. Instead of building bicycles for the mind, we're obsessed with building TV 2.0, and getting lots of money doing it. Nowadays I see my meager, fledgling research as art pretty much.
I don't know where I'll end up long-term, but I doubt I'm the only person that feels this way. I've occasionally thought of starting a Slack for this sort of thing.
I wouldn't put Ted Nelson on the same page as the other ones. The other three actually built stuff that is now everywhere. Ted never got anywhere and is now bitching all the time about how much better his ideas were than what we actually got.
Hmm. I could show where your dashed-off summary is wrong by knocking down the other three, or educating you on Ted's context. I'll go with education, as Ted would prefer.
You are confusing "I heard their names" with "They made the thing I use". Berners-Lee didn't write the browser you use. Kay isn't hard at work on the new iPhone. And Engelbart doesn't pop into the Logitech offices to see how the new controls are going.
Nelson wrote multiple critical works of theory and practice, edited very influential magazines, coded prototypes and processes that were then re-used or modified as time went on, and has continually spoken on his principles and ideas behind the interface of humans and machines. He has stressed, for over half a century, the importance of the treatment and access of information, which turns out to be the top priority of the human race in the contemporary sphere.
Ted's name has been brought up in meetings you're not invited to and events you can't attend, as well as ones you could be. His influence is gigantic, and the fact he freely shares his ideas past the age of 80 is a gift some of us appreciate very much.
He doesn't strut across a stage next to a product he yelled at others to make. That doesn't make him less of a giant.
> He has stressed, for over half a century, the importance of the treatment and access of information, which turns out to be the top priority of the human race in the contemporary sphere.
Jason was being as polite as he could to someone coming after Ted. It's not unreasonable to emphasize. More politely: Please catch up.
Thanks for putting so well what I lacked the patience to write. I don't think the OP deserves it; the comment borders on trolling, is unsubstantiated, and is poorly conceived and poorly researched by someone who claims to have an interest in these things.
I think his biggest problem is that he refuses to collaborate with other people, or build on top of current technology.
He's had a lot of great important inspirational ideas, but his implementation of those ideas didn't go anywhere, he's angry and bitter, and he hasn't bothered re-implementing them with any of the "inferior technologies" that he rejects.
Back in 1999, project Xanadu released their source code as open source. It was a classic example of "open sourcing" something that was never going to ship otherwise, and that nobody could actually use or improve, just to get some attention ("open source" was a huge fad at the time).
>Register believe it or not factoid: Nelson's book Computer Lib was at one point published by Microsoft Press. Oh yes. ®
They originally wrote Xanadu in Smalltalk, then implemented a Smalltalk to C++ compiler, and finally they released the machine generated output of that compiler, which was unreadable and practically useless. It completely missed the point and purpose of "open source software".
I looked at the code when it was released in 1999 and wrote up some initial reactions that Dave Winer asked me to post to his UserLand Frontier discussion group:
A few excerpts (remember I wrote this in 1999 so some of the examples are dated):
>Sheez. You don't actually believe anybody will be able to do anything useful with all that source code, do you? Take a look at the code. It's mostly uncommented glue gluing glue to glue. Nothing reusable there.
>Have you gotten it running? The documentation included was not very helpful. Is there a web page that tells me how to run Xanadu? Did you have to install Python, and run it in a tty window?
>What would be much more useful would be some well-written design documents and post-mortems, comparisons with current technologies like DHTML, XML, XLink, XPath, HyTime, XSL, etc., and proposals for extending current technologies and using them to capture the good ideas of Xanadu.
>Has Xanadu been used to document its own source code? How does it compare to, say, the browseable cross-referenced mozilla source code? Or Knuth's classic Literate Programming work with TeX?
>Last time I saw Ted Nelson talk (a few years ago at Ted Selker's NPUC workshop at IBM Almaden), he was quite bitter, but he didn't have anything positive to contribute. He talked about how he invented everything before anyone else, but everyone thought he was crazy, and how the world wide web totally sucks, but it's not his fault, if only they would have listened to him. And he verbally attacked a nice guy from Netscape (Martin Haeberli -- Paul's brother) for lame reasons, when there were plenty of other perfectly valid things to rag the poor guy about.
>Don't get me wrong -- I've got my own old worn-out copy of the double sided Dream Machines / Computer Lib, as well as Literary Machines, which I enjoyed and found very inspiring. I first met the Xanadu guys some time ago in the 80's, when they were showing off Xanadu at the MIT AI lab.
>I was a "random turist" high school kid visiting the AI lab on a pilgrimage. That was when I first met Hugh Daniel: this energetic excited big hairy hippie guy in a Xanadu baseball cap with wings, who I worked with later, hacking NeWS. Hugh and I worked together for two different companies porting NeWS to the Mac.
>I "got" the hypertext demo they were showing (presumably the same code they've finally released -- that they were running on an Ann Arbor Ambassador, of course). I thought Xanadu was neat and important, but an obvious idea that had been around in many forms, that a lot of people were working on. It reminded me of the "info" documentation browser in emacs (but it wasn't programmable).
>The fact that Xanadu didn't have a built-in extension language was a disappointment, since extensibility was an essential ingredient to the success of Emacs, HyperCard, Director, and the World Wide Web.
>I would be much more interested in reading about why Xanadu failed, and how it was found to be inadequate, than how great it would have been if only it had taken over the world.
>Anyway, my take on all this hyper-crap is that it's useless without a good scripting language. I think that's why Emacs was so successful, why HyperCard was so important, what made NeWS so interesting, why HyperLook was so powerful, why Director has been so successful, how it's possible for you to read this discussion board served by Frontier, and what made the World Wide Web what it is today: they all had extension languages built into them.
>So what's Xanadu's scripting language story? Later on, in the second version, they obviously recognized the need for an interactive programming language like Smalltalk, for development.
>But a real-world system like the World Wide Web is CONSTANTLY in development (witness all the stupid "under construction" icons), so the Xanadu back and front end developers aren't the only people who need the flexibility that only an extension language can provide. As JavaScript and the World Wide Web have proven, authors (the many people writing web pages) need extension languages at least as much as developers (the few people writing browsers and servers).
>Ideally, an extension language should be designed into the system from day one. JavaScript kind of fits the bill, but was really just nailed onto the side of HTML as an afterthought, and is pretty kludgey compared to how it could have been.
>That's Xanadu's problem too -- it tries to explain the entire universe from creation to collapse in terms of one grand unified theory, when all we need now are some practical techniques for rubbing sticks together to make fire, building shelters over our heads to keep the rain out, and convincing people to be nice and stop killing each other. The grandiose theories of Xanadu were certainly ahead of their time.
>It's the same old story of gross practicality winning out over pure idealism.
>Anyway, my point, as it relates to Xanadu, and is illustrated by COM (which has its own, more down-to-earth set of ideals), is that it's the interfaces, and the ideas and protocols behind them, that are important. Not the implementation. Code is (and should be) throw-away.
>There's nothing wrong with publishing old code for educational purposes, to learn from its successes and mistakes, but don't waste your time trying to make it into something it's not.
Your preface to the UserLand thread was fantastic. "We haven't even INVENTED twitter yet, but I promise y'all will ENJOY the hell out of this series of comments-with-character-limits, but you're gonna do it IRONICALLY."
It's been about a year since I talked to Ted Nelson, and I suspect the talk you saw at Almaden would've been close to the height of his bitterness about HTTP. He also speaks frankly about the deeper aspect of his bitterness, retrospecting on the 70s back in 1990:
I am curious about your current assessment that he refuses to collaborate with other people. Obviously he worked with other people to get Xanadu to where it got. And I don't have enough inside knowledge about the competitive strategy of Netscape or other commercial hypertext vendors, but I'm sure there was a lot more going on around the open sourcing decision. At least with nearly 20 years of hindsight since the open sourcing, he seems perfectly clear-eyed about the failure of the Xanadu project.
I would love to hear that postmortem more than I want to read the released code, too. But when I asked him about it, Ted just seemed a lot more interested in talking about the good ideas from Xanadu. Sure, he strikes me as a little grumpy, but I don't know anyone my age or older that isn't a little bit grumpy about computing. After I watched that WGBH interview, I developed an enormous amount of empathy for the fact that he had to live through the 70s.
> Anyway, my take on all this hyper-crap is that it's useless without a good scripting language. I think that's why Emacs was so successful, why HyperCard was so important, what made NeWS so interesting, why HyperLook was so powerful, why Director has been so successful, how it's possible for you to read this discussion board served by Frontier, and what made the World Wide Web what it is today: they all had extension languages built into them.
I'm wondering what your wiser, older self has to say about this 20 years on. Isn't it useful that documents you wrote 20 years ago can still be read?
From my memories, the Web craze started well before JavaScript, and JavaScript really only jumped on the bandwagon; so how could it be the critical success factor for the Web?
The success of the Web and JavaScript in the last two decades speaks for itself; but in 2018, JavaScript and the procedural Web could very well be its undoing when considering the original goals of the Web, couldn't it?
I don't think Don meant that JS was the critical success factor for the Web. But that extensible scripting is crucial to the kind of Web Ted Nelson wanted in the first place.
From my lived experience, the Web craze would be better termed the Modem craze. And the critical success factor that turned it into the Web, was NSF removing the restrictions on commerce in 1995.
JavaScript is just what got HTML closer to some ideals of Xanadu. Not close enough for Ted's vision, but that is a broad sociopolitical vision.
Server side scripting languages were critical to the success of the web, before browser side JavaScript was available and matured.
Simple stateless perl cgi scripts forked from apache that talk to text databases or mysql were the first simplest step, but things got much more interesting with long running stateful application servers like Zope (Python), Java, Radio UserLand, HyperCard, node, etc.
My favorite thing about node is that it lets you use the same language and libraries and data on both the client and server side. That's an enormous advantage that far outweighs JavaScript's disadvantages. But some people just can't see or believe that, for whatever reason, and they're fine with flipping and flopping back and forth between different languages, and hiring different people to write multiple subtly divergent versions of everything in different languages.
Face it: for all its faults, JavaScript won. I will always have a place in my heart for FORTH, PostScript, MockLisp, ScriptX, TCL, Python, HyperTalk, UserTalk, CFML, Java, and all those other weird obsolete scripting languages, but it's soooo much easier to program in one language without switching context all the time, even if it's not the best language in the universe. And TypeScript is a pretty darn good way of writing JavaScript.
You're right, the web was held back until it was finally considered "ok" to use it for commercial activity!
I'd say JavaScript is just what got HTML closer to implementing any ideal you want, and there's no reason Xanadu couldn't be implemented on top of current web technologies (except that Ted doesn't want to). But I don't think extensibility and scripting itself was part of Ted's original vision or implementation.
Just as so much has happened since MVC was invented (yet it's still religiously applied by cargo-cult programmers), so much has also happened since Xanadu was invented (like distributed source code control, for example), which requires a total rethinking from basic principles. We also have the benefit of a lot of really terrible examples and disastrous experiments to learn from (Wikipedia markup language, WordPress, etc). Many of Ted's principles should be among those basic principles considered, but they're not the only ones.
Hmm, HyperCard in the same list as Zope and node? Interesting. :-)
The idea that JavaScript "won" is a little controversial to me. I think it's huge and important, but the world is still changing. Embedded Python goes places that Node still can't. I absolutely see the value you describe in sticking to one ecosystem, but I don't think JavaScript/TypeScript/Node is the only way to get those benefits. (See also: Transcrypt) I really enjoyed the PyCon 2014 talk on the general subject: https://www.destroyallsoftware.com/talks/the-birth-and-death...
The most recent conversation I had with Ted was after someone had just demonstrated the HoloLens for him and a few others. Ted had some feedback for the UI developer, and it didn't have anything to do with JavaScript or that level of implementation detail at all. It was all about the user experience. I don't want to put words into his mouth, but like he says in this recent interview, this is all hard to talk about because it really has changed so quickly.
I do think you're right that a lot of what Ted wanted to see could be implemented today in JavaScript and Git. But I think of the technical meat of that vision as being about data-driven interfaces. I am simply not old enough to really understand how notions of "scripting" changed between the 60s and the 80s. But the fact that Xanadu was started in Smalltalk suggests to me that scripting was part of the vision, even if a notion like "browser extensions" might not have been in mind.
Completely agree that there are other voices to learn from, and other important mistakes that have been made since Xanadu! (I think Ted would agree, too.)
Reading documents from 20 years ago is a mixed bag. Links usually fail horribly, which was something Xanadu was trying to solve, but I'm not convinced they could have solved it so well that 20-year-old links would still actually work in practice.
I've always tried to write documents in a simple format that's easy to translate to newer formats, and minimizes noise and scaffolding and boilerplate.
When we were developing the HyperTIES hypermedia browser in 1988 [1] at the UMD HCIL, we considered using SGML as the markup language, but decided against it, because we were focusing on designing a system that made it easy for normal people to author documents, and working with SGML took a lot of tooling at the time. (It was great for publishing Boeing's 747 reference manual, but not for publishing poetry and cat pictures.) So we designed our own markup language. [2]
It's not which scripting language you have, it's that you have a scripting language at all that's important. HyperTIES was actually implemented in C, plus 3 different scripting languages: FORTH for the markup language interpreter and formatter [3], PostScript for the user interface and display driver and embedded applets [4], and Emacs MockLisp for the authoring tool [5].
When you try to design something from the start without a scripting language, like a hypermedia browser or authoring tool, or even a window system or user interface toolkit, you end up getting fucked by Greenspun's Tenth Rule. [6]
[6] Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
But when you start from day one with a scripting language, you can relegate all the flexible scripty stuff to that language, and don't have to implement a bunch of incoherent lobotomized almost-but-not-quite-turing-complete kludgy mechanisms (like using X Resources for event handler bindings and state machines, or the abomination that is XSLT, etc).
TCL/Tk really hit the nail on the head in that respect. TCL isn't a great language design (although it does have its virtues: clean simple C API, excellent for string processing, and a well written implementation of a mediocre language design), but its ubiquitous presence made the design of the Tk user interface toolkit MUCH simpler yet MUCH more extensible, by orders of magnitude compared to all existing X11 toolkits of the time, since it can just seamlessly call back into TCL with strings as event handlers and data, and there is no need for any of the ridiculous useless brittle contraptions that the X Toolkit Intrinsics tried to provide.
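The mechanism described above can be sketched in a few lines. This toy dispatcher (hypothetical, in Python rather than C/Tcl) shows why storing handlers as strings of script makes a toolkit trivially extensible: the toolkit needs no callback type plumbing at all; it just hands the string back to the embedded interpreter.

```python
# A toy widget toolkit in the Tk/Tcl spirit: event handlers are stored as
# strings of script, and dispatch is just "evaluate the string in the
# scripting language". (A sketch; real Tk hands the string to the Tcl
# interpreter and performs %-substitutions before evaluating it.)

class Widget:
    def __init__(self, name):
        self.name = name
        self.bindings = {}            # event name -> script string

    def bind(self, event, script):
        self.bindings[event] = script

    def fire(self, event, env):
        script = self.bindings.get(event)
        if script:
            # The embedded "interpreter": run the script with the widget
            # exposed as a variable, plus whatever the caller provides.
            exec(script, {"widget": self, **env})

log = []
button = Widget("ok")
button.bind("click", "log.append('clicked ' + widget.name)")
button.fire("click", {"log": log})
print(log)                            # ['clicked ok']
```

Because the binding is just a string, any program (or user) that can produce a string can extend the toolkit, which is exactly the property the X Toolkit Intrinsics never achieved.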
The web was pretty crippled before JavaScript and DHTML came along. Before there was client side JavaScript, there were server side scripting languages, like Perl, PHP, Python, Frontier (Radio Userland) [7], HyperTalk, etc.
Frontier / Manila / Radio UserLand was a programmable authoring tool, content management system, and web server with a built-in scripting language (UserTalk, integrated with an outliner and object database). That scriptability enabled Dave Winer and others to rapidly prototype and pioneer technologies such as blogging, RSS, podcasting, XML-RPC, SOAP, OPML, serving dynamic web sites and services, exporting static web sites and content, etc.
One of the coolest early applications of server side scripting was integrating HyperCard with MacHTTP/WebStar, such that you could publish live interactive HyperCard stacks on the web! Since it was based on good old HyperCard, it was one of the first scriptable web authoring tools that normal people and even children could actually use! [8]
I guess it's a matter of perspective whether you like the procedural Web (the developer/creative perspective) or not (the perspective of the consumer who gets all kinds of scripts for tracking, mining, phishing, and other nefarious purposes, all the while not being able to save something for later reading).
I have no doubt JavaScript was absolutely necessary to develop the Web to the point it is today. But I had hoped that development of HTML (the markup language) would keep up to eventually provide declarative means to achieve some of what only JavaScript can do, by sort of consolidating UI idioms and practices based on experience gained from JavaScript. But by and large this hasn't happened.
What has happened instead is that JavaScript-first development has taken over the Web since about 2010 (I like React myself when it's a good fit, so I'm not saying this as a grumpy old man or something). And today there's no coherent vision as to what the Web should be; there's no initiative left to drive the Web forward, except for the very few parties/monopolies who benefit from the Web's shortcomings (in terms of privacy, lack of security, its requirement of a Turing-complete scripting environment for even the most basic UI tasks, etc).
> the abomination that is XSLT
Not trying to defend XSLT (which I find to be a mixed bag), but you're aware that its precursor was DSSSL (Scheme), with pretty much a one-to-one correspondence of language constructs and symbol names, aren't you?
In the ideal world we would all be using s-expressions and Lisp, but now XML and JSON fill the need of language-independent data formats.
>Not trying to defend XSLT (which I find to be a mixed bag), but you're aware that its precursor was DSSSL (Scheme), with pretty much a one-to-one correspondence of language constructs and symbol names, aren't you?
The mighty programmer James Clark wrote the de-facto reference SGML parser and DSSSL implementation, was technical lead of the XML working group, and also helped design and implement XSLT and XPath (not to mention expat, Trex / RELAX NG, etc)! It was totally flexible and incredibly powerful, but massively complicated, and you had to know scheme, which blew a lot of people's minds. But the major factor that killed SGML and DSSSL was the emergence of HTML, XML and XSLT, which were orders of magnitude simpler.
There's a wonderful DDJ interview with James Clark called "A Triumph of Simplicity: James Clark on Markup Languages and XML" where he explains how a standard has failed if everyone just uses the reference implementation, because the point of a standard is to be crisp and simple enough that many different implementations can interoperate perfectly.
I think it's safe to say that SGML and DSSSL fell short of that sought-after simplicity, and XML and XSLT were the answer to that.
"The standard has to be sufficiently simple that it makes sense to have multiple implementations." -James Clark
My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
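For what it's worth, the "just write the transformation in a general-purpose language" approach looks like this. A small sketch (the element names are invented for illustration) of what an XSLT template-match-and-output pair becomes when written directly against a DOM-style API:

```python
# Transforming XML to HTML directly in a general-purpose language,
# instead of via an XSLT stylesheet. (Illustrative sketch; the element
# names are made up.)
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<articles>"
    "<article><title>Xanadu</title></article>"
    "<article><title>HyperTIES</title></article>"
    "</articles>"
)

# Roughly the equivalent of <xsl:template match="article">:
items = ["<li>%s</li>" % a.findtext("title") for a in doc.iter("article")]
html = "<ul>%s</ul>" % "".join(items)
print(html)                  # <ul><li>Xanadu</li><li>HyperTIES</li></ul>
```

The whole "stylesheet" is ordinary code, so the moment you need a conditional, a lookup table, or a network call, you just write it, instead of fighting XSLT's template-matching model.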
Excerpts from the DDJ interview (it's fascinating -- read the whole thing!):
>DDJ: You're well known for writing very good reference implementations for SGML and XML Standards. How important is it for these reference implementations to be good implementations as opposed to just something that works?
>JC: Having a reference implementation that's too good can actually be a negative in some ways.
>DDJ: Why is that?
>JC: Well, because it discourages other people from implementing it. If you've got a standard, and you have only one real implementation, then you might as well not have bothered having a standard. You could have just defined the language by its implementation. The point of standards is that you can have multiple implementations, and they can all interoperate.
>You want to make the standard sufficiently easy to implement so that it's not so much work to do an implementation that people are discouraged by the presence of a good reference implementation from doing their own implementation.
>DDJ: Is that necessarily a bad thing? If you have a single implementation that's good enough so that other people don't feel like they have to write another implementation, don't you achieve what you want with a standard in that all implementations — in this case, there's only one of them — work the same?
>JC: For any standard that's really useful, there are different kinds of usage scenarios and different classes of users, and you can't have one implementation that fits all. Take SGML, for example. Sometimes you want a really heavy-weight implementation that does validation and provides lots of information about a document. Sometimes you'd like a much lighter weight implementation that just runs as fast as possible, doesn't validate, and doesn't provide much information about a document apart from elements and attributes and data. But because it's so much work to write an SGML parser, you end up having one SGML parser that supports everything needed for a huge variety of applications, which makes it a lot more complicated. It would be much nicer if you had one SGML parser that is perfect for this application, and another SGML parser that is perfect for this other application. To make that possible, the standard has to be sufficiently simple that it makes sense to have multiple implementations.
>DDJ: Is there any markup software out there that you like to use and that you haven't written yourself?
>JC: The software I probably use most often that I haven't written myself is Microsoft's XML parser and XSLT implementation. Their current version does a pretty credible job of doing both XML and XSLT. It's remarkable, really. If you said, back when I was doing SGML and DSSSL, that one day, you'd find as a standard part of Windows this DLL that did pretty much the same thing as SGML and DSSSL, I'd think you were dreaming. That's one thing I feel very happy about, that this formerly niche thing is now available to everybody.
> But the major factor that killed SGML and DSSSL was the emergence of HTML, XML and XSLT, which were orders of magnitude simpler.
That interview is wonderful, but in 2018, while XML has been successful in lots of fields, it has failed on the Web. SGML remains the only standardized and broadly applicable technique for parsing HTML (short of ad-hoc HTML parser libraries) [1]. HTML isn't really simple: it requires full SGML tag inference (as in, you can leave out many tags, and an HTML or SGML parser will infer their presence), SGML attribute minimization (as in `<option selected>`), and other forms of minimization only possible in the presence of a DTD (e.g. declarations for the markup to be parsed).
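To make the point concrete, here is a hypothetical fragment (invented for illustration) that every HTML parser must accept, and that only works because the parser performs SGML-style tag inference and attribute minimization:

```html
<!-- The parser must infer the omitted <head>, <body>, and <tbody> tags,
     close the unclosed <p> and <li> elements, and expand the minimized
     attribute -- none of this is stated in the markup itself. -->
<!DOCTYPE html>
<title>Minimized markup</title>
<p>Paragraphs and list items need no closing tags:
<ul>
  <li>first item
  <li>second item
</ul>
<select>
  <option selected>default choice  <!-- short for selected="selected" -->
  <option>other choice
</select>
```

A conforming parser infers the same document tree as the fully tagged version; that inference is exactly what a generic (non-HTML-specific) XML parser cannot do without DTD-driven minimization rules.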
> JC: [...] But because it's so much work to write an SGML parser, you end up having one SGML parser that supports everything needed for a huge variety of applications.
Well, I've got news: there's a new implementation of SGML (mine) at [2].
> But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
My thoughts exactly. Though I've done pretty complicated XSLTs (and occasionally still do), JavaScript was designed for DOM manipulation, and given that XSLT is Turing-complete anyway, there's not much benefit in using it over JavaScript, except for XML literals and, if we're being generous, maybe as a target language for code generation, since it is itself based on XML. Ironically, the newest Web frameworks have all invented their own HTML-in-JavaScript notation, e.g. React's JSX to drive virtual DOM creation, even though JavaScript started from day one with DOM manipulation as a principal design goal.
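A minimal sketch of why JavaScript can stand in for XSLT: the recursive `xsl:apply-templates` pattern maps directly onto a small dispatch function over a node tree. The node shape and template rules below are invented for illustration, not any particular library's API:

```javascript
// XSLT-style "apply-templates" in plain JavaScript.
// Nodes are either strings (text nodes) or objects { name, children }.

// One rule per element name, like <xsl:template match="...">.
const templates = {
  article: (node, apply) => `<html><body>${apply(node.children)}</body></html>`,
  title:   (node, apply) => `<h1>${apply(node.children)}</h1>`,
  para:    (node, apply) => `<p>${apply(node.children)}</p>`,
};

// Like xsl:apply-templates: dispatch each child to its matching template,
// copying text nodes through unchanged.
function applyTemplates(nodes) {
  return nodes
    .map(n => (typeof n === "string" ? n : templates[n.name](n, applyTemplates)))
    .join("");
}

const doc = {
  name: "article",
  children: [
    { name: "title", children: ["Hello"] },
    { name: "para",  children: ["Some text."] },
  ],
};

const html = applyTemplates([doc]);
console.log(html); // <html><body><h1>Hello</h1><p>Some text.</p></body></html>
```

The whole "language" here is one dispatch function plus a rule table, which is roughly the observation the parent quote is making.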
> My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT
+1. Though to be fair, XSLT has worked well for the things I did with it, and version 1 at least is very portable. These days XSLT at the W3C seems more like a one-man show, where Michael Kay is both the language specification lead and the provider of the only implementation (I wonder what has happened to the W3C's stance of requiring at least two interoperable implementations). The user audience (publishing houses, mostly), however, seems OK with it, as I witnessed at a conference last year; and there's no doubt Michael really provides tons of benefit to the community.
The 1999 "source code" referred to above is in two parts: xu88, the design my group worked out in 1979, now called "Xanadu Green", described in my book "Literary Machines"; and a later design I repudiate, called "Udanax Gold", which the team at XOC (not under direction of Roger Gregory or myself) redesigned for four years until terminated by Autodesk. That's the one with the dual implementation in Smalltalk. They tried hard but not on my watch. Please distinguish between these two endeavors.
Glad to hear from you, and I welcome your details and corrections to the record!
What are your opinions about scripting languages (not just for implementation, but for runtime extensibility)? Are they necessary from the start? Is JavaScript or TypeScript sufficient?
Ted Nelson's ideas continue to point a road to the future, to a better internet. Sir Tim's Web has been creaking along almost since its inception due to its fragile design. That you don't rank him alongside the other three only says that you don't understand, which is fine.
I didn't say his ideas aren't good. I'm saying I value execution more than ideas. Everybody has good ideas; few execute well. It's like somebody had the idea to create a teleportation device, got absolutely nowhere with it in 50 years' time, and is now grumpy that the planes everybody uses are too slow. "If only they'd followed my idea of a teleportation device, everything would've been much better."
Wrong, as in a false statement: there is evidence Ted has shipped software, so your statement is wrong. "The other three actually build stuff that is now everywhere. Ted never got anywhere." This is patently ridiculous; it's like accusing Socrates of not creating the perfect society. I'm proud to wear a -1 on this; it means I'm annoying at least one other ill-informed person.
What, like Google? It's Ted Nelson and we are on Hacker News, and the OP's comment showed they obviously already know how to use the internet, so what would be the point of me linking to sources? Ted Nelson, Computer Lib and Dream Machines. If you've not read it, what have you read?
My favorite part of reading Knuth and Martin Gardner has been how much they cite others.
That is, the field has always had a lot of people with ideas. Selecting for the best communicators is limiting, especially if they do self-promotion.
So, where are the big names? Out doing work, most likely. Most of it work that won't go anywhere. The barrier to making an impact has risen, after all, even as barriers to participation have dropped. I say this as someone who probably would not have been participating years ago.
I would consider someone like Vitalik to be a potential "great mind". I think it's disingenuous to discount ALL the people in crypto; there's definitely a lot of talent there.
Don't fool yourself into thinking society writ large is wise enough to value them. So look at the people getting steamrolled and dismissed and somewhere in that pile you'll find them.
Agreed. Kay, Nelson, et al. are well-known more as an accident of timeline convergence than due to their revolutionary insights.
That's not to say they don't have or didn't have revolutionary insights, it's just to say that such insights are rarely widely recognized. :)
This is painfully obvious when we see that these people themselves have been making comments critical of the industry for decades, and have generally been ignored.
I hear this notion bandied about once in a while. I don't think it's strictly true.
It's just something you say to express frustration about things you broadly don't like about tech, by comparing them to an idealized version of the past. There were plenty of frivolous things back in the '60s. We just don't remember them, because little of what was written about them survived for us to read.
As for some modern-day people doing progressive work in computing that should change how we compute in the future, a couple come to mind:
Bret Victor and others (along with Alan Kay) in the YCR HARC lab are carrying on in the same tradition with Dynamicland.org.
Chris Granger at Eve is also exploring some of these new old ideas and trying to push programming forward.
In Neural Networks, there's Geoffrey Hinton, who helped push Deep Learning into the forefront of Machine Learning. (And others whose names I can't remember.)
Personally, I'd also put Juan Benet of Protocol Labs and Vitalik Buterin of Ethereum in this group. The former built IPFS and Filecoin, and the latter built Ethereum. Of everyone whose writing, talks, and interviews about cryptocurrencies I've followed, they seem the most concerned about its potential for computing in the future.
I think one problem we have is that to those people, the future of computers was a blank slate -- sure they had computers back then, but the industry was tiny compared to what it is now. Now, we already have tons of paradigms -- databases, browsers with their own HTML / JavaScript / CSS ecosystem, mobile apps, cloud computing, neural networks, etc.
I think it's harder now for someone to say, let's do things completely differently. Everybody nowadays is all about incremental changes.
> "I think it's harder now for someone to say, let's do things completely differently. Everybody nowadays is all about incremental changes."
There is room for radical change, but you have to know how to make your own space for it.
To give an example, I think the opportunities for reconfigurable computing are huge, and that we've barely scratched the surface. However, building a whole new platform based on that idea doesn't make practical sense (at least, not yet). What would be best is to sneak it in the back door. For example, design a new open-source FPGA-based sound card. That way, you can get reconfigurable chips into people's computers. After you've got a decent install base, you can expand into new uses for those chips, similar to how GPGPU expanded the utility of GPUs.
Yep, I was gonna make this comment too. Alan Kay in particular is an active commenter on Quora and he shows up on HN occasionally (often to chastise someone for speculating about "how it was" when they could've just asked, or at least just referenced existing documentation). There's no need for us to box them out.
Alan Kay particularly seems to lament the way the field has developed and the average programmer's ignorance of and disrespect for historical context. He is validated in this more every day.
Judea Pearl won the Turing Award in 2011 for contributions to artificial intelligence. I think we will see a number of people joining your "great minds" group from the AI field in the next 20 years.
AI seems to be doing ok, though you hear complaints about short-termism from there too. It's AHI (Augmenting Human Intelligence) that's largely been in a rut for decades.
Something about how ideas in one era end up being distorted in the next one. It seems like any important new thing shifts the gravity of society, and thus society itself. A cool idea in the '60s will become the root of a bad one in the '70s. It's not linear.