
Systems Past: software innovations we actually use (2014) - ingve
http://davidad.github.io/blog/2014/03/12/the-operating-system-is-out-of-date/
======
gavanwoolery
Read the section "Conclusion" at the end, if nothing else. It is extremely
well written. The author throws out some potentially controversial ideas, but
he (like many people I have seen) is hoping for a common outcome:
dramatically "rethinking" the concept of the OS and programming language,
while really embracing ideas that are as old as time. I'll highlight one section:

"Most of all, let’s rethink the received wisdom that you should teach your
computer to do things in a programming language and run the resulting program
on an operating system. A righteous operating system should be a programming
language. And for goodness’ sake, let’s not use the entire network stack just
to talk to another process on the same machine which is responsible for
managing a database using the filesystem stack. At least let’s use shared
memory (with transactional semantics, naturally – which Intel’s latest CPUs
support in hardware). But if we believe in the future – if we believe in
ourselves – let’s dare to ask why, anyway, does the operating system give you
this “filesystem” thing that’s no good as a database and expect you to just
accept that “stuff on computers goes in folders, lah”? Any decent software
environment ought to have a fully featured database, built in, and no need for
a 'filesystem'."

~~~
chubot
What programming language should the OS be? That's the problem. What he's
saying will never happen, because nobody can design a programming language
that will fit all use cases. Unix is polyglot by design; that's a feature and
not a bug.

Everybody wants the OS to be easier for THEIR use case, underestimating the
diversity of computing. "Why can't I just have a whole OS in node.js, e.g.
[https://node-os.com/](https://node-os.com/) ? That's all I need."

Well some people need to run linear algebra on computers and don't have any
need for that stuff. Likewise you don't have any need for Fortran.

The "language as OS" thing has been tried with Lisp and Smalltalk, and failed
to gain adoption for good reason. They are both just languages now.

Microsoft already tried and abandoned the second idea (WinFS). File systems
are complex, but databases are an order of magnitude more complex. OSes evolve
on a slower time scale; databases and languages on a relatively faster time
scale.

Multiple databases are also a feature. Both SQLite and Postgres/MySQL work on
top of the file system. The file system supports the minimum API you need to
write a database. (It's somewhat bad, but the way to fix the API isn't to
replace it with a database.)
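To make that "minimum API" concrete, here is a minimal sketch of the file
primitives a storage engine builds on: positioned writes, positioned reads,
and a durability call. The path and page layout are invented for illustration.

```python
import os
import tempfile

# The minimal file API a database engine builds on: positioned writes,
# positioned reads, and fsync for durability before a commit is acknowledged.
path = os.path.join(tempfile.mkdtemp(), "records.db")
fd = os.open(path, os.O_RDWR | os.O_CREAT)

os.pwrite(fd, b"page-0", 0)       # write a page at a fixed offset
os.pwrite(fd, b"page-1", 4096)    # pages live at offset = page_no * page_size
os.fsync(fd)                      # force the data to stable storage

page = os.pread(fd, 6, 4096)      # read page 1 back by offset
os.close(fd)
```

Everything from B-trees to write-ahead logs is layered on exactly these calls.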

If you had bundled a relational database with the OS, then you wouldn't have
had the right abstractions to make distributed databases like Dynamo and
BigTable and whatnot. A database just has more design parameters than a file
system and so you can't write one that will fit all applications. Again, the
problem is underestimating the diversity of use cases.

So yeah I think both of these ideas are badly mistaken.

EDIT: I also don't think the shared memory idea is a good one. You can
have IPC without the network stack using Unix domain sockets, and that's what
people actually use to connect to databases on the same machine. It has the
benefit that you can move the database to another machine with minimal
changes.

A shared memory interface between an application and a database sounds like a
bad idea.
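For what it's worth, the Unix domain socket route is easy to sketch. This toy
"server" just echoes a fake result; the socket path and messages are made up.

```python
import os
import socket
import tempfile
import threading

# Toy local IPC over a Unix domain socket -- the mechanism clients
# typically use to reach a database on the same machine.
path = os.path.join(tempfile.mkdtemp(), "db.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

def handle():
    conn, _ = server.accept()
    query = conn.recv(1024)                 # pretend this is a SQL query
    conn.sendall(b"result for " + query)
    conn.close()

t = threading.Thread(target=handle)
t.start()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)          # swapping in AF_INET + ("host", port) moves it off-box
client.sendall(b"SELECT 1")
reply = client.recv(1024)
t.join()
client.close()
server.close()
```

The one-line change in `connect()` is exactly the "move the database to
another machine with minimal changes" property described above.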

~~~
david927
Which leads us to ask, "Where are the revolutionaries?"

It used to be the younger generation, around college-age, would rethink their
parents' systems: political, business, etc. These kids would look at the
buildings around them and imagine what could be accomplished in razing those
buildings and creating a new foundation. I don't see that anymore. I see
college-aged kids simply wanting to add a new floor on top of a very rickety
shack that should have been discarded ages ago.

Ironically, the people writing these types of manifestos for Computer Science
are almost exclusively older. They often wrote that first foundational layer
and know that it needs to be overhauled.

By the way, I would phrase it more that "any righteous programming language is
also an operating system," and that "no righteous operating system operates on
files, so no righteous programming language is coded in files." _Data that is
not stored or accessed as data eventually will be._

It's true we've tried many variations on these themes in the past, but if we
do the logical syllogism in our heads, we come invariably to the conclusion
this is how the future should be. We know these things. So why give up? What
does failure mean other than that we need to continue trying?

It infuriates me that we don't try -- when people say "we've tried everything
and there's no way forward." No, we're just lazy. Charles Duell never actually
said in 1899 that "everything has already been invented," yet, oh my God, we
can't stop saying it now. So I'm going to ask again, "Where are the
revolutionaries?"

~~~
chubot
It absolutely does need to be overhauled, but the problem is that people don't
agree on the direction. I think most of your ideas are profoundly mistaken,
and have been disproven by history. I'm sure you think my ideas are wrong too.

I think that people tend to dismiss the success or failure of certain systems
as luck or even marketing, rather than deeply reflecting on the intrinsic
properties of those systems that led them to be adopted or not adopted.

Unix and the web have lots of flaws, but they're not an accident of history.
They have fundamental architectural properties that made them succeed. (In
particular, REST is the same idea architecturally as Unix/Plan 9 style file
systems. Long story, because it's basically avoiding O(m*n) problems in the
ecosystem.)

You also have to accept evolution as a fundamental force in computing (as
Linus Torvalds does). To ignore it is to set yourself up for failure and
frustration. After all, humans are by no means the optimal living being
either, but they're the best that evolution came up with.

And as I said in my other post, "revolutionary" ideas tend to just add to the
pile. They fail to meet their expectations and then Unix subsumes them. It's
almost an economic inevitability.

And Linux WAS revolutionary, if you compare it to Microsoft. The first couple
decades of my life were DOS and Windows based, so from that perspective the
revolution happened. Microsoft is porting all their stuff to Linux now.

[http://www.revolution-os.com/](http://www.revolution-os.com/)

If you want a single language OS, check out Mirage:
[https://mirage.io/](https://mirage.io/) . I know somebody is going to say "I
don't like OCaml". Now you understand why Unix dominates: it supports
heterogeneity. Even Mirage can't really get rid of Unix, because it runs on
Xen, which typically runs on Linux.

~~~
david927
I recognize that we disagree and I respect your opinion, I would just like to
add a bit more:

> _" have been disproven by history."_

I guess that's my point. I don't think we've had enough history to be
conclusive yet. I keep hearing this, over and over, and it's the equivalent of
"heavier than air objects can't fly -- it's been proven in attempt after
attempt."

In my opinion, the reason there are thousands of programming languages,
for example, is not because we can't agree on them; it's because the problem
hasn't been solved yet. No language is good enough, so we keep trying. In the
late 1990s there were seemingly hundreds of search engines. Google/PageRank
came along and all the other search engines went away almost overnight.

> _" revolutionary" ideas tend to just add to the pile_

They do until they don't. All those crazy attempts at flight were truly crazy,
until they led to flight.

We're at the beginning of history, not the end of it.

~~~
taserian
Search engines are comparable in terms of how successful they are at their
unique task.

Languages? What metric are you going to use that is equally applicable to all
tasks?

Think of it as vehicles and their different purposes. Would you support
condensing all vehicles to one, so that one vehicle has to both haul and mix
concrete _and_ take your kids to get ice cream?

~~~
david927
I think systems programming would continue to be separate. I left it off
because it amounts to a small fraction of total code written (and frankly I
consider it to be something that can be grouped in with hardware).

The rest is application programming, and yes, when it's right, it will
absolutely consolidate: There will be one "language". Sure, the current
conventional wisdom is that languages are the "different tools in the
toolbox." I know. My statement wasn't borne out of ignorance. What I'm saying
is that kind of thinking is what's holding us back.

What is a program? Is it expressive text? Is it, "Shall I compare thee to a
summer's day?" No, of course not. There's no narrative. We use text flow for
instruction flow but that's coincidence. We are acutely aware that programs
are discrete instructions. Code is data.

I made a statement above. I'll repeat it because it's important. It's a law:
_Data that is not stored or accessed as data eventually will be._ Any
programming language that uses text files will eventually go away. There will
be one "language" eventually because the natural inclination of data is one
representation. There will always be a market for new widgets to manipulate
data or find insight in data, of course, but there will be one language -- and
it will be data. It's the law.

Now I should admit, it's not that we haven't tried. We've tried many times and
failed. What I'm saying is that if we come to a conclusion for a destination
then there is no other choice but to make that the target.

~~~
taserian
I'm not fully grokking the theory/philosophy you're trying to express here,
and I'd really like to, so I'll ask for forgiveness in advance if I seem too
dense.

When I think of language and getting a message across (be it to someone else,
or to a computer as code), I think of the components that are required for
communication:

    
    
      - Emitter (myself) (EM; mostly obvious)
      - Recipient (someone else/the computer) (RE; also obvious)
      - Message (what I want to communicate) (MS)
      - Medium (what carries the message) (MD)
      - Protocol (base rules that both sides agree on) (PR)
    

If I'm trying to help my mother-in-law with a computer problem, I'll tell her
to click on such-and-such over phone lines using English. (MD=phone line;
MS=instructions; PR=English grammar)

If I'm writing a script to unzip a bunch of files at once and distribute the
contents into different folders based on the file type, I'd probably whip up a
Python script. (MD=text file; MS=task description; PR=Python)
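A hedged sketch of the kind of script described here (the archive name,
folder layout, and helper name are all invented for illustration):

```python
import os
import shutil
import tempfile
import zipfile

def unpack_and_sort(zip_dir, out_dir):
    """Extract every .zip in zip_dir, then file the contents by extension."""
    for name in os.listdir(zip_dir):
        if name.endswith(".zip"):
            with zipfile.ZipFile(os.path.join(zip_dir, name)) as zf:
                zf.extractall(out_dir)
    for name in os.listdir(out_dir):
        src = os.path.join(out_dir, name)
        if os.path.isfile(src):
            # The extension is the "file type" heuristic here.
            ext = os.path.splitext(name)[1].lstrip(".") or "misc"
            dest = os.path.join(out_dir, ext)
            os.makedirs(dest, exist_ok=True)
            shutil.move(src, os.path.join(dest, name))

# Demo with throwaway files:
zip_dir, out_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
with zipfile.ZipFile(os.path.join(zip_dir, "batch.zip"), "w") as zf:
    zf.writestr("notes.txt", "hello")
    zf.writestr("photo.jpg", "binary-ish")
unpack_and_sort(zip_dir, out_dir)
folders = sorted(os.listdir(out_dir))   # ['jpg', 'txt']
```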

My script in turn will communicate with the file system and the operating
system to complete my task. (MD=bits; MS=instructions; PR=Python-to-Machine-
Language-Interpreter)

If I understand you correctly, you're saying that all application programs
should distill down to just the message, as the one representation of what I
want to communicate. I'm hesitant to accept that, since it tends to assume
that both emitter and recipient can independently determine and adapt to the
correct protocol when all they have is the message.

For example, my Python script will probably use the file extension as a
heuristic for the file type, but will make an error when picture.txt actually
is a badly named JPEG. If I want to increase correctness, I can change the
script so that it uses the file's Magic Number, but that may be overhead I
don't need if the files are correctly named. Choosing one technique over the
other requires that I craft the message differently, because Python doesn't
make a decision on how best to determine file type.
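The two heuristics can be put side by side in a short sketch. The magic
numbers for JPEG and PNG are the real signatures; the badly named demo file
is invented.

```python
import os
import tempfile

# Real file-type signatures ("magic numbers") for two common formats.
MAGIC = {b"\xff\xd8\xff": "jpeg", b"\x89PNG\r\n\x1a\n": "png"}

def type_by_extension(path):
    # Cheap heuristic: trust the name.
    return os.path.splitext(path)[1].lstrip(".").lower() or None

def type_by_magic(path):
    # Costlier heuristic: read the first bytes and match signatures.
    with open(path, "rb") as f:
        head = f.read(8)
    for sig, kind in MAGIC.items():
        if head.startswith(sig):
            return kind
    return None

# A badly named file: JPEG bytes behind a .txt extension.
path = os.path.join(tempfile.mkdtemp(), "picture.txt")
with open(path, "wb") as f:
    f.write(b"\xff\xd8\xff\xe0" + b"\x00" * 16)

guess_ext = type_by_extension(path)    # "txt" -- wrong
guess_magic = type_by_magic(path)      # "jpeg" -- right, at the cost of a read
```

Neither choice is made for you; the language leaves the protocol decision in
the message you write, which is the point being made above.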

TL;DR: Data needs a protocol to be used as information, and languages provide
that. Doing away with languages means we need to define and agree on a non-
ambiguous way of reading data, which is a tall order.

~~~
david927
You raise an interesting point. Data, as we use it today, is often bare-bones
and without meta-data. How you interpret it is very much up to you. It's as
if, in English, we said "... apple ... tree" and then leave it to the user to
decide if that's "I'm planting an apple tree" or "You can't get an apple from
a walnut tree."

You're definitely describing how we work in today's terms -- there's nothing
wrong with that, I'm just challenging it. I'm saying that we can describe the
protocol better using data; or rather, I'm saying that it is data, and that
any other way of describing it is sub-optimal and for that reason will
eventually die out.

 _> Doing away with languages means we need to define and agree on a non-
ambiguous way of reading data_

Actually I'm saying that language is more ambiguous than data -- at all times
and in all cases. Anything you can get
language to do, you can get data to do better (in this context). But it's not
a hard case to prove because actually we're using language as data. We're
doing the equivalent in programming languages of typing in "three hundred"
into a text file and having it read from the text file and converted to a
single data unit of 300.0 when we could just be operating directly with the
data.

But I didn't clarify how this would happen, so I can see that it would seem a
bit abstruse. I need to better communicate and clarify that we need to expand
how we're viewing data and how we work with data -- that we need to
incorporate more meta-data as an ancillary part of the data that both comes
with it and yet is still secondary.

------
Clubber
It's not easy to make such a comprehensive list over such a broad category, so
cheers to the author for trying. I think it's a good list, but to challenge
myself, I tried to think of some things that he missed.

1\. Encryption

2\. Compression

These might fit into another broad category like OS or Networking, but they
are both distinct from those I believe.

~~~
speeder
If you read the footnotes, he states that he was tempted to include those, but
it would then be problematic to decide which algorithms fit his text, so he
decided against including any.

~~~
Cyph0n
Encryption: RSA

Compression: JPEG or MP3

~~~
dom0
> RSA

No thanks.

~~~
Cyph0n
Why? RSA is one of the greatest breakthroughs of the last half century in my
opinion.

It was the first implementation of an asymmetric crypto scheme. Merkle came up
with the _idea_ of public key cryptography, but he never provided a
mathematical technique to achieve that.

Imagine a world where RSA was never invented. We would not be able to exchange
secrets over an insecure channel, so naturally, HTTPS/TLS would never have
existed - this alone is huge. We would not be able to exchange secure email in
a scalable way (PGP). We would not be able to securely sign messages for
authentication. And there's a ton more. This is without counting the schemes
and techniques designed on top of RSA!

~~~
Ar-Curunir
There exist other asymmetric encryption schemes; indeed, there exists one that
is basically non-interactive Diffie-Hellman key exchange, and DH was
discovered before RSA was.

~~~
Cyph0n
I was under the impression that RSA was first, but I guess I was wrong.

DH is a key exchange algorithm, not a general public-key encryption algorithm.
So, as far as I understand, TLS would work, but the others I listed would not.

~~~
Ar-Curunir
RSA was the first PKE scheme, but ElGamal, another PKE, is essentially a
small modification of DH. ElGamal was discovered in 1985, before encryption
really hit the mainstream.

~~~
Cyph0n
I see. I'm not too familiar with crypto, so thanks for explaining. I guess
that the argument is now: would ElGamal have been invented had RSA never
existed?

~~~
Ar-Curunir
I think so; as I said it's essentially a small modification to DH.

------
jamii
See also [http://danluu.com/butler-lampson-1999/](http://danluu.com/butler-
lampson-1999/)

Here are some more that come to mind:

Sandboxing+permissions+isolation. Being able to easily install, safely use and
cleanly uninstall untrusted software totally changed the way most people
consume software, on both the web and mobile. (Virtualization is neither
sufficient (because I do want to give some apps access to some hardware) nor
necessary (see NaCl))

Types. The static-typing wars may still be raging in other domains, but the
vast majority [1] of our infrastructure is written in statically-typed
languages, and many popular dynamic languages are growing gradual-typing
tumours.

LLVM. Writing a compiled language used to be a major effort. Codegen quality
was a huge incumbent advantage. Five years ago C and C++ were pretty much it for
compiled languages, with some heroic efforts like OCaml and GHC on the
sidelines. Nowadays pretty much all the language and database research that I
see involves LLVM.

Packaging. Both OS and library package managers have come a long way since
1950, and to a first approximation 100% of programmers use one or the other.

[1] [http://danluu.com/boring-languages/](http://danluu.com/boring-languages/)

~~~
nickpsecurity
Trying to take your categories and find where they started.

Re isolation. From high-assurance security, I believe the first, secure kernel
that was general-purpose was this one:

[http://csrc.nist.gov/publications/history/schi75.pdf](http://csrc.nist.gov/publications/history/schi75.pdf)

Types. Guess that would be ALGOL.

LLVM. BCPL seems to have gotten the idea started with a two-pass compiler that
went from complicated stuff to byte-code (O-code) then to assembly. That got
ported to the PDP-11 in the form of B & then C. Wirth further developed it &
popularized it with P-code in Pascal-P. That helped non-compiler experts port
it to 70 architectures in 2 years, whereas compiler experts could target
something easy. He did it at the hardware level with Modula-2 & M-code
assembly. So, either Richards of BCPL or Wirth of P-code gets credit for such
a concept. Maybe both.

re packaging. I got nothing. Anyone know what the first package managers were
that had something inspiring modern functionality? Those ahead of their time.

~~~
dom0
I believe the first package manager was "pkg" in System V (early 80s).

The modern notion of package-manager-with-builtin-download-and-upgrades only
came in the mid-to-late 90s, when internet bandwidth grew enough to make that
feasible.

The first one of these might've been FreeBSD ports.

~~~
nickpsecurity
I tried to dig it up. I think the terms "version control" and "package
management" overlap a lot in how people classify their software or do the
histories. They're different with some overlap but it's all muddied up.
Interestingly, I found no history in Google of package management from early
to modern stuff. There's an interesting project right there for some CompSci
student interested in computer archaeology. :)

I did find VC history where the first thing to track software seemed to be
SCCS on UNIX:

[https://en.wikipedia.org/wiki/Source_Code_Control_System](https://en.wikipedia.org/wiki/Source_Code_Control_System)

The rest, from CVS to mainframe stuff to Apollo Aegis's, all showed up around
the mid-1980s in a similar time frame per Wikipedia descriptions. pkg seems a
strong contender for first package manager, especially if version control of
source didn't hit other systems until after its creation.

~~~
jslabovitz
There were versioning file systems in the 1960s (MIT's ITS among them, and I
think Univac's EXEC 8, though I can't find a reference) [1]. Often, new writes
to a file bumped a version (or cycle) count in the file system. I don't know,
but I wouldn't be surprised if there were tools to do diffing and patching
much like modern VCS systems.

[1]
[https://en.wikipedia.org/wiki/Versioning_file_system](https://en.wikipedia.org/wiki/Versioning_file_system)

------
vanviegen
The only 3 real-life innovations we actually use:

\- Vehicles. Invented in Mesopotamia about 7,500 years ago.

\- Buildings. Invented by Neanderthals about 44,000 years ago.

\- Tools. Invented by early humans some 2,500,000 years ago.

I'm mostly kidding - but I do think the 'innovations' listed in the article
are rather broad.

~~~
rootbear
Did Neanderthals build dwellings or similar structures? I don't think I've
heard that before.

~~~
dasmoth
Suspect the source is [http://www.telegraph.co.uk/news/science/science-
news/8963177...](http://www.telegraph.co.uk/news/science/science-
news/8963177/Neanderthals-built-homes-with-mammoth-bones.html) or similar.

------
nickpsecurity
I'm sure it will be debatable but I think Burroughs B5000 should be on this
list somewhere.

[http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...](http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm)

If not for OS or interactivity, I think it should be on there for being the
first machine with OS written in high-level language whose CPU was designed to
safely execute it. Entry would go something like this:

"5\. High-level, safe execution of software."

Benefits: Stuff crashed less. Hackers had a harder time. Easier to expand or
maintain.

Drawbacks: Cost some performance due to higher level and some extra money for
extra hardware.

Exemplars: Ada language + secure, Ada chips. Java processors + OS's. SAFE and
CHERI architectures. Maybe NonStop with hardware/software architecture applied
to fault-tolerance. Erlang.

------
cr0sh
I found this essay to be fairly comprehensive, accurate, and informative; some
of the items mentioned I have little experience or knowledge of - or didn't
even know existed (BGP, for instance, was new to me).

I think the author's heart is in the right place, and I would love to hope to
see some of the ideas espoused in the essay come to fruition - but I think
that the momentum of history may be difficult to overcome (much like tiling
window managers are now popular with some people - though IIRC, Windows 1.0
was a tiling system, then for some reason switched to overlapping windows -
perhaps as a result of singular lower-resolution displays being the norm). The
author mentions LISP and some other programming languages trying to be an OS,
but failing to gain hold commercially for various reasons (though in LISP's
case, one could argue Symbolics did succeed to an extent?).

This essay certainly is "food for thought", and I plan to re-read it and think
about it; I'd love to see it expanded to book form (or some other format to
explore the subject in a deeper manner).

Kudos to the author.

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=7402571](https://news.ycombinator.com/item?id=7402571).

------
kowdermeister
As Clubber said, I'd add a few things to a list that should be absolutely
there.

\- I'm not sure why he left out machine learning, deep learning and other AI
concepts

\- The GUI (from Xerox)

~~~
sophacles
Not that I agree or disagree - the idea of the GUI can be considered a logical
extension of hypermedia and interactivity.

~~~
romaniv
I think there is a difference between "GUI" in general and the Xerox PARC GUI
specifically. Sketchpad and GRAIL clearly had GUIs of sorts, but Xerox
developed UI concepts that were highly generalizable. And just because they
became popular doesn't mean they were "obvious". Without Xerox PARC all the
computers we use right now could have drastically different interfaces.

------
greggyb
I'd love to hear feedback about this (even if it's mean).

Based on the notes toward the end of the article on the limitations of a
traditional OS with filesystem, I had this thought.

The OS clearly seems to be going in the direction of Smalltalk or a Lisp
machine as an environment. What would it look like, or what would be the
limitations of a merged OS/programming environment with something like a
single SQLite DB as the filesystem?
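To make the question concrete, here is a minimal sketch of the idea (the
schema, helper names, and paths are all invented for illustration): paths
become rows, and file metadata becomes ordinary queryable columns.

```python
import sqlite3
import time

# One table plays the role of the filesystem.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE fs (
    path  TEXT PRIMARY KEY,
    data  BLOB,
    mtime REAL,
    mime  TEXT)""")

def write_file(path, data, mime="application/octet-stream"):
    db.execute("INSERT OR REPLACE INTO fs VALUES (?, ?, ?, ?)",
               (path, data, time.time(), mime))

def read_file(path):
    row = db.execute("SELECT data FROM fs WHERE path = ?", (path,)).fetchone()
    if row is None:
        raise FileNotFoundError(path)
    return row[0]

write_file("/notes/todo.txt", b"ship it", mime="text/plain")
contents = read_file("/notes/todo.txt")

# "Directory listing" becomes just another query:
texts = [p for (p,) in db.execute("SELECT path FROM fs WHERE mime LIKE 'text/%'")]
```

The immediate limitation such a sketch exposes is everything a real
filesystem adds on top: permissions, memory-mapped I/O, streaming large
blobs, and concurrent writers.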

~~~
sedachv
> The OS clearly seems to be going in the direction of Smalltalk or a Lisp
> machine as an environment. What would it look like, or what would be the
> limitations of a merged OS/programming environment with something like a
> single SQLite DB as the filesystem?

MUMPS, most 4GLs, RPG on the AS/400. I have not programmed in any of these
but have heard a lot of bad things about them. Also, I think it is a mistake
to say that the OS is going in the direction of Lisp/Smalltalk. If anything,
Linux containers/Docker is more like the IBM VM mainframe operating systems.

~~~
greggyb
_The OS that the author is pointing toward._

------
dasmoth
How many of these still have substantial unrealised potential?

For example, thinking back through various organisational Wikis I've
encountered over the years leaves me thinking that practical Hypermedia needs
a little bit more work yet...

~~~
cableshaft
I think that Operating Systems could do a lot better in representing content
as more than just a list of files in a folder, using metadata to package and
present the content into something that behaves more like a object you can
interact with in different ways, and is more user-friendly, dividable (take a
smaller piece of the content on a flash drive for portability, or divide
content into two pieces and don't lose the metadata associated with it),
automatically versioned, and with a variety of presentation modes (instead of
just icons, perhaps it could show 'box covers' or 'banners' or the first few
words of the text as a generated picture, etc).

I make all sorts of different types of content, and keeping everything
organized is just so difficult, especially if copies exist on different drives
and computers. I can't effectively put gigs of video into Git either, as far
as I know, so most of it you have to take care of manually (or maybe I should
set up a Perforce server, but most end-users and less tech-savvy content
creators aren't going to go through with that).

~~~
marcosdumay
> instead of just icons, perhaps it could show 'box covers' or 'banners' or
> the first few words of the text as a generated picture, etc

Dolphin does that. The Windows explorer tries to do that as well, but in a
much more limited fashion.

Dividing metadata (with the respective privacy issues) and versioning large
files are incredibly hard problems. And not the kind of "hard" where a genius
in academia writes a paper and it's solved, but the kind where no two people
want things to behave the same way, and getting to a usable shared set of
assumptions may even be impossible.

~~~
cableshaft
Not disagreeing with you, it is very difficult, and people would have
difficulty coming to a consensus for sure. But I still think it could and
should be attempted, at least with one of the operating systems out there.

------
hden
In case someone is wondering what HyperCard is:

\- [http://www.loper-os.org/?p=568](http://www.loper-os.org/?p=568) A short
article demonstrating how to build a calculator app.

\- [https://vimeo.com/95380430](https://vimeo.com/95380430) A 10-min talk
showing the full potential of the language.

------
dwheeler
For a different look at the topic of key software innovations, look at my
paper "The Most Important Software Innovations"
[http://www.dwheeler.com/innovation/innovation.html](http://www.dwheeler.com/innovation/innovation.html)

------
kpil
I think that relational databases made a huge leap forward. It's built on top
of a solid foundation presented and accessed using a 4-gl language that
describes intent.

But I guess as a database, it's just a continuation of the hierarchical
databases, general storage, and cards and index cards before that...

------
pconner
> FORTRAN’s conflation of functions (an algebraic concept) and subroutines (a
> programming construct) persists to this day in nearly every piece of
> software, and causes no end of problems

I'm not really sure what the author's point is here. I think "subroutine" is a
more descriptive name than the more commonly-used "function," but I'm not
aware of any problems this has caused.

~~~
samatman
Functions have properties which are completely defined by their inputs and
outputs.

Subroutines (I prefer the term procedure) are just a sequence of commands.
They may use arguments and return a value but they can also do anything else.

Functions, real functions, can be reasoned about, composed, mapped over
collections, and otherwise trusted to behave themselves. Very few programming
languages provide strict functions. One may write them, of course, if one is
careful, but that is doing work a compiler could be doing for you.
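The distinction can be made concrete in a few lines (Python, with invented
names):

```python
# A function in the strict sense: output depends only on inputs.
def area(w, h):
    return w * h

log = []

# A subroutine: returns a value but also mutates state outside itself.
def area_and_log(w, h):
    log.append((w, h))
    return w * h

# The pure version composes and maps safely; nothing else can change
# its result, and calling it twice is the same as calling it once.
sizes = [(2, 3), (4, 5)]
areas = [area(w, h) for w, h in sizes]   # [6, 20]
```

Mapping `area_and_log` over the same list computes the same numbers but
silently grows `log`, which is exactly the kind of behavior that defeats
reasoning about composition.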

~~~
lotyrin
There's also a disconnect with using a PL's concept of a function (procedure,
calling convention) instead of the simple intellectual concept (composition)
whether or not you're concerned with the mathematical concept (purity);
language compilers or runtimes often perform poorly because as a matter of
course every function call adds to a stack (failing to perform obvious inline
or tail call optimizations), involves a dispatch that might be more expensive
than the function itself (just in case inheriting code overloaded it or
because every procedure lives in a run-time mutable table), or causes
potentially large copies of arguments (to pretend at them being immutable by
called code), etc. -- paying costs which in many cases could be avoided.

------
slightlycuban
I feel the premise of "the OS as a Language" is missing a very critical point:
our common definition of what "a computer" is has changed drastically in the
intervening years.

Take a close look at those early computers. It is easy to ask "why do we even
need an OS" when we don't have extra things like USB, hard drives, or RAM
(it's all just storage anyway). What was once called "a computer" is now known
as "a processor."

Even within the CPU itself we find layers of abstraction: caches,
interpreters, even out-of-order execution! Just because you write assembler
today, doesn't mean the processor is going to do exactly what you specified.

Some of these abstractions should be questioned and challenged, such as the
CISC instruction set I alluded to above. But we should remember _why_ these
abstractions are there in the first place, instead of blindly questioning (or
accepting) how things work.

------
ribs
I'd like to see something about dependency and package management here.

~~~
sedachv
I think it is no exaggeration to say that until Eelco Dolstra's 2006 PhD
dissertation there is nothing worthwhile to mention:
[http://nixos.org/~eelco/pubs/phd-
thesis.pdf](http://nixos.org/~eelco/pubs/phd-thesis.pdf)

------
nv-vn
All programming languages were derived from Fortran? _Really?_

~~~
sevensor
This point and the few exceptions to it are addressed in the footnotes. I was
skeptical too, particularly of the claim that the lisp family tree is rooted
in Fortran, but the claims seem to be pretty well researched.

~~~
nv-vn
Maybe, but there are plenty of languages I would say are completely
independent -- APL, Prolog, and Forth don't seem to resemble Fortran in any
major way. At least Lisp shares the same notion of functions=subroutines.

------
romaniv
I would add virtual files and filesystems to the list. We take them for
granted, but without files we would probably be stuck with storing everything
in a giant database à la the Windows registry. That would make a lot of
things complicated and some things practically impossible.

------
anton_tarasenko
Slightly relevant:

\- MIT Technology Review Emerging Technologies Top (2002-2016):
[https://www.technologyreview.com/lists/technologies/2016/](https://www.technologyreview.com/lists/technologies/2016/)

\- Gartner Hype Cycle history since (2000-2016):
[http://imgur.com/gallery/noBKI](http://imgur.com/gallery/noBKI)

\- Google N-gram viewer (using technology as a keyword):
[https://books.google.com/ngrams/graph?content=xerox%2Cfax%2C...](https://books.google.com/ngrams/graph?content=xerox%2Cfax%2Cemail%2Cphone%2Ccell+phone&case_insensitive=on&year_start=1980)

~~~
scribu
This info would be a lot easier to process if the images were juxtaposed and
you had a slider to move through the years.

That way you could visually track how a particular item evolved.

------
Demiurge
This is interesting, and can be sort of hard to really understand until the
original examples are really internalized. In the end, we can say that
everything is change over some dimension, all systems have input and output.
Everything is energy or matter, just like it is data or operations. I think
these concepts and patterns are natural logical generalizations, and it's good
to remember these patterns when beginning to analyze something, but ultimately
it is moot and not helpful for any one specific task we are trying to solve. I
can't play Angry birds on ENIAC, but that is what I may want to do, now.

------
chewyshine
So many amazing ideas were produced in 1955 & 1956.

~~~
goatlover
Makes you wonder what those ideas would have looked like with more powerful
hardware. Give Engelbart and his NLS team or Kay and his Smalltalk team
gigabyte RAM and processors with powerful networking, and see what they would
have come up with. It would be akin to taking the most innovative people today
and starting over from scratch.

------
bogomipz
Under the second item, Operating Systems, the author states:

"Well-specified interfaces are great semantically for maintainability. But
when it comes to what the machine is actually doing, why not just run one
ordinary program and teach it new functions over time? "

What is he referring to there? What is the "alternative" to a time-sharing
system that the author is alluding to? Does anyone know?

~~~
VLM
Likely alluding to the Forth development experience where you start with
writing a word (function) that toggles a single bit on an IO port and a couple
dozen layers of abstraction later you have an assembly line or telescope, but
just like Feynman and the turtles, it's turtles all the way down, or Forth
words all the way down, or whatever.

Whereas OS style, you'd pick some layer of device abstraction, probably
extremely arbitrary, and on this side it's all assembly or C and on that side
it's all ... something else, Perl or shell scripts, who knows, and generally
people don't ever cross the arbitrarily placed abstraction barrier. It's an
extremely hard, arbitrary barrier.

A classic example of randomly tossing down a barrier over the decades: should
the RS232 API be bit-banging a TX bit with your own timing routines on the
application side while the OS merely arbitrates access, or, way on the other
side, should the OS implement a full TCP/IP stack so your application talks to
the RS232 API at the TCP session layer while the OS does all the rest -- maybe
the OS even implements an NFS server talking UDP in the kernel and your API is
being an NFS client? I've lived through both extremes and everything in
between. It is totally arbitrary and mostly just gets in the way.

Back in the old days when the unix kernel was written in C and so were all the
apps... Well no one wants to hear this, but those were good days...

~~~
bogomipz
I'm not sure I really understood this, but it sounds wildly fantastic if it
involves Feynman and Forth :)

------
mkoryak
forgot porn, ok maybe its not a software innovation. please downvote me

