newproc() in http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/sys/...
In summary, the goal is a user-facing system in 25k SLOC, from kernel to GUI apps, including all the code for the compilers of the custom languages they made. They call the languages "runnable maths" or "active maths": they devise a minimal notation that specifies how a system should behave, and then implement a compiler that runs that notation as code. They manage this through a compiler-writing language, OMeta.
For example, to implement TCP/IP, they parse the ASCII packet diagrams from the RFCs as code. Seriously, they take them as is and have a compiler that runs them as code. The implementation is supposed to be around 200 LOC (including the OMeta code for their compiler).
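To make that concrete, here's a toy sketch of the idea in Python (not their actual OMeta toolchain; the two-columns-per-bit rule is my guess at how such a parser could be written) that turns an RFC 793-style header diagram into (field, bit-width) pairs:

    # Hypothetical sketch: in the '+-+-+' separator rows each bit is drawn
    # as two character columns, so a cell of n characters between '|'
    # delimiters is (n + 1) // 2 bits wide.
    def parse_header_diagram(diagram):
        fields = []
        for line in diagram.splitlines():
            line = line.strip()
            if not line.startswith('|'):
                continue  # skip separator rows and the bit-number ruler
            for cell in line.strip('|').split('|'):
                name = cell.strip()
                if name:
                    fields.append((name, (len(cell) + 1) // 2))
        return fields

    TCP_HEADER = """
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |          Source Port          |       Destination Port        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                        Sequence Number                        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    """

    print(parse_header_diagram(TCP_HEADER))
    # [('Source Port', 16), ('Destination Port', 16), ('Sequence Number', 32)]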
There's also a graphic subsystem, apparently with all the functionality of Cairo (which is 20k LOC) in about 800 LOC.
Crucially, all this code is supposed to be very readable. Not line noise like APL. The expressiveness comes from having custom languages.
I believe most of the code is visible in the linked article. The packet diagrams were redone but are very nearly identical to the RFCs, just some minor formatting tweaks. They are very readable.
The compiler (written in OMeta) is essentially a BNF-like grammar.
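To give a feel for it (this is plain Python with hand-rolled parser combinators, not actual OMeta syntax, and the rule names are made up), a grammar rule such as `expr = num '+' expr | num` stays recognizable in the code:

    import re

    def token(pattern):
        rx = re.compile(pattern)
        def parse(text, pos):
            m = rx.match(text, pos)
            return (m.group(), m.end()) if m else None
        return parse

    def seq(*parsers):
        def parse(text, pos):
            out = []
            for p in parsers:
                r = p(text, pos)
                if r is None:
                    return None
                val, pos = r
                out.append(val)
            return (out, pos)
        return parse

    def alt(*parsers):
        def parse(text, pos):
            for p in parsers:
                r = p(text, pos)
                if r is not None:
                    return r
            return None
        return parse

    # expr = num '+' expr | num   (reads like the BNF it came from)
    num = token(r'\d+')
    plus = token(r'\+')
    def expr(text, pos):
        return alt(seq(num, plus, expr), num)(text, pos)

    print(expr("1+2+3", 0))   # (['1', '+', ['2', '+', '3']], 5)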
The readability is striking. The first time I heard of it, I was expecting it to be extra dense and hard to parse. It turns out they have entire lines of just separators counting against their limit, which seems useless until you realize one of the main goals is readability.
Incredible work, I'm looking forward to the end result.
The language they use to do that (OMeta) is then a language-defining language, and designing THAT means balancing abstractions at least three levels deep, where each level is hard and necessary. They achieve great simplicity by thinking damn hard.
On the other hand, the mailing list (as noted by the other respondent) is a pretty interesting place with few flamewars.
For me personally, the thought of giving up entirely on this vision is repugnant. Even if such systems aren't going to make a grand reappearance anytime soon, it's important to at least make sure that people who care about computing know they once existed. I see that reminding as Kay's primary role. I'm not switching to Pharo for my day-to-day work (in my defense, I did try), but I find the philosophy appealing and don't see a compelling philosophical disproof. So it seems to me it could re-emerge someday, and a world built on these ideas would have different tradeoffs, and might be preferable.
Back in the mid-to-late '80s I used to do work for a public library and helped set up several public Macintosh computers that had, in addition to word processing and spreadsheet software, HyperCard installed. I was amazed how quickly people "got it". From kids, natch, to street people to soccer moms. And by "got it" I don't mean running the built-in stacks, but creating their own.
Before long I was helping people make some really interesting projects. Some of them were beautiful monsters, but others were very well thought out. And all of them solved specific problems, many of which would not have had enough general appeal to be marketable or even transferable to anyone else.
And there's real value in having that ability. But these days most people simply can't get past the gatekeeper of initial complexity. To be sure, in HyperCard you quickly ran up against a wall of limited extensibility in terms of its objects. I can easily imagine better.
But Kay is right: the web and most modern OOP aren't it. And modern tools for software development are a nightmare.
_The big problem with our culture is that it's being dominated, because the electronic media we have is so much better suited for transmitting pop-culture content than it is for high-culture content. I consider jazz to be a developed part of high culture. Anything that's been worked on and developed and you [can] go to the next couple levels._
I'm not sure our media is necessarily better suited to transmission: I suspect we just lack mediums that make what is happening evident and interesting (even with X years of programming experience, watching another person's webapp run is rarely fun or instructive). This is a failure of our runtimes and of the weak structures we use to build code.
Even if Alan Kay is talking about PCs and not servers, you still need important things like the ability to connect to the Internet (TCP/IP), IPC, VMM, threading and multi-tasking, etc. I suspect I've not understood your point though, so feel free to clarify what I've missed :-)
Shortly afterwards this stopped making sense because RAM got cheap. Should a 4G PC have a 16G swapfile? Should a 256G server have a 1T swapfile?
In short, virtual memory was a hack to get around economic constraints. If you have the main memory, use it! Were it not baked into the OS at a deep level, I'd have stopped using swap 10 years ago. It could be taken out of the next version of all the main OSs altogether, and no one would even notice.
However, disk-backed page faults are but one component of a virtual memory subsystem. The real genius is that the OS can provide access to memory uniformly, in a way that is transparent to processes.
The fact that the two are entwined now is down to system defaults and to decisions OS developers made when implementing the page-replacement algorithms in their MM subsystems.
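mmap is a good example: here's a minimal Python sketch (assuming a POSIX-ish system) where a file becomes a plain slice of the process's address space, no swap involved:

    import mmap

    with open("example.bin", "wb") as f:
        f.write(b"\x00" * 4096)            # a one-page file

    with open("example.bin", "r+b") as f:
        mem = mmap.mmap(f.fileno(), 4096)  # map it into our address space
        mem[0:5] = b"hello"                # ordinary memory writes...
        mem.flush()                        # ...reach the file via the page cache
        mem.close()

    with open("example.bin", "rb") as f:
        print(f.read(5))                   # b'hello'

Shared memory, copy-on-write fork, and guard pages all fall out of the same machinery, whether or not a swapfile exists.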
So much insight, so few words.
Looks like the industrial revolution had nothing to do with wars at all... strange, somehow stuff got invented anyway.
I'd be very careful about implying that "wars are good for technology". We have seen more advances in technology in the past 20 years than at any time before, yet the Cold War was already over. How do you explain that?
>ekianjo: We have seen more advances in technology in the past 20 years than at any time before, yet the Cold War was already over. How do you explain that?
The US has kept itself pretty busy with warfare over the last 20 years. The Cold War may be ages ago, but wars have been waged for much of the time since it ended.
Wars just ensure such necessities occur at faster rates.
The necessity in war is to stay alive and beat your foes. To that end, you burn your own economy to the ground (and the rest of the world's with it) through massive spending and carnage.
Whatever innovation you actually get comes from a massive, unnatural diversion of resources that were in the pockets of free people before the war. So, of course, innovation can happen at an accelerated pace, but compared to innovation during peacetime it is far more costly and less effective, because you do not focus on the quality of your investments, and decisions are not taken on economic, rational grounds.
The cost of war on society is huge, and it's no surprise that the US was nearly bankrupt by 1944 (remember the pressure of war bonds?) and wanted to end it as soon as possible.
I'd be more interested in an accounting of the technological opportunities missed BECAUSE of war. There are numerous cases of scientists whose work was disrupted by the war and who had to abandon it in order to save themselves. There are things that you see and things you do NOT see.
> Wars focus only on how to make destruction
But I will say this: the academic world as I've been witnessing it is a world built on cooperation, and it seems to me to be the real driver of strong innovation. Another lucrative social website is not "strong innovation", but a new research algorithm that pushes the boundaries of what we can do, is. One is driven by profit, the other by curiosity. One happens because of competition, the other because of cooperation.
The open source community and Wikipedia are great examples of cooperation.
The biggest advances in business usually come from people who are completely focused on customer needs and ignore the competition (Google up until their Google+ fascination, Elon Musk, Apple prior to Android's success. Note how Google hasn't made any advances from its competition with Facebook, and how Apple hasn't advanced the iPhone much since it started competing with Android).
But at the end of the day I think you are right to dispute the popular notion that a cutthroat environment pitting everyone against everyone else is the one most conducive to innovation. It just causes fear, and fear makes stupid.
And while lucrative social websites are not innovation per se, you should consider they may be funding the next SpaceX.
That doesn't mean it couldn't be far better. Which is the point Kay was getting at.
Lose your vision and striving for the ideal, and you inevitably find yourself sliding backwards.
Audio would be ace.
He doesn't seem to get that what makes the web amazing is the data and people connected to it. That's generally far more interesting to users than novel interfaces or whiz-bang features.
Google, Facebook, Twitter, Wikipedia, Yelp, etc aren't technically all that interesting or innovative. They're infinitely more transformative than nearly all software that came before them though.
> Google, Facebook, Twitter, Wikipedia, Yelp, etc. ... infinitely more transformative than nearly all software that came before
Ah, pop culture, you make me laugh... you make me cry...
That the technologies you mention enabled the websites mentioned is not in doubt, but without the websites and related technologies would things like TCP/IP and Unix have had such an impact?
I do want to clarify that I'm not saying that the underlying technologies aren't amazing.
Oh, the web has got plenty of whiz-bang features alright. My problem with it as a technology is that, compared to the Internet, it feels like an ad hoc concoction of such: clunky, buggy, and a PITA to work with.
I shudder to think what my programming experience would be if the other layers in the stack (PC hardware, Unix, TCP/IP, etc.) were built the same way.
But it is!!
I'd speculate you just don't know it as well as you know web programming.
PC hardware... do I really need to rehash the whole "the x86 architecture is horrible compared to Alpha/ARM/68000" discussion?
Unix... The original "Worse is Better" essay was written about how Unix beat Lisp machines because it was worse!
TCP/IP... Well, there's that whole IPv6 thing where they try to fix some of the problems with IP. Even that ignores many of the fundamental problems, though.
All software is hacky when you get close enough. Once it has been around long enough the terrible corners get broken off and people get used to the problems, and can't imagine another way.
My background is in systems programming and low level stuff; I've only been a web-programmer for two years.
I agree with your points; none of those technologies is perfect. Their limitations are mostly the result of unexpected, explosive growth (the 640 KB barrier, the depletion of the IP address space, etc.). But they feel soundly engineered overall.
I just don't get the same feeling working with the web-related things (browsers, JS, AJAX, CSS, cross-domain policies and hacks to bypass them, etc, etc.). They all feel ad-hoc and it shows in the codebase I have to work with nowadays: it's often ugly, buggy and hard to maintain.
As a systems programmer, I've been amazed by the ingenuity of things like process forking, pipes, the proc file system, regular expressions, lexer and parser generators (the list goes on). But I can't remember a single web-related technology that made my jaw drop in amazement.
And I don't even want to get started on the difference in the quality standards and average mindset of hardware/OS design and web development (the whole "agile" and "extreme" programming thing and such).
They 'feel' so because you are made to believe so. We haven't seen anything better to compare them to. But those who have talk about it.
>>As a systems programmer, I've been amazed by the ingenuity of things like process forking, pipes, the proc file system, regular expressions, lexer and parser generators (the list goes on). But I can't remember a single web-related technology that made my jaw drop in amazement.
Web meaning what? Back-end engineering is very much a part of the web. And most of the things you mention, like parsing, regular expressions, pipes et al., are all there in languages like Perl.
Perl is known as the Swiss Army knife of the Internet.
Maybe not what you meant, but still pretty jaw dropping.
Everyone's standing on the shoulders of giants, but the idea that the contributions of top engineers and scientists today are somehow less than those that came before is ridiculous. Kay seems hurt that people have taken his ideas and run with them, and doesn't seem to get that not everyone is as smart as he is. That's fine; we can still have something to offer.
Only if they also created a time machine: map and reduce are in the Common Lisp spec from '84 and doubtless predate that by a decade or two.
> mostly solved fan-out on a massive scale
Well, only if you mean they created a poor copy of what Teknekron (now known as Tibco) was doing in '86, and Teknekron was mainly commercializing research from the '70s.
I think the ideas behind MapReduce are not difficult, but nor were they inevitable. The innovative part of MapReduce is that Google got to a place where the constraints made that approach the obvious one. MapReduce is impressive.
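The programming model itself fits in a screenful; here's a toy single-process word count in Python (illustrative only; the distribution and fault tolerance that made it impressive are entirely absent):

    # Toy MapReduce-shaped word count. The map phase emits (key, value)
    # pairs, the shuffle groups them by key, and the reduce phase folds
    # each group. The hard parts Google solved (thousands of flaky
    # machines) are not modeled here at all.
    from collections import defaultdict
    from functools import reduce

    def map_phase(doc):
        for word in doc.split():
            yield (word, 1)

    def reduce_phase(grouped):
        return {k: reduce(lambda a, b: a + b, vs) for k, vs in grouped.items()}

    docs = ["the cat", "the dog"]
    grouped = defaultdict(list)
    for doc in docs:                 # map + shuffle
        for k, v in map_phase(doc):
            grouped[k].append(v)

    print(reduce_phase(grouped))     # {'the': 2, 'cat': 1, 'dog': 1}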
As for the others: yes, technically they are just plumbing; their value is more a social innovation, and very important in that way.
Better go correct those guys at the Economist too.
It's running on a ####ing computer and yet it can't use any of the features of that computer, just because the browser is an app and the job of the OS is to protect itself from apps.
Well, that and it fits on a desktop instead of in 100 monasteries connected by donkey carts.
The web was designed for reading documents, not for shoehorning an operating system into the browser.
Thankfully the mobile world is bringing back the desktop-application concept, while using the network for communication protocols, as it was meant to be used.
"...what you definitely don't want in a Web browser is any features.... You want to get those from the objects. You want it to be a mini-operating system..."
Essentially he's saying that the way we think of browsers is holding the web back. We could have collaborative WYSIWYG editing of Wikipedia or blogs, for example, but instead we're stuck with just reading documents.
Perhaps it would be correct to say that the web was designed for reading documents, but it shouldn't have been (at least according to Kay).
Once you've started explaining away why people disagree with you with that kind of rhetoric, it's really hard to learn anything from them.
Of course I'm aware of that; I'm not that self-centered :)
> Once you've started explaining away why people disagree with you with that kind of rhetoric, it's really hard to learn anything from them.
English is not my primary language, so I'm not 100% sure I understand what you're saying, but in case you're suggesting that I'm full of what will turn out to be empty rhetoric, then of course you have the right to your opinion.
Anyway, I find it crazy that a guy like me (I think I'm in HN's bottom bracket when it comes to programming- or CS-ability) somehow sort of "defends" a guy like Alan Kay. I'm only doing it for the energy, the passion, the whatever-you-want-to-call-it that always gets to me when I'm reading Alan Kay's interviews. I agree it's something subjective.
Here's a link to the photos (some unpublished) that he refers to:
Look what happened to the previous submission of the same article here on HN: http://news.ycombinator.com/item?id=4224966. Such a good article went unnoticed because the title was plain unattractive.
Can we have a little bit of 'submitters freedom' here?
P.S.: The original title was "Internet by Brilliant minds, Web by Amateurs, Browser a lament".
I give you http://www.dougengelbart.org/about/hyperscope.html, implemented in 2006 on that horrible web thing Kay complains about. So there is that...
I do think we will get to a better place eventually. After all, the web is still a relatively new medium, especially when compared to the "Internet".
You may not agree, but I'm pointing out that the statement by Kay is pretty inaccurate.
https://www.biblegateway.com/passage/?search=luke%2023&v... (I would start around verse 32)
So it's not that I disagree with the assertion that there are many interpretations of what it means to follow Christ; rather, I disagree that the statement was accurate for all Christians. In my opinion, only a very small minority of Christians really believe that.
I realise that I've been voted down for pointing this out, but I thought it was an important viewpoint to bring forth in the discussion, given that Alan Kay made it the basis for his "Socrates should go to heaven" argument. Alan Kay might be excellent as a Computer Scientist, but his digression into the Christian religion was remarkably uninformed.
"And that's true of most of the things that are called object-oriented systems today. None of them are object-oriented systems according to my definition. Objects were a radical idea, then they got retrograded."
It sounds like his conception of "objects" is a more user-facing thing; an object is something you see on your screen and can interact with and manipulate by programming. This view blurs the line between users and programmers. I'm not sure this is a terribly realistic model for how normal people want to interact with computers. For all his disdain for the web, it has succeeded in bringing content to over 2B people worldwide, most of whom wouldn't have the first idea what to do with a Smalltalk environment.
Alan Kay really is the inventor of object orientation. But when the concept of "type-bound procedures" was added to compiled languages like C (making it C++), that got called object orientation too, even though type-bound procedures are a different thing. Real object orientation is about sending messages between objects; the messages do not have to be predefined, they are not dependent on type hierarchies, and they can be arbitrary, relayed, ignored, broadcast, etc. You do not need classes or types at all to be object-oriented by Kay's definition.
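A toy illustration of the difference (Python standing in for Smalltalk; the class names are made up): an object can relay messages it never declared, which a fixed vtable interface can't express:

    # Messages are just a name plus arguments, resolved at send time.
    # This proxy declares no interface at all, yet relays any message
    # to its target: no shared base class or type hierarchy required.
    class Proxy:
        def __init__(self, target):
            self._target = target

        def __getattr__(self, message):
            print("relaying #%s" % message)
            return getattr(self._target, message)

    class Logger:
        def log(self, text):
            print("LOG:", text)

    Proxy(Logger()).log("hello")
    # relaying #log
    # LOG: hello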
> It sounds like his conception of "objects" is a more user-facing thing; an object is something you see on your screen and can interact with and manipulate by programming. This view blurs the line between users and programmers. I'm not sure this is a terribly realistic model for how normal people want to interact with computers.
Which misrepresents his contribution rather badly.
KAY: We didn't use an operating system at PARC. We didn't have applications either.
BINSTOCK: So it was just an object loader?
KAY: An object exchanger, really. The user interface's job was to ask objects to show themselves and to composite those views with other ones.
BINSTOCK: You really radicalized the idea of objects by making everything in the system an object.
KAY: No, I didn't. I mean, I made up the term "objects." Since we did objects first, there weren't any objects to radicalize. We started off with that view of objects, which is exactly the same as the view we had of what the Internet had to be, except in software.
I realize that objects in Smalltalk were not all graphical, but this concept of objects as graphical entities seems to be near and dear to his heart.
I'm glad he's working on STEPS; I'm eager to see him push the boundaries of what is possible, and if it succeeds, it will validate his ideas. But Smalltalk and Squeak have been around for decades, and yet the Web and Wikipedia are orders of magnitude more popular. So why does he have to bash their creators as "amateurs" or "lacking imagination" when their ideas have caught hold in a way that his work has not? What does he have to back up this criticism? Sure, a lot of the ideas from Smalltalk and his early work on object-oriented design have influenced other programming languages, but he himself says that the way in which object-oriented design evolved runs counter to his vision, not in support of it.
I'm not really sure what I got wrong that explains the downvotes to -3.
I'm all for Alan trying to make his vision a reality with the STEPS project, and I truly am interested to see if he demonstrates a new way of thinking about computing. But when it comes to ideas, the proof is in the pudding: the web is enormously successful, Wikipedia is enormously successful, so his criticism of them rings hollow.
The problem is that right now it is simply too hard to create this kind of interactivity. But in my opinion it does not necessarily have to be more complex than creating or maintaining a corporate spreadsheet.
> will Wikipedia contributors be able to collaboratively design and refine such a thing as easily as they write and revise a simple article?
> how do you prevent a random Wikipedia contributor from modifying the interaction's code to steal the viewer's Wikipedia credentials?
The same way you would for text- or image-based content.
> and I truly am interested to see if he demonstrates a new way of thinking about computing
Even if he is unable to demonstrate a new way of thinking about computing, I am sure we can all agree that a lot of things can still be improved. It would be a shame if the best we can do on computers is present information the way we do on paper.
Try this one: http://www.tele-task.de/de/archive/video/flash/14029/