Simple Made Easy changed how I think about constructing software systems of any kind. It provided me with a strong vocabulary and mental model to identify coupling and design with a clearer separation of concerns and a focus on the output rather than the ease of development.
Also see Stuart Halloway’s earlier talk, Simplicity Ain’t Easy. There’s a fair amount of overlap with Rich Hickey’s talk, but both are worth watching: http://www.youtube.com/watch?v=cidchWg74Y4
- everyone knows you cannot keep up the pace of a sprint over a long distance race - so they solved it by running a long distance race but just firing a starting pistol every 400 yards - and we're off again!
Just rewatched it, and it is a good talk, but I always think the whole OO dismissal is a bit too extreme. I've written both "generic data structures + functional language" programs and "ORM + objects + state" programs, and I didn't have problems in either case, because I used each where it was suited.
A document-based user application is basically a gigantic state. If you're using generic data structures such as loosely typed maps and sets, with separate functions in various modules for manipulating parts of that structure, you'll end up with a far bigger mess than if you have regular three-tier MVC code with objects in the model layer (even with an ORM).
I do think, and I have experienced it, that sometimes regular OO is the right abstraction.
Just watched it... really great stuff. But can anyone chime in on how you can apply some of the principles in his talk to something like a retained-mode display library, for GUI or 3D for example? Libraries like these pop up in all popular OO languages and usually have long inheritance chains with very state-heavy classes, which further form somewhat rigid hierarchies of class instances at runtime. This violates some of his tenets in a big way, but they seem to be the predominant design pattern for getting stuff on screen. Even HTML5 is essentially like this.
Any thoughts on how his talk could apply here? Is there a better way?
View layers can get complex, but you can at the very least encapsulate complexity and have it interact with other parts of the system in a simple, well-defined way. Have the separate layers of the application communicate via interfaces that keep the ingress and egress points of data flow well defined. Things like event pub/sub systems, the observer pattern, etc. can further decouple things.
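To make the pub/sub point concrete, here is a minimal sketch in Python; the EventBus name and its subscribe/publish methods are just illustrative, not any particular library's API:

    from collections import defaultdict
    from typing import Any, Callable

    class EventBus:
        """Tiny pub/sub hub: publishers and subscribers only know event names."""
        def __init__(self) -> None:
            self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, event: str, handler: Callable[[Any], None]) -> None:
            self._handlers[event].append(handler)

        def publish(self, event: str, payload: Any = None) -> None:
            for handler in self._handlers[event]:
                handler(payload)

    # The model publishes facts; the view subscribes to them. Neither holds a
    # reference to the other, so the coupling is limited to well-defined event
    # names -- the ingress/egress points stay visible in one place.
    bus = EventBus()
    bus.subscribe("document.saved", lambda path: print(f"view: show 'saved {path}'"))
    bus.publish("document.saved", "/tmp/notes.txt")

The point isn't the few lines of code; it's that the boundary between the complex view internals and the rest of the system is explicit, so each side can change independently.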
I meant that if I were designing a general-purpose retained-mode GUI library or 3D engine from the ground up and wanted to incorporate his principles as much as possible, how could I do that? Maybe a retained-mode approach is just inherently (too?) complex?
Correct me if I'm wrong, but I think your answer is in reference to using such a library, and I can certainly see how my question implied that, so sorry for the confusion if this is the case. Thanks for your answer regardless.
At that level it's similar tradeoffs. Consider what the code would look like if it were purely functional. In fact, a good answer to your question would be, as a thought exercise, to take a look at how XMonad is implemented in Haskell. That would be a completely different approach to the large, heavily coupled messes that OOP can sometimes lead to when modeling the state as mutable object members.
This is something I've been experimenting with. My intuition is that the scene graph will look a lot more like an AST made from algebraic data structures than an OOP actors network. Down that road, the system looks like an optimizing compiler with the really tricky added bit of iterating in response to user input.
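A rough sketch of that intuition in Python (all names hypothetical): the scene graph is plain immutable data, and "rendering" is a pure traversal over it, much like walking an AST:

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Rect:
        x: float
        y: float
        w: float
        h: float

    @dataclass(frozen=True)
    class Translate:
        dx: float
        dy: float
        child: "Node"

    @dataclass(frozen=True)
    class Group:
        children: tuple["Node", ...]

    Node = Union[Rect, Translate, Group]

    def flatten(node: Node, ox: float = 0.0, oy: float = 0.0) -> list[Rect]:
        """Pure pass: 'compile' the scene down to absolute-coordinate rectangles."""
        if isinstance(node, Rect):
            return [Rect(node.x + ox, node.y + oy, node.w, node.h)]
        if isinstance(node, Translate):
            return flatten(node.child, ox + node.dx, oy + node.dy)
        return [r for child in node.children for r in flatten(child, ox, oy)]

    scene = Group((Rect(0, 0, 10, 10), Translate(5, 5, Rect(0, 0, 2, 2))))
    print(flatten(scene))  # user input would produce a new scene value, not mutate this one

Optimization passes (culling, batching) then become further traversals over the same data, which is where the "optimizing compiler" feel comes from.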
This is the talk that, according to Wikipedia, included the first public demonstration of the following technologies: the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor. Pretty good for one talk!
If you have not watched at least the first 30 minutes of Engelbart's talk, I would recommend you stop what you're doing and watch it right now.
In an hour and 40 minutes Engelbart demonstrates input via mouse, video conferencing, and collaborative editing, among other things. This was before the internet, before UIs, before the idea of personal computing. Incidentally, the most profound thing about this demo is not the technology demonstrated but rather the introduction of the concept of personal computing. At a time when computers were reserved for number crunching, Engelbart envisioned a future where they would be a part of our daily lives, not controlling us but enhancing our abilities.
A quick side note: while Engelbart was certainly the visionary, it was his partner Bill English who brought Engelbart's visions into the world.
"And observe that if I forget to tell the computer to save my work, it loses it!"
furious scribbling in the audience as everyone takes notes
I'm so glad we're finally getting away from this paradigm after four decades. For example, the iPhone notepad (and now Mac TextEdit) doesn't wait for a cue from the user to write the few dozen bytes of new input to persistent storage.
Autosave without saving many undo states is equally dangerous.
Not saying the iPhone does anything in a bad way (I have no clue) - but I've hardly been bitten by losing unsaved work, because I'm deliberately hitting ctrl-s very often.
That's why many people advocating for automatic save (I first read about this in Alan Cooper's About Face 2.0) also advocate for unlimited undo, or at least something close to it: closing a document without saving it is just a form of undo, while closing a document and saving it should be the default action.
This is also a place where the desktop metaphor (with paper documents) around which most WIMP systems are built exposes computer details and fails to stick to its metaphoric roots. When I scribble something on paper it's there and I don't have to consciously remember to somehow commit a transient state of my scribbling to paper explicitly.
After some incidents in my school years, I've noticed that I unconsciously tend to hit ctrl-s every few dozen keystrokes without even noticing it.
Good software doesn't lose anything ever, but even with "average" (i.e., unacceptably bad if you think about it critically) tools I've had sudden power cuts where I lost just a few words because of this habit.
A friend of mine invited Bret to do this talk. He told me that Bret sent close to 50 emails leading up to the event making sure that every aspect of the talk was perfectly choreographed.
Meredith Patterson's astonishing CCC talk on computing and linguistics. It's a tour de force, presenting a systematic and practical proposal for how we can build an open and secure future for computing. I can't imagine a better example of the joys of inter-disciplinary thinking.
I found the message in the talk persuasive, but the presentation of it sorely lacking. Her delivery was stilted, probably because she was very nervous and/or hadn't practiced her presentation nearly often enough to sound natural. On top of that, the talk was way too long. It could easily have been cut in half, at the very least.
In a Boing Boing article[1] (which also, incidentally, calls her presentation a "tour-de-force"), her points were summarized in these quotes:
Hard-to-parse protocols require complex parsers. Complex, buggy
parsers become weird machines for exploits to run on. Help stop weird
machines today: Make your protocol context-free or regular!
Protocols and file formats that are Turing-complete input languages
are the worst offenders, because for them, recognizing valid or
expected inputs is UNDECIDABLE: no amount of programming or testing
will get it right.
A Turing-complete input language destroys security for generations of
users. Avoid Turing-complete input languages!
Ok. I got that. Seems reasonable enough. It didn't take me 40 minutes to read that, and it shouldn't take 40 minutes to present basically the same thing. Maybe half that if you thought you really had to fight hard to make your case.
A more concise presentation containing just the important/interesting bits would have been less painful to watch and probably a lot easier to prepare for.
That said, the Q&A period at the end was good. She seemed a lot more comfortable there, and there was a lot more interesting and informative content there than in the presentation proper.
I'm watching it right now and there is nothing wrong with the delivery whatsoever. I also didn't get the feeling that the talk was too long. Sure, you can sum up pretty much anything with a TL;DR, but that's not the point of giving a talk, right? I mean at its core a presentation is meant to inform or entertain and this one here did both. So I really don't get where the criticism is coming from.
This is a great talk, although maybe a little bit too precious (but this is common to a lot of [security] conference talks so I don't hold it against her). Pushing people towards using more formal methods to generate and accept protocols is always good.
That said, I thought the argument against length fields was somewhat... weak. But maybe I'm misunderstanding the context. The question at the end was not answered satisfactorily. A protocol with a length field is most certainly deterministic, and if you go the other way and use a delimiter, escaping/encoding is the only way you're going to handle arbitrary user data. I would argue the length field is miles better. If someone injects bytes into your stream that match the protocol, your recognizer isn't going to save you, just the same way someone rewriting the length field is going to blow things up.
What this talk seems to argue for is making the language simpler (e.g. context-free), so you can validate that the semantics are working as intended, but transferring arbitrary blobs of data is always going to be an issue as long as people enter random musings in text boxes that are rendered by Turing complete software. :-)
As I understood it, the argument against using an unbounded length field is that it makes the language recognizing it context-sensitive. When processing some inner payload of a data packet you need to carry around the state of outer context-sensitive protocol layers to make sure your inputs are well formed.
The fact that it is trivial to maliciously craft the length field makes it cheap for the attacker to try to exhaust receiver memory, overflow buffers or make DDoS attack more effective. If you use a delimiter, the attacker has to at least spend the required bandwidth to try to exhaust resources.
I suspect that if your protocol specification bounds the length field to some finite amount, then your language can be classified as a regular language for verification purposes, just with an FSM branch for each possible value of the length field.
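A small sketch of that bounded-length idea in Python (the 2-byte header and the MAX_LEN bound are assumptions for illustration): once the length field is capped, the set of valid frames is finite, so the recognizer could in principle be unrolled into a plain finite automaton with one branch per admissible length:

    MAX_LEN = 1024  # protocol-specified bound; anything above it is rejected outright

    def recognize_frame(data: bytes) -> bytes | None:
        """Recognize one <2-byte big-endian length><payload> frame."""
        if len(data) < 2:
            return None
        declared = int.from_bytes(data[:2], "big")
        if declared > MAX_LEN:           # never act on a length beyond the bound
            return None
        if len(data) != 2 + declared:    # exact-length check: no trailing garbage accepted
            return None
        return data[2:]

    print(recognize_frame(b"\x00\x03abc"))   # b'abc'
    print(recognize_frame(b"\xff\xffabc"))   # None: declared length exceeds the bound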
> The fact that it is trivial to maliciously craft the length field makes it cheap for the attacker to try to exhaust receiver memory, overflow buffers or make DDoS attack more effective. If you use a delimiter, the attacker has to at least spend the required bandwidth to try to exhaust resources.
A length field doesn't mean that you have to pre-allocate that amount of memory. Never do that! Robust implementations use the length field only as a hint, as a hidden delimiter, and allocate memory as the data comes in.
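A rough illustration of the "length as a hint" point in Python (the chunk size and hard cap are assumptions, not anything from the talk): the declared length bounds how much you are willing to read, but memory is only allocated as bytes actually arrive:

    import io

    HARD_CAP = 64 * 1024  # sanity limit on any declared length

    def read_payload(stream: io.BufferedIOBase, declared_len: int) -> bytes:
        """Read up to declared_len bytes incrementally, never pre-allocating them."""
        if declared_len > HARD_CAP:
            raise ValueError("declared length exceeds sanity cap")
        chunks = []
        received = 0
        while received < declared_len:
            chunk = stream.read(min(4096, declared_len - received))
            if not chunk:  # sender lied about the length or the connection dropped
                raise ValueError("stream ended before the declared length was reached")
            chunks.append(chunk)
            received += len(chunk)
        return b"".join(chunks)

    print(read_payload(io.BytesIO(b"hello"), 5))  # b'hello'

An attacker who claims a huge length still has to actually send the bytes before anything large accumulates on the receiving side.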
That said, she does have a point. Though escaping is fraught with dangers, too (remember PHP in the beginning? magic quotes, ugh).
She also talks about a general parser she's building for this on GitHub[1]. Using this parser, she has built DNS and base64 recognizers, and I think they are working on building recognizers for more protocols.
I just watched it and thought it was great. She is well spoken and clear.
But I sort of thought this was common knowledge. I made a related comment here: https://news.ycombinator.com/item?id=4677998 (and before the Ruby Gems YAML exploit, which maybe was kind of prescient)
Incidentally, I felt she sort of brushed off computational complexity when one person brought it up. Complexity is just as important as computability/decidability, and it fits very well with her formal language perspective (e.g. regular languages can be recognized in linear time and constant space).
Another thing was that she didn't give that many concrete examples. The X.509 certificate one seemed really good (I didn't look into the details). But I would have liked to see an instance of a protocol that was not meant to be Turing complete, but IS.
She gave the example of HTML5 + CSS, which was a good one. But I don't think people have accidentally written Turing-complete HTTP parsers or anything like that. How big a problem is it in practice? Is she claiming that most security bugs fall into the category of being caused by language errors?
And, like another commenter, I'm not sure I agree with the disavowal of the length field. I'll have to think about that. I believe delimiting and escaping is more complex and thus more fragile. I also believe it's slower, despite what she says in the talk. We have to consider the tools we have now; maybe in some future world we'll get better formal language tools.
Great talk. One of the best. His talk with Gabriel, 50 in 50, is not as good, but still quite fun. Well worth watching, but it is much better in person than on video (saw it at HOPL):
Steele (and Gabriel) are great at giving these talks; they are almost art. Which is why I wouldn't call them so much technical (even Growing a Language) as entertaining and perhaps a bit influential.
Great for repeated watching; I get something new from it every time I watch it. It is also great for recalibrating your point of view from amazement at whatever the current trend in technology is to a more long-term outlook, as well as encouraging higher standards for what is currently available.
I think this shift in outlook is important for technologists like us, because it is easy to become immersed in the day-to-day goings-on of tech and become myopic in a way. Using the invention of the printing press, literacy, etc. as a reference point is a great way to reorient your attitude towards technology and what it can/should do.
Is this the one where he says the web was designed by amateurs and it should have been a virtual machine instead? He says they did a better web 30 years before or something to that effect.
If so I'll commit blasphemy and say he's wronger than wrong. I was especially offended that a curious person would display such ignorance in understanding the work of others.
Without going too much into detail, if he believes this, then he can't possibly understand Tim Berners-Lee's principle of least privilege. And that the web would never have become ubiquitous, e.g. made the jump from PCs to mobile, in his bizarre universe.
That exact quote is from a more recent talk of his, but he does mention the web and his current distaste for the design principles it embodies. I would encourage you to still watch the talk, as his criticism of the web is an incredibly small portion of the overall talk.
I wish that you had included more substance in your criticisms so I could have either agreed with you, disagreed, or explained my interpretation of his remarks. I will say that I think Kay explicitly agrees with the principle of least privilege (Rule of Least Power according to Wikipedia) and calls that out in his talk. I see no way in which the ever-enlarging HTML, CSS, and JavaScript standards embody what Berners-Lee was talking about, especially when they still don't offer nearly the functionality that an OS offers currently. You can tell this because there are really only 4 organizations capable of creating a full browser, and they happen to be mostly the largest tech corporations on the planet. If you need those kinds of resources to implement the web today, how can it ever be considered "Least Power"? I suspect that in 10 years the W3C will have added so much more functionality to the various standards, in an attempt to reach some kind of parity with native development, that browsers will collapse under their own weight as they try to implement what is basically an OS (this really became explicit when Chrome started using processes to "protect" tabs. Sound familiar?).
The web as it exists today is a huge pile of hodge-podge conflicting standards, none of which come remotely close to offering the level of development performance for apps (rather than pages) that you can get with any desktop toolkit. Alan is fully behind the "idea" of the web, but is taking the long-term, and I believe correct, view that this cannot continue as it is and that we would be well served by trying to do something about it by conscious effort rather than evolving half-heartedly in that direction.
I don't think your response addresses the principle of least power. That it takes millions of lines of code to create a web browser is a separate issue.
The principle of least power basically means that you should convey meaning across the web using the least expressive language or abstraction possible. The reason is that you want to give the client more choices in how to interpret the data/content, rather than forcing it to conform.
Google, or any other search engine, would not be possible without the principle of least power. Mobile phones wouldn't have been able to use the web if the semantics were over-fitted for specific devices.
I've heard what is essentially Kay's argument from other sources too. Some computer scientist at Adobe (it may even have been one of the founders) lamented that the web wasn't built on PDF. PDF is a Turing-complete language that can describe any page layout.
Certainly after you've messed with CSS enough, you can see why he would say that (aside from the obvious favoritism). But it is a terrible idea, in that it overfits content for specific devices, is a vector for malware, etc. Once you are Turing complete, it's very hard to avoid these pitfalls.
Anyway, as I recall, Kay is making essentially the exact same argument as the Adobe guy, and the way he said it in that talk (Programming and Scaling, thanks below) displays astonishing ignorance for such a brilliant guy.
If Kay is right, then he should just create a web that is based on a VM. And then he can see how far it goes. It will never go anywhere because it will be broken by design.
That we are developing more VMs now for the web doesn't negate the point. The web should be layered; each kind of content should choose the least powerful channel to transmit it. If you need arbitrary code, then the least powerful abstraction is a Turing complete VM. But you don't need a Turing machine to display a blog post.
And yet no desktop toolkit has ever approached the ubiquity and deployability of the web. However much of a hodgepodge the web is, it can and will continue to improve incrementally, whereas a GUI toolkit, no matter how technically superior, can never cross the chasm to ubiquity, wring your hands though you might.
Well I'm not wringing my hands, and I don't think you really have the ability to comment on my body language. Let's keep this debate focused solely on the contents of the talk.
The ubiquity and deployability of the web is not an inherent quality of the design of html, css or javascript. The ubiquity and deployability seem to me to be largely due to the internet which is one of the comments that Kay makes. People confuse the workings of the internet with the workings of the web. Right now we send documents that are interpreted, but there is nothing to say we can't send objects which are "interpreted" (or JITed more likely). But his point in the talk I posted was that the amount of accidental complexity that is piling up on the web as we speak (and which will only get worse and worse) will eventually collapse upon itself. You can already see the complexity of things like javascript and css increasing as the web attempts to offer applications, not just documents, and doing this by having a committee standardize high-level behaviour like layout seems to be a poor approach in the long run.
Most of the talk is focused on thinking in the long-term as well as working to decrease the complexity of the software edifices we are currently creating, before they become too large to improve in any revolutionary way.
Who knows, maybe the next web will be built on top of the web, the way the web was built on top of the internet. The point of his comments in the talk is that we should explicitly think about the design of what we are building before we rush off to pile up code.
You'll have to forgive me not watching the talk yet. I'm not in a position to comment on Kay's entire vision.
My comment was aimed squarely at the refrain that we've been hearing from serious engineers for a long time about the utter unsuitability of the web for apps. I find this tiresome because technology doesn't win by being better, it wins by being adopted. Of course there's no technical reason we can't have a better basis than the web for internet apps. It's not about technical limitations, it's about the adoption curve. The perfect cross-platform GUI toolkit in 1992 would have still failed if it required you to write C++ to render your class schedule.
The fact that any person with almost no technical expertise can access an app from any computer they sit down at anywhere in the world without requiring installation is actually an amazing achievement that far outweighs the kludginess of the apparatus.
I am enjoying this part of the discussion. One of the things that struck me a few years ago was how heavily the design of the web was oriented around trying to reproduce the print media industry in a browser. Thus we get the "markup" of HTML, which provides layout at the expense of structure. Later on we added CSS, which ideally should provide layout, but bad habits learned at an early age, or instilled by the tools that came about in the 90s, still haven't been unlearned in many cases to this day.
There's no concept of "line 4 of chapter 3 of book X" in the web, at least not down at its core. You have to shim it in with anchor tags or something similar, and as a result we don't have ubiquitous browser apps that let me highlight a snippet of text in a book, write some notes in the margin (example: "this is a reference to Hamlet's soliloquy"), and then on a whim search for all of Noam Chomsky's (or whoever's) notes on Hamlet, or perhaps search for all public notes on this book in the context of Hamlet. The lack of structure means that everything's this haphazard soup of markup, with the focus being so much on how it looks that structure of the text and the relationship between parts gets lost. I'm not sure the "semantic web" will make up for that lack, but it's probably a good example of how the "next web will be built on the bones of this one" (to paraphrase someone else's comment in this thread).
The web can, and will continue to accumulate stuff. You can't clean up a mess by accumulating more stuff.
There is no technical reason why a proper GUI framework couldn't be as ubiquitous as HTML+CSS+JS+SVG+etc, except that historically it didn't happen that way. It's definitely not inconceivable that somebody will build Kay's VM vision as an abstraction layer on top of web technologies, so that at least we no longer have to deal with the mess in each and every web application separately. If such a VM becomes popular enough we can implement said VM directly instead of on top of existing web technologies.
> You can't clean up a mess by accumulating more stuff.
You can't clean up a mess by adding layers, but you can replace layers (eg. you can add web sockets and start driving real time apps that way). Therefore incremental improvement is possible essentially forever.
> There is no technical reason why a proper GUI framework couldn't be as ubiquitous as HTML+CSS+JS+SVG+etc, except that historically it didn't happen that way.
Agreed. However, it's not just accidental either. The web is many things to many people. It's not like cross-platform GUI toolkits failed for lack of trying; the fact is that they could not reproduce the advantages that the web had in the very beginning. Specifically, it was comically easy to create content, and even a browser, on any platform in those early days. Obviously over the next decade web tech was severely abused to shoehorn in application functionality where it was never intended, and ended up being more complex in the long run. However, the number and utility of the dead simple documents is absolutely massive. That, combined with a de facto standard for creating server apps without installation that is perfectly accessible to a layman (theoretically with any disability as well), was the wedge that GUI toolkits never came close to having.
Your idea about building a new abstraction layer on top of existing web technologies and then reimplementing it natively is the most credible idea I've heard about how the web could be supplanted, and I certainly wouldn't bet against that happening over the coming decades. However, I think the web should be a textbook example of how worse-is-better is not just a conciliatory platitude for disgruntled engineers, but an actual requirement for certain classes of technology to gain critical mass.
I used to think differently, but my time on HN has persuaded me that I'm wrong. I now agree with you.
I do miss the "Write it according to the specs and publish it, and anyone on any browser, any OS, any machine, any display type, can access it" attitude. Some people really do need to re-learn the universality of the www, instead of fencing off content inside apps. (Why do newspapers insist on really poor online versions, and mobile versions, and app versions?)
If we'd waited for perfect we'd never have linux, we'd still be waiting on hurd. If we'd waited for perfect we'd never have the internet and the web, we'd still be waiting on xanadu.
>If so I'll commit blasphemy and say he's wronger than wrong. I was especially offended that a curious person would display such ignorance in understanding the work of others.
I don't see how he can be wrong. The point he made wasn't that the things they did in the 70s were better than today's web or UIs. Just that people (and companies) should have invested in THAT line of work, and building on THOSE principles, instead of making a mess with reinventing the wheel, competing on BS differentiators and cutting off research.
I'd take my OS X or an Ubuntu desktop over any Xerox GUI of the time, any day of the week. But I'd also take the volume of research, innovation, insight, and coherence that they had at the time over today's ho-hum idea of development.
Despite having all those things 30 years ago, it took 30 years for them to become mainstream -- and far less has emerged since. And not for lack of CPU resources. Even something like BeOS could offer a 2005 desktop experience in 1996.
The widest-reaching innovation mover in the '00s was Apple. And even they mostly worked on market innovation (i.e. repackaging older niche stuff in a way that finally made people want to use it, instead of a jumbled mess), not on research innovation.
Read my response above; you're not addressing the point he makes that the web should have been based on a VM. That's what I'm saying is wronger than wrong.
If their line of research is based on distributing content over VMs, then that's completely different than the web's design philosophy, and it's probably THE reason his ideas never were adopted while the web was.
Those design ideas are diametrically opposed. The design of the web won, for that specific reason. It was BETTER rather than worse, as he claims.
I was astonished during the talk that he never reflected on why his ideas in this area weren't successful while the web was. It is NOT because the designers of the web are idiots or amateurs. That is specifically what he claims.
Your whole premise is that his ideas "weren't successful", whereas I see it more as a case of "Greenspun's tenth rule".
That is: it's not that the VMs lost and the web won.
It's that the web started from something that wasn't adequate for what we wanted, and after all these years it has been adding all kinds of kludgy, warty, and half-baked implementations of VM features.
In essence, instead of designing a proper web-as-VM from the start (and slowly evolving it as the capabilities of networks and CPUs increased), we started with Berners-Lee's document-based thing, which we soon found out was barely enough, and now we have to have video, sound, app capabilities, process isolation, maybe a common bytecode, networking, real-time, etc.
I.e. we started from dumb documents with links, when what we wanted was a VM. And now we have twisted the dumb document model to sorta kinda look like a VM.
How many times have you heard that "the browser is the new OS" and such?
So it's Greenspun's tenth rule, only instead of Lisp we've built an ad-hoc, bug-ridden, clumsy version of Alan Kay's, Doug Engelbart's, etc. ideas.
Right, so you are arguing exactly what Kay argued, and what the Adobe guys argued. I am saying that is wrong for the reasons above (i.e., exactly what Tim Berners-Lee argued).
Anyway, Kay basically attributed it to ignorance and stupidity on the part of TBL and the web's designers, which I think is incredibly arrogant, especially since he is wrong.
You didn't really address the arguments above -- you're basically saying "it would have been nicer and cleaner if we started with a VM".
I can sort of see how it would have been nicer in some theoretical world, just like using PDF/PS instead of CSS would have been nicer (in a way).
But I am arguing that it would have never happened. If you started with a VM-based system, there would be technical problems that prevented it from being widely adopted.
Actually, Java was pretty much exactly that. It was supposed to be a VM that ran anything, and disenfranchised Microsoft's OS. And we know JS is more popular than Java, despite way more marketing muscle behind Java in the early years.
It's already been tried. I'd argue that it wasn't just beyond the abilities of Java's designers, but actually impossible (this is as much social problem as a technical one).
You make the vague statement as Kay does that "we should have started with a VM", but it would be impossible to come up with a specific VM design (actual running code) that actually worked as well and as widely as the Web does.
You're absolutely right that historically, it wouldn't have worked. But the web has since moved beyond plain old static documents. What was good for a 1989 web isn't what's good for a 2013 web.
The design-by-committee of the W3C has resulted in overcomplicated yet at the same time underpowered designs. The model where an ivory tower mandates new things from the top down rarely works well. How many man-years does it take to implement a new browser? It's basically an impossible task unless you have millions of dollars to spend. A much better model for innovation is one where multiple competing technologies get implemented, and the best one wins organically. The web as a VM would have enabled that. You no longer need a committee that mandates HTML5. It can be implemented as a library, and if people like it they will use it. The W3C has a chicken-and-egg problem: before it standardizes something, there is not much experience with the features it standardizes, so it's hard to do good designs. On the other hand, once you standardize something, you're stuck with it. It can (practically) never die. If you have multiple "HTML5 libraries" then it's not a problem to try something out and pick the best thing. The best solution can win, and the others can slowly fall out of use.
I agree with the problems, i.e. that web standards and implementations are messy, but not with the solutions. It just seems like a big pipe dream -- "oh I wish we could start over from scratch, rewrite the web, and make things clean". Will never happen.
I also don't agree in 2013 that moving say HTML5 to a VM is a good idea. There's nothing stopping anyone from releasing a VM now. You could release a new VM and an application platform. I mean that is essentially what, say, Android is. But even Android clients will always be a subset of all web clients.
I think people proposing this mythical VM don't have a clear understanding of what VMs are. They are fairly specific devices. Do you think Dalvik, something like v8, or the JVM, or .NET, can form the basis for all future computing? It's an impossible task for a single VM. Even Microsoft is taking more than 10 years to move a decent amount of their own software to their own VM, on a single platform. All VMs necessitate design decisions that are not right for all devices. Even Sun had diverging VMs for desktop and mobile.
Do you think HTML+CSS+JS+(...) can form the basis of all future computing? You have to apply the same standards to both sides. It's hard to be perfect, yes, but it's easy to do better than the web mess. If I had to choose between HTML+CSS+JS+(...) and either the JVM or CLR as the basis of the future of all computing, I would definitely choose one of the latter options. Both the JVM and .NET VM would work okayish, though obviously not ideal. You want a simple bare bones low level VM with the sole purpose of building things on top of it. The JVM and .NET have much cruft of their own. That stuff should go into libraries, not into standards.
You don't need to start from scratch and force everybody to adopt, W3C style. You can build the initial version as an abstraction layer on top of web technologies, so that only the VM implementors have to take care of hiding the mess, and application developers can build on top of that. With current improvements in JS VMs and things like WebGL this is slowly becoming feasible. If that is successful, you can implement a high performance version natively.
This is inevitable. We won't be using HTML+CSS+JS in 100 years, other than for archaeological purposes. The question is how soon will it happen.
HTML and company aren't the basis of all future computing, but they or their non-Turing complete descendants absolutely will exist in 100 years. They don't exist for lack of imagination; they exist for timeless and fundamental reasons.
Do you think plain text will exist in 100 years? If so then it's not that much a stretch to say that HTML will. It astonishes me that people think that only code, and not data, will be transported over networks. That seems to be what you are claiming -- that it's preferable to transmit code than data in all circumstances?
The argument isn't symmetric because I fully believe that VMs are necessary for the web. I just don't agree with Kay that the designers of the web are idiots (he really says this) because they didn't start with a VM. VMs will come and go as hardware and devices and circumstances change. Data has much more longevity; it encodes fewer assumptions.
Assumptions that are no longer true are generally the reason a technology dies. It is pretty easy to imagine a VM (unknowingly or not) encoding preferences for keyboard and mouse input; that technology would have died with the advent of touch screens. Likewise, a VM that provides affordances for touch screens will likely be dated in 10 years when we're using some other paradigm.
HTTP and HTML are foundational to the web. They are basically the simplest thing that could possibly work. You can reimplement them in a few days using a high level language. They will be around for a LONG LONG time.
More complicated constructs like JS and VMs will have shorter lifetimes. I guarantee you that HTML will be around long after whatever comes after JavaScript, just like HTML will outlive Java Applets and Flash.
The layered architecture of the web is absolutely the right thing. Use the least powerful abstraction for the problem, which gives the client -- which necessarily has more knowledge -- more choices. You could distribute all text files as shell scripts, but there's no reason to, and a lot of reasons why you shouldn't.
Sure some kind of markup construct will exist in 100 years. But not HTML. I do not believe plain text will exist in 100 years, but that's another discussion. I am claiming something far weaker than what you seem to think I'm claiming. I do not think each and every website will be written from scratch on top of the VM. I'm simply claiming that the distribution mechanism for new features of the web will change.
Currently the features are mandated by the W3C, then implemented by all browser vendors. The browser vendors send out updates to all users. Instead, what will happen is that some future markup language that's better than the then-current HTML will be distributed as a library, running on top of the VM. This way, if you are a site owner, you can immediately start using it. You do not need to wait until (1) the W3C recognizes that this markup language is a good thing and standardizes it, (2) the browser vendors have implemented it, and (3) your visitors have updated their browsers. You simply include the library and start using the new markup language.
Note that this is already happening in the small. Javascript libraries like knockout.js are already changing the fundamental model of building web applications. Instead of waiting for the W3C to standardize some kind of web components with data binding, people implemented it as a library. 20 years ago people would have thought such a thing impossible. They would have thought that something like that surely has to be built in to the browser. As JS gets more powerful, more and more features can be implemented this way, instead of through standardization. Note that things are still flowing over the network in markup language (in this case, knockout.js template language). A similar thing happened with form elements. Remember the xforms standardization effort? Nobody cares anymore because JS libraries offer far better rich forms elements. The thing that changed is where it's implemented: ON the web, rather than IN the web. This organic model is far more in line with the principles of the internet, rather than the centralized way it's done with the web standards. Instead of giving somebody a fish, give him the tools to fish.
> HTTP and HTML are foundational to the web. They are basically the simplest thing that could possibly work. You can reimplement them in a few days using a high level language.
A few days?! This is absolute nonsense. You can't even read the spec in a few days, let alone all the specs it depends on, like PNG, JPG, etc. Maybe you can implement some tiny subset of HTML in a few days, but the whole thing is massively complicated. In comparison a VM is far far simpler.
100 years is a very long time. The web is 23 years old.
So either TBL is an idiot for designing HTML instead of a VM, or he's not and Alan Kay is an idiot for calling him such. Which is true? Maybe you are not defending Alan Kay's stance, but you haven't said that.
Plain text has existed for 50+ years; I'm sure it will exist in 100. I'm pretty surprised you don't think so. Actually Taleb's Antifragile talks about this exact fact -- things that have stood the test of time will tend to stick around. For example, shoes, chairs, and drinking glasses have been around for thousands of years; they likely will be around for thousands more. An iPad has maybe another decade. HTML has already stood the test of time, because it has gone through drastic evolution and remained intact.
Your knowledge of how web standards are developed isn't quite correct. The W3C didn't invent SSL, JavaScript, XmlHttpRequest, HTML5, or HTTP 2 (SPDY), to name a few. Browser vendors generally implement proprietary extensions, and then they are standardized after the fact.
I agree that the JS developments you list are interesting. JavaScript is certainly necessary for the web because it lets it evolve in unexpected directions. AJAX itself is a great example of that.
I'm talking about HTTP and HTML 1.0 -- they are conceptually dead simple, and both specs still use the exact same concepts and nothing more. I don't know if HTML 1.0 had forms -- if it did not then you could certainly implement HTTP + HTML in a couple days. I'm talking something like Lynx -- that is a valid web browser that people still use.
Lynx can still view billions of web pages because the web degrades gracefully, because semantic information is conveyed at a high level. The problem with VMs is they don't degrade. Suppose everyone gets sick of CSS. You can throw out all your CSS tomorrow, and write MyStyleLanguage, but your HTML will still be useful. If you encode everything in a VM, then the whole page breaks. It's all or nothing.
An analogy is that HTML vs VMs is similar to compiler IR vs assembly language. You can't perform a lot of optimizations on raw assembly code. The information isn't there anymore; it's been "lowered" in to the details of machine code. Likewise the client is able to do interesting things with non-Turing languages, because it can understand them. Once it's lowered into a VM language, the client can't do anything but slavishly execute the instructions. The semantic info that enables flexibility is gone by that point.
If you think markup will still exist in 100 years, then it's not too much more of a claim to say that markup will be transmitted by web servers and browsers. Do you agree with that? If that is true then TBL is not an idiot. Alan Kay's claim is basically ridiculous. A weaker version of it is still untrue.
I would say that in 100 years, HTML will still exist -- i.e. a thing that lets you separate plain text in to paragraphs, make it bold, etc. In contrast, we will have already gone through dozens of more VMs like the JVM, Flash, JS, Dart, etc. Certainly the JVM and Flash will be gone long before that point. They will have lived and died, but HTML will still be there.
I'm not calling anybody an idiot, and Alan Kay isn't either. You are the only one calling people idiots here I'm afraid.
The fact that some things stood the test of time has no predictive value. It's just confirmation bias. There are plenty of things that existed for far more than 50 years that did not stand the test of time. The horse cart, the abacus, and the spear for example. I believe that as IDEs and other programming tools become more sophisticated, it starts to make more and more sense for them to work directly on abstract syntax trees rather than plain text strings. But this is another discussion.
I'm well aware that the W3C did not originate the standards (including the original HTML), but it is how they evolve and accumulate cruft, and it's how we get stuck with them. Standardization can only ever accumulate complexity on the web. Things cannot easily, or at all, be removed. With an organic model, as soon as things fall out of use they disappear from consideration.
Perhaps you can implement HTTP and HTML 1.0 in a few days if you are a hero programmer. I'm not sure what your point is. We are living in 2013 not 1989.
Yes, the client can't do as much high level optimization, and that's a good thing. That kind of optimization belongs at the library level, not at the client level (and certainly not duplicated several times by every browser vendor, each with a slightly different and inconsistent implementation).
I agree that markup will be transmitted in 100 years, and I'm pretty sure so does Alan Kay. He, like I do, simply believes that the building blocks should be inverted. The thing that interprets the markup doesn't belong hard coded in the client. The client should simply be universal (i.e. Turing complete) and the thing that interprets and renders the markup should be built on top of that.
I agree that in 100 years we will still be able to separate text into paragraphs and make things bold, but there are plenty of things that let you do that which are not HTML, and separating text into paragraphs and making things bold is not the main point of HTML anyway (certainly not in 2013). So this is no reason why HTML will exist in 100 years.
Sorry, you simply didn't watch the video then. He does this in the video linked, and also in Programming and Scaling (which are more than 10 years apart; it's something he deeply believes).
One quote is, "this is what happens when you let physicists play with computers" -- but that's not all. I am paraphrasing "idiot", but he certainly heaps scorn on them for not understanding something obvious and fundamental, when he is the one who doesn't understand something obvious and fundamental.
If you have JavaScript, "technically" HTML, CSS, XML, and JSON are unnecessary. You can write programs to print and layout text (that's essentially what a PDF file is), and programs can emit and compute data as well.
Kay specifically calls out HTML as a "step backward" for this reason (although he is simply wrong as I've said). He wants a web based entirely on executable code. Yes, it sounds dumb prima facie but that's what he has said consistently over a long period of time.
FWIW I just watched this talk, and tried to judge it apart from my opinion that he is totally wrong about the web.
I think Kay is one of the most interesting speakers in CS; he's a learned man. But I didn't find the content that great. If you want some thoughts on software architecture, programming languages, with a lot of great cultural references outside the field, I much prefer the thoughts of Richard Gabriel:
If you search for his work on "ultra large scale systems", it is a lot more thought provoking than what Kay offers, and it is a lot more specific at the same time.
When watching the Kay talk, I was struck that he seems to be lumping every "good" in software engineering under the term "object oriented". Virtual machines are objects. Servers are objects. Numbers should be objects. All abstraction is objects. It was not really a useful or enlightening way of thinking about the world.
I know OOP has a different definition than what he intended with Smalltalk, but he didn't invent modularity. For example, Unix is highly modular but not object-oriented, and it predates his ideas.
Real Software Engineering, by Glenn Vanderburg. Not a perfect talk (especially the conclusions IMO), but a very good exploration of how some of the common beliefs in the field of software "engineering" came to be, and how something resembling actual engineering practice might be beneficial and practical.
Abstract: "Software engineering as it's taught in universities simply doesn't work. It doesn't produce software systems of high quality, and it doesn't produce them for low cost. Sometimes, even when practiced rigorously, it doesn't produce systems at all.
That's odd, because in every other field, the term "engineering" is reserved for methods that work.
What then, does real software engineering look like? How can we consistently deliver high-quality systems to our customers and employers in a timely fashion and for a reasonable cost? In this session, we'll discuss where software engineering went wrong, and build the case that disciplined Agile methods, far from being "anti-engineering" (as they are often described), actually represent the best of engineering principles applied to the task of software development."
"Inseperable from Magic: The Manufacture of Modern Semiconductors" — an overview of semiconductor fabrication (and its current challenges) by a former Intel engineer.
http://www.youtube.com/watch?v=NGFhc8R_uO4
"The Atomic Level of Porn", by Jason Scott — a history of low-bandwidth pornography, from ham radio to telegraphs to BBSes.
http://vimeo.com/7088524
This is a talk about the practical realities of integrating with APIs over the lifetime of a project. In particular, it presents an insightful list of pitfalls API designers often fall into that hamper integration, and it suggests ways to avoid those pitfalls.
Sadly, a decade or so later, many of us are still making the same basic mistakes. If this talk were better known, perhaps we wouldn’t be, so it gets my vote.
Since everyone's already mentioned Rich Hickey's talks, I loved Bjarne Stroustrup's talk on C++11 style: http://channel9.msdn.com/Events/GoingNative/GoingNative-2012... . He provides a crystal clear view of what he thinks C++ could do better and what steps are being taken to move in that direction. Also, I think he has a cool accent.
Maybe not the 'best', but there's a great short presentation called "Wat" that I really enjoyed that talks about weird behavior in programming languages when operations are performed on variables of different types.
I consider Sussman to be one of the eminent geniuses of computing. This talk is one of many that blew my mind. Time and again he demonstrates that his use of Scheme is simply for demonstration (I mean to emphasize this because the association of SICP/Sussman with Scheme is often regurgitated in FUD). It's not the languages we use that are important -- it is the models by which we compute things that matter!
Because these just got added to my watch list, here is a link for others. :) Thanks for letting this ignorant programmer realize that these were available!
I'm partial to Bryan Cantrill's DTrace talk, since it tackles a fundamental problem in software engineering and shows how to solve it using a new technology. The talk is more than 5 years old now, but sadly still as relevant as ever.
https://www.youtube.com/watch?v=TgmA48fILq8
I second that and pretty much anything else Feynman did. His lectures on quantum electrodynamics for the layman that later got put into his QED book are also excellent.
I seem to remember that the famous "Diligence, Patience, and Humility" piece by Larry Wall, as reported in the "Open Sources" book, was originally a speech. If so, I'd vote for that.
Otherwise, at some time I really enjoyed Guy Steele's talks while he was working on Fortress, e.g.
I'm always impressed by Larry's talks. Every time I see him talk about Perl 6 it really makes me want to work on and use Perl 6. So, I'm trying not to see Larry talk, anymore.
Not too sure about "technical", but Greg Wilson's "What We Actually Know About Software Development, and Why We Believe It's True" has greatly influenced how I approach everything I have to look at in life.
These talks are about the STEPS project from former PARC and ARPA guys: how a modern computing environment, from the metal up, can be reduced to a mere 20 KLOC -- about a factor of 1000 code reduction -- with the use of carefully designed DSLs.
They also redefine what an OS and the Web (Hypercard style) means by removing as much accidental complexity as possible.
"Alan Kay: How Simply and Understandably Could The "Personal Computing Experience" Be Programmed?"
http://vimeo.com/10260548
He has a series of lectures that explain physics as understood by the modern theoretical physicist. He starts with classical mechanics, goes on to quantum mechanics, special & general relativity, statistical mechanics, and cosmology. The prerequisites are only high school mathematics, he explains the more advanced mathematics as he goes along. The physics that he teaches is condensed, but not dumbed down. It's really how a working theoretical physicist understands physics, "the real deal" as he says. Beware that it's very much a theoretician's viewpoint.
Van Jacobson was the major architect of TCP/IP; here is how the Internet would work if he were in charge. If this is ever implemented it will change everything.
I enjoyed the Crockford on Javascript talks. Even if you're not interested in JS, the first talk is about the history of computing/programming languages and is quite interesting.
Robert Lefkowitz's keynote from Pycon 2007: "The Importance of Programming Literacy" I was listening while driving home and had to keep driving around my neighborhood because I wasn't ready to turn it off.
A similar talk: http://www.youtube.com/watch?v=Own-89vxYF8
The Computer Revolution hasn't happened yet. Alan Kay.
My summary: new tools are first used to do old things in a new way, but the revolution is in doing new things and having new thoughts.
miniKanren is an embedding of logic programming in Scheme. In this interactive presentation, William E. Byrd and Dan Friedman introduce miniKanren, from the basic building blocks to the methodology for translating regular programs into relational programs, which can run "backwards". Their examples are fun and convincing: a relational environment-passing interpreter that can trivially generate quines, and a relational type checker that doubles as a generator of well-typed terms and a type inferencer.
Guido did a talk at Uber about the reasons behind the decisions/trade-offs in how CPython is implemented. The PyPy guys were there, and it was interesting to hear how different groups could implement such different interpreters for the same language. It was much deeper than most of the technical talks I've heard in the Bay Area.
Therapeutic Refactoring by Katrina Owen is a talk I keep going back to for inspiration. It's a well written and funny talk that instills optimism when faced with tricky code.
I wouldn't want to use the term "best" because there are so many good ones in various areas, but this talk by Steve Yegge was both entertaining and informative.
Stanford Seminar - Google's Steve Yegge on GROK (large scale source code analysis)
As a passionate web developer, Can We Get There From Here? at Google IO 2008, by Alex Russell (before he joined Google), will always be paramount.
It's a technical view of the web platform and the problems attached to it (especially when we try to push its boundaries), and it questions why those problems are not being answered by the standards process and browser vendors.
"The Next Generation of Neural Networks" -- a Google TechTalk by Geoffrey Hinton in 2007. I have never been able to sit through 60 minutes of lectures without fidgeting constantly, however this one managed to keep my attention until the end.
Truly an amazingly great talk and worth watching all the way through (even if you only peripherally care about ANNs).
But I think the 3-minute talks we host at home every month or two include among them the 3-5 best I've heard (we just started taking videos, but they're in Hebrew).
I highly recommend hosting your own 3 minute talk session. It's really easy and the format just inherently leads to amazing talks.
Regardless of one's opinions on microkernels vs. monolithic kernels, it's a very interesting but accessible talk for those interested in lower-level systems and fault-tolerant architectures.
Richard Feynman's talk Computers from the Inside Out (titled on Youtube as Computer Heuristics) is a wonderful description of how computers work and what they can and cannot compute using a file clerk metaphor. He gets bonus points for wearing a Thinking Machines t-shirt.
Has to be the Facebook lead talking about how they deploy code to the main site. Absolutely fun to watch, and I learned a ton of cool stuff. Don't have a link, sorry.
I think the Cliff Click tech talk you are looking for was at Stanford in 2007. He covered both his wait free hash table and scaling on modern systems, specifically Azul Systems in 2007.
A talk about expressions (C#); it's very useful if you use Entity Framework a lot (like I do).
Too bad the author lost the video files (everyone recorded their own screen); other videos from the virtual conference are available: http://www.mvcconf.com/videos
I don't remember him giving any tech talks per se. There were a few interesting presentations on NeXT technology with Jobs as the speaker, but no one usually brings those up.
http://www.infoq.com/presentations/Simple-Made-Easy