And also his talk Radical Simplicity: http://skillsmatter.com/podcast/java-jee/radical-simplicity/...
Seems more like a series of really obvious ideas with some platitudes thrown in for good measure.
- everyone knows you cannot keep up the pace of a sprint
over a long distance race - so they solved it by running
a long distance race but just firing a starting pistol
every 400 yards - and we're off again!
One that immediately pops to mind is this:
A document-based user application is basically one gigantic piece of state. If you use generic data structures such as loosely typed maps and sets, with separate functions scattered across modules for manipulating parts of that structure, you'll end up with a far bigger mess than if you have regular three-tier MVC code with objects on the model layer (even with an ORM).
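A minimal sketch of that contrast (hypothetical names; Python used just for illustration):

```python
from dataclasses import dataclass

# Loosely typed: a nested dict. Nothing stops a typo'd key or a string
# where a number was expected, so every module that touches it has to
# re-validate the shape.
doc_loose = {"title": "Q3 Report",
             "sections": [{"heading": "Intro", "words": 120}]}

# Typed model objects: the shape is declared once and enforced at
# construction time, so manipulation code can rely on it.
@dataclass
class Section:
    heading: str
    words: int

@dataclass
class Document:
    title: str
    sections: list[Section]

doc = Document(title="Q3 Report",
               sections=[Section(heading="Intro", words=120)])

def total_words(d: Document) -> int:
    # Operates on a known shape; no defensive key checks needed.
    return sum(s.words for s in d.sections)
```

The typed version costs a few lines up front but concentrates the structural knowledge in one place instead of smearing it across every function that pokes at the map.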
I do think, and I have experienced it, that sometimes regular OO is the right abstraction.
Any thoughts on how his talk could apply here? Is there a better way?
Correct me if I'm wrong, but I think your answer is in reference to using such a library, and I can certainly see how my question implied that, so sorry for the confusion if this is the case. Thanks for your answer regardless.
This is the talk that, according to Wikipedia, included the first public demonstration of the following technologies: the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor. Pretty good for one talk!
In an hour and 40 minutes Engelbart demonstrates input via mouse, video conferencing, and collaborative editing, among other things. This was before the internet, before UIs, before the idea of personal computing. Incidentally, the most profound thing about this demo is not the technology demonstrated but the introduction of the concept of personal computing. At a time when computers were reserved for number crunching, Engelbart envisioned a future where they would be a part of our daily lives, not controlling us but enhancing our abilities.
As a side note, while Engelbart was certainly the visionary, it was his partner Bill English who brought those visions into the world.
furious scribbling in the audience as everyone takes notes
I'm so glad we're finally getting away from this paradigm after four decades. For example, the iPhone notepad (and now Mac TextEdit) doesn't wait for a cue from the user to write the few dozen bytes of new input to persistent storage.
Not saying the iPhone does anything in a bad way (I have no clue) - but I've hardly ever been bitten by losing unsaved work, because I deliberately hit ctrl-s very often.
This is also a place where the desktop metaphor (with paper documents) around which most WIMP systems are built exposes computer details and fails to stick to its metaphoric roots. When I scribble something on paper it's there and I don't have to consciously remember to somehow commit a transient state of my scribbling to paper explicitly.
Good software doesn't lose anything ever, but even with "average" (i.e., unacceptably bad if you think about it critically) tools I've had sudden power cuts where I lost just a few words because of this habit.
That is exactly how I learned too. Years later, ctrl-s is still a reflex action.
Thank you :-)
You might like it.
In a Boing Boing article (which also, incidentally, calls her presentation a "tour-de-force"), her points were summarized in these quotes:
Hard-to-parse protocols require complex parsers. Complex, buggy
parsers become weird machines for exploits to run on. Help stop weird
machines today: Make your protocol context-free or regular!
Protocols and file formats that are Turing-complete input languages
are the worst offenders, because for them, recognizing valid or
expected inputs is UNDECIDABLE: no amount of programming or testing
will get it right.
A Turing-complete input language destroys security for generations of
users. Avoid Turing-complete input languages!
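As a toy illustration of that advice — a hypothetical mini-protocol (not one from the talk) whose grammar is regular, so a single compiled regex (a finite automaton) decides validity in linear time before any processing happens:

```python
import re

# Hypothetical mini-protocol: one or more "key=value;" pairs, where keys
# are lowercase ASCII letters and values are digits. Because the grammar
# is regular, the recognizer is a finite automaton with no parser state
# for an exploit to drive.
RECOGNIZER = re.compile(r"(?:[a-z]+=[0-9]+;)+")

def is_valid(message: str) -> bool:
    # Recognize the *whole* input before acting on any part of it.
    return RECOGNIZER.fullmatch(message) is not None
```

The point of "recognize first, then process" is that malformed input is rejected wholesale instead of being half-consumed by an ad hoc parser.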
A more concise presentation containing just the important/interesting bits would have been less painful to watch and probably a lot easier to prepare for.
That said, the Q&A period at the end was good. She seemed a lot more comfortable there, and there was a lot more interesting and informative content than in the presentation proper.
 - http://boingboing.net/2011/12/28/linguistics-turing-complete...
That said, I thought the argument against length fields was somewhat weak. But maybe I'm misunderstanding the context; the question at the end was not answered satisfactorily. A protocol with a length field is most certainly deterministic, and if you go the other way and use a delimiter, escaping/encoding is the only way you can carry arbitrary user data. I would argue the length field is miles better. If someone injects bytes into your stream that match the protocol, your recognizer isn't going to save you, just as it won't save you from someone rewriting the length field.
What this talk seems to argue for is making the language simpler (e.g. context-free), so you can validate that the semantics are working as intended, but transferring arbitrary blobs of data is always going to be an issue as long as people enter random musings in text boxes that are rendered by Turing complete software. :-)
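To make that trade-off concrete, here is a sketch (a hypothetical framing scheme, not from the talk) of delimiter-based framing with escaping — note how much more machinery the decoder needs than a plain "read N bytes":

```python
# Hypothetical delimiter-based framing: messages end with '\n', and a
# literal '\n' or '\\' inside the payload must be escaped. The decoder
# has to walk the bytes one at a time and track escape state -- this is
# the complexity (and fragility) a length prefix avoids.
def encode(payload: bytes) -> bytes:
    return payload.replace(b"\\", b"\\\\").replace(b"\n", b"\\n") + b"\n"

def decode(frame: bytes) -> bytes:
    assert frame.endswith(b"\n")
    out = bytearray()
    body = frame[:-1]
    i = 0
    while i < len(body):
        if body[i:i+1] == b"\\":           # escape introducer
            nxt = body[i+1:i+2]
            out += b"\n" if nxt == b"n" else nxt
            i += 2
        else:
            out += body[i:i+1]
            i += 1
    return bytes(out)
```

Every byte of payload costs a branch in the decoder, and a missed edge case (say, a trailing lone backslash) silently corrupts data — exactly the kind of bug the talk's "weird machines" grow out of.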
The fact that it is trivial to maliciously craft the length field makes it cheap for an attacker to try to exhaust receiver memory, overflow buffers, or make DDoS attacks more effective. If you use a delimiter, the attacker at least has to spend the required bandwidth to exhaust those resources.
I suspect that if your protocol specification bounds the length field to some finite amount, then your language can be classified as a regular language for verification purposes, just with a FSM branch for each possible value of the length field.
A length field doesn't mean that you have to pre-allocate that amount of memory. Never do that! Robust implementations use the length field only as a hint, as a hidden delimiter, and allocate memory as the data comes in.
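A sketch of that idea (hypothetical 4-byte big-endian framing; the cap value is an assumption): the declared length is checked against a hard limit, and the buffer grows only as bytes actually arrive, so a forged length can't force a huge up-front allocation.

```python
import io

MAX_FRAME = 1 << 20  # refuse anything claiming more than 1 MiB (assumed cap)

def read_frame(stream: io.BufferedIOBase) -> bytes:
    # Hypothetical framing: 4-byte big-endian length, then the payload.
    header = stream.read(4)
    if len(header) != 4:
        raise ValueError("truncated header")
    length = int.from_bytes(header, "big")
    if length > MAX_FRAME:
        raise ValueError(f"declared length {length} exceeds cap")
    buf = bytearray()
    while len(buf) < length:
        # Grow incrementally as data arrives; the length is a hint,
        # not a pre-allocation request.
        chunk = stream.read(min(4096, length - len(buf)))
        if not chunk:
            raise ValueError("truncated payload")
        buf += chunk
    return bytes(buf)
```

A forged length either trips the cap immediately or just makes the loop fail with "truncated payload" once the sender stops paying for bandwidth.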
That said, she does have a point. Though escaping is fraught with dangers, too (remember PHP in its early days? magic quotes, ugh).
She also talks about a general parser she's building for this problem, available on GitHub. Using it, she has built DNS and base64 recognizers, and I think they are working on recognizers for more protocols.
But I sort of thought this was common knowledge. I made a related comment here: https://news.ycombinator.com/item?id=4677998 (and before the Ruby Gems YAML exploit, which maybe was kind of prescient)
Incidentally, I felt she sort of brushed off computational complexity when one person brought it up. Complexity is just as important as computability/decidability, and it fits very well with her formal-language perspective (e.g., regular languages can be recognized in linear time and constant space).
Another thing is that she didn't give many concrete examples. The X.509 certificate example was one that seemed really good (I didn't look into the details). But I would have liked to see an instance of a protocol that was not meant to be Turing complete but is.
She gave the example of HTML5 + CSS, which was a good one. But I don't think people have accidentally written Turing-complete HTTP parsers or anything like that. How big a problem is it in practice? Is she really claiming that most security bugs fall into the category of being caused by language errors?
And, like another comment, I'm not sure I agree with disavowal of the length field. I'll have to think about that. I believe delimiting and escaping is more complex and thus more fragile. I also believe it's slower, despite what she says in the talk. We have to consider the tools we have now; maybe in some future world we'll get better formal language tools.
EDIT: http://en.wikipedia.org/wiki/Billion_laughs -- a related link to a design flaw in XML that stems from computational complexity, not computability.
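The billion-laughs attack nests entity definitions so each level expands to ten copies of the level below; a quick sketch of the arithmetic (no XML parser involved) shows why a document of a few hundred bytes blows up to a billion expansions:

```python
# Billion laughs: entity &lol9; is defined as ten &lol8;, which is ten
# &lol7;, and so on down to the base entity "lol". Computing the
# expansion count without building the string shows the exponential
# blow-up a naive parser would materialize in memory.
LEVELS = 9   # lol1 .. lol9
FANOUT = 10  # each entity references the previous one ten times

def expanded_atoms(levels: int, fanout: int) -> int:
    count = 1  # the base entity, e.g. "lol"
    for _ in range(levels):
        count *= fanout
    return count
```

Nine levels of tenfold fan-out yields 10^9 copies of the base string — gigabytes of output from a kilobyte of input, which is why it is a complexity problem rather than a computability one.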
Are We There Yet? - http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...
Simple Made Easy - http://www.infoq.com/presentations/Simple-Made-Easy
This one is so far the best, by the guru himself.
Great for repeated watching, I get something new from it every time I watch it. It is also great for recalibrating your point of view from amazement at whatever the current trend is in technology, to a more long-term outlook as well as encouraging higher standards for what is currently available.
I think this shift in outlook is important for technologists like us, because it's easy to become immersed in the day-to-day goings-on of tech and become myopic in a way. Using the invention of the printing press, the rise of literacy, and so on as reference points is a great way to reorient your attitude towards technology and what it can and should do.
If so I'll commit blasphemy and say he's wronger than wrong. I was especially offended that a curious person would display such ignorance in understanding the work of others.
Without going too much into detail: if he believes this, then he can't possibly understand Tim Berners-Lee's principle of least power. And the web would never have become ubiquitous, e.g. made the jump from PCs to mobile, in his bizarre universe.
The web as it exists today is a huge pile of hodge-podge conflicting standards, none of which are remotely close to offering the level of development performance for apps rather than pages that you can get with any desktop toolkit. Alan is fully behind the "idea" of the web, but is taking the long-term, and I believe correct, view that this cannot continue as it is and we would be well served by trying do something about it by conscious effort rather than evolving half-heartedly in that direction.
The principle of least power basically means that you should convey meaning across the web using the least expressive language or abstraction possible. The reason is that you want to give the client more choices in how to interpret the data/content, rather than forcing it to conform.
Google, or any other search engine, would not be possible without the principle of least power. Mobile phones wouldn't have been able to use the web if the semantics were over-fitted for specific devices.
I've heard what is essentially Kay's argument from other sources too. Some computer scientist at Adobe (may have even been one of the founders) lamented that the web wasn't built on PDF. PDF is a Turing-complete language that can describe any page layout.
Certainly after one has messed with CSS enough, you can see why he would say that (aside from the obvious favoritism). But it is a terrible idea, in that it overfits content to specific devices, is a vector for malware, etc. Once you are Turing complete, it's very hard to avoid these pitfalls.
Anyway, as I recall, Kay is making essentially the exact same argument as the Adobe guy, and the way he said it in that talk (Programming and Scaling, thanks below) displays astonishing ignorance for such a brilliant guy.
If Kay is right, then he should just create a web that is based on a VM. And then he can see how far it goes. It will never go anywhere because it will be broken by design.
That we are developing more VMs now for the web doesn't negate the point. The web should be layered; each kind of content should choose the least powerful channel to transmit it. If you need arbitrary code, then the least powerful abstraction is a Turing complete VM. But you don't need a Turing machine to display a blog post.
Most of the talk is focused on thinking in the long-term as well as working to decrease the complexity of the software edifices we are currently creating, before they become too large to improve in any revolutionary way.
Who knows, maybe the next web will be built on top of the web, the way the web was built on top of the internet. The point of his comments in the talk is that we should explicitly think about the design of what we are building before we rush off to pile up code.
My comment was aimed squarely at the refrain that we've been hearing from serious engineers for a long time about the utter unsuitability of the web for apps. I find this tiresome because technology doesn't win by being better, it wins by being adopted. Of course there's no technical reason we can't have a better basis than the web for internet apps. It's not about technical limitations, it's about the adoption curve. The perfect cross-platform GUI toolkit in 1992 would have still failed if it required you to write C++ to render your class schedule.
The fact that any person with almost no technical expertise can access an app from any computer they sit down at anywhere in the world without requiring installation is actually an amazing achievement that far outweighs the kludginess of the apparatus.
There's no concept of "line 4 of chapter 3 of book X" in the web, at least not down at its core. You have to shim it in with anchor tags or something similar, and as a result we don't have ubiquitous browser apps that let me highlight a snippet of text in a book, write some notes in the margin (example: "this is a reference to Hamlet's soliloquy"), and then on a whim search for all of Noam Chomsky's (or whoever's) notes on Hamlet, or perhaps search for all public notes on this book in the context of Hamlet. The lack of structure means that everything's this haphazard soup of markup, with the focus being so much on how it looks that structure of the text and the relationship between parts gets lost. I'm not sure the "semantic web" will make up for that lack, but it's probably a good example of how the "next web will be built on the bones of this one" (to paraphrase someone else's comment in this thread).
There is no technical reason why a proper GUI framework couldn't be as ubiquitous as HTML+CSS+JS+SVG+etc, except that historically it didn't happen that way. It's definitely not inconceivable that somebody will build Kay's VM vision as an abstraction layer on top of web technologies, so that at least we no longer have to deal with the mess in each and every web application separately. If such a VM becomes popular enough we can implement said VM directly instead of on top of existing web technologies.
You can't clean up a mess by adding layers, but you can replace layers (eg. you can add web sockets and start driving real time apps that way). Therefore incremental improvement is possible essentially forever.
> There is no technical reason why a proper GUI framework couldn't be as ubiquitous as HTML+CSS+JS+SVG+etc, except that historically it didn't happen that way.
Agreed. However, it's not just accidental either. The web is many things to many people. It's not that cross-platform GUI toolkits failed for lack of trying; the fact is that they could not reproduce the advantages the web had at the very beginning. Specifically, it was comically easy to create content, and even a browser, on any platform in those early days. Obviously, over the next decade web tech was severely abused to shoehorn in application functionality it was never intended for, and ended up being more complex in the long run. However, the number and utility of those dead-simple documents is absolutely massive. That, combined with a de facto standard for delivering server apps without installation, perfectly accessible to a layman (theoretically with any disability as well), was the wedge that GUI toolkits never came close to having.
Your idea about building a new abstraction layer on top of existing web technologies and then reimplementing it natively is the most credible idea I've heard about how the web could be supplanted, and I certainly wouldn't bet against that happening over the coming decades. However, I think the web should be a textbook example of how "worse is better" is not just a conciliatory platitude for disgruntled engineers, but an actual requirement for certain classes of technology to gain critical mass.
I do miss the "Write it according to the specs and publish it, and anyone on any browser, any OS, any machine, any display type, can access it" attitude. Some people really do need to re-learn the universality of the www, instead of fencing off content inside apps. (Why do newspapers insist on really poor online versions, and mobile versions, and app versions?)
Actually, there are more Windows installations than there are internet users.
(If you count mobile phones the numbers might skew a little, but the vast majority of those in third-world countries are not used for the web anyway.)
If we'd waited for perfect we'd never have Linux; we'd still be waiting on Hurd. If we'd waited for perfect we'd never have the internet and the web; we'd still be waiting on Xanadu.
I don't see how he can be wrong. The point he made wasn't that the things they did in the 70s were better than today's web or UIs. Just that people (and companies) should have invested in THAT line of work, and building on THOSE principles, instead of making a mess with reinventing the wheel, competing on BS differentiators and cutting off research.
I'd take my OS X or an Ubuntu desktop over any Xerox guy of the time, any day of the week. But I'd take the volume of research, innovation, insights and coherence that they had at the time over today's ho-hum idea of development too.
Despite having all those things 30 years ago, it took 30 years for them to become mainstream -- and far less has emerged since. And not for lack of CPU resources: even something like BeOS could offer a 2005 desktop experience in 1996.
The biggest innovation driver in the '00s was Apple. And even they mostly worked on market innovation (i.e., repackaging older niche stuff in a way that finally made people want to use it, instead of a jumbled mess), not on research innovation.
If their line of research is based on distributing content over VMs, then that's completely different than the web's design philosophy, and it's probably THE reason his ideas never were adopted while the web was.
Those design ideas are diametrically opposed. The design of the web won, for that specific reason. It was BETTER rather than worse, as he claims.
I was astonished during the talk that he never reflected on why his ideas in this area weren't successful while the web was. It is NOT because the designers of the web are idiots or amateurs. That is specifically what he claims.
That is: it's not that the VMs lost and the web won.
It's that the web started from something that wasn't adequate for what we wanted, and over all these years it has been adding all kinds of kludgy, warty, and half-baked implementations of VM features.
In essence, instead of designing a proper web-as-VM from the start (and slowly evolving it as the capabilities of networks and CPUs increased), we started with Berners-Lee's document-based thing, soon found out it was barely enough, and discovered we also had to have video, sound, app capabilities, process isolation, maybe a common bytecode, networking, real-time, etc.
I.e., we started from dumb documents with links, when what we wanted was a VM. And now we have twisted the dumb document model to sorta kinda look like a VM.
How many times have you heard that "the browser is the new OS" and such?
So it's Greenspun's tenth rule, only instead of Lisp we build an ad hoc, bug-ridden, clumsy version of the ideas of Alan Kay, Doug Engelbart, etc.
Anyway, Kay basically attributed it to ignorance and stupidity on the part of TBL and the web's designers, which I think is incredibly arrogant, especially since he is wrong.
You didn't really address the arguments above -- you're basically saying "it would have been nicer and cleaner if we started with a VM".
I can sort of see how it would have been nicer in some theoretical world, just like using PDF/PS instead of CSS would have been nicer (in a way).
But I am arguing that it would have never happened. If you started with a VM-based system, there would be technical problems that prevented it from being widely adopted.
Actually, Java was pretty much exactly that. It was supposed to be a VM that ran anything, and disenfranchised Microsoft's OS. And we know JS is more popular than Java, despite way more marketing muscle behind Java in the early years.
It's already been tried. I'd argue that it wasn't just beyond the abilities of Java's designers, but actually impossible (this is as much a social problem as a technical one).
You make the vague statement as Kay does that "we should have started with a VM", but it would be impossible to come up with a specific VM design (actual running code) that actually worked as well and as widely as the Web does.
The design-by-committee of the W3C has resulted in overcomplicated yet at the same time underpowered designs. The model where an ivory tower mandates new things from the top down rarely works well. How many man-years does it take to implement a new browser? It's basically an impossible task unless you have millions of dollars to spend.

A much better model for innovation is one where multiple competing technologies get implemented and the best one wins organically. The web as a VM would have enabled that. You would no longer need a committee that mandates HTML5: it could be implemented as a library, and if people liked it they would use it.

The W3C has a chicken-and-egg problem: before it standardizes something, there is not much experience with the features it standardizes, so it's hard to do good designs. On the other hand, once you standardize something, you're stuck with it; it can (practically) never die. If you have multiple "HTML5 libraries" then it's not a problem to try something out and pick the best thing. The best solution can win, and the others can slowly fall out of use.
I also don't agree in 2013 that moving say HTML5 to a VM is a good idea. There's nothing stopping anyone from releasing a VM now. You could release a new VM and an application platform. I mean that is essentially what, say, Android is. But even Android clients will always be a subset of all web clients.
I think people proposing this mythical VM don't have a clear understanding of what VMs are. They are fairly specific devices. Do you think Dalvik, something like v8, or the JVM, or .NET, can form the basis for all future computing? It's an impossible task for a single VM. Even Microsoft is taking more than 10 years to move a decent amount of their own software to their own VM, on a single platform. All VMs necessitate design decisions that are not right for all devices. Even Sun had diverging VMs for desktop and mobile.
You don't need to start from scratch and force everybody to adopt, W3C style. You can build the initial version as an abstraction layer on top of web technologies, so that only the VM implementors have to take care of hiding the mess, and application developers can build on top of that. With current improvements in JS VMs and things like WebGL this is slowly becoming feasible. If that is successful, you can implement a high performance version natively.
This is inevitable. We won't be using HTML+CSS+JS in 100 years, other than for archaeological purposes. The question is how soon will it happen.
Do you think plain text will exist in 100 years? If so, then it's not that much of a stretch to say that HTML will. It astonishes me that people think that only code, and not data, will be transported over networks. That seems to be what you are claiming -- that it's preferable to transmit code rather than data in all circumstances?
The argument isn't symmetric because I fully believe that VMs are necessary for the web. I just don't agree with Kay that the designers of the web are idiots (he really says this) because they didn't start with a VM. VMs will come and go as hardware and devices and circumstances change. Data has much more longevity; it encodes fewer assumptions.
Assumptions that are no longer true are generally the reason a technology dies. It is pretty easy to imagine a VM (unknowingly or not) encoding preferences for keyboard and mouse input; that technology would have died with the advent of touch screens. Likewise, a VM that provides affordances for touch screens will likely be dated in 10 years when we're using some other paradigm.
HTTP and HTML are foundational to the web. They are basically the simplest thing that could possibly work. You can reimplement them in a few days using a high level language. They will be around for a LONG LONG time.
The layered architecture of the web is absolutely the right thing. Use the least powerful abstraction for the problem, which gives the client -- which necessarily has more knowledge -- more choices. You could distribute all text files as shell scripts, but there's no reason to, and a lot of reasons why you shouldn't.
Currently the features are mandated by the W3C, then implemented by all browser vendors. The browser vendors send out updates to all users. Instead what will happen is that some future version markup language that's better than the then current HTML will be distributed as a library, running on top of the VM. This way if you are a site owner, you can immediately start using it. You do not need to wait until (1) the W3C recognizes that this markup language is a good thing and standardizes it (2) the browser vendors have implemented it (3) your visitors have updated their browser. You simply include the library, and start using the new markup language.
> HTTP and HTML are foundational to the web. They are basically the simplest thing that could possibly work. You can reimplement them in a few days using a high level language.
A few days?! This is absolute nonsense. You can't even read the spec in a few days, let alone all the specs it depends on, like PNG, JPEG, etc. Maybe you can implement some tiny subset of HTML in a few days, but the whole thing is massively complicated. In comparison, a VM is far, far simpler.
100 years is a very long time. The web is 23 years old.
Plain text has existed for 50+ years; I'm sure it will exist in 100. I'm pretty surprised you don't think so. Actually Taleb's Antifragile talks about this exact fact -- things that have stood the test of time will tend to stick around. For example, shoes, chairs, and drinking glasses have been around for thousands of years; they likely will be around for thousands more. An iPad has maybe another decade. HTML has already stood the test of time, because it has gone through drastic evolution and remained intact.
I'm talking about HTTP and HTML 1.0 -- they are conceptually dead simple, and both specs still use the exact same concepts and nothing more. I don't know if HTML 1.0 had forms -- if it did not, then you could certainly implement HTTP + HTML in a couple of days. I'm talking about something like Lynx -- that is a valid web browser that people still use.
Lynx can still view billions of web pages because the web degrades gracefully, because semantic information is conveyed at a high level. The problem with VMs is they don't degrade. Suppose everyone gets sick of CSS. You can throw out all your CSS tomorrow, and write MyStyleLanguage, but your HTML will still be useful. If you encode everything in a VM, then the whole page breaks. It's all or nothing.
An analogy is that HTML vs VMs is similar to compiler IR vs assembly language. You can't perform a lot of optimizations on raw assembly code. The information isn't there anymore; it's been "lowered" in to the details of machine code. Likewise the client is able to do interesting things with non-Turing languages, because it can understand them. Once it's lowered into a VM language, the client can't do anything but slavishly execute the instructions. The semantic info that enables flexibility is gone by that point.
If you think markup will still exist in 100 years, then it's not too much more of a claim to say that markup will be transmitted by web servers and browsers. Do you agree with that? If that is true then TBL is not an idiot. Alan Kay's claim is basically ridiculous. A weaker version of it is still untrue.
I would say that in 100 years, HTML will still exist -- i.e. a thing that lets you separate plain text in to paragraphs, make it bold, etc. In contrast, we will have already gone through dozens of more VMs like the JVM, Flash, JS, Dart, etc. Certainly the JVM and Flash will be gone long before that point. They will have lived and died, but HTML will still be there.
The fact that some things stood the test of time has no predictive value. It's just confirmation bias. There are plenty of things that existed for far more than 50 years that did not stand the test of time. The horse cart, the abacus, and the spear for example. I believe that as IDEs and other programming tools become more sophisticated, it starts to make more and more sense for them to work directly on abstract syntax trees rather than plain text strings. But this is another discussion.
I'm well aware that the W3C did not originally create the standards (including the original HTML), but it's how they evolve, accumulate cruft, and how we get stuck with them. Standardization can only ever accumulate complexity on the web. Things cannot easily, or at all, be removed. With an organic model, as soon as things fall out of use they disappear from consideration.
Perhaps you can implement HTTP and HTML 1.0 in a few days if you are a hero programmer. I'm not sure what your point is. We are living in 2013 not 1989.
Yes, the client can't do as much high-level optimization, and that's a good thing. That kind of optimization belongs at the library level, not at the client level (and certainly not duplicated several times by every browser vendor, each with a slightly different and inconsistent implementation).
I agree that markup will be transmitted in 100 years, and I'm pretty sure so does Alan Kay. He, like I do, simply believes that the building blocks should be inverted. The thing that interprets the markup doesn't belong hard coded in the client. The client should simply be universal (i.e. Turing complete) and the thing that interprets and renders the markup should be built on top of that.
I agree that in 100 years we can still separate text into paragraphs and make things bold, but there are plenty of things that let you do that which are not HTML, and separating text into paragraphs and making things bold isn't the main point of HTML anyway (certainly not in 2013). So this is no reason why HTML will exist in 100 years.
One quote is, "this is what happens when you let physicists play with computers" -- but that's not all. I am paraphrasing "idiot", but he certainly heaps scorn on them for not understanding something obvious and fundamental, when he is the one who doesn't understand something obvious and fundamental.
Kay specifically calls out HTML as a "step backward" for this reason (although he is simply wrong as I've said). He wants a web based entirely on executable code. Yes, it sounds dumb prima facie but that's what he has said consistently over a long period of time.
I think Kay is one of the most interesting speakers in CS; he's a learned man. But I didn't find the content that great. If you want some thoughts on software architecture, programming languages, with a lot of great cultural references outside the field, I much prefer the thoughts of Richard Gabriel:
If you search for his work on "ultra large scale systems", it is a lot more thought provoking than what Kay offers, and it is a lot more specific at the same time.
When watching the Kay talk, I was struck that he seems to be lumping every "good" in software engineering under the term "object oriented". Virtual machines are objects. Servers are objects. Numbers should be objects. All abstraction is objects. It was not really a useful or enlightening way of thinking about the world.
I know OOP has a different definition than what he intended with Smalltalk, but he didn't invent modularity. For example, Unix is highly modular but not object-oriented, and it predates his ideas.
Abstract: "Software engineering as it's taught in universities simply doesn't work. It doesn't produce software systems of high quality, and it doesn't produce them for low cost. Sometimes, even when practiced rigorously, it doesn't produce systems at all.
That's odd, because in every other field, the term "engineering" is reserved for methods that work.
What then, does real software engineering look like? How can we consistently deliver high-quality systems to our customers and employers in a timely fashion and for a reasonable cost? In this session, we'll discuss where software engineering went wrong, and build the case that disciplined Agile methods, far from being "anti-engineering" (as they are often described), actually represent the best of engineering principles applied to the task of software development."
"The Atomic Level of Porn", by Jason Scott — a history of low-bandwidth pornography, from ham radio to telegraphs to BBSes.
How to build your own X-ray backscatter imager (aka "airport body scanner") by Ben Krasnow
"The Secret History of Silicon Valley" by Steve Blank. Other, more recent versions of this talk exist, but the audio quality is poor in them.
Hey, that's me! Very cool and VERY humbling to be mentioned in such esteemed company. I tried to cram way, way too much into 50 minutes...
"You and your research" by Richard Hamming:
"How to design a good API and why it matters" by Joshua Bloch:
Google TechTalk on Git by Linus Torvalds:
All talks ever given by Alan Kay, for example:
This is a talk about the practical realities of integrating with APIs over the lifetime of a project. In particular, it presents an insightful list of pitfalls API designers often fall into that hamper integration, and it suggests ways to avoid those pitfalls.
Sadly, a decade or so later, many of us are still making the same basic mistakes. If this talk were better known, perhaps we wouldn’t be, so it gets my vote.
I've watched him give talks in Klingon.
I've watched him explain how to build a supercomputer using laser printers.
And who can forget that classic talk, "Temporally Quaquaversal Virtual Nanomachines"?
My favorite was his talk on SelfGOL, and I don't even like obfuscated code! (See http://libarynth.org/selfgol for an explanation of what SelfGOL is.)
Dr James Grime / Numberphile - Encryption and HUGE numbers
Les Hazlewood - Designing a Beautiful REST+JSON API
Definitely worth a watch
Incredible talk for both the content and the form! There is so much we could learn from him.
Otherwise, at some time I really enjoyed Guy Steele's talks while he was working on Fortress, e.g.
How to Think about Parallel Programming: Not!
They also redefine what an OS and the Web (HyperCard style) mean by removing as much accidental complexity as possible.
"Alan Kay: How Simply and Understandably Could The "Personal Computing Experience" Be Programmed?"
"Alan Kay: Extracting Energy from the Turing Tarpit" http://www.youtube.com/watch?v=Vt8jyPqsmxE
"Alan Kay: Programming and Scaling"
"Ian Piumarta - To trap a better mouse"
Papers here http://vpri.org/html/writings.php
Notice most of the talks people are referencing are about tech philosophies -- a vision/perspective -- not a particular technology.
He has a series of lectures that explain physics as understood by the modern theoretical physicist. He starts with classical mechanics, then goes on to quantum mechanics, special & general relativity, statistical mechanics, and cosmology. The prerequisites are only high school mathematics; he explains the more advanced mathematics as he goes along. The physics that he teaches is condensed, but not dumbed down. It's really how a working theoretical physicist understands physics, "the real deal" as he says. Beware that it's very much a theoretician's viewpoint.
Van Jacobson did some of the most important work on TCP/IP, notably its congestion control; here is how the Internet would work if he were in charge. If this is ever implemented, it will change everything.
The Go talk "Concurrency is not parallelism, it's better" is good too.
Though related to Clojure, they make you think about development in different ways.
By Andrew Tanenbaum =^.~=
LXJS 2012 - James Halliday - Harnessing The Awesome Power Of Streams: http://youtu.be/lQAV3bPOYHo
miniKanren is an embedding of logic programming in Scheme. In this interactive presentation, William E. Byrd and Dan Friedman introduce miniKanren, from the basic building blocks to the methodology for translating regular programs into relational programs, which can run "backwards". Their examples are fun and convincing: a relational environment-passing interpreter that can trivially generate quines, and a relational type checker that doubles as both a generator of well-typed terms and a type inferencer.
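The "runs backwards" idea can be illustrated even outside Scheme. Here's a loose analogy in Python (not miniKanren itself, which relies on unification and interleaving search): treat a relation as something that enumerates every satisfying binding, so a single definition answers queries in any direction. The `addo` name and the finite domain are assumptions of this sketch, not part of the talk.

```python
def addo(domain):
    """A toy 'relation': yield every (x, y, z) in the domain with x + y == z."""
    for x in domain:
        for y in domain:
            yield (x, y, x + y)

nums = range(6)

# "Forward" query: what is 2 + 3?
print([z for (x, y, z) in addo(nums) if x == 2 and y == 3])   # [5]

# "Backwards" query: which pairs sum to 5? The same relation answers both.
print([(x, y) for (x, y, z) in addo(nums) if z == 5])
```

Real miniKanren gets the same effect without brute-force enumeration, which is what makes relational interpreters and type checkers practical.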
To wit, "Sketchpad" was the first GUI program.
* Interactive graphics
* Constraint-based layout
* Object Oriented Programming
* Pen-based input
Sutherland wrote "Sketchpad" as part of his Ph.D. thesis in 1963.
Here are some links on this:
Alan Kay describing "Sketchpad"
Wikipedia Entry for "Sketchpad"
Spoiler alert: Everything's in Redis!
http://www.youtube.com/watch?v=agw-wlHGi0E (features pg)
Stanford Seminar - Google's Steve Yegge on GROK (large scale source code analysis)
This class is still, to my eyes, the best I have ever seen on this topic.
He has given another one recently on time and computing which I have not yet seen, but which is promising too.
I really liked Google's MapReduce lectures:
Wish Google would update the quality and fix the link to the slides - here's a copy on SlideShare:
You can find the links to the rest of the videos and slides from there; there are 7 total, I think.
Captcha creator on how we can trick humans into doing useful work via "games with a purpose".
It's a technical view of the web platform and the problems attached to it (especially when we try to push its boundaries), and it questions why those problems aren't being addressed by the standards process and browser vendors.
Looking back, it's amazing how far the web platform has come, but also how many of these problems still plague it.
Truly an amazingly great talk and worth watching all the way through (even if you only peripherally care about ANNs).
But I think the 3 minute talks we host at home once a month or two have among them the 3-5 best I've heard (we just started taking videos, but they're in Hebrew).
I highly recommend hosting your own 3 minute talk session. It's really easy and the format just inherently leads to amazing talks.
Regardless of one's opinions on microkernels vs. monolithic kernels, it's a very interesting but accessible talk for those interested in lower-level systems and fault-tolerant architectures.
Jean-Pierre Serre's "Writing Mathematics" http://www.dailymotion.com/video/xf88b5_jean-pierre-serre-wr...
It might not be the best I've heard, but it's the one I enjoyed the most:
Anything from Cliff Click is quite good. This guy has a very deep understanding of compilers, virtual machines, CPUs, and their interactions.
A JVM does that? http://www.youtube.com/watch?v=uL2D3qzHtqY
Java on a 1000 Cores http://www.youtube.com/watch?v=5uljtqyBLxI
Too bad the author lost the video files (everyone recorded their screen); other videos from the virtual conference are available: http://www.mvcconf.com/videos