I invented the term Object-Oriented and I can tell you I did not have C++ in mind.
Whether or not you share the disdain for C++ that is implicit in this quote, I believe it is important to understand the difference between the way OO was originally conceived of and the way C++ interprets it. I believe the best way to understand the difference is to look at object-variable binding. Smalltalk and almost every other OO language that followed it uses reference semantics: the assignment
x = y
causes the variable x to release its binding to whatever object it is currently referring to, and to refer to the object that y is referring to instead. C++, on the other hand, is pretty much the only OO language that uses value semantics: the intended meaning of the assignment
x = y
is to copy the state of the object to which y is referring over to the object to which x is referring. The older and wiser I get, the more I am inclined to believe that this attempt to marry OO and value semantics was an experiment that has failed. Even if you do not believe this, you'd have to admit that the collective amount of time and effort that the C++ community has spent on dealing with the assignment operator is simply staggering. (I would have made that last statement even two years ago, and now we have rvalue references.)
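To make the two meanings concrete, here is a minimal sketch in Python (which, like Smalltalk, uses reference semantics); the Counter class is just an illustrative stand-in, and the explicit copy only approximates what C++'s default assignment does:

    import copy

    class Counter:
        def __init__(self, n):
            self.n = n

    # Reference semantics (Smalltalk and most other OO languages):
    # 'x = y' rebinds x to the object y refers to; nothing is copied.
    x, y = Counter(1), Counter(2)
    x = y
    x.n = 99
    print(y.n)  # 99 -- x and y now name the same object

    # Value semantics (what 'x = y' means for C++ objects by default),
    # approximated here with an explicit copy of y's state:
    x, y = Counter(1), Counter(2)
    x = copy.copy(y)
    x.n = 99
    print(y.n)  # 2 -- y's state is untouched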
It's a really tough dichotomy: people who invent world-changing things very rarely have the skills (or the desire) to spend years polishing and shipping. I think almost all of academia is a great example of this, where invention is praised over innovation.
I'm not saying this is bad. Researchers excel at doing research, and should focus on that domain. But papers and conference talks don't change the world. It takes real products, which are the result of substantial non-idea work. Even Doug Engelbart's famous demo took decades to actually build at scale.
One of the most important things about PARC may have been that it was a pairing of research-like projects with real world engineering accomplishments. For several years, I've felt like Google is on the verge of this as well. Glass may be their first shipping experimental product, so we'll see.
It'd be interesting if Alan took a research position at Google. Hal Abelson (co-author of SICP, founder of Creative Commons, prof at MIT, etc.) spent a sabbatical there and shipped App Inventor for Android, which essentially took ideas from the Scratch programming environment and made them work for Android. Something like Smalltalk on top of App Engine could be really awesome.
When the Mac first came out, Newsweek asked me what I [thought]
of it. I said: Well, it’s the first personal computer worth
criticizing. So at the end of the presentation, Steve came up
to me and said: Is the iPhone worth criticizing? And I said:
Make the screen five inches by eight inches, and you’ll rule the world.
-- Alan Kay
And before you write it, I'll save you the time. Here's the typical HN response to this question:
Just go get an EC2 instance and apt-get a bunch of shit and
then figure out how to use a CLI to push your code there and
add framework dependencies and learn Postgres and configure it
for DNS and oh and you probably need a credit card. Not that
hard, srsly guys. Also, Dropbox is just git with a shiny frontend.
Lego Mindstorms was proprietary and I loved the shit out of that -- probably was the primary reason I dove into science and engineering. And to your point, everything I built with Lego was (literally) locked into the Lego world. But by the time it actually mattered, I had moved to a legit machine shop. But those plastic blocks laid the foundation for construction. That's what I really want Alan Kay's work to do: lay a new foundation for teaching people how to think about computation and symbolic manipulation with computers.
"In 1968 — three years before the invention of the microprocessor — Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels — an impressive improvement over their earlier 4 pixel by 4 pixel display. Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad. And then he chased that carrot through decades of groundbreaking research, much of which is responsible for the hardware and software that you're currently reading this with."
Alan Kay is the goddamn Carl Friedrich Gauss of Interaction Design (Leonhard Euler was already taken by Douglas Engelbart).
In WWII Douglas Engelbart was trained as a radar operator. After the war he was exposed to computers (driven by punch cards and tape, and emitting printed output) and immediately thought, "these should interact via cathode ray tubes".
It seems to me that anyone with imagination who could grok what a computer was immediately imagined the computer being embedded in any information device they could think of -- whether it's a watch, a notepad, a telephone, or the human brain.
The "light pen" was invented in 1952. Do you think the inventor's "vision" was that it be part of a monstrously big, complex, and expensive piece of hardware? (Do you think he/she hadn't read "Foundation"?)
Alan Kay has done many amazing things; we don't need to fight about this specific thing.
In fact, you could say it was his push for the Dynabook that helped usher in the personal computer as well as WYSIWYG, etc.
Many academics do desire to build product. But would they get funding? Maybe someone in academia could chime in.
We jump on what's popular for the sake of being "cool" as well. We get hung up on minor syntax differences between languages. Or worse, in some cases we ignore the real benefits of a syntax simply because we are too lazy to take the week it takes to retrain ourselves to learn it. We fight about editors! Editors!
If all of that weren't bad enough, the more ignorant we are, the more aggressively we seem to champion a given technology and attack others without the backing of any serious thought. We become so attached to our favorite tools that they end up beyond criticism of any form.
Well... you get it...
Well, it's still the language from the year 1980. The very fact that you judge a 1980-vintage system as "not bad" compared to existing alternatives is telling in itself. Alan is working on a 2010 version even as we speak. :-)
* Laser Printers
Yes -- invented at Xerox PARC.
* Object Orientated Programming / Smalltalk
No. OOP was invented in Norway (Simula) / Yes
* Personal Computers
Um... really? No.
* Ethernet / Distributed Computing
Yes -- Xerox PARC. / Not really
* GUI / Mouse / WYSIWYG
No (Engelbart et al) / No (Engelbart et al) / Yes
There is no question that Alan did invent the term "Object Oriented Programming" (and was awarded a Turing Award for his work) and also invented Smalltalk, which was famously inspired by Lisp, Simula, Logo, and Sketchpad.
The GUI, overlapping windows on a bitmap screen, the WYSIWYG editor, etc. -- i.e., the personal computer as we know it today -- was first created at PARC, and it later influenced Apple, who commercialized it.
Kay's Turing Award citation was "For pioneering many of the ideas at the root of contemporary object-oriented programming languages," which might or might not include inventing "Object Oriented Programming" -- and, as it happens, it does not.
Key quote from the article:
"While working on FLEX, Kay witnessed Douglas Engelbart’s demonstration of interactive computing designed to support collaborative work groups. Engelbart’s vision influenced Kay to adopt graphical interfaces, hypertext, and the mouse."
Incidentally, Doug Engelbart is still alive and a Turing award winner:
"Engelbart slipped into relative obscurity after 1976 due to various misfortunes and misunderstandings. Several of Engelbart's best researchers had become alienated from him and left his organization when Xerox PARC was created in 1970."
History of OOP including Simula 67:
Go look at the "Mother of All Demos" -- http://en.wikipedia.org/wiki/The_Mother_of_All_Demos -- which introduced the mouse, a MacDraw-like drawing program, and a mouse-driven word processor. This invention of the GUI not only predates Xerox PARC; students from the lab that did this work went on to work at Xerox PARC. (And the Xerox PARC mouse was a clumsy, expensive, and unreliable device very similar to the crude device from the older demo -- the modern mouse was actually invented by an engineer contracted by Steve Jobs.)
"the personal computer as we know it today was first created at PARC"
The personal computer as we know it today was invented by a lot of people over a period of time, although it is usually ascribed to Apple, Altair, IMSAI, etc. -- not Xerox. Xerox added a GUI to their $10,000 "personal computer", and that is a feature of today's personal computer, but so are scalable fonts (btw: that's Donald Knuth, who deserves as much fanboi love as Alan Kay) and web browsers, neither of which were created by Xerox.
Most of the people involved with designing the Lisp Machines ultimately left MIT, however, to start companies to manufacture and sell them. The most prominent of these was Symbolics. There was also LMI.
And yes, Lisp Machines were very expensive personal computers. But computers for "professionals" were generally pretty expensive at the time. A timesharing computer that could handle 20 or 30 users could easily cost $1 million.
How much did the mouse for a Lisp Machine cost? IIRC, around $250. Economies of scale, and all that.... Yes, Lisp Machines always had mice.
Btw, it was the founding of Symbolics and LMI that prompted Richard Stallman to become a free software radical. He stayed at MIT, and spent a lot of his time porting back to MIT's Lisp Machine Lisp the improvements made to it by Symbolics. This is why Stallman came up with the GPL. He was galled that Symbolics had hijacked MIT's open code and had made it proprietary.
The mouse was a three button mouse. The way it worked certainly made sense to me at the time, but it was made for nerds. Apple certainly did a ton of great work to make personal computers usable (and affordable) by normal people.
At that time Xerox was also using their own Lisp Machines internally -- Alan Kay knew those. They were later also sold as a product.
E.g. https://www.youtube.com/watch?v=AYlYSzMqGR8 (I only watched the first minute or so)
Then Andy Hertzfeld or Bill Atkinson implemented clipping in such a way that most drawing commands became no-ops when they were outside a clipping region (if you set a clip rect and then call "fillrect" outside it, you do nothing, right?), allowing highly performant overlapping windows. There's a bit somewhere (folklore.org? Hackers? can't remember) where they showed this to Xerox folk who couldn't believe how well it worked.
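A toy sketch of the idea in Python (nothing like the actual QuickDraw code; Rect and fill_rect are made-up names): every drawing call first intersects its target with the clip rect, so draws into fully obscured regions fall through an early return and cost almost nothing:

    class Rect:
        def __init__(self, left, top, right, bottom):
            self.left, self.top, self.right, self.bottom = left, top, right, bottom

        def intersect(self, other):
            # the overlapping area of two rects, or None if they don't overlap
            r = Rect(max(self.left, other.left), max(self.top, other.top),
                     min(self.right, other.right), min(self.bottom, other.bottom))
            return r if r.left < r.right and r.top < r.bottom else None

    def fill_rect(clip, rect, paint):
        visible = clip.intersect(rect)
        if visible is None:
            return          # entirely clipped: the call is a cheap no-op
        paint(visible)      # only touch the visible intersection

    # A window completely hidden behind another gets an empty clip region,
    # so all of its drawing calls take the early return above.
    clip = Rect(0, 0, 100, 100)
    fill_rect(clip, Rect(200, 200, 300, 300), print)  # prints nothing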
I remember 15 years ago when Philip Greenspun (http://philip.greenspun.com) first introduced me to the idea that a Web service is like an object -- it may seem obvious today, but at the time it completely changed how I viewed the Web:
The challenge is in realizing that the Web service itself is an object. The object has state, typically stored in a relational database management system. The object has methods (the URLs) and arguments to those methods (the inputs of the forms that target the URLs). The engineering challenges of Web development are (a) coming up with the correct data model for the object state, (b) coming up with a correct and maintainable organization of URLs, and (c) defining the semantics of each URL. By the time an individual page is constructed, the engineering challenge is over and it doesn't really matter whether you build that script in a simple language (e.g., Perl or Tcl) or a complex powerful language (e.g., Common Lisp or Java).
Philip also discusses this idea in chapter 13 of his classic book, "Philip and Alex's Guide to Web Publishing" (http://philip.greenspun.com/panda/).
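Here is a minimal sketch of that framing in Python (all names are illustrative, and sqlite stands in for the RDBMS): the service is one object, the database holds its state, and each URL is a method whose arguments arrive as form inputs:

    import sqlite3

    class GuestbookService:
        def __init__(self, db_path=":memory:"):
            self.db = sqlite3.connect(db_path)  # the object's state
            self.db.execute("CREATE TABLE IF NOT EXISTS entries (msg TEXT)")

        # the "method" behind the URL /add?msg=...
        def add(self, msg):
            self.db.execute("INSERT INTO entries VALUES (?)", (msg,))
            self.db.commit()

        # the "method" behind the URL /list
        def list(self):
            return [row[0] for row in self.db.execute("SELECT msg FROM entries")]

        # URL -> method dispatch; a web server would call this per request
        def handle(self, url, **form_inputs):
            routes = {"/add": self.add, "/list": self.list}
            return routes[url](**form_inputs)

    svc = GuestbookService()
    svc.handle("/add", msg="hello")
    print(svc.handle("/list"))  # ['hello']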
But as Alan Kay points out, this RPC-type model still hasn't evolved to a point where you can pass around self-contained objects that are RCATWD ("real computers all the way down").
* On Software Engineering
* On Object Orientated Programming
* Tear it down and build something better
* On Messaging
* On LISP
* The Unknown side of Alan Kay
Here's my definition of a message-based service:
Here are some of the advantages:
Here is the difference between WCF RPC methods and a message-based ServiceStack service:
http://www.servicestack.net/files/slide-35.png (i.e. 6 RPC methods vs 1 Msg service)
Here is the difference between WebApi RPC methods vs an equivalent message-based ServiceStack service:
http://www.servicestack.net/files/slide-34.png (e.g. 5 RPC vs 2 msg ones)
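The slides are C#/ServiceStack, but the shape difference can be paraphrased in a toy Python sketch (illustrative names only, not ServiceStack's actual API): the RPC surface grows by one method per variation, while the message-based service takes a single self-describing request DTO:

    from dataclasses import dataclass
    from typing import Optional

    # RPC style: one method per operation/overload, so the surface area
    # grows with every variation a client needs.
    class BookingRpc:
        def get_booking_by_id(self, booking_id): ...
        def get_bookings_by_user(self, user_id): ...
        def get_bookings_by_date(self, date): ...

    # Message style: one self-describing message, one endpoint; new fields
    # extend the contract without adding methods.
    @dataclass
    class GetBookings:
        booking_id: Optional[int] = None
        user_id: Optional[int] = None
        date: Optional[str] = None

    class BookingService:
        def any(self, request: GetBookings):
            ...  # dispatch on whichever fields the message actually carries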
It's analogous to the idea of objects adopted by C++ and Smalltalk. With C++, method calls are essentially procedure calls with a specific signature that are dispatched based on a special parameter (i.e. this). It makes no sense to invoke a method that doesn't exist. In Smalltalk, you can send any message you'd like to an object, and the object can choose to deal with it in any way it wants, up to and including catch-all code for dealing with any kind of message.
CORBA and its ilk work with a protocol, and messages not conforming to the protocol are nonsensical. Computers on the internet can send any message to any computer, and each can deal with the messages in any way it wants. The internet is a network of message-passing objects.
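Python can mimic the Smalltalk side of this: __getattr__ plays the role of Smalltalk's doesNotUnderstand: as a catch-all for messages the object never declared. A toy sketch (the Proxy class is just for illustration; the C++ analogue, calling an undeclared method, would not even compile):

    class Proxy:
        def __init__(self, target):
            self._target = target

        def __getattr__(self, name):  # catch-all for unknown messages
            def forward(*args, **kwargs):
                print(f"forwarding message {name!r}")
                return getattr(self._target, name)(*args, **kwargs)
            return forward

    p = Proxy([3, 1, 2])
    p.sort()          # Proxy never declared sort(); it handles the message anyway
    print(p._target)  # [1, 2, 3]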
[...] felicitous context and process [...]
I am interested in what process he's referring to that PARC used.
I think software engineering as a discipline is too young to have decent processes figured out, as is evident from the takeover of the "agile/scrum" process (I don't mean this in a bad way, but before it there was either NO process or a bug tracker with dates in it).
In his talk he uses the example of Leonardo da Vinci, who, despite being one of the smartest people in existence, wasn't able to invent a motor for any of his machines -- a feat accomplished by a less capable Mr. Ford, who did it en masse.
Not learning from history is one of the reasons why the state of the industry is where it's at today.
"The Deep Insights of Alan Kay"
"Alan is the most important person alive in Computer Science""
is over the top.
And I have no regrets over the title "The Deep Insights of Alan Kay"...
Alan has generated a wealth of intellect, experience, and insight in his works that most IT people still don't know about -- many don't even know who he is. The objective of the post is to bring some of his meaningful and powerful ideas to the surface and summarize them so they're more palatable and accessible to more people -- I think the title accurately captures this.
Look into the special properties of LISP which Alan refers to as the Maxwell’s equations of software:
It's more powerful and extensible than most languages despite requiring orders of magnitude less effort to implement (see Peter Norvig's Lisp interpreter in 60 lines of Python, http://norvig.com/lispy.html, re-written in 40 lines of Ruby at http://stuff.thedeemon.com/rispy/).
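In the same spirit, here is a stripped-down sketch of the core read-eval loop in Python (far less complete than lispy; the links above have the real thing):

    import operator

    ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def tokenize(src):
        return src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(tokens):
        tok = tokens.pop(0)
        if tok == "(":
            lst = []
            while tokens[0] != ")":
                lst.append(parse(tokens))
            tokens.pop(0)  # drop the closing ')'
            return lst
        return float(tok) if tok.replace(".", "", 1).isdigit() else tok

    def evaluate(x, env=ENV):
        if isinstance(x, str):        # a symbol: look it up
            return env[x]
        if not isinstance(x, list):   # a number literal
            return x
        fn, *args = [evaluate(e, env) for e in x]
        return fn(*args)

    print(evaluate(parse(tokenize("(* (+ 1 2) 4)"))))  # 12.0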
It's one thing to build a minimal implementation where your main constraints are that it works and it's concise. It gets a lot messier once you start worrying about constraints like efficiency on real machines -- there are limits to how much you can abstract away the heuristics and special cases.
Right -- if you need it to be efficient, then mechanical sympathy starts creeping into your code-base. But a language should optimally be designed to capture human thought; that's the part that doesn't scale well or get solved by adding more machines.
The Nile/OMeta work is an example of using DSLs as one way to reduce the LOCs required: it captures the required top-level math functions for image processing and uses OMeta to convert them to runnable source code. DSLs might not be the best way forward in all situations, but investing in research into different ways to solve the problem can only help.
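As a toy illustration of that compression (nothing like Nile/OMeta's actual machinery; clamp and the spec syntax are made up): a one-line spec of a per-pixel transform is compiled into a runnable function instead of being hand-written in a general-purpose language:

    import math

    SAFE_FNS = {"clamp": lambda v: max(0.0, min(1.0, v)), "sqrt": math.sqrt}

    def compile_pixel_fn(spec):
        """Compile a spec like 'clamp(x * 1.2)' into a pixel transform."""
        code = compile(spec, "<dsl>", "eval")
        return lambda x: eval(code, {"__builtins__": {}}, {**SAFE_FNS, "x": x})

    brighten = compile_pixel_fn("clamp(x * 1.2)")
    print(brighten(0.9))  # 1.0
    print(brighten(0.5))  # 0.6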
"The whole thing is 1750 lines of code (about twice what it should be) and includes a garbage collector and user-level implementations of user-defined types and structures, single-dispatch object-oriented "messaging", and generic "multimethods" (all of which are needed by the compiler). The compiled binary code runs at about 30% the speed of aggressively-optimised C."
The current version of Maru seems to be about 9000 lines of code (almost half the entire complexity budget for STEPS!). I think this reinforces army's original point: "Sure, you can define a Lisp interpreter in very few lines of code. But to build a decent standard library and optimizing compiler/JIT, that is where the LOCs rack up."
Which also has a JS port http://tinlizzie.org/ometa-js/ so you can create new languages/DSLs and run them in your browser.
How does someone read so much material on Alan Kay and still manage to have "object orientated" stuck in their head? It's like putting together an overview of Edward Said's "Orientatilism".
That said, Kay did call it object-oriented programming, so it makes sense to stick with his name.
"Orientation" is the term for the process of orienting something. So a design process in which, say, user stories are turned into object-oriented designs could be called "object-orientation". But it does not follow that the result is "object-orientated" - we'd just call it "object-oriented", in the same way that something which has been through a transformation is not "transformated" but "transformed".
Being exclusively goal-oriented constrains the originality of the high-level ideas that are produced.
On the other hand, being constrained by specific goals (e.g., putting a person on the moon) can produce a lot of creativity, as people's efforts are focused on a narrow domain of problems to solve.
Some quotes from that interview:
"My interest in education is unglamorous. I don't have an enormous desire to help children, but I have an enormous desire to create better adults."
"Extracting patterns from today's programming practices ennobles them in a way they don't deserve."
"I like to say that in the old days, if you reinvented the wheel, you would get your wrist slapped for not reading. But nowadays people are reinventing the flat tire. I'd personally be happy if they reinvented the wheel, because at least we'd be moving forward. If they reinvented what Engelbart did we'd be way ahead of where we are now."
and so on...
HTTP is defensible, HTML also has its time and place.
But what it evolved into (HTML/CSS/JS) is pretty obviously a dead-end waiting to happen. The number of hacks we pile on one another for a "modern web-application" is nothing short of ridiculous.
Realistically we still live in the internet stone-age, helplessly piling up stones and wooden sticks...
Other than that, it's a nice collection of quotes and links. Good to see.
Here's a copy of it on Scribd:
"Lisp: greatest single programming language ever designed"
My previous favorite quote attributed to Alan Kay, which is not in TFA:
"I made up the term "object-oriented", and I can tell you I did not have C++ in mind."