Alan Kay on Messaging (1998) (c2.com)
136 points by stesch on May 3, 2015 | 61 comments



I remember this being posted in 1998, as people were debating the surge of Java relative to C++ or Smalltalk.

These were the twilight days when C and assembly were the only languages Real Programmers used; Perl, Tcl, or any scripting/interpreted language were considered toys for sysops and never meant for production; OO was treated with deep suspicion; and functional languages were considered crazy talk. I remember people in my company saying a particular C++ replacement program my colleague wrote was "too OO" and would execute 10x slower than required (when it was in fact 2x faster than the legacy version).

Point is, a lot of what Kay says is obvious now, but it wasn't in the 90s, and it REALLY wasn't in the 70s.

One sees new things every time one reads this. For example, Kay comes out as a supporter (of sorts) of functional programming principles by calling assignments (mutable state) a metalevel change.


1998?... More like the twilight days when Cobol still ruled the land and we thought (feared) that it would never die. Fortunes were being made consulting on the Y2K bug and patching the dodgy Cobol systems responsible.

I do agree, though, that OO was treated with suspicion and hostility. Only a tiny number of (us) mavericks were brave enough (or foolish enough) to propose using what our management saw as a "newfangled and risky" proposition.

Let's not even mention mucking about with Extreme Programming. ;)


Oh yes, COBOL was C's evil twin in that regard. It depended what circles you travelled in - engineering depts or IT depts.

I remember posting the c2 wiki pages for XP on my cube wall when XP first came out, and had a Gantt-chart oriented systems engineer tear them down as dangerous nonsense.


FORTRAN was my language of choice back then. Oh the memories. I'd still be using it if I stayed in the infrastructure industry.


> Point is, a lot of what Kay says is obvious now

It's obvious because he's not saying much, besides trying to redefine "function call" with "message passing" so that it makes Smalltalk look better.


In a function call the arguments are applied to the function after being evaluated (LISP's apply).

In Message Passing a message (which is just another object) is sent to the receiver object.

The crucial difference is that in a function call the function is already bound when the args are sent. With message passing, the args are sent to the receiver object, which can handle them there or pass them up its superclass hierarchy -- and it can even handle messages it does not recognize after exhausting the superclass hierarchy.

The Smalltalk blue book has the VM implementation of message sending, which is very straightforward.
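A rough Python sketch of that lookup (my own toy names, nothing like the blue book's actual VM code): the selector is looked up along the receiver's class chain, and only after the chain is exhausted does the doesNotUnderstand-style hook fire.

```python
# Toy model of a Smalltalk-style message send (hypothetical names, not the
# real VM): walk the receiver's class hierarchy for the selector, and fall
# back to a doesNotUnderstand-style hook when nothing matches.
class Base:
    def does_not_understand(self, selector, *args):
        raise AttributeError(f"{type(self).__name__} does not understand {selector!r}")

def send(receiver, selector, *args):
    # Look the selector up along the class chain (Python's MRO stands in
    # for the superclass hierarchy).
    for klass in type(receiver).__mro__:
        method = klass.__dict__.get(selector)
        if method is not None:
            return method(receiver, *args)
    # Hierarchy exhausted: the receiver still gets a chance to handle it.
    return receiver.does_not_understand(selector, *args)

class Account(Base):
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
        return self.balance

acct = Account()
send(acct, "deposit", 10)   # found on Account; returns 10
```

The point of the sketch is that binding happens inside `send`, at the receiver's end, not at the call site.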


Lisp/CLOS provides parts of that in the function-call model.

In Lisp/CLOS the function is being assembled based on the arguments. That's called 'computing the method combination'. If this is not possible, because there are no methods defined for these argument types, the generic function NO-APPLICABLE-METHOD gets called with the original function and the arguments -- and the user can write methods for NO-APPLICABLE-METHOD.
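As a loose Python analogy (`functools.singledispatch` looks at only the first argument, unlike CLOS, which dispatches on all of them), the undecorated default below plays roughly the role a NO-APPLICABLE-METHOD method would:

```python
from functools import singledispatch

# Loose analogy only: CLOS computes a method combination over all argument
# types; singledispatch dispatches on the first argument alone. The default
# implementation stands in for NO-APPLICABLE-METHOD.
@singledispatch
def describe(x):
    return f"no applicable method for {type(x).__name__}"

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: str):
    return "a string"

describe(3)     # "an integer"
describe(3.0)   # no method registered for float: falls through to the default
```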


The distinction is mostly of historical relevance, dating to the time when function names were bound at link time in mainstream systems. In a modern system -- Java/C#/Python/Javascript/... -- we take for granted that function names are bound at runtime. "Calling a function in a module/object" means having the module resolve the function name, then execute the function. Messaging is an implementation detail, just like vtables are another way to implement this idea.


That's a fundamental difference which allows a whole pile of stuff to happen that otherwise would be either hard or impossible.

Messaging versus parameter passing is an entirely different beast that, for instance (but not limited to this), allows you to send the message to another process for handling -- and that other process could (theoretically) live on another host.

Smalltalk has a lot of nifty stuff under the hood and never managed to realize even a fraction of its potential due to all kinds of factors none of which have anything to do with the core ideas of the language.

Message passing is a game changer, and is in no way equivalent or comparable to calling functions with a bunch of parameters though implementing that using message passing is trivial.

The runtime binding aspect is only one detail, and not the most important one at that.


> The runtime binding aspect is only one detail, and not the most important one at that.

What are some of the others?


- messages are asynchronous, function calls are (pretty much by definition) synchronous

- messages do not require a reply (but function calls typically do expect a result)

- messages are a lot more secure in that the credentials of the sender can be inspected by the recipient which can be walled off in a different area of memory or even on a different machine

- message based architectures are easier to scale than function call based architectures

- message based architectures tend to have higher overhead per message than a function call with the same payload and return value

- message based architectures tend to lend themselves to soft real time approaches better than monolithic threaded software using functions as the main means of passing data around

- message based architectures tend to be more reliable

- message based architectures typically allow for things that are next to impossible in a function based environment, such as process migration and load balancing as well as coping with hardware failures

- message based architectures are easier to debug (by far) than large systems built up using nothing but function calls.

- message based systems tend to be a bit slower, especially if they take the message idea all the way to the lowest levels of the language (such as in smalltalk).

Really, the differences are legion. I've built some stuff using message passing that I would simply have had no way of getting production ready without a message based approach. It allows you to limit your scope to the processing of a single message at all times, rather than having to model the entire call stack in your head to keep track of what you're doing and why. As an abstraction model it is extremely powerful, and for that reason alone I'd happily advise anybody who has not yet played with an environment like that to give it a whirl.

It's not a passe-partout but it certainly is a powerful tool in the toolbox.
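To make the first two points on that list concrete (asynchrony, no required reply), here is a minimal actor-style sketch in Python -- my own toy, nothing like a full Erlang or Smalltalk runtime: each actor owns a mailbox, handles one message at a time, and `send` returns immediately without waiting for a result.

```python
import queue
import threading

# Minimal actor sketch (a toy, not Erlang/Smalltalk): each actor owns a
# mailbox and processes one message at a time; send() never blocks on a reply.
class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:        # poison pill: shut the actor down
                break
            self.handler(msg)

    def send(self, msg):           # asynchronous: fire and forget
        self.mailbox.put(msg)

    def stop(self):
        self.mailbox.put(None)
        self.thread.join()

results = []
logger = Actor(results.append)
logger.send("hello")               # no reply expected
logger.send("world")
logger.stop()                      # drain the mailbox and join
```

Because the only shared state is the mailbox, the same `send` could just as easily put the message on a socket to another process or host.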


> message based architectures are easier to debug (by far)

This is the opposite of my experience, mainly because of the lack of proper call stacks: it is very hard to inspect the reason the values in a message are what they are. Can you elaborate?


That's interesting. I guess this boils down to how far from its point of origin a bug manifests.

In a message based system that should be at most one step away from where you find the bad value. If it is more than one step away, that's more of an architectural issue: it probably means that you are sending your messages first to some kind of hub which then passes them on to the actual recipient, so you lose the point-of-origin information.

A message that gets passed on untransformed from a module that does not realize the data is corrupted is a missed chance for an integrity check and a subsequent crash.

Smalltalk does a half decent job at integrating the 'debugger' with the runtime environment (they're so tightly integrated it's hard to tell where one starts and the other one ends) and allows you to see fairly detailed traces of what happened to which parameter along the way, it's as detailed as any call stack dump.

Erlang has a 'let it crash' policy (see Joe Armstrong's excellent paper) which tries to ensure that the point of origin and the subsequent crash happen as close to each other as possible (and provides a whole range of mechanisms to deal with the fall-out).

A 30-level call stack like, say, Java would produce simply doesn't happen in those environments. Though if you programmed in a Java style in, say, Erlang, then you could generate one, and it would make your life much harder.


No hehe, my experience is mostly C++ PC and console games, so a) let it crash is not viable, and b) performance precludes performing extensive validation (a debuggable but unresponsive game is not viable).

In games, messaging troubles are usually related to systems like character behaviour and AI that deal with multiple complex entities responding to the world state and interacting with each other. It's rarely a case of corrupt data and null pointers (those are easier): physically valid yet logically incorrect or unexpected data.


I think you're missing out on something, have a read and see if maybe 'let it crash' is more than you think it is:

http://web.archive.org/web/20090430014122/http://nplus1.org/...

https://mazenharake.wordpress.com/2009/09/14/let-it-crash-th...


Okay, what is the difference, then? And what are the most important consequences?

Let me venture a guess: could it be that Erlang realised this idea of message passing better than Smalltalk ever did?


> Okay, what is the difference, then?

Message passing as a construct allows you to model each and every object as an independent process which is for many reasons a very powerful tool.

> And what are the most important consequences?

This is where Smalltalk got it wrong, I think: it never managed to really capitalize on what could have been done with an architecture like that. I suspect this has something to do with the hardware available at the time and the mindset that went with it (machine efficiency at all costs).

We'll likely never really know what we lost, but I suspect that an architecture built up out of very small objects, each of which works as independently of the others as you can get away with, has properties that will be hard if not impossible to simulate in a regular way. Taking that all the way down to the hardware level would allow you to take advantage of architectures like 'fleet'.

Anyway, there's no way to turn back the clock on that one. I don't think we'll ever see a sudden mainstream resurgence of interest in these weird but beautiful experiments from the history of computing unless there is a compelling business advantage to be had. It's a bit like the piston gasoline engine: once you've invested untold billions in its development it is very hard to do a do-over, even if you suspect that there must be a better alternative, unless there is a very large external factor pushing you.

> Let me venture a guess: could it be that Erlang realised this idea of message passing better than Smalltalk ever did?

Erlang does message passing and does it in a very good way, at a level of abstraction where it really moves the needle, rather than by defining even language primitives and very basic operations in terms of message passing. It's extremely practical and shows some fairly significant advantages over more traditional approaches, even though the initial deployment will likely come at a significant cost.

Smalltalk does it all the way down to the metal if it has to, which I'm not sure is equally effective in a larger setting.

It is super interesting though.


Messaging and vtables are not at all the same thing. See my other comment here for an explanation of the messaging part.

Vtables by themselves fall just on the other side of the "is it still a message" divide. You can obviously optimize a messaging system further with vtables and enrich a vtable-based system so that it has enough information to be considered "messaging", but out of the box that's where the divide is.


The implication of "function call" is that we're calling a particular function (e.g. in `square(x)`, `square` is a function).

The implication of "message passing" is that we're only providing a name (e.g. in `square(x)`, `square` is a name). This name is used to look up a function just-in-time, and the mapping may be altered arbitrarily during the course of the program.
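In Python terms (a deliberately tiny sketch, names mine), that just-in-time lookup might look like:

```python
# Toy illustration: a "message" resolves its name in a table at send time,
# and the table may be rewritten while the program runs.
handlers = {"square": lambda x: x * x}

def send(name, *args):
    return handlers[name](*args)   # name -> function lookup happens here

before = send("square", 4)                 # 16
handlers["square"] = lambda x: x * x + 1   # remap the name mid-program
after = send("square", 4)                  # now 17
```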

To me, it's like a half-way-house between first-order (functions and variables are distinct) and higher-order programming (functions are a sub-set of variables).


Other differences between message passing and function calls aside, I disagree with this characterization. While many mainstream languages perform static binding of functions, it is not an inherent feature of functions (at least as I personally have come to understand them); the canonical example is Common Lisp's generic functions, which behave in almost exactly the way you describe message passing.

I think no small part of the disagreements in this thread can be attributed to people simply having different notions of what phrases like "function call" mean, with some envisioning invoking a statically resolved, monotyped C-style subroutine, and others a more abstract notion of issuing a command to perform a computation with an understood set of semantics on a provided collection of data. The latter is what I think of as a function in a computational context, and, from what I've gathered from the other comments here, a fair approximation of what is typically meant by "message passing."

Now, there's something to be said for differences like the treatment of synchronization and process boundaries, but I'm inclined to categorize those as the same sort of pragmatic-but-technically-untrue distinctions as that between so-called "compiled languages" and "interpreted languages." Certainly there is at the very least an isomorphism to be established between the two conventions, given access to threads, promises, and some form of IPC.


So if "function call" and "message passing" are the same thing, why does Erlang support both?


Guesstimate: Message passing implies a fully self-encompassed message carrying enough meaning for potential distribution between systems? [Substitute: Erlang VMs, physical machines, or whatever] Function call typically means within the same interpreter instance / language virtual machine / operating system native process.

(ie. in cases where parallelism exists or is warranted, a message passing model enables it. In cases where it does not, they are essentially the same, possibly message passing being slightly slower due to marshalling/encode/decode overheads. These days that overhead is usually irrelevant and thus message-orientation is a plus, as it encourages smaller components and thereby helps to bring clearer architectural paradigms to complex systems)


Tell me about message passing without concurrency. Is it message passing in Ruby? And why?


Message passing in KO (Kay Objects) is always inherently concurrent. After all, the object abstraction is "individual computers all the way down"[1], and with separate computers, I hope we agree to call the messages asynchronous.

Anyway, mapping the passing of messages to synchronous lookup of a method in Smalltalk was always an optimization purely out of expediency. Highly important at the time so they could get real systems running on their "Interim Dynabooks" (aka Altos, 128K DRAM, 5.8 MHz[2]), and try them out.

However, the ability to do a full message-send was always retained in systems like Smalltalk and descendants such as Objective-C and Ruby, with the DNU hook turning the message that only exists in pieces on the call stack into a bona-fide object (Message in Smalltalk, NSInvocation in Objective-C).

And lo and behold, these hooks were and are used to implement asynchronous, remote and other forms of messaging.

So to answer your question: yes, what Ruby, Smalltalk and Objective-C do is message passing. It is probably on the extreme end of what can still be considered KO-style message passing, which to me means it is late-bound and an object representing the message for further processing can be recovered easily.

[1] http://worrydream.com/EarlyHistoryOfSmalltalk/

[2] http://www.cpushack.com/2010/10/18/before-the-pc-before-appl...


Just a little comment: Kay's mentioning various meta-things reminds me that I have The Art of the Metaobject Protocol (which Kay called "the best OOP book in ten years"), and I should read it.

And another little comment as regards Kay's "complaining" that Smalltalk had become "something to be learned": I very much failed to learn computer science from high school Java, where classes, methods, encapsulation etc were treated as things-to-be-learned. I didn't know that I /could/ care about how they were put in place or why.

If I had been taught that everything in a meta-system can be changed when you structure your changes, I would have been a happier camper--motivated by autonomy and purpose.


The Art of the Metaobject Protocol was good; it was basically a well-written code commentary on how to build an advanced OO meta system (CLOS) for LISP. CLOS was a bit of a dream language; I'd say Scala is perhaps the closest practical language in terms of ideas.

Interestingly, this work led Gregor Kiczales & team to create Aspect Oriented Programming - a subset of his metasystem - which became somewhat popular with Java and the Spring framework's use of AspectJ.

http://en.m.wikipedia.org/wiki/Aspect-oriented_programming

As an aside, I found in college (let alone high school!) that students have almost anti-philosophical learning objectives. The SICP approach of teaching computer science from the metasystem up with Scheme (and then possibly moving on to CLOS/Lisp) was near-universally hated among my peers (1996-2002) -- most wanted a practical language they could "get a job" with. I believe MIT's first-year CS switched to Python in part to deal with this resistance.


AOP also encompasses techniques you can't find in CLOS/MOP. The analogues to before/after/around methods are the easiest concepts to implement and reason about, so those are what popular language extensions tend to emphasize. But if you look at the original AOP papers describing Reverse Graphics, you'll see an AOP system used to liberally restructure the control flow of the entire base program, not just augment its behavior. It was an early version of what you can see now in something like the Halide language.

> I believe MIT first year CS switched to Python in part to deal with this resistance.

In part. The full story is a bit more depressing: http://www.wisdomandwonder.com/link/2110/why-mit-switched-fr...

Most of the other top universities that had used SICP also switched shortly after MIT.


> most wanted a practical language they could "get a job" with

This was me in college. Especially early on, I kept hounding my professors why they weren't teaching a particular language (e.g., C++) or toolchain (Visual Studio). It wasn't until well after I graduated that I appreciated the distinction between using tools and making tools---and that my professors taught me the latter.


What makes Scala more "practical" than Common Lisp, other than incidental matters of library support on the JVM?


That, and the dearth of people that know CLOS, or are able and WANT to learn CLOS. "Design by resume" surrounds us.


JavaScript in a browser + DevTools offer a similar environment to Smalltalk + Squeak. http://en.wikipedia.org/wiki/Squeak

One can change everything, and manipulate even the UI. Even the DevTools in all browsers nowadays are HTML5 iframes.

Manipulating the objects and meta/prototype objects are features that both languages allow.

Though they are different, and afaik only JS is called a prototype based programming language: http://en.wikipedia.org/wiki/Prototype-based_programming


JavaScript in a browser + DevTools offer a similar environment to Smalltalk + Squeak.

Sure, they're "similar", but at a relatively low approximation. The capabilities of the Smalltalk environments are simply on a different plane.

Manipulating the objects and meta/prototype objects are features that both languages allow.

This is sort of a bare minimum for all OO languages. Even Java with its rigidity allows for complex tricks with reflection and method body replacements for hot code reloading.


Those were just examples.

While this is often considered to be one of JavaScript's weaknesses, the prototypal inheritance model is in fact more powerful than the class based model. It is, for example, fairly trivial to build a class based model on top of a prototypal model, while the other way around is a far more difficult task.

> The capabilities of the Smalltalk environments are simply on a different plane.

Please explain. Maybe use the following examples: FirefoxOS and Squeak


> the prototypal inheritance model is in fact more powerful than the class based model. It is, for example, fairly trivial to build a class based model on top of a prototypal model, while the other way around is a far more difficult task

This is precisely what Kay is arguing against in his second point. In Smalltalk everything is an object including classes but there is a sharp distinction between a class and other objects and it's very important to make this distinction. It's important to be explicit about which level you are operating on.

In general arguments about one thing being better than another because you can express one in terms of another don't hold up. Then you can argue that assembly is the most powerful of all. (see also the definition of Turing Tar Pit)


Not really. Your first paragraph is also true for JS (think of ES6 "class" syntactic sugar). Some browsers implement the DOM in JavaScript (https://www.chromium.org/blink/blink-in-js, http://www.phantomjs.org/, https://github.com/andreasgal/dom.js). And there are things like asm.js and even a Linux VM running in JavaScript (http://bellard.org/jslinux/). JavaScript/ECMAScript is one of the most misunderstood languages; please read up before jumping to conclusions.


> It is, for example, fairly trivial to build a class based model on top of a prototypal model, while the other way around is a far more difficult task.

Not in languages where you can control method dispatch and object allocation (Smalltalk, Ruby, Python, etc.); there it's quite trivial.


Can you formulate more precisely (or by example) what you mean by "simply on a different plane"?


See the Design Principles Behind Smalltalk article: https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....

Some relevant excerpts:

   Here are some examples of conventional operating system
   components that have been naturally incorporated into
   the Smalltalk language:

   [...]

   Display handling -- The display is simply an instance of
   class Form, which is continually visible, and the 
   graphical manipulation messages defined in that class are
   used to change the visible image.

   [...]

   Debugger -- The state of the Smalltalk processor is
   accessible as an instance of class Process that owns a
   chain of stack frames. The debugger is just a Smalltalk
   subsystem that has access to manipulate the state of a
   suspended process. It should be noted that nearly the
   only run-time error that can occur in Smalltalk is for
   a message not to be recognized by its receiver.

   Smalltalk has no "operating system" as such. The
   necessary primitive operations, such as reading a page
   from the disk, are incorporated as primitive methods in
   response to otherwise normal Smalltalk messages.
This is from 1981, BTW.


I felt the same. People differ in the way they interact with systems and logic. I certainly disliked being fed the Java way rigidly. I actually learned more about basic OOP by simulating it in ADA. Some people like to understand by building. But it's also valuable to understand an abstraction outside of any implementation. In the end it's better to have done both. And I feel strongly that to understand programming, one must write an interpreter and a compiler, so you can reason about the different abstraction levels clearly, without blind spots or dark magic.


Messages are inherently decoupled from the methods they invoke, unlike function calls or function pointers. For example, in Smalltalk you can rewrite this:

  dictionary at: aKey put: aValue
to:

  dictionary perform: #at:put: with: aKey with: aValue
where #at:put: is just a Lisp-style symbol object, an identifier, and perform:with:with: sends a message to the receiver with the first argument as the selector and second and third as its arguments. As one example, this allows you to have a button object that you configure to send some message selector to some object when clicked. Obviously, you can accomplish this equally well with anonymous functions, and since Smalltalk has probably the most concise syntax for anonymous functions (just pairs of "[" and "]"), they are often used instead.
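A rough Python analogue of the same decoupling (not the Smalltalk semantics, just the flavor): the selector stays plain data until the moment of the send.

```python
# Ordinary call: the method name is fixed in the source text.
xs = []
xs.append(1)

# "perform:"-style send: the selector is just a string, looked up at
# runtime, much as #at:put: is just a symbol object in Smalltalk.
selector = "append"
getattr(xs, selector)(2)
```

Because `selector` is ordinary data, it can be stored in a button's configuration, passed around, or received over the wire before ever being sent.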

Speaking of anonymous functions and OO, it's bizarre to see FP zealots cite their gradual introduction into mainstream OO languages as proof of some move away from OOP and towards FP, when Smalltalk has had them since 1972 and uses them much more extensively than many "pure" functional languages do (though admittedly in part due to currying and partial application as in Haskell subsuming many of their use cases).


> The big idea is "messaging" - that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word - ma - for "that which is in between" - perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet - to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.

What is the difference between "messaging" and "function call"?

"messaging" data into a module = call a function with the data as arguments.

"messaging" data out of a module = package the data as the return of the function.


You should take a look at the Reactive Manifesto [1] and the actor model [2]. My own personal and non-expert take is that message orientation is about the wholeness of the actors that pass messages. "Would you care for a drink?" is message oriented. Whereas many systems built on function calls are more like you jabbing me in the butt with a syringe filled with 20 ccs of alcohol.

[1] http://www.reactivemanifesto.org/

[2] http://www.infoq.com/news/2014/10/intro-actor-model


I think about it conceptually like this...

When one object is "calling a function" inside of another object, it's reaching into that object, grabbing the method it wants to invoke, and invoking it. It puts the responsibility of invoking that method on the caller.

When one object "sends a message" to another object, the receiving object is responsible for determining what function gets invoked.

The most obvious place this difference manifests itself is the existence of things like "method_missing". That can't happen when you're "calling a function", but if you're just sending a message and the receiving object doesn't have a method that matches the message name, it can still make a decision about how to respond.
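A small Python sketch of that idea, with Python's `__getattr__` hook playing the role of method_missing (class and method names are mine): the receiver still gets to respond even though no matching method exists.

```python
# __getattr__ fires only when normal attribute lookup fails, so it can play
# the role of Ruby's method_missing / Smalltalk's doesNotUnderstand:.
class Recorder:
    def __init__(self):
        self.calls = []

    def __getattr__(self, selector):
        def handle(*args):
            # The receiver decides how to respond to the unknown message:
            # here it simply records the selector and its arguments.
            self.calls.append((selector, args))
        return handle

r = Recorder()
r.play("song.mp3")   # no such method, yet the object handles the message
```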


Function call usually means static binding (call a specific address) or at best a virtual call (call address from a vtable).

Messaging is much more flexible - the receiver can have a custom dispatch function that looks at the message and chooses the implementation in a Turing-complete way (like "call foo if third argument is 7, bar otherwise"), logs the message, duplicates it to send to multiple implementations, serializes it to send over the network, whatever.

Obviously for this to be efficient, the underlying system often optimizes "sending a message" to "calling a function".
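The comment's hypothetical router ("call foo if third argument is 7, bar otherwise"), plus logging, sketched in Python:

```python
# Hypothetical dispatcher from the comment above: an arbitrary,
# Turing-complete rule picks the implementation, and every message is
# inspectable data that can be logged (or duplicated, or serialized).
def foo(*args):
    return "foo handled it"

def bar(*args):
    return "bar handled it"

log = []

def dispatch(selector, *args):
    log.append((selector, args))           # log the message itself
    if len(args) >= 3 and args[2] == 7:    # "third argument is 7"
        return foo(*args)
    return bar(*args)

dispatch("op", 1, 2, 7)   # routed to foo
dispatch("op", 1, 2, 3)   # routed to bar
```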


Messages and function calls are similar but interpreted at different times. In an OOP language like Java the method is usually bound at compilation. In Smalltalk or Objective-C the method is resolved when the program is run. A class may not even know the method, and can either ignore it or handle it. Delayed binding gives more flexibility in designing classes and increases the chance of runtime errors. In the olden days delayed binding made programs slower and executables larger, but that isn't much of an issue now.


Messaging is a protocol level abstraction, communication between entities [1]. A function call is a much lower level abstraction that does a <verb> to/with the data. One could have an entity that executes one function or many, or it could manage a whole sequence of events.

The Kay Object (KO) is more of an encapsulated microservice, like an Erlang process.

How would mail (SMTP) look if it was implemented via RPC?

[1] I specifically didn't use the word object because the Kay Object and the Java/C++ object are not comparable.


Kay's use of "messaging" started early; he'd talk about "sending a message to a number to add a value to itself". It sounded like there were all these asynchronously running objects communicating. That was his original vision - every object a virtual CPU. But in reality, it was just a synchronous function call.

The funny thing is, we now do have a lot of systems that are async message-oriented. Mostly in Javascript, not Smalltalk. You send a one-way message to something, and maybe it calls you back later.


Think of the internet...

Great quote along these lines: "Upgrade cost of network complexity: The Internet has smart edges ... and a simple core. Adding a new Internet service is just a matter of distributing an application ... Compare this to voice, where one has to upgrade the entire core." - RFC3439 (2002) ... from my fortune clone @ https://github.com/globalcitizen/taoup


> What is the difference between "messaging" and "function call"?

If the object supports the message, then it will execute some code. Otherwise it will just ignore it. This is a more dynamic approach than an explicit function/method call where the object must support that function.

You can think of e.g. Javascript events plus function calls (callbacks) that pass an object ("message") and a typeof/switch check as an equivalent.


My own personal view is:

- function calls: single recipient, return values, synchronous

- messaging: publish & subscribe, no return value, asynchronous

Also - using pure message passing "inside" a component is kind of painful, using pure function calls between components can be painful. Of course, this depends on your definition of "component"....


If the big idea was messaging and not objects, I'm curious to know how we ended up in this primarily object-oriented world, especially when its founding father didn't have that intention, as he says: "I invented the term object-oriented, and I can tell you that C++ wasn't what I had in mind".

I have a guess from the following quote: "This was why I complained at the last OOPSLA that - whereas at PARC we changed Smalltalk constantly, treating it always as a work in progress - when ST hit the larger world, it was pretty much taken as 'something just to be learned', as though it were Pascal or Algol."

There were people in hurry to make something with whatever theoretical basis there was instead of further refining the original concept. Probably, that's where it took the wrong turn.


> I'm curious to know how did we end up in this primarily object oriented world

Hundreds of millions of lines of C and millions of C programmers.

They could, in theory, smoothly migrate into the world of object orientation using C++.

C++ has, however, held as a constant design principle that if you don't use the new stuff, it's still C. Put another way: feed your C program to a C++ compiler and it behaves the same way.

That design back pressure makes late binding and message-passing a la Smalltalk a non-starter.

Objective C solved this in a very different way: by making the Smalltalk-y additions syntactically orthogonal to the existing C bits. That's why you send messages [inside brackets].

C++ was backed by AT&T. Objective C was backed by some whacky tech company from California founded by that loony guy they kicked out of Apple when the grownups took over.

Not to mention that Smalltalk had serious corporate backers too (eg IBM), so what room was there for Objective C?

Eventually Java came along with the mission to rescue the world from the explosive mixture of C++ and median programmers. It took a page from the C++ book and aimed to be easy to switch. Similar model, similar basic concept of operation. And that's largely been that.


I don't think we really did end up in an object-oriented world. I think we ended up with nominally object-oriented languages used in ways that are more struct-oriented than object-oriented.

I had the good fortune to do early Java work with a bunch of Smalltalk programmers, so I know you can write Java that is reasonably well object oriented. But I don't think I ever saw an intro-to-Java book that was any good in that regard.


Refining a language or programming paradigm requires a philosophical or scientific bent. Not much money in that. Most people just want to get a job done and go home at 5pm.

Edit: Bright side, I think polyglot programming is more popular now than ever in history. In the past you learned one language and that was 20 years of your career before you went into management. Your identity was wrapped up in being a "VB person", a "C person", a "COBOL person", a "Java/C# person", or a "Rubyist".

There are still language identity politics, but I think Ruby might have been the last of the One True Religions; now the frontier is all over the place between Python, Ruby, Node, Java, Go, C#, etc.


I think it is a perversion of computer science that we think of ourselves as users of languages instead of creators of languages, especially since it is not hard to create a language these days -- we know much more and have better tools than people did 30 years ago. Many of the problems we consider uber-complicated could be solved relatively easily by creating appropriate problem-specific languages: consider web programming, data mining, etc.


I'm with you. But it becomes a skills and NIH-attitude issue at scale, where the evolution of science is less of a concern than treating the science as constant for an engineering project.

I've seen lovely crafted DSLs that were deemed unmaintainable when the creator moved on, and management cursed the use of them if they weren't already a popular OSS project.


The reason is that DSLs are just viewed as "another" burdensome language, instead of a design for codified problem-specific knowledge. A DSL should evolve as we evolve our understanding of the problem. Think about successful DSLs like Mathematica, for example.


"Object-oriented" was like "agile" in the late 80s and early 90s. It was viewed by management as a quasi-silver-bullet that would solve the software complexity crisis -- and by consultants as easy money. The qualities of Smalltalk that were selected for in the object-oriented craze were the ones that most appealed to computer-unsavvy middle management -- the ones that helped them herd programmers by the dozens or hundreds without them stepping on one another's toes. So classes, data encapsulation, inheritance, and virtual-method polymorphism were emphasized to promote the creation of sealed components that could be extended or wrapped but not changed (Brad Cox's software IC dream) and late-bound messaging de-emphasized in order to enforce static typing constraints.


OO is still a useful way to organize programs. Even if you use, let's say, templates in C++ for polymorphism instead of virtual method dispatch... it's still a useful way to mentally/physically organize your code.


Because it was much simpler to bolt on the more primitive Simula model of OO, which is a straightforward extension to procedural Algol. Beefed-up structs, at their core.


In case anyone is looking for the video of the keynote Kay gave before this post (the link to it at the bottom of the page is dead):

  https://www.youtube.com/watch?v=oKg1hTOQXoY


