Alan Kay and OO Programming (ovid.github.io)
173 points by signa11 61 days ago | 135 comments



>> Extreme late-binding is important because Kay argues that it permits you to not commit too early to the "one true way" of solving an issue (and thus makes it easier to change those decisions), but can also allow you to build systems that you can change while they are still running!

>> Binding can also refer to binding a variable type to data.

As someone who has over 15 years of experience going back and forth between statically typed and dynamically typed programming languages (but who has settled on dynamically typed languages in the last 4 years), this statement resonates very strongly with me. Also, it ties in perfectly with my philosophy of focusing on integration tests instead of unit tests; the idea that you should lock down the features of your system but keep the flexibility to move around all of the internal implementation is critical.

It's a shame to see the new generation of developers moving back to statically typed languages (e.g. TypeScript) instead of actually trying to understand how to use dynamically typed languages properly. The obsession with achieving 100% unit test coverage is equally shameful.

Many of the people who came up with or promoted the idea of dynamically typed languages had decades of experience working with punchcards, assembly code and statically typed languages; they were onto something. Why does the new generation so easily discard this vast amount of wisdom?


> As someone who has over 15 years of experience going back and forth between statically typed and dynamically typed programming languages

I have >20, in a mix of dynamic and statically typed languages. I've even designed a few of both including some that straddle the line.

> It's a shame to see the new generation of developers moving back to statically typed languages

It's only a shame if you presume to know better than all of those developers. If dynamic typing was what they wanted, they wouldn't have added TypeScript to their existing JavaScript code. Personally, I trust that in the aggregate, developers aren't dumb and do understand their own pain points and solutions even when the pain and solutions aren't obvious to me.

> Many of the people who came up with or promoted the idea of dynamically typed languages had decades of experience working with punchcards, assembly code and statically typed languages; they were onto something.

Most of the designers of the initial statically typed languages came from that same era and technology background, so I don't think this argument carries much weight.

Neither dynamic nor static typing supplanted the other. They are parallel tracks and have been for virtually all of computing history (see BASIC and FORTRAN). Both branches are still going strong, so anyone on one branch isn't discarding the wisdom of the other, they're simply choosing their preferred path. If I use a hammer and you use a screwdriver, I'm not "discarding the vast amount of screwdriver wisdom", I simply know that I have a nail and not a screw to deal with.

One of the real problems I see with advocates of dynamic typing is that, for the most part, we have lost the dynamic code editing experience. Without something like Smalltalk's "open a code editor when an error occurs" live debugging/editing experience, you'll never get the real benefits of dynamic typing.

Editing dynamically typed code in a textual static user experience seems like the worst of both worlds to me.
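The closest stock approximation I know of outside Smalltalk is a post-mortem hook. A rough sketch in Python (assuming the standard pdb module; you can inspect the broken frame interactively, but you still can't redefine the method and resume the way Smalltalk lets you):

  import pdb
  import sys
  import traceback

  def excepthook(exc_type, exc, tb):
      # Print the failure, then drop into an interactive debugger
      # at the frame where the error actually occurred.
      traceback.print_exception(exc_type, exc, tb)
      pdb.post_mortem(tb)

  sys.excepthook = excepthook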


> https://vimeo.com/9270320

There's very little you can prove, other than "less code gives you fewer errors/bugs". Arguably, this is primarily due to specification gaps being opaque and less logic to hold state. You can't file a bug or raise an issue about a choice that was already made by a dynamic language.

> It's only a shame if you presume to know better than all of those developers.

Statistically, someone is going to be right. The "right tool for the job" trope does not extend to every facet of every choice. The totality of developers are not experts in every set of practices (while it still may apply to some cases). Less code is better and until there's a study to say something more, I'm not interested in the hand waving.

> > Many of the people who came up with or promoted the idea of dynamically typed languages

> Most of the designers of the initial statically typed languages came from that same era

This isn't an argument about better, but precisely the point you turn around and make. It's not worse. Less code is better, for sure.

> Without something like Smalltalk's "open a code editor when an error occurs" live debugging/editing experience, you'll never get the real benefits of dynamic typing.

Debuggers allow this. You generally don't want the user to have this power.

There is a shocking lack of science in the industry. Re: The Quorum Programming Language made some attempts at advancement. It's been decades and I still see the same squabbles, and I continue doing 10x more with dynamic languages than when I am forced back to something like Java. At some point, I have to assume that either I'm special or one of those language choices is crippling.


> Statistically, someone is going to be right.

Yes. If you ask 100 people what the answer to "3 + 5" is, you'll mostly get "8". But that's because you're asking them all for the right answer to the same problem.

If you ask 80 of them the answer to "3 + 5" and 20 the answer to "3 + 2", the right answer isn't 8 and the people who answered 5 for the latter aren't wrong. They are solving different problems.

Given the breadth of computing today, it seems very unlikely to me that all programmers are solving the same problem, and thus that there is a single objective right answer for what language or language paradigm is best. It certainly doesn't align with my own personal experience, where I can't point to a single language that I would prefer for all of the different kinds of programs I've written.

> Debuggers allow this.

Yes, with limitations. A Smalltalker will tell you that debuggers are a pale imitation of the full experience. (I don't have much first-hand experience with it myself, but I know people who get misty-eyed when you ask them about it, despite being very familiar with "modern" debuggers.)

> You generally don't want the user to have this power.

I don't disagree with you personally, but there's a counter-argument that forcibly separating people into "developers" and "users" is itself a moral failing akin to welding the hood shut on a car.

> There is a shocking lack of science in the industry.

I'd like more science too, but I don't find its absence that shocking. PL is very hard and expensive to study scientifically. Doing controlled experiments is very difficult when step one is "Design an entire programming language, implement it and all of its tools and ecosystem, and then get people to spend a long amount of time learning it to proficiency."

That's a lot more difficult than "Take a sip of two sodas and tell me which one you like more", and even that simple experiment turned out to be famously flawed.


> I can't point to a single language that I would prefer for all of the different kinds of programs I've written.

I never liked the "pick the right tool for job" cliche in the context of programming languages. I'm curious if you can imagine a single programming language which you /would/ prefer for all of the different kinds of programs you've written. Not that it does exist, but could it?

Other than size, weight, power, and price limitations, I don't pick different computers for different problems. I mean I could live with one ISA for pretty much everything, including GPUs. I'm sure different people would pick different answers (one of your points), but after looking at a lot of languages over the years (including some of yours), I can't come up with two features I want which are inherently in conflict and necessitate being different languages.

I don't think it has to be a superset of all languages monstrosity either. And for the sake of argument, let's say this is just for one-person development. There's too much politics in trying to decide what features you /don't/ want your coworkers to abuse. :-)


> I'm curious if you can imagine a single programming language which you /would/ prefer for all of the different kinds of programs you've written. Not that it does exist, but could it?

Nope. Granted, I may work on a greater breadth of software than the average programmer. But, at the very least, I have implemented language VMs and garbage collectors where I needed to work at the level of raw bytes and manual memory management. But I sure as hell prefer memory safe languages when I'm not doing that.

I like static types for decent-sized programs, but I also use config files and other "data languages" where that would be more frustrating than anything.

Even if there was a single language that was perfect for me for all of the code I write, I don't expect that that language would be perfect for others, and I don't think those people are wrong.


> I have implemented language VMs and garbage collectors where I needed to work at the level of raw bytes and manual memory management

Fair enough, and I guess I'm forced to agree a little. However, if the one true language already existed, the VM/GC problem wouldn't have to be solved twice. Somebody had to write the first assembler in machine code, too.

I've written a (Hans Boehm style) GC of my own, and I admit that wouldn't fit with what I had in mind either, but working with raw bytes is a solvable problem in almost any level of programming language as a few library functions. All the batch or command line utilities, GUI applications, back end server modules, and most of the one-off exploratory programs could fit in a single elegant language.

> I like static types for decent-sized programs, but I also use config files and other "data languages" where that would be more frustrating than anything.

Again I agree, but (to me) this has a solution. I could be very content with a statically typed language with a single variant type for when you need to handle JSON-ish type dynamic variables or hierarchical data. I think dynamic and static typing can coexist very nicely in one language.
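Something in this direction is what I have in mind - a rough sketch in Python-plus-mypy terms rather than a real statically typed language (the Json alias and the config example are made up; recursive aliases and built-in generics need a reasonably recent Python/mypy):

  from typing import Union

  # One "variant" type for JSON-ish data; everything else stays statically annotated.
  Json = Union[None, bool, int, float, str, list["Json"], dict[str, "Json"]]

  def port(config: dict[str, Json]) -> int:
      value = config.get("port", 8080)
      return value if isinstance(value, int) else 8080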

> Even if there was a single language that was perfect for me for all of the code I write, I don't expect that that language would be perfect for others, and I don't think those people are wrong.

I did caveat that this was for single-person programs and that different people would make different choices. I think my point was just that, having looked at languages from Icon to Prolog to SQL to Scheme to OCaml to Rust to C++ and a lot of others, I think there is a point in the high-D trade space where I would be content to live and breathe for almost every programming problem.

I've never gotten anyone else to agree, but I think it's an interesting exercise to fill in the details. I mean, computers are so much more malleable than real-world tools - you could have a single thing which handles screws, nails, rivets, and bolts effectively.


I find Objective-C to have that range. I have used it for implementing everything from kernel drivers (DriverKit, yay!) to programming languages to server apps and GUI apps.

Not perfect at the entire range, but it does have it.

And having used it and seen what worked well and what didn't, I have some ideas as to how to make it better.

I think it could be improved by having the Smalltalk side be the default and then adding mechanisms to move towards the machine again. Either very simply (add some primitive type declarations) or with greater power, from a less constrained base.


I always liked the way Objective-C added the message passing syntax in a way which fit in well with C. The [squareBrackets means: "we're in Smalltalk land"] looks nice to me. The @ (at sign) sigils I don't appreciate as much.

> I think it could be improved by having the Smalltalk-side be the default

It does kind of seem like you'd want the higher level language on the outside and only dive into the lower language when you need it, but I could go either way for that.


> never liked the "pick the right tool for job" cliche in the context of programming languages

Me neither. Many of the differences are fairly random, at least in relation to the task they're being applied to.

Reminds me of the distinction we had in the late 80s and early 90s between "server" and "client" operating systems. "Client" operating systems had user friendly GUIs and crashed a lot. "Server" operating systems were solid but didn't have (nice) GUIs. Makes sense, right? Except that it was complete hogwash, there was no actual reason for it except random chance/history. As NeXTstep amply proved.

Why do we have Java with bytecodes on the server? This was initially invented for small machines, and the bytecodes/VM were for applets and "write once, run anywhere". How does that make sense on a server? You are deploying to a known machine. With a known instruction set architecture. It doesn't, that's how. But Java failed on the desktop and the server was all that was left.

> I don't think it has to be a superset of all languages monstrosity either.

Agreed. Most programming languages are actually quite similar. I am personally finding that the concepts I am adding to Objective-Smalltalk[1] work well, er, "synergistically" in (a) shell scripting (b) application scripting (c) GUI programming (d) server programming. Haven't really tried HPC or embedded yet.

[1] http://objective.st


>I'm curious if you can imagine a single programming language which you /would/ prefer for all of the different kinds of programs you've written. Not that it does exist, but could it?

>However, if the one true language already existed,

The one true language can't exist because we want to use a finite set of characters to express convenient programming syntax. (A previous comment about this.[0])

It might be possible to craft a single optimal language for only one particular programmer, but I doubt even that limited scenario is realistic. Consider trying to combine the syntax of 2 languages that many programmers use: (1) bash (2) C Language

In bash, running an external program is a first class concept. Therefore the syntax is simple. E.g.:

  gzip file.txt
  rsync $HOME /backup
Basically, whatever one types at a bash command prompt is just copy-pasted into a .sh file.

But in C Language, external programs are not first-class concepts so one must use a library call such as "system()":

  #include <stdlib.h>
  int main(void)
  {
    system("gzip file.txt");
    system("rsync $HOME /backup");
    return 0;
  }
In C, we have to wrap each external program in a call to system("..."), add the #include and main() boilerplate, and add the noisier syntax of semicolons after each line. It's ugly and verbose for scripting work.

In the reverse example, C makes it easy to bit-shift a number using << and >>.

  y = x << 3;
How would one transfer that cleanly and conveniently to bash? Bash uses a bunch of special symbols for special functions.[1] Bash already uses << and >> for input/output redirection, so bit-shifting gets pushed into noisier arithmetic syntax such as y=$(( x << 3 )).

So, if we attempt to create a Frankenstein language called "bashclang" that combines concepts of bash and C, which set of programmers do we inconvenience with the noisier syntax?

What if we just tweaked C's parsing rules so that naked syntax to run external programs would look like bash? Well, what if you have executable binaries with names like "void", "switch"? Those are reserved names in C Language.

Same thing happens with other concepts like matrices. In Julia and Mathematica, matrices are first class. You can type them conveniently without any special decoration. But in Python, they are bolted on with a package like NumPy, so one has to type out the noisier syntax of np.full() and np.matmul().

Convenient syntax to enable easy-to-read semantics in one language leads to contradictions and ambiguity in another language.

To add to munificent's comment, I also don't see how one language can offer both garbage-collected memory and manually allocated memory using convenient, concise syntax _and_ a zero runtime performance penalty for manual memory. Those two goals contradict each other. When I want to write a line-of-business type app, I just use C# with GC strings. On the other hand, when I'm writing a server-side app that's processing terabytes of data, I can use C++ with manually allocated strings, with no virtual machine runtime overhead, for max performance.

[0] https://news.ycombinator.com/item?id=15483141

[1] https://mywiki.wooledge.org/BashGuide/SpecialCharacters


I've never used it, but based on what I have read about it, this language seems fairly radical (well, not completely - LISP could be considered a prototypical form?):

https://www.jetbrains.com/mps/

It is open-source, too:

https://github.com/JetBrains/MPS

It seems to have active development, and - interesting aside - it is written in Java.

But again - it purports to do what you seem to be explaining here, and a bit more: It's a system that lets the programmer define the programming language as they use that same language, for the specific purpose at hand (aka, DSL - Domain Specific Language).

As I've noted - I've not used this tool, but I've kept it in the back of my mind as the concept seems very fascinating to me (I don't know if it is practical, workable, or anything else - but I do think it's a "neat" idea).


> So, if we attempt to create a Frankenstein language that combines concepts of bash and C, which set of programmers do we inconvenience with the noisier syntax?

I won't speak for others, but I can make that choice for myself, and I'm willing to give the C-like language the upper hand. If bash like things were high enough priority, I might change the name "system" to "run" so it was just a bit more concise, perhaps taking multi-line strings to tidy it all up. I'm not saying one language to rule them all, I'm just saying I could have one language for nearly everything I've done or want to do.

What I was talking about was more like what features does the language have. For instance, I like algebraic data types (sum/product types). I like generics/templates. I want an "any" (variant) type. I want complex numbers and matrices. I like operator overloading so I can implement new arithmetic types. I want simple and immutable strings. I want structs and unions. I want SIMD types. I could also list things I don't want.

Anyways, I could go on, but all of those fit in a single efficient and expressive language. Some current languages come close, but get important details wrong.

> I also don't see how one language can offer both garbage-collected memory and manual allocated memory using convenient concise syntax _and_ and zero-cost runtime performance-penalty for manual memory. Those two goals contradict each other.

There are a lot of details that matter, and I can already anticipate some of your objections, but I would be very happy with automatic reference counting on a type system which precludes reference cycles. I would not use atomic increments or decrements (which is one of the more costly aspects of reference counting), and I would not let threads share data directly. This provides deterministic memory management and performance not too short of what you get in C, C++, or Rust.

So not "zero-cost", but damned close. It's also simple enough to think about the implementation so you can easily keep the non-zero-cost parts out of the inner loops.

Of course someone else would disagree and say they can't accept this (minor) compromise.

Ousterhout had a famous quote about needing both a high and low level language. For him, that was Tcl and C. I think I could have everything I need/want for high and low level tasks in a single elegant language. You're not alone in disagreeing :-)


Have you ever considered what a VR programming language could look like? A language doesn't have to be black and white text. C and Bash don't have to directly overlap if technology creates simple syntax that expands beyond how keyboards type.


> A language doesn't have to be black and white text

I agree. I'd like to be able to insert pictures to explain data structures and algorithms, or equations for the mathy bits. I'd like to be able to choose different font sizes for different parts of the code to indicate their relative importance. Instead of files in directories, I'd like to be able to organize functions clustered in 2D regions of a page. I'm not sure what you mean by VR (3D?), but I'd be curious to see it.

> C and Bash don't have to directly overlap if technology creates simple syntax that expands beyond how keyboards type.

I distinctly don't want a polyglot catch all set of languages. I mean you can almost do that in the .Net world where a project can use many languages. I don't have any idea what would be better than a keyboard or touch screen for entering the syntax.


>>>"discarding the vast amount of screwdriver wisdom"

Thank you for that analogy. It is hilarious, and I will definitely be slipping that into some of my work conversations this week.


> Personally, I trust that in the aggregate, developers aren't dumb and do understand their own pain points and solutions even when the pain and solutions aren't obvious to me.

Not saying you're wrong on your other points, but "Wisdom of the crowd" is a logical fallacy.

There are plenty of examples where developers as a group followed some line of bad practice for years until someone smart managed to "steer the elephant" in a better direction.


> Why does the new generation so easily discard this vast amount of wisdom?

If you are actually looking to have some input from people who prefer statically typed languages about their reasons, you might want to write your comment in a way that does not presuppose that these people are obviously wrong (and, apparently, unwise).


Indeed. Berating people is the worst form of advocacy.

Advocates of LISP and Smalltalk need to come up with reasons why their languages have not won the adoption wars that aren't "everyone else is an idiot" or "if only we told them the gospel they would convert".

(In fact, in some ways they've almost overwhelmingly won, it's just that those two particular languages didn't. The bulk of code today is written in the late-binding language Javascript, with its novel prototypical inheritance that enables run time patching. Lots of data is stored in JSON, which is a marginal improvement over S-expressions. Apple have people undramatically writing Objective-C, the Smalltalk inheritor. Perhaps the problem is not that they didn't win but that they aren't being hailed as heroes?)


I think our industry has become rife with people who don't know its history. So we get people "discovering" new ways of doing things that were known decades ago. Take for example the "JAM stack", also known as basically the way web development was done 20 years ago. It is touted as something new and exciting, which it only is if you don't understand the history of web development. Sure, JavaScript is used more now than it was back then, but that doesn't make the JAM stack altogether new, does it?

Likewise, statically typed languages aren't new and refreshing. They're just another way of solving a problem. Note I said they're "another way", not a "better way". They have their place (right tool for the job, etc...) but I think OP's point is that too many people think statically typed languages are better for everything and that's simply not the case. But one needs to understand the history of languages and programming in order to better appreciate the distinction and, more importantly, know which tool is the right one for the job.


Pop culture, that's it. A rather well-known and obvious approach to web development needed a catchy name and a dedicated promotional website so that webdevs know it's cool and trendy.

Alan Kay said in 2004: «You could think of it as putting a low-pass filter on some of the good ideas from the ’60s and ’70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.» https://queue.acm.org/detail.cfm?id=1039523


> Many of the people who came up with or promoted the idea of dynamically typed languages had decades of experience working with punchcards, assembly code and statically typed languages

The problem IMO is that the languages used at that time mostly had ad hoc, poorly specified, unsound type systems. I find that more "modern" type systems are helpful rather than burdensome, but the theoretical foundation for them didn't really exist until the 70s.


Focusing on integration tests: oh, the test fails, something is wrong somewhere... I will debug all layers for the next week, as we simply don't have too many tests covering the internal code.

I like to have both.


I don't know about integration testing, but it seems like that problem necessarily means important lower level tests are missing. Something like having a bunch of nested switch statements where some cases were left unhandled.

In that context, are integration tests checked automatically, the same way that, say, Rust would error out on unhandled cases? Or is this left up to the developer (who I assume is always writing this stuff at the last possible moment with the least possible focus)?


Why next week? Why not use the integration tests as part of your debug cycle now? The kinds of integration tests that I write only take a few milliseconds to run so if they fail I can just make a code change (or add a new breakpoint or log), then run again, then repeat. The flow should not be any different from doing TDD with unit tests. Note that integration tests don't necessarily have to be end-to-end; the point is more that each test case should cover a specific behavior.


> Why next week?

I think pleasecalllater's point is that without unit tests, you'll spend a lot of time just pinpointing the bug, since it could be anywhere in your module (which unit tests would cover), not just at the interface to other modules (which integration tests cover).


If your tests:

- run fast (<10s)

- run on your local machine

- run on your local code (including uncommitted diffs)

... whether they're called "integration" or "unit" tests doesn't matter, as they will detect regressions at the earliest possible time.

Unit tests might guarantee smaller code search area (because maybe the failing test only executed 1% of the code), but if running "git stash" makes all the tests pass again, I have a pretty good confidence about where the error lies.


That's not my experience. Usually it only takes me a few minutes to fix a bug once it has been caught by an integration test.


Depends on your system. Mine usually are huge. The integration test showing "this json reply is wrong" doesn't tell me anything else than I have to dig through all the layers.


Maybe you should break up that large system into more manageable chunks (e.g. microservices, engines, ...) and integration test those individually.


>The kinds of integration tests that I write only take a few milliseconds to run

So your codebase probably doesn't solve the problems and use the stacks most of us solve/use.


> your codebase doesn't solve the problems most of us solve

Why not?

> and use the stacks most of us use

Probably. Or at least not use them the same way.

I am consistently amazed by the time even unit tests take when I join projects. If they have them.

And those unit tests are usually also at least part broken and very incomplete. I would argue those phenomena are closely related.

The unit tests for one of my projects typically run in the ~1 second range, meaning that I can, and do, run them as part of the build process. A build is not complete until the unit tests pass.

This includes PostScript/PDF interpreters, so it's not just testing completely trivial stuff.

(Just checked: currently 3.9s real for 1498 tests across 9 projects, so a bit on the slow side overall)


How many external build systems and how much proprietary closed-source tech do you have to use to build your project and run integration tests on it? In my case, a full production build that produces the same library takes about 30 minutes using an asset cache; re-building everything completely from scratch would take about two hours.


Not "the next week" but "for the next week". My experience shows that debugging pure integration test systems is really time consuming.


I really don't agree with your point. The analogy Kay uses with biology is very telling:

Cells (co-)evolved for billions of years; they were not "designed" to accomplish certain tasks. Just survival and reproduction. And still, even with those billions of years, we still have cancer (as in, a bug in the reproduction process).

We design software. We evolve software. Software doesn't build itself, or decide for itself what's the best way to accomplish some task. We maintain software, we need guarantees, as much as we can get.

Dynamic languages let you think a problem you haven't thought of at design time will somehow solve itself magically at runtime, using components designed weeks before, with knowledge of that time.

It just doesn't work. It's an ideal vision that explodes as soon as your software gets any kind of commercial success and longevity, and you need to maintain it, and evolve it.


> Dynamic languages let you think a problem you haven't thought of at design time will somehow solve itself magically at runtime, using components designed weeks before, with knowledge of that time.

I don't think most people who program in dynamically typed languages believe that. However there are people in the extreme that believe that static typing will save you from errors. Sometimes I worry that the cognitive overhead, the loss of legibility and conciseness, and the lax documentation habits that statically typed languages encourage add complexity to the program, which is antithetical to maintainability. In practice, static typechecking of a dynamic language is probably enough. Erlang-family languages especially do a great job of keeping type annotations looking "more like documentation", which is how it ought to be.


Static typing doesn't eliminate any errors, it just exposes them earlier when you run the compiler rather than when a unit test finds them. Of course, that's if the unit test has been written.


I've had several experiences where static typing forced my hand into making a software design decision which I later regretted.

Many real-world entities don't have a fixed set of characteristics/properties; instead, they have fluid characteristics that can change over time.

Trying to model those entities using static types which have a fixed/static set of properties is a bad idea.

For example, in nature, a tadpole might grow arms and legs and become a frog. Trying to represent a single entity using two different classes 'Tadpole' and 'Frog' doesn't always work... What type is the creature if it's exactly halfway between being a Tadpole and a Frog? Do you keep inventing new discrete types to keep re-categorizing the creature as it grows or do you acknowledge that the exact type of this creature cannot be expressed discretely but that it lies on a continuum?
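A concrete sketch of what I mean by a continuum, in Python (names made up): one object whose properties change over time, rather than a Tadpole that has to be destroyed and re-created as a Frog.

  class Amphibian:
      def __init__(self):
          self.metamorphosis = 0.0   # 0.0 = tadpole, 1.0 = frog
          self.legs = 0
          self.tail = True

      def grow(self, amount):
          self.metamorphosis = min(1.0, self.metamorphosis + amount)
          if self.metamorphosis >= 0.5:
              self.legs = 4          # characteristics appear as it develops
          if self.metamorphosis >= 0.9:
              self.tail = False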


> What type is the creature if it's exactly halfway between being a Tadpole and a Frog?

It's an animal. The type checker will make sure you don't try to water() it. Static typing does not have to mean infinitely granular typing (though of course typing everything "statically" as any is pointless).


> However there are people in the extreme that believe that static typing will save you from errors.

Unless s/errors/all errors/ that isn't extreme at all.


> Dynamic languages let you think a problem you haven't thought of at design time will somehow solve itself magically at runtime, using components designed weeks before, with knowledge of that time.

I think that's a misleading summary. Both dynamic and static languages suffer when it comes to changes over time (people will argue over which suffers more, but both suffer) so solving problems you haven't thought through is hardly a problem unique to dynamic languages.


OK, so what is the benefit of dynamic/runtime binding, then? Isn't it to reach potential combinations of components talking together in ways you couldn't (or didn't want to) anticipate at compile time?


Your comment feels very aggressive and uninterested in actual debate.

My primary benefit is not spending the time telling the computer things it should already know. Dev time is the most limited resource, I prefer not to spend it restating the obvious. And yes, sometimes this restatement could be complex to express.

The biggest benefit I see most static typing advocates benefiting from is IDE-hints, not compile-time checking. But IDE hints aren't what people use to advocate for static typing.

There are many reasons for different approaches - but if you want to limit it to a single thing, feel free. Just don't expect many people to be convinced, or even interested in your arguments.


> My primary benefit is not spending the time telling the computer things it should already know.

Sounds like you're in favour of type inference, not dynamic typing. A lot of statically typed languages don't need you to type out type declarations of your functions.

> Dev time is the most limited resource, I prefer not to spend it restating the obvious.

But on a typical dev job, you spend 10 times more reading code than writing it. And when you read, these type declarations actually save your time.

> The biggest benefit I see most static typing advocates benefiting from is IDE-hints, not compile-time checking.

Uhm, no? I'm pretty sure it's about the guarantees that compiled languages give you - the more invariants, the better. The same logic as with immutability.


I'm a fresh graduate from uni so I'd say that I don't have that much experience. My last years working on the side of studies using Python really made me prefer strongly typed languages like Haskell, Rust, Elm etc. My experience is that the compiler almost always finds my small errors and would-be-bugs which Python exposes at runtime (crashes with e.g. None-type errors).

What would you say the benefits are with dynamic typing?


Python IS strongly typed. I.e., 1 == '1' is False, and 2 + '2' raises an exception.
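For example, in a Python 3 REPL:

  >>> 1 == '1'
  False
  >>> 2 + '2'
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: unsupported operand type(s) for +: 'int' and 'str'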


So forgive jompe for using the wrong term. The point, however, was clear. It wasn't "strong vs weak"; the point clearly was "compile time vs run time". And, well, Python is, from jompe's perspective, on the wrong side of the line, no matter how "strong" Python's type system is.


There is nothing to forgive, many people conflate dynamic typing with weak typing, so I thought I would point it out.

Whether or not a specific method of typing is good or bad depends on the project and the developer(s) working on it. But if you're going to use that as a reason to make decisions, then you should understand the difference.


Since Python 3.6 you can add static type annotations in Python if you want.
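e.g. (the annotations do nothing by themselves at run time; a separate checker such as mypy does the checking, and the names here are made up):

  def greet(name: str) -> str:
      return "Hello, " + name

  # Nothing stops this at definition time; it only blows up when the call runs.
  # A checker like mypy flags it ahead of time as passing an int where a str
  # is expected.
  greet(42)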


I don't have any particular love for dynamic typing, but I really can't find a language I like developing in more than Python. I use type hints wherever I can so I (and others) know what the hell is going on, but I have yet to find a language as expressive and productive with so many great stdlib or community packages as Python.

I started futzing with Golang but I felt like I was always typing 50 lines to do what I could in 3-4 in Python. I might circle back as it matures more and there are more libraries.


Yeah, Go is that way on purpose. You're not meant to use magic that works mysteriously, but to be a bit "step-by-step" in your code, so it's obvious what's being done.


The article explains (very well) what the benefits of dynamic typing are, it's just that Python is a terrible language that spits in the face of Alan Kay's ideas. It's like someone went out of their way to create a language that would not be too paradigm shifting by discarding the crucial elements that Alan Kay keeps raving about and only choosing the most superficial with the only consideration being ease of use and popular appeal.

Try programming in Erlang, Common Lisp or Smalltalk. All dynamically typed languages. All meshing perfectly with Alan Kay's vision. All very different to Python. It's too bad that people's idea of "dynamic typing" has been - mostly - reduced down to Python and Javascript.


I wouldn't say current Smalltalk implementations mesh perfectly with Kay's vision. Their focus on class hierarchies and inheritance breaks with what Kay's initial vision was. That said, it's possible to write Smalltalk code favoring composition over inheritance, or in a much more functional way that focuses on messages.


I think you're confusing strong and dynamic. It's static vs dynamic and strong vs weak.


Because having some guarantees about the correctness of my code before it goes into production is valuable. Because the system I develop on is not the system that will run the code. Wisdom is not being discarded, the importance of different factors is simply not universal.


I've always been on the fence with respect to dynamic versus static typing. In some cases static typing is nice, but the majority of my issues don't arise from cases where the data passed in doesn't match the type a function or class expects. I've noticed this quite a bit in my toy projects with Racket and Common Lisp. I think static typing might have its uses in places where you want to ensure numeric values are within a range, but one can argue that the compiler or runtime system should be competent enough to make arbitrarily large numbers acceptable even if they're rare in execution. For everything else, I think one can ignore types unless the former situation affects your work.


Everyone gets this one wrong. It's static typing within objects and dynamic typing outside of objects. Sadly we don't have languages with a good micro/macro split. Maybe only Axum.


How exactly do types and unit tests stop you from moving around the internal implementation? My impression was always that they help you with exactly that, allowing one to refactor any piece of code without worrying about unexpected side effects.


> It's a shame to see the new generation of developers moving back to statically typed languages (e.g. TypeScript)

Equating TypeScript to Pascal, C++, and Java, or even newer languages like Scala, Kotlin, and Rust is a poor comparison.

TypeScript's type system is extremely flexible and even optional.

It lets you choose your own point on the spectrum of dynamic and static types, and move that fluidly over time if that changes.


We need to acknowledge that this particular field of, let's be generous and call it engineering, we are talking about is, for the most part, slow-moving and rather backwards to begin with. It's also shaped by forces and concerns that are suboptimal. As I wrote in the past:

"Software Engineering must be the only engineering discipline where mastery of tools and barrier to entry in terms of skills are in many ways inversely correlated with market demands. Every clown can write software these days and in fact a lot of clowns get paid six figure salaries to do so. When the domain is largely composed of clowns, standards become extinct and one finds oneself calibrating for a circus."

This is very important when we think about certain ideas expressed by geniuses that are so far ahead of their time that even smart people don't understand them. Unfortunately this seems to have happened to Alan Kay and his ideas. Maybe in 50 or 100 years, the modern software engineering race-to-the-bottom culture will have evolved enough (or more realistically, either exhausted dead-end paradigms or slammed into one unfathomable disaster or another) to appreciate them.


You sound like a really fun guy to hang out with. Also, a lot of those "clowns" are building things that make a lot of money and serve a lot of customers.


I'm not sure I understand why late binding and Alan Kay style messaging is desirable.

If you send a message to something that's supposed to do a task, and it just ignores your message because it was the wrong type or whatever, how are you supposed to debug that, or reason about the correctness of your program at all?


You aren't. You don't have a "program." You have millions of tiny isolated processes that can send messages to one another. Since you cannot observe the state of any process from the outside, and cannot tell if it has failed or not, you have to rely on other things (think tcp/network protocols) to indicate message reception and processing.

You can reason about your protocols, as they have an algorithm. But your object graph as an entire entity is a bunch of black boxes with very few observable properties about which to reason, or indeed even need to in the world of isolation.


In my experience silent failure leads to some of the most difficult-to-debug systems, which end up in a not-quite-right state. Failing as early as possible has always worked better for me; bugs are found much earlier.


Things are different in-the-small vs. in-the-large.

You can't statically typecheck the internet. Oh, and no-one said the failure needs to be silent.


Totally agree failure should be announced for another object to observe and decide to do something about.

I didn't feel the author got the messaging part. 'You send some data to an object' kind of misses the point.

Going back to Kay's inspiration in cellular function, the messaging is more of a broadcast, though proximity matters as far as which other cell responds (Internet routing tables are a good approximation, but you need something more dynamic to allow objects to be created and destroyed).


All Apple software uses Objective-C and seems fine.

Run-time checking is not unrealistic. The Internet runs on it. Ethernet runs on it.

Too much error correction, as in SNA and Token Ring, makes the network brittle.

You think you are helping error handling, but by enforcing failure you have not decoupled the "objects". By assuming types, a minor change makes the whole thing collapse on you.

It is the difference between IBM's network and the Internet. Or OSI vs IETF.


> objc - fine

People forget this, particularly the ones who claim "this can't possibly work". If that's your position, you are simply wrong. Now which works better in what ways is something we can talk about.

> Internet

This is another big one people seem to not think about. Kay was thinking about scaling up, or rather about taking an idea that can scale up and then scaling it down. The internet and the WWW clearly demonstrate that you must late-bind. In fact, Allen Wirfs-Brock noted how the Internet is a lot like a huge distributed Smalltalk image: it runs all the time, it's late-bound, it's very stateful and you can't take it down and restart it from scratch.


Have successfully completed tasks call home to mama. No call, no work. Easy.

Alan Kay and like-minded people are looking at biology, and to an extent physics, as the successful approach to pervasive computation at large.

It is the desirable feature set of the biological organism that informs that perspective: self-healing, self-regulating, regenerative systems operating in an open-ended operational context.

So that is the holy grail. Or at least one of them. Clearly biological systems can do all that. Can we do this with software? That's what Alan Kay wants us to find out.


Biological systems do that because they evolve. Interactions between independent systems happen due to chance changes in some of them. But the nature of the interaction is not something controlled. Switch one bit of a genetic code and a whole bunch of behaviors change (the ultimate in side effects). This is the anti-Dijkstra; a system that cannot be fully reasoned about.

What we're seeing in "successful" biological systems is survivorship bias. They are useful because the system they operate in happens to be such that their function is useful to the system (they are in a temporary equilibrium). And that system is not under anyone's control, nor can it be, because every tiny change in one component, or the environment they operate in, alters that equilibrium in ways we can't predict. But there's no controlling entity pushing biology towards a particular function, so this doesn't matter in nature.

A computer system modeled after this would be by nature difficult to fathom and brittle to change.


I was not discussing fitness. Even unfit organisms are constituted according to the same organic order as the fit specimen.


Yes so true


Sending messages asynchronously and not knowing who (or even if anyone) will handle it is pretty desirable. Like a holy grail of writing maintainable software. (Think of async communication via a queue.)

It isn't always possible, but it's worth the constant vigilance to drive each feature towards that type of architecture.
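A minimal in-process sketch of the idea in Python (standard library queue/threading; the event names are made up). The sender only knows about the queue, not about who, if anyone, is consuming:

  import queue
  import threading

  events = queue.Queue()

  def emit(event):
      events.put(event)                # fire and forget

  def handler():
      while True:
          event = events.get()         # whoever happens to be listening picks it up
          print("handled:", event)
          events.task_done()

  threading.Thread(target=handler, daemon=True).start()
  emit({"type": "order_placed", "id": 42})
  events.join()                        # only so this demo waits for processing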


OK, I must be missing something fundamental, then.

Let's say you want to take payment and ship a product. You send a message that payment of X is needed. And... nothing happens because the component that handles that isn't listening for whatever reason. Do you wait for a message saying payment was posted? How long do you wait? What about the UI? How long does the customer wait?

It seems like a messaging system like this would require all calling components to micro-manage anything they're calling, to make sure things actually get done...


If I'm the service that needs a payment, then it is up to me to make sure it happens. If it doesn't, then I need to deal with that, because that's something that can happen with a payment.

So if I don't hear from the payment that it has been initiated, then I have to go on the basis that I can't get a payment.

But I also don't get to tell the payment how to deal with its internal activities, I don't want to know about it retrying, or working out which payment provider to use etc.

Objects aren't FiniteStateMachineManagerFactories. They're actual things that your system is built to handle. If an object needs an FSM, it'll get one, just like cells have mitochondria.


I like pub/sub for sending events (where the sender doesn't expect any responses).

I dislike it for modelling RPC which is what you seem to be referring to.


> It seems like a messaging system like this would require all calling components to micro-manage...

I think that's part of the trick, actually. When every component of a system behaves in similar ways, it's easier to extract that behavior into re-usable patterns.

Rather than each component worrying about handling failure and micromanagement, that job is delegated to another component, a supervisor or orchestrator. Supervision trees, automatic service discovery, and elastic resource managers all come to mind.

What payment systems don't send messages to communicate with the credit card company, or the eCheck service, or Venmo/PayPal/Apple pay? Websites and native apps alike have spinning graphics and loading bars, messages that say "please don't refresh the page while we do this". These systems are fundamentally asynchronous to begin with. By uniformly applying the practices you get consistency, in exchange for a higher low-bar of complexity (which you're likely to exceed anyway).


A message-sending function can block until a timeout expires or a response is received.
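Roughly this, as a Python sketch with made-up names (the same shape as an Erlang receive with a timeout, or a fetch() with a deadline):

  import queue

  def request(mailbox, message, timeout=2.0):
      reply_box = queue.Queue(maxsize=1)
      mailbox.put((message, reply_box))        # the recipient answers via reply_box
      try:
          return reply_box.get(timeout=timeout)
      except queue.Empty:
          return None                          # caller decides what "no reply" means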


How is a timeout different from sending a message without knowing how it was handled?


Have you programmed anything on the web? fetch() has an interface just like that.


That is not the point. The point is having the ability to fully control how to react to a message.

Sometimes just ignoring certain messages is fine: You might slot in an object as a sink to log certain messages and not care about others. Other times you want to be able to do things like delegate certain subsets of messages to other objects without having to know the precise set of messages in the sender or ultimate recipient want to exchange.

Returning errors by default is reasonable, as long as you can override that when it makes sense.
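In Python terms (Ruby's method_missing gives you the same hook), a sketch with made-up names:

  class LoggingSink:
      """Handles the messages it cares about; quietly accepts the rest."""
      def __getattr__(self, name):
          def handle(*args, **kwargs):
              if name.startswith("on_"):
                  print("sink saw:", name, args, kwargs)
              # everything else is swallowed on purpose
          return handle

  class Delegator:
      """Forwards messages it doesn't understand to another object."""
      def __init__(self, target):
          self._target = target
      def __getattr__(self, name):
          # unknown names still raise AttributeError, so errors stay the default
          return getattr(self._target, name)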


That sounds great for a quick & dirty prototype, or for debugging and testing. But it would be an absolute nightmare for building and maintaining large-scale projects.


On the contrary. It's fantastic on large-scale projects in part because it keeps them much smaller.

I think you're picturing just losing messages, but that is not what happens unless you explicitly design a system to randomly drop messages. The reality is a system where by default you'll typically still get exceptions if you call something that does not exist (and in production use, the stack traces from those exceptions likely end up somewhere like Bugsnag), but where you can selectively override what happens and e.g. redirect messages somewhere else. If someone sends a message that truly can not be handled, we'll still throw exceptions.

But that flexibility means you save a massive amount of boilerplate. When I switched from C++ to Ruby, that was a major driver - I'd spent years doing meta-programming using templates, and was dismissive of dynamic typing. Until I tried it and realized that, for the pitfalls it has, the amount of ceremony and boilerplate that was cut was astounding.

I write all my code in a homegrown editor these days. It uses Drb to split the UI frontend from a server that holds all open buffers. Drb is a Ruby RPC library that uses the ability to intercept all messages to dynamically create proxy objects that serialize calls over the network connection. Because we can do that, it's almost entirely transparent and no custom code whatsoever needs to be written.

If the server side throws an exception, my client catches it and throws me into a REPL (or it could have dumped state and exited if I preferred). Messages don't get randomly lost, even though the client-side objects have no idea what the actual interfaces of the server-side objects are.


You mean like the internet?


The Internet as a whole [1] is not designed or maintained – it evolves.

[1] As opposed to its components (infrastructure, protocols, individual websites), which are designed and maintained, and which would in general be a nightmare to design and maintain if they followed the principles described by vidarh.


Their point is that a lot of the underpinnings of the internet are based on not assuming safe delivery, nor assuming the sender will get things right, and generally accepting that if you want to interoperate reliably you will have a very hard time if you assume everything works perfectly.

But taking it a point further: A huge amount of large websites are built on languages with this level of dynamic features, and often take advantage of it heavily. Every site that uses Rails for example (or any Ruby framework) will likely heavily rely on runtime introspection of the database to dynamically generate parts of the ORM apis.

I'm right now working on a Ruby application where I went a step further and axed a large chunk of semi-manual API writing by writing a few small components that dynamically generate most of the API at runtime from small pieces of metadata and the actual state of the database. A large chunk of the user interface is similarly dynamically generated from data produced by the dynamically generated backend API.

There is no way we could deliver what we deliver with the kind of size team we're using if we could not rely on introspection and dynamic features like this to generate most things.


Rapid prototyping and hot-patching running systems come to mind. Note that traditionally, Smalltalk environments are modified at run-time and dumped to images for deployment.


Not that I'm advocating the approach, but I think the benefit is that if your recipient is out of action then your caller survives.

In a typical Simula-style program (C++, Java, PHP, C#, etc.), if you hit a fatal error in the recipient's object, the caller and the entire program come crashing down; in something like Erlang the caller can keep going.

Before jumping to putting everything on a queue, there are downsides like you've identified which is why there's this comment from Rich Hickey on the erlang variant of Clojure: https://github.com/clojerl/clojerl#but-but-rich-hickey-lists...

Essentially, it's good when you need it but an overhead when you don't, kind of like async in my mind, use it when you absolutely need it, but otherwise, you'd be better operating without it


Why better without async? You feel the same about std::async?


As parent mentioned, there is overhead to using async: both in actual execution (e.g. likely spinning up another thread, plus creating a future object) and cognitively (in terms of following control flow, and knowing when exactly things are happening).

It's not that it isn't great when you do need it, but there are also plenty of cases where you don't need it and it's just unnecessary overhead.


You step into the call and see what the object did with the message. In Smalltalk, if a message cannot be dispatched to an object or its inheritance hierarchy and there is no custom handling, then I believe it errors or pops open the debugger for you.


I have a ton of respect for Alan Kay and think he's a genius. But why does it seem like every time he talks about OO it's always painting an apocalyptic picture like we're in some kind of twilight zone alternate nightmare reality of broken patterns and models? Surely our concept of objects and OOD can't be that bad, but his apparently contrary outlook is just so persistent...


> I have a ton of respect for Alan Kay and think he's a genius. But why does it seem like every time he talks about OO it's always painting an apocalyptic picture like we're in some kind of twilight zone alternate nightmare reality of broken patterns and models?

Because we are? Have you seen most OOP code-bases, they are a train wreck!


Could you elaborate on the “most” part? Coming from my experience on Rails over the past couple of years, I’m quite happy with what I’ve found so far.


Well, Ruby is Smalltalk with a different syntax..


I think one of the biggest issues is that languages such as Java account for large percentages of existing OOP in the wild and Java is a "kitchen sink" language with too many supported paradigms and styles.

For example, look at class attributes and methods. I'm not sure they're needed but in a language like Java the semantics often leads to wildly different programming styles within the same code base. And it's hard to enforce consistent styles using conventions.

That said, I've seen/read Kay asked directly, on more than one occasion, to elaborate on his notion of a real OOP language implementation, and his answers are always too coy and slippery for my taste.


JavaScript is functional and oh boy!


What does "functional" mean to you? If "OO" has a hundred different meanings, then "functional" must have at least a thousand. To me, JavaScript is definitely not functional - it lacks a notion of purity, and its meaning is the evaluation of statements, not the construction of expressions.


My point was popular languages attract masses of developers and the resulting quality of code is poor on average.


That's a good point, but it's not implied by "JavaScript is functional and oh boy!" at all.


[flagged]


The bar for thoughtfulness is higher than this here. Even if the J-word gets us, could we please be more informative and kind or just not say anything?

https://news.ycombinator.com/newsguidelines.html


This


> Surely our concept of objects and OOD can't be that bad

Yes, it is. Object orientation is probably the worst idea to ever have appeared in computer science, both from a philosophical and practical point of view: it is an unsound principle and it leads to disastrous engineering practices.

We will be mocked mercilessly by our descendants.


Actually, OOP is the most successful architectural style in existence.

Pretty much all the GUI software you see is OOP. Modern personal computing was invented on OO. The Web was invented on a NeXT in Objective-C on AppKit.

OOP was seen by Fred Brooks as one of the few possible technologies that come close to a "silver bullet" when he wrote "No Silver Bullet", and ten years later he said that this had come true.

Today, we tackle problems orders of magnitude larger and more complex than pre-OO. We have reuse at levels that were only dreamed of (and largely thought impossible) before.

etc.


Yeah, history won't be kind to OOP and OOD.

It will go down as a massive mistake that has cost billions in money and wasted human effort.


>Surely our concept of objects and OOD can't be that bad

How would you know if it were?


Because software isn't as cooperative and scalable as Alan Kay says. We are getting better at this, but probably at a painstakingly slower rate than he would like.


There are actually two languages (aside from Smalltalk) I know of that are very close to Alan Kay's idea of message-passing objects (as I understand it): Elixir and Erlang.

They're usually regarded as functional languages, but they have a message-passing actor model at their core. It might not be the best for everything, but they are definitely very good in their niche of embarrassingly parallel computation.


I would go as far as to say that the nature of most software development is similar to the state of physical infrastructure in the United States: failing because of lack of maintenance and poor practice. I strongly believe that these problems will lead to a foundational collapse of our current processes and the systems that were built from them. It is equally likely that we simply keep on living with minimal modification to the current way we do things, with what I imagine will be slowed innovation.

All of the signs of general failure are starting to show: the great successes of recent computer attacks against the United States, against both public and private institutions, and the regular drum beat of data leaks.

Imagine having the level of education and knowledge that Alan Kay does and seeing a contemporary code base (I don't imagine that my work would be treated kindly by Mr. Kay)? I imagine that his view of whatever is coming is far worse than mine.


Language features: it depends on what you need.

I am a huge fan of Lisp (I started using it professionally in 1982) and, to a lesser degree, Smalltalk (I wrote a nifty little NLP library for Pharo). That said, sometimes I really like the strict typing of Haskell, which I am using right now to develop a commercial product.

Smalltalk, especially with the increasingly good modern Pharo system, is a great platform for some applications, especially for getting close to data and for flexibility in trying out UI ideas.

In any case, Alan Kay has always been an inspiration to me.


What is late binding of everything?

To me it's a bit like doing the opposite of what macros in Lisp do, but doing it at run-time rather than compile-time.

Take Ruby for instance (which is interpreted in its standard implementation). You can define methods at runtime, and the 'keywords' that allow you to define new methods are themselves methods.

As a result you can extend the more "static" part of the language, adding new ways to declare methods and other "quasi-syntactic" sugars, because this static part has been moved close to run-time and is executed like any other part of code. There are still a few things you can't do, like building a new way to declare classes or changing the superclass they point to.
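
The comment is about Ruby, but the "methods are defined by ordinary code at run time" idea can be loosely sketched in TypeScript as well (prototypes give JavaScript a weaker version of it; the names here are purely illustrative):

    // "Declaring" a method is itself ordinary code that runs at run time.
    class Greeter {}

    function defineMethod(klass: any, name: string, body: (...args: any[]) => any): void {
      klass.prototype[name] = body;
    }

    defineMethod(Greeter, "greet", function (this: any, who: string) {
      return `hello, ${who}`;
    });

    const g = new Greeter() as any;
    console.log(g.greet("world")); // "hello, world" -- the method did not exist when the class was written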

Edit: this interpretation is driven by the notion of PHP's Late Static Bindings[1].

[1] https://php.net/manual/en/language.oop5.late-static-bindings...


Rather than seeing "late binding of everything" as a specific implementation or bits and pieces of a particular implementation, I find it much clearer to see it as a general self-referential idea. Why then would one _go against it and try to constrain it_?

Taken in that light, as the article does a really good job of explaining, late binding of everything is a modeling process (or a process of thinking about things). It means that I should try and keep parts of the structure that I'm building up as flexible as they can be so that they match the model in my mind. One benefit of doing that is that I can later revisit and rework them. Another benefit that is seldom talked about, is that _the object I'm working on no longer constrains my thinking_.

In other words, only specify in detail that which is crystal-clear in my mental model, and get away with fuzzier representations elsewhere. Tools and languages that fall in line with that process not only empower one to work in this way but make it _ultra-efficient_ by triggering short feedback loops and allowing the programmer to mesh with the modeled object.


Late binding to me means late binding of the implementation, not the interface/protocol. So the discussion above about static vs dynamic languages is orthogonal. You can do it in either. In the example at the end of the article the protocol is that you give the shopper object a budget and a list of things to buy. But you could replace the implementation (late-bound) for one that does machine learning to optimise the result for cost or quality without changing the interface.
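
A minimal TypeScript sketch of that point, loosely based on the article's shopper example (the names and the trivial strategies here are assumptions, not the article's code): the protocol stays fixed while the implementation is bound late.

    // Callers depend only on the protocol: give a shopper a budget and a list.
    interface Shopper {
      shop(budget: number, list: string[]): string[];
    }

    class GreedyShopper implements Shopper {
      shop(budget: number, list: string[]): string[] {
        // naive strategy: buy items in order, assuming a flat price of 10 per item
        return list.slice(0, Math.floor(budget / 10));
      }
    }

    class OptimizingShopper implements Shopper {
      shop(budget: number, list: string[]): string[] {
        // stand-in for a smarter (e.g. learned) strategy; same protocol, different binding
        return [...list].sort().slice(0, Math.floor(budget / 10));
      }
    }

    function runErrands(shopper: Shopper): string[] {
      return shopper.shop(30, ["milk", "eggs", "bread", "coffee", "tea"]);
    }

    console.log(runErrands(new GreedyShopper()));     // implementation chosen at the call site,
    console.log(runErrands(new OptimizingShopper())); // not baked into runErrands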


Some interesting points:

- One cell has protein molecules with 5,000 atoms each; 30% of a cell is 120 million components that transmit information; about 100GB of state.

- The internet is the only successful OO program

- Your program should be able to change its code as it runs

My own interpretation of the "messaging" paradigm is this: you send 100 people a letter with some random symbols in it and say, "I would like a pony." Over time you will get letters back, and eventually you will get a response which is basically what you were looking for. Then you send more letters.

Also, I'd mention that based on the bugs in the Erlang code I've seen, it should stop being used as some kind of holy savior for the unscalable mess of OO code out there. I don't find it any better than C code.


What Alan Kay describes reminds me of actor systems like Akka. Each actor is like a "cell" isolated from other "cells". An actor receives messages of arbitrary types ("messaging" and "late binding") and can do what it wants, whenever it wants. Actors can form a hierarchy to handle errors. This again is much like how cells handle injuries.

Interestingly, the Akka community has put a lot of effort into implementing statically typed actors. But as far as I know (I'm not quite up to date), there is no final solution to it. It is controversial whether it is even the right thing to do.


>, he realized that while software routinely has trouble scaling, cells can easily coordinate and scale by a factor of over a trillion, creating some of the most fantastically complex things in existence, capable of correcting their own errors. By comparison, the most sophisticated computer software programs are slow, tiny, bugfests. Kay's conception of OOP starts with a single question: how can we get our software to match this scalability?

Here's my pet theory on why Kay's vision for OOP didn't win in the marketplace of ideas: The software industry achieves "extreme late-binding" via network protocols like TCPIP/HTTP instead of a single programming language's "message bus".

Instead of using the "message bus" of Smalltalk or Objective-C's "objc_msgSend()", the world has decided to express the evolution of software via multiple programming languages and runtimes and by the mechanism of software updates instead of depending on a single language ecosystem like Smalltalk to write a metaphoric cell biology system to evolve itself.

The majority of software we write isn't burned onto a printed circuit board and never ever updated again. An extreme example of an algorithm that's forever engraved in hardware is the computer on the Voyager space probe.[0] Instead of launching a computer out to space and never touching it again, we have the luxury of just updating the software.

If anyone remembers the 1980s online services like Prodigy and Compuserve, they would have scheduled maintenance outages. E.g. they'd send out a notification that "Prodigy will be down for service from Saturday midnight to Sunday at 4am".

But consider today's massive web properties like Amazon.com, Facebook, Google. They run virtually 24/7 with no scheduled maintenance downtimes. How do they do that even though we know they're constantly deploying new software, microservices, etc -- and -- they're not using an extreme-late-binding programming language like Smalltalk? Because they achieve that dynamism at the http network layer instead of the programming language with devops practices such as "continuous deployment".

E.g. The url can be thought of as a "request message" in Kay's parlance. Here's a url that uses Google Translate to convert Russian text to English:

https://translate.google.com/?hl=en&tab=wT0#view=home&op=tra...

Before September 2016, Google's backend software responding to that url was linear algebra on a corpus of text. After that, they completely switched out the translation engine for a deep-learning neural network. The http layer allowed Google to transparently change out an entire backend stack without users being aware of it. There was no publicized "maintenance outage" to swap out the entire language translation engine. Presumably in the future, that same url ("message") will extremely late bind to a different and better translation engine ("receiver object").

Today, we also have constant software auto-updates on the desktop and smartphone. How do Chrome/Firefox evolve new features even though they're written in static C++ instead of dynamic Smalltalk? Because the browsers auto-download the updates and install them.

[0] https://www.allaboutcircuits.com/uploads/articles/voyager_fd...


>Instead of using the "message bus" of Smalltalk or Objective-C's "objc_msgSend()", the world has decided to express the evolution of software via multiple programming languages and runtimes and by the mechanism of software updates instead of depending on a single language ecosystem like Smalltalk to write a metaphoric cell biology system to evolve itself.

I find your post very insightful. I'd just like to add that you have that kind of message passing in Elixir and Erlang, and they also support hotswapping code. So the idea didn't completely lose in the marketplace of ideas. It did lose the war for the name OOP though.

I'm not a fan of what we understand as OOP today either; just looking at the GoF book should make you think about whether you want to work in the model that makes all of this necessary. So I kind of understand Alan Kay. I just think calling Elixir an actor-based functional language works just as well for me.


The GoF book was written for Smalltalk and C++, so the limitations that compel the invention of design patterns clearly apply to Kay’s own language as well.


Yes, so true. I grew up with stacks of floppy disks with huge statically linked applications. The whole era was preoccupied with basically one massive problem: how to achieve modularization and some degree of reuse.

Now these issues are still important, but cheap storage and fast, always-available communication have changed software development to the point where Alan Kay's points sound a bit like misplaced puritanism. I respect him but I don't think his views have the relevance they once did.


This is one of the arguments for microservices and the tooling around them: microservices act as the new 'objects' given this perspective.


> Here's my pet theory on why Kay's vision for OOP didn't win in the marketplace of ideas: The software industry achieves "extreme late-binding" via network protocols like TCPIP/HTTP instead of a single programming language's "message bus".

Sounds precisely like his vision.

https://youtu.be/oKg1hTOQXoY

In this video he refers to the notion of treating each object as a complete computer, with its own URL.


An insightful and well illustrated perspective.


Objective-C is better in this respect. It is more about message passing than type enforcement.


Ruby even more so. (Both Ruby and ObjC were originally heavily influenced by Smalltalk.)


Great article. The idea that software should learn from billions of years of biological evolution is super interesting.


>>In other words, you don't execute code by calling it by name: you send some data (a message) to an object and it figures out which code, if any, to execute in response. In fact, this can improve your isolation because the receiver is free to ignore any messages it doesn't understand. It's a paradigm most are not familiar with, but it's powerful.

A Redux reducer consuming an Action object would possess the same properties, no?
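
For concreteness, here is a minimal reducer-style sketch in TypeScript (no Redux library needed; the action names are made up) showing the "free to ignore messages it doesn't understand" part of that comparison:

    type Action = { type: string; payload?: unknown };
    type State = { count: number };

    function counterReducer(state: State, action: Action): State {
      switch (action.type) {
        case "increment":
          return { count: state.count + 1 };
        case "decrement":
          return { count: state.count - 1 };
        default:
          return state; // unknown "message": silently ignored, state unchanged
      }
    }

    console.log(counterReducer({ count: 0 }, { type: "increment" }));    // { count: 1 }
    console.log(counterReducer({ count: 5 }, { type: "selfDestruct" })); // { count: 5 }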


The term object-oriented has often been misinterpreted. Alan Kay explained that "[the term object-oriented] was a bad choice because it under-emphasized the more important idea of message sending".


What is a modern language that comes close to embodying these ideals?


Smalltalk comes to mind. You might argue over whether it counts as modern or not, but working in Smalltalk for two years was something of a revelation to me, at least.


Check out languages in the OTP ecosystem, especially Erlang and Elixir.


Between elegant and 'industry-friendly' languages, the latter always wins. Often the shortcomings of the language are compensated for by the tools around it.


I always wonder what exactly he meant. I'm not very familiar with Smalltalk and CLOS, though I have tried them. But working with Erlang for a while has made me think I'm slowly getting what he said. I find these ideas really fascinating.

Try to think about the following pieces:

1. A bee dies, the hive won't explode. Millions of cells die every minute within you.

In Erlang, processes (objects) are usually organized in a bureaucratic structure, with supervisors letting other processes do the actual job. If one of them dies or fails, the supervisor can just kill and replace it. This really looks like biological systems or human society. Not a world where the boss lets someone do some work, and then the boss and the worker, along with the whole world, explode if the worker fails (exception handling in 99% of OOP languages is not OOPish at all; it's totally imperative instead of modeling the relation between objects).

These correspond to the biological metaphor in the talk from Alan Kay in OP's article.

2. You can find and talk to someone alive if you know his address, email or phone number.

Java has object references, but they're not transparent to other systems; there is always the 'outside world' concept in this kind of reference or pointer system, similar to the ST monad in Haskell. For a normalized database, there are transparent addresses for entity records, but they are dead and cannot talk to anyone; every time you need one, you sort of revive it, interact with it for a short period, then kill it and take its guts back to the database.

In Erlang, you can register any process (object) with an address like {user,42}, and any other object can talk to it if it knows the address, even from another server. Just like how URLs work, as mentioned in Alan Kay's talk. (See the small registry sketch at the end of this comment.)

3. The world is concurrent.

Say you have approximately a thousand people in a room. In order to count them, you have them all stand up and tell everyone to hold the number '1' in their mind. Then everyone finds another person, the two add their numbers, one of them sits down, and the other remains standing with the combined number; then the process repeats. The last person standing has the count.

Usually, you only need very few rounds to get the answer (see the counting sketch at the end of this comment). And this is what computer science is about. The problem is, that's not the way most computer programs work. For most programs, even if you modeled 1000 people in Java, there's still only one person doing one thing at a time. Everything runs on a monolithic thread. If you call libraries, you are giving them the most important thing you have -- the thread. And they don't promise they will return it to you.

By contrast, in Erlang every process (object) must have its own resources; no one can stop other people from doing things, and no one can use up all the resources. The real world is a concurrent world where everyone is an isolated individual who can do things at the same time.
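
The registry sketch mentioned in point 2: a toy TypeScript map from addresses to handlers (this is nowhere near Erlang's registered processes or distribution, just the "talk to anything whose address you know" idea; all names are made up):

    type Handler = (message: unknown) => void;
    const registry = new Map<string, Handler>();

    function register(address: string, handler: Handler): void {
      registry.set(address, handler);
    }

    function send(address: string, message: unknown): void {
      registry.get(address)?.(message); // silently drops messages to unknown addresses
    }

    register("user:42", (msg) => console.log("user 42 received", msg));
    send("user:42", { type: "greet", from: "user:7" });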
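
And the counting sketch mentioned in point 3: pairing people up halves the number standing each round, so roughly log2(n) rounds suffice (sequential TypeScript; the point is the shape of the computation, not real parallelism):

    function countByPairing(people: number): { count: number; rounds: number } {
      let counts: number[] = new Array(people).fill(1); // everyone starts with '1'
      let rounds = 0;
      while (counts.length > 1) {
        const next: number[] = [];
        for (let i = 0; i < counts.length; i += 2) {
          next.push(counts[i] + (counts[i + 1] ?? 0)); // one of each pair sits down
        }
        counts = next;
        rounds++;
      }
      return { count: counts[0], rounds };
    }

    console.log(countByPairing(1000)); // { count: 1000, rounds: 10 }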


https://lobste.rs/s/8yohqt/alan_kay_oo_programming#c_5xo7on

>> He doesn’t have random opinions about “objects”, he invented the word

> Kay did not invent the term “object”.


Alan Kay says he did.

"I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea."

Source: http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...


> Alan Kay says he did

How sad if true! I'm reserving the right to think you misunderstood what he was trying to say there.



