I like to think that all three languages benefited from the competition.
Bottom line, it demonstrated an answer to a problem that a lot of people were having, and it promised to deliver that answer in an 'open source' kind of way.
I have no way of confirming the authenticity of the following; I read the whiteboard, but anyone could have written it. On a whiteboard in an office associated with Adele Goldberg, from which ParcPlace Systems had moved out, was penned the question: "What is the killer app for Smalltalk?" Below that, in a different hand, was written: "and that is how the unholy love child between C++ and UCSD Pascal wins."
In Self we have two ways of specializing: parents and prototypes.
Parents are objects that other objects "inherit from": at run time, property lookups are delegated to a parent. The difference between a parent and a prototype is that changes to the prototype do NOT affect the derived instances, whereas changes to a parent DO affect the derived instances.
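A toy sketch of the difference, in Python (which has neither parents nor prototypes natively; Obj and its get method are invented here purely for illustration):

    import copy

    class Obj:
        # Toy object: a dict of local slots plus an optional parent
        # that lookups are delegated to at run time.
        def __init__(self, parent=None, **slots):
            self.parent = parent
            self.slots = dict(slots)

        def get(self, name):
            if name in self.slots:
                return self.slots[name]
            if self.parent is not None:
                return self.parent.get(name)  # run-time delegation
            raise AttributeError(name)

    proto = Obj(x=0, y=0)
    child = Obj(parent=proto)      # parent link: a live, ongoing relationship
    clone = copy.deepcopy(proto)   # prototype copy: a one-time snapshot
    proto.slots['x'] = 42
    print(child.get('x'))   # 42 -- changing the parent DOES affect the child
    print(clone.get('x'))   # 0  -- changing the prototype does NOT affect the clone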
So, when I read about "class-based" versus "prototype-based" languages, I cringe. It is really "class-based" versus "parent-based". How did cloning get confused with run-time delegation?
Self introduced the notion of self-describing instances. That is the essential coolness. The simplifying notion.
Compare and contrast with other Smalltalk children, like Java or E, where this isn't possible because classes are closed. (E doesn't have classes, but it has a similar property, in that object literal scripts are closed for modification.)
In the original Lieberman paper you created new objects that were empty and inherited from the prototype. Then you added local slots (name/value pairs) to the object for anything that was different from the prototype. Languages based on this concept often have two kinds of assignments: one that adds a new local slot and another that just replaces the value of the old slot (either local or inherited).
In Self, on the other hand, a prototype is a template for the object you want, and you clone it to get a new object. Once it has been cloned, there is no longer any relation between the prototype and the new object, and they evolve separately. These kinds of languages tend to have only one way to assign values to slots.
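To make the contrast concrete, here's roughly what the two Lieberman-style assignments could look like, reusing the toy Obj sketch above (invented helpers, not real Self or Lieberman syntax):

    def assign_local(obj, name, value):
        # Lieberman-style "define": always create or overwrite a local slot,
        # shadowing whatever the parent provides.
        obj.slots[name] = value

    def assign_existing(obj, name, value):
        # Lieberman-style "set": replace the slot wherever it actually lives,
        # walking up the parent chain if it is inherited.
        o = obj
        while o is not None:
            if name in o.slots:
                o.slots[name] = value
                return
            o = o.parent
        raise AttributeError(name)

In the Self model there is no chain left to walk after cloning, so a single plain assignment into the object's own slots is all you need.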
Many languages that claimed to follow the Self model (like NewtonScript and Io) actually use the Lieberman model instead. In either case the prototypes are fully functioning examples of the object you want to create new instances of. So, unfortunately, it is natural that one word is used for both. But this results in very confusing discussions when someone is talking about a language with one model while thinking about the other model instead.
Edit: Oh I see. Cloning is the alternative to instantiation. Parent slots are the alternative to classical inheritance.
But it didn't have to be that way. And now the sentiment is that OOP is bad, inheritance is evil, and classes are the worst, forcing one to predefine a taxonomy that's likely to need refactoring.
But prototypal languages can be easily changed. Just change the parent slot(s), or modify the object itself, etc.
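In terms of the toy Obj sketch above, "just change the parent slot" really is a single run-time assignment:

    animal = Obj(sound='...')
    feline = Obj(sound='meow')
    pet = Obj(parent=animal)
    print(pet.get('sound'))   # '...'
    pet.parent = feline       # reparent at run time
    print(pet.get('sound'))   # 'meow' -- behavior changed, no class hierarchy touched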
Are there any codebases around 100K to 1 M lines of code written in a prototypal style, which are actually in production use?
You can claim thousands of such systems for classes. In that sense, classes are a success. That people write horrible class-based code isn't a knock against them. People also write horrible procedural code. Most code is bad. But there is some code with classes that is very good.
There was sentiment in the 90's and early 2000's that OOP is bad. I think the world has learned how to use classes since then -- e.g. no more large inheritance chains and fragile base classes. Not everything is an object -- some things are just functions, and some things are just data with no behavior.
As far as I can see, prototypes are worse along all dimensions than classes.
However, the flip side of this is that more freedom is not always good for more readable software. GOTO, for example, can express any control flow that for/while/do-while/if/switch can, plus a number that they cannot (coroutines/exceptions/etc.), but as an industry we've moved away from GOTO because most programmers can't hold that flow control in their heads. Prototypes are the same way: they grant a lot of freedom to implement fancy abstractions, but many programmers seem to be unable to understand the resulting abstractions, so they don't find widespread use.
Asking for codebases of 100K-1M lines in prototype-based languages is the wrong question. Because prototypes let people define abstractions that other programmers find unreadable, they a) let you write equivalent software in fewer lines of code and b) get rewritten in class-based languages as soon as you have more than a handful of programmers working on the codebase. They're much more likely to be used by a small team of hackers who sell their startup for $40M or so and then vest in peace while another team rewrites all their code in Java than by a big company.
If you broaden the question to "has anyone ever made significant money working on or with Self", the answer is yes:
(Fun fact: Urs Hoelzle, Animorphic's CTO and sender of the second message in that thread, later went on to become employee #9 and the first executive hire at Google.)
(I also worked at Google and am somewhat familiar with the Self to HotSpot to v8 heritage.)
Your second paragraph is exactly what I think is wrong with prototypes. There's not enough structure, and not enough constraint. Constraints are useful for reasoning about programs. You might as well just have a bunch of structs and function pointers (and certainly many successful programs are written that way).
It's analogous to every library in C rolling its own event loop or thread pool, leading to a fragmentation of concurrency approaches. Go and node.js both unify the approach to concurrency, so applications don't end up with three different concurrency abstractions.
I don't buy your 3rd paragraph. Python is class-based; it has the characteristic you're talking about with respect to a small team of hackers; and that has been empirically supported by hundreds or thousands of startups being acquired (Instagram, etc.) and even huge companies created (Dropbox).
I honestly think prototypes have failed in the marketplace of ideas and there's a good reason for that. I use metaclasses all the time but not for dynamically changing sets of methods. That seems like a horrible idea. The way I use them is for generating types from external sources like CSV/SQL/protobuf schemas.
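For a flavor of what I mean (made-up names, and a plain type() call rather than a full metaclass, since that's what a metaclass invocation boils down to):

    import csv, io

    def class_from_csv_header(name, csv_text):
        # Read the column names from the header row and generate a class
        # with one attribute per column.
        fields = next(csv.reader(io.StringIO(csv_text)))

        def __init__(self, **kw):
            for f in fields:
                setattr(self, f, kw.get(f))

        return type(name, (object,), {'__init__': __init__, 'fields': fields})

    Row = class_from_csv_header('Row', "name,age,email\nAda,36,ada@example.com\n")
    r = Row(name='Ada', age=36)
    print(r.name, r.age, Row.fields)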
I actually think metaclasses are probably a better solution than prototypes, because they let you write the majority of your code in a class-based style and only incorporate funky abstractions when they're really necessary, which is typically infrequently. My point is that prototypes are strictly more powerful than (non-metaclass) class-based systems, and that this power lets you build powerful abstractions that can dramatically decrease the number of lines of code you need to write for an initial system.
By the way, "composition" is ambiguous:
- Composition of classes, which is usually a shortcut used to describe inheritance of interfaces and implementation through aggregation (there's only one mainstream language today that supports this natively: Kotlin); see the sketch after this list.
- Composition of functions, which is pretty much mainstream in most popular languages today.
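To illustrate the first sense, here's a hand-rolled Python version of what Kotlin's `by` delegation does natively (Engine and Car are made-up names):

    class Engine:
        def start(self):
            print('vroom')

    class Car:
        # Composition-by-aggregation: Car holds an Engine and forwards any
        # attribute it doesn't define itself to that component.
        def __init__(self, engine):
            self._engine = engine

        def __getattr__(self, name):
            return getattr(self._engine, name)

    Car(Engine()).start()   # 'vroom', via delegation rather than inheritance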
That approach results in fewer unnecessary and more useful abstractions, because they follow the contours and requirements of the actual working code, instead of trying to predict and dictate and over-engineer it before it even works.
Instance-First Development works well for user interface programming, because so many buttons and widgets and control panels are one-off specialized objects, each with their own small snippets of special purpose code, methods, constraints, bindings and event handlers, so it's not necessary to make separate (and myriad) trivial classes for each one.
Oliver Steele describes Instance-First Development as supported by OpenLaszlo here:
He explains how the "Instance Substitution Principle" (where class definitions look like instance definitions) enables a more seamless, granular transition from rapid prototyping to developing a shipping product.
"Instance substitution principal: An instance of a class can be replaced by the definition of the instance, without changing the program semantics."
"The instance substitution principle can be applied at the level of semantics, or at the level of syntax. At the level of semantics, it means that a member can equivalently be attached either to a class or its instance. At the level of syntax, it means that the means of defining a class member and an instance member are syntactically parallel."
I described OpenLaszlo and compared it to Garnet, another early constraint-based prototypical user interface system, written in Common Lisp at CMU, here:
What is OpenLaszlo, and what's it good for?
OpenLaszlo and Garnet both integrated prototypes, constraints, data binding, events and delegates, in a way that (some aspects of which) could be described by the buzzword "Reactive Programming" today.
Here's some discussion about prototypes, Instance-First Development and Lua:
Re: Need good examples of when prototype-based objects are better.
Benedek and Lajos discussed "Bottom Up Live Micro Ontologies", including "Instance-First Development", in "Conceptualization and Visual Knowledge Organization: A Survey of Ontology-Based Solutions":
2.1.3 Bottom Up Live Micro Ontologies

The structures that emerge in the course of knowledge building that includes discovery and conceptualization typically have all the characteristics we have described in the previous section. To distinguish the emerging Knowledge Architectures from alternative approaches we propose to refer to them as "Bottom Up Live Micro Ontologies". 'Bottom up', in compliance with the literature, because they are created in the course of elaborating a concrete domain, and 'micro', because the meta terms introduced can affect a single node or any that are linked to a node in a piecemeal agile way and in close contact with the context from which they emerge. These micro ontologies are usually smaller in size than the so-called "local ontologies" in domain modeling, because they are amenable to reuse from any context, way beyond the one that gave rise to them. Using Micro Ontologies it becomes possible to define and manipulate domain knowledge with the aid of meta-level structures introduced on the fly, and these meta-structures can also be treated later as domains in their own right. Elaborating meta-level structures as domains in their own right leads to additional meta-meta level structures, and the same process can be repeated as far as needed. So knowledge architecture constructions are "turtles all the way down".

In a bottom up approach domain-specific, as well as meta-level, concepts and methods can be developed in a form of "instance first". "In instance-first development, one implements functionality for a single instance, and then refactors the instance into a class that supports multiple instances", which is to say we are "going meta". Only through live exploration and elaboration of descriptions of exemplars, specific instances of objects of interest, is it possible to develop suitable situated elaborations and conceptualizations that can capture ontologically what "there is" across many instances. This can be stated as the methodological requirement of the "primacy of bottom up live development": the characteristics of instance descriptions and the relationships with other instances should not be lost as we construct conceptualizations that are applicable to the class of things being described. Hence, instead of "conceptual atomism" and correspondences between descriptions and some aspects of reality, KO seeks to establish correspondence between the structure, including the relationships between instances, and their class models in a more abstract sense of 'images', or, using a current term, 'visual models of reality', in the spirit of Hertz's Principles of Mechanics. In the process of KO the formation of these 'images' is, however, much closer both historically and methodologically to Whewell's "consilience of inductions" through the "colligation of facts". [41, p. 74]

To paraphrase Ward Cunningham's quoted dictum: the emerging live, visual knowledge architectures should be "the simplest thing that could possibly work" that enables us to achieve our knowledge goals and intentions in a given situation. With respect to ontology evolution timelines, it is not only the results of conceptualization that matter but the creation of "knowledge model[s] that preserves audit trails of resource manipulation", as the records of "concept growth can increase the transparency of a research enterprise". [56, p. 672] The vision that takes us "beyond ontologies" had largely been explored with the Augmentation System that Engelbart created in his NLS half a century ago on a 'milli iPhone'. With the millionfold increase of computing resources available even to individuals today, we can embark on developing the means to promote the "culture shift" [Ibid.] that could lead to collaborative creation of the 'great chain of emergent meanings'. In this quest we need dynamic mechanisms for recognizing and merging alternative conceptualizations.
I totally agree with this bottom-up style of software design. In Python, I start with dictionaries, tuples, and functions. And then later I might turn them into more structured classes and methods.
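For example (made-up names), the evolution often looks like this:

    # Early on: plain data and functions.
    user = {'name': 'Ada', 'visits': 0}

    def record_visit(u):
        u['visits'] += 1

    # Later, once the shape has stabilized: promote it to a class.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        visits: int = 0

        def record_visit(self):
            self.visits += 1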
I'm not sure you need prototypes for this evolution, but I concede that it's plausible that they will help.
Actually I think Python is too impoverished in letting you make things stricter as the design evolves. I suspect the same may be true of prototypal languages. Yes they are good in the initial stages of program design, but perhaps the later stages are just as important.
A successful program spends more time being maintained than being written, and it's maintained by more people than it is authored by. So it makes sense to devote a good chunk of your language design to the later stages, and implement classes + metaclasses rather than just prototypes.
Anyway, thanks for the interesting perspective. Yes I concede classes can lead to early "over-modeling". But there's also a difference between Java classes and classes in languages like Python and Ruby. And classes vs. prototypes is not the only relevant issue when doing bottom-up, iterative design.
Reminds me of the container-based virtualization renaissance that's happening right now.
Hmm, and I've always thought Chrome's extra-thick titlebar was liberated from the Xerox Star UI (http://imgur.com/6h13MlP).
That's really doubtful. Self's delegation-based inheritance was expedient for quickly implementing an object model instead of having to build a class system; the influence never seems to have gone any deeper.
From what little I've looked at it, delegation (through parent slots) is everywhere in Self: it's used for inheritance, but also for mixins, scope chaining, and more. It's not just an object model, it's a core semantic principle and tool.
I’m not proud, but I’m happy that I chose Scheme-ish first-class functions and Self-ish (albeit singular) prototypes as the main ingredients. The Java influences, especially y2k Date bugs but also the primitive vs. object distinction (e.g., string vs. String), were unfortunate.
For me though it's not about the Self language but about the combination of the language and environment. I did a screencast attempting to show some of how Self development is done https://bluishcoder.co.nz/2015/11/18/demo-of-programming-in-...