The further up the abstraction hierarchy you customize things in languages that let you do this, the more differentiated your code is and - generally speaking - the more awkward it is to integrate with other people's code.
Lisp and Smalltalk are at one extreme end of the scale here. They let you define rich DSLs. Unfortunately, applying their most powerful features to any given problem domain typically results in a custom language specific to that domain - so much so that you lose many of the benefits of sharing a language with other programmers. Understanding what's going on takes much more work, because you need to understand more of the system before its gestalt becomes clear.
DSLs are excellent when they're applied to problem domains that are otherwise very clumsy to express, when the problem is widespread, when it doesn't intermix too deeply with other domains (i.e. the abstraction is mostly self-contained), and when there is a critical mass of programmers using it - enough that understanding the system at its own level of abstraction is enough for most people.
But when you apply the same techniques to relatively humdrum problems like object construction, function binding etc., it's best if the language or its standard library has a reasonable approach and everybody uses it. Developing your own ad-hoc language around class construction means that everyone coming into your code needs to understand it before they can truly understand what the code is doing. It's a barrier to understanding, and the pros of customization need to be substantially higher than the costs of subtlety.
In short, it's usually better to be explicit.
Take the ExtJS core: it's a good example of over-abstracting everything.
But JS is also guilty of providing weak, loose structures. If JS had classes (like ES6), nobody would create all these complicated frameworks to build them.
Having classes does not solve the meta-programming issue, which is what most frameworks additionally provide. Look at the Java world for an example that classes don't prevent the proliferation of frameworks.
In my experience, the kind of language hacking that goes on in Lisp systems is not as you describe it. Rather, it's akin to what any programmer does when extracting duplicate code into a function. A better word for this than "DSL" is "factoring"; Lisp's strength is that it offers a simple and powerful way of factoring that is harder in other languages.
I can't comment on Smalltalk, except to say that the way you've lumped it in with Lisp makes your argument weaker. Those two languages achieve their flexibility in such different ways.
This is really an argument against higher-level languages in general.
No, it's really, really not. It's an argument against deep home-grown abstractions when there are close approximations already available in the language or its community.
Hopefully you've had experience of a wide range of languages. Do you see a pattern across those languages: when the abstractions provided by the standard library are higher level, libraries are generally more useful and the ecosystems are richer? When collections are in the standard library, libraries can communicate using those collections - but when they're not, every library chooses its own, sometimes using iterators, sometimes callback enumerations, sometimes linked lists, sometimes opaque handles, sometimes length-prefixed arrays, sometimes null-terminated arrays, etc.
This is what I mean by sociology. The more our code shares in common with other programmers, the richer our ecosystem is.
And the more our code diverges from other programmers, the more the ecosystem hurts.
Another word for "factoring" is compression. Taken to its limit, eliminating all duplication makes code indistinguishable from random noise. Human communication has redundancy and duplication for a reason.
Both Lisp and Smalltalk are able to implement 'if' as a library routine rather than a primitive. Their power to build abstractions is why I've lumped them together, not how they achieve their flexibility. The how is irrelevant; what matters is the amount of rope they give programmers to grow their own little divergent gardens. It's how much they can encode their own idiosyncrasies - their common patterns - into their programs, and compress (aka factor) their code using those uniquely odd patterns.
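The 'if'-as-a-library claim can be made concrete even in JavaScript: Smalltalk's ifTrue:ifFalse: takes blocks (closures), so a user-level 'if' just needs its branches delayed. A minimal sketch of that style, where `myIf` is a hypothetical name, not a standard API:

```javascript
// A user-level "if": branches are thunks (zero-argument functions),
// so neither branch is evaluated until the condition has been inspected.
function myIf(condition, thenThunk, elseThunk) {
  return condition ? thenThunk() : elseThunk();
}

const x = 5;
const result = myIf(x > 3,
  () => "big",
  () => "small");
// result === "big"
```

Lisp gets the same laziness with macros instead of thunks, but the point stands: control flow becomes something user code can define.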
I believe this rope is what prevented Lisp and Smalltalk both from becoming big successes. I suspect this rope is what might kill Ruby one day, but the amount of convergence and the extra coordination / speed of information spread from the modern web has already been enough to help it go far.
I think you've misread my statement as an argument against high-level languages, and are reacting to stuff I'm not writing.
If Reg's class language took off, and became part of a generally accepted library - as common as jQuery or even CoffeeScript - it would be fine. But in isolation, used here and there, it's very much not worth it. And yes, I'm aware of the catch-22 here in adoption. That's the rough road every new language needs to travel, because that's basically what Reg is doing: building a new language.
I don't think there's any "statistical significance" (using the term loosely) in the failure of Lisp, Smalltalk, or any other language to become a big success. There are too many non-technical and historical factors that can easily explain all these outcomes. Meanwhile the sample sizes are so small—consider how short the list would be of all programming languages that ever had a major turn at bat—that we have no way of drawing true conclusions about this.
For example, it's easy to make a case that Smalltalk lost to Java because (a) the Smalltalk vendors were short-sighted and (b) Java was magically "the internet language" just when the internet was everything new and important. Those are historical accidents that had nothing to do with abstractive power.
A word about the 'antisocial' argument, which I think is deeply wrong about both Lisp and Smalltalk.
It's true that Lisp and Smalltalk let you define "if". That's a measure of their power. But anyone who thinks that Lisp and Smalltalk programmers willy-nilly define "ifs" all over the place (or do anything at that level very often) does not understand the culture of these languages. There are many examples of such willy-nillyness, but that's because it's a phase most programmers seem to go through with this stuff. (I did.) Experience teaches you to be more discerning, and culture, when you're lucky enough to be exposed to it, accelerates experience.
Thus the solution to the misuse of abstractive super-power is culture and community, which work perfectly well in both Lisp's and Smalltalk's case, as their long historical records amply show. A friend of mine was part of the later wave of Smalltalkers in the pre-Java days, and he talks fondly both of how much he learned from the older Smalltalkers (over BBS mostly, since he grew up in a small town) and also how the community as a whole gradually learned to use the power of extending the language well—as well as what to avoid. So it's wrong to attribute some weird anti-social property to these languages. There's even a vibrant counterexample going on right now in Clojure.
First, how easily your code integrates with some other code depends on how much of the underlying semantics the two pieces of code share. Both Smalltalk and Lisp are quite minimalist in terms of semantics, which means that everything you write in them will share a large part of its semantics with any other code. At some level, in Lisps all code is full of conses, and in Smalltalk it's just objects and messages (I'm over-simplifying, of course, but you get the idea). I know that "at some level" all code is just a list of instructions for the CPU - but you get to this common denominator much more quickly in Lisp and Smalltalk than in most other programming languages, which makes programs written in both of them more composable, not less.
Second, how easy a DSL is to understand depends on the "L" - after all, it's a language, and you most certainly can create a completely undecipherable language. That's not a problem with DSLs as such, however. There are many bad DSLs, but there are also many pretty and readable ones. Some examples I just thought of: http://docs.racket-lang.org/reference/Command-Line_Parsing.h... in Racket and http://pharobooks.gforge.inria.fr/PharoByExampleTwo-Eng/late... in Pharo. They are both readable and easily composable - of course, only if you know the semantics of the underlying language, but even that is easier because there is so little of it in both languages.
Third, metaprogramming, meta-object protocols, macros and so on are indeed easily abused, and you should avoid using them if a simpler abstraction will suffice. But there comes a point where you find yourself repeating the same patterns over and over despite already using all the lower-level abstractions. This is where you should reach for one of the higher-level ones - contrary to what you say, it can improve the overall readability of the code. For example, the Django ORM does quite a bit of magic behind the scenes - but this means that all the magic is gathered in one place instead of being scattered through all the models in your project. And it makes simple cases (which are probably the majority of cases anyway) much shorter than they would be otherwise - and being shorter (though not as an end in itself, see http://weblog.raganwald.com/2007/12/golf-is-good-program-spo...) improves readability.
In short, the problem is what it has always been: bad (domain) languages, unnecessary abstractions, and overly rich base languages. Next time you have the opportunity to design a good DSL, a fitting high-level abstraction, or a macro library, do so. Just try it on your own quite a few times before introducing any of these techniques into a production system or team environment.
This shows you're agreeing with me, BTW.
"If X is a good thing, why aren't we using more X?" No. I've seen horribly convoluted and overly complex code written as a result of following this sort of dogma with things like design patterns. OOP can be incredibly useful when applied correctly, but don't think that it's a panacea or that "OOP-ness" is somehow directly correlated with code quality. Layers of abstraction and encapsulation are very good at hiding bugs too, and IMHO they're overrated.
I'd say the most basic proposition is that you solve problems by representing the nouns in the problem domain as objects in software. You could have a version of OOP that doesn't involve hiding internal state.
I think that would make writing sentences with those nouns somewhat difficult.
And what is bad about defining the prototype at once? I don't see a great advantage to the defineMethod() style.
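To make the two styles under discussion concrete - both forms below are sketches, and `defineMethod` is a hypothetical helper, not a standard API:

```javascript
// Style 1: defining the prototype at once, as a single object literal.
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype = {
  constructor: Point, // restore the constructor reference the literal overwrote
  norm: function () { return Math.sqrt(this.x * this.x + this.y * this.y); }
};

// Style 2: a hypothetical defineMethod() helper that adds methods one at a time.
function defineMethod(ctor, name, fn) {
  ctor.prototype[name] = fn;
  return ctor; // returning the constructor allows chained definitions
}
function Vec(x, y) { this.x = x; this.y = y; }
defineMethod(Vec, "norm", function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
});
```

Both produce equivalent objects; the disagreement is only about which form reads better and composes better with tooling.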
EDIT: To address your actual question, I'm not sure self-binding is part of ES6. Self-binding introduces a certain overhead to every function call, so I'm not even sure it would be a good thing if it were.
And assuming one wants to express the problem in terms of objects, the slight overhead might be worth being able to pass bound functions around "without thinking about it".
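As a sketch of what "without thinking about it" buys you: a method detached from its object normally loses `this`, and binding fixes that so the function can be handed to any caller.

```javascript
const counter = {
  count: 0,
  increment: function () { this.count += 1; }
};

// Passing the raw method would lose `this`; bind() preserves it.
const inc = counter.increment.bind(counter);

// Now the bound function can be passed anywhere, e.g. as a callback.
[1, 2, 3].forEach(inc);
// counter.count === 3
```

This is exactly the per-call overhead mentioned above: a bound function is an extra wrapper around every invocation.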
It makes working with teams easier.
Or, were you demonstrating the method overriding + calling to parent use case?
Yep. This gist is what I run co-workers through for demonstrating OOP in JS.
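The gist itself isn't reproduced here, but a minimal sketch of the use case named above - overriding a method while still calling the parent's implementation, in pre-ES6 prototypal style - might look like:

```javascript
function Animal(name) { this.name = name; }
Animal.prototype.describe = function () {
  return this.name + " is an animal";
};

function Dog(name) {
  Animal.call(this, name); // invoke the parent constructor on this instance
}
// Chain the prototypes, then repair the constructor reference.
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

// Override describe(), but delegate part of the work to the parent.
Dog.prototype.describe = function () {
  return Animal.prototype.describe.call(this) + " and a dog";
};
```

Calling `new Dog("Rex").describe()` runs the override, which in turn runs the parent method with the same `this` - the plumbing that ES6 `class`/`super` syntax later made implicit.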