IMO, the OC is incorrect. First-class functions exist in many languages. In fact, most features first pioneered in Lisp now exist in many other programming languages. Languages have borrowed everything they could from Lisp, except its single most powerful feature: homoiconicity. Homoiconicity remains less common outside of Lisp-like languages, though it is not entirely absent: Prolog, Rebol, and R are homoiconic programming languages without Lispy syntax.
> IMO, the OC is incorrect. First-class functions exist in many languages.
Note that I did say "Some of the more modern languages have that facility, but it's rarely as elegant as it is in Lisps."
Example: JavaScript is sometimes claimed to have first-class functions. It kinda does, but you can't redefine (e.g.) the "+" function as you can in Lisps. So it's basically half-assed. C has function pointers, which again are (sort of) like first-class functions, but they also don't let you redefine "+" (and they're incredibly ugly to use). C++ has operator overloading, so you can (again, sort of) redefine "+", but unless things have changed since the last time I used it (which has admittedly been a long time, though not long enough), I believe that's only available at compile time, not at run time.
And of course even in (e.g.) JavaScript when you redefine a function it doesn't automatically get compiled, as it does in many Lisps (maybe the JIT will eventually get around to it, maybe it won't).
And so on.
(I'm using "function" here as a shorthand for "function or procedure" because I don't want to type that a half-dozen times).
Ah... now I see what you meant. Indeed. + is an operator in langs like JavaScript and C++ - a built-in, rigid syntax construct. You can't redefine it, you can't extend it, you can only use it for addition and string concatenation. You can't even extend it in JS to, let's say, merge two objects together.
Lisp doesn’t even have "operators" — everything is a function. Gosh, the more you learn about Lisp, the more it feels like we went backwards. We tried to simplify programming, only to find ourselves buried under layers of complications. Just think about how many times we've reinvented state management for React alone: Redux, MobX, Recoil, Zustand, Jotai, Effector, XState, Overmind, Hookstate, Easy Peasy, Rematch. And that's not even a comprehensive list. What's funny is that some of these are "improved" versions of previous "improvements". Each time we try "improving" the previous "improvement", it seems we continually need to reinvent another layer of "improvements". Dafuk are we doing?
I started out writing machine code. With each "improvement" we lost flexibility, and I didn't need any of it, but they are all useful experiments and lots of things objectively improved. Imho software is for fooling around. We need it because we don't exactly know what we need or how to make it. When the needs are exactly defined, we can bake things into hardware. At that point all flexibility is gone.
The syntax of + and - is rigid in C++, but functions can be written to extend it. This is not new; you are making claims that were not true even in the first ISO C++ standard in 1998, or in its predecessor revisions going back quite far before that.
When a is a class type (and, more recently, this is also supported for enumerations), the surface syntax a + b means the same thing as a.operator+(b): invoking the operator+ member function of a with argument b.
I replied to this not knowing exactly where the goalposts are.
Indeed, in C++, our a + b expression is statically analyzed to work with specific types: the declared types of a and b. Once that code is compiled, it will work with no other types.
However, if we anticipate that kind of extension, we can have a be a reference to some numeric_base class where operator + is a virtual function.
With some help from a bit of platform-specific coding, we can have dynamically loaded modules that provide new classes derived from numeric_base, which can be passed to existing code in the program that uses + on numeric_base.
Also: limiting the redefinition of standard operators is not backwards, it's useful: it prevents crashing a Lisp, since in Lisp function names are often late bound -> redefining a core function immediately affects all code that uses it.
Macros are also not "functions". Macros are operators that expand source code into new source code.
Lisp typically has these operator types:
* functions, which are operators that get called with evaluated arguments
* macros, which are operators that get called with source code and return new source code
* special operators (QUOTE, LET, DEFUN, SETQ, IF, ...), which are typically hard-wired in the implementation with both syntax and semantics. Scheme has a set of those, too.
Additionally, one may see fexprs, which are operators that are called with unevaluated source code. And in some Lisps, some functions lazily evaluate their arguments.
operator n.
1. a function, macro, or special operator.
2. a symbol that names such a function, macro, or special operator.
3. (in a function special form) the cadr of the function special form, which might be either an operator[2] or a lambda expression.
4. (of a compound form) the car of the compound form, which might be either an operator[2] or a lambda expression, and which is never (setf symbol).
> You can't redefine it
In standard Common Lisp it is UNDEFINED what happens when one redefines a standard operator of Common Lisp. Typical implementations will signal an error (which often provides a restart). Typical implementations will also extend this protection to other protected packages, so that the user can't accidentally replace an operator and crash the Lisp system.