Interestingly, this is the case in OCaml (albeit with a shorter syntax), and it is very handy.
OCaml has two separate operators for integer addition (+) and float addition (+.), and it is illegal to write something like "1.5 + 1.5": you either have to write "1.5 +. 1.5" or convert the operands to integers. This supports OCaml's strong type checking.
While this seems annoying at first, I found this to be very practical, especially in numerical algorithms. In other languages like C/C++/Java you always struggle with implicit conversions between int and double. When do these really happen, given all those implicit rules? If you care about correctness, you'll find yourself casting stuff to double or int/long, like this:
(double)(SOME_EXPRESSION) / (double)(ANOTHER_EXPRESSION)
whereas the OCaml version makes the float division explicit in the operator itself:
(SOME_EXPRESSION) /. (ANOTHER_EXPRESSION)
You'll then have to add explicit conversions wherever needed, but this is really the lesser evil – especially since it gives you full control over when this actually happens.
Is 'needed' (as opposed to 'used') really correct here? Haskell's type classes manage to handle the type checking for different kinds of addition perfectly well without an explicitly different operator:
> :t 1 + 1
1 + 1 :: Num a => a
> :t 1.5 + 1.5
1.5 + 1.5 :: Fractional a => a
Not that I've noticed. Uni-typed operators are essential in something like Perl, which has lots of implicit conversions (think of the horrible "+" operator in PHP, or "|" in Perl). They're also helpful in a language with type inference like OCaml, though Haskell does fine without them. But with the plentiful explicit type declarations in C-style languages, polymorphic operators are rarely confusing.
Python 2 also has "1 // 2", but the "/" operator does floating-point as well as integer division, depending on the types of its inputs.
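For comparison, Python 3 made that split explicit: "/" is always true division and "//" is always floor division. A small illustration (Python 3 semantics, shown here only to contrast with the Python 2 behavior described above):

```python
# Python 3: "/" is true division and yields a float even for int inputs;
# "//" is floor division and keeps the numeric type of its operands.
print(7 / 2)     # true division  -> 3.5
print(7 // 2)    # floor division -> 3 (an int)
print(7.0 // 2)  # floor division on floats still floors -> 3.0
```

This removes the Python 2 situation where the result type of "/" silently depended on the inputs.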
> Generic programming by default: map, fold, and friends are generic [in Racket2] and not specialized to lists.
I have something similar in my private "language" that I use for internal projects; it also uses that convention for append and length (actually len, because length is too verbose). And I use list-append or string-append only in the few cases where speed is critical.
If something like this is implemented, the transformation from append to list-append or string-append could be made automatically in some cases by the Typed Racket optimizer (or the core optimizer).
Because concatenating two lists and concatenating two strings are distinctly different operations on different data types, and it helps that they are named as such.
"Named" is the wrong word for that sentence. Names are for people, boats, pets, not data structures. You probably meant "data type named".
Fractions, tons of operators, rounding, irrationals, transcendentals, important constants, etc.
The comparisons are interesting because of the limits of computer accuracy.
In the end it's probably a bit of both.
Other data structures aren't painful to use in Scheme.
I'm always disappointed when I see a language with syntax and semantics but no real library. You have no way of knowing if it is a good design. It's like buying building parts out of a catalogue and hoping the architect will be able to build you something great with what you've ordered.
"Strings as lists of chars" in a language with first-class functions is an absolute pleasure.
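Python illustrates a similar pleasure: strings are not literally lists of chars there, but they behave as iterable sequences of one-character strings, so the usual higher-order functions apply to them directly (a small sketch, not tied to any particular Scheme):

```python
# Because a string is a sequence of characters, map and filter
# work on it without any conversion to a list first.
s = "hello world"
upper = "".join(map(str.upper, s))                    # 'HELLO WORLD'
vowels = "".join(filter(lambda c: c in "aeiou", s))   # 'eoo'
print(upper)
print(vowels)
```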
I think many of the design choices the author is griping about (which numbers mercifully avoid) are an illustration of this tendency, which itself is a result of the two boats Scheme has had a leg in for years: both a minimalistic language with elegant semantics, useful for pedagogy and optimal for hobby implementations ("50-page purists"), and a viable, modern, competitive language for nontrivial applications.
Despite all this, you'll have to pry my Scheme and Racket from my cold, dead hands.
I love Scheme as a first-boat language, but dread every new R(+1 N)RS for the inevitable shift towards the rest-boat language design.
Edit: ... as well as methods for reading and displaying numbers.
"What would Scheme be like if numbers followed the same style as the rest of the language?
It would be necessary to import a library before using any numbers."
Every language has primitive types. I don't see any Schemers complaining about that fact. Sure, maybe you find it annoying that you have to (import srfi-69) to use hash tables, and that the accessor procedures have verbose names: (hash-table-ref ht key), etc. I think the author is forgetting that Scheme gives you the power to easily define new syntax for yourself that does not affect the regularity of the rest of the language.
"Numbers would have no printed representation."
I'm reading this as a complaint about there not being a default printing representation for types like records. I don't see how this is a valid complaint. The point of using a record is presumably to distinguish your data type from any old vector. Almost all implementations will allow you to define your own printed representation if you want.
"There would be no polymorphism. Most operations would include a type in their name."
R5RS does not have type signatures for functions. If you want to write a ref procedure that works on vectors and lists, you can. In fact:
(define (ref x n)
  (cond ((list? x) (list-ref x n))
        ((vector? x) (vector-ref x n))))
"But the lack of polymorphism would make it even more obvious that in practice exactness was simply one of the type distinctions: that between floats and everything else."
For lack of a better word, this seems like a strawman. I think it was a practical idea to have the language distinguish between exact and inexact numbers. Interestingly, Chicken Scheme implements the numeric tower in a library (the numbers egg) instead of in the standard distribution.
"Names would be descriptive, like inexact-rational-square-root and exact-integer-greatest-common-divisor."
I much prefer descriptive names that I can easily shorten using macros or functions to overloading the binary shift operator to write to streams.
"Converting the lists into strings would be up to you."
I guess this is a complaint about how small the standard library is. This is part of a larger debate within the Scheme community; see R7RS-large vs. R7RS-small. Many people think the latter is truer to the nature of Scheme.
"This would really be about whether numeric operations should always return fresh numbers, and whether the compiler would be allowed to copy them, but no one would mention these merely implementational issues."
Relying on undefined behavior is an implementational issue. To quote R5RS: "Eq?'s behavior on numbers and characters is implementation-dependent, but it will always return either true or false, and will return true only when eqv? would also return true." If you want defined behavior, why are you comparing things that can't already be compared with =, string=?, & co.? Comparing a vector to a number doesn't make much sense.
"There would still be no bitwise operations on integers. Schemers who understood the purpose would advise using an implementation that supports bitvectors instead of abusing numbers. Those who did not would say they're easy to implement."
This point I am sympathetic to. But it seems a bit like complaining that new language X does not support feature Y that language Z does. If feature Y is so important, decide whether it's worth your time to implement it, and if not, go with whichever language is most convenient for you. It's the same as many other decisions in life.
Ruby, for example, lacks primitives; even integer numbers are actually objects of type Fixnum.
… but isn't Fixnum a primitive type?
EDIT: I guess that it depends on what 'primitive' means, but there is some root to the inheritance hierarchy—for Ruby, it's BasicObject ( http://www.ruby-doc.org/core-2.2.0/BasicObject.html )—and surely that must be regarded as primitive.
Wikipedia makes the division between "basic types" and "built-in types": http://en.wikipedia.org/wiki/Primitive_data_type
Other types are reference types, which are "object-like". For each primitive type there is a corresponding reference type which can (and in more recent versions of Java, automatically does) wrap an instance of the primitive type (known as "boxing", and as "autoboxing" when done automatically).
C# calls these "value types" and "reference types".
Python, once upon a time, similarly had two hierarchies of types: those which could be subclassed, and those which could not (the latter all types built in to Python). The unification of these, and elimination of the distinction so that all of Python's types shared common behavior and a common hierarchy, began in Python 2.2 (introduction of "new-style" classes inheriting from `object`) and completed with Python 3 (where "new-style" classes are the default).
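The result of that unification is easy to demonstrate: built-in types, user-defined classes, and even the classes themselves all sit in one hierarchy rooted at `object`. A quick illustration (the class name Point is just a made-up example):

```python
# After the new-style class unification, everything is an object:
class Point:
    pass

assert isinstance(1, object)        # built-in values
assert isinstance("abc", object)
assert isinstance(Point(), object)  # instances of user-defined classes
assert isinstance(int, object)      # classes themselves are objects
assert issubclass(int, object)      # and every class inherits from object
print(int.__mro__)                  # (<class 'int'>, <class 'object'>)
```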
Numerical Tower is Scheme's invention. It is one of Scheme's distinct features and a major innovation of that time.
R5RS, which is considered the classic Scheme (some would even say R4RS), has only Strings and Vectors as ADTs with such a naming convention for procedures. It amounts to just one page of the standard. And, of course, these two types are not the essence of Scheme.
Also Scheme precedes CL, which incorporates Numerical Tower from Scheme.
> Also Scheme precedes CL, which incorporates Numerical Tower from Scheme.
I'd say it's different. R2RS appeared 1985...
R2RS from 1985:
It mentions Common Lisp as the inspiration for its numeric capabilities.
And Common Lisp got parts of it from Maclisp and Macsyma.
Before that, R1RS does not describe a numeric tower.
Smalltalk also has a Numerical Tower for the very same reasons: to have "generic" '+ '* '/ etc.
Here is the classic discussion of the topic: http://ocw.mit.edu/courses/electrical-engineering-and-comput...
Note where he emphasizes how generic the '+ procedure is. This, btw, is one of the most important realizations in the whole course. It is the justification behind the Numerical Tower.
These things existed way before Scheme and Common Lisp, in the 60s. Lisp had a generic + shortly after the dinosaurs disappeared... Scheme didn't introduce that.
See for example the Lisp 1.6 manual from 1968. It had inums (short ints), fixnums, bignums (large ints) and reals. See chapter 4. It had generic numeric functions. See chapter 12.1 of the manual.
Extensive mathematical software was written long before Scheme existed: Macsyma, Reduce, .... Maclisp's compiler was hacked up quite a lot to support Macsyma, in the mid 70s. Bignums were added to Maclisp in 70/71. Complex numbers were in S-1 Lisp and came from there to Common Lisp, it seems.
Common Lisp was a successor to Maclisp and it was expected that the large amount of math software (like Macsyma or IBM's Scratchpad, both available under Maclisp) should be supported.
Common Lisp had nothing numerical from Scheme. Basically it got its capabilities from Maclisp, Macsyma and S-1 Lisp. Neither CLtL1 nor HOPL2 gives any indication of any Scheme influence in that area. Scheme took CL's numeric capabilities (see the Scheme Report from 1985 - CL was under development from 81/82 onwards with CLtL1 published in 1984) and then developed new ideas... Common Lisp had some Scheme influences, but not the numeric capabilities.
My point was that the second approach is an improvement over the first one. And it is not my own fancy - watch the lectures. They also emphasized that it works nicely only with numbers, which happen to form nested sets.
Actually, I can't tell what we are arguing about. That I missed some historical nuances, or that my sources are not credible enough, or that I got wrong the goal of switching to generic procedures in Scheme? Or, perhaps, that the original article makes any sense?
Wrong. Lisp had a generic + function LOOONG before Scheme. It was called PLUS.
The Lisp 1.6 manual from 1968:
'Unless otherwise noted, the following arithmetic functions are defined for both integer, real and mixed combinations of arguments... The result is real if any argument is real, and integer if all arguments are integer...'
It then describes the functions MINUS, PLUS, DIF, TIMES, QUOTIENT, DIVIDE, ...
Examples in the manual:
(PLUS 1 2 3.1) = 6.1
(PLUS 6 3 -2) = 7
(TIMES -2 2.0) = -4.0
> Scheme pioneered the approach with only one '+ generic procedure exported with all the "rising" done implicitly - influenced by Algol.
That's what Lisp did in the 60s. The function was called PLUS.
It was fully generic.
It is nothing Scheme has contributed.
This is a good point. Thanks. It seems that the original view of John McCarthy, before the decade of divergent implementations, was "right".
> It is nothing Scheme has contributed.
Scheme, it seems, re-emphasized it, much later, as the answer to the mess made by different implementations.
I could give you an example from a completely different field, in order to show how common such a pattern is.
The very first Aryan Vedas emphasize the notion of taking inspirations from the nature and remaining in unity with it. They use deities as symbols for the major natural powers and appreciate them.
Then these ideas were taken up by "other people", and mechanistic, ritual-based religions built on worshiping and praising anthropomorphic idols emerged. This, in turn, resulted in the emergence of the Upanishads, as thinking people got sick of all that nonsense, and from the Upanishads the Advaita Vedanta and Buddhist schools developed (and got ruined by "commentators").
This is a social pattern. I can't tell how many times in history these cycles of inflation and reduction, of mass hysteria and of returning to the very few "great insights", have happened.
I am not telling you that this analogy of mine for the evolution of Lisps is precise (only the MIT guys behind Scheme could tell whether I am wrong or not), but I have this notion, based on what I have read in books and watched in lectures, so, if you have no objections, I will still hold my opinions.
But generic operations were explored in detail in actor implementations in Lisp and in various forms of object systems in Lisp. Scheme itself as a language only provided hard-coded generic functions. The 1985 Scheme report included nothing generic beyond the few hard-defined generic functions. At a time when software in Lisp already explored user-defined multiple inheritance, message sending, pattern-based invocation, ... Sussman himself knew Maclisp very well. True, SICP showed how to implement and use generic operations, but then that was already a decade old...