
If Scheme's numbers were like Scheme - jsnell
http://arcanesentiment.blogspot.com/2015/01/if-scheme-were-like-scheme.html
======
vog
_> Most operations would include a type in their name._

Interestingly, this is the case in OCaml (albeit with a shorter syntax), and
it is very handy.

OCaml has two separate operators for integer addition (+) and float addition
(+.), and it is illegal to write something like "1.5 + 1.5", as you either
have to write "1.5 +. 1.5" or convert them to integers. This is used for the
strong type checking of OCaml.

While this seems annoying at first, I found this to be very practical,
especially in numerical algorithms. In other languages like C/C++/Java you
always struggle with implicit conversions between int and double. When do
these really happen, given all those implicit rules? If you care about
correctness, you'll find yourself casting stuff to double or int/long, like
this:

      (double)(SOME_EXPRESSION) / (double)(ANOTHER_EXPRESSION)

just to be sure this is really a floating division happening there, rather
than an integer division that throws away accuracy. In OCaml, you just write:

      (SOME_EXPRESSION) /. (ANOTHER_EXPRESSION)

and have a guaranteed floating point division, as well as a typecheck telling
you when you e.g. forgot to convert SOME_EXPRESSION from integer to floating
point.

You'll then have to add explicit conversions wherever needed, but this is
really the lesser evil – especially since it gives you full control over when
this actually happens.
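
As a minimal sketch of what this looks like in practice (the bindings below are purely illustrative, not taken from any real codebase):

```ocaml
(* Illustrative sketch: OCaml keeps int and float arithmetic apart.
   (+) works only on ints, (+.) only on floats; "1.5 + 1.5" is a
   type error, caught at compile time. *)
let int_sum   = 1 + 2          (* : int *)
let float_sum = 1.5 +. 1.5     (* : float *)

(* Mixing requires an explicit conversion, so a floating-point
   division can never silently degrade into an integer one: *)
let ratio = float_of_int 3 /. float_of_int 4   (* 0.75, not 0 *)

let () =
  assert (int_sum = 3);
  assert (float_sum = 3.0);
  assert (ratio = 0.75)
```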

~~~
JadeNB
> This is needed for stronger type checking.

Is 'needed' (as opposed to 'used') really correct here? Haskell's type classes
manage to handle the type checking for different kinds of addition perfectly
well without an explicitly different operator:

        > :t 1 + 1
        1 + 1 :: Num a => a
        > :t 1.5 + 1.5
        1.5 + 1.5 :: Fractional a => a


In other words, since 1 makes sense as an element of any 'Num'mable type, so
does 1 + 1; but, since 1.5 only makes sense as an element of a
'Fractional'able type, we can only regard 1.5 + 1.5 as an element of those
more restrictive types.

~~~
vog
Indeed, I meant "used". Fixed this in my comment above. Thanks!

------
ianbicking
I mostly read this as an exposition on how painful other types in Scheme are -
like, if numbers were as painful to use as every other thing, what an absurd
world that would be!

~~~
soegaard
So did I. So why have append for lists, and string-append for strings? Well,
Scheme implementors were, early on (and still are), interested in writing
efficient compilers - that affects the naming choices.

~~~
davexunit
>So why have append for lists, and string-append for strings?

Because concatenating two lists and concatenating 2 strings are distinctly
different operations on different data types, and it helps that they are named
as such.

~~~
copsarebastards
> Because concatenating two lists and concatenating 2 strings are distinctly
> different operations on different data types, and it helps that they are
> named as such.

"Named" is the wrong word for that sentence. Names are for people, boats,
pets, not data structures. You probably meant "data type named".

------
cousin_it
Fair points. We can see similar examples in other languages, e.g. C++ strings
are "like C++" and a pain to use, while Java strings are "not like Java" and a
pleasure to use. Maybe language design really isn't about general-purpose
elegance, but about finding good special-purpose solutions.

~~~
jarcane
On the other hand, Haskell strings are "like Haskell" (i.e. they're lists, like
almost everything else), and it's actually for that reason that they're such a
joy to use.

"Strings as lists of chars" in a language with first-class functions is an
absolute pleasure.

~~~
tome
That's a rather starry-eyed view of Haskell's String type. It has terrible
performance in terms of both memory and time, which is why we now have Text
instead.

------
srpablo
Reminds me of the 2009 Scheme Steering Committee Position Statement, in which
they stated (correctly, imo) that "the Scheme community has rarely missed an
opportunity to miss an opportunity."[1]

I think many of the design choices the author is griping about (which numbers
mercifully avoid) are an illustration of this tendency, which itself is a
result of the two boats Scheme had a foot in for years: both a minimalistic
language with elegant semantics, useful for pedagogy and optimal for hobby
implementations ("50-page purists"), and a viable, modern, competitive language
for nontrivial applications.

Despite all this, you'll take my Scheme and Racket away from my cold, dead
hands.

[1]: [http://www.scheme-reports.org/2009/position-statement.html](http://www.scheme-reports.org/2009/position-statement.html)

~~~
yvdriess
The thing is that there is already a Lisp that is a viable, modern and
competitive language for nontrivial applications. One that has incorporated
the institutional knowledge and hindsight of two decades of pragmatic
programming practice.

I love Scheme as a first-boat language, but dread every new R(+ 1 N)RS for the
inevitable shift towards the rest-boat language design.

------
wedesoft
Nice idea, because it would make for a more minimalistic core. One could then
use some OO library (e.g. Guile's GOOPS) to implement polymorphic versions
of +, -, *, etc.

Edit: ... as well as methods for reading and displaying numbers.

------
jonathanyc
It's hard to see what the author is complaining about. I understand that it
may just be a bit of fun, but in this post I'm going to try to address the
points made and note how they don't seem valid to me.

"What would Scheme be like if numbers followed the same style as the rest of
the language?

It would be necessary to import a library before using any numbers."

Every language has primitive types. I don't see any Schemers complaining about
that fact. Sure, maybe you find it annoying that you have to (import srfi-69)
to use hash tables, and that the accessor functions have verbose signatures:
(hash-table-ref ht key), etc. I think the author is forgetting that Scheme
gives you the power to easily define a new syntax for yourself that does not
affect the regularity of the rest of the language.

"Numbers would have no printed representation."

I'm reading this as a complaint about there not being a default printing
representation for types like records. I don't see how this is a valid
complaint. The point of using a record is presumably to distinguish your data
type from any old vector. Almost all implementations will allow you to define
your own printed representation if you want.

"There would be no polymorphism. Most operations would include a type in their
name."

R5RS does not have type signatures for functions. If you want to write a ref
procedure that works on vectors and lists, you can. In fact:


    (define (ref x n)
      (cond ((list? x) (list-ref x n))
            ((vector? x) (vector-ref x n))
            (else #f)))

"But the lack of polymorphism would make it even more obvious that in practice
exactness was simply one of the type distinctions: that between floats and
everything else."

For lack of a better word, this seems like a strawman. I think it was a
practical idea to have the language distinguish between exact and inexact
numbers. Interestingly, Chicken Scheme implements the numeric tower in a
library (the numbers egg) instead of in the standard distribution.

"Names would be descriptive, like inexact-rational-square-root and exact-
integer-greatest-common-divisor."

I much prefer descriptive names that I can easily shorten using macros or
functions to overloading the binary shift operator to write to streams.

"Converting the lists into strings would be up to you."

I guess this is a complaint about how small the standard library is. This is
part of a larger debate within the Scheme community. See R7RS-large vs. R7RS-
small. Many people think the latter is truer to the nature of Scheme.

"This would really be about whether numeric operations should always return
fresh numbers, and whether the compiler would be allowed to copy them, but no
one would mention these merely implementational issues."

Relying on undefined behavior is an implementational issue. To quote R5RS:
"Eq?'s behavior on numbers and characters is implementation-dependent, but it
will always return either true or false, and will return true only when eqv?
would also return true." If you want defined behavior, why are you comparing
things that can't already be compared with =, string=?, & co.? Comparing a
vector to a number doesn't make much sense.
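
As an aside, a similar identity-vs-contents distinction shows up in OCaml, which separates physical from structural equality (a hypothetical sketch, not part of the quoted text):

```ocaml
(* Sketch: OCaml's (==) is physical equality and (=) is structural
   equality, roughly analogous to Scheme's eq? vs. equal?.  The OCaml
   manual only guarantees (==) is fully specified on mutable values. *)
let a = ref 42
let b = ref 42

let () =
  assert (!a = !b);   (* same contents, so structurally equal *)
  assert (a != b);    (* distinct allocations: (a == b) is false *)
  assert (a == a)     (* a value is physically equal to itself *)
```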

"There would still be no bitwise operations on integers. Schemers who
understood the purpose would advise using an implementation that supports
bitvectors instead of abusing numbers. Those who did not would say they're
easy to implement."

This point I am sympathetic to. But it seems to me a bit like complaining that
new language X does not support feature Y that language Z does. If feature Y is
so important, why not decide whether or not it's worth your time to implement
it, and if not, go with whatever language would be most convenient for you?
It's the same as many other decisions in life.

~~~
kyllo
_Every language has primitive types._

Ruby for example lacks primitives, even integer numbers are actually objects
of type Fixnum.

~~~
JadeNB
> Ruby for example lacks primitives, even integer numbers are actually objects
> of type Fixnum.

… but isn't Fixnum a primitive type?

EDIT: I guess that it depends on what 'primitive' means, but there is _some_
root to the inheritance hierarchy—for Ruby, it's BasicObject
([http://www.ruby-doc.org/core-2.2.0/BasicObject.html](http://www.ruby-doc.org/core-2.2.0/BasicObject.html))—and
surely _that_ must be regarded as primitive.

~~~
djur
I think the parent is using 'primitive' in the Java sense, which means 'a
typed value that is not an object' -- that is, 'int' instead of 'Integer'.
The distinction is meaningless for languages that don't have separate type
systems for objects and everything else.

Wikipedia makes the division between "basic types" and "built-in types":
[http://en.wikipedia.org/wiki/Primitive_data_type](http://en.wikipedia.org/wiki/Primitive_data_type)

~~~
kyllo
Yes, I am talking about primitive types in the C++ or Java sense: int, char,
float, double, long, bool. It's only meaningless to talk about primitive
types in the context of Ruby because Ruby doesn't have primitive types. Every
type in Ruby is an object type.

------
dschiptsov
_Facepalm_.

The Numerical Tower is Scheme's invention. It is one of Scheme's distinct
features and was a major innovation at the time.

R5RS, which is considered the classic Scheme (some would even say R4RS), has
only strings and vectors as ADTs with such a naming convention for procedures.
They amount to just one page of the standard. And, of course, these two types
are not the essence of Scheme.

Also, Scheme precedes CL, which incorporated the Numerical Tower from Scheme.

~~~
lispm
> It is one of Scheme's distinct features and a major innovation of that time.

Is it?

> Also Scheme precedes CL, which incorporates Numerical Tower from Scheme.

I'd say it's different. R2RS appeared in 1985...

R2RS from 1985:

[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.1891](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.1891)

It mentions Common Lisp as the inspiration for its numeric capabilities. And
Common Lisp got parts of it from Maclisp and Macsyma.

Before that, R1RS does not describe a numeric tower.

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-452.pdf

~~~
dschiptsov
The first edition of SICP from 1985 already has a discussion of numeric
capabilities in Scheme as an example of a programming style based on generic
procedures. It might not have the term Numerical Tower, but the ideas were
well-developed before 1985. The original HP lectures of Abelson and Sussman
explicitly explain the hows and whys. Brian Harvey's CS61A has it.

Smalltalk also has a Numerical Tower, for the very same reason - to have
"generic" +, *, /, etc.

Here is the classic discussion of the topic:
[http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/4b-generic-operators/](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/4b-generic-operators/)

Note how he emphasizes the way the generic + procedure is used. This, btw, is
one of the most important realizations in the whole course. It is the
justification behind the Numerical Tower.

~~~
lispm
> don't have the term Numerical Tower but the ideas were well-developed before
> 1985

These things existed way before Scheme and Common Lisp - in the 60s. Lisp had
a generic + shortly after the dinosaurs disappeared... Scheme didn't introduce
that.

See for example the Lisp 1.6 manual from 1968. It had inums (short ints),
fixnums, bignums (large ints) and reals. See chapter 4. It had generic numeric
functions. See chapter 12.1 of the manual.

[http://www.softwarepreservation.org/projects/LISP/stanford/SAILON-28.2.pdf](http://www.softwarepreservation.org/projects/LISP/stanford/SAILON-28.2.pdf)

Extensive mathematical software was written long before Scheme existed:
Macsyma, Reduce, .... Maclisp's compiler was hacked up quite a lot to support
Macsyma in the mid-70s. Bignums were added to Maclisp in 70/71. Complex
numbers were in S-1 Lisp and came from there to Common Lisp, it seems.

Common Lisp was a successor to Maclisp and it was expected that the large
amount of math software (like Macsyma or IBM's Scratchpad, both available
under Maclisp) should be supported.

Common Lisp had nothing numerical from Scheme. Basically it got its
capabilities from Maclisp, Macsyma and S-1 Lisp. Neither CLtL1 nor HOPL2 gives
any indication of any Scheme influence in that area. Scheme took CL's numeric
capabilities (see the Scheme Report from 1985 - CL was under development from
81/82 onwards with CLtL1 published in 1984) and then developed new ideas...
Common Lisp had some Scheme influences, but not the numeric capabilities.

~~~
dschiptsov
OK, let's put it differently. In these old Lisps they indeed had FIXNUM+ and
REAL+ or whatever and explicit coercing, the Fortran way, reflecting how a
machine works, while Scheme pioneered the approach with only one generic +
procedure exported, with all the "raising" done implicitly - influenced by
Algol.

My point was that the second approach is an improvement over the first one.
And it is not my own fancy - watch the lectures. They also emphasized that it
works nicely only with numbers, which happen to form nested sets.

Actually, I can't tell what we are arguing about. That I missed some
historical nuances, or that my sources are not credible enough, or that I got
wrong the goal of switching to generic procedures in Scheme? Or, perhaps, that
the original article makes any sense?

~~~
lispm
> OK, let's put it differently. In these old Lisps they indeed had FIXNUM+ and
> REAL+ or whatever and explicit coercing, the Fortran way, reflecting how a
> machine works,

Wrong. Lisp had a generic + function LOOONG before Scheme. It was called PLUS.

The Lisp 1.6 manual from 1968:

'Unless otherwise noted, the following arithmetic functions are defined for
both integer, real and mixed combinations of arguments... The result is real
if any argument is real, and integer if all arguments are integer...'

It then describes the functions MINUS, PLUS, DIF, TIMES, QUOTIENT, DIVIDE, ...

Examples in the manual:

(PLUS 1 2 3.1) = 6.1

(PLUS 6 3 -2) = 7

(TIMES -2 2.0) = -4.0

> Scheme pioneered the approach with only one generic + procedure exported
> with all the "raising" done implicitly - influenced by Algol.

That's what Lisp did in the 60s. The function was called PLUS.

It was fully generic.

It is nothing Scheme has contributed.

~~~
dschiptsov
> Unless otherwise noted, the following arithmetic functions are defined for
> both integer, real and mixed combinations of arguments

This is a good point. Thanks. It seems that the _original view of John
McCarthy_ - before the decade of different implementations - was "right".

> It is nothing Scheme has contributed.

Scheme, it seems, re-emphasized it much later, as the answer to the mess made
by different implementations.

I could give you an example from a completely different field, in order to
show how common such a pattern is.

The very first Aryan Vedas emphasize the notion of taking inspiration from
nature and remaining in _unity_ with it. They use deities as symbols for the
major natural powers and _appreciate_ them.

Then these ideas were taken up by "other people", and mechanistic, ritual-
based religions, based on worshiping and praising anthropomorphic idols,
emerged. This, in turn, resulted in the emergence of the Upanishads, as
thinking people got sick of all that nonsense, and from the Upanishads the
Advaita Vedanta and Buddhist schools were developed (and got ruined by
"commentators").

This is a social pattern. I can't tell how many times in history these cycles
of inflation and reduction, of mass hysteria and returning back to the very
few "great insights", have happened.

I am not telling you that my analogy for the evolution of Lisps is precise -
only the MIT guys behind Scheme could tell whether I am wrong or not - but I
have this notion, based on what I read in books and watched in lectures, so,
if you have no objections, I would still hold my opinions.

~~~
lispm
I can't remember anything in that direction from Scheme. In the original
Scheme papers there was an exploration of the Actor ideas of message passing.

But generic operations were explored in detail in Actor implementations in
Lisp and in various forms of object systems in Lisp. Scheme itself as a
language only provided hard-coded generic functions. The 1985 Scheme report
included nothing generic beyond the few hard-defined generic functions. At a
time when software in Lisp had already explored user-defined multiple
inheritance, message sending, pattern-based invocation, ... Sussman himself
knew Maclisp very well. True, SICP showed how to implement and use generic
operations, but by then that was already a decade old...

