
Chez Scheme is now free - jordigh
https://github.com/cisco/ChezScheme
======
rohwer
Dybvig's compiler course was exemplary. Say what you may about Scheme, you
learned so much more in those classes. His Scheme Programming Language book is
highly recommended. Especially check out his extended examples chapter:
[http://www.scheme.com/tspl4/examples.html#./examples:h0](http://www.scheme.com/tspl4/examples.html#./examples:h0)

~~~
dman
I thought Scheme was like the Beatles -- people universally had only good
things to say about it.

~~~
chubot
I might not be getting a reference here, but FWIW, I had a recent experience
with Scheme that was interesting.

I did SICP nearly 19 years ago as a freshman, in 1997. And then a few months
ago, I ported the metacircular evaluator -- the "crown" of the course -- to
femtolisp (the Lisp implementation underlying Julia).

My thoughts were:

1) It sure is awkward to represent struct fields 1, 2, 3 as (cdr struct),
(cadr struct), (caddr struct), ... Yes, this is nice for showing that car and
cdr are all you need as axiomatic primitives, but for practical purposes it's
annoying. You end up with lots of little functions with long names.

2) Scheme code is very imperative! Even the metacircular evaluator uses
set-cdr! and so forth. I don't like imperative code with Lisp syntax.

3) It is awkward to represent environments with assoc lists. I feel that
having a language which is really bootstrapped requires some kind of hash
table/dictionary. Because you need that to implement scopes with O(1) access
rather than O(n). I believe there are experimental lisps that try to fix this.

4) Macros also seem to have a needlessly different syntax than regular
functions. There are Lisps with f-exprs rather than s-exprs that try to fix
this:
[https://en.wikipedia.org/wiki/Fexpr](https://en.wikipedia.org/wiki/Fexpr)

I was surprised by #3 and #4 -- in some sense Scheme is less "meta" and
foundational than it could be. #2 is also a fundamental issue... at least if
you want to call it the "foundation" of computing and build Lisp machines; I
think this is evidence that the idea is fundamentally flawed. #1 just makes it
pale next to languages like Python or even JavaScript.

~~~
davexunit
>1) It sure is awkward to represent struct fields 1, 2, 3 as (cdr struct),
(cadr struct), (caddr struct)

Every Scheme implementation I know of supports record types aka SRFI-9[0]. No
one actually makes new data types from cons cells.
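
For reference, a minimal SRFI-9 record definition looks like this (a sketch
with made-up type and field names):

    ;; Named constructor, predicate, accessors, and modifiers,
    ;; instead of threading (car p) / (cadr p) by hand.
    (define-record-type point
      (make-point x y)
      point?
      (x point-x set-point-x!)
      (y point-y set-point-y!))

    (define p (make-point 3 4))
    (point-x p)         ; => 3
    (set-point-y! p 7)
    (point-y p)         ; => 7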

>2) Scheme code is very imperative!

Scheme supports many programming paradigms. Imperative programming is one.
Functional programming, object-oriented programming, and relational
programming are others. Not all Scheme code is imperative.

>3) It is awkward to represent environments with assoc lists.

Every Scheme implementation I know of has traditional mutable hash tables.
Sometimes you want a hash table, sometimes you want an alist. It depends.
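
A quick sketch of the trade-off, using the R6RS hashtable API that Chez
supports (other implementations have their own hash table APIs):

    ;; Association list: O(n) lookup, but persistent and cheap for tiny maps.
    (define env '((x . 1) (y . 2)))
    (cdr (assq 'x env))          ; => 1

    ;; R6RS hashtable: O(1) expected lookup, mutable.
    (define ht (make-eq-hashtable))
    (hashtable-set! ht 'x 1)
    (hashtable-ref ht 'x #f)     ; => 1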

>4) Macros also seem to have a needlessly different syntax than regular
functions.

I don't really understand this point. Are you talking about syntax-rules? If
so, then I must disagree. syntax-rules is a very elegant language for defining
hygienic macros.
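
As an illustration, a hygienic `while` loop takes only a few lines of
syntax-rules (a sketch; `while` is not part of standard Scheme):

    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (when test
             body ...
             (loop))))))

    (define i 0)
    (while (< i 3)
      (display i)
      (set! i (+ i 1)))    ; prints 012

Because the expansion is hygienic, the `loop` introduced by the macro cannot
capture a user variable that happens to be named loop.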

SICP does _not_ teach you everything that Scheme has to offer, and the Scheme
implementations of the 1980s are a lot different than the Scheme
implementations of 2016.

[0]
[http://srfi.schemers.org/srfi-9/srfi-9.html](http://srfi.schemers.org/srfi-9/srfi-9.html)

~~~
chubot
OK, point taken about #1. It is valuable to have the basic axioms and then
separate syntactic sugar.

But #2 and #3 are what I would call bootstrapping problems... in other words,
there is a reason that C is the foundation of computing rather than Lisp. I
don't think anybody really thinks otherwise anymore. But for example, set-cdr!
is not in the lambda calculus, and you need it even for basic things.

Likewise, Scheme implementations have mutable hash tables, but they're written
in C and not Scheme. I don't know how you even write a hash table based on
cons cells rather than O(1) indexing.

Regarding #4, here is a good link. The basic idea is that macros could just be
functions on lists, and then you get composition of macros like you have
composition of functions. Paul Graham incorporated this into Arc.

[http://matt.might.net/articles/metacircular-evaluation-and-first-class-run-time-macros/](http://matt.might.net/articles/metacircular-evaluation-and-first-class-run-time-macros/)
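
In a defmacro-style system of the kind that article describes, a macro really
is just a procedure from list structure to list structure. A sketch (the
`expand-swap!` name is made up; this is not how Scheme's syntax-rules works):

    ;; A hypothetical swap! macro written as an ordinary function on lists.
    (define (expand-swap! form)
      (let ((a (cadr form)) (b (caddr form)))
        `(let ((tmp ,a))
           (set! ,a ,b)
           (set! ,b tmp))))

    (expand-swap! '(swap! x y))
    ;; => (let ((tmp x)) (set! x y) (set! y tmp))

Composing such macros is just function composition -- at the cost of hygiene
(a user variable named tmp would be captured by the expansion).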

My point is you could say Scheme sits at a somewhat awkward place between "not
foundational enough" (not fully bootstrapped) and "awkward in practice"
(compared to say Python). Though I wouldn't go as far as to say that...
Obviously it was groundbreaking work that influenced Python and R and tons of
stuff we use today. It's outstanding research, but it feels like it has been
almost fully incorporated into the computing culture now.

~~~
davexunit
>there is a reason that C is the foundation of computing rather than Lisp. I
don't think anybody really thinks otherwise anymore.

C is not the foundation of computing. Why would you say this?

>Likewise, Scheme implementations have mutable hash tables, but they're
written in C and not Scheme.

A native code compiler written in Scheme would have its hash table
implementation also written in Scheme.

>I don't know how you even write a hash table based on cons cells rather than
O(1) indexing.

You wouldn't do that! Cons cells are not the only primitive data type! Another
primitive type in Scheme is the vector, which is a mutable array.
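
To make that concrete, here is a toy chained hash table built from nothing but
a vector and pairs -- no C involved. (A sketch: fixed size, no resizing,
symbol keys, a deliberately crude hash function; set-cdr! is R5RS-style and
lives in (rnrs mutable-pairs) under R6RS.)

    (define size 16)
    (define buckets (make-vector size '()))

    (define (hash key)                       ; crude placeholder hash
      (modulo (string-length (symbol->string key)) size))

    (define (ht-set! key val)
      (let* ((i (hash key))
             (pair (assq key (vector-ref buckets i))))
        (if pair
            (set-cdr! pair val)
            (vector-set! buckets i
                         (cons (cons key val) (vector-ref buckets i))))))

    (define (ht-ref key default)
      (let ((pair (assq key (vector-ref buckets (hash key)))))
        (if pair (cdr pair) default)))

    (ht-set! 'x 42)
    (ht-ref 'x #f)    ; => 42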

I'm sorry, but you greatly misunderstand Lisp and how compilers work.

~~~
coldtea
> _C is not the foundation of computing. Why would you say this?_

Because all the popular OSes, drivers, userlands, servers, GUI libraries, and
compilers/languages are 99% written in C (or C++ which is close enough).

~~~
TeMPOraL
That's ignoring all the computing work done before C. C and UNIX were huge
steps back in computing, that we're only slowly beginning to recover from.

~~~
pjmlp
There's a quote from a famous woman in the history of computing -- I don't
recall which one -- about C setting the progress of compiler optimizations
back to the dawn of computing.

~~~
groovy2shoes
Perhaps it was Fran Allen. In _Coders at Work_ , she has quite a few things to
say on the topic:

\--- Begin Quote ---

-Seibel-: When do you think was the last time that you programmed?

-Allen-: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.

The nubbin of the debate was Steve's defense of not having to build optimizers
anymore because the programmer would take care of it. That it was really a
programmer's issue. The motivation for the design of C was three problems they
couldn't solve in the high-level languages: One of them was interrupt
handling. Another was scheduling resources, taking over the machine and
scheduling a process that was in the queue. And a third one was allocating
memory. And you couldn't do that from a high-level language. So that was the
excuse for C.

-Seibel-: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?

-Allen-: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.

By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL,
Algol 60. These are higher-level than C. We have seriously regressed, since C
developed. C has destroyed our ability to advance the state of the art in
automatic optimization, automatic parallelization, automatic mapping of a
high-level language to the machine. This is one of the reasons compilers
are... basically not taught much anymore in colleges and universities.

\--- End Quote ---

(taken from pp. 501-502)

~~~
pjmlp
Yep that one.

~~~
eggy
Great quote. I had never read that one. The irony of the language wars: the
monolith that is C vs. the many higher-level languages that let a human code
according to his mental abstractions and cognitive ability, rather than
memorizing machine- or OS-specific facts that don't carry over to newer
architectures. And all this on HN, running on a Lisp (Arc) that once sat upon
an academic Scheme now called Racket.

I agree C is necessary for low-level programming, but for the meat of all the
other applications and usages, higher-level languages are needed. I don't mind
programming certain things in C, but I thoroughly enjoy the mental exercise
when programming in the J programming language, Lisp, or Forth. Yes, Forth. C
programmers can have the Earth; I would like to be coding with the fellas who
design mission-critical software for satellites like Rosetta, and groups like
NASA and the ESA, using Forth in Rosetta's case [1].

Slim Whitman sold more records than the Beatles, or so I think I heard that on
a late-night TV commercial back in the 80s, but I never owned a record from
him ;)

    
    
      [1]  http://adsabs.harvard.edu/full/2003ESASP.532E..72B

~~~
tbirdz
This is a bit off topic, but would you mind sharing what resources you used to
learn forth? I'm interested in the language myself, but I don't really get how
to use it. I've learned the basics about defining words and manipulating the
stack, but whenever I try to apply it to an actual problem, I have a very hard
time even understanding how to approach it, whereas if I were using a
procedural or OO language, I would know where to start.

Any advice on getting over this hump?

~~~
eggy
There are the main books everyone refers to, like 'Thinking Forth' [1], and
others, but I really learned faster by picking up Factor [2] and Retro [3].
The community around Factor is very helpful and smart. I actually wrote my
first real Forth program for work in Factor. I had written a lot of one-liners
and some curiosities, but this was a business need met in one night: basically
a program to munge a whole lot of tab-delimited text files, do some math on
fields, and then generate a report. It was all < 100 LOC, and it was easy to
test interactively in the interpreter whether I was missing anything.

Retro is cool since it is so minimal, and there are add-ons for Chrome so you
can run it in your browser locally to try it; otherwise, you download the
image file for your platform, and Retro itself. Retro was built on a VM called
Ngaro, which is an interesting diversion, but not Forth.

There are also microcontroller ports of Forth [4], which I found a lot easier
than using the assembler for a particular microcontroller. FlashForth has been
recently updated and is featured in a 'Circuit Cellar' magazine article. I
have also purchased iForth [5], since I had stopped dual-booting Linux at the
time, and it offered a lot for Windows and for (fast) mathematics.

I hope these help you get started. If you are a maker, using an Arduino or
PIC-like chip with FlashForth is great. The whole dictionary, or word list
(all the defined functions, called 'words' in Forth), is listed in the
reference I provided. In Forth you can easily add to the dictionary; it is
concise enough to know your whole system in a short time, and to add to it
when you need to. Good luck!

    
    
      [1]  http://thinking-forth.sourceforge.net/ 
      [2]  https://factorcode.org/
      [3]  http://forthworks.com/retro/
      [4]  http://flashforth.com/tutorials.html
      [5]  http://home.iae.nl/users/mhx/

~~~
tbirdz
Thanks for the reply. Those look like some pretty cool resources, and I can't
wait to check them out.

------
FraaJad
IIRC, this was a high-performance Scheme developed at Indiana University that
was closed source for a long time.

Good on Cisco for open sourcing it.

I'm interested to hear what regular scheme programmers feel about this news.

~~~
hga
And aside from the JVM implementations, and some works in progress the last
time I checked, it's the only one with native instead of green threading.

~~~
jbclements
Racket has native threads, called "places".

~~~
hga
Not in the shared memory way I mean (which I did not make clear), per
[http://docs.racket-lang.org/guide/parallelism.html](http://docs.racket-lang.org/guide/parallelism.html)

 _The racket/place library provides support for performance improvement
through parallelism with the place form. The place form creates a place, which
is effectively a new Racket instance that can run in parallel to other places,
including the initial place. The full power of the Racket language is
available at each place, but places can communicate only through message
passing—using the place-channel-put and place-channel-get functions on a
limited set of values—which helps ensure the safety and independence of
parallel computations._

Compare to current Guile, where the documentation says sharing a hash table
without using a mutex will not corrupt memory, but probably won't give you the
results you desire.

------
lispm
Cisco prices from 2013

    
    
        SP-SW-LMIX0CH0 SP BASE Chez Scheme Dev Env for Wind,  Per Unit  $65.00
        SP-SW-LMIX0CHL SP BASE Chez Scheme Dev Env for Linux, Per 10    $325.00
        SP-SW-LMIX0CHW SP BASE Chez Scheme Dev Env for Wwind, Per 10    $325.00
        SP-SW-LMIX0CH1 SP BASE Chez Scheme Dev Env for Apple Mac,Per 10 $325.00
        SP-SW-LMIX0CHA SP BASE Chez Scheme Developm $65.00
        SP-SW-LMX01CHL SP BASE Chez Scheme Developm $65.00

~~~
mayoff
Chez Scheme price schedule I received in 2002:

    
    
        Chez Scheme Version 6
        Software License Fee Schedule
        V60901f
        
        Supported machine types:
           Intel 80x86 Linux 2.x
           Intel 80x86 Windows 95/98/ME/NT/2000
           Silicon Graphics IRIX 6.x
           Sun Sparc Solaris 2.x
        
        
        Classification                                  License fee (USD)
        -----------------------------------------------------------------
        Single Machines
          first machine per machine type                            $4000
          each additional machine                                    3000
        -----------------------------------------------------------------
        Site
          first machine type                                         9000
          two machine types                                         14000
          three or more machine types                               19000
        -----------------------------------------------------------------
        Academic Site (for qualified academic institutions)
          first machine type                                         4500
          two machine types                                          7000
          three or more machine types                                9500
        -----------------------------------------------------------------
        Corporate
          each machine type                                         24500

------
sheepleherd
For people interested in the legalities of licenses, it's released under the
Apache License 2.0, which is a "free software" opensource license that is
compatible when combined with GPLv3, but not with GPLv1 or v2.

The Apache 2.0 license includes not just a copyright grant but a patent grant,
so the software will contain no hidden patent restrictions for patents owned
by the creators and contributors.

~~~
jordigh
Why did you quote "free software" but not opensource? I am curious to know
what difference you intended to convey.

~~~
TuringTest
That may be because "free software" is still an ambiguous term, while open
source is relatively unambiguous. I prefer to use Free Software as my
disambiguator of choice, but I understand the GP using the other form, and
occasionally use it myself.

~~~
jordigh
In my experience, people frequently think that "open source" means a bunch of
different things, such as "visible source code" (and nothing more) or
"opposite of commercial" (i.e. no money allowed) or "inviting public
participation" (as in "open source governance").

------
dmpk2k
What is the library ecosystem like? This is ultimately what limits all other
Scheme implementations.

~~~
kkylin
That is indeed a great question. Anyone know? I used to use Scheme for
prototyping numerical code, etc., but have switched to Julia. Partly because
Julia is more convenient in some ways, but mainly because of access to
libraries, both native Julia libraries and Python libraries (via PyCall). I
personally still prefer Scheme as a language, but missing libraries is a real
problem.

------
webkike
Making fast scheme interpreters is something I always come back to, a timeless
exercise that makes for a great way to decompress over a week (wow I'm a
nerd). I'm excited to find some nuggets of micro-optimized gold!

~~~
j-pb
Compiler. Chez scheme is compiled with a nano-pass compiler.
[https://www.youtube.com/watch?v=Os7FE3J-U5Q](https://www.youtube.com/watch?v=Os7FE3J-U5Q)

~~~
webkike
Compiler what? I simply said that I enjoyed writing fast scheme interpreters.

~~~
j-pb
You said it again. It's a compiler, not an interpreter.

~~~
sesquipedalian
It's both a compiler and an interpreter. Read up on Petite Chez Scheme.

~~~
j-pb
Yeah, most compiled Lisps need to do some interpretation in case an unexpected
eval comes around ^^

------
xenophonf
It's about time! Although I'm curious---how did Cisco end up owning it?

~~~
brianobush
If you look at scheme.com, you will see Copyright © 2011 Cadence Research
Systems. Cisco bought Cadence back in 2012.

~~~
iheartmemcache
(Cadence, for those who don't know, is basically the equivalent of SolidWorks
for _anything_ in EE. They are the only company ( _maybe_ Synopsys or Mentor,
but I think there are a few holes here and there) on the planet that lets you
go from designing something as simple and low-level as an analog jellybean
op-amp (with state-of-the-art EM and S-param simulation) and verification, to
RTL design (full power simulation and everything), to piecing together
TSMC[edit: learn2proof-read, self]-based fab designs on their 28nm processes
(ASICs or SoCs), to laying out boards at the Altium level. You're paying
around 100k/yr/seat for all this, but if you're actually using the whole
feature set it's worth every cent.

Other than SolidWorks (which has everything from industrial machining of stock
cold-rolled steel and finite-element analysis of your components all the way
to computational fluid dynamics of anything your engineering heart could
desire), I've never seen a company cover an industry so thoroughly, so
well[1].)

[1] Maybeeee Adobe has equal coverage re: for vector work with
Illustrator/image manipulation with PS/page layout and typesetting with
PageMaker/Indesign and doing post on Video with After Effects and such. And I
don't think anyone would disagree, the degree of complexity for this genre of
software is on a different tier.

edit: Haha. Confused Cadence _Design_ Systems with Cadence _Research_ Systems.
Thanks for correcting me. Mea culpa. Keeping this up just for continuity sake.
Viva la Scheme though! Have an upvote, @beering ;)

~~~
beering
I'm _pretty_ sure that the Cadence you're thinking of is not the Cadence that
owned Scheme. Cadence Research Systems was afaik a tiny entity that mainly
owned Chez Scheme.

~~~
nickpsecurity
Yeah, it's a different company. I doubt Cisco could even afford an EDA vendor
other than little Mentor. Parent is also wrong about Cadence being the only
one that can do each of the major jobs: all of them have products for that.
Too many to keep track of, actually.

Mentor has an advantage since they acquired Tanner, best of the budget ones.

------
rrnewton
I used Chez Scheme for many years and loved its lightning fast compile times.
For example, I'm not aware of any other full-scale compiler that can compile
itself as fast as Chez can.

~~~
PeCaN
The Oberon System can build the whole compiler, OS, and applications in around
3 seconds.

~~~
pjmlp
I look forward to the day someone takes the effort of writing a bare metal
runtime for Go and producing something like "Goberon", given the influence.

~~~
goldbrick
This is pretty close:
[https://github.com/deferpanic/gorump](https://github.com/deferpanic/gorump)

~~~
pjmlp
A step in the right direction, but I was thinking more about Oberon System 3
with its Gadgets UI framework.

------
Johnny_Brahms
Looking forward to having this integrated into geiser/emacs. I worked with
Chez recently, and it is a really high-quality Scheme implementation.

~~~
dman
Any details on what you were using it for? What did you like about it vs
something like Racket?

~~~
Johnny_Brahms
We have a macro expander for Pascal written in Scheme (a quick and dirty draft
by me that worked so well it stayed). We had some performance problems with
some crazy recursive macros (it generates a _lot_ of code -- don't ask, I am
not allowed to talk much about it), so I investigated porting it to Chez.

Instead I just switched to guile (scheme implementation) trunk and got a 3x
speedup. Did some optimizing work and it ended up at 4x, which is good enough.

~~~
davexunit
What Scheme implementation did the code originally use?

~~~
Johnny_Brahms
The Guile 2.0 branch. I don't know what magic optimisation dust they sprinkled
over the upcoming 2.2, but it sure is fast.

We thought about using Chicken, but the code depends quite a lot on using
syntax-case to deconstruct everything, and I didn't want to learn their
implicit renaming stuff.

Apparently the 2.2 branch has full elisp support. Can't wait for Emacs to run
on it.

~~~
davexunit
Ah, that makes sense! Guile 2.2 has a completely rewritten compiler and
virtual machine. I'm happy to see some real-world instances of it greatly
improving performance.

~~~
Johnny_Brahms
"Greatly improving performance" is an understatement! It was literally 3x. I
didn't even have to change anything. Not bad for a language that usually beats
Python by quite a large margin :)

------
bitmadness
Someone explain to me: Chez vs Gambit vs Chicken vs Bigloo. Which do I pick?
Especially interested in parallel/multithreading abilities, standards
compliance, and overall performance.

~~~
jopython
Out of the four you mentioned, only Chez has true posix threads.

~~~
justinethier
Bigloo has support for posix threads. The only one on the list that absolutely
does not have them is Chicken.

------
abc_lisper
Why should I be interested in this - given there are many free high quality
implementations for Scheme like Racket, Gambit etc?

~~~
rntz
Chez is known for being very fast, and for using state-of-the-art compiler
technology.

~~~
abc_lisper
Any numbers?

~~~
soegaard
Look at Clinger's benchmarks for Larceny.

------
nickpsecurity
This is great -- I was hoping this would happen ever since a paper tracing its
development from the 8-bit days was posted here. I was impressed, but knew it
needed a community and an OSS license.

Good.

------
mark_l_watson
Very cool!

I had trouble building it on Linux when I set --installprefix= to a
non-standard location, but it built fine using the defaults. Nice!

On OS X, I have a clang version of gcc installed and perhaps because of that
my build broke.

~~~
mark_l_watson
UPDATE: building on OS X:

I installed gcc-5 using brew, set "alias gcc=gcc-5", and then ./configure ;
sudo make install worked fine.

------
amttc
Wow, I never thought they'd open source Chez. This is really cool.

------
niccaluim
Wow, this takes me back. I took intro CS at IU in 1993. At that time they were
still teaching Scheme, using George Springer's _Scheme and the Art of
Programming_ and something like _The Little Schemer_ (but not that because I
guess it didn't come out for another two years). Delightful language with a
really clean library. I always found Common Lisp's naming conventions to
be—dare I say it?—PHP-esque in their irregularity. Scheme, meanwhile, actually
_has_ naming conventions. :)

~~~
ams6110
The Little LISPer (precursor to the Little Schemer) was definitely in print
and used at IU in the 1980s. Maybe that was it?

~~~
niccaluim
That must've been it.

------
anta40
A quote from BUILDING: "Building Chez Scheme under Windows is currently more
complicated than it should be. It requires the configure script (and through
it, the workarea script) to be run on a host system that supports a compatible
shell, e.g., bash, and the various command-line tools employed by configure
and workarea, e.g., sed and ln. For example, the host system could be a Linux
or MacOS X machine. The release directory must be made available on a shared
filesystem, e.g., samba, to a build machine running Windows. It is not
presently possible to copy the release directory to a Windows filesystem due
to the use of symbolic links."

Has anyone managed to build the Windows version?

------
agumonkey
Pretty great, always wanted to look at the implementation.

------
defvar
Wow, this is big news! I have heard so much praise about it and have always
wanted to use it. Now a dream comes true. Thanks!

------
blaket
anyone know how this became cisco's to release?

~~~
detaro
[https://news.ycombinator.com/item?id=11572733](https://news.ycombinator.com/item?id=11572733)

------
dschiptsov
This is good news. The code is worth reading. It is as good as MIT Scheme, and
less esoteric than Gambit.

