Alan Kay's reading list: http://www.squeakland.org/resources/books/readingList.jsp
Bret Victor's reading list:
Since we're on the topic of Sussman, has anyone here read through SICM? I've heard that the code is difficult to get to work, but does anyone have an opinion on the rest? I haven't had a chance to read it yet.
Anyone read both Lanczos and SICM?
The SICM code really only works with a specific Scheme system (MIT/GNU Scheme with the scmutils library), so if you have that it should be fine.
How can the code be difficult to get to work if he ships the required package with the book?
Note specifically that his theory, although debunked, still lives on in philosophy and literary analysis. This "stickiness", where people refuse to give up a theory even after it has been proven wrong, is a further bad sign.
Recovered memory therapy is a recent failure http://en.wikipedia.org/wiki/Recovered-memory_therapy
Freud's major contribution to psychology was that we actually have a dynamic subconscious that profoundly affects how we live our lives. This aspect of his theory has become so ingrained in our culture that it's hard to imagine the world before Freud. Also, that aspect of his theory has held up over the years.
Also, he got a number of things correct: many of his coping mechanisms have strong empirical support, for instance.
Freud was wrong in detail, but his overarching approach changed psychology for the better.
Yes, I know this "history" is a vast oversimplification.
The psychologists I've worked with design experiments to attempt to determine fundamental mental capabilities in terms of perception, memory, spatial reasoning, etc. and how this can be applied to design safe and effective user interfaces, in particular for safety critical systems like aircraft.
Sure, the models they develop are likely just useful approximations, but it seems that models at the neuron level would be too unwieldy for these sorts of questions anyway.
In the 1950s, behaviorism fell out of favor (primarily because it lacked explanatory power in some areas, especially language).
(I didn't do this myself with QCD, but I very nearly did it with SICP in middle school.)
I didn't understand much of the algorithms or the quantum computing when I first read the lecture notes the book is sourced from. It was still worth it.
Now I'm about halfway through Gödel, Escher, Bach, and I have to say that GEB and QCSD feel similar, with an overlap not only in theme but also in genre and style.
It might be a bit overkill, but if you go over the main chapters of Arora and Barak you should have more than enough background in complexity theory for your purposes.
It looks like the text is freely available; I skimmed through the first chapter and it makes sense to me so far (I don't know how long that would hold true). I've been looking for a basic probability text for some time now, nothing too heavy but something to compensate for not having taken enough math in college.
I read Time Enough for Love every few years. Each time I can't wait for enough years to go by until I've forgotten enough to read it again.
* By Kenneth Man-Kam Yip, 1989
* Coolest PhD thesis ever!
* Solve problems using graphs.
* So cool!
It is indeed one of the coolest papers and programs ever. KAM is a smart ODE solver, written in ZetaLisp on a Symbolics machine. It analyzes 2D point sets created by 2D equations, especially non-linear ones: typically a system of ordinary or partial differential equations with a set of boundary and initial conditions, i.e. a typical non-linear physical system.
It builds MSTs (minimum spanning trees) over the calculated points to recover the shape and number of curves, to count the clusters (by checking the distances between curves), and to decide whether the curves are linear or space-filling.
Then the phase space is searched for initial states and end conditions, in order to produce useful summaries. It cannot do shape matching, though, so repetitions and mirrorings are not detected as such.
The goal is to produce high-level descriptions of the model and the numerical dataset, and to find the parameter ranges and conditions under which the system falls into chaos. Chaotic systems are bad for predictability and therefore mostly bad for engineering purposes.
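To make the MST step concrete, here is a minimal Python sketch of the idea (my own illustration, not Yip's ZetaLisp code): build a minimum spanning tree over a 2D point set with Prim's algorithm, then cut edges much longer than the typical edge; the number of remaining connected components is the number of curve-like clusters. The `factor` cutoff is an assumed heuristic, not something taken from the thesis.

```python
import math

def mst_edges(points):
    """Prim's algorithm over the complete graph; returns (length, i, j) edges."""
    n = len(points)
    in_tree = [False] * n
    in_tree[0] = True
    best = [(math.dist(points[0], p), 0) for p in points]  # (dist to tree, parent)
    edges = []
    for _ in range(n - 1):
        j = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i][0])
        edges.append((best[j][0], best[j][1], j))
        in_tree[j] = True
        for i in range(n):
            if not in_tree[i]:
                d = math.dist(points[j], points[i])
                if d < best[i][0]:
                    best[i] = (d, j)
    return edges

def count_clusters(points, factor=3.0):
    """Cut MST edges much longer than the median edge; count the pieces left."""
    edges = mst_edges(points)
    lengths = sorted(e[0] for e in edges)
    median = lengths[len(lengths) // 2]
    parent = list(range(len(points)))  # union-find over the kept (short) edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for length, i, j in edges:
        if length <= factor * median:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two sinusoidal curves, vertically well separated: the single long MST
# "bridge" edge between them gets cut, so they show up as two clusters.
curve_a = [(x / 10, math.sin(x / 10)) for x in range(50)]
curve_b = [(x / 10, 5 + math.sin(x / 10)) for x in range(50)]
```

With these inputs, `count_clusters(curve_a + curve_b)` reports 2 and `count_clusters(curve_a)` reports 1; the same tree's edge statistics are what KAM uses to judge whether points trace a thin curve or fill an area.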
I'm certain only a handful of people have read all these books completely.
I don't understand: is data-parallel computing on a GPU much worse somehow? Or is it that there are better sources to read about data-parallel algorithms?
OpenCL is a very awkward way to do vector processing: everything is hard-coded to an abstract model of a typical consumer GPU memory hierarchy. CUDA is even worse, with a ton of versions, each with different limitations depending on what the Nvidia chips can do.
It's awkward to do a lot of SIMD tasks on GPUs. The Connection Machine was a general-purpose SIMD machine, originally designed for parallel graph algorithms.
OpenCL looks like what the Connection Machine C* language might get macroexpanded into prior to compilation: http://people.csail.mit.edu/bradley/cm5docs/CStarProgramming...
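As a toy illustration of the data-parallel model being contrasted here (plain Python, not actual C* or OpenCL), the same computation can be written as one elementwise expression over whole arrays, next to the explicit per-element loop a SIMD compiler conceptually lowers it to:

```python
a = [float(i) for i in range(8)]
b = [2.0 * i for i in range(8)]

# Data-parallel style: one expression applied to every element at once,
# the way a C* parallel variable or an OpenCL kernel launch expresses it.
c_parallel = [x * y + 1.0 for x, y in zip(a, b)]

# The scalar "macroexpansion": each iteration corresponds to one
# work-item/thread on a GPU, or one processing element on a CM.
c_scalar = [0.0] * len(a)
for i in range(len(a)):
    c_scalar[i] = a[i] * b[i] + 1.0

assert c_parallel == c_scalar
```

The complaint above is that OpenCL forces you to write the second form (plus explicit buffer and memory-hierarchy management) where C* let you write something closer to the first.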
(The first volume, at least.)