This comment doesn't really tell us much. Was the book missing its cover? Did you not realize what the book was about? Did you dislike the presentation style?
I am a scheme/lisp/clojure fan by heart, so it definitely wasn't the parens.
(This really did occur, but it was just glue.)
The "state" of the system St is defined here https://github.com/walck/learn-physics/blob/f6ce9bbaece24f48...
It's amazing to see code and implementation details that match the physics language.
I wonder if this codebase is Elm-able? I.e., does it use any Haskell features that aren't in Elm? I'd love to see this in the browser. Weekend project, anyone?
I think that there could be books on poetry (analyze meters, language, cross-references, symbolism, poet connections), literature, biology, math, psychology, sociology, economics, ecology, chemistry, etc. through programming.
Some of those exist, but there should be more.
[Free book] https://mitpress.mit.edu/sites/default/files/titles/content/...
The purpose of the course is to strengthen a student’s understanding of basic physics by learning a new language (Haskell), and instructing the computer to do physics in that language. Our attitude was strongly influenced by the work of Papert, and the subsequent work of Sussman and his coworkers[7, 8].
 Gerald Jay Sussman & Jack Wisdom (2001): Structure and Interpretation of Classical Mechanics. The MIT Press.
 Gerald Jay Sussman & Jack Wisdom (2013): Functional Differential Geometry. The MIT Press.
For example, finding a square root using Newton's method, or implementing the Fermat primality test, in Lisp. Such problems can be seen as either fun or annoying, depending on your background and how much you enjoy learning this material alongside the programming concepts. They do tie together nicely; it's just something I'd mention to someone looking into studying the book.
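To give a flavour of the Fermat test exercise, here's a sketch, written in Haskell rather than SICP's Lisp. One deliberate simplification on my part: SICP uses random witnesses, while this uses a few fixed ones, which is slightly weaker.

```haskell
-- powMod computes base^e mod m by successive squaring.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 _ = 1
powMod b e m
  | even e    = let h = powMod b (e `div` 2) m in (h * h) `mod` m
  | otherwise = (b `mod` m * powMod b (e - 1) m) `mod` m

-- Fermat's little theorem: for prime n and 0 < a < n, a^(n-1) == 1 (mod n).
-- SICP draws random witnesses a; fixed ones are used here for simplicity.
fermatTest :: Integer -> Bool
fermatTest n
  | n < 11    = n `elem` [2, 3, 5, 7]
  | otherwise = all (\a -> powMod a (n - 1) n == 1) [2, 3, 5, 7]

main :: IO ()
main = print (filter fermatTest [2 .. 30])  -- [2,3,5,7,11,13,17,19,23,29]
```

As in the book, this is probabilistic in spirit: Carmichael numbers can fool it, which SICP itself points out.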
SICP covers a lot of the higher-level concepts we use day-to-day, but in such a way that you can grasp lower-level concepts along the way.
That's let me write code faster, understand compiler stack traces better, grasp why some code works in a language while similar code doesn't, and get a handle on optimisation where I need it.
Everyone seems to get a little different benefit out of it, but most people I know have got some benefit, even just from the first few exercises or lectures.
Unfortunately, it gave me my favourite area of CS: Language design. Which isn't exactly a career track, being so niche.
SICP is a book that I took my time to read. I read the whole thing cover to cover. Some days I would read many pages, some days fewer. I read a lot of it on a vacation. Don’t remember if I got through the whole thing then or if it took even more time.
SICP was so good that I plan on reading it again.
I recommend looking at the exercises and thinking about them a bit, but like the other commenter said, don’t get bogged down in them.
But I had forgotten many of the details. Here is just one: in 3.1.1, Local State Variables, SICP shows how to, in effect, define classes and create objects using only function definition with lambda and set! (that is, assignment) in the definition body. The function you define this way is like a class; the functions it returns are like instances. So you can do all this without any specifically object-oriented language features. This occupies just a few pages near the beginning of a long book - the whole book is dense with worked-out ideas like this.
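For contrast, here's that same trick transposed into Haskell (my own sketch, using an IORef for the mutable local state, since Haskell has no set!): the "class" is a function that allocates private state and returns the "methods" that close over it.

```haskell
import Data.IORef

-- SICP 3.1.1's trick in Haskell: makeAccount allocates private state and
-- returns "methods" (withdraw, deposit) that close over it.
makeAccount :: Int -> IO (Int -> IO Int, Int -> IO Int)
makeAccount balance0 = do
  balance <- newIORef balance0   -- the local state variable
  let withdraw amount = do
        modifyIORef balance (subtract amount)
        readIORef balance
      deposit amount = do
        modifyIORef balance (+ amount)
        readIORef balance
  return (withdraw, deposit)

main :: IO ()
main = do
  (withdraw, deposit) <- makeAccount 100   -- an "instance"
  print =<< withdraw 30                    -- 70
  print =<< deposit 5                      -- 75
```

Each call to makeAccount produces an independent balance, just as each call to SICP's make-account does.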
The cumulative effect of all these examples in SICP is to demonstrate that you can solve any programming problem from first principles - that is, by combining a few simple but powerful constructs in the several ways they describe. Moreover, a solution constructed this way might be simpler and easier to understand than a solution made the more usual way: by looking for a specialized language construct or library that seems to offer an already-made solution to the problem.
One of the first examples is Newton's method for approximation. Which many beginning programmers have never encountered.
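For anyone who hasn't seen it, the book's version is roughly this, translated here into Haskell rather than Scheme (the starting guess and tolerance are my choices):

```haskell
-- Newton's method for square roots, SICP-style: start with a guess and
-- improve it by averaging the guess with x divided by the guess.
sqrtNewton :: Double -> Double
sqrtNewton x = go 1.0
  where
    improve guess    = (guess + x / guess) / 2
    goodEnough guess = abs (guess * guess - x) < 1e-9
    go guess
      | goodEnough guess = guess
      | otherwise        = go (improve guess)

main :: IO ()
main = print (sqrtNewton 2)   -- ~1.4142135
```

The point of the exercise isn't square roots; it's that "guess, test, improve" is itself a reusable process.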
Also, I think right in the beginning of the very first lecture of the series Abelson talks about Computing as a way of codifying processes. In the link I included, Sussman says flat out that what he's interested in is using Computing as a better way to teach physics.
SICP is solid fundamentals.
For folks with a programming background, I suspect it would be hard to do better than Matter and Interactions by Chabay and Sherwood. (Someone already mentioned this elsewhere in the comments here.) It makes substantial use of Python code in the text and problems, and (partly enabled by that) it takes a delightfully modern approach that discusses the true (relativistic) formula for momentum (etc.) very early on. I haven't taught out of it yet, but I've been very tempted.
I'm currently teaching the intro course using Tom Moore's Six Ideas that Shaped Physics series. His approach is a little quirky at times, but usually for good reasons. I like it a lot.
Those are both purely introductory texts (though both cover topics up through "modern" physics: relativity and basic quantum phenomena). Beyond that, well, that's a bigger question.
Books are hard reading, though, so video lectures are the next best option. Walter Lewin does a good job with this; try googling his lectures.
Also, you'll probably want to stay away from Feynman lectures until you actually understand the theory.
The book is amazing, since it introduces advanced concepts like group theory and Green's functions very early, with very simple examples from classical mechanics.
Unfortunately, it is only available in German so far:
This will show you how to do the physics; then Landau & Lifshitz will show you how to reformulate that physics (again, it is actually in the Feynman lectures too) in the terms we use in modern physics (variational methods, etc.).
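For readers who haven't met "variational methods" yet, what's meant is the principle of stationary action, which is standard textbook material:

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```

The trajectory actually taken makes the action S stationary, and demanding that yields the Euler-Lagrange equations of motion.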
With that in mind, Landau/Lifshitz is an entirely different class. I'm a practicing astrophysical theorist and I would never claim to have grokked these volumes in anything nearing completeness. A serious post-PhD program of study could be devoted to understanding the finer points of those books; this is a pursuit that would challenge any practitioner. (I place Physics of Shock Waves and High-Temperature Hydrodynamic Phenomena by Zel'dovich and Raizer in the same category of extremely information-dense Soviet tracts on physics.) They are incredibly valuable, although I do think there is a bit of an element of "if you don't know L/L, you aren't a real theorist."
I think that recommending these texts is great for someone down the road, but if someone hasn't even had a full year of intro physics recommending these is not appropriate. Intro-level textbooks exist for a reason - not all of them are cheap, watered down versions of the "real stuff." (Some are, of course.) There are also some texts I've found (but don't remember, unfortunately) in the computer vision community that introduce some physics that would be great for someone with a computer science/engineering background.
edit: "cheap" as in "cheap-feeling", not inexpensive; most of those intro texts are more expensive than the graduate-level ones.
The downside to them is that they don't come into contact with the "mathematical physics" side, e.g. the treatment of the Lagrangian/Hamiltonian formalism through differential geometry.
Here is an example of a program, complete with 3D graphics.
Maybe a student learning this will go on to build an HDL for analog circuits in Haskell, one that's easier to get correct or to simulate because it's in Haskell. Prior work includes using Haskell for digital design (i.e. Bluespec) and non-Haskell HDLs for analog components. There are definitely more possibilities for developing or modelling electronics in Haskell. :)
The issue is that for most things you'd want (+) to be defined like:
(+) :: a -> a -> a
But actually, to be more flexible it could be defined as:
(+) :: a -> b -> a
So then all the normal stuff would be supported, as now, but you could also support things like:
class BetterNum a b where
  (+) :: a -> b -> a

instance BetterNum DateTime Duration where
  (+) = addDuration  -- some DateTime -> Duration -> DateTime function
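A runnable sketch of that idea, using MultiParamTypeClasses and toy stand-ins for DateTime/Duration (the types and names here are hypothetical, not from any real date library):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
import Prelude hiding ((+))
import qualified Prelude

-- Hypothetical heterogeneous addition class, in the spirit of the comment:
-- the result type matches the left operand.
class BetterNum a b where
  (+) :: a -> b -> a

-- Toy stand-ins for DateTime/Duration: a timestamp and an offset, in seconds.
newtype DateTime = DateTime Int deriving (Show, Eq)
newtype Duration = Duration Int deriving (Show, Eq)

instance BetterNum DateTime Duration where
  DateTime t + Duration d = DateTime (t Prelude.+ d)

-- Ordinary numeric addition is recovered as the homogeneous case.
instance BetterNum Int Int where
  (+) = (Prelude.+)

main :: IO ()
main = do
  print (DateTime 1000 + Duration 60)   -- DateTime 1060
  print ((2 :: Int) + (3 :: Int))       -- 5
```

One design cost worth noting: with two independent class parameters, the homogeneous case often needs type annotations (as on the Int literals above), since nothing ties the right operand's type to the left's.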
Haskell’s type system does not prevent you from overloading the + operator. It’s the standard library (the “Prelude”) that defines + to work only on numbers and not on vectors; Haskell’s type system itself allows you to define + to do anything you want.
And for what it's worth, I certainly agree that the way Num is structured is a wart in Haskell, and I think the previous two commenters agree. It's just that we should be clear about where the blame lies - and that's not the type system.
Why is that bad? In Haskell, + is reserved for Num, which requires other properties that you can't guarantee for vectors. It'd be pretty bad to run afoul of that in a physics conversation.
Haskell certainly can let the + operator mean multiple things in multiple contexts.
Yes, as long as they have the structure your classes demand. It's not exactly Haskell's type system that is bad; it's the numeric classes that are a bit of a mess.
Math has a much cleaner structure that, for example, does not demand that you can convert any integral value into one of your numbers. Haskell tried to simplify the hierarchy, but wasn't successful.
Fields in Physics are like Curried Functions or Physics for Functional Programmers: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.1...
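The paper's slogan is easy to illustrate: a field assigns a value to every point of space, so a field is just a function of position, and partially applying a physical law to its other parameters yields fields. A toy example (the names and dropped constants are mine):

```haskell
-- A field assigns a value to every point of space: a field is a function.
type Position = (Double, Double, Double)

-- Toy scalar potential of a point charge at the origin (constants dropped).
-- Partially applying it to the charge q yields a field: Position -> Double.
potential :: Double -> Position -> Double
potential q (x, y, z) = q / sqrt (x * x + y * y + z * z)

main :: IO ()
main = do
  let phi = potential 2.0   -- currying: fixing q gives a field
  print (phi (3, 0, 4))     -- 2 / 5 = 0.4
```

Currying makes "the field of this particular charge" a first-class value you can pass around, sample, or sum with other fields.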
How would this approach generalize to more modern physics?