These days, I think CS algorithms are pretty well packaged in code you can download; they're not even that relevant to most problems most of us face in our work, and being really clueful instead involves more traditional math than CS per se. This might be something like a cycle returning to where it started. I find myself wishing I knew more about probability and statistics (useful all over the place), abstract algebra (fancy type systems use it), a reasonable amount of mathematical logic (program verification), number theory (cryptography), geometry, and maybe even functional analysis and PDEs (machine learning), etc. I'm probably more into math than the average programming geek, but I still feel like a helpless little kid when I try to understand these topics. This is me, wagging my tail in the picture (j/k):
but in all sorts of areas of programming, it always seems like there is a "doorknob principle" just outside my reach. To the extent that I have any motivation to study math these days, I think the above subjects are more likely to be useful than CS theory per se. YMMV.
Groups of chimps > Single chimp roaming the jungle in the darkness.
If I look at mathoverflow with my 6 inch brain, it somehow seems like everyone over there managed to get a 12 inch brain or larger, except under the lo.logic tag. There, tons of the questions are at the basic undergraduate level, which is good for me because I can sometimes understand them, but they are nowhere near the research level expected on the rest of that site.
I do like your formulation about the 6 inch chimp brain.
It's more a Library than a School. There are many differences between those two spaces.
Think about why no one, anywhere in the world, across all cultures, drops their kids off at a library instead of a school...
Really? I took two courses in discrete math and both started with logic. How can you move on to proofs without logic? I can't imagine a "research mathematician" without a sound basis in it.
> 6.042J Mathematics for Computer Science
> 18.200 Principles of Discrete Applied Mathematics
And, prerequisites for the above prerequisites:
> 18.01 Single Variable Calculus
> 18.02 Multivariable Calculus
Overall, you need to have a thorough understanding of high-school math (12th grade), which is not unreasonable.
In fact, I went through most of his book on my own; my electrical engineering degree was enough to follow it and solve the exercises.
Besides, rate of change is high school material: physics and final-year math lessons do cover it.
At least in Eastern Europe they do.
I don't really understand why. This results in many omissions in the school-level curriculum, i.e., giving formulas with no proofs, which leads kids to see math as a kind of meaningless, boring symbol manipulation.
Was there ever a discussion around this problem in the US?
They mostly made sure we were familiar with relevant notation. This somewhat helped in early uni physics-related courses, until calculus caught up.
That was in a math school back in post-Soviet Lithuania, so the Russian approach to math schooling is implied.
On the other hand, that choice of topics is pretty idiosyncratic. There are a lot of big concepts, some pretty tangential, that are covered very briefly and could easily be skipped. So I don't think that exact course is a good outline for a new canon. But I agree with the premise for sure.
I will absolutely credit the course with igniting an intense curiosity in the foundations of computing (and of mathematics in general) but I agree that it’s time for a more modern approach.
I've been implementing the flip-flop gates using SICP wires:
It was neat seeing the concept of "feedback" (a logic gate's output gets fed back in as an input) and then seeing it actually work in the SICP simulator. It immediately went into an infinite loop, but that's correct behavior. The solution was to add a clock line (a wire that goes 0 to 1 and back to 0 over a specific time interval), then update the simulation until the clock line changes. Presto, I had my own flip-flops.
Ended up implementing some shift registers (thanks to Feynman). Lots of fun, and it clarifies the concept of a "cycle" -- I found it really hard to understand what a "cycle" actually meant until implementing the gates.
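For anyone curious, here's roughly the same idea sketched in Python rather than the SICP wire simulator (the NOR-latch construction and all names are my own choices, not from the book): re-evaluate the gate network until it settles at a fixed point, with a clock input gating the set/reset lines so the feedback loop doesn't run forever.

```python
def nor(a, b):
    """NOR gate: 1 only when both inputs are 0."""
    return int(not (a or b))

def d_latch(d, clk, q, qbar, limit=10):
    """Gated D latch from two cross-coupled NOR gates.

    Repeatedly re-evaluate the gates until the outputs stop
    changing -- the "update the simulation until it settles"
    idea from the wire simulator.
    """
    for _ in range(limit):
        s = d & clk            # set line, gated by the clock
        r = (1 - d) & clk      # reset line, gated by the clock
        new_q = nor(r, qbar)   # feedback: each NOR reads the
        new_qbar = nor(s, q)   # other's previous output
        if (new_q, new_qbar) == (q, qbar):
            return q, qbar
        q, qbar = new_q, new_qbar
    raise RuntimeError("did not settle (oscillation)")

# Clock high: the latch follows D and stores a 1.
q, qbar = d_latch(d=1, clk=1, q=0, qbar=1)
# Clock low: D is ignored and the stored bit is held.
q, qbar = d_latch(d=0, clk=0, q=q, qbar=qbar)
print(q, qbar)   # -> 1 0
```

The fixed-point loop is a stand-in for the event-driven simulation SICP actually uses, but it exhibits the same behavior: without the clock gating, cross-coupled gates can oscillate forever.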
Reversible computation is cool: NOT, Controlled NOT (CN), and Controlled Controlled NOT (CCN) give you everything you need to make a reversible universal computer.
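A quick sketch of those gates as Python functions (my own toy encoding over 0/1 bits, not code from Feynman): CN flips its target bit when the control is 1, and CCN (the Toffoli gate) flips its target when both controls are 1. Wire CCN's target to 0 and you get AND; wire CN's control to 1 and you get NOT -- and each gate is its own inverse.

```python
def cn(control, target):
    """Controlled NOT: flip the target iff the control is 1."""
    return control, target ^ control

def ccn(c1, c2, target):
    """Controlled Controlled NOT (Toffoli): flip the target
    iff both controls are 1."""
    return c1, c2, target ^ (c1 & c2)

for a in (0, 1):
    for b in (0, 1):
        # AND from CCN: fix the target input to 0.
        assert ccn(a, b, 0)[2] == a & b
        # NOT from CN: fix the control input to 1.
        assert cn(1, b)[1] == 1 - b
        # Each gate undoes itself: applying it twice is the identity.
        for c in (0, 1):
            assert ccn(*ccn(a, b, c)) == (a, b, c)
```

Note that both gates keep their control inputs among the outputs, so no information is destroyed -- that's what makes them reversible.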
It turns out that reversible computation is far more energy efficient than normal computation. You can understand it with an analogy: think of a pool table, and imagine the cue ball hitting the 8 ball. While the cue ball is in motion (and ignoring friction), no energy needs to be spent -- it's already moving, so you don't need to do anything to make the situation "keep going". It just keeps going on its own.
When the cue ball hits the 8 ball, part of that energy is transferred. In the ideal case, the cue ball stops moving and the 8 ball starts at the same speed.
By arranging pool balls in specific ways, you can mimic an AND gate and an OR gate. The outputs are the directions that the balls end up going. And the truth table for AND and OR is simple, so you just need to represent the truth table using "directions" instead of volts.
If you have two inputs, you'll want two outputs, because otherwise you'd need to stop a ball entirely (losing its energy), which is very costly in comparison. The best outcome is for the second ball to carry along the energy of the second input, even though you don't directly need that output just to represent an AND or OR gate.
The result is that you can run the simulation, and then reverse the entire thing to get back your original inputs. :)
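A toy illustration of that reversal in Python (the particular circuit and bit encoding are mine, just to show the principle): run a list of CCN/CN gates forward, then apply the same gates in reverse order -- since each gate is its own inverse, the original inputs come back.

```python
def apply_ccn(state, c1, c2, t):
    """Toffoli: flip bit t iff bits c1 and c2 are both 1."""
    state[t] ^= state[c1] & state[c2]

def apply_cn(state, c, t):
    """Controlled NOT: flip bit t iff bit c is 1."""
    state[t] ^= state[c]

# A small circuit: compute (a AND b) into an ancilla bit,
# then copy that result onto a second ancilla via XOR.
circuit = [(apply_ccn, 0, 1, 2), (apply_cn, 2, 3)]

state = [1, 1, 0, 0]          # a=1, b=1, two ancilla bits at 0
for gate, *wires in circuit:
    gate(state, *wires)
print(state)                  # forward run: [1, 1, 1, 1]

for gate, *wires in reversed(circuit):
    gate(state, *wires)       # each gate is its own inverse
print(state)                  # inputs recovered: [1, 1, 0, 0]
```

Running the gates in reverse order is exactly the "play the pool table backwards" trick: nothing in the forward run erased information, so the backward run can reconstruct the starting state.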