The course looks really interesting, but in my experience of "performance engineering" you very rarely need to know this stuff.
If you want to improve the performance of some random software system you encounter at work, there is normally much lower-hanging fruit available.
First profile the application on as representative a workload as possible. Then fix the accidentally-quadratic loops, stop it from hitting the disk every iteration, add indexes for the worst database queries. If that doesn't at least double your throughput, you are working on an unusually-good application.
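To make that concrete, here's a minimal sketch in Python (the function names and the workload are made up purely for illustration) of what that first pass often looks like: profile under something resembling a real workload, then fix the accidentally-quadratic pattern the profiler points at.

    import cProfile
    import pstats

    def dedupe_slow(records):
        # Accidentally quadratic: "in" on a list is a linear scan,
        # so this loop does O(n^2) comparisons over the records.
        seen = []
        out = []
        for r in records:
            if r not in seen:
                seen.append(r)
                out.append(r)
        return out

    def dedupe_fast(records):
        # Same behaviour, but membership tests against a set are O(1)
        # on average, so the whole pass is roughly O(n).
        seen = set()
        out = []
        for r in records:
            if r not in seen:
                seen.add(r)
                out.append(r)
        return out

    if __name__ == "__main__":
        # Stand-in for a representative workload: lots of duplicates.
        workload = [str(i % 2_000) for i in range(20_000)]
        cProfile.run("dedupe_slow(workload)", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)

The profiler output makes the list-membership cost obvious, and the set-based version is the kind of two-line fix that usually dwarfs anything lower-level you could do afterwards.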
It's more than good to know, though: sometimes knowing the thresholds of the nuts and bolts is the difference between tuning what you have and rewriting it in a different language or a more complex architecture.
Yep, usually it's more about properly profiling your application along the CPU and memory dimensions, instrumenting where necessary, and experimenting with different measurements and tools until you have a good picture of your software's bottlenecks and where the gains are.
Once you know the full picture and where the issues are, you can get into the specifics: optimizing your code in those hot spots, pulling in a library that solves that particular problem better, or, what usually gives the bigger win, taking the limitations into account and finding a better design for the problem that part of the code is trying to solve.
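As a sketch of what that kind of instrumentation can look like (hypothetical function names, Python standard library only, no real system implied): time.perf_counter for wall-clock timing of a suspect section and tracemalloc for a rough memory picture.

    import time
    import tracemalloc

    def process(batch):
        # Placeholder for whatever hot path profiling pointed you at.
        return sorted(batch)

    def instrumented_run(batches):
        tracemalloc.start()
        start = time.perf_counter()
        for batch in batches:
            process(batch)
        elapsed = time.perf_counter() - start
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"elapsed: {elapsed:.3f}s, peak traced memory: {peak / 1e6:.1f} MB")

    if __name__ == "__main__":
        instrumented_run([[3, 1, 2]] * 10_000)

Crude, but numbers like these on both dimensions are usually enough to tell you whether you're CPU-bound, allocation-bound, or neither, before you reach for anything fancier.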
I do not know why this thread is dismissive of the course when that is precisely what the first few lectures of the course teach, and every subsequent assignment treats “profile and instrument the code” as the (sometimes inferred) step 0.
But it’s an 18 unit course; it’d be a disservice to the students (and the world at large, considering the potential impact of performance engineering) if we stopped there.
“Measurement and Timing” is lecture 10. We started with small independent programs and worked up to systems; you would not teach someone to paint by telling them the Sistine Chapel needs a patch fixed.
But yes, I do agree profiling is step 0 to performance engineering.
The same can be said for many, maybe even most, topics in engineering higher education. 99% of the stuff you'd learn in an advanced controls course ain't running anywhere, but it's a body of knowledge we want to maintain and grow because if it was applied everywhere, it could make everything better. Maybe the economics or the politics aren't there today, but you never know where it will come in handy.
I 100% get why you're saying it: the fear is that with all of these contraptions in mind, we send a generation of engineers out who immediately reach for their performance bazooka when all they needed was a pocketknife. But SOMEbody is doing actual high quality engineering and needs to reach for this stuff. It's MIT: there's a chance those people are in the room.