As a long-time distributed systems developer, I actually wonder... is there going to be a lot of distributed systems development going forward?
1) A lot of the stuff that needed to be developed has been developed and generalized (and to a large extent centralized by cloud providers). It feels almost like web apps going from large teams writing complex LAMP projects from scratch (I remember writing hand-made XHR when prototype.js was an advanced JS library ;) to one person throwing something together in a week with feature-rich libraries and ready-made parts.
2) The trend (both present and future, e.g. NVMe over TCP, cheap massive SSDs and flash storage) is that many tasks that used to require state-of-the-art code, many coordinated machines, and clever hackery to work around slow disk reads can now be done easily on one machine with simple code (see the sketch below). Sure, there are exceptions (some types of ML tasks? specialized scientific/HFT/... workloads?), but I feel like hardware capabilities are currently growing faster than the need to process more data, faster.
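To make the single-machine point concrete, here is a minimal sketch of the kind of aggregation that in the Hadoop era often justified a small cluster: counting requests per URL over a multi-GB access log. The file name and column layout ("access.log", request path in field 7) are hypothetical; the point is that streaming the file on one box with stock Python is usually enough.

    # Sketch: cluster-era log aggregation done on one machine.
    # Assumptions: space-separated log lines, request path in field 7.
    from collections import Counter

    def count_requests_per_url(path: str) -> Counter:
        counts: Counter = Counter()
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:              # streams line by line; memory stays flat
                fields = line.split()
                if len(fields) > 6:
                    counts[fields[6]] += 1
        return counts

    if __name__ == "__main__":
        for url, n in count_requests_per_url("access.log").most_common(10):
            print(f"{n:>10}  {url}")

On a modern NVMe drive this is I/O-bound and finishes in minutes for data sizes that used to be "big"; no coordination, no cluster, no framework.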
To be fair I also felt kinda like that in 2012 when Hadoop was hot and having SSDs on a server was cutting edge. Maybe I'm just paranoid ;)
I expected to find at least a reference to the "fallacies of distributed computing" [1] somewhere, so I looked into the topic "how systems fail - what could go wrong".
I was surprised by the unstructured approach: a) in the video, rambling through a list of personally encountered errors, and b) on the slides, presenting a messy picture containing the items of said list.
Together with the fact that the slides seem to have had no real update since 2016, I consider myself not part of the target audience and would not recommend this online class.
https://youtube.com/playlist?list=PLrw6a1wE39_tb2fErI4-WkMbs...