I've written a lot of new, more in-depth material for the book in areas like systems performance methodology.
It is very dry and hard to read cover to cover, but it is filled with a mountain of info. Unfortunately I don't know of a more introductory book to recommend, but I'd also be curious to hear of one.
I've been hunting for such a thing as well, though I'm guessing your hunt is for much higher quality than mine. I haven't been able to find anything, but based on what I've been watching and reading (yes, YouTube and blogs), it wouldn't be much work to assemble a reading/viewing list from those sources, and that might work for you, given that a perfect solution may not exist. The market for high-quality dedicated training on really advanced topics is probably rather small (just my guess).
He's spreading and making available knowledge about perf, ftrace, and eBPF that would otherwise remain confined to Linux kernel development circles. He's also contributing to bcc (a tool in the eBPF ecosystem), but his main merit, in my opinion, is marketing, with his books and blog (I'm using the term in a positive sense).
Windows is much more popular and has many more apps, but I can come up with a handful of reasons why I don't like it or think it is objectively worse.
People like traffic to their actual domain, because it often means better search engine ranking in the future. On top of that, some websites serve ads, which means that traffic is proportional to revenue.
Well, legal, and whatever domain the "you viewed the ads but they weren't tracked, so we don't get money" problem falls into. Stupid artificial barriers that exist because they solve an economic problem that wouldn't exist in a perfect world. Game theory, maybe.
Caching it for when a site is overloaded is not ripping it off, just like sending copies of works to the British Library for archival purposes is not ripping the original work off.
Perhaps a simple GET to the URL every 60s. If you receive 5 non-200 responses in a row, the HN link points to the archive.is version. Same method in reverse for bringing it back.
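A minimal sketch of that failover rule, assuming a check runs once a minute (all names here are hypothetical, not part of any real HN service):

```python
import urllib.request
import urllib.error

FLIP_AFTER = 5  # consecutive "wrong" responses before switching the link


def is_up(url: str) -> bool:
    """One GET; True iff the site answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def step(state: tuple, ok: bool) -> tuple:
    """One poll tick. state = (use_archive, streak).

    While serving the original link, streak counts consecutive non-200
    responses; after FLIP_AFTER of them, switch to the archive.is copy.
    While serving the archive copy, the same rule runs in reverse:
    streak counts consecutive 200s, and FLIP_AFTER of them switch back.
    """
    use_archive, streak = state
    flip_event = ok if use_archive else not ok
    streak = streak + 1 if flip_event else 0
    if streak >= FLIP_AFTER:
        return (not use_archive, 0)
    return (use_archive, streak)
```

A scheduler would then call `step((use_archive, streak), is_up(url))` every 60 seconds and point the link at whichever copy the state says. Keeping the streak counter symmetric means a single flapping response never flips the link; it takes five in a row either way.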
How would one request per minute from a single server DoS the site, let alone DDoS it?
Also, the library analogy doesn't work in copyright law. Copyright protects the act of copying, not the act of transferring an already-authorized copy of a work to someone else.
- not everyone upvotes
- there's probably a substantial number of users who are content to read articles but have never had a login (I did for two years)
- social media amplification.
Or all the phishing domains...
I seem to recall that the blog of some physicist hit the front of multiple such sites on the same day, effectively taking down the whole hosting service.