Does anyone know of someone doing the same style of introspection work, covering tracing, profiling, and networking, like the body of her work (not just this post), but for Windows?
I know of a few scattered posts here and there, usually from PFEs on Microsoft blogs, but the landscape of dedicated bloggers seems lacking to a novice like me.
Don't get me wrong, I respect Julia Evans as a professional, but what she mostly does is simplify other people's hard work and in-depth analysis of difficult problems in various layers of the technology stack.
Julia mentions Brendan in her post already and she's done _plenty_ of great work. Don't tear other people down, it's not cool.
They didn't appear to be "tearing anybody down." Accusing people of malice for expressing their opinion is also "not cool."
It's perfectly acceptable to not be a fan of someone's blog posts. It's also perfectly acceptable to express that in a respectful manner, which they did.
> what she mostly does is simplify other people's hard work
OP also called Brendan Gregg a "god" in the same post. So Brendan is a "god" but Julia is just distilling other people's work? Sounds pretty disrespectful to me.
Why are respectful and critical mutually exclusive?
Do you believe that one negates the other?
I don't mean to pull the argument one way or the other. But IMO, simplifying other people's hard work is hard work too. At least I personally find it to be.
Depending on our experience, we can read OP as being respectful (or not). I would like to give her/him the benefit of the doubt.
What is bad about it?
What I have seen, though, is that saying anything even slightly negative about the content in this individual's blog posts will draw a strong rebuke, as you have just seen.
I personally find the amount of fawning commentary and aggressive defensiveness for this particular blog and author a bit cultish.
This is the opposite of "respecting" someone as a professional.
Depending on your view, this can be a positive or a negative. One view is that Linux is more collaborative, and only the "common core" interfaces are actually put into the kernel (with the higher levels being provided in userspace by vendors). A good example of this is the live patching code, which came from distilling Red Hat's kpatch and SUSE's kGraft systems. You can trace most of Linux's features back to this sort of development model.
illumos and BSD, however, usually work far more like traditional software products. Some engineer goes off and implements a really interesting system that then gets put into a release. That's how ZFS and DTrace were developed (at Sun) and how Jails came about in FreeBSD, and I'm sure you can come up with other examples. The key point here is that this is a far more authoritarian model -- one set of engineers has decided on a system and developed it mostly in isolation. Linux doesn't permit that style of development because large patchsets need to be broken down over releases.
Personally I must say that I don't care for the Linux style of things, mainly because I feel it hampers OS innovation in some ways. But the upside is that the facilities are more like libraries than frameworks and so you're forced to design your abstractions in userspace. Is that good? I don't know.
Note that, following the above theme, there is an overarching architecture for Linux's tracing tools (in userspace) in the form of bcc.
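For a sense of what that looks like, here's a minimal sketch (mine, not from the thread) of the C fragment you hand to bcc; the kprobe__<name> convention auto-attaches it as a kprobe when bcc's Python front end loads it (on newer kernels the actual syscall symbol name differs, so treat this as illustrative):

    /* bcc compiles this restricted C to eBPF at load time */
    int kprobe__sys_clone(void *ctx)
    {
        /* bcc rewrites bpf_trace_printk() into the matching eBPF helper */
        bpf_trace_printk("sys_clone() called\n");
        return 0;
    }

From Python, something like BPF(text=prog).trace_print() then compiles the fragment, attaches the probe, and streams the kernel trace pipe.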
It's like saying you went to MIT (Minnesota Institute of Technology).
Other than this small nit, great article.
> Research done at the lab concentrates on improving the performance, reliability and resilience of distributed, cloud and multi-core systems.
We also integrate it into a rather larger toolkit called LISA, which can do things like describe synthetic workloads, run them on remote targets, collect traces, and then parse them with TRAPpy to analyse and visualise kernel behaviour. We mainly use it for scheduler, cpufreq, and thermal governor development. It also does some automated testing.
It seems like a WebAssembly for the kernel, but local software has the benefit of knowing the platform it is running on. I.e., why compile C code to eBPF when I can just compile to native code directly?
I can potentially see it solving a permissions problem, where you want to give unprivileged users in a multi-tenant setup the ability to run hooks in the kernel. Is that actually a common use case? I don't think it is.
This is quite important when you want to run this code in production. You don't want to accidentally crash your kernel.
For instance, you could just write your kernel module in a sufficiently safe language, like Rust, and have the same benefits. You could even pre-compile eBPF for the exact same level of safety. Still no need for the bpf() system call or the eBPF VM or JIT in the kernel.
* Strictly typed -- registers and memory are type-checked by the in-kernel verifier at load time. If you used something like Rust, you'd have to bring rustc into the kernel
* Guaranteed to terminate -- you cannot jump backwards, and there is an upper bound on the instruction count
* Bounded memory -- the registers, and the memory accessible via maps, are a fixed size. We don't have a stack per se. (A sketch of code written under these constraints follows this list.)
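To make those constraints concrete, here is a minimal sketch (my example, with assumed names) of the restricted C that clang compiles to eBPF: straight-line code with no backward jumps and fixed-size state, which is exactly what the verifier can prove safe:

    #include <linux/bpf.h>

    /* Socket-filter program: the return value is the number of bytes
     * of the packet to keep, so 0 drops the packet entirely. */
    __attribute__((section("socket"), used))
    int drop_small_packets(struct __sk_buff *skb)
    {
        if (skb->len < 64)
            return 0;          /* drop runt packets */
        return 0xffffffff;     /* keep the whole packet */
    }

Something like clang -O2 -target bpf -c filter.c produces an object whose "socket" section can then be loaded and attached.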
Compiling Rust to this is possible, but it'd require quite a bit of infrastructure in the kernel to verify that the code is safe, versus the simplicity of eBPF. Early attempts at a general purpose in-kernel VM included passing an AST in, and then doing safety checking on the AST, but they proved too complicated to do safely.
I'm arguing against the in-kernel eBPF infrastructure: bpf system call, the JIT and the VM.
I think it makes more sense to just compile eBPF (or Rust or whatever safe language you want) to a kernel module.
Accepting compiled stuff in the form of a kernel module requires root privileges and requires that the kernel essentially have complete trust in the code being loaded.
Loading eBPF eliminates the need to trust the process/user doing the loading to that level.
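To illustrate the difference (a hedged sketch of mine, not from the thread): even a trivial eBPF program has to pass through the bpf(2) syscall, where the verifier inspects every instruction before anything runs, whereas insmod simply maps fully trusted native code into the kernel:

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* two instructions: r0 = 0; exit -- i.e. "return 0" */
        struct bpf_insn prog[] = {
            { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = 0, .imm = 0 },
            { .code = BPF_JMP | BPF_EXIT },
        };
        union bpf_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
        attr.insn_cnt  = 2;
        attr.insns     = (unsigned long)prog;   /* assumes a 64-bit host */
        attr.license   = (unsigned long)"GPL";

        /* the kernel verifies the program here, before returning an fd */
        int fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
        return fd < 0 ? 1 : 0;
    }

If the verifier can't prove the program safe, the load fails and nothing ever executes; there is no equivalent gate for a native module.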
eBPF probes can't crash and are deterministically safe (they aren't actually Turing complete). So you are unlikely to heavily impact application performance.
Since then, BPF has grown to be used by more subsystems, including tracing, and allows user programs to do advanced (and fast) things. See for example https://github.com/ahupowerdns/secfilter . AFAIK, this doesn't require privileges, which loading a kernel module would.
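In that spirit, here is a minimal sketch (mine, not from the linked project) of an unprivileged classic-BPF seccomp filter: once the process sets no_new_privs, it can install this without any special rights, and mkdir(2) afterwards fails with EPERM. A real filter would also check seccomp_data.arch first:

    #include <errno.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/stat.h>
    #include <sys/syscall.h>

    int main(void)
    {
        struct sock_filter filter[] = {
            /* load the syscall number from struct seccomp_data */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* if it's mkdir, fall through to the deny rule, else skip it */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mkdir, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len    = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };

        /* required so an unprivileged process may install the filter */
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

        if (mkdir("/tmp/should-fail", 0755) == -1)
            perror("mkdir");   /* expected: Operation not permitted */
        return 0;
    }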
For production, placing all rules in a single module seems best. If you could avoid the overhead of executing BPF in production, wouldn't you?
I agree with the privilege argument, but I don't think normal users can filter packets or add tracing under the current situation either.
Many doctors will tell you how useful some colouring-in books were to them.
Here's one example, but there are others: https://www.amazon.co.uk/Human-Brain-Coloring-Book-Concepts/...
Richard Feynman said that if you wanted to learn a subject quickly, you'd start with the books for children to get an overview.
I like the drawings. Cute, positive and informative.
Superficial way to judge the content, too.