Gleick's "Chaos" got me sent to the principal's office in high school. I went crazy for fractals. Unfortunately all I had at home was an IBM PC XT. Mandelbrot set renderings were agonizingly slow and the CGA palette was too limiting.
Around this time my co-conspirator and I realized the library had 386s that almost no one was using for catalog search. They became our fractal render farm. We'd exit the catalog program, insert a floppy with our latest renderer, kick off a deep zoom, and turn off the monitors to avoid suspicion until we could check back next period. The results were thrilling. What a difference access to compute made.
You all know the story -- eventually the librarian found us out and reported us for "hacking."
Who was the first person to propose FFTs for faster polynomial multiplication?
Got curious about this recently. I’m not great at citation tracing, but did make it back to this 1995 paper by David Eppstein [0] where he uses it to efficiently solve Subset Sum after an incremental update. Surely Knuth’s TAOCP had it even earlier?
The fact that FFT polynomial multiplication also lets you solve Exact Subset Sum with Repetition in sub-exponential time came as a real shock to me. [1] Crucially, this algo is O(N log N) where N = the maximum element, not N = the set size, so it isn’t a P ≠ NP counterexample or anything.
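For anyone curious what that looks like in practice, here is a rough sketch (mine, not the paper's) of the repeated-squaring flavour of the idea using numpy's FFT. As written it is O(t log² t) in the target t rather than the tighter bound claimed in [1], and a serious implementation would use exact integer or NTT convolution instead of thresholded floats:

```python
import numpy as np

def reachable_sums(elements, target):
    """reachable[s] is True iff s <= target is a sum of elements from
    `elements` (assumed positive integers), each usable any number of times."""
    f = np.zeros(target + 1)
    f[0] = 1.0                      # the empty sum
    for a in elements:
        if 0 < a <= target:
            f[a] = 1.0              # sums using at most one element
    # Squaring the polynomial doubles how many elements a sum may use.
    # Every element is >= 1, so ceil(log2(target)) squarings cover all
    # sums <= target.
    for _ in range(max(1, int(np.ceil(np.log2(max(target, 2)))))):
        size = 2 * target + 1
        F = np.fft.rfft(f, n=size)                 # FFT-based convolution
        f = np.fft.irfft(F * F, n=size)[:target + 1]
        f = (f > 0.5).astype(float)                # keep only reachability
    return f > 0.5

print(reachable_sums([3, 5], 11)[11])  # True: 11 = 3 + 3 + 5
print(reachable_sums([3, 5], 7)[7])    # False
```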
Pollard [1], Nicholson [2], and Schönhage-Strassen [3] seem to have come up with it independently around the same time, using different approaches. Strassen is said to have discovered the Pollard approach in 1968, but there is no (written) record of it.
It should also be noted that, while it was not exactly the birth of the FFT, Cooley and Tukey's 1965 paper [4] is what kickstarted research on the FFT and its applications; all of this work came just a few years after that paper.
> Sure, but how often do you need to touch that config?
More often than I’d like! tmux config has broken backwards compatibility on me multiple times over the years.
This is fine for most software — you upgrade your config once and you’re done. However, the nature of tmux is that I use it on many servers, some old and some new, some with tmux 1.x and some with 2.x. Getting a ~/.tmux.conf from my dotfiles repo that works across both has been papercutty.
Love tmux though & can’t imagine tty life without it — I run it locally as well as on remote machines.
> And on a related note, Lewis Mumford, a philosopher and writer, wrote quite a bit about how clocks were (in his view) the necessary invention for capitalism to flourish.
Szabo also takes this up in his excellent essay "A Measure of Sacrifice":
Fair broadcast and verification of time was thus of fundamental importance to the most common contractual relationship in the new European cities. In agricultural societies, including medieval Europe, serfdom and slavery had provided most of the labor. Most workers in a modern economy earn wages based on a time rate. Along with or following the rise of the time-rate institution – including the contracts themselves, the laws and regulations governing the contracts, and the technology to fairly measure the principal quantity – came the growth of related economic institutions, such as the joint stock company. These institutions enabled a boom in productivity and the spectacular rise of Europe from its darkest ages to the modern era. We will now chart the rise of the clocks and the institutions they supported.
This idea was used to great effect in Matt Might’s “Parsing with Derivatives” paper [0]! And it featured prominently in the Compilers class he taught at the University of Utah.
HTML can be described by a context-free grammar [0], but not by a regular grammar [1]. If a language can be described by a regular grammar, you can parse it with a regular expression -- that's where the "regular" in RegExp comes from!
Derivatives of RegExps don't automatically unlock parsing of context-free grammars, afaik. For that you need recursion. They do however unlock some very elegant parser designs.
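To illustrate the "elegant" part, here is a minimal Brzozowski-style matcher sketch in Python. The tuple encoding of regexes is my own toy representation, not anything from Might's paper, and it only recognizes strings rather than building parse trees:

```python
# Regexes as tuples: ('empty',), ('eps',), ('chr', c),
# ('cat', r, s), ('alt', r, s), ('star', r).

def nullable(r):
    """Does r match the empty string?"""
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag in ('empty', 'chr'):
        return False
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])

def deriv(r, c):
    """Derivative of r w.r.t. c: the strings w such that c+w is in r."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'cat':
        head = ('cat', deriv(r[1], c), r[2])
        return ('alt', head, deriv(r[2], c)) if nullable(r[1]) else head
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)

def matches(r, s):
    for c in s:
        r = deriv(r, c)          # consume one character per derivative
    return nullable(r)

r = ('star', ('cat', ('chr', 'a'), ('chr', 'b')))   # (ab)*
print(matches(r, 'abab'), matches(r, 'aba'))        # True False
```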
Like you, we use pre-signed S3 upload urls. From there we use Transloadit [0] to crop and sanitize and convert and generate thumbnails. Transloadit is basically ImageMagick-as-a-Service. Running ImageMagick yourself on a huge variety of untrusted user input would be terrifying.
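For reference, generating the pre-signed upload URL is a one-liner if you happen to be on boto3; the bucket, key, and expiry below are made-up placeholders:

```python
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "user-uploads", "Key": "avatars/123.png",
            "ContentType": "image/png"},
    ExpiresIn=900,  # 15 minutes
)
# The browser PUTs the file directly to S3 with this URL, so the raw,
# untrusted bytes never pass through our own servers.
```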
To be fair, this is not at all Poettering's idea. There is, for example, precedent in the form of s6-sudo [1], a utility from the s6 service supervisor, itself very much an anti-systemd project (though I believe it actually predates systemd?).
And honestly I’d be okay with a suidless Unix. For example, as best as I can tell, the only reason the kernel needs to know what executable formats even are—beyond the bare minimum needed to load PID 1—is s[ug]id binaries.
I like s6! One of the key differences here is that s6-sudo builds on, rather than replaces, the standard unix permissions model.
s6-sudod listens on a unix domain socket. Unix domain sockets are just files, so they have an owner, group and mode bits. The answer to "who is potentially allowed to run a differently-privileged command?" is just `ls -l /path/to.sock`.
For finer-grained access control, a unix domain socket listener can call `getpeereid()` or `getsockopt(..., SO_PEERCRED, ...)` to learn who it's talking to. You can build powerful – but still relatively simple, and importantly, readily-inspectable – access control policy on top of these basic unix primitives. That's what s6 does. Look at how simple rule definition is. [0]
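For the curious, the Linux side of that is tiny. A hedged sketch in Python rather than s6's actual C, with a made-up socket path and uid policy:

```python
import os
import socket
import struct

# Linux-only sketch: an AF_UNIX listener that (1) restricts who may even
# connect using plain file permissions and (2) checks the peer's
# credentials via SO_PEERCRED before doing anything privileged.
SOCK = "/tmp/example-helper.sock"
if os.path.exists(SOCK):
    os.unlink(SOCK)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK)
os.chmod(SOCK, 0o660)       # owner/group only: `ls -l` tells the story
srv.listen(1)

conn, _ = srv.accept()
creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                        struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)  # struct ucred: three ints
print(f"peer pid={pid} uid={uid} gid={gid}")
if uid != os.getuid():      # made-up policy: only allow our own uid
    conn.close()
```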
Or, you could throw all that out the window and build something much more complex and much less inspectable, which is the systemd approach. The answer to "who is potentially allowed to run a differently-privileged command?" under `run0` is to...spend the evening reading through polkit xml rules, I guess?
I realize systemd uses D-Bus, and D-Bus uses a unix domain socket. But that socket is writable by world. We're trusting polkit and complex policy xml and probably a constellation of other services to get things right after the SO_PEERCRED check.
Maybe that's fine for desktop apps, but a reminder that we're talking about sudo here.
Complexity is the enemy of security. The complexity of the systemd ecosystem broadly writ is how we get CVEs like this polkit privesc, which took 12 years to notice [1].
Addendum: it's possible to regard systemd as dangerously complex AND sudo as dangerously complex. OpenBSD as usual had the right idea with `doas`.
Like many other things in Unix, SO_PEERCRED and getpeereid are half-implemented hacks that should not be used for security. They both only return the uid that was used at the time of calling connect(). Meaning you have to be incredibly careful what you do when creating the socket and you cannot really pass any sockets off to other processes if you want to try to do security that way because they will still inherit the wrong credentials. The usual complexities also apply around how to interpret those credentials when the peer is running in a container.
I have a pretty low opinion of s6 because of things like this, you pretty much have to create a more complex system like polkit and systemd if you want this stuff to actually work. You don't have to use XML and JavaScript like polkit does but you do have to do more than what s6 is trying to do. (Also, I personally don't find the "random collection of ad-hoc text files" style they use to be any less complex than systemd, but that's a different conversation)
> Meaning you have to be incredibly careful what you do when creating the socket and you cannot really pass any sockets off to other processes if you want to try to do security that way because they will still inherit the wrong credentials.
I see nothing new here beyond "handle privileged resources with care." Don't overshare. Got an open pipe to `sh` running as root? Maybe you oughtta set O_CLOEXEC on that fd before you exec and overshare with a child. Got a socket that's been peer authed? The same.
This is pretty basic unix stuff. If you stick to the basics and avoid the siren call of complexity, the security properties remain relatively easy to reason about. Most privileged resources are fds. Mind your fds.
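E.g., in Python terms (just an illustration of the close-on-exec idea, not anything specific to s6):

```python
import os

# Mark a sensitive fd close-on-exec so any child we exec later cannot
# inherit it. Python fds are non-inheritable by default since 3.4;
# this just makes the intent explicit.
r, w = os.pipe()
os.set_inheritable(w, False)     # FD_CLOEXEC under the hood on POSIX
print(os.get_inheritable(w))     # False: a fork+exec'd child won't see it
```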
I'm not a huge fan of sending file descriptors over sockets – maybe we agree on that part.
> Unix domain sockets are just files, so they have an owner, group and mode bits. The answer to "who is potentially allowed to run a differently-privileged command?" is just `ls -l /path/to.sock`.
Yeah, except that is not true. To quote unix(7):
On Linux, connecting to a stream socket object requires write permission on that socket; sending a datagram to a datagram socket likewise requires write permission on that socket. POSIX does not make any statement about the effect of the permissions on a socket file, and on some systems (e.g., older BSDs), the socket permissions are ignored. Portable programs should not rely on this feature for security.
So s6 just has a wide, easily exploitable security hole there. Or is not portable, contrary to its claims.
Lol okay man. Maybe if you're running FreeBSD 4.2 or HP-UX or some BSD derivative from the 90s. All unix systems from about 2000 on will honor unix domain socket permissions.
That's not an exploit, that's just a sequence of basic misunderstandings about how things work on Linux. Which would be fine, nobody knows everything, if they weren't coated with grand claims and not-so-veiled personal abuse.
Requires root not just for the ptrace protection, but also to gain membership of the 'tty' group which gives control over all ttys. And then goes all surprised pikachu when it turns out that allows taking over ttys. Duh?
Huh. I'm not at all a fan of how Poettering operates, but it's neither the ideas nor the implementation where I'd fault him. Well, it depends on what you mean by implementation, I guess; I'm talking about the core "how does it do its thing", not the interface by which you use it.
I think Poettering has great ideas and great implementation. It's the execution and interface that are often terrible. If the square peg doesn't fit in the round hole, then he'll always say that the peg is perfect and the world just needs to chisel out the corners of the hole.
For systemd and pulseaudio, the systems they were replacing legitimately had major problems. There were variants and workarounds that fixed some of these, but no holistic solution that I've ever heard of. There were just so many limitations if you maintained any degree of compatibility. People were (understandably) unwilling to start over and rearchitect something that desperately needed rearchitecting. Poettering designed and implemented replacements that were substantially better, and worked. Worked well, in fact. That's the great ideas & implementation part.
Much of this was enabled by a willingness to throw out compatibility with nearly everything. Backwards, forwards, sideways. If I were making a bold and breaking change like this, I would sacrifice compatibility but try to make up for it by bending over backwards to catch as much of the "once working, now broken" wreckage that inevitably piled up as I could, by creating shims and compatibility stubs and transition mechanisms. I'd certainly listen to people's problems and try to work out solutions.
Poettering, as far as I can tell, is more of a honey badger (excuse the dated meme). He just doesn't give a shit. If your stuff doesn't work in the brave new world, then your stuff is broken and is going to have to adapt. That's the bad execution part. (Which is not to say that bending over backwards is always the right approach; it can massively increase the burden on the new system's implementer, to the point that it never happens. There's a reason why Poettering's stuff is taking over the world.)
As for bad interface, this is a lot more subjective, so it's easier to disagree. But the tools to configure and use the new system are done in the style of an isolated cathedral. The tools do a ton of stuff, but they do it all in a new way, and that's great once you learn the blessed paths and internalize the new architecture. But all of your existing knowledge is now useless, and you can't flexibly combine and extend the functionality with the usual tools you'd use (bash, grep, awk, find, sort, tee....) The main functionality of the new system is not new — none of this is really adding fundamental new capabilities, it's just improving things that were already being done. But the way you interface with that functionality is all new, even though it could have been exposed in ways at least a little more unix-like and composable. Instead, the tool author determines all the things you should be doing and gives you a way to do them. If you want more or different, then you're doing something wrong.
Normally, I'd expect something like this to die out as it rubbed up against the surrounding functionality. "Great system, but too much effort when we keep having to fix thing after thing." Surprisingly (to me), in systemd's case in particular, what has actually happened is that the cathedral just keeps expanding to swallow up enough of its surroundings to keep from being ejected.
Maybe it's sour grapes, but my guess is that this was only possible because the previous systems were so bad. esd was a nightmare. sysvinit scripts were baroque and buggy and error-prone. Sure, the first 80% was just plain simple shell scripting. But everything did the last 20% slightly differently or just punted. It was all buggy and idiosyncratic and failed intermittently. Supposedly some of the init system variants managed to corral it all together enough to do actual dependencies and get decent startup speed, but I never used such a system. And based on the quality of the init scripts I've seen from random packages, I'm guessing the successes only happened when a relatively small group of people wrote or repaired a metric shitload of init scripts by hand. And even then, systemd provides more in its base functionality set. Architecturally, it's really quite nice.
> Much of this was enabled by a willingness to throw out compatibility with nearly everything. Backwards, forwards, sideways. If I were making a bold and breaking change like this, I would sacrifice compatibility but try to make up for it by bending over backwards to catch as much of the "once working, now broken" wreckage that inevitably piled up as I could, by creating shims and compatibility stubs and transition mechanisms. I'd certainly listen to people's problems and try to work out solutions.
You do realize that systemd was the only init system that offered distributions a migration path from the sysv-rc init scripts?
daemontools, s6, openrc, and upstart all lacked this; systemd was the only one that cared about migration and backwards compatibility...
> Poettering, as far as I can tell is more of a honey badger
As far as I know, he was the only author of an alternative init system that, for example, did actually talk to distributions to understand which problems they have. Unlike the authors of most alternatives that don't give a shit (and in turn nobody gives a shit about their init). To this day you'll find the s6 author just claim "nobody needs feature X from an init" because they themselves might not need it.
you have the wrong viewpoint. he just has a different opinion than you.
he single-handedly managed to fool RH and all the distros into making Linux administration just like Windows. the systemctl list of services is clearly inspired by the atrocious Windows admin list of services (which has 3 fields that are supposed to describe the service, but they all just tell you the name again).
it's no wonder his reward was a job at Microsoft.
but again, he's good in all three aspects. you just disagree with building the torment nexus that is making Linux a "standard certification" target for sysadmins.
I continue to be baffled at this widespread belief that Poettering somehow hoodwinked every single major Linux distro into accepting a shit product with, idk, hypnosis or something.
Is it not possible that systemd is simply better than the alternatives, and the distro owners are smart enough to notice that, instead of just wrapping themselves in cultish mantras about The Unix Way and how anything which resembles a design used in Windows is bad by definition? Or could that not possibly be it, so he must've used mind control magic?
Yes, it has always reminded me of the old "Apple just sells all those shiny devices because they're good at marketing" trope.
As if marketing alone could do that. Poettering does seem to be, to a casual observer, kind of a dick. Arrogant, dismissive of competing products... kind of like that other guy — also kind of a dick — who supposedly had that "reality distortion field" that hoodwinked all those poor saps into buying his phones.
There's no fucking way in hell you are able to do that if the user base doesn't think the product is good. To those saying it, I always reply, "It may not be the product you want, but a shitload of people disagree with you, quite obviously."
I'm not personally a huge fan of the iPhone or systemd. But they are both clearly "the best" for the largest number of people. (And that is even clearer for systemd, as it doesn't cost hundreds or thousands of dollars more than the competing products.)
just that his vision was garbage, and everyone knows it. but he stood by it. and nobody was putting in the same energy he was to either offer something better or stop it (rejecting bad ideas also takes energy; see gnome's deep dive into garbage as another example).
Linux is mostly made from scraps (e.g. the entire Bluetooth and wifi stacks) or misguided-but-funded things. the age of scratching your own itch is mostly gone.
> nobody was putting the same energy he was to either offer better or stop it
which was a much easier thing for him to do than for an outsider, considering he was on Red Hat's payroll, along with the people (the gnome/freedesktop crowd) he needed to convince
I wouldn't worry too much about that. It's a tricky piece of security-critical software, receiving its first round of outside auditing; of course it has vulnerabilities. Sudo does have the advantage of being much more battle-tested, but that will even out with time; what will matter is how secure it is two years from now.
Yes, Excel can be considered a build system! A cell outputs the result of a computation, but it can also take the outputs of other cells as its inputs. For performance reasons, you don't want to recompute all cells in a spreadsheet when one cell changes. You want to do the minimum – an incremental rebuild. You do that by traversing a DAG with cells as vertices. You might not be "compiling" "files", but it sure looks like a build system in other important ways.
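A toy sketch of that spreadsheet-style incremental rebuild, using Python's graphlib and made-up cell names (real spreadsheets and build systems also handle cycles, caching, and dynamic dependencies, which this ignores):

```python
from graphlib import TopologicalSorter

# deps maps each cell to the cells it reads; formulas compute a value
# from the current sheet.
deps = {"A1": [], "A2": [], "B1": ["A1", "A2"], "C1": ["B1"]}
formulas = {
    "A1": lambda s: 2,
    "A2": lambda s: 3,
    "B1": lambda s: s["A1"] + s["A2"],
    "C1": lambda s: s["B1"] * 10,
}

def dependents(changed):
    """All cells reachable from `changed` along reversed dependency edges."""
    rev = {c: [d for d, ds in deps.items() if c in ds] for c in deps}
    seen, stack = set(), [changed]
    while stack:
        c = stack.pop()
        for d in rev[c]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

sheet = {}
for cell in TopologicalSorter(deps).static_order():   # full build
    sheet[cell] = formulas[cell](sheet)

formulas["A2"] = lambda s: 7                          # edit one cell
dirty = {"A2"} | dependents("A2")
for cell in TopologicalSorter(deps).static_order():   # incremental rebuild
    if cell in dirty:
        sheet[cell] = formulas[cell](sheet)
print(sheet)  # {'A1': 2, 'A2': 7, 'B1': 9, 'C1': 90}
```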
I've worked on build systems on & off for most of my career, but your question made me realize I've never read a book that presents build systems in the abstract. Everything I know comes from studying specific build systems and, in some cases, writing my own. The design space for build systems is huge and still expanding. That makes a unified theory of build systems difficult.
Can you share what motivated your question and what you're hoping to learn?
> Here's a paper, "Build systems à la carte: Theory and practice,"
Awesome, thank you. I actually had that saved in my files to read later, then started really digging into it after your suggestion. Almost exactly what I was looking for!
> Can you share [1] what motivated your question and [2] what you're hoping to learn?
I suppose the simplest answer is "[1] pure curiosity and [2] whatever I can to slake it." How build systems in particular piqued my interest is a weirder, zig-zaggy story involving a bunch of different things catching my eye at random, and I'm not sure that's what you wanted to know, so I'll omit for now.