Several of the criticisms the book lists are still true today. File locking is unreliable, deletions are weird, and security is either garbage (you set it up in a way that gives you very little security) or trash (you have to set up Kerberos infrastructure to make it work, and nobody wants to do that).
Perhaps I was a bit hyperbolic about it sucking more nowadays. At least you can use TCP with it and not UDP, and you can configure it so you can actually interrupt file operations when the server unexpectedly goes away and doesn't come back, instead of having to reboot your machine to clear things out. But most of what the book says is still the NFS status quo today, 30 years later.
How are we not there? The only real issue I know of is allegedly requiring host keys for gssd (i.e. "joining the domain"), but rpc.gssd(8) documents "anyname" principals.
That seems like a feature; mounting SMB on a local system is done on the basis of a password, and it's horrible. (I assume you could, in principle, use some other GSSAPI mechanism.)
AIUI this is still not user-level authentication. Rather, it secures the communication between hosts; you still have to choose between sec=sys ("trust me bro") and sec=krb5* at the upper layer.
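For reference, that choice shows up as a client-side mount option; a minimal illustration (the server name and paths are made up):

    # sec=sys: trust whatever uid/gid the client claims ("trust me bro")
    mount -t nfs -o sec=sys fileserver:/export/home /mnt/home
    # sec=krb5p: Kerberos user authentication plus integrity and privacy
    mount -t nfs -o sec=krb5p fileserver:/export/home /mnt/home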
in most cases you can just use more fine-grained exports.
e.g. export /home/user1 to 10.0.0.1 and /home/user2 to 10.0.0.2 instead of /home to 10.0.0.0/24 etc.
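A sketch of what that might look like in /etc/exports (paths and options are just illustrative):

    /home/user1  10.0.0.1(rw,sync,no_subtree_check)
    /home/user2  10.0.0.2(rw,sync,no_subtree_check)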
I suppose such feedback could be used for reaching a fixpoint. Suppose you have a build system that reads targets to be built from stdin and outputs to stdout targets that are dependent on that target and must now be rebuilt. With an ouroboros, the build system will continue to run, even if the dependency graph is dynamically cyclical, until the fixpoint is reached and the build terminates.
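A rough sketch of that driver loop in Python; `rebuild` is a hypothetical stand-in for one round-trip through the build system's stdin/stdout protocol:

    # Minimal fixpoint sketch. rebuild(target) stands in for one round-trip
    # through the build system: rebuild the target and return only the
    # dependents that must now be rebuilt (empty once output stops changing).
    def build_to_fixpoint(initial_targets, rebuild):
        pending = list(initial_targets)
        while pending:
            target = pending.pop()
            pending.extend(rebuild(target))
        # Terminates even on a cyclic graph, provided rebuilds eventually
        # stop reporting changed dependents, i.e. once the fixpoint is reached.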
Perl (without dependencies) works awesomely well as a replacement for bash in scripts, in my experience. Unlike Python, chances that it will break the next month (or the next decade) are virtually nil.
Python without dependencies will also work everywhere basically forever. Hell, most Python 2 is valid Python 3, but it's been over a decade now - Python 3 is the default system Python in most everything.
You're not the only ones, but I can't understand this approach. Do people then never read the version history? It must be impossible to understand commits' diffs with all the changes squashed together.
Not the OP, but I think the point of squashing every PR is that the reviewers (and CI) run the whole PR, not the individual commits. If you have a PR with 5 commits, 4 of which break the build and the last one fixes it, then merging that will be a problem if you need to git bisect later.
So the idea is really "what's the point of having a history full of broken state?".
> It must be impossible to understand commits' diffs with the changes all squashed together.
This would be a hint that your PR was too big and addressing more than one thing.
> So the idea is really "what's the point of having a history full of broken state?".
I rebase commits so they don't break the build, but the history remains clean and incremental. Selective fixups and so on aren't the same as squashing everything into a single commit.
> This would be a hint that your PR was too big and addressing more than one thing.
I don't think so. Sure, that can be true, but squashes can also simply lose vital history. Suppose you remove a file and then replace it with code copied and modified from another file. If you then squash that, all Git will say is you made a massive edit to the file.
> I rebase commits so they don't break the build but the history remains clean and incremental.
Sure, and that's fine. The idea of the squash workflow is that they don't expect that. It's just different, and that's the rationale behind it :-).
> all Git will say is you made a massive edit to the file.
Which IMO is exactly what happened in this case xD. But again... whatever floats your boat, I was just talking from the point of view of a squash workflow.
> If you have a PR with 5 commits, 4 of which break the build and the last one fixes it, then merging that will be a problem if you need to git bisect later.
And the answer is that you don't; each commit is individually testable and reviewable. Changes requested by reviewers are squashed into the commits and then merged into the project. Unfortunately, while the git command line has "range-diff" to ease review with this workflow, neither GitHub nor Gitlab has an equivalent in their UI.
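For anyone unfamiliar with it, range-diff compares two versions of the same patch series, e.g. before and after a force-push (branch names here are just placeholders):

    # compare the rewritten topic branch against its previous tip
    git range-diff main topic@{1} topic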
Well, I was obviously meaning that "workflows that squash the commits in a PR are workflows where each individual commit is not tested/reviewed separately".
Of course, if your workflow is different, then... well it is different. Doesn't make the "squash workflows" irrational.
> And the answer is that you don't; each commit is individually testable and reviewable.
How does this work in practice? Is every single atomic commit reviewed by someone? When do they review each of those commits? How many commits typically go into a PR?
> Changes requested by reviewers are squashed into the commits and then merged into the project.
So a reviewer finds the appropriate commit that their comment applies to, and then changes the actual commit itself? Who is the author of the commit at that point?
I'm trying to understand what you're talking about, because you seem to have something figured out, for a problem that every team I've worked on struggles with.
> Is every single atomic commit reviewed by someone? When do they review each of those commits? How many commits typically go into a PR?
1) yes 2) when a PR is submitted 3) it can be a lot for a huge project-wide refactoring, but generally I would say 1 to 5 is typical and up to 20 is not strange.
> So a reviewer finds the appropriate commit that their comment applies to, and then changes the actual commit itself?
No, the author applies the requested changes and force-pushes once they have all been addressed.
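Concretely, that usually looks something like this (the commit reference and branch name are placeholders):

    # record each review fix as a fixup of the commit it belongs to
    git commit --fixup=<commit-being-fixed>
    # fold the fixups back into their target commits
    git rebase -i --autosquash main
    # replace the old version of the branch on the remote
    git push --force-with-lease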
> because you seem to have something figured out
Thanks! But it's not me; it's how Linux has used git from the beginning, for example. In fact it's the only workflow used by projects that still use email instead of GitHub/Gitlab PRs, but (trading some old pain for new pain) it is possible to use it even with the latter. The harder part is matching the review comments to the new patch, which is actually pretty easy to do with emails.
It's quite some work and there's some learning curve. But depending on the project it can be invaluable when debugging. It depends a lot on how much the code can be covered by tests, in particular.
I can only conclude that people who think squashing a work item into a single commit is great have never had to do serious bug hunting that relies on commit history for context, nor ever moved forges and lost the context that was supposedly "in the PR anyway".
I think I've done all those things except forges. I don't know what that one is. I still like squashing. I've even become the git expert on my team. With squashed PR resolutions, I can more reliably use bisect. Many individual commits were never actually meaningful in the first place.
I think it's a bit of a limited conclusion. Maybe they really just make small PRs that make sense, and maybe they rewrite the commit message into something useful when squashing.
My employer has all PRs merged by a bot once they're approved. The bot takes the PR description and uses that as the commit message. The PR is the unit of change that gets reviewed, not the commit. This makes for a nice linear bisectable history of commits (one per PR) with descriptions, references to issues on our tracker, etc. And no need to worry about force pushing, rebasing, etc, unless you want to do so.
Of course it's got the same end result as doing an interactive rebase & combining all the in-progress commits into a single reviewable unit of change with a good commit message, but it's a bit more automatic.
; these produce an error, since `b` isn't defined when the body of `a` is compiled
let a = \x -> (b x * 3),
b = \x -> (a x / 2)
It surprised me when this was called out, given that both a and b are defined in the one 'let'. Was there a specific reason you decided not to treat it as a 'letrec'?
Yeah I went back and forth on this a little bit. If all variables are defined at the beginning of the `let` expression, you can't rebind a variable like this (assuming some previous `x`):
let x = x + 1
because the new `x` shadows the old one before `x + 1` is compiled. But if you're defining a recursive function, like this:
let foo = \n -> foo(n + 1)
you need `foo` to be defined before you compile the lambda body, or else the compiler will think you're referencing an undefined variable.
At one point I had an exception where, if the value of a variable being defined is a lambda, it defines the variable first to allow recursion, but not otherwise. But this felt inconsistent and kind of gross. Instead, I decided to have `def` expressions behave like that, and disallow recursion in `let`. `def` is always a function definition, so you'd almost always want that behavior, and I felt that since it has a different syntax, slightly different assignment semantics wouldn't be so bad.
For mutual recursion you have to go a little further, and find all the definitions in the block before you start compiling anything. So `def` expressions are also hoisted to the top of the block and pre-defined at the beginning. This felt ok to me since `def` expressions look like big, formal, static definitions anyway, and it didn't seem surprising that they would have whole-block scope.
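As a loose analogy in Python (not Cassette syntax): the visible effect of hoisting `def`s is that mutually recursive definitions can see each other, much like Python functions whose bodies only look up names when they're called.

    # Loose Python analogy, not Cassette: mutual recursion works because each
    # body only resolves the other name when it is called, by which point
    # both definitions exist (Cassette gets the same effect by hoisting).
    def is_even(n):
        return n == 0 or is_odd(n - 1)

    def is_odd(n):
        return n != 0 and is_even(n - 1)

    print(is_even(10))  # True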
For my hobby language, I figured let rec should be the default and let nonrec the marked case, for exactly the rebinding application. However, it's been over a year since I came to that conclusion, and I still haven't gotten around to implementing the nonrecursive path. (but: mine is very no-moving-parts Immutable, so ymmv)
By the way, why do you need a backslash to define a lambda? It doesn't seem to give any additional information; all you need to know it's a lambda is the presence of the -> operator. Is that a way to make the compiler faster?
That was a pretty late change to the syntax actually — I really, really wanted Javascript-style lambdas but with a skinny arrow, like `(x, y) -> x + y`. But it made parsing and compiling really finicky, so I settled on the backslash syntax, which I've seen in a couple other languages. It almost looks like a "λ"!
Alternatively, we can go the other way, and dispense with the arrow:
\x \y x + y
rather closer to the lambda notation in maths, and works well for compact expressions such as S from the SKI combinator calculus:
\x \y \z x z (y z)
This looks much better with syntax highlighting (hard to demonstrate here of course), being both trivial to implement and informative - just have the backslashed tokens in blue or whatever.
Cassette looks really nice - great intro page, and a great name too! Making something simple is much harder and more valuable than making something complicated.