
I enjoy this metaphor of the cow and the papier-mâché.

Presumably, there is a farmer who raised the cow, then purchased the papier-mâché, then scrounged for a palette of paints, and meticulously assembled everything in a field -- all for the purpose of entertaining distant onlookers.

That is software engineering. In Gettier's story, we're not the passive observers. We're the tricksters who thought papier-mâché was a good idea.


Yes. But look at the bottom. There's an image of the PR review screen. There's one change:

* Normally, the big green button says "Merge pull request"

* Now, the big green button says "Merge when ready"

In a large project with lots of activity, a stampede of people pressing "Merge" at the same time will cause trouble. "Merge when ready" is supposed to solve this.

It seems to mean:

> "GH, please merge this, but take it slow. Re-run the tests a few extra times to be sure."


Here are in-depth details on how it works. [1] Basically, each PR gets put in its own branch with the main branch + all the PRs ahead of it merged in. After tests pass, they are merged in order.

[1] https://docs.github.com/en/repositories/configuring-branches...


Aha, so GitHub merge queue = GitLab merge trains (or at least very similar).


Yes, that's pretty much what it is. Both are essentially reimplementations of bors: https://graydon.livejournal.com/186550.html


Bors is also very similar to the Zuul CI system used for OpenStack. It has the equivalent of a merge queue (with additional support for cross-repository dependencies): https://zuul-ci.org/docs/zuul/latest/gating.html You can then have pull requests from different repositories all serialized in the same queue, ensuring you don't break tests in any of the participating repositories.


Also continuous integration best practices advance one funeral at a time, it seems.


So does each new PR start new tests that will supersede the previous PR’s tests? If one PR’s tests fail, does it block all PRs behind it in the queue?

I've read the docs several times and never found them very clear about the details.


Each PR in the queue is tested with whatever commits it would have were it merged to the target branch in queue order. So if the target branch already has commit A, and commits B and C are in the queue, commit D will be tested on its own temporary branch with commits A, B, C, and D. If the tests for C fail, C is removed from the queue, and D is retested with just commits A, B, and D (because that's what would be on the target branch by the time it merges).
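For anyone who thinks better in code, here's a toy Python model of that behavior (my own names and simplifications -- an illustration of the ordering and retest-on-failure logic, not GitHub's actual implementation):

    def run_queue(merged, queue, tests_pass):
        """merged: commits already on the target branch, e.g. ['A'].
        queue: PR head commits in queue order, e.g. ['B', 'C', 'D'].
        tests_pass: judges a speculative branch (a list of commits)."""
        queue = list(queue)
        i = 0
        while i < len(queue):
            speculative = merged + queue[:i + 1]   # target + all PRs ahead + this PR
            if tests_pass(speculative):
                i += 1                             # stays queued; merges in order
            else:
                del queue[i]                       # evicted; PRs behind it are retested
        return merged + queue                      # eventual state of the target branch

    # The A/B/C/D example above: C's tests fail, so it drops out, and D is
    # retested against just A + B + D.
    run_queue(['A'], ['B', 'C', 'D'], lambda branch: 'C' not in branch)
    # -> ['A', 'B', 'D']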


OK, thank you.


One should make a distinction between:

* The general idea of mixing together filesystems+folders to achieve re-use/sharing/caching.

* The "Dockerfile" approach to this - with its linear sequence of build-steps that map to a linear set of overlays (where each overlay depends on its predecessor).

The "Dockerfile" approach is pretty brilliant in a few ways. It's very learnable. You don't need to understand much in order to get some value. It's compatible with many different distribution systems (apt-get, yum, npm, et al).

But although it's _compatible_ with many, I wouldn't say it's _particularly good_ for any one. Think of each distribution-system -- they all have a native cache mechanism and distribution infrastructure. For all of them, Dockerization makes the cache-efficacy worse. For decent caching, you have to apply some ad hoc adaptations/compromises. (Your image-distribution infra also winds up as a duplicate of the underlying pkg-distribution infra.)
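To see why, here's a toy model of the linear-overlay caching (invented for illustration -- not how Docker actually computes cache keys): each layer's key chains off every step before it, so touching an early step rebuilds everything after it, even when the package set hasn't changed.

    import hashlib

    def layer_keys(steps):
        """Toy model of Dockerfile-style caching: each layer's cache key chains
        the previous layer's key with the current build step."""
        key, keys = "base-image", []
        for step in steps:
            key = hashlib.sha256(f"{key}|{step}".encode()).hexdigest()[:12]
            keys.append(key)
        return keys

    a = layer_keys(["COPY app.conf /etc/", "RUN apt-get install -y big-package"])
    b = layer_keys(["COPY app.conf.v2 /etc/", "RUN apt-get install -y big-package"])
    # a[1] != b[1]: the apt layer gets rebuilt and re-pushed even though the
    # package set is identical -- the package manager's own cache can't help.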

Here's an alternative that should do a better job of re-use/sharing/caching. It integrates the image-builder with the package-manager:

https://grahamc.com/blog/nix-and-layered-docker-images/

Of course, it trades away the genericness of a "Dockerfile", and it no doubt required a lot of work to write. But if you compare it to the default behavior or to ad hoc adaptations, this one should provide better cache-efficacy.
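The rough idea in that post (hand-waving the Nix specifics) is to key layers by package contents rather than by build-step order, so images that share a package share the layer bit-for-bit. In toy form:

    import hashlib

    def content_layer(package):
        """Toy 'one layer per package': the layer's identity depends only on the
        package's own contents (a stand-in string here), not on which step of
        which Dockerfile happened to install it."""
        return hashlib.sha256(package.encode()).hexdigest()[:12]

    image_a = {p: content_layer(p) for p in ["glibc-2.38", "openssl-3.2", "python-3.12"]}
    image_b = {p: content_layer(p) for p in ["glibc-2.38", "openssl-3.2", "nodejs-20"]}
    # The two images share the glibc and openssl layers exactly, so a registry
    # or CI cache stores and transfers them only once.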

(All this is from the POV of someone doing continuous-integration. If you're a downstream user who fetches 1-4 published images every year, then you're just downloading a big blob -- and the caching-layering stuff is kind of irrelevant.)


The whinging hits everyone. Look at any HN story involving the Bay Area, and you'll see a dozen subthreads about how it's a post-apocalyptic hellscape. (But it's home, you know.)

Speaking as an elitist left-wing hippie-business-geek-bro-demon in the Bay Area...

Kudos to the Columbus Area! Ohio, build it up!


I can’t wait to watch Integration Test Email #2 at the same time next week.


> What are we ejecting?

Ourselves, it seems. A Javascript framework is like a jet, and we are the human payload. You can stay in the jet, zooming over the constantly changing landscape. But if you get tired of this zooming around (or if you get scared of hitting a mountain), then you can activate the ejection seat (https://en.wikipedia.org/wiki/Ejection_seat). Of course, now you're a mile high without a plane, but the ejection-seat comes with a parachute, so the descent will be pleasant (or, at least, non-fatal - which is a style of pleasantness).

Erm, wait, I think you were soliciting a more literal answer. :)

"create-react-app" is the Javascript framework/jet. If you want to go for the ride, then you declare a single-dependency on "create-react-app", and they will decide when/what/how to upgrade components in the framework. If you don't want to ride along with "create-react-app"s framework, then you "eject". They'll give you a little bundle (the concrete list of dependencies) and send you off on your way.


I tried this and landed in a field of debris from the jet.


LOL, best answer so far!


In case anyone is interested in that recall effort: https://www.recallsfschoolboard.org/


In fairness, it does give choices -- https://www.php.net/manual/en/spl.datastructures.php


The SPL data structures are widely considered a mistake. They're awkwardly designed, cannot be type-hinted, and often perform worse than pure-PHP alternatives.


Yeah, in the long run, breaking compatibility is totally understandable for a fork. And generally... power-to-them in publishing a free/open/quality SSL implementation.

But... the article also says:

> ... it is not possible to install both OpenSSL and LibreSSL on the same system.

That combination -- not-compatible and not-coexisting -- sounds rough.


> Yeah, in the long run, breaking compatibility is totally understandable for a fork.

Sure, in the long run, when you've overtaken the original and sent it into obsolescence. I.e. at approximately that point where even the original would have deviated from compatibility.

In the short run, drop-in replace, or perish.

Linux still follows POSIX. I can make a program that is source-code compatible with GNU/Linux, Solaris, Cygwin, MacOS, Android/Bionic, ...

If I can't do that between two SSL implementations, something is wrong.


If only it were that simple.

Imagine you're maintaining some program that intensively uses OpenSSL. It's 2014-ish, and some LibreSSL people come to you and tell you that you're using deprecated APIs and you need to upgrade for compatibility with LibreSSL. Sure, it's an improvement and maintains compatibility with OpenSSL, so why not.

Then OpenSSL 1.1 comes with some more API changes. You end up adding #if conditions to support both OpenSSL versions. And the next thing you realize, you've just broken LibreSSL because it pretends to be OpenSSL 2.0.0.

And when you patch it, you realize you're adding another ticking time bomb because LibreSSL will probably support the new API at some point too and you will have to add yet another version check to the code.

I am not surprised that somewhere in the middle of that process, upstreams stop taking LibreSSL seriously.


Right, sprinkling #if's across a dozen downstream codebases sucks.

Consider an analogy to shell-scripting -- there are different shells ("bash", "dash", "zsh") which have substantial overlap (insofar as they largely support traditional POSIX). Someone writing a script/program can target a specific flavor ("#!/usr/bin/env bash") and get more features, or they can write conservative code and let downstream policy determine the shell ("#!/bin/sh"). General-purpose distros (like Debian/Redhat) don't seem to have a problem with supporting them side-by-side.

I'm more than a little rusty on the mechanics of dependency-management in C... but shouldn't it allow some analogous arrangement where an app either (a) signals a requirement for a specific implementation or (b) signals ambivalence/deference (and limits itself to common APIs)?

(This, of course, only makes sense if the developers of LibreSSL/OpenSSL and of the distros believe that it's better to coexist+compete than to consolidate. The tenor of the LWN article seems to convey a XOR mentality, but if both projects have independently healthy teams, then... maybe they prefer friendly coopetition...)


I can't speak to MDN's tooling or culture, but... it sounds like the transition that many FOSS projects made from wiki-workflow docs (Confluence/Mediawiki/etc) to PR-workflow docs (ReadTheDocs/Github/mkdocs/etc).

Anecdotally, for the project where I contribute most... the issue about fewer contributors is real. But you do still get contributors, and PR-workflow makes other aspects easier (like broad clean-ups/reorgs/scheduling/versioning). For contributors who come, you also get the opportunity to socialize/engage them during review. Overall, there are drawbacks/risks, but I'd say it was a net-improvement (wrt quality/clarity of the final docs).

Maybe look at it this way: Both workflows provide a way to organize open/community docs. Both workflows have positive role-models. In both, you need capacity+interest for editing, for socialization, etc. If you tend to these, you can do good. But if there's neglect... then that's where you'll see the starkest differences:

* In wiki-workflow, the likely symptoms of neglect are draft-quality content, bad prose, quirky TOC, drive-by edits that are out-of-place, etc.

* In PR-workflow, the likely symptoms of neglect are slow review/feedback, older content, would-be contributors who can't assimilate to the workflow, etc.

