Dependencies and resilience (ingino.me)
20 points by sebastianingino 25 days ago | 12 comments



The solution has existed for decades: instead of layered OSes plus application software, back in the Xerox days a Smalltalk workstation was an OS as a single application, a live framework, and "apps" were just bits of code. Today it's nearly the same (all but the C core) in Emacs.

This model makes "bad citizens" a nightmare, because they are not isolated bits that fail individually but something that fails inside the system. This might lead to slow development for revolutionary things and very quick development for incremental evolution, which is probably very good.

Unfortunately this model means end-user programming, the power of computing in users' hands, not a dumb human operating, monkey-like, an endpoint of a remote service in a mainframe-style model. That's something most IT giants really dislike, because it could easily erase their business...


Platform-based, such as Squeak and Etoys (not eToys), or with VMs such as the JVM or BEAM.

The problem you describe is solved by introducing capabilities, as in seL4: unless it starts out holding a capability, a component cannot do anything to anything else.


Thanks for letting me know about Etoys (I know Logo, but had never heard of Etoys before). About the JVM: Java was probably born with the idea of reproposing the old model on top of Unix, a JVM evolving toward an in-kernel VM with the userland and application software on top of it, simple classes/archives, networked of course ("the network is the computer"), but 99% of people probably never got that idea. They chose Java because a big player of its time like Sun pushed an enormous amount of money behind it, and because "it looks like C++, which looks like an improved C", probably the same reason that made early PHP popular. AFAIK nobody really advertised Java as such an attempt, so for most it was just another programming language with interesting capabilities at first, then another programming language with an enormous software ecosystem (no matter whether it's called Nexus or C*AN).

Erlang has not seen similar success because, like Haskell, it's awful to learn and had no special advertisement. I know little about Erlang's history, but I doubt Ericsson ever had in mind something as big as Java was for Sun anyway.

About security: yes, the classic model means full trust, which is a big issue in the modern world, but not really that big, because it also means FLOSS, even if back at Xerox that was not advertised at all and their target was commercial. FLOSS at scale leaves little room for hostile actors, since the code is born and evolves under many eyes, and while injecting malicious code is perfectly possible (as the recent XZ attack showed very well), it's hard to keep it unseen for long, and even harder at scale, since such ecosystems tend to be far less uniform/consistent than modern commercial OSes, which are almost the official ISO image with marginal changes per host.

About capacity: on my personal EXWM/Emacs desktop I can link an email (notmuch-managed) in a note (org-mode), and while composing an email I can preview a LaTeX fragment inside it or solve an ODE, simply because the system I'm in offers such functionality without tying it to a specific UI, a set of custom APIs, and limited IPC (essentially just drag-and-drop and cut-and-paste, since Unix pipes, redirections, etc. are not in the GUIs). Also, in eshell I can use a different kind of IPC, like redirection to buffers, de facto creating a 2D CLI, which happens to be a GUI, a DocUI. Long story short, we can do modern software with classic ecosystems; it's definitely time consuming, but doable, and keeping up the current Babel tower of pseudo-isolated bits is not less time consuming.

I call this phenomenon a cultural clash: the modern model is the ignorant model, where anyone can step in, like Ford assembly-line workers able only to give a quarter turn of a key, but where doing anything is a continuous struggle and anything learned is short-lived. The classic model is the cultural model: stepping in takes long and demands effort, but anything learned is an investment for life, and the piled-up knowledge pays back all the time, making everything easier, or at least far less hard, than in the ignorant model... Now, take a look at our society: we have schools, meaning years-long periods of learning before becoming active in society, learning things that will theoretically be valid and useful for a lifetime. So, if in society we try the acculturated model, why not do the same in the nervous system of our society, which is IT?


I wonder how much dependencies could be reduced by systematically searching for low-hanging fruit and addressing it ad hoc. For example, if commonly-used library A uses one minor thing from (and thus imports all of) library B, which in turn imports hundreds of other libraries, then someone should add the minor thing in question to A directly and remove the dependency on B there.
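
To make that concrete, here's a hedged TypeScript sketch (the package name "big-utils" and the clamp helper are invented, not a real case): instead of importing the whole of library B for one small function, library A vendors it.

    // Before: pulls in all of "big-utils" (and everything it imports)
    // just for one tiny helper.  Package name is hypothetical.
    // import { clamp } from "big-utils";

    // After: the "minor thing" is inlined into library A itself, and the
    // dependency on B (plus B's hundreds of dependencies) goes away.
    export function clamp(value: number, min: number, max: number): number {
      return Math.min(Math.max(value, min), max);
    }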

It's interesting to think about how this sort of "neighborhood watch" could be incentivized, since it's probably way too big a task for purely volunteer work. It's tricky, though, because any incentive to remove dependencies would automatically be a perverse incentive to ADD dependencies (so that you can later remove them and get the credit for it).


Then the code for library B still exists and still potentially has bugs; the only difference is that the same bug has to be fixed by project A1, then again by project A2, then project A3, etc. There is a cost there too, outlined in the recent article 'Tech Debt: My Rust Library Is Now a CDO': https://news.ycombinator.com/item?id=39827645


I guess there's a hybrid model where you're able to select exactly what you're depending on and pull it in dynamically at build/package time.

I've thought a little about, for example, building something that could slice just the needed utility functions out of a shell utility library. (Not really for minimizing the dependency graph, just for reducing the source-time overhead of parsing a large utility library that you only want a few functions from.)

Would obviously need a lot of toolchain work to really operationalize broadly.

I can at least imagine the first few steps of how I might build a Nix expression that, say, depends on the source of some other library, runs a few tools to find and extract a specific function plus the other (manually identified) bits of source necessary to build a ~library with just that one function, and then lets the primary project depend on that. It smells like a fair bit of work, but not so much that I wouldn't try it if the complexity/stability of the dependency graph were causing me trouble?
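
For the "find and extract a specific function" step, here's a minimal sketch (assumptions: the source is TypeScript, the paths and the clamp function are made up, and the Nix wiring around it is left out) using the TypeScript compiler API:

    // Sketch: slice one named function declaration out of a library's source.
    // Any helpers it depends on would still need to be identified manually.
    import * as fs from "node:fs";
    import * as ts from "typescript";

    function extractFunction(sourcePath: string, name: string): string | undefined {
      const text = fs.readFileSync(sourcePath, "utf8");
      const sf = ts.createSourceFile(sourcePath, text, ts.ScriptTarget.Latest, true);
      let found: string | undefined;
      sf.forEachChild((node) => {
        if (ts.isFunctionDeclaration(node) && node.name?.text === name) {
          found = node.getText(sf); // source text of just this declaration
        }
      });
      return found;
    }

    // A derivation could run something like this at build time and expose
    // the generated file as the entire single-function "library".
    const sliced = extractFunction("vendor/big-utils/src/index.ts", "clamp");
    if (sliced) fs.writeFileSync("clamp.generated.ts", sliced + "\n");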


Isn't that already just the role of tree-shaking optimizers? At that point the problem seems to be languages that don't have good tree-shakers, languages that don't or can't tree-shake library dependencies, or maybe that tree-shaking should happen earlier and more often than it typically does?

Observably, it seems like the "granularity pendulum" in the JS ecosystem is very directly related to the module system. CommonJS was tough to tree-shake, so you sometimes had wild levels of granularity where even individual functions might be their own package in the dependency graph. ESM is a lot easier to tree-shake, and alongside ESM adoption you start to see more of the libraries that once published dozens or hundreds of sub-packages repackage back into just one top-level package.
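
The difference is easy to see in a small TypeScript sketch (lodash vs. lodash-es as the usual pair; the resize handler is invented):

    // CommonJS: `require` returns one dynamic object, so older bundlers had
    // to assume any export of lodash might be reached and keep all of it.
    // const { debounce } = require("lodash");

    // ESM: a static named import lets the bundler prove that only
    // `debounce` is reachable and drop the rest of lodash-es (tree-shaking).
    import { debounce } from "lodash-es";

    export const onResize = debounce(() => {
      console.log("window resized");
    }, 250);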


Perhaps by coincidence, there's a good post on treeshaking+wasm on the front page this morning: https://news.ycombinator.com/item?id=40023319


I imagine the answer's ~yes from the perspective of something you build and deploy (and I agree it's relevant to the article--but I'll caveat that I read xamuel to be asking the question very broadly).

Relying on a post-build process to avoid deploying unused code and dependencies still exposes you to a subset of the problems that come with most, if not all, of the dependency graph.

Sufficiently-rich correct-by-definition metadata on the internal and external dependencies of each package might let you prune some branches without requiring the dependency to be present, but in the broad there are a lot of cases where that can't really help?


Some package managers have "features" (e.g. Rust's cargo) or "extras" (e.g. Python) which might be what you are talking about.

Of course another solution is to make smaller libraries in the first place, so your users don't feel like they need to break it up.


> Some package managers have "features" (e.g. Rust's cargo) or "extras" (e.g. Python) which might be what you are talking about.

I don't think so (though I agree that mechanisms like this are one way to approach the problem).

AFAIK both of these mechanisms are mostly used to provide ~optional behavior (usually to exclude dependencies if you don't need that behavior). This can minimize the set of dependencies, but it rests on the maintainers' sense of what the core of their library is and what's ancillary. Said the other way around, both require the software's maintainers to anticipate your use case and to feel it was a good use of their time to split things up very granularly.

In xamuel's hypothetical of library A using one minor thing from library B, this almost certainly means reusing less of library B than its maintainers anticipate.

I can imagine this working in cases where the package is a true bundle of discrete utilities that almost no one will need all of (the package itself is an incredibly small core/stub and each utility is a feature/extra), and the maintainers want to intentionally design it for modular consumption.

But it's hard to imagine many maintainers going through the work of dicing a cohesive library up into granular units when they think most users will be consuming it whole?


I do think there is a legitimate question about why we feel such granular dependencies are normal in software. I suspect the dependency graph for many photography studios was smaller than what a hobby website entails. Heck, I suspect early hobby journals had shorter dependency lists than hobby websites do.

That said, I hesitate to put too much emphasis on this, largely because I suspect we have such granular dependency graphs simply because we can. There is virtually no harm in having such large graphs of things. Not that it helps, of course, but what is the harm?

Nor is it clear that you're lacking a ton of resilience here. If you know what you want your website to be, you can probably recreate it faster than you'd suspect, especially if you can target more modern browsers.



