
Wasn't nano released after the CUA spec? Hardly seems correct calling these "modern" keybinds

AppImage is a good distribution format, but IMO it is not comparable to your system's package manager, or to Flatpak for that matter. For starters, when you download an AppImage, you are just getting the binary. Documentation, Desktop and Service files, and update tracking are all things your system package manager always provides but that are missing from a vanilla AppImage deployment (Flatpak and Snap only handle some of those, sometimes).

The missing piece is perhaps some sort of AppImage installer which can be registered as the handler for the AppImage filetype. When run, it could read metadata (packaged as part of the AppImage?) and generate the support files. Ideally it would also maintain a database of changes to be rolled back on uninstall, and provide a PackageKit or AppStream provider to manage updates with your DE's store.

Now, none of that addresses dependency duplication, but that's clearly not in scope for AppImage.


How big of a problem is dependency duplication on 1TB drives?

Could be big, depending on how much room you give to /. All my Linux life, I have allocated about 50GB to the root partition and it's been adequate, leaving enough room for my data (on a 512GB drive). Now I install one flatpak and I start getting low disk space warnings.

That's also a big reason why I prefer appimages.

ossia score's AppImage is 100 megabytes: https://github.com/ossia/score/releases/tag/v3.2.0

Inside, there's:

- Qt 6 (core, widgets, gui, network, qml, qtquick, serial port, websockets and a few others) and all its dependencies excluding xcb (so freetype, harfbuzz, etc., which I build from more recent versions than many distros provide)

- ffmpeg 6

- libllvm & libclang

- the faust compiler (https://faust.grame.fr)

- many random protocol & hardware bindings / implementations and their dependencies (sdl)

- portaudio

- ysfx

With Flatpak I'd be looking at telling my users to install a couple of GB (which is not acceptable; I was already getting comments that "60 MB are too much" back when it was 60 MB a few years ago).


The issue is memory. Every library in an AppImage is loaded from that image's own copy, so you end up with a bunch of copies of sometimes very large libraries sitting in memory, rather than one copy mmapped from disk once and shared across every process that uses it.

Linux has had support for not doing duplicate pages for a long time now. I am forgetting the name of the feature but essentially this duplication is a solved problem.

That's only the case if the libraries loaded are identical. It won't work with slightly different versions of the same library (unless the differences are small and only replacements, so the pages remain aligned between versions), and that case is very unlikely to be solvable.

The parent comment doesn't talk about different versions and I wasn't either.

For different minor versions and builds of libraries?

This is basic paging and CoW (Copy on Write) behaviour. I agree, it's mostly a non-issue

In addition to memory, there's the ability to patch a libz buffer overflow once, and be reasonably sure you don't have any stale vulnerable copies still in use.

Is this open source? I don't see a link anywhere. Would love to know how the graph rendering works.

It is open source, here is the link: https://github.com/mendableai/firegraph

We use this awesome library to render the graphs: https://www.tremor.so/


Work locks me into Outlook but god I wish I could just grep my inbox

If your sysadmins are kind, Outlook can expose IMAP/SMTP.

I'm still trying to understand why people recommend Nix in place of a build system. Nixpkgs' stdenv by default expects an autotools project. It will happily integrate with other build systems, as long as you've spelled out your dependencies in both. I've yet to see it generate a Makefile or make any decisions about compilation that weren't spelled out in a "traditional" build system. Could you shed some light on what I've missed?

So.. it's sort of a battle over territory between build system and package manager.

Bazel is there becoming ever more complex and unwieldy in an attempt to provide supposed reproducibility - taking control of the provision of ever more of a project's dependencies (in often very janky ways). But to Nix people it's clear that what people are actually doing here is slowly building a Linux/software distribution around their project, in a very ad-hoc and unmaintainable way. And bazel projects will continue to grow in that direction because until you have control of the whole dependency stack (down to the kernel), you're going to struggle to get robust reproducibility.

I don't think many Nix people would suggest actually using Nix as the build system, but probably to use a comparatively simple cmake/meson/whatever build-system and use Nix to provide dependencies for it in a reproducible and manageable way.


You call the blaze side janky and ad-hoc, but to me (as a complete outsider) using a monorepo + build tool seems more principled and closer to the fundamentals, while Nix feels more ad-hoc and like it's trying to fix stuff post-facto.

> And bazel projects will continue to grow in that direction because until you have control of the whole dependency stack (down to the kernel), you're going to struggle to get robust reproducibility.

This is a bit of a weird statement, considering that it's not where bazel is growing to, but where bazel is growing from. The whole starting point for bazel is having full control (via monorepo) of the dependency stack


> You call the blaze side janky and ad-hoc, but to me (as a complete outsider) using a monorepo + build tool seems more principled and closer to the fundamentals, while Nix feels more ad-hoc and like it's trying to fix stuff post-facto.

The Nix side is a maintained software distribution, which is a lot more than a bunch of random versions of tarballs pulled down from random urls, wrapped in minimal build scripts and forgotten about for years on end. It's also work that is shared across packages in the distribution and it produces consistent results that don't have dependency conflicts - if you have two bazel projects that each build against their own cpython, I can guarantee that they will have chosen different versions of cpython. Which one wins when they're used together? Who knows...

Every project building-out their own separate pseudo-linux-distribution cannot produce good results.

> The whole starting point for bazel is having full control (via monorepo) of the dependency stack

I'm not aware of a bazel project that builds its own glibc (I imagine there are some which people could point out...). But then.. do they ship that glibc with the end result? Or just shrug and hope it works fine on whatever glibc the target system happens to have?


I haven't worked at Google, but my understanding is that their monorepo does contain everything, including the kernel, libc, etc. So it's not a bunch of random tarballs; it's a complete, in-house maintained source tree.

> But then.. do they ship that glibc with the end result? Or just shrug and hope it works fine on whatever glibc the target system happens to have?

That's the whole point of the monorepo: you don't have some random target systems; it's all included in the same repository.


Thanks for the summary. I've been using Meson + Nix, so the comments about using Nix as a build system have been confusing. I think what I've been seeing though are "use Nix instead of Bazel", not "use Nix as your build system".

What I mean is use a relatively simple build system instead of Bazel, and deal with dependencies and reproducibility through a Nix development environment.

You lose out on some of the incremental compilation speed that Bazel offers doing this. I think many in the Bazel space suggest using Bazel inside of a Nix environment.

I'm not sure why you'd want to generate a Makefile if you're using nix. Unlike make, nix understands the inputs to a build step and won't bother rerunning it unless those inputs have changed. You would lose that if you generated a Makefile instead of having nix build whatever it is that the Makefile builds.

Otherwise it does the same things as make: this bunch of commands depends on this other bunch of commands... It just makes you express that as a function so it can be smarter about memoization.

I've not used it for large complex builds, so maybe there's some itch it fails to scratch at finer granularity which I'm overlooking. I liked this article about where it shines and where it fails to be a build system: https://www.tweag.io/blog/2018-03-15-bazel-nix/. I've been waiting for the problem to arise that encourages me to learn Bazel so I can use it alongside nix, and it just hasn't yet.


> I'm still trying to understand why people recommend Nix in place of a build system.

Probably because Nix is a build system. After using it for a decade, I dislike that it describes itself as a "purely functional package manager"; that causes all sorts of confusion, since it has far more in common with something like Make (e.g. see my "Nix from the bottom up" page http://www.chriswarbo.net/projects/nixos/bottom_up.html )

> Nixpkgs' stdenv by default expects an autotools project

Ah, I see the confusion. Nixpkgs is not Nix; they are different things!

Nix is a build tool, similar to Make. It has some differences, like caching results using their hash instead of a timestamp, but the main advantage is that its build recipes are composable (thanks to the FP nature of their definitions).

For example, say I run `make` in some project repo, like Firefox. Make will read that project's Makefile, which contains elaborate rules for how the various build products depend on each other. Yet despite all that care and attention, I get an error: `cc: command not found`. Oops, I don't have a C compiler! So I grab a copy of the GCC source, and what do I find inside? Another Makefile! The `cc` command required by the Firefox Make rules is itself defined with Make rules; but the Firefox Makefile can't refer to them, since Make is not composable.

In contrast, Nix is composable: Nix definitions can `import` other files, including from build outputs! For example, we can write a build recipe which imports its definition from a build output; where that build fetches a git commit; and the definitions inside import things from some other builds; and those download and extract a bunch of .tar.gz files; and so on.

Nixpkgs is the most obvious example of this composability, with mountains of build recipes, built up from a relatively small "bootstrap" (pre-built binaries for a few basic tools, like parts of GNU). It's also a testament to backwards-compatibility, since it features build recipes (and helper functions) which act as wrappers around all sorts of legacy tools like Make, PIP, NPM, Cargo, Cabal, etc. (if you're working on a project that's stuck on such things).

Whilst Nixpkgs provides support for all of these things, Nix itself is only capable of invoking a single `exec` syscall (see "Nix from the bottom up"). Everything else is built up on that foundation, and isn't tied to any particular tool, language, framework, etc.

Hence it's not so much that Nix is a "package manager", or "orchestration" tool, or "configuration manager", etc. It's more like: those categories of tools are workarounds for crappy, non-composable build tools like Make. Nix is a composable build tool, so all of those other things turn out to be unnecessary.


Anecdotally, from my dad, who is a controls engineer programming PLCs for manufacturing equipment: the proprietary toolchains needed for the PLC control language run better under Wine.


"the DCHP server" implies it is somehow a special device on your network, which is a flawed assumption. DHCP works on an broadcast protocol and your device will accept the first offer. The fact that the most common residential configuration is for your DHCP to be hosted on your router and thus likely the first to respond is inconsequential to the fact that any hostile device on your network could use this exploit.


That's not at all guaranteed. The residential gateway most likely contains a hardware switch and a CPU, which also does the routing. The CPU is attached to the switch like any other device, though probably with fewer physical-layer bits, and some of them aren't all that fast.


Not implying anything in that regard; I can imagine a clever attack which involves a local malicious DHCP implementation.


The gravity of ASP.NET, SQL Server, AD, etc. is hard to break out of.


I like Go's approach to this same idea. The standard database/sql library provides the standard API and the individual database drivers implement their own backend. You can use the URI connection string for your database (i.e. postgres://...), though only after including the driver in your file's imports. There's even the idiomatic underscore prefix on the package import to note that you're only importing it for how its presence affects another package. Unfortunately there's no way to say which package you're affecting, but it's still better than hidden changes.
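
To make that concrete, here's a minimal sketch of the consuming side, assuming the lib/pq Postgres driver (any driver that registers itself with database/sql works the same way; the connection string and query are placeholders):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        // Blank import: the driver is never referenced directly; importing it
        // runs its init(), which registers the "postgres" driver name.
        _ "github.com/lib/pq"
    )

    func main() {
        // The generic database/sql API stays the same regardless of driver;
        // only the driver name and the DSN change.
        db, err := sql.Open("postgres", "postgres://user:pass@localhost:5432/mydb?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var now string
        if err := db.QueryRow("SELECT now()").Scan(&now); err != nil {
            log.Fatal(err)
        }
        fmt.Println(now)
    }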


So basically JDBC? :) I think a similar approach is used with the crypto provider API and some others

IIRC (in JDBC) you also used to have to do `Class.forName("name.of.it")` somewhere before trying to do any DB access, to ensure that the static initializers had actually run, but I don't believe it's necessary anymore

(And then of course you have Spring Boot autoconfiguring which is another level of magic up, using automatic subclassing and proxy injection to add things like transaction management. And then you can get into proper classloader hackery)


Essentially, yes. And you need to register before use in Go too since reflection is far more limited in Go - often this is done at init time, so you just import the package that does the registration [somehow], but ultimately you just have to do it before you use something: https://pkg.go.dev/database/sql#Register

The "data source name" string when connecting is... basically a JDBC connection string, and some adapters use exactly that iirc, but it's fundamentally an unstructured string that just serves the same purpose. Plugins can use anything they like, and style varies.


I'm not familiar with Bazel, but Nix in its current form wouldn't have solved this attack. First of all, the standard mkDerivation function calls the same configure; make; make install process that made this attack possible. Nixpkgs regularly pulls in external resources (fetchurl and friends) that are equally vulnerable to a poisoned release tarball. Check out the comment on the current xz entry in nixpkgs: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/comp...


If you're ever using Nix like you would Bazel, you likely would not want your derivation to be built via another Makefile. Indeed, that defeats the whole point of using Nix to fix this in the first place. As it is, mkDerivation + fetchurl is mostly used to build existing non-Nix software.

Nix at the very least provides first-class(-ish) build support for modern languages like Rust, Go and Python, but I don't think anyone has written an actual Nix builder for C/C++. A combo of Bazel + Nix is fairly common, though.

IMO, it's hard to say if Nix would "solve this attack", since we've only seen it being truly used on more modern things where the build complexity of the actual piece of code is not much more than the same few commands.

As for pulling down a poisoned tarball, I think the discussion here is rather about upstream projects using Nix to guarantee reproducible builds rather than Nix trying to fend off attacks downstream in Nixpkgs. In modern Nix (with Flakes), this would look something like being able to clone and run `nix build .#` and end up with the same output every time.

