Hacker News new | past | comments | ask | show | jobs | submit | Athas's comments login

Why not essentially treat it as a cross compilation scenario? NixOS is also source based, but I don't think such a migration would be particularly difficult. You'd use the 32-bit off_t gcc to compile a glibc with 64-bit off_t, then compile a 64-bit off_t gcc linked against that new glibc, and so on. The host compiler shouldn't matter.

I always understood the challenge as binary compatibility, when you can't just switch the entire world at once.


NixOS has it easier here because it doesn't require packages to be "installed" before building code against them. For Gentoo, none of the build scripts (ebuilds) are written to support that. It's plausible that the ebuild machinery could be changed so that this kind of build (against non-installed packages) works, but it would need investigation and might be a difficult lift to get working for all packages.

"treat it as a cross compilation scenario" is essentially what the post discusses when they mention "use a different CHOST". A CHOST is a unique name identifying a system configuration, like "x86_64-unknown-linux-gnu" (etc). Gentoo treats building for different CHOSTs as cross compiling.


NixOS isn't the same kind of source-based. At some level, even Debian could be said to be source based: there's nothing stopping you from deciding to build every package from source before installing it, and obviously the packages are themselves built from source at some point.

NixOS sits between Debian and Gentoo, as it maintains an output that's capable of existing independently of the rest of the system (like Debian) but is designed to use the current host as a builder (like Gentoo). Gentoo doesn't have any way to keep individual builds separate from the system as a whole, as intimated in the article, so you need to work out how to keep the two worlds separate while you do the build.

I think what they're suggesting winds up being pretty similar to what you suggest, just with the right plumbing to make it work on a Gentoo system. NixOS would need different plumbing; I'm not sure whether they've done it yet or how, but I can easily imagine it being more straightforward than what Gentoo needs to do.


There absolutely will be problems with different Nix profiles that aren't updated together; for example, if you update some packages installed in your user's profile but not the running system profile. But this is common enough with other glibc ABI breakage that folks tend to update home and system profiles together, or know that they need to reboot.

Where it will be hell is running Nix-built packages on a non-NixOS system with a non-ABI-compatible glibc. That is something that desperately needs fixing on the glibc side: mostly the design of NSS and networking, which prevents linking against glibc statically.


> Where it will be hell is running Nix-built packages on a non-NixOS system with non-ABI-compatible glibc.

This isn't a thing. Nix-built binaries each use the hardcoded glibc they were built with. You can have any number of glibcs simultaneously in use.


Not at all! Lynx is a quite serviceable Gopher client, and it is available in most package repositories. For Emacs users, elpher is available, and it is very comfortable to use. Far more than modern browsers, really.


Yes, and undoing this means it's no longer UNIX as it will no longer run on that PDP-11! (Unless you can afford a bigger disk I guess.)


A lot of thought [1] has gone into the usr merge, and compatibility with UNIX is one of the considerations. One other UNIX that has done this merge is Solaris, so Linux distros doing the merge are not even that special.

(note: it's not about merging /bin and /sbin)

[1] https://systemd.io/THE_CASE_FOR_THE_USR_MERGE/


If "being UNIX" hinges on the fucking filesystem structure I honestly am fine with it "not being UNIX"


It does. However, it is actually less strict than I had in mind:

https://pubs.opengroup.org/onlinepubs/9699919799/


I suspect you missed the sarcasm in Athas' comment?


The best answer is that you probably don't want to use Futhark for that - it's a specialised, rather restricted language. There are ways to encode irregular arrays, but it's not even remotely as ergonomic as in other languages (it can be plenty fast, however). For a particularly simple example, see https://futhark-lang.org/examples/triangular.html

In the future, or in a future similar language, one might well imagine an ability to "box" arrays, like in APL, which would allow true "arrays of arrays" with no restrictions on sizes.


You can just use 'map' explicitly. It doesn't have to be special syntax.


We are in a sort of similar situation, but our solution was to put a Raspberry Pi in a windowsill that runs a reverse SSH tunnel (through a server in a VPS somewhere).


Why were BOMs ever allowed for UTF-8?


Some editors used them to help detect UTF-8 encoded files. Since the BOM is also a valid zero-width no-break space character, it served as a nice easter egg for people who ended up editing their Linux shell scripts with a Windows text editor.


An attempt to store the encoding needed to decode the data with the data, rather than requiring the reader to know it somehow. Your program wouldn't have to care if its source data had been encoded as UTF-8, UTF-16, UTF-32 or some future standard. The usual sort of compromise that comes out of committees, in this case where every committee member wanted to be able to spit their preferred in-memory Unicode string representation to disk with no encoding overhead.


When UTF-8 was still very much not the default encoding for text files it was useful to have a way to signal that a file was UTF-8 and not the local system encoding.


Some algorithms can operate much more easily if they can assume that multi-byte or variable-width characters don't exist. The BOM means that you don't have to scan the entire document to know whether you can do that.


No, a missing BOM does not guarantee that the file doesn't contain multi-byte code points and never has.


That isn't what I said.

I said that if a BOM is present, then you explicitly know that multi-byte characters are possibly present. Therefore, if it's present you know that assuming that the Nth byte is the Nth code point is unsafe.

The opposite is irrelevant. There's never any way to safely determine a text file's encoding if there is no BOM present.


I never understood the popularity of the '.C' extension for C++ files. I have my own preference (.cpp), but it's essentially arbitrary compared to most other common alternatives (.cxx, .c++). The '.C' extension is the only one that just seems worse (this case sensitivity issue, and just general confusion given how similar '.c' looks to '.C').

But even more than that, I just don't get how C++ turns into 'C' at all. It seems actively misleading.


C++

is Incremented C

which is Big C

which is Capital C


But C is already capital C! Even .d would have been a better extension.


He is clearly talking about the capital version of capital C.


The pandoc binary I have is certainly large at 206MiB (more than I expected!), but it doesn't have any weird dependencies I can see. Just GMP, ncurses, and such. All the Haskell parts are statically linked, which is probably the reason it is so large.


Arch Linux is linking dynamically, IIRC, and there it is 'only' 64 MiB: https://archlinux.org/packages/extra/x86_64/haskell-pandoc/


It also pulls in about a hundred separate Haskell libraries along with it. Not really complaining, but it's funny that pandoc accounts for about half the programs on my laptop.


He mentions her by name in the first paragraph (after the cover letter):

> I don’t post directly because I am in prison for killing my wife Nina in 2006.

He doesn't get into whether/why he is sorry:

> I am very sorry for my crime–a proper apology would be off topic for this forum, but available to any who ask.

Hans Reiser has previously explained why he believed the murder was justified. It is not clear to me, from this letter, whether he still believes so.


He regrets “killing” other’s dreams about working on his file system.

He’s a psychopath. The whole write-up does not discuss anything that was asked; he just wants recognition and fame for inventing something (a queryable file system) that had already been released long before, namely BeFS, by the real file system god, Dominic Giampaolo, who also wrote APFS.

