
I can't speak about Chef, but compared to guix and nix, Ansible is about as far removed from declarative as it gets.


And thank god for that. I'm very thoroughly over declarative management systems: the world isn't declarative, and all these systems are only as good as their implementation.

Ansible at least doesn't lie to you about this: it provides the tools to be declarative, but doesn't pretend to understand your problems better than you do.


Opinions differ, I guess. Ansible can't declare a desired state of anything at all and apply it, so I don't see much of a point in it over shell-scripts-over-ssh, other than maybe making decisions based on its fact gathering (but at the cost of having to deal with the yaml boilerplate).


Agreed, Ansible isn't fully declarative like some other tools, but the variable management, roles, template engine, and integrated vault are a big improvement over shell-scripts-over-ssh. Also, in some cases having a yaml structure can be a good thing. Handing a pile of shell spaghetti to a new hire and expecting them to reason about it is less than ideal.


In a single-user scenario where you don't care about a web interface (and its associated additional features) for your repository, you can literally use any server that is accessible to you via ssh and has git installed as a git remote.
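A minimal sketch of what that looks like (hostname and paths are placeholders):

    # On the server: create a bare repository to push to.
    ssh user@myserver 'git init --bare ~/repos/myproject.git'

    # Locally: add it as a remote and push.
    git remote add origin user@myserver:repos/myproject.git
    git push -u origin main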


You can, but only for a relatively short amount of time after posting the comment. Maybe an hour or so?


IMO if you require libraries in other languages then a pure python package manager like uv, pip, poetry, whatever, is simply the wrong tool for the job. There is _some_ support for this through wheels, and I'd expect uv to support them just as much as pip does, but they feel like a hack to me.

Instead there is pixi, which is similar in concept to uv but for the conda-forge packaging ecosystem. Nix and guix are also language-agnostic package managers that can do the job.


But for example, if I install the Python package "shapely", it will need a C package named GEOS as a shared library. How do I ensure that the version of GEOS on my system is the one shapely wants? By trial and error? And how does that work with environments, where I have different versions of packages in different places? It sounds a bit messy to me, compared to a solution where everything is managed by a single package manager.


You are describing two different problems: do you want a shapely package that runs on your system, or do you want to compile shapely against the GEOS on your system? In case 1 it is up to the package maintainer to package and ship a version of GEOS that works with your OS, Python version, and library version. If you look at the shapely page on PyPI you'll see something like 40 packages for each version, covering the most popular permutations of OS, Python version and architecture. If a pre-built package exists that works on your system, then uv will find and install it into your virtualenv and everything should just work. This does mean you get a copy of the compiled libraries in each venv.

If you want to build shapely against your own version of GEOS, then you fall outside of what uv does. What it does in that case is download the build tool(s) specified by shapely (setuptools and cython in this case) and then hand over control to those tools to handle the actual compiling and building of the library. In that case it is up to the creator of the library to make sure the build is correctly defined, and up to you to make sure all the necessary compilers, headers, etc. are set up correctly.


In the first case, how does the package maintainer know which version of libc to use? It should use the one that my system uses (because I might also use other libraries that are provided by my system).


The libc version(s) to use when creating Python packages are standardised and documented in a PEP (the manylinux specifications), including how to name the resulting package to describe the libc version. Your local Python installation knows which libc version it was compiled against and reports that when you try to install a binary package. If no compatible version is found, the installer tries to build from source. If you are doing something 'weird' that breaks this, you can always use the --no-binary flag to force a local build from source.
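You can inspect this locally if you're curious; a rough sketch (output differs per system, and pip itself marks `pip debug` as unstable):

    # Which libc is this system running?
    ldd --version | head -n 1

    # Which wheel tags (e.g. manylinux_2_17_x86_64) does this interpreter accept?
    pip debug --verbose | grep -i manylinux | head

    # Wheel filenames on PyPI carry the same tags, e.g. (illustrative):
    #   shapely-2.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl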


You could use a package manager that packages C, C++, Fortran and Python packages, such as Spack: here's the py-shapely recipe [1] and here is geos [2]. Nix can probably do something similar.

[1]: https://github.com/spack/spack/blob/develop/var/spack/repos/... [2]: https://github.com/spack/spack/blob/develop/var/spack/repos/...
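Usage is roughly like this (the geos version is just an illustration; Spack's `^` syntax pins a dependency in the spec):

    # Build py-shapely, explicitly pinning the geos it links against.
    spack install py-shapely ^geos@3.12

    # Make it available in the current shell.
    spack load py-shapely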


That's what I mean: in this case pip, uv, etc. are the wrong tool to use. You could e.g. use pixi and install all Python and non-Python dependencies through that; the conda-forge package of shapely will pull in geos as a dependency. Pixi also uses uv as a library, so you can combine PyPI and conda-forge packages with one tool.
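A small sketch of what that looks like with pixi (project name is arbitrary; shapely's conda-forge package pulls in a matching geos):

    # Create a pixi project (writes a pixi.toml) and add shapely;
    # the conda-forge solver brings in geos automatically.
    pixi init myproject
    cd myproject
    pixi add shapely

    # Run things inside the project's environment.
    pixi run python -c "import shapely; print(shapely.geos_version)"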

But conda-forge packages (just like PyPI packages, or anything that does install-time dependency resolution, really) are untestable by design, so if you care about reliably tested packages you can take a look at nix or guix and install everything through those. The tradeoff is that they usually have fewer libraries available, and often only in one version (since every version has to be tested with every possible version of its dependencies, including transitive ones and the interpreter).

All of these tools have a concept similar to environments, so you can get the right version of GEOS for each of your projects.


Indeed, I'd want something where I have more control over how the binaries are built. I had some segfaults with conda in the past, and couldn't find where the problem was until I rebuilt everything from scratch manually and the problems went away.

Nix/guix sound interesting. But one of my systems is an nVidia Jetson system, where I'm tied to the system's libc version (because of CUDA libraries etc.) and so building things is a bit trickier.


With uv (and pip) you can pass the --no-binary flag and it will download the source code and build all your dependencies, rather than downloading prebuilt binaries.

It should also respect any CFLAGS and LDFLAGS you set, but I haven't actually tested that with uv.
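With pip's spelling of the flags it looks roughly like this; uv's `uv pip install` accepts --no-binary as well, though I haven't checked the per-package form there:

    # Force source builds for everything being installed (':all:'),
    # with custom compiler/linker flags.
    CFLAGS="-O2" LDFLAGS="-L/usr/local/lib" \
        pip install --no-binary :all: shapely

    # Or restrict the source build to a single package:
    pip install --no-binary shapely shapely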


I just tried --no-binary with the torchvision package (on a Jetson system). It failed. Then I downloaded the source and it compiled without problems.


This type of situation is why I use Docker for pretty much all of my projects—single package managers are frequently not enough to bootstrap an entire project, and it’s really nice to have a central record of how everything needed was actually installed. It’s so much easier to deal with getting things running on different machines, or things on a single machine that have conflicting dependencies.


Docker is good for deployment, but devcontainer is nice for development. Devcontainer uses Docker under the hood. Both are also critically important for security isolation unless one is explicitly using jails.


What exactly prevents you from creating your own packages if you want to use your system package manager?

On Alpine and Arch Linux? Exactly nothing.

On Debian/Ubuntu? Maybe the convoluted packaging process, but that's on you for choosing those distributions.


On Nvidia/Jetson systems, Ubuntu is dictated by the vendor.


> The quest to get every build process to be deterministic [...] will never be solved for all of Nixpkgs.

Not least because of unfree and/or binary-blob packages that can't be reproducible because they don't even build anything. As much as Guix' strict FOSS and build-from-source policy can be an annoyance, it is a necessary precondition to achieve full reproducibility from source, i.e. the full-source bootstrap.


Nixpkgs provides license[1] and source provenance[2] information. For legal reasons, Nix also defaults to not evaluating unfree packages. Not packaging them at all, though, doesn't seem useful from any technical standpoint; I think that is purely ideological.
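For illustration, with the classic CLI (the package name is a placeholder, and the exact attribute prefix depends on your channel):

    # Fails by default: nixpkgs refuses to evaluate packages with an unfree license.
    nix-env -iA nixpkgs.some-unfree-package

    # Explicit opt-in for a single invocation:
    NIXPKGS_ALLOW_UNFREE=1 nix-env -iA nixpkgs.some-unfree-package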

In any case, it's all a bit imperfect anyway, since it's from the perspective of the package manager, which can't be absolutely sure there's no blobs. Anyone who follows Linux-libre releases can see how hard it really is to find all of those needles in the haystack. (And yeah, it would be fantastic if we could have machines with zero unfree code and no blobs, but the majority of computers sold today can't meaningfully operate like that.)

I actually believe there's plenty of value in the builds still being reproducible even when blobs are present: you can still verify that the supply chain is not compromised outside of the blobs. For practical reasons, most users will need to stick to limiting the amount of blobs rather than fully eliminating them.

[1]: https://nixos.org/manual/nixpkgs/stable/#sec-meta-license

[2]: https://nixos.org/manual/nixpkgs/stable/#sec-meta-sourceProv...


You can slap a hash on a binary distribution and it becomes "reproducible" in the same trivial sense as any source tarball. After that, the reproducibility of whatever "build process" takes place to extract archives and shuffle assets around is no more or less fraught than for any other package (probably less, considering how much compilers have historically had to be brought to heel, especially before reproducibility was fashionable enough to enter much into compiler authors' consideration!)


Now I am wondering what kind of unix magic "oregano" must be...


> The oregano is reputedly referring to an incident in which one of the original folks involved with BSD was hassled for coming across the Canadian/U.S. border with a bag of what was assumed to be an illegal substance, and turned out to be oregano.

https://groups.google.com/g/comp.unix.wizards/c/qkiqSJWgEPE/...


I had always heard this explanation (back to when the poster was new and we all wanted one). And in the back of my mind I have always thought I'd been told the "BSD founder" of the story was Kirk McKusick. But I cannot for the life of me google (well...Kagi) up who the real culprit was. Does anyone know authoritatively?


That is a good one. Would be great to talk to the author and learn what they originally had in mind.


> There are plenty of websites that were just static pages used for conveying information.

If you care about the integrity of the conveyed information you need TLS. If you don't, you wouldn't have published a website in the first place.

A while back I saw a WordPress site for a podcast without https where people also argued it doesn't need it. They had banking information for donations on that site.

Sometimes I wish every party involved in transporting packets on the internet would just mangle all unencrypted http that they see, if only to make a point...


There is a specific class of websites that will always support non-TLS connections, like http://home.mcom.com/ and http://textfiles.com/ .

Like, "telnet textfiles.com 80" then "GET / HTTP/1.0", <enter>, "Location: textfile.com" <enter><enter> and you have the page.

What would be the point of making these unencrypted sites disappear?


textfiles.com says: "TEXTFILES.COM has been online for nearly 25 years with no ads or clickthroughs."

I'd argue that that is most likely an objectively false statement, and that the domain owner is in no position to authoritatively answer the question of whether it has ever served ads in that time. As it is served without TLS, any party involved in the transportation of the data can mess with its content and e.g. insert ads. There are a number of reports of ISPs having done exactly that in the past, and some might still do it today. Therefore it is very likely that textfiles.com, as shown in someone's browser, has indeed had ads at some point in time, even if the one controlling the domain didn't insert them.

Textfiles also contains donation links for PayPal and Venmo. That is an attractive target to replace with something else.

And that is precisely the point: without TLS you do not have any authority over what anyone sees when visiting your website. If you don't care about that, then fine; my comment about mangling all http traffic was a bit of hyperbole. But don't be surprised when it happens anyway and donations meant for you go to someone else instead.


There is a big difference between "served ads" and "ads inserted downstream."

If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.

If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?

If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?

If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?

If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?

Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.

I therefore conclude that your interpretation is meaningless.

> "as shown in someones browser"

Which is different than being served by the server, as I believe I have sufficiently demonstrated.

> But don't be surprised when it happens anyway

Jason Scott, who runs that site, will not be surprised.


> If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.

I agree it is not. That is why I didn't say that the original server served ads, but that the _domain_ served ads. Without TLS you don't have authority over what your domain serves, with TLS you do (well, in the absence of rogue CAs, against which we have a somewhat good system in place).

> If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?

This is simply a compromised device.

> If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?

This is an ISP giving you instructions to compromise your device.

> If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?

No, in this case I am clearly no longer looking at the website, but asking a third-party to convey it to me with whatever changes it makes to it.

> If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?

No, archive.org is then serving an ad on their own domain, while simultaneously showing an archived version of your website, the correctness of which I have to trust archive.org for.

> Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.

Fair point. I should have said that I additionally expect the client device to be uncompromised, otherwise all bets are off anyway, as your examples show. The implicit scenario I was talking about includes an end-user using an uncompromised device and putting your domain into their browser's URL bar, or making a direct http connection to your domain in some other way.


Well, both those domains have the specific goal of letting people browse the web as if it were the 1990s, including using 1990s-era web browsers.

They want the historical integrity, which includes the lack of data integrity that you want.


This argument is stupid.


Why?


Instead of using telnet, switch over to a TLS client.

    openssl s_client -connect news.ycombinator.com:443
and you can do the same. A simple wrapper, alias or something makes it as nice as telnet.
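For example, something like this (just one possible shape for such a wrapper):

    # 'tlstelnet host [port]' opens an interactive TLS connection;
    # -quiet suppresses the certificate output so it feels like telnet.
    tlstelnet() {
        openssl s_client -quiet -connect "${1}:${2:-443}"
    }

    # Usage: tlstelnet news.ycombinator.com
    # then type "GET / HTTP/1.0", "Host: news.ycombinator.com", blank line.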


My goal was to demonstrate that it supported http, and did not require TLS.


I'm pretty sure tons of people have made web pages or sites without caring about the integrity of the conveyed information. Not every website is something important like banking. It doesn't matter if a nefarious actor tweaks the information on a Shining Force II shrine (and even then, only for people who they're able to MITM).

In practice, many pages are also intentionally compromised by their authors (e.g. including malware scripts from Google), and devices are similarly compromised, so end-to-end "integrity" of the page isn't something the device owner even necessarily wants (c.f. privoxy).


What ensures the integrity of conveyed information for physical mail? For flyers? For telephone conversations?

The cryptography community would have you believe that the only solution to getting scammed is encryption. It isn't.


This post I am typing here can happily go through Russia/China/India and you cannot do anything about it - and bad actors can actively make your traffic go through them, as has happened multiple times with BGP hijacking.

The NSA was installing physical devices at network providers that scoured through all information - they did not need an Agent Smith opening envelopes or even looking at them. Keep in mind criminals could do the same: just pay off some employees at a provider. Also, not all network providers are in countries where law enforcement works - and, as mentioned, your data can go through any such provider.

If I send physical mail I can be sure it is not going through Bangkok unless I specifically send it to a destination that requires it to go there.


> What ensures the integrity of conveyed information for physical mail? For flyers? For telephone conversations?

Nothing, really. But for physical mail the attacks against it don't scale nearly as well: you would need to insert yourself physically into the transportation chain and do physical work to mess with the content. Messing with mail is also taken much more seriously as an offense in many places, while laws are not as strict for network traffic generally.

For telephone conversations, at least until somewhat recently, the fact that synthesizing convincing speech in real time was not really feasible (especially not if you tried to imitate someone's speech) ensured some integrity of the conversation. That has changed, though.


~username expands to the home directory of username. There might have been a shell that likewise expanded ~* to all home directories.
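E.g. in a POSIX-ish shell (the exact paths depend on the system's user database):

    echo ~          # your own home directory, e.g. /home/alice
    echo ~root      # root's home directory, usually /root
    echo ~nobody    # e.g. /nonexistent or /var/empty, depending on the OS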


JSON itself is bad for a streaming interface, as is common with CLI applications. You can't easily consume a JSON array without first reading it in its entirety. JSONL would be a better fit.
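A small sketch of the difference (assuming jq; `producer` and its flags are hypothetical):

    # JSONL: one object per line, so a consumer can start working on the
    # first record before the producer has finished.
    producer --jsonl | jq -r '.name'

    # A single JSON array has to be parsed as a whole before its elements
    # can be pulled apart and streamed onward.
    producer --json | jq -c '.[]' | jq -r '.name'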

But then, how well would it work for ad-hoc usage, which is probably one of the biggest uses of shells?


> [...] and it pretty much just works.

I beg to differ. Last time I had to use PowerPoint (granted, that was ~3 years ago), math on the slides broke when you touched it with a client that wasn't of the same type as the one that initially put it there. So you would need to use either the web app or the desktop app to edit it, but you couldn't switch between them. Since we were working on the slides with multiple people you also never knew what you had to use if someone else wrote that part initially.


could it be a font issue?


If I remember correctly, I had created the math parts with the Windows PowerPoint app and they were shown more or less correctly in the web app, until I double-clicked on them and they completely broke; something like the math being a single element that wasn't editable at all when it should have been a longer expression, I don't remember the details. But I am pretty sure it wasn't just a font issue.

