
Is there any evidence that Comcast cares or needs to care about their reputation?

I'm more curious about the point at which Comcast becomes responsible for handing your PII to that shitty little debt collector organization that let your information leak onto the internet because it has no concept of IT security.

It's not as if you, the delinquent customer, willingly shared your information with that shitty debt collector organization that leaked it, so who's really responsible?


"We don't care. We don't have to!" https://vimeo.com/355556831

I prefer the Honest Cable Company version: https://www.youtube.com/watch?v=0ilMx7k7mso

There's one party in the US that likes to show that government is bad by getting elected and making it worse. Taxes especially are meant to be painful, so that you're reminded of them every day and dread filing.

It's so boring to read comments on here about how everything could just be a text-displaying webpage. The internet everywhere in your pocket is wasted on toilet reading.

But it's also frustrating how the ecosystem of Google and Apple has stifled creativity in the space. We went to the moon with less computing power than a crappy PC in the 90s. People copying Google's model of basic functionality but with ads, and Apple's overly protective but nearly useless permission model, have ruined a technology that should be more revolutionary than it is.

Some of it needs to be solved legislatively. I absolutely should be able to have my entire contacts list read by a random game app, with the only consequence being that I find all my friends who also play the game, and with steep penalties for collecting and selling that data. But I should also be able to vet my mobile computer and put anything on it I want.


What is the vision you’re seeing that would be revolutionary from what we have today, with similar hardware?

Have there been any good efforts toward getting rid of the language or providing an alternative? From my own attempts at switching, the biggest complaint I've read is that the idea behind NixOS and the ecosystem is genius, but one of the biggest drawbacks is writing everything in Nix.

I spend a weekend every so often defining the core of what I want the next time I upgrade, but I just find it so annoying that I'm sure I won't use anything I've written until there's a major change in the ecosystem.


What's so bad about it?

Putting aside the poor typing (the lack of proper typing is a shame, so that's a valid criticism), I actually really like the language - it's genuinely a great DSL for the particular problems it's supposed to handle.

It does take a bit of use for it to click, though. A lot of it has to do not with Nixlang itself but with learning nixpkgs' idioms.
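
For example, here's a hedged sketch of one such idiom, callPackage: the file declares the dependencies it needs as function arguments and nixpkgs fills them in automatically. The file name and package are made up for illustration.

    # hello-wrapper.nix (hypothetical)
    { stdenv, hello }:
    stdenv.mkDerivation {
      name = "hello-wrapper";
      dontUnpack = true;
      installPhase = ''
        mkdir -p $out/bin
        ln -s ${hello}/bin/hello $out/bin/hi
      '';
    }

    # elsewhere: myHello = pkgs.callPackage ./hello-wrapper.nix { };

None of that is special syntax; it's just how nixpkgs happens to wire packages together, which is the part that takes getting used to.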


It's not just poor typing. It's the lack of discoverability. "What are all my options here in this expression? What are the symbols available?"

I think a good IDE integration could solve this, but I'm not sure how much is possible.


Like (iirc) systemd-resolved has `enable`, which is false by default but then gets silently turned on if you use systemd-networkd. How are you supposed to figure that out without reading the source?

But I think this also stems from the fact that the default state of NixOS is "a general purpose Linux system", so instead of just starting at 0 and adding the things you need, you have to mix adding and removing things, which IMO makes things much more complicated (except maybe for newbies to Linux who don't know what's necessary for a running system).


`nixos-rebuild repl` and then you can inspect things like `config.services.resolved.enable` or `:p options.services.resolved.enable.definitionsWithLocations`
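
A quick sketch of what that session looks like (the values shown are illustrative; you get whatever your configuration actually evaluates to):

    $ nixos-rebuild repl
    nix-repl> config.services.resolved.enable
    true
    nix-repl> :p options.services.resolved.enable.definitionsWithLocations
    [ ... each module that set the option, with its file path ... ]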

With a default config you start with a console, systemd, dbus and some things to make it boot. There is barely anything.


You instantiate the nix config and then look up the value:

`nix-instantiate --eval '<nixpkgs/nixos>' -A config.services.resolved.enable`

This is way better than a stateful package manager making a non-revertible change without even telling me.


For other distros, those kinds of behind-the-scenes changes are triggered by package installation and ad-hoc commands. It's not always easy to figure out exactly what has changed, especially after the fact. This, in turn, makes changes difficult to revert.

NixOS is much better because you can inspect the changes after the fact. You also know which code to look for, which is a luxury. If the code seems too much, there's the repl to help. Changes are also much easier to revert.


The nix repl can be a very valuable tool in answering these questions.

That said, I strive to structure my nix source so that portions of it can easily be pasted into a repl. ReadTree goes a long way in that regard: https://github.com/tvlfyi/kit/tree/canon/readTree

More to your point, though: I think a lot is possible. Although nix is very dynamic, it is also, for all intents and purposes, side effect free. I've had this idea that a sufficiently advanced IDE should be able to evaluate your nix code and tell you exactly what the possible values (not just types, but value!) are for any particular variable.


> I've had this idea that a sufficiently advanced IDE should be able to evaluate your nix code and tell you exactly what the possible values (not just types, but value!)

Similarly to the REPL, I'm often using `nix-instantiate --eval -E 'somethingsomething'` so it should definitely be possible.


This is inherently a Hard Problem™, since completions may require evaluating arbitrary derivations (e.g. building a custom Linux kernel).

For "what symbols are available", the nil LSP implementation[1] works for anything in scope that doesn't require evaluation. It also includes completions for the stdlib and NixOS options (in certain contexts).

Another LSP implementation is nixd[2], which is trying to tackle the problem of evaluations for completion.

[1] https://github.com/oxalica/nil/

[2] https://github.com/nix-community/nixd


Maybe then an IDE is not the best target, but rather "REPL-friendliness at all times" is: everything is built around constant re-evaluation, where you can discover the things you're interested in and easily drop in stop gaps and prints.

The community has built two LSPs, and https://search.nixos.org/options (or "man configuration.nix", if you prefer) shows what NixOS options are available

There is this: https://github.com/nix-community/nixd

It has jump to definition and autocomplete. Which is very nice.

It's not perfect, but it's pretty good.


> "What are all my options here in this expression? What are the symbols available?"

Unlike most languages, the symbols available are completely determined by the scope. Just look at the let expressions in effect. There's no magic.

As for nested expressions, that's a typing problem, which was already mentioned above as a pain point (although there are several efforts to fix this).


There is https://search.nixos.org/ and on the command line you can play around with:

$ nix repl

nix-repl> :l <nixpkgs>

nix-repl> {press tab for auto-complete}


But that's not the fault of the language; <insert another DSL> would have the same issues.

Way too much sugar/"idioms", which makes it hard for someone new to the language to figure out what a given piece of code is actually doing. Confusing use of semicolons for what almost every other language uses commas or newlines (or nothing) for. It's the same feeling as writing bash, and needing to always look up again exactly what the syntax is and where the semicolons go.

Here are all the cases where you use a semicolon (estimated 30-second read):

1. At the end of local variables

    let
      a = 1;
      b = 2;
    in
    a + b

    result: 3

2. At the end of each attribute in an attribute set (a.k.a. dictionaries or key-value pairs)

    {
      a = 1;
      b = 2;
    }

    result: { a = 1; b = 2; }

3. After with expressions

    with pkgs;

    coreutils

    result: (the coreutils attribute in the pkgs attribute set)

4. After assertions

    assert a != null;

    a

    result: (the value of a)

Now, you'll never be confused again.

This sounds like a skill problem if you don't even know bash or rust or PHP syntax.

The Nix language also doesn't need to exist. They want to write pure lazy declarative derivations - great, you can do that in any existing language. It's a matter of style and APIs. You don't need to spend years developing a brand new language from scratch. Not to mention that many derivations end up calling a Bash script underneath anyway, because at some point you actually need to perform an action in the real world. How would a derivation look if written in <insert your favorite scripting language> with lazy APIs?
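
For reference, here's roughly what the bash-underneath point looks like in Nix itself; a minimal sketch with made-up names, using the raw derivation primitive:

    with import <nixpkgs> {};
    derivation {
      name = "hello-raw";
      system = builtins.currentSystem;
      builder = "${bash}/bin/bash";
      args = [ "-c" "echo hello > $out" ];
    }

All the purity and laziness lives in how Nix arrives at this data; the build step itself is just a shell command, which is the part any host language would hand off the same way.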

The Nix language is basically what you'd get if you were designing a DSL for declarative configuration of extremely deeply nested trees. Nearly every "feature" in the language is for making that easier. You could probably write a Nix -> JSON compiler and end up with something completely unreadable because the language hides so much of what's actually present in the fully resolved tree.
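
You can get a feel for that by asking Nix to dump an expression as plain data; a hedged sketch (the attribute names are made up):

    $ nix-instantiate --eval --strict --json -E '{ services.nginx.enable = true; }'
    {"services":{"nginx":{"enable":true}}}

The fully resolved tree is just nested data; the language's job is generating it without making you write it all out by hand.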

That's not true. The language needs to be declarative. If you use your favorite language and develop a DSL around those APIs, that DSL would be declarative. Also, the language itself isn't that complicated... it's really quite minimal. It has its roots in ML. I honestly think people expect everything to look like JavaScript or C++, and it's a shame, honestly.

Pulumi is a great example of declarative APIs built with imperative programming languages. SwiftUI is another.

Personally I have nothing against the Nix language, and use it without issue, but it's untrue to suggest that the language itself requires uncommon support for this kind of thing.


Ooof... Pulumi et al are terrible to write and read. Why should I care about writing 'new' in front of all my declarative configuration? What happens when an if statement depends on a concrete value? How would that even work? The leakiness of the abstraction is too terrible to even consider.

Terraform et al, despite not being my favorite, have much simpler semantics than Pulumi. It's not always a good idea to write DSLs into languages with huge paradigm mismatches.


Terraform and Pulumi have basically the same semantics.

> Why should I care about writing 'new' in front of all my declarative configuration?

Because that’s how your choice of language instantiates an object. Try F# or Swift or Go if it’s that annoying to you.

> What happens when an if statement depends on a concrete value?

What do you think “count = var.concrete_value ? 1 : 0” is doing in Terraform, exactly?

> The leakiness of the abstraction is too terrible to even consider.

While you are entitled to your opinion, I'd suggest you are very much mistaken, and would implore you to actually consider it for a minute.


Just as a point of order: you offer no case for Pulumi, and your one actual discussion of the semantics is misplaced, as it deals with if expressions, not statements. Stratified ifs that occur at the non-recursive areas of the language are usually not a problem for these change management systems.

Personally I see it as similar to typed vs untyped languages. You can add typing to untyped languages or you can just use a typed language. The language used shapes the structure and some are easier to reason about than others (to some people).

Some people don't want to hear this, but it is 100% true.

Meh... the nix language being as it is makes it a lot easier to write these things with less cruft. Every attempt I've seen at introducing laziness into a language like python, c++, rust, javascript, etc just seems to require a lot more unnecessary keywords and helper functions and cruft.

There is Guix, which replaces the Nix language with Scheme, but which has some limitations related to a smaller user base, e.g. a smaller package collection.

Replacing the language requires duplicating all the work that went into Nix, to reach parity, so it is not easy.


> Replacing the language requires duplicating all the work that went into Nix, to reach parity, so it is not easy.

That seems like a design flaw in Nix, there's no reason the data model should be so tightly coupled to the scripting implementation that you can't reuse packages written in a different language.


There is no technical barrier against doing that. But much of the power and flexibility in nixpkgs arises from the nix language, not the data model (which is comparatively simple).

For example, see zb: https://www.zombiezen.com/blog/2024/09/zb-early-stage-build-...

Using a different language to depend on packages derived from .nix would be very much akin to depending on a docker image whose Dockerfile you can not inspect.


> Using a different language to depend on packages derived from .nix would be very much akin to depending on a docker image whose Dockerfile you can not inspect.

Speaking of Docker images and Dockerfiles, that's actually a real-world example of how you can achieve this kind of effect without relying on a specific language. Ironically, you can use Nix to build Docker images; there's a bunch of other alternative builders (e.g. Kaniko, Buildah); you can also just stitch together some files&metadata into tarballs, and then 'docker import' it.
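
For instance, a hedged sketch of the dockerTools route (the image name and contents are made up; you'd nix-build this and feed the result to `docker load`):

    with import <nixpkgs> {};
    dockerTools.buildImage {
      name = "hello-image";    # illustrative
      tag = "latest";
      copyToRoot = buildEnv { name = "image-root"; paths = [ coreutils ]; };
      config.Cmd = [ "${coreutils}/bin/uname" "-a" ];
    }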

Nix or Guix are of course much more powerful and expressive than Docker images, but there's always a cost to complexity.


That sounds like a design flaw in the data model if the flexibility exists at a higher level.

What would a better design look like?

One where the "power and flexibility" of nixpkgs is encoded in the data model?

If there is something that can be done in the nix language that can't be expressed in the underlying model that needs to be used by another frontend then it should be represented in the underlying model so another frontend can use it.

To put it another way, if you're designing a client-server model where there may be multiple client implementations you don't bake big chunks of the implementation into the clients, you provide it in the server interfaces and data types.


Okay, but... how? You can't serialize a function or a closure very easily. I'm unaware of any language which attempts to do so.

Not having functions as values (true of pretty much any serialization scheme I've ever seen) makes serialized data structures strictly less powerful than data structures in code.

> that needs to be used by another frontend

I don't think this was ever a goal of Nix. But if it was, well, you would end up with something considerably less powerful for the reasons I stated.


The biggest limitation IMO is that they are HPC-centric, not caring about the desktop, which is the way to get more people discovering a distro. Also, the lack of proper zfs and lvm/mdraid/luks support is a big showstopper.

IIRC guix started off by being compatible with nix package derivations, but they broke it (I may also be remembering incorrectly).

If nothing has changed, they also have a strong ideological drive and won't support any non-free software.


What is a "package derivation"? Packages and derivations are two different things in Guix.

Also, Guix supports proprietary software just fine. It's just not in the main official repo. But there are other repos that have it, e.g. nonguix.


Scheme is so much worse. It has so much repetition and is so much more verbose for no reason.

A macro-capable lisp means that this isn't permanently or definitionally true, even if it is today - which is pretty subjective.

https://www.gnu.org/software/guile/manual/html_node/Macros.h...

Racket is IMO a pretty compelling environment for prototyping DSLs because of how malleable it can be, so I think the ceiling for ergonomics can be pretty high.


I find the nix language to be quite pleasant. There are some syntax quirks and types would be nice, but in general the “json with functions” vibe is imo great and a very nice fit for the domain. Lots of other modern config languages (e.g. dhall, jsonnet) have ended up in this part of the design space too.
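
The "json with functions" vibe in one small, made-up example:

    let mkUser = name: { inherit name; home = "/home/${name}"; };
    in {
      users = map mkUser [ "alice" "bob" ];
    }

which evaluates down to plain nested data, exactly what a config file wants to be.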

With that said tweag has been working on a kind of nix 2.0 / nix with types for a while with the aim (I think) of being able to use it in nixpkgs: https://github.com/tweag/nickel


I also quite like nixlang for config tasks - in theory! In practice it's really annoying. I think the main problem is the interpreter and the bad error messages / bad debuggability.

Part of that just comes from lazy evaluation, which makes debugging a lot harder in general (you feel this in Haskell...), but also just from nix not being a big popular language that gets lots of polish, and being completely dynamically typed.


Nix is ok. I like jsonnet more, and I once tried to write a converter from jsonnet to nix, but it turned out to be much harder than expected (some idioms don't transfer from nix to jsonnet well).

Me as well. As a Haskell / ML programmer, I find it extremely intuitive. It's non-innovative (in a good way). Literally, it's just a functional scripting language.

I don't think the language is the issue here[1].

It seems inevitable to me that some of the design choices around immutability and isolation are going to result in a larger server image (both on disk and in memory) than if you are prepared to forgo those things. For most people that tradeoff is probably worth it, but if you want something to run on an embedded server or with a very low disk footprint, it's probably not right for you.

Around 20 years ago, people who wanted to do this[2] used to make tiny immutable Red Hat servers by remounting /usr and a few other things read-only after boot, so it's certainly doable, but it's a lot more of a pain than what Nix does, and there is no process isolation and no rollback etc. when things go wrong.

[1] ...or generally in fact but that's a matter of opinion and I know people feel differently about this.

[2] me for one, but others also.


> Have there been any good efforts into getting rid of the language or providing an alternative?

Guix is conceptually similar to Nix but uses scheme.


It seems to be an issue with testing and debugging, rather than the language itself. The same issue would also be present if you could switch to any other language for configuration.

Nickel lang is such an effort. I'd say the syntax is a mix of JSON and Lua, and it aims for a non-Turing-complete language. It is still a bit early, but it looks promising.

No, Nickel is Turing-complete. That's been one of the characteristics intended to distinguish it from most other configuration languages from the start.

See the 'RATIONALE' document: https://github.com/tweag/nickel/blob/378ece30b3e3c0ab488f659...


What do you want to replace it with? YAML?

The language won't go away, and you should try to look at it as more than just "I don't like it."


No, writing this in any other language will definitely make for a much shittier Nix experience.

The problem isn't the language, the problem is that nixpkgs (and NixOS) are just huge.


Yes, there's https://github.com/garnix-io/garn but weirdly there was little interest, and they rebranded themselves around being a more general build system based on nix instead of what they originally said about being a nix reimplementation in TypeScript.

The Pokemon games were revolutionary at the time, but they have all been remade at this point. I'd recommend HeartGold or SoulSilver; if you don't like those, you'll never like Pokemon.

Even with nostalgia goggles on, PlayStation 1 and 2 easily had more good games than SNES and Gameboy Pocket.

SNES has some absolute gems and a fairly deep library of legit good games, but most Gameboy games are barely better than most NES games and rarely hold up to anything even in the next generation of handheld Nintendo games.


Worth considering the size of the console libraries. PS2 was releasing games forever and had a mammoth catalog. By sheer numbers you would expect more gems.

So it seems like America is very popular, and there isn't some universal hate in Mexico.

You aren't very creative. I'm talking about if China, or North Korea, deliberately did the smuggling as part of the masses coming through Mexico.

Which office was Hunter Biden running for last cycle?

That’s hardly a reasonable retort. The contents implicated Hunter Biden being used as a pay-to-play doorway to his father, who was running last cycle. Using a middle man doesn’t make it less corrupt.

>Lastly what you shouldn't ever do is get one of those consumers NAS boxes. They are made with no concern for noise at all, and manufacturing cheapness constraints tend to make them literally pessimal at it. I had a QNAP I got rid of that couldn't have been more effective at amplifying drive noise if it had been designed for that on purpose.

Is there any solution that lets me mix and match drive sizes as well as upgrade? I'm slowly getting more and more into self-hosting as much of my digital life as possible, so I don't want to be dependent on Synology, but they offered a product that let me go from a bunch of single drives with no redundancy to being able to repurpose them into a solution where I can swap out drives and, most importantly, grow. As far as I can tell there's no open source equivalent. As soon as I've set up a file system with the drives I already have, the only solution is to buy the same number of drives with more space once I run out.


And I've never used a QNAP, but I'm on my second Synology and their drive carriages all use rubber/silicone grommets to isolate drive vibration from the case. It's not silent - five drives of spinning rust will make some noise regardless - but it sits in a closet under my stairs that backs up to my media cabinet and you have to be within a few feet to hear it even in the closet over background noise in the house.

I don't use any of their "personal cloud" stuff that relies on them. It's just a Linux box with some really good features for drive management and package updates. You can set up and maintain any other services you want without using their manager.

The ease with which I could set it up as a destination for Time Machine backups has absolutely saved my bacon on at least one occasion. My iMac drive fell to some strange data corruption and would not boot. I booted to recovery, pointed it at the Synology, and aside from the restore time, I only lost about thirty minutes' work. The drive checked out fine and is still going strong. Eventually it will die, and when it does I'll buy a new Mac and tell it to restore from the Synology. I have double-disk redundancy, so I can lose any two of five drives with no loss of data so long as I can get new drives to my house and striped in before a third fails. That would take about a week, so while it's possible, it's unlikely.

If I were really paranoid about that, I'd put together a group buy for hard drives from different manufacturers, different runs, different retailers, etc., and then swap them around so none of us were using drives that were all from the same manufacturer, factory, and date. But I'm not that paranoid. If I have a drive go bad, and it's one that I have more than one of the same (exact) model, I'll buy enough to replace them all, immediately replace the known-bad one, and then sell/give away the same-series.


So I’ve got a setup like this:

It's an 8-bay Synology 1821+. It cost about $1300 for the machine, 32GB of ECC memory, and the 10GbE network card.

I have four 8TB drives in a btrfs volume with one-drive redundancy, giving me 21TB of space.

All the important stuff gets also backed up to another 8TB drive periodically and sent to glacier.

The way Synology's SHR-1 setup works seems to be like RAID5 plus a bit more flexibility, so I can add more drives to the array as long as they are 8TB or larger.

The docker manager seems to work pretty well. I run a few services there and mount certain volumes into them. A few DNS records and some entries into the reverse proxy in the control panel of it and you can run whatever you want.

Most critically power draw is very low and it’s very quiet which was an important consideration to me.


I might be misunderstanding your needs but my home server uses just LVM. When I run out of disk space, I buy a new drive, use `pvcreate` followed by `vgextend` and `lvextend`.
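
Roughly like this, as a hedged sketch (the device, VG, and LV names are placeholders, and the filesystem-resize step depends on what you use; ext4 shown here):

    $ pvcreate /dev/sdX
    $ vgextend myvg /dev/sdX
    $ lvextend -l +100%FREE /dev/myvg/mylv
    $ resize2fs /dev/myvg/mylv     # grow the filesystem into the new space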

This.

I've been running LVM and Linux software RAID for like 20 years now.

The only limits (for me at least) are:

    Smallest device in a RAID determines the size of that array. But that's fine, since I then LVM them together anyhow. It does let you mix and match and upgrade, though really I always just buy two drives; it helped when starting out, and I experimented with just LVM without RAID too.

    I have to know RAID and LVM instead of trusting some vendor UI. That's a good thing. I can fix stuff in case it were to break.

    I found that as drives went to terabytes, it was better to have multiple smaller partitions as the RAID devices, even when on the same physical drive. Faster rebuild in case of a random read error. I use raid1. YMMV.

I still have the same LVM partitions / data that I had 20 years ago, but also not. All the hardware underneath has changed multiple times, especially drives. I still use HDDs, and I used to have root on RAID+LVM too, but have switched to a single SSD. I reinstalled the OS for that part, but the LVM+RAID setup and its data stayed intact. If anything ever happens to the SSD with the OS, I don't care. I'll buy a new one, install an OS, and I'm good to go.

> Is there any solution that lets me mix and match drive sizes as well as upgrade?

Probably more than one, but on my non-Synology box I use SnapRAID, which can take any number/size of drives. Downside is that it isn’t realtime, you have to schedule a process to sync your parity: http://www.snapraid.it/
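
For the curious, a snapraid.conf is just a list of data drives plus parity and content files; a hedged sketch with made-up mount points:

    parity  /mnt/parity1/snapraid.parity
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

and the scheduled job is essentially `snapraid sync` (plus an occasional `snapraid scrub`).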


> As soon as I've set up a file system with the drives I already have the only solution is to buy the same amount of drives with more space once I run out.

Recent versions of zfs support raidz expansion [1], which let you add extra disks to a raidz1/2/3 pool. It has a number of limitations, for example you cannot change the type of pool (mirror to raidz1, raidz1 to raidz2 etc.) but if you plan to expand your pool one disk at a time it can be useful. Just remember that 1) old data will not take advantage of the extra disk until you copy it around and 2) the size of the pool is limited by the size of the smallest disk in the pool.

[1] https://github.com/openzfs/zfs/pull/15022
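
The expansion itself is a single attach of a new device to the existing raidz vdev; a hedged sketch with made-up pool, vdev, and device names:

    $ zpool attach tank raidz1-0 /dev/sdX
    $ zpool status tank    # shows the expansion progressing in the background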


I started thinking about this a year ago. Unraid works great for me. I just bought another 32TB to extend to 104TB of usable space, with drives from 8TB to 20TB. It's a JBOD with a dual parity setup; the next upgrade path requires a disk shelf, but hopefully that won't be for a couple of years.

BTRFS

They are basically the sandwich equivalent. You need the correct rice, but they are easy to make; parents make them daily for their kids' lunches.

But, also like sandwiches, an excellent one is difficult to make.

