I agree, I feel like Nix is kind of a hack to work around the fact that many build systems (especially for C and C++) aren't pure by default, so it tries to wrap them in a sandboxed environment that eliminates as many opportunities for impurity as it reasonably can.
It's not solving the underlying problem: that build systems are often impure and sometimes nondeterministic. It also tries to solve a bunch of adjacent problems, like providing a common interface for building many different types of package, providing a common configuration language for builds as well as system services and user applications in the case of NixOS and home-manager, and providing end-user CLI tools to manage the packages built with it. It's trying to be a build wrapper, a package manager, a package repository, a configuration language, and more.
Purity becomes a hard goal whenever you hit the real world at build or runtime. By definition, you have to bridge two domains.
Imagine constant-time compute and constant-memory constraints, as required in cryptography, being applied to the Nix ecosystem.
Yes, this is an artificial example, but it shows that purity is harder to define and come by than some people think. Maybe someday these constraints will actually apply to Nix's goal of reproducibility.
With ever-changing hardware, that purity is a moving target, so Nix imo will always be an approximation of purity, and bundling so much tooling is to be expected. Still, you can legitimately call it a hack :)
I've always thought it would be nice to have a language whose spec describes a few different "power levels":
1. Literal values only (objects, arrays, strings, numerics, booleans, maybe datetimes), a la JSON, with standardized semantics
2. Literals, variable definitions and references, pure function definitions, and pure function calls, but with recursion prohibited, so that evaluation is not Turing-complete and any value is guaranteed to evaluate to its literal form in finite time
3. All of the above plus recursion and impure, side-effectful functions, custom type definitions, etc.
This way, implementing a literal parser in other languages would be comparatively straightforward (much like JSON), since it wouldn't have to support variables or functions. It would also be possible to define values using variables and pure functions (much like HCL or Nix) and then use the language's own evaluator binary (or perhaps a wrapped FFI library) to safely convert these into literal values that another language can parse, while guaranteeing that evaluation has no side effects. And it would leave open the escape hatch of a "full" Turing-complete language with side effects and recursion, while ensuring that using that escape hatch is a deliberate choice and not the default.
I'm sure there are a few additional or hybrid levels that could be useful too (2 but with recursion? 1 but with variables?) but this seems like it would be a solid starting point.
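As a toy illustration of level 2, here's a sketch of an evaluator for expression trees of literals, variables, one built-in pure operation, and function calls, which rejects recursion by tracking the call stack. Everything here (the tuple-based AST shapes, the names) is invented for illustration, not any real language's design:

```python
# Hypothetical "level 2" evaluator: literals, variables, and pure function
# calls only, with recursion rejected so evaluation is guaranteed to halt.
# The tuple-based AST ("lit", "var", "add", "call") is made up for this sketch.

def evaluate(expr, env, funcs, stack=()):
    """Reduce an expression tree to a literal value, refusing recursion."""
    kind = expr[0]
    if kind == "lit":                         # ("lit", value)
        return expr[1]
    if kind == "var":                         # ("var", name)
        return env[expr[1]]
    if kind == "add":                         # one built-in pure operation
        return (evaluate(expr[1], env, funcs, stack)
                + evaluate(expr[2], env, funcs, stack))
    if kind == "call":                        # ("call", fname, [args...])
        fname, args = expr[1], expr[2]
        if fname in stack:                    # any call cycle is recursion
            raise ValueError(f"recursive call to {fname!r} rejected")
        params, body = funcs[fname]
        local = {p: evaluate(a, env, funcs, stack) for p, a in zip(params, args)}
        return evaluate(body, local, funcs, stack + (fname,))
    raise ValueError(f"unknown node kind {kind!r}")

funcs = {"double": (["x"], ("add", ("var", "x"), ("var", "x")))}
print(evaluate(("call", "double", [("lit", 21)]), {}, funcs))  # 42
```

Since each call chain can mention a given function at most once and expression trees are finite, evaluation always terminates; a real spec would also have to exclude unbounded looping constructs at this level.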
A signed commit can only prove that the owner of a key made a particular commit; it can't prove that they didn't make some other commit, for instance one signed with a different, undisclosed key.
One solution could be adding a `sort` argument (which takes a function that compares two `<T>` items and returns `true` or `false` depending on their order) to all functions on unordered collections of `<T>` items in which an order must be chosen, or requiring that the items in such collections implement a `TotalOrder` interface or something similar. This isn't very ergonomic in languages that don't have an equivalent of Traits or typeclasses though. In languages which permit side effects, this would include any functions that iterate over the items in an unordered collection.
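A minimal sketch of the explicit-ordering idea in Python, using a caller-supplied key function (the `ordered_items` name is illustrative; a two-argument comparator works the same way via `functools.cmp_to_key`):

```python
# Sketch: any operation that must pick an order over an unordered collection
# takes the ordering explicitly, so iteration order is a decision made at
# the call site rather than an accident of hashing.
from functools import cmp_to_key

def ordered_items(items, key):
    """Return a set's items in the order chosen by the caller's key."""
    return sorted(items, key=key)

words = {"pear", "fig", "banana"}
print(ordered_items(words, key=len))  # ['fig', 'pear', 'banana']

# The comparator style (returning negative/zero/positive) adapts via cmp_to_key:
by_last_letter = cmp_to_key(lambda a, b: (a[-1] > b[-1]) - (a[-1] < b[-1]))
print(ordered_items(words, key=by_last_letter))  # ['banana', 'fig', 'pear']
```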
The prevalence of attitudes like this in the Linux community is why the year of the Linux desktop will never come.
Imagine if your brand new refrigerator, by default, would leak toxic refrigerant into your kitchen unless you adjusted a valve just so.
This fact is not called out prominently in the manual, but if you read the fine print in the manufacturer's assembly instructions and have a working knowledge of how a refrigerator operates, you can maybe infer that this valve must be adjusted after purchase to prevent leakage.
You go on their support forum to try to figure out why your brand new refrigerator is emitting toxic refrigerant, and you're essentially called an idiot and told you don't have "basic refrigerator hygiene."
People don't want to become refrigerator mechanics. They want cold food.
From my understanding, traditional photogrammetry typically generates 3d point clouds from image pixels by correlating visual features between images; with known camera parameters, this allows the camera pose of each image to be estimated in a shared coordinate space. These point clouds are postprocessed to estimate closed surfaces, which can then be converted into textured triangle meshes and rendered using traditional 3d rasterization techniques.
Gaussian splatting represents a scene as a cloud of 3d Gaussian ellipsoids with direction-dependent color components (usually represented using spherical harmonics) to deal with effects like reflections. The "Gaussian" part is important, because Gaussian distributions are easy to differentiate, making it possible (and fast) to optimize the positions, sizes, orientations, and colors of a collection of splats to minimize the difference between the input photos and the rendered scene. This optimization usually starts from a 3d point cloud and camera poses estimated with the same or similar tools as traditional photogrammetry (e.g. COLMAP), using that point cloud to place and color the initial splats. One of the key insights of the original Gaussian splatting paper was a set of heuristics for deciding when to split a splat into smaller ones to provide higher detail over a given area, and when to combine splats into larger ones to cover uniform, low-detail areas.
Because Gaussian splats are essentially fancy point clouds, they can't currently be integrated easily into existing 3d scene manipulation pipelines, although this is rapidly changing as they gain popularity, and tools do exist to convert them into textured meshes and estimate material properties like albedo, reflectance, and so on.
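The differentiability point can be sketched in one dimension. This toy is purely illustrative (real splats are 3d ellipsoids with full covariance matrices, projected to 2d), but it shows why the closed-form gradient makes the fitting loop cheap:

```python
import math

def gaussian(x, mean, sigma):
    """Unnormalized Gaussian density at x."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def d_gaussian_d_mean(x, mean, sigma):
    """Closed-form derivative of the density with respect to the mean."""
    return gaussian(x, mean, sigma) * (x - mean) / sigma ** 2

# One gradient step nudging the mean so the density at x=2 rises toward a
# target value: the same kind of per-parameter update splat optimizers run.
x, target, lr = 2.0, 1.0, 0.5
mean, sigma = 0.0, 1.0
err = gaussian(x, mean, sigma) - target        # squared-error residual
mean -= lr * 2 * err * d_gaussian_d_mean(x, mean, sigma)
print(mean > 0)  # True: the splat moved toward the sample at x=2
```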
Where I live, self-checkouts almost universally have a "Use my own bag" button that prompts you to place your empty bag on the scale and then tares it after you press the button to continue.
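The tare flow amounts to simple bookkeeping; a rough sketch (the class and method names are invented, not any real kiosk's API):

```python
class BaggingScale:
    """Toy model of a self-checkout scale with a 'use my own bag' tare."""

    def __init__(self):
        self.gross = 0.0   # total weight currently on the scale, in kg
        self.tare = 0.0    # weight to subtract (the shopper's empty bag)

    def place(self, weight):
        self.gross += weight

    def tare_now(self):
        """Pressed after the empty bag is placed: zero out its weight."""
        self.tare = self.gross

    def net_weight(self):
        """Weight attributed to scanned items only."""
        return self.gross - self.tare

scale = BaggingScale()
scale.place(0.12)    # empty reusable bag, 120 g
scale.tare_now()
scale.place(0.50)    # a scanned item
print(round(scale.net_weight(), 3))  # 0.5
```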
Scales are like two years out of fashion. Target and Walmart are using cameras and machine learning now. Almost no false positives, but someone's going to come and take a look if you bag something without scanning it.
I noticed it once when I paid for pharmacy items at Walmart and later put that bag into the bagging area when I bought grocery items. It flagged it as an unscanned item and notified the attending associate.
All in all, it's a superior experience to waiting in regular lines or having to talk to people. Ultimately, I want to just scan and pay with my phone without the kiosk bottleneck.