A look at Unison: a revolutionary programming language (athaydes.com)
296 points by lycopodiopsida on Jan 11, 2023 | 84 comments



Looking at the comments, it seems this talk might be a better introduction than just skimming the landing page.

https://www.youtube.com/watch?v=gCWtkvDQ2ZI

I watched that in 2020 or so and was amazed at how far a seemingly simple idea could work its way into the core of a language. What I recall is this:

The main new idea is turning the codebase into a content-addressed store. This needs some new tooling, but it makes a few interesting things possible (a rough sketch follows the list):

* Having different versions of the "same" function. They are actually different functions anyway, so you can do things like partial library upgrades when there are breaking changes. Large migrations like Python 2 to 3 wouldn't take O(10yr).

* Formatting is not a thing anymore. This goes further than just having a formatter: there's really no format or name you can't personalize locally without affecting anybody else.

* Similarly, renaming is free, so you can come up with better names in the future, and changing a name just needs a tweak to the canonical name for the function.

* Functions are serializable, which makes it possible to send them to remote nodes for execution without any concerns about their dependencies: since a function is now more of a Merkle tree, if there's any difference the client would know and be able to ask for the missing dependencies.

* The tool to manage your function repo might seem like a burden to some, but once you realize that it's a first-class tool that understands the language, you can propose safe mass refactorings of code you don't own.
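Here's the rough sketch mentioned above, in Python rather than Unison (illustrative only; Unison actually hashes a normalized AST, not source text): a definition's identity is a hash of its body plus the hashes of everything it references, so renaming costs nothing and an edit is simply a different function.

    import hashlib

    def content_address(body: str, dep_hashes: list[str]) -> str:
        # Identity = hash of the definition plus its dependencies' hashes (Merkle-style).
        payload = body + "".join(sorted(dep_hashes))
        return hashlib.sha256(payload.encode()).hexdigest()

    inc = content_address("x -> x + 1", [])
    bump = content_address("x -> x + 1", [])     # a different name, same hash
    inc_v2 = content_address("x -> x + 2", [])   # an edit is just a new function
    assert inc == bump and inc != inc_v2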


> so you can do things like partial library upgrades when there are breaking changes. Large migrations like Python 2 to 3 wouldn't take O(10yr).

That wouldn't quite work, though, as Python 2 and 3 have mutually-incompatible ideas of what a sequence of bytes (string/binary) is; Unison solves the problem of references to code, but apparently not what happens when identically-typed data has different meaning between versions. At least, that's not answered in the video.


The author neglects to address the semantic reason that dependencies conflict, and focuses only on the syntactic one (i.e., a conflict of names).

We want to enforce that where, e.g., `np.array` is used, it's universally a data structure with the same properties. It isn't particularly desirable to have many versions of it floating around.

It's not clear, atm, whether this is a real solution to the dependency problem. It seems like it solves it by making it easier to isolate areas of a codebase where dependencies are frozen. This is already an option, and people don't do it because they want actual semantic consistency.

Maybe if it were easier it would be done more often, but a priori, it's a straw man of the dependency problem.


While this is definitely true in general, Unison can unlock many context-specific guarantees even for data: the types are strict and expressive and it seems to have referential transparency.


i'm not sure what you mean by saying that python 2 string is mutually incompatible with python 3 bytes

they just have different names

in early versions of python 3 the type was castrated but that got fixed in more recent versions


IIRC Jonathan Edwards had similar ideas in Subtext (https://www.subtext-lang.org/)

It was all node-based; the name was just a property, and changing it would be reflected in all instantiations.


This sounds almost like Dion. It would be cool to be able to feed the AST to a compiler backend like gcc/llvm and map to C's semantics.

* https://dion.systems/gallery.html


I stopped reading when they cried in joy at the ability to auto infer types instead of writing them. This has always led, in my limited experience, to unreadable code after 2 weeks.

Why people want to write short leetcode, I'll never get it.


Part of the cool bit about Unison is that the code is stored in a content-addressed database and is then rendered out to files to edit at your command (including with your own set of custom names for things, I think). I imagine that Unison would allow you to work on code that has entirely specified types while perhaps a coworker of yours let them all be inferred, without irritation to either of you.


Yeah, Unison doesn’t just infer the types, it will add type signatures automatically to top-level definitions.


This looks attractive at first but it has one show-stopping problem AFAICT: if it turns out that a function has a bug, you can't fix it by changing the function definition. You have to create a new function and then change every single call site from the old definition to the new one.

Maybe there is a solution to this problem, but a cursory review of the language description didn't turn one up. It seems like a Really Hard Problem because there is nothing that can structurally distinguish between a new function that is created for its own sake, and a new function that is intended to be a bug-fixed (or otherwise enhanced) version of an old one.

Maybe a unison aficionado can enlighten me?


If the change you've made did not modify the type of the function, then updating all usages of that function is as simple as running "patch".

If you run "todo" it will show you everything that needs to be updated (thought it doesn't force you to update anything).

If you change the type of the function, you'll need to update all usages one by one, or choose to leave some usages with the old function (which may be desirable in some circumstances, e.g. where the bug you fix is never reached from that caller and you want to be maximally backwards-compatible)... UCM will show you everywhere you need to do that... you can get all references to a function before you modify it as well by running "dependents my-fun".


Not a unison aficionado, but I believe this is an OK problem to have given that we already have to deal with a (weaker) variant of the same problem in every other PL: if your dependency has a bug, you/dependency vendors have to create a new version of that dependency and then change every single module that depends on it. The scale might be different, but the nature of this Really Hard Problem appears to be the same in both cases.

Also, it seems like a solution to the really nasty problem of changing transitive dependencies.


Well, you can say the same about git right? If you need to change an old commit, then you'd need to re-create all the commits that depend on it. Which is true, but something we leave to tools. In git we can just rebase or absorb.

In terms of how to distinguish functions, you could just have some suffix, like foo@base and foo@local.

I played a bit with Unison before, but I don't think I tried this. I'm sure it's a non-issue though because it's not hard to imagine how tools would help here.


> Well, you can say the same about git right?

The difference is that in git the repo structure is metadata. The data (the contents of particular commits) is completely independent of the repo structure and so you can change the latter without changing the former. Furthermore, when you try to build a program, you only ever deal with a single commit. You never build a program from multiple commits at once. Program linking has nothing to do with the repo structure.

But in unison, the "repo structure" is no longer metadata, it's part of the program. It is now part and parcel of your linker. So when you change the "repo structure" you are changing the behavior of the program.

Git works precisely because it separates data (program) from metadata (repo structure). When you glom them together the way Unison does, you get some gnarly problems to which I don't see any immediately evident solutions.


There is a solution to this problem of content-addressable structures. I was designing (on paper and in my head) the same general approach 10 years ago, borrowing a sort of ~homoiconicity from Lisp (code and data are just tuples of structured hashes) as well.

The problem, explicitly and concisely, is this: referential elements of structures introduce cascading re-hashing.

There is, however, a solution: add a layer of (invariant) semantic indirection for referential types (as opposed to inlined, embedded ‘value’ tuples). With this approach we obtain two benefits: ‘object’ re-evaluation (rehash) is only necessary when the object's own structure changes (not its referents); and in an object graph, cascades will only impact the immediate referents’ ‘value’ mappings.
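A loose Python illustration of the difference (a hypothetical structure, not an exact model of the above): with direct content addressing, a dependent embeds its referent's hash, so editing the referent cascades upward; with an invariant semantic ID in between, only the ID-to-hash mapping moves.

    import hashlib

    def h(s: str) -> str:
        return hashlib.sha256(s.encode()).hexdigest()

    # Direct addressing: app embeds util's hash, so editing util changes app's hash too.
    util_v1 = h("util body v1")
    app_direct = h("app body, calls " + util_v1)

    # Semantic indirection: app references the stable ID "util"; editing util
    # only updates the registry entry, and app's hash is untouched.
    registry = {"util": h("util body v1")}
    app_indirect = h("app body, calls util")
    registry["util"] = h("util body v2")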

caveat that I felt extremely queasy seeing this post on hn — a session of self criticism ensued regarding the topic of “laziness” :D — so haven’t read beyond the upsetting intro to Unison.


Here's a relevant excerpt from "The big idea" article linked in this one:

> In Unison, changing what names are mapped to which hashes is done with structured refactoring sessions, rather than the typical approach of just mutating text files in place and then fixing a long list of misleading compile errors and failed tests. A Unison codebase is never in a broken state, even midway through a refactoring.

> If your codebase is like a skyscraper, the typical approach to refactoring is like ripping out one of the foundational columns. The skyscraper collapses in a pile of rubble (a long list of misleading compile errors), and you attempt to rebuild a standing skyscraper from that rubble.

> The Unison approach is to keep the existing skyscraper around and just copy bits of it over to a new skyscraper, using a magic cloning tool. The codebase is never in a broken state. All your code is runnable. It's just that, midway through, you may not have fully constructed the second skyscraper and cut over all the names. Unison keeps track of that and gives you a tidy todo list to work through.


I’m no aficionado but I believe this is handled by the code manager and not the language

https://www.unison-lang.org/learn/ucm-commands/#moveterm


If there's tooling that can do this efficiently it sounds like it can be a win since all changes in implementation details become explicit and can potentially be more easily tracked.


The UCM tool propagates the changes for you automatically. It also creates a patch that you can distribute with your release to update any dependents automatically.

If the change can’t be made automatically because it breaks stuff, then UCM guides you through propagating manually.

You’d have to do this manual propagation in any language. Unison is a little nicer about it than most languages, since it just tells you what needs an update (and will suggest an order in which to make the updates) instead of giving you type errors or test failures.


It's trivial to find every possible call site of a function in a Unison codebase.

I agree that patching other people's code is not always possible, or easy. On the other hand, you can patch your own library using indirections to the new version.


I worked using an object oriented database, where all code was stored as the AST as versioned objects in the database. Problems that I remember that might be relevant:

1. To “install” code into another instance, you usually exported your code as a massive text file from your dev server as source code, then import the source code as text into your production server. You could do individual functions, but that had a high risk of ending up with a pet with different code than your dev server. Version control was a mess.

2. We just ended up putting source code (without comments) directly onto production servers. That happened to turn out well for the client, and turn out badly for the development company (who owned the code).

3. Syntax or semantic changes to the language were a bitch when the database version needed an upgrade. Any existing code was migrated, from the old objects to the new objects, which sometimes caused problems.

4. Limited IDE. You had to use the IDE that came with the database, and various code transforms, global refactors, or normal development actions were unavailable. The primary other option was to export a complete syntax tree as text, make changes, then re-import (uggggh).

5. Debugging was hell.

Obviously Unison is a very different beast, but some of the above problems were a result of having the AST be the source of truth: problems which have the potential to apply to Unison too.


1. A Unison codebase is a persistent shared data structure, which should be easy to integrate deeply with something like git or borrow ideas from it. Conceptually, it's just like a git repository.

2. It's possible to add an additional compilation step to generate (potentially obfuscated) machine code for a Unison AST. The same point applies to Javascript/Python or JVM bytecode.

3. Data migrations are an open question, yes. Still, Unison is a pure functional programming language and might do this much better than an imperative OOP language. Especially if you model your data as a shared persistent data structure too (like Clojure).

4. This is true, all the textual tools go out of the window without some kind of a translation layer. There are possible workarounds, like exposing the codebase as a FUSE filesystem. On the other hand, codebase-as-AST makes it trivial to do code transforms, global refactors, etc., in a Lisp-style macro way, with its pros and cons.

5. I agree that debugging and execution observability is a must. The idea here seems to be "let's have a Haskell-like language that does not need much debugging in the first place". I am a bit skeptical of this approach in general, but it does make a lot of debugging go away.

Thanks for sharing your experience, many of the points are valid for Unison.


Related:

The Unison language – a new approach to Distributed programming - https://news.ycombinator.com/item?id=33638045 - Nov 2022 (113 comments)

Unison Programming Language - https://news.ycombinator.com/item?id=27652677 - June 2021 (131 comments)

Unison: A Content-Addressable Programming Language - https://news.ycombinator.com/item?id=22156370 - Jan 2020 (12 comments)

The Unison language - https://news.ycombinator.com/item?id=22009912 - Jan 2020 (141 comments)

Unison – A statically-typed purely functional language - https://news.ycombinator.com/item?id=20807997 - Aug 2019 (25 comments)

Unison: a next-generation programming platform - https://news.ycombinator.com/item?id=9512955 - May 2015 (128 comments)


My experience with Unison so far has been limited to learning the basics and doing a few exercises on Exercism.io (big kudos to whoever added the Unison track there, it's a really great way to get your hands dirty with a new language!).

Coming from a background in Haskell and other functional programming languages, I found many things familiar. Unison can be an improvement and an opportunity to do better this time.

What I still find lacking:

- developer experience and tooling: there's no language server (is it even possible for Unison?) or an easy development environment. I found myself confined to the usual vim+tmux+"repl" combo. The VSCode plugin that I tried was pretty basic, even though it showed the structure of the codebase and libraries nicely. Even Haskell is more pleasant with haskell-language-server these days.

- discoverability of library functions: skimming through modules, reading docs, trying to figure out what to use. Type-aware search (like Hoogle) is a great idea, but it does not replace the ergonomics of having context-specific completions. Having dot-namespaces relevant to a specific object (in OOP/Go/Rust style) makes this so much easier for me.


Good news, there is now a language server!

https://www.unison-lang.org/learn/usage-topics/editor-setup/


> Unison does away with builds. Completely!

> It has a canonical representation for all code,

Isn't generating that canonical representation the build step?

> The important thing is that now, I can actually compile and run this program

Now I'm confused whether there is or isn't a build step.


You can think of it as an incremental build step, but the crucial thing that Unison's content addressing gives you is that the canonical representation only needs to be generated once, ever.

So for example if someone publishes a library, no new build step would ever be required for consumers of that library.

Even with CI/CD there would be no build step because the very act of merging one's code into another codebase operates on canonical representations and so they never need to be regenerated.


Not once ever. Once per toolchain version and unique argument set. Perhaps the toolchain starts by attempting to never offer arguments, and perhaps it has lofty goals to release on day 1 with a perfect compilation model. All of my experience suggests these constraints are untenable, so ever is somewhat of an embellishment.

The model also precludes cross-unit optimizations, so there's an upper bound on performance that leaves it in the middle tier without the model being broken (given things we've seen before).


> You can think of it as an incremental build step, but the crucial thing that Unison's content addressing gives you is that the canonical representation only needs to be generated once, ever.

That's not very different from using ccache. Claiming that there are no builds is simply not true.

> So for example if someone publishes a library, no new build step would ever be required for consumers of that library.

Does that require trusting compiled blobs from 3rd parties? That could make it trivial to inject backdoors.


Could this be done in other languages too?


An issue with many languages is the presence of mutually dependent parts of code, because how do you generate a content hash for something whose constituent parts' content hashes depend on it? Merkle trees can be generalized to DAGs, but not to directed graphs with cycles.

As far as I understand, Unison's syntax has been designed specifically to avoid such cycles, although it supports recursive functions and inductive data types which refer to themselves such as lists. A self-cycle can be handled, but I'm not sure if, and if so how, it would support mutually recursive functions or data types. I suspect they're not supported for the reason that content-hashing becomes non-trivial.

If you wanted to generalize the solution to arbitrary languages, you would need to somehow sever the cycles, which is likely to require a specific solution for each language, or come up with another solution to content-hashing which does not have the problem of Merkle-DAGs. Either way, you are looking at something non-trivial compared to having a language which simply forbids the cycles.
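A small Python sketch of why cycles break the naive scheme (illustrative only): the obvious recursive hash needs the dependencies' hashes before it can hash a node, which has no base case on a cycle, so a mutually recursive group would have to be hashed as a single unit instead.

    import hashlib

    def node_hash(name, defs, deps, seen=()):
        if name in seen:
            raise ValueError(f"cycle through {name}: no well-defined per-node hash")
        child_hashes = [node_hash(d, defs, deps, seen + (name,)) for d in deps[name]]
        return hashlib.sha256((defs[name] + "".join(child_hashes)).encode()).hexdigest()

    defs = {"even": "n -> ...", "odd": "n -> ..."}
    deps = {"even": ["odd"], "odd": ["even"]}   # mutually recursive
    # node_hash("even", defs, deps)  # raises: the pair would have to be hashed together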


Unison does allow mutually recursive functions.

https://www.unisonweb.org/docs/faq/#how-does-hashing-work-fo...


Kind of. For example, in Java, build tools will normally cache the hash of the source file that generated a class file. That class file is a cached version of the source, basically. If you could keep both the class file and the hashes of the source around, you could avoid ever re-compiling the source unless you've changed it. This is just incremental compilation, in essence.

However, you'll need quite a lot of tooling around this simple idea: where to store the class files and the hashes, should you commit it into source control, how to distribute the cache, can this be made multi-platform so it works on all developers' machines?

All solvable, but Unison already does all that, and more, with its single tool, the `ucm`.


This is one of things Bazel tries to do.


I had the same confusion and re-read the article. Best I can understand, what they're trying to say is that it could be used without a build step in the sense that you could just manipulate the database's AST-like structure directly instead of compiling code, and it would be basically incrementally "compiling" each unit as you edit instead of interpreting a code file. But of course that's not a very ergonomic way to do it so there is a language on top, that they used as an example to show off the language.

I can see this being useful for implementing alternative editing interfaces besides just writing code textually. For example, flow-based visual programming or something like that. Adding and connecting nodes in a visual interface would each be small updates to the database, and then there would never be a "build step" because the database you've been editing is already the AST that could either be interpreted or compiled. So it only really does away with the parser, but that doesn't seem revolutionary to me, other languages can already do that. I might be missing something still.


JavaScript doesn't have a build step either. sounds like it is interpreted and you reference methods by hash instead of by name. sorta?


JS doesn't type check, generate an AST, or even minify anything until the user runs the code. If you just write JS in notepad then immediately send it over to your users, then yeah, there's no build in JS.

But if your idea of writing software involves:

* type checking, linting and otherwise ensuring properties of the code before even running it.

* testing code before deploying it.

* turning it from text into a representation that's more compact and safer to run.

* generating documentation.

Then you'll definitely need a build, and a very complex one for that matter, in JS world. Have you ever seen a JS project without any sort of build in a professional context?

And you'll need a big IDE to help you make sense of anything because JS doesn't even have a "compiler" or "helper tool" that'll give you any help to navigate and understand the code base.


> Have you ever seen a JS project without any sort of build in a professional context?

Imagine a world that pre-dates typescript, node, webpack, browserify, gulp, grunt. You were lucky if you had a server framework that concatenated files for you, otherwise you were stuck with 6000+ line "global.js" files. Then you had to fix merge conflicts in that file because SVN.

Whenever someone complains that frontend JS is too much, I just think back to how it used to be.


I think the compiled form is associated with the hash of the source. It can’t be machine code though.


It’s bytecode


Does it require a JIT then, or is it more like LLVM?


I'm not an expert at JS internals, but I'm under the impression that if you call a function ten times, its code has to be interpreted ten times, so you get ten, temporary, abstract syntax trees. Such is life in interpreter-land--the code is authoritative, the instructions are ephemeral.

My impression of Unison is that if you want to call a function ten times, you just need to evaluate that code into an AST once. Once you have the hash of that AST, you can delete the code and still call the function 9 more times without having to bother with building the tree each time.

I don't know what to call it, but I don't think it's interpreted, because you don't need the code to run it (except the first time, which is how you find the hash of the function).


> I'm not an expert at JS internals, but I'm under the impression that if you call a function ten times, its code has to be interpreted ten times, so you get ten, temporary, abstract syntax trees.

This is not the case even in an interpreted language. You only need to build an AST once, at parse time. In JavaScript the AST is then compiled into byte code or machine code depending on the implementation.


Sorry, let me rephrase.

In JS, if 1 developer and 9 users with a total of 10 separate browser instances all call the same function, the code will be parsed 10 times.

In Unison, the AST is generated by the developer, cached, and the 9 users can call it by hash (assuming they have access to that database where the AST lives) without reparsing the code.


So the AST is transferred over the wire? How is it serialized?


It's in a SQLite database. Unison doesn't prescribe how those are passed around, but since everything is content-addressed there are never any name collisions. You can always merge two of them into one without having to resolve conflicts and without breaking anything. Doing so just expands your codebase to include more code than you're currently calling.

The way I intend to use it is that users periodically assemble the codebases from all of the developers that they care about and merge them into a single codebase. This can be very latency tolerant (like, sneak-a-thumb-drive-across-the-border levels of latency tolerant) because it's not like you're waiting around for transport every time you want to do something, just every time you want to upgrade.

Then, when you decide to use the new version of something (the developer would communicate a new hash to you), you just start calling the new hash and now you're using the new software. If something goes wrong, you just go back to calling the old hash. No implicit trust (e.g. no need for SSL, DNS, package names), atomic upgrade/downgrades, no merge conflicts.
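A toy version of why that merge is conflict-free (Python sketch, assuming each hash really is derived from its definition): a codebase is just a map from hash to definition, and two such maps can only agree wherever their keys overlap, so a plain union never needs conflict resolution.

    mine = {"hash_inc": "inc = x -> x + 1"}
    theirs = {"hash_inc": "inc = x -> x + 1",      # same hash implies same definition
              "hash_dbl": "double = x -> x * 2"}

    merged = {**mine, **theirs}   # plain union; nothing to resolve
    assert merged["hash_inc"] == mine["hash_inc"]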


It’s a binary-encoded Merkle tree


> As I write this in early 2023, finally, I think it has reached a level of maturity where it’s actually usable for real work, so I decided to write about my impressions using the language for something non-trivial.

What non-trivial thing did you use Unison for? Are you allowed to write about it?

Is it the examples in this Unison introduction article, or something else?


It will be interesting to see how it handles transitive dependencies through a supply chain you don’t control. Same issue as “this 3rd party npm package depends on an insecure version of X” but at a function level.


Integrate this with IPFS (or other DHT) and you have a distributed code database with the possibility of massively parallel builds.

(Assuming I understand the claims properly)


Their Slack is on the free tier, so I couldn't find a direct quote within the last 90 days, but this comes up from time to time. Last I heard it's not on the roadmap, but there's interest.

I didn't think of it as parallel builds so much as a big patchwork quilt where some patches are data and others are code, and you can kind of wander around and see which data is related to which other data through which functions.


I’m working on a low-code project (https://github.com/yazz/yazz) that already uses content-addressable source code stored in IPFS, so it can be done.


I started designing my own functional dataflow language to simplify distributed programming. It looks like I can stop.

Does Unison execute its DAG of functions concurrently by default?


FWIW I'm acquainted with a few folks on this team and it's the biggest concentration of talent I've ever seen.


Why do we need a new language to do this?


Finally, a programming language that can be used in Tarrey Town.


> tests that run every single build even when nothing checked by them has changed

Oh yes, because we can always be sure that a change in one place will never cause a test in another place to fail.


Not when it's referentially transparent. In that case, there's no need to run the tests if a function or its children haven't changed. Of course, if that's the case, the tests are probably fast enough that running them won't matter much.


If you're working with pure functions, you can always be sure of this.


Not on real hardware.


Are hardware-level bit flips actually a part of your threat model?


That's not the only kind of way in which real hardware falls short of the theoretical. I've been burned by this at least a half dozen times in my career.


Unironically yes. Memory and CPU errors happen.


Not to mention things like caching and consistency bugs, either in hardware (e.g. CPU errata) or the software stack which is implementing the abstraction.

Or stuff like reordering a piece of code which should be a no-op, but causes two expensive operations to be run simultaneously on the same die at runtime, resulting in a CPU under-voltage event which triggers a reboot. I ran into this exact problem about 6 months ago. We could argue about whether the CPU, motherboard, or power supply vendor was out of spec, but in the real world with deployed hardware that doesn't matter. You revert and run the code that doesn't trigger reboots during high workloads.

Having a test suite catch this before you deploy to production is a good thing.


Or just use your existing language of choice, but incorporate Nix (which is based on the same idea), or go all-in and use NixOS (or Guix, very honorable mention) to get the same ideas all the way to the bare metal of the hardware (which IMHO is where the reach of this idea belongs, it is that good)

For example, there are tools that convert traditional lockfiles to nix flakes.

Nix is more intimidating than it actually is. It’s basically JSON with pure functions and some syntactical-sugar shorthands that end up being very nice.

“Flakes” are basically lockfiles for Nix. Before Flakes, Nix still used hashed dependencies though. So everything you install with it is more or less guaranteed to also automatically pull in everything it needs to run, without conflicting with the needs of anything else.


As someone who could reasonably be called a Nix zealot, who has just had to spelunk through far too many layers of abstraction and infrastructure to get some custom TLS certificates recognized on a macOS+Nix setup, I welcome attempts to simplify the stack rather than wrap everything.

Nix is great when you're on the happy path and its ability to offer a near-perfect reproducible environment right off the bat is unparalleled, I can't think of anything else that does nearly as well. But there are plenty of ways to fall off of the happy path, and when that happens it can be a fairly lonely or worrying experience.

Unison looks like it takes the whole development experience and owns it, and that allows for faster experimentation across layers of the stack. I think the implementation is pretty neat and I really hope they continue working on it.


Removing layers of abstraction is necessary for building greater things on top. Not that Nix isn't better than what exists currently, just ideally the OS/language disconnect can be resolved, in line with Dan Ingalls. TBH I'm not really sold on Unison as a language, but the idea is certainly intriguing. Would be a lot more interesting if it was as at home in the browser as it is on unix - as it stands I don't see a real use case for it. Perhaps a killer app will come along, but I think for a language to succeed with this idea it will have to be a lot more pragmatic (aka further from Haskell). Time will tell.


i upvoted, love the reference to os/language disconnect. But is it historically true that we must remove layers to build greater things?


can someone tell me what this is? this instinct to "one-up" posts?

"forget that, try this!"

"that's nothing, look at my favorite thing!"

this is rampant all the time in all posts no matter what they're about on this site and I do not understand it.

Root comments reminiscing about The Old Days also steal thunder from the topic and I don't understand why people do it.

start a post about Your Favorite Thing if you want to go on about it; please stop hijacking other threads because you wanna talk about Your Favorite Thing. let people talk about This Thing...


That’s like the second most annoying thing people do around here. The most annoying thing is…

Just kidding :)

I think it is because it feels at least vaguely like contributing, but also allows the poster to show off a bit by claiming some deeper knowledge/better tool or something like that.

I dunno. The threaded nature of the conversations means that it just forms its own tangent. It might steal a little oxygen from the on-topic conversation, but it seems mostly harmless otherwise.

On good old fashioned phpBB style boards this sort of thing would never happen, because…


At least part of it is trying to save people work, I imagine. "That's a cool tool, but there's an existing, well-supported one that already solves those problems."

Of course, the problem here is that Nix may be similar to this language, but they solve different problems. So I imagine the writer is just mistaken, and it comes off as one-upsmanship.


Had the comment started with “You can find similar ideas in Nix” I think it would have been well-received. Tone and intent matter.


Fair enough regarding tone; unfortunately too late to edit (man, I wish they’d increase the time limit on that)


> can someone tell me what this is? this instinct to "one-up" posts?

Showing better solutions would be helpful. It would inform readers about an alternative that deserves consideration.

But the "use nix" and "RIIR" posts instead are just fanboyism: HN readers already know that nix and rust exist.


> can someone tell me what this is? this instinct to "one-up" posts?

That’s not at all what I said. I just said that the fundamental idea here is already available to everyone at the component level, for your entire system. Unison just takes this idea down into the language: a new language that you’d have to convert 100% of your work into in order to benefit, and then you’d STILL suffer, outside that ecosystem, from the same things that Nix solved.


It seems like Unison can reasonably memoize computations at a much more granular level than Nix. It bears some commonality with Skiplang.


I agree and I think it's potentially huge.

Think of how much money is spent on compute for CI pipelines which are calculating 99% of the same values today that they were yesterday. Under this model they'd only have to compute the parts that changed.

And it's not just saving on compute, very fast test results is a dev force multiplier.
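A rough sketch of that kind of caching (hypothetical, not Unison's actual mechanism): key each test result on the hash of the test plus the hashes of everything it transitively depends on, and a CI run only re-executes the tests whose key is new.

    import hashlib

    results = {}  # cache: key -> test outcome

    def run_cached(test_body: str, dep_hashes: list[str], run):
        key = hashlib.sha256((test_body + "".join(sorted(dep_hashes))).encode()).hexdigest()
        if key not in results:        # only re-run if the code under test changed
            results[key] = run()
        return results[key]

    ok = run_cached("check: inc 1 == 2", ["hash_of_inc"], lambda: True)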


> Nix is more intimidating than it actually is.

The intimidation is about learning it: there is (AFAIK) no existing guide that is good enough to make it less intimidating. I.e., it shouldn't be intimidating once someone writes a good intro to it.

Does anyone know a good introduction to Nix that uses the new, better 'flake'-based tooling? Just something simple, like installing a single package or setting up a Docker container.


Until Nix gets some kind of concept approximating interfaces such that the compiler can tell me "hey, you're missing this field to meet this interface" or "the shape of the argument for this thing is blah", it will remain a horrible experience in the same way that an exotic Ruby codebase is a horrible experience.

Everything is based on convention or code you have to go spelunking for.


When you use your nix-wrapped language of choice, do you nixify your unit tests so that their results are in the cache and don't need to be rerun, except partially when necessary?

AFAIK that's atypical, but if you've got it working please share a link.


Elixir (the language I primarily use) already only runs the unit tests that cover the code that has changed.

But you’re right in that that is a great idea.


Nix does not implement the same features of Unison at all.


Not at the language level but at the component or binary/lib level.

And without requiring you to switch all of your work to this language in order to gain the benefit.



