
Unison: A Content-Addressable Programming Language - sillysaurusx
https://www.unisonweb.org/docs/tour/
======
scribu
Discussed recently:
[https://news.ycombinator.com/item?id=22009912](https://news.ycombinator.com/item?id=22009912)

------
macawfish
I'm hardly halfway through this talk:
[https://www.youtube.com/watch?v=IvENPX0MAZ4](https://www.youtube.com/watch?v=IvENPX0MAZ4)

But I'm already completely convinced that _every_ language should have these
features! Especially languages used for the web.

------
d--b
I don't really understand the benefits of this.

One benefit I can think of: avoiding binary code duplication, because every
time you see a hash you've already come across, the compiler can jump to the
already-defined code. But that sounds like a lot of jumping around.

The website says "it eliminates builds and most dependency conflicts, allows
for easy dynamic deployment of code, typed durable storage, and lots more."
but I don't understand this.

If your code says "I depend on that hash", then the runtime needs to locate
where the binary code that corresponds to that hash is located. And that's a
dependency problem to resolve.

If someone fixes a bug in a dependency, your program may not be able to locate
the hash anymore. You have to "re-build" your hashes every time a dependency
changes.

Can someone write the benefits more clearly?

~~~
gridlockd
> If your code says "I depend on that hash", then the runtime needs to locate
> where the binary code that corresponds to that hash is located. And that's a
> dependency problem to resolve.

It's not a dependency _conflict_ though.

> If someone fixes a bug in a dependency, your program may not be able to
> locate the hash anymore. You have to "re-build" your hashes everytime a
> dependency changes.

Again, that's not a conflict. A conflict goes like this: Dependency A has a
breaking change, but Dependency B transitively depends on Dependency A as
well, so you cannot update your own code until Dependency B also updates. Even
if A and B are updated, you are prevented from adding any dependency that
hasn't updated yet. You can't mix and match to use the old code in one place
when you need it.
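A hedged sketch of the idea in Python (the store, names, and hashing scheme here are my own toy model, not Unison's actual implementation): because each version of a definition is keyed by the hash of its content, old and new versions can coexist in the same store, so a transitive dependency pinned to the old hash keeps working while new code uses the new hash.

```python
import hashlib

# Toy content-addressed code store: each version of a definition
# is keyed by the hash of its source, so versions never collide.
store = {}

def put(source):
    """Store a definition under its content hash and return the hash."""
    h = hashlib.sha256(source.encode()).hexdigest()[:12]
    store[h] = source
    return h

# Two incompatible versions of "dependency A" coexist under distinct hashes.
a_v1 = put("def greet(): return 'hello'")
a_v2 = put("def greet(name): return 'hello ' + name")  # breaking change

# "Dependency B" can keep referencing a_v1 while your own code uses a_v2.
# Neither reference is invalidated by the other -- no version conflict.
assert a_v1 != a_v2
assert len(store) == 2
```

In a hash-addressed world, the diamond-dependency conflict dissolves into two independent references; the cost, as the parent comment notes, shifts to managing which hash each piece of code points at.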

This wouldn't be such a problem if programmers didn't break interfaces for
dumb reasons all the time, but they do, so lots of people just run older
versions of the software.

------
pcr910303
A TLDR from the past discussion[0] for the tour[1] based on my understanding
(please fix me if I’m wrong):

Unison is a functional language that treats a codebase as a content-
addressable database[2] where every ‘content’ is a definition. In Unison, the
‘codebase’ is a somewhat abstract concept (unlike other languages where a
codebase is a set of files) into which you can inject definitions, somewhat
similar to a Lisp image.

One can think of a program as a graph where every node is a definition and a
definition’s content can refer to other definitions. Unison content-addresses
each node and aliases the address to a human-readable name.

This means you can replace a name with another definition, and since Unison
knows which node a human-readable name is aliased to, you can find exactly
every use of a name and replace it with another node. In practice I think this
means very easy refactoring, unlike today’s programming languages where it’s
hard to find every use of an identifier.
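A minimal Python sketch of the model described above (the names and data structures are mine, assumed for illustration, not Unison's real format): definitions live in a table keyed by content hash, while human-readable names live in a separate map, so a rename only edits the name map and never touches the stored code or the hash-based references to it.

```python
import hashlib

defs = {}    # content hash -> definition source
names = {}   # human-readable name -> content hash

def add(name, source):
    """Content-address a definition and alias a name to its hash."""
    h = hashlib.sha256(source.encode()).hexdigest()[:12]
    defs[h] = source
    names[name] = h
    return h

add("square", "x -> x * x")

# Renaming is a one-entry edit to the name map; the definition itself,
# and every hash-based reference to it, stay untouched.
names["sq"] = names.pop("square")

assert "sq" in names and "square" not in names
assert defs[names["sq"]] == "x -> x * x"
```

This is why "find every use of an identifier" becomes trivial in such a model: uses point at hashes, not at spellings, so the spelling can change without touching any use site.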

I’m not sure what the practical benefits are, but the concept itself is pretty
interesting to see. I would like a better way to share a Unison codebase
though, as it is currently only shareable in a format that resembles a .git
folder (git itself being another CAS).

[0]:
[https://news.ycombinator.com/item?id=22010510](https://news.ycombinator.com/item?id=22010510)

[1]:
[https://www.unisonweb.org/docs/tour](https://www.unisonweb.org/docs/tour)

[2]: [https://en.wikipedia.org/wiki/Content-addressable_storage](https://en.wikipedia.org/wiki/Content-addressable_storage)

------
choeger
Very interesting approach. One thing comes to mind though: in a large
codebase, patching a fundamental definition (say, map or foldl) will take a
long time, right?

~~~
0xCMP
It'd actually be fast, because only the _references_ to the old code get
updated, wherever they are; everything else keeps using its existing
references.

[https://www.unisonweb.org/docs/tour/#names-are-stored-separately-from-definitions-so-renaming-is-fast-and-100-accurate](https://www.unisonweb.org/docs/tour/#names-are-stored-separately-from-definitions-so-renaming-is-fast-and-100-accurate)

~~~
gryfft
Processing the implications of that was the point, while reading this, where I
got _really_ excited to try this out. I wasn't expecting to see so many
curiosity-piquing features.

------
fnord77
> the technology for creating software should be thoughtfully crafted in all
> aspects.

Lost me right here. Fetishizing software "craftsmanship" isn't going to make
the software run better. It might make it more maintainable. But even then
it's better to have a well-designed, efficient system with poorly crafted
components than artisanal for-loops.

~~~
madsbuch
Yesterday's artisanal for loops are today's functional combinators, widely
supported in mainstream programming languages.

