Gx: A package management tool built around IPFS (github.com)
247 points by jamesdwilson 551 days ago | 43 comments



Developer here! I didn't expect to wake up to my GitHub feed being destroyed by this. gx was originally a response to my frustration with Go's package management story, so I wrote gx and gx-go to handle Go dependencies in a nice(ish) way. We made the decision to keep gx itself as agnostic and extensible as possible: package manager logic is largely the same across the board, so why not provide a nice base layer for people to build on? Since writing gx, we've started using it to manage code for the IPFS project (which gx in turn uses to fetch packages).
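
For the curious, the basic Go flow looks roughly like this (the dependency hash is made up, and flags are from memory, so check the README):

    # inside your package's repo (hash made up; check the README for exact flags)
    gx init                    # create a package.json manifest for this package
    gx import QmSomeDepHash    # add a dependency by its IPFS hash
    gx install                 # fetch all deps from IPFS into the vendor dir
    gx-go rewrite              # rewrite Go import paths to the gx paths
    gx publish                 # hash this package and publish it to IPFS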

If you're interested in helping out, stop by #ipfs and #gx on freenode!


jbenet here. We're all very excited about the dev interest in gx and IPFS, thanks! We did not expect attention on it this soon! Please check it out, try it, and give us feedback!

Please know, though, that things are still super early and very rough! We do not think our package management efforts are UX-ready for end users who want something strictly better today. But they are ready for early adopters who want to think about them or hack on them with us. In fact, we are now self-hosting go-ipfs in gx! https://github.com/ipfs/go-ipfs

Something to bear in mind: we are building infrastructure you can rely on long term, with decentralization, flexibility, and futureproofing from the ground up. We want our protocols and tools to survive independent of fragile organizations or fragile routing. We can't take many shortcuts (like depending on centralizing agents), because we want something rock solid that will last for decades. This means it takes us a lot longer to build high-perf, high-quality UX, because there's a lot more to do. More work, but it is all achievable. The rough edges will disappear as we improve the tooling. Even today, gx is remarkably simple and powerful, and already does a lot for dependability.

It's worth mentioning other package management tooling we're working on too:

* NPM + IPFS - https://github.com/diasdavid/registry-mirror and other repos. This is an effort to improve how NPM works with IPFS

* pacman + IPFS - https://github.com/ipfs/notes/issues/84 - this is a super rough way to show how to add IPFS support to a package manager through FUSE. It's not the best way, but it works! (there's a sketch after this list)

* also, we <3 Nix, and there's cool stuff coming in the future there :)
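
To give a flavor of the pacman + FUSE hack above (hash and paths purely illustrative; the linked issue has the real notes): an IPFS FUSE mount makes a repo snapshot look like a local directory, so pacman can use it as a file:// mirror:

    # mount /ipfs and /ipns as a filesystem
    ipfs daemon --mount &
    # point pacman at a repo snapshot living under /ipfs
    echo 'Server = file:///ipfs/QmRepoSnapshotHash/$repo/os/$arch' \
        >> /etc/pacman.d/mirrorlist
    pacman -Sy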

We have LOTS in development and in store for the package manager communities. It's a very exciting time. Please join us at https://github.com/ipfs/ipfs and #ipfs and #gx on freenode IRC. And, if anyone wants to work on "Hypermodular Programming" with us, see https://github.com/jbenet/random-ideas/issues/27 and ping me :)


Great work on this stuff.

Question about package managers built on ipfs, and ipfs in general: how do you ensure the packages/files you depend on will always be available? Is the idea that there would be at least one entity pledging to host every package for eternity, or if you depend on obscure packages would it be wise to run your own mirror of the packages you use? Or something else?


Thanks! Many answers:

- storing is super cheap, and seeding is not very expensive. I think the entire npm registry is <1TB?

- (soon you'll be able to ship entire registries on USB keys!)

- many individuals and orgs should keep/replicate what they depend on

- tools like ipfs-cluster will help organize nodes like a RAID array

- you can pay services to back things up (on whatever hosted infrastructure is in vogue)

- you can pay _the network_ to back things up -- see http://filecoin.io


When I want files available on IPFS, I just pin them to the appropriate machines.

    ipfs pin add [key]
And my material now has another seed machine. It works, and it's clean and efficient. One thing: I share everything in a directory, so the root directory name gets clobbered with the multihash, while all the files in it retain their pretty names.
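
To make that concrete, the directory gets added recursively first; something like this (hashes made up):

    ipfs add -r my-packages/
    # added QmFileHash1 my-packages/foo-1.0.tar.gz
    # added QmFileHash2 my-packages/bar-2.1.tar.gz
    # added QmRootHash  my-packages
    ipfs pin add QmRootHash   # pin the root; files keep their names under it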


Especially after all the NPM drama, I'd be really interested in a distributed package manager that:

(1) Validates that the specific revision of a package you're dependent on hasn't changed (i.e. checksum of the package stays the same), and

(2) On upgrade, validates that ownership of the package hasn't changed (i.e. package is signed by the same person), or that any change in ownership is authorized/authenticated and traceable to the author of the previous revision. Some sort of strong notion of identity would be essential here, too-- maybe something along the lines of Keybase.

This looks like it does #1 (by virtue of being based on a content-addressable filesystem), but it doesn't look like it tries to handle #2... objects and repos just "are"; they don't seem to carry ownership information with them. If I want to validate that I am, in fact, getting Express.js from the Node.js Foundation and not some complete stranger, I'm forced to trust whatever website I pulled the package or repository hash from.

(This identity/authentication issue isn't new -- for example, Apt solves it with GPG signing of repositories, and assumes that the repository takes responsibility for the safety of the content it publishes. It doesn't seem to be something these newer (less strictly managed) tools are concerned about, though -- npm, bower, homebrew, etc.)
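
Just to sketch what #2 could look like on top of content addressing (purely speculative; the filenames and hash are made up): sign the immutable package hash out-of-band and ship the signature alongside it:

    # publisher: sign a reference to the immutable package hash
    echo /ipfs/QmExpressPackageHash > express-4.13.4.ref
    gpg --detach-sign express-4.13.4.ref
    # consumer: check the signature against a key they already trust,
    # then fetch by the verified hash
    gpg --verify express-4.13.4.ref.sig express-4.13.4.ref
    ipfs get $(cat express-4.13.4.ref)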


Adding a good way to do signed packages and repos is a very high priority for us. If you have ideas on that, I'd love to discuss them here: https://github.com/whyrusleeping/gx/issues/47

Thanks!


Doesn't npm have something like Composer's composer.lock?


I'm really hoping someone adds an IPFS binary cache system for Nix/NixOS.

(Nix already packages tons of Go/Python/JavaScript/Rust libraries and binaries)


I've wanted exactly this for ages. Would make reproducible builds and distributions simple.



We also need an IPFS source cache, not just a binary cache. It would be an immutable source mirror for all of our dependencies, allowing reproducible builds from source (which is sometimes required, since not all compiler flags are specified by default). I've had tons of problems building from source in Nix because upstream source URLs break.
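
Even without deeper Nix integration, you can approximate this by hand today (hash illustrative): pin the exact tarball a derivation expects, and any IPFS node can serve it when the upstream URL dies:

    ipfs add hello-2.10.tar.gz
    # added QmTarballHash hello-2.10.tar.gz
    # later, on any machine, even if the original download URL is gone:
    ipfs get QmTarballHash -o hello-2.10.tar.gz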



> Nix builds packages in isolation from each other. This ensures that they are reproducible and don’t have undeclared dependencies, so if a package works on one machine, it will also work on another.

This is something we should have all moved to years ago. Our drives are big enough that we can move away from most shared libraries. Though I am interested in how they handle something like KDE and KDE applications.


Note: Debian has a similar project: https://wiki.debian.org/ReproducibleBuilds


gx is a language-agnostic, distributed package manager. It uses IPFS for package retrieval and publishing, and git-like hooks for per-language/ecosystem implementations (like Go: https://github.com/whyrusleeping/gx-go).
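
If I'm reading the repos right, the hooks work much like git's subcommand convention: gx delegates language-specific steps to a gx-<lang> binary on your $PATH. A sketch for Go (install path from the repo linked above):

    go get -u github.com/whyrusleeping/gx-go   # install the Go handler
    gx install                                 # gx calls out to gx-go for the Go-specific hooks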


Looks like this wasn't built in response to the recent NPM drama, but it sounds like a perfect alternative.

IPFS's ability to provide permanent, signed content makes this great.


Very cool. Have you heard of GitTorrent? https://github.com/cjb/GitTorrent


In all this discussion of npm, I wonder how PyPI stacks up. Once I tried to fix an issue with the source distribution of one of my packages on PyPI and found that I could delete it just fine, but uploading a new file with the same name failed. I tried re-uploading the original sdist, but that failed as well. Apparently once it's gone, it's gone.

I eventually just uploaded a different format and then quickly pushed a new version, but I was kind of disturbed that I could irreversibly break a dependency like that.


This is probably done for security: if a package gets removed, it can't be replaced by something potentially malicious. In addition, people who have pinned their version numbers get alerted this way.


Yeah, but I think either I shouldn't be able to delete it in the first place, or I should be able to re-upload the original package with an unchanged hash. The alternative is that anyone with an == dependency is now permanently broken.


Ah, I was just thinking about doing something like this for a different package manager. I'll be interested to see how this works out.


Let's say we adopt this. Cut to some small number of years in the future, and everyone is complaining about how dealing with 46-character-long opaque strings is terrible.

This is a great idea, but there's no mention of versioning. As a long-time advocate of what we now refer to as Semantic Versioning, I can't help but see this as a disaster waiting to happen.


Excellent concerns! gx addresses both:

> Let's say we adopt this. Cut to some small number of years in the future, and everyone is complaining about how dealing with 46-character-long opaque strings is terrible.

Agreed. You can point gx to "repos": /ipfs addresses that map (package, version) pairs to /ipfs/... addresses. This lets you use simple names, and anyone can publish a repo.

see https://github.com/whyrusleeping/gx#repos
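
In spirit, a repo is just more content on IPFS; nothing stops anyone from publishing their own index (purely illustrative):

    # a mapping from friendly (package, version) names to immutable hashes
    cat > my-repo.json <<'EOF'
    { "express": { "4.13.4": "/ipfs/QmExpressPackageHash" } }
    EOF
    ipfs add my-repo.json   # share the resulting hash as your repo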

> This is a great idea, but there's no mention of versioning. As a long-time advocate of what we now refer to as Semantic Versioning, I can't help but see this as a disaster waiting to happen.

Sorry, this is a documentation failure rather than a feature failure: gx has semver built in and adheres to it today.


Right now, I'm running an IPFS node. Now, say I store a set of files. What happens is that:

/set_of_files -> /ipfs/[key_1]

Now, these files change, because they're your Git repo of all your projects. So the key that represents your files changes continuously. That sucks. There's a solution: IPNS.

IPNS can point at a continuously changing set of directories and always provide a stable pointer as an IPNS link. It's the mutable part of IPFS...

Now, as for the hashes... The Firefox plugin already supports looking up the DNS TXT record for the IPNS key associated with your site. So, if you have a pretty DNS name, you keep your pretty DNS name.
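
Concretely (hashes made up):

    ipfs add -r ./my-site          # -> QmSiteV2, a new hash on every change
    ipfs name publish QmSiteV2     # /ipns/<your-peer-id> now points at it
    ipfs name resolve              # confirm where your key currently points
    # a DNS TXT record like "dnslink=/ipns/<your-peer-id>" then lets
    # /ipns/yourdomain.com resolve through your key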

Now, versioning assumes there was something prior, something now, and something that will be. That's not always the case. Where it is, a changelog could be kept as a chain of deltas to the changed keys, blockchain-style. But that can be implemented when it's needed, not as core functionality.


The hash is the version. However, the extra information semantic versioning gives is of course missing.



It was created before npmgate but I posted it as a response to npmgate.


(I meant to mention that this is a Node.js NPM replacement, specifically)


But I've learned it can do more than just replace npm. Very cool!


Isn't IPFS based on Bitcoin technologies? Does it have the problem that Bitcoin had (still has?) where you have to download the entire blockchain -- which in this case would be the entire network? I'm guessing this has been solved since I last heard about the issue...


Not at all. It's more like a giant BitTorrent swarm full of everyone's git repos.


Any place where I could read a good introduction to IPFS?


I would start at our webpage here: https://ipfs.io/

Though admittedly it needs work. If you're the video-watching type, the Stanford talk on that page is very good (I recommend setting the speed to 1.5x or so).

Beyond that, come check out our git repos:

* IPFS whitepaper and general protocol discussion: https://github.com/ipfs/ipfs

* go-ipfs codebase: https://github.com/ipfs/go-ipfs

* the FAQ: https://github.com/ipfs/faq


There's also a (very WIP) "IPFS Textbook": https://github.com/RichardLitt/ipfs-textbook/


Ok, so my question still stands: does this mean I will eventually be downloading the entire IPFS network onto my computer? What is the solution to that problem?


Currently, no: you only download content you have requested, and you can remove anything you haven't pinned via a garbage collection command. When bitswap is more mature you may end up holding more content on your system, but that will probably be configurable.
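
Concretely:

    ipfs pin ls    # list what you've asked your node to keep
    ipfs repo gc   # drop everything unpinned from the local repo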


This looks great! I've always wondered why there's no universal package manager, and although we all know about Nix, I've always wanted something simpler and lighter!


Whenever I see "complicated technology of which there are 1000 implementations" combined with "built around X", I think somebody just wanted an excuse to play with X and has no intention of building a good tool.

Package managers have a long history of being poorly implemented. Look at CPAN, RPM, Alien, Pacman, Nix, etc., as examples of good ideas that still have various flaws. We already have everything we need; it's just not all built into the same tool. Adding new technology to a new tool doesn't make a better tool, it just makes yet another incomplete tool.


I think it would be relevant to this discussion to point out the flaws in Nix rather than vaguely refer to them. Please explain how to make a more complete tool from any of the five package managers you mentioned.

If I were to grant your comment the same amount of charity as you have granted this project, I would summarize it as, "This project has problems like all the rest. If you don't make it better, it's lame."


When a corporation makes a tool, it's to do some job it needs to do. Usually features are added over time to provide for the requirements of its users and various teams. In this way, it only does what it needs to do, and is usually kept around a long time, used by everyone, and rarely replaced, because it can always be modified to fit a new feature. It ends up being pretty feature-complete and useful (which is sad because they usually end up locked up as intellectual property).

Non-corporate tools, on the other hand, are typically made by people who need to do something and want to do it their own way. It's not that they couldn't make do with an existing tool - the functionality probably existed in another tool or tools - they just want to do it differently, with some kind of interesting twist. The end result? Reinventing the wheel with maybe one new feature, and missing all the ones they didn't need/want. Do this a hundred times and you have a hundred tools that are useful to some people, but not most people.

I'm not saying it's lame; I'm saying it's a waste of engineering effort.


This comment demonstrates that you do not have any criticism at all regarding this project. While reading it I could only think of Visual SourceSafe and Git, which I believe counter your entire premise.


Git was created basically because there wasn't an open-source implementation of BitKeeper, and BitKeeper was unique in its function. If at all possible, Torvalds would have stayed away from making Git (in fact, he said so himself). This is different from reinventing a wheel whose functionality already exists. I don't see how VSS relates, since it's used for different things (like comparing Oracle to SQLite?)



