Radicle: Open-Source, Peer-to-Peer, GitHub Alternative (radicle.xyz)
807 points by aiw1nt3rs 9 months ago | hide | past | favorite | 284 comments



Hi HN. I am the co-founder of the project. If you are interested in how the protocol works under the hood, start here: https://docs.radicle.xyz/ Docs are still WIP though.


I read the documentation and this stands out to me:

> Radicle repositories, which can be either public or private, can accommodate diverse content including source code, documentation, and arbitrary data sets.

If this is, basically, a peer-to-peer file sharing application, what part of the protocol handles dealing with abuse?

Otherwise, how is this different from the previous generation of file sharing applications (BitTorrent, winny, etc) where people just share arbitrary copyrighted content like movies, songs, software, etc?

I feel like a few bad actors will ruin this?

Can you partition your “personal” network somehow, so you can use it with absolute confidence that you're not participating in anything illegal?


Good question!

One of the key ideas is that each user chooses what repositories they host via pretty fine-grained policies. This means you can easily block content you're not interested in seeding, or simply configure your node to only host content you explicitly allow.

You can also choose which public nodes to connect to if you'd rather not connect to random nodes on the network; though I don't expect most users to go this route, as you are more likely to miss content you're interested in.

Though Git (and thus Radicle) can replicate arbitrary content, it's not particularly good with large binary files (movies, albums etc.), so I expect that type of content to still be shared on BitTorrent, even if Radicle were to be popular.


Is there nice interop with BitTorrent for those cases, similar to how Git Annex adds large binary support to git?

For example, if I use Radicle to version a machine learning project, can I use a Magnet link for multi-GB model files?


You can already use it with git-annex to store binaries using the git-annex-remote-git-aafs[1] special remote.

Although I would be careful and make sure you understand what it does to your branch namespace. In the worst case it won't save any space over committing binaries directly, but at least the binaries live in orphan branches that can be pruned without rewriting history.

But even so, you can just use any number of git-annex special remotes to bypass using git for sharing files.

They may eventually add first-party support for git-annex. But nothing is stopping you from using it now.

[1]: https://github.com/domo141/git-annex-remote-git-aafs


Last commit is 5 years ago. That is a lot in dog years.


Any plans to add support for git-annex?


Random thought but what about calling it a subscription?

For example, allow users to be able to subscribe to specific repos or specific topics or specific people etc


Sharing arbitrary copyrighted content did not ruin BitTorrent so I don't see why it would ruin this.


Did it not? I would consider BitTorrent traffic to have a very high risk of being blocked or viewed as suspicious, and as a result it almost never gets embedded in other use cases. Even the simple use case of using torrents for peer-to-peer delivery of software updates has flopped.


It looks like there’s still an Arch Linux BitTorrent tracker; maybe I’m out of touch, but I think this is a fairly common way of distributing distros (?).


Granted, you've found the one (and only?) notable exception to the "no legitimate torrent use cases" rule that I can think of. :)

And this has a legacy roughly as long as BitTorrent itself. The fact that BT never established a footing in other use cases (even those where you would think it would be a great solution) is telling.


Blizzard used BitTorrent for distributing full games, patches and trailers for years.

https://wowpedia.fandom.com/wiki/Blizzard_Downloader


I had forgotten about this, this is an interesting exception.

Windows Update also allows the user to opt in to use a non-bittorrent protocol for peer-to-peer delivery of updates to other Windows users. But of course, it's not true Bittorrent.


That’s fair, I never got into torrenting for illegal purposes but even I had the feeling that we were acting as the thinnest veneer of legitimacy when torrenting distros, haha.


You can choose which nodes you follow and which nodes you block - you can even decide to seed particular repos rather than an entire node.

(P.S. I am working at Radicle)


How would you know which nodes to block?


Any node with "problematic" behaviour can be blocked.

But please note that you can also choose a "block everyone, follow just the good ones" (i.e. a selective) seeding policy [1].

[1] - https://docs.radicle.xyz/guides/seeder#a-selective-seeding-p...
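For reference, the default policy lives in the node's config file. A sketch (the field names are my reading of the docs linked above and may differ between versions):

```json
{
  "node": {
    "seedingPolicy": {
      "default": "block"
    }
  }
}
```

With the default set to "block", a repo is only seeded after you explicitly allow it, e.g. with `rad seed <RID>`.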


so it's the same as cloning a repo locally, auto updating it, and exposing a mirror to the world?

how will this not devolve into a Freenet fiasco when popular repos start to go wild on content?

edit: i see from the finance thread you will likely take on maven/npm/etc with crowd hosting+funding so I'm now more curious how cheap it will be for bad actors to push intentional malicious content, since now it's mostly about cost of having consensus over the mirrors


Fascinating project! I'm curious, what's the business model? Crunchbase lists that you raised $12M, so I'm assuming you do have plans to make money?


Curious as well. Searching around I found this documentation on their ecosystem [0], which may shed some light on the organization structure. It may be they are organized as a DAO? From the intro:

> Radworks is a community dedicated to cultivating internet freedom.

They do not shy away from cryptocurrency technology, though AFAICS that is not directly applied to the Radicle project. Another project of Radworks is Drips [1], to help fund open source.

[0] https://docs.radworks.org/community/ecosystem

[1] https://www.drips.network/


Hi. While not actively looking for a replacement for proprietary services such as GitHub or GitLab, from time to time I'm asked about an alternative.

I'm all for a distributed self-hosting solution, so Radicle is definitely hitting the mark here, however:

> Linux or Unix based operating system.

For the kind of project I have to assist with, this would be a deal-breaker. Since the code seems to be in Rust: do you intend to make it available to MS Windows? (I took it for granted that Mac OS is included in the Unix family, right?)

If not straight-up support for MS Windows, then maybe an MSYS2 port?

----

To give some background: I'm not in charge of decisions like service vendor selection, and we are talking about a quasi-government organization with substantial open-source code base that is currently hosted on Github. I.e. sometimes I might have a chance to pitch a particular idea, but it's not up to me if the idea is accepted. They are quite motivated to make their work as resilient to private vendor policies as possible as well as try to "do good" in other ways (i.e. sustainability, breadth of the outreach etc. -- a typical European gov. org :) So, Github is... obviously in conflict with such policies.

While there are other gov. agencies tasked with archiving or networking support, they seem to be woefully incompetent and / or outdated, as well as often falling for the vendor-laid traps (eg. the archiving service went all-in after DataBricks not even realizing it's a commercial closed-source product). So, I wouldn't have high hopes for the org. to be able to leverage a self-hosted solution. That's why a distributed solution looks great.

However, they wouldn't be able to use a tool that doesn't work on major popular PC systems.


Hey there. Yes, Windows support is something we'd like to have, but focusing on fewer OSes helps us ship faster. In principle, there shouldn't be any issue in porting to Windows, but since no one on the team runs Windows, it would be hard to ensure things work smoothly. If there is demand though, we will certainly start allocating time towards it.

Radicle does work on macOS as well.


Windows Subsystem for Linux should alleviate these pains a lot.


It's just a somewhat better integrated VM with all the shortcomings that entails...

Having to deal with individual users of various software I'd sometimes resort to using WSL, but this isn't an always acceptable way.

To shed more light: some of the users of the system I'm talking about are hospital researchers. These people are very limited in the choices they can make about their computers. While it might sometimes be possible to convince a hospital's IT to install / enable WSL, this won't work all the time, especially because it essentially gives the otherwise very restricted user too much control over their workstation. MSYS2 has the advantage that everything can be packaged as a single program (Git is distributed this way, for example), which makes things easier on the org's IT. In principle, WSL can be used that way too (IIRC Docker does something like it), but you'd still need a bunch of Windows-native wrapping for things to work (i.e. I understand that there needs to be at least one service process that does the peering).


WSL is great so long as everything you need to do runs inside its VM. If you need to access things on the main Windows filesystem, you're basically accessing a networked file system at that point, with all that entails for perf.


How much budget was spent on Radicle, how many people worked on it, how long have you been building it, and who is using it?


I won't reveal anything about our finances, but the current code base is a little under 2 years old. We've worked on the general problem for over 4 years in total though. The team is around 12 people, split between protocol, cli, tui, web and content.

The product is set to launch this month, so we're just starting to onboard users, but many people in the community are already using it, and we've been using it internally for about a year.


My question wasn't about your "current codebase". It was about Radicle. It was launched 6 years ago, and for some reason it's always about to onboard its first users whenever crypto is hyped :)

An idea doesn't take off -- totally normal, but how on earth can you fund Radicle for such a long time with no users? You can even throw it away and rewrite it! What's the source of funding for Radicle?

Asking because you seem to be best at getting the idea funded, not really actualizing it.



I might take your comment more seriously if you a) put your name behind it and b) dropped the “just asking questions bro” shtick when you obviously have some sort of axe to grind with this project.


Good luck with your reasoning skillset


Good luck with your trolling.


Asking about the source of funding is definitely trolling, for sketchy crypto projects ;)


Sorry, this is sketchy. If you're not clear about your revenue generation and finances, how do I know your project isn't just about harvesting as much user data as possible?

Open-source projects obviously need to pay the bills, but if you're not clear on how you are achieving this or hoping to achieve this then there's really zero trust in using this.


> there's really zero trust in using this

It's peer to peer, anyone using the protocol is entitled to share and collect as much data as the protocol permits, and the founders have no more power than any other user.

It's way less sketchy than anybody operating a server and asking you to trust that they're doing so responsibly--which is pretty much everybody.

I don't think that everything can or should be made zero-trust. But if this can, then that's a win.


The alternative isn’t just some random person hosting a random server.

It’s dealing with a company where you agree to a policy that describes how they can use your data. That means you have legal recourse if they violate that agreement.

It also means you know who actually has your data, which isn’t the case with these federated networks. Every entity that has your data on their server is another entity that can use it in a way you don’t agree with.

On top of that, the alternative solutions are pretty clear how they make money.


If you're powerful enough to have a lawyer for such things, then I guess that's a significant difference. But for most of us, your description of the alternative is indeed tantamount to "some random person hosting a random server". And you're right that federated designs are susceptible to bad behavior on the part of the server admin. I assume that's why the radicle protocol guide (https://docs.radicle.xyz/guides/protocol) has a section differentiating P2P from federated.

I don't know these people, maybe they are indeed up to something nefarious, but their design is inherently more trustworthy than federated or hosted solutions. If I must choose between transparency into finances and a nonhierarchical design which presents no high-value targets for corruption to focus on, I'll take the better design over the financial transparency every time.

If they turn out to be actually shady I can just configure my node not to talk to them or their friends and keep on using it, which is a lot more than can be said for most of the alternatives.

Besides, it's a publishing platform. What is this "your data" you're talking about? The whole point is to spread it far and wide and to collect contributions from far and wide.


i really like the website and application design, bc so many oss projects often completely falter w/ visual design, and while this is a superficial thing, beautiful design makes me want to interact w/ a project more :)

also, i'm curious, what kind of adoption were you anticipating (some time ago and now) and did the result align with it?


Your gossip protocol isn't the gossip protocol of Hashgraph/Hedera https://hedera.com/learning/hedera-hashgraph/what-is-gossip-..., is it?


I'm curious if you (or anyone) had a chance to use Mango (https://github.com/axic/mango) before it was abandoned?


I do remember Mango! I didn't actually try it out, but we had experimented with Ethereum and IPFS in the past, and it wasn't a great fit for a code collab platform due to performance and cost.


Looks really interesting! Some of us are allergic to "curl | bash" though; would you consider creating a homebrew package?


Understandably! We are working on packages for Linux and macOS.


Does this v3 iteration mean that if I pull Radicle from nixpkgs right now that I might be a major version behind?


I am a nix noob, but we are now providing a flake[0]; I don't know if that helps!

[0]: https://app.radicle.xyz/nodes/seed.radicle.xyz/rad:z3gqcJUoA...


It would be neat to define the radicle repo as an input to the flake for a project which uses radicle. That way you could add it to the devshell, and you'd have pinned the version of radicle to your project, such that running "nix flake update" updates that project's version of radicle along with its other dependencies (this, among other things, is what having a flake.nix at your repo root makes possible).

A workflow of this sort doesn't need nixpkgs at all, but it does require that the nix flake input handler knows how to fetch from radicle repos. I'll try it a bit later today, but I'm guessing that this will require a change to nix before it works.


Interesting. "fetchFromRadicle" would be insanely cool.


It seems like you can define radicle repos as flake inputs like so: https://gist.github.com/MatrixManAtYrService/b527300542b6fdd...

Although maybe this approach is cheating because it's relying on https and not some kind of hash-linked P2P magic (which would definitely require modifying nix to make work). I guess there's something similar to the IPFS gateway going on: somebody is hosting a bridge into radicle space. It would be interesting to get this working without dependency on that bridge.

Anyhow, modify your project's flake accordingly and your version of `rad` will track with that radicle repo. No curl-to-bash required.


IIUC, you should be able to use it with git-remote-rad[0] via builtins.fetchGit. Flakes would probably need upstream support, though.

Seems to work for git-remote-hg, anyway:

nix-repl> (builtins.fetchGit { url = "hg::https://www.public-software-group.org/mercurial/lfapi/"; rev = "34366bf575c8c77c8d3b76d32940c1658cb948a4"; }).outPath

"/nix/store/8dwyms22iwy4fq0b1593i34m88jk574j-source"

[0]: https://app.radicle.xyz/nodes/seed.radicle.xyz/rad:z3gqcJUoA...


Yeah, Nix's fetchGit just calls out to git, which is great for seamlessly handling weird SSH setups and the like.

Nix itself has a plugin mechanism, but it's rarely used (since it can harm reproducibility; and TBH that seems to be the only reason to use it, for those very rare situations where it's desirable).


Of course you are! All the coolest projects are! Thanks, now there's no excuse to not try later!


Are you planning on adding native support for this to git?


No plans to do so - first we’d have to rewrite it in C, and second it would be a large amount of code for the Git project to merge.


I'm interested in this, but I noticed a base58 hash on the page. I'm not really interested in crypto. How much could I use this product without adopting crypto? Is this attached to some digital currency like ipfs or is it independent?


It’s independent.

No need for crypto/digital currency whatsoever.


My eyes! The goggles do nothing! [1] Please bring on some design and UX folks. The contrast literally strains the eyes.

[1] https://www.youtube.com/watch?v=PWFF7ecArBk


There are lots of potential intellectually stimulating research projects. Why code repositories instead of like, a video game? Why not harness the same manic energy into something that already existed? Like the kind of person who can be sincerely passionate about source code repositories, why can't that kind of person then be passionate about literally anything?


I’m confused. What does this comment mean?


Let's say you had $12m, with the requirement that it be spent on an "interesting problem." What would you spend it on?

These guys picked "distributed source code repositories, with a client idiosyncratically written in Rust and idiosyncratically built with a cryptocurrency idea." Why?

I agree it's intellectually stimulating. But besides that, is there a reason they are particularly passionate about distributed source code repositories?

Thing is, building UIs to clone GitHub is a huge grind. It's not anything anyone asked for, and there aren't any innovations there. Then building all this infrastructure and documentation is also a grind. Then this is a grind, and that is a grind, and you add up all this stuff that is kind of a grind. So you have this original... radical of an interesting idea, that is intellectually stimulating, but then you have to do all this work that is a grind, and it's like, why?

So in my experience there's someone back there who has a certain manic energy, an ability to focus on doing this grind. There's no audience, so there are no bug reports, so it can sort of be all work channeled into product development just for the psychic satisfaction of pumping out GitHub UI clones, documentation, install scripts, infrastructure, documenting and evangelizing a "protocol" etc. I mean that's 90% of the work right? So why not harness that manic energy elsewhere?

I'm not making a judgement, but I'm trying to figure out where the sincere excitement lies. Like if you are willing to put up with this grind, you can also contribute 1 feature to Git, which already exists, and that could get 1,000,000x the adoption and be 1,000,000x more impactful. I don't know. So it's not about that. It's not about growth growth growth, or whatever.

If you have the manic energy to do any mundane task, like if the authors don't really care about which grind it is or how much it is, why not channel it into something else that maybe they are more passionate about? Like who is sincerely passionate about distributed source control, but hasn't already found their proverbial tribe in the many, many places where you can be excited about something like that?

I bring up "video games" because it's stereotypically the thing the quiet kid who suddenly becomes very wealthy doing something meaningless (sorry, that's true about distributed source control) and then now, he has the money to do whatever he wants, so he funds a Diablo clone or whatever. It's only a half joke. But it's like, why? Why this?

Of course one answer is that, for many people, they see $12m as Finally My Payday. They'll say or do anything to make that happen. If it happens to be that their lane is Authoring Idiosyncratic Computer Science Research Projects, that's what will be related to getting the payday. If that happens to be their lane because they look like a guy who plays Diablo and drinks energy drinks and is good at math and has manic energy to program cryptocurrencies... okay. That could be true.

This is just a colorful comment. But I think there's something more sincere there, and that's what I'm asking for, and unfortunately there is a way to tell in writing if someone is or is not sincere, versus just trying to keep their payday, and that's the risk with exposing yourself to the community, and that's fine.


It’s simple: I don’t want my code and code collaborators to be using a platform owned and controlled by a third party. Just like I don’t want my OS or text editor, kitchen, furniture, clothing, books or music to be controlled by a third party that can decide to take it away whenever they see fit. Code and open source are integral to my life, as are the other things cited above, and therefore I’m uncomfortable with the idea of using github for the forseable future.

As it happens there are many others like me, and this helps fuel our excitement and drive to get this out there.


> I agree it's intellectually stimulating. But besides that, is there a reason they are particularly passionate about distributed source code repositories?

I’m surprised the answer isn’t obvious to you, yet again maybe I shouldn’t be as I suspect you’re a nocoiner.

Distributed decentralized anything is fundamentally about censorship resistance. Understanding that, for me, makes the answer as to why they are passionate about distributed decentralized version control obvious: they are concerned about coming censorship attempts on software, which honestly seems pretty likely given the current authoritarian direction of western civilization in general.

I’d also speculate that your perception isn’t shared by everyone based on the large number of upvotes on this submission.


For me it is not only about censorship resistance, it is also about having everything stored in git (so you lose nothing if you decide to move around) and about how easy it is to be a node yourself (you installed Radicle? Congrats, you're a node in the network - you participate, you help, you extend. You can run a permanent node with a domain name, but you're already a node once you've installed Radicle and started the node daemon).

It is this radical idea that everyone can easily be an equal participant in the network and that they have power over what they want and don't want. The open source nature further helps you adapt things - don't like our local-first web UI? Build your own or adapt ours to your needs.


It's been fascinating watching Radicle evolve over what seems to be the last 5 years.

I attended the workshop at Protocol Berg 2023 and think they built something really powerful and novel.

Perhaps the most exciting aspect is that even the collaborative aspect of the protocol is local-first which means you can submit patches and issues without internet and that your team isn't on HN every time GitHub is having problems.


Nice to meet a fellow Protocol Berg enjoyer in HN!


This looks like a fine project for its purpose, but I think git is already open-source and p2p. You don't need to sh <(curl) a bunch of binaries; instead, simply connect to another git server and use git commands to directly pull or merge code.

What's missing in git is code issues, wikis, discussions, github pages and most importantly, a developer profile network.

We need a way to embed project metadata into .git itself, so source code commits don't get mixed up with wikis and issues. Perhaps some independent refs, like git notes?

https://git-scm.com/docs/git-notes


While Git is designed in some way for peer-to-peer interactions, there is no deployment of it that works that way. All deployments use the client-server model because Git lacks functionality to be deployed as-is in a peer-to-peer network.

For one, it has no way of verifying that the repository you downloaded after a `git clone` is the one you asked for, which means you need to clone from a trusted source (ie. a known server). This isn't compatible with p2p in any useful way.

Radicle solves this by assigning stable identities[0] to repositories that can be verified locally, allowing repositories to be served by untrusted parties.

[0]: https://docs.radicle.xyz/guides/protocol#trust-through-self-...


> it has no way of verifying that the repository you downloaded after a `git clone` is the one you asked for

Respectfully disagree here. A repository is a chain (or multiple chains) of commits; if each commit is signed, you know exactly that the clone you got is the one you asked for. You're right that nobody exposes a UI around this feature, but the capability is there if anyone has workflows that require pulling from random repositories instead of well-established/known ones.


Here's the problem: how do you know that the commit signers are the current maintainers of the repo?


That problem is social: you can never be sure of that, even with hardware signing of commits. No tech can ever solve it. Just get "pull requests" from contributors you know and pull from maintainers you trust. That is the social model.


That's not quite right, we solved this in Radicle. Each change in ownership (adding/removing maintainers) is signed by the previous set of owners. You can therefore trace the changes in ownership starting from the original set, which is bound to the Repository ID.


Sure, but again, you've added convenience - or what you feel like it's convenience - for something that probably can be achieved right now with open source tools. A "CONTRIBUTORS" file with sign-offs by maintainers is an example of a solution for the same thing.

I don't deny that your improvements can benefit certain teams/developers, but I feel like there are very few people that would actually care about them, and even they're not making use of the existing alternatives.


A CONTRIBUTORS file is easy to change by anyone hosting the repository - it's useless for the purpose of verification, unless you have a toolchain to verify each change to said file. "Sign-offs by maintainers" is not useful either, unless you already know who the maintainers are and you are kept up to date (by a trusted source) when the maintainers change. This is what Radicle does, for free, when you clone a repo.


All good points, but now you moved the trust requirement from me having to trust the people working on the code, to me having to trust the tool that hosts the code. I'm not convinced your model is better. :P


Over time, I’d expect the tool to become more and more trustworthy, as more and more eyeballs can review it.

Whereas having to trust people, especially as people cycle in and out over time can be inherently stochastic.


I don't know, for me when I get involved with a project, I'm more likely to be aware of the people involved with it than the place where they host it.

I understand that the disruption Radicle wants to bring is to divorce projects from their developers, but that sounds so foreign to me, that I can't wrap my head around it. I can see its use in some cases: abandoned projects, unethical behaviour from maintainers, but not to the extent where a new platform is required.

Maybe that's why I'm being such a Negative Nancy. I hope u/cloudhead didn't consider my replies too aggressive. :)


For me, one of the benefits of FOSS is that I don't have to trust the people. I can look at the code and decide for myself.

Not looking to convince you of that or anything though... :)


Can’t debate that :)


How do I verify the “original set”, or the Repository ID, if not via out-of-band communication (like a project’s official website)? And then what advantage does this have over the project maintainer signing commits with their SSH key and publishing the public key out-of-band?

I think there’s room for improvements in distributed or self-hosted git, but I think they exist more in the realm of usability than any technological limitations with the protocol. Most people don’t sign git commits because they don’t know it’s possible—not because it’s insecure.


The repository id can be derived via a hash function from the initial set of maintainers, so all you need to know is that you have the correct repository id.

The advantage of this is that (a) it verifies that the code is properly signed by the maintainer keys, and (b) it allows for the maintainer key(s) to evolve. Otherwise you’d have to constantly check the official website for the current key set (which has its own risks as well)
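As a rough illustration of point (a), here is a toy model (not Radicle's actual format; the document shape, field names, and hash choice are made up for the sketch) of how a hash-derived repository ID lets you verify a clone received from an untrusted peer:

```python
# Toy sketch: the repository ID is a hash of the initial identity
# document, so any peer can recompute and check it locally -- no
# trusted server needed. (Illustrative only, not Radicle's encoding.)
import hashlib
import json

def repo_id(identity_doc: dict) -> str:
    # Canonically encode the initial identity document and hash it.
    canonical = json.dumps(identity_doc, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

def verify_clone(claimed_rid: str, received_genesis: dict) -> bool:
    # A clone from an untrusted peer is valid only if the identity
    # document it ships hashes back to the ID we asked for.
    return repo_id(received_genesis) == claimed_rid

genesis = {"name": "example", "delegates": ["key-alice", "key-bob"], "threshold": 2}
rid = repo_id(genesis)

assert verify_clone(rid, genesis)
tampered = dict(genesis, delegates=["key-mallory"])
assert not verify_clone(rid, tampered)
```

Anything a peer tampers with in the genesis document changes the recomputed ID, so the clone is rejected without consulting any server.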


I'm not sure how any of this solves the problem.

If I am on the internet there is no key or keys that I could definitively say came from the _real_ maintainers. I need to trust some source or sources for that.

In your model, committing to the repo requires a private key. This key claims ownership of the repo. If that key is lost or stolen I have lost ownership of that repo. With no out of band method to recover it.

If that key is unknowingly stolen and ownership is switched to a new key, that is a pretty bad scenario.

Basically, I still always need to go to some other out of band source to verify that bad things have not happened.


Radicle developer here :) And yes you're completely right.

The current state of key management leaves A LOT to be desired, because `did:key` has no rotation, so if you lose your key it's game over. We decided to go with something simple first to allow us to develop the collaboration experience as much as possible -- we're a small team so it's hard to tackle all of the large problems at once, while also getting an experience that's polished :D

Key management and a general "profile" is high on our priority list after we have properly launched. A few of us think DIDs (https://www.w3.org/TR/did-core/) are a good way forward. In particular, `did:keri` seems very interesting because its method involves a merkle-chain log, which can be easily encoded in Git. It includes key pre-rotation -- meaning there's a key that's available to help recover if something goes wrong. It can also delegate to other people, so you can allow the safety of your identity and key to be improved by third-parties.

That said maybe there are other DID methods or other methods in general that might better suit. Or maybe we're able to build something that can be more general, and just needs to essentially resolve to a public/private key pair and we don't care after that.

Would definitely be interested in the community's thoughts here :) Or if someone who's got expertise in the area wants to chip in, hit us up ;)


It seems to me that for security reasons it might be a good idea to support separate signing keys for normal commits and for commits that change the ownership set. This would allow you to keep the ownership-change keys offline, under the assumption they are rarely used. This is something PoS cryptocurrencies tend to do, by having a withdrawal key for accessing stake that is separate from the signing key used for block proposals, attestations etc.


Interesting idea, thanks!


How do you know the repository id is the correct one?

You have just changed the requirement from knowing the maintainers public key, to knowing a different public key. Sounds pretty much the same problem to me.


The difference is that the repository id is stable, while the maintainer keys can change.


Except repository ids change when the repo is forked.


Yes, but the maintainers can be changed while also keeping the identifier stable.

Updates to the delegate set (read: maintainers) can be made, let's say adding a new delegate. This change is signed by a quorum of the current set of maintainers. This change is kept track of in the Git commit history, so this chain of actions can be walked from the root and verified at each point.

Similarly, a delegate can be removed, the project name changed, etc.

Forking is only necessary if there is a disagreement between maintainers that cannot be resolved so one of them goes off to create a new identifier to differentiate between the two. At this point, it's up to you to decide who you trust more to do a better job :)
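A minimal sketch of that verification walk (using a made-up data model, not Radicle's actual on-disk format): start from the root revision of the identity document and accept each later revision only if it is signed by a quorum of the delegates who were authorized at the previous step.

```python
def verify_identity_chain(revisions):
    """Each revision: {'delegates': set of key ids, 'threshold': int,
    'signed_by': set of key ids}. The first revision is the root of trust."""
    current = revisions[0]
    for rev in revisions[1:]:
        # Only signatures from the *previously* authorized delegates count.
        approvals = rev["signed_by"] & current["delegates"]
        if len(approvals) < current["threshold"]:
            return None  # chain is invalid past this point
        current = rev
    return current  # the verified latest delegate set

root = {"delegates": {"alice", "bob"}, "threshold": 2, "signed_by": set()}
add_carol = {"delegates": {"alice", "bob", "carol"}, "threshold": 2,
             "signed_by": {"alice", "bob"}}
hijack = {"delegates": {"mallory"}, "threshold": 1, "signed_by": {"mallory"}}

print(verify_identity_chain([root, add_carol]))  # accepted: quorum signed
print(verify_identity_chain([root, hijack]))     # None: no quorum signed it
```

This is why the repository id can stay stable while the maintainer set changes: the id names the root of trust, and every later change has to be justified from it.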


How do you fork an abandoned repo?


When you fork an abandoned repo, you are essentially giving it a new repository identity, which is a new root of trust, with a new maintainer set. You'll then have to communicate the new repository identifier and explain that this is a fork of the old repo.


By the same way I know how the commit signers are who they say they are in "regular" usage of GPG: I have verified the key belongs to them, or their keys are signed by people I trust to have verified, etc, etc. Like a sibling said, the problem is social rather than technical.


By joining the web of trust. Meeting people, verifying each other's identities and getting keys signed.

Debian seems to be quite good at this.

https://wiki.debian.org/Keysigning

https://wiki.debian.org/Keysigning/Coordination

https://wiki.debian.org/Keysigning/Offers


Does that matter if the signatures are valid?


Yeah, because, for example, I can publish the given repository from my server with an additional signed commit (signed by me) on top of the original history, and that commit could include a backdoor. You have no way of knowing whether this additional commit is "authorized" by the project leads/owners or not.


That is in fact the point, it's decentralized by nature. The entire idea behind git's decentralization is that your version with an additional backdoor is no lesser of a version than any other. You handle that at the pointer or address level i.e. deciding to trust your server.


Perhaps, but none of that commit history is related to the invocation of git clone. To acquire and verify, you need both a URL and a hash for each branch head you want to verify.


The problem I'd like to see solved is source of truth. It'd be nice if there were a way to sign a repo with an ENS name or domain without knowing the hash.

Another thing is knowing if the commit history has been tampered with without knowing the hash.

The reason for needing to not know the hash is for cases like Tornado Cash. The site and repo were taken down. There's a bunch of people sharing a codebase with differing hashes, and you have no idea which is real or altered.

This is also important for cases where the domain is hacked.


> The reason for needing to not know the hash is for cases like Tornado Cash. The site and repo were taken down. There's a bunch of people sharing a codebase with differing hashes, and you have no idea which is real or altered.

> This is also important for cases where the domain is hacked.

I think at some point you need to know some sort of root-of-trust to kick off the trusting process. I believe in this case, you would trust a certain DID or set of DIDs (i.e. a Tornado Cash developer's public key). You can clone their version of the project and the history of the project MUST be signed by their private key for it to be legitimate.

To clarify, in Radicle, a peer's set of references are always signed by their key and this data is advertised so that you can always verify, using their public key, that this data is indeed what this peer has/had in their Git history. If this ever diverges then any fetching from that peer is rejected.


Right now the devs are in jail, so you wouldn't be able to seed from them (computers off); you'd need to trust another reference.


This is just another form of the cryptographic key distribution problem. Doesn't matter where the git repository comes from, you can be sure it hasn't been tampered with if the signatures are valid.

Domains with DNSSEC are an interesting solution. PGP public keys are distributable via DNS records.

https://www.pgp.guide/pgp_dns/

https://weberblog.net/pgp-key-distribution-via-dnssec-openpg...


How do you handle the SHA1 breaks in an untrusted p2p setting?


If you mean collision attacks, this shouldn't be a problem with Git, since it uses Hardened SHA-1. Eventually, when Git fully migrates to SHA-2, we will offer that option as well.

> Is Hardened SHA-1 vulnerable?

> No, SHA-1 hardened with counter-cryptanalysis (see ‘how do I detect the attack’) will detect cryptanalytic collision attacks. In that case it adjusts the SHA-1 computation to result in a safe hash. This means that it will compute the regular SHA-1 hash for files without a collision attack, but produce a special hash for files with a collision attack, where both files will have a different unpredictable hash.

From https://shattered.io/
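As a side note on what is actually being hashed: Git computes each object id over a type/size header prepended to the content, and sha1dc slots in at exactly that layer, producing the same digests as plain SHA-1 for any input that doesn't exhibit collision-attack patterns. A quick illustration (using plain `hashlib` SHA-1 here, not sha1dc):

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # Git's object id for a blob is SHA-1 over "blob <size>\0" + content.
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Matches the id that `git hash-object` reports for a file with this content.
print(git_blob_sha1(b"hello world\n"))
```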


So you use hardened sha1 in radicle? It would be great to see this in the docs.


Everything that is replicated on the network is stored as a Git object, using the libgit2[0] library. This library uses hardened SHA-1 internally, which is called sha1dc (for "detect collision"). Will add to the docs, good idea!

[0]: https://github.com/libgit2/libgit2/blob/ac0f2245510f6c75db1b...


The entire Linux kernel development team would beg to differ…


> What's missing in git is code issues, wikis, discussions, github pages and most importantly, a developer profile network.

Radicle adds issue tracking and pull requests. Probably some of those other features as well.

On mobile, there are buttons at the bottom of the screen in the OP link; tap those and you get to the issue tracking tab, the pull request tab, etc.


But that’s not what parent meant. Those things should be embedded in the git repository itself, in some kind of structure below the .git/ directory. That would indeed make the entire OSS ecosystem more resilient. We don’t need a myriad of incompatible git web GUIs, but a standard way of storing project management metadata alongside version control data. GitHub, Gitea, Gitlab, and this project could all store their data in there instead of proprietary databases, making it easy to migrate projects.


Yes, this is how radicle stores this data. ; )

https://app.radicle.xyz/nodes/seed.radicle.xyz/rad:z3trNYnLW...


https://docs.radicle.xyz/guides/protocol is probably a better resource (but this guide is still Work In Progress)


> Radicle’s predefined COB types are stored under the refs/cobs hierarchy. These are associated with unique namespaces, such as xyz.radicle.issue and xyz.radicle.patch, to prevent naming collisions.

This looks like an interesting approach. I have a question: to avoid copying a large .git project, we have partial cloning and clone depth. If `cobs` grows too large, how can we partially clone it? Like selecting issues by time range?


The COB types are located in the Stored Copy; you would still be able to partially clone the working-copy repo without the issues and patches, with your current git commands. There is a better explainer here: https://docs.radicle.xyz/guides/protocol#storage-layout


> a standard way of storing project management metadata alongside version control data

Emphasis mine. Doesn't seem to be it, seeing as this is yet another home-grown issue store.


Yeah, exactly. Radicle does it this way, Fossil another; see here why that is a problem: https://xkcd.com/927/


And Fossil is an entirely different VCS.

What’s the alternative? That at least N projects cooperate and agree on a common design before they do the implementation? (Then maybe someone can complain about design-by-committee.)


I use Artemis, which was originally written for Mercurial but also supports Git. It stores issues inside a repo, so it doesn't care about where it's hosted and works completely offline without needing a Web browser. Issues are stored in Maildir format, which is a widely supported standard that can be processed using off-the-shelf tools. For example, I write and comment on Artemis issues using Emacs message-mode (which was designed for Email), and I render issues to HTML using MHonArc (which was designed for mailing list archives).

I'm not claiming these tools are the best, or anything. Just that existing standards can work very well, and are a good foundation to build up UIs or whatever; rather than attempting to design something perfect from scratch.

My fork of Artemis, patched for Python3: http://www.chriswarbo.net/git/artemis

Emacs config to trigger message-mode when emacsclient is invoked as EDITOR on an Artemis issue: http://www.chriswarbo.net/git/warbo-emacs-d/git/branches/mas...

An example of HTML rendered from Artemis issues: http://www.chriswarbo.net/git/nix-config/issues/threads.html
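To illustrate the "off-the-shelf tools" point: Python's standard library can already read and write Maildir, so issue tooling built on it gets parsing and storage for free. A toy sketch (the paths and message fields here are invented, not Artemis's actual layout):

```python
import email.message
import mailbox
import os
import tempfile

# A throwaway Maildir standing in for a repo's issue store.
issues_path = os.path.join(tempfile.mkdtemp(), "issues")
box = mailbox.Maildir(issues_path)  # creates cur/, new/, tmp/ as needed

msg = email.message.EmailMessage()
msg["Subject"] = "Build fails on ARM"
msg["From"] = "alice@example.com"
msg.set_content("Steps to reproduce: ...")
box.add(msg)

# Any mail-aware tool (or a few lines of stdlib) can now list the issues.
subjects = [stored["Subject"] for stored in box]
print(subjects)
```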


> What’s the alternative? That at least N projects cooperate and agree on a common design before they do the implementation?

That would be ideal, yes. You should solicit comments from the greater community before setting the format in stone. But the very minimum would be to build on existing attempts at issues-in-git like [0] instead of reinventing the wheel unless you have a very very very good reason.

[0] https://github.com/MichaelMure/git-bug


Yes! That's exactly what I would like to see - come together as a working group, create a PR on git itself, and implement standard support for issues, PRs, discussions, projects, votings, project websites, what-have-you. The community will take it from there.

The alternative to that would be the git project itself coming up with an implementation. They have reasonable experience working with the Kernel, and the creation of git itself seems to have worked reasonably well -- although I'm not sure I would want to use something Linus considers ergonomic :)


Ok. That could work if you found a group of people who are interested in such an all-in-one package. Gitlab is apparently working on a sort of cross-forge protocol (probably motivated by connecting Gitlab instances) and it seems that some Gitlab employees are working full time on the Git project. So those are candidates. You probably need a group which both have recognition within the project and are active enough to drive such a project forward without it fizzling out.

Without such a group you might just send a few emails to mailing list, get shrugs about how that plan “looks good” with little real interest, and then have to discuss this big plan in every future patch series that incrementally builds such a thing into Git itself. Mostly why such code should be incorporated and how it will pay off when it’s all ready.

The Git tool itself, and the project by extension, is for now very unopinionated about whole-project solutions like this. The workflow that they themselves use is very loosely coupled and pretty much needs bespoke scripting by individual contributors, which I guess is telling in itself.


I would have like to have seen what kind of issue resolution labels old Linus would have come up with. Resolved: YOU GIT has a nice ring to it.


I'd like to note here that Radicle has defined its own version of issue and patch management, but they're not necessarily required to be used as part of the protocol. The protocol only defines that any and all COBs will be stored and replicated under the `refs/cobs` hierarchy.

If someone wanted to come along and define a way to embed Fossil wikis/issues as a COB then they could be replicated on the Radicle network and it's then up to application developers to load and interpret that data.

I think this is cool because it essentially allows developers to extend the Radicle ecosystem easily and define new workflows! However, that does not avoid our XKCD problem stated above ;P But hey, sometimes that's the beauty of these things -- we're given the power to define our own workflows and not locked into something everyone complains about coughGitHub branches PR flowcough


Nothing should be under .git/ except things owned by git (or at least allowed for in an explicit way like the hook scripts).

You would have to either add the features to git itself, or at least add to git the knowledge of and allowance for extra features like that.

But not just toss non-git data in .git/ simply because the dir happens to exist.


What is "non-git data" though?

Git is just a mechanism for storing data as a series of commits. The underlying data are just blobs of bytes. So all data is Git data, and Radicle takes full advantage of that.

The "special" data in Git would be `refs/heads`, `refs/remotes`, `refs/tags`, and the lesser known `refs/notes`. Radicle doesn't touch those directly, we still allow the use of Git tooling for working with that data.

It then extends on top of these by using `refs/rad` and `refs/cobs` for storing Radicle-associated data, using all of the mechanisms that Git provides to do so.
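Illustrative only (the object ids below are made up): because Git refs are just namespaced names pointing at objects, hierarchies like `refs/rad` and `refs/cobs` coexist with the standard ones without collisions, and each namespace can be listed separately, the way `git for-each-ref <prefix>` does.

```python
# A refs store is conceptually a flat map of namespaced names to object ids.
refs = {
    "refs/heads/master": "9f1a11",
    "refs/tags/v1.0": "2c4d22",
    "refs/rad/id": "08aa33",
    "refs/cobs/xyz.radicle.issue/d87e44": "5b6a55",
    "refs/cobs/xyz.radicle.patch/41fc66": "77e077",
}

def refs_under(refs: dict, prefix: str) -> list:
    """Names in one namespace, like `git for-each-ref refs/cobs` would show."""
    return sorted(name for name in refs if name.startswith(prefix + "/"))

print(refs_under(refs, "refs/cobs"))  # only the COB refs, not heads/tags
```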


> You would have to either add the features to git itself,

That is exactly what I'm suggesting.


I don't think I would agree. Well it depends exactly what you imagine.

There are probably an infinite, ever-growing and changing number of repo-adjacent things similar to the handful of added features that github currently tacks on to git.

I think it doesn't make sense for git, already complicated enough, to try to do all of that other related stuff, however related.

Maybe in fact git is already doing too much meaningful metadata work directly itself, and should instead try to switch its current metadata into some kind of generic interface that other software could hook in to better?

So rather than git managing, say, issues, git just manages data. Not merely a db, it would facilitate associating high level feature data like issues and ci and conversations with low level commit data. An issue tracker would just be one of many clients using the interface.

git itself should probably not know very much about the data other than how it is associated with some commit. IE, the actual meaning of the data is either only in the consumer, or perhaps is some agreed standard that multiple consumers adhere to and understand, or perhaps comes with its own schema definition like DTD or protobuf. Or all of the above: a paid product could have data only it understands, and other software could use standardized data, all stored in git at the same time. Multiple different consumers could use the same generic interface to their own data.

Because these add-on related functions are probably not universal or done being invented. Tomorrow there will be 5 new things you want to attach to a repo besides what github provides today, and so without even knowing what they are, that's how I know it's wrong for git to provide the features itself.

But managing the data without caring what the data is, that could totally be a new built in git function, where you give git itself one new function and interface, and an open-ended number of new feature-providing consumers all just use the same underlying git feature in whatever way they want.

Other than that the only other new thing I think that might be right to add to git itself is a proper way to manage large binary assets.


Radicle does store such data in git - issues, patches (PRs) etc. Also, the entire project (protocol, cli, web ui etc) is fully open source.


> We need a way to embed project metadata into .git itself, so source code commits don't mess up with wikis and issues.

Fossil (https://fossil-scm.org) embeds issues, wiki etc. into project repository.


Also Radicle, evidently


> and most importantly, a developer profile network

What has the world come to where that is the most important part?

--

I think gerrit used to store code reviews in git.


Repositories and code-sharing are inherently about trust. Even if you personally audit every line of code, you still need to trust that the owner isn't trying to slip one past you. Identity is a key component of trust.


What you say makes sense. But that trust needs to extend to the hosting platform itself, because the platform can manipulate all non-signed data. I don't see how a GitHub profile by itself is trustworthy. You need some additional, external and independent verification that that GitHub profile is really authentic and doesn't contain compromised code.

There is nothing stopping me from creating the accounts IggleSniggle or Iggle5n1ggle on github.


I mean... yeah, you obviously have to trust someone to vouch for the authenticity of an identity. In the case of Github, that's the platform owner. In the case of a digital signature, that's the root certificate authority.

With that being said, your example feels pretty far off the mark. You might be able to phish using a similar looking identity, but that's completely unrelated to the trustworthiness of the platform. It's not as though you'll manage to somehow phish Github into showing someone else's trustworthy work history on a spoofed identity.


> It's not as though you'll manage to somehow phish Github into showing someone else's trustworthy work history on a spoofed identity.

You don't need to trick github, that's just how it works by design. Anyone can upload any repo to github. There is nobody checking the repo isn't stolen or fake. Github does not claim to be vouching for anyone. At most they will delete malware and obvious scams if it happens to come to their attention.


Welcome to the era of self-promoters and narcissists.


Classic git does not evade censorship, as the extremely recent news concerning Nintendo shows. An idea like this has been rolling around in my head, and I'm overjoyed that someone has done the hard work.


Git evades censorship just fine, since it is properly decentralized and doesn't care about where you got the repository from. Plain HTTP transport however does not and most Git repositories are referred to by HTTP URL.

If you simply host Git on IPFS you have it properly decentralized without the limits of HTTP. IPNS (DNS of IPFS), which you need to point people to the latest version of your repository, however wasn't working all that reliably last time I tried.


But with Git you still need to locate an up-to-date source for the repo. If the author is signing commits or you know a desired commit ID then you can verify once you have found a source, but finding the source is the hard part.

IIUC with Radicle you can just request the repository by signature and get the latest released version from the network without needing to track down a source yourself. A trusted publisher (probably the original author/maintainer) can continue to publish commits without a centralization point that can be shut down (like the recent Yuzu case).


You're basically describing a name service: a way to associate a stable name (like "Yuzu" or "PGP:ABCDEFG..." or whatever) with a changing, mutable identifier (like a Git commit ID, an immutable IPFS URL, a Bittorrent magnet link, etc.).

The most obvious example is DNS, which is technically capable of this, but is mostly not set up in the ways we'd want. It's pretty centralised: so whilst anyone can host their own DNS servers, serving any data they like, it's unlikely the "main" DNS network will connect to it; and will consider you malicious if your data disagrees with the centralised records. Things like DNSLink can be useful for associating a DNS name with names in other systems, but that's still vulnerable to hijacking/poisoning without an out-of-band way to verify it.

The GNU Name System seems better suited, since it can use public keys for stable names, which therefore can't be hijacked/poisoned. Associating these to pet names is also less centralised, feeling more like /etc/hosts than "the" DNS; with recursive resolving, so e.g. I can look up "Yuzu" based on what you think it is, etc. That seems to provide a nice balance between decentralised control, versus relying on side-channels to find public keys. I'm currently experimenting with this.

There's also IPNS, but in my experience its lookups are incredibly slow and unreliable. It could get better, and I know various studies are being performed and experiments with different approaches and parameters are being tried, so I'm hopeful something will come of it.

As far as I remember, the main reason Radicle diverged from these approaches (and IPFS/IPNS in particular) was to allow peers to negotiate delta transfers, like (non-dumb) Git HTTP servers do. That's more efficient than using something like IPFS/IPNS, where the root address keeps changing, and we need to fetch a load of the Merkle tree to find out which blocks we already have.


IPFS does not intentionally replicate (it merely caches). Bringing down the authoritative server can result in lost data. Anonymity is also out of scope for the project.


Yeah that’s been my experience with IPFS. Very cool idea, practically doesn’t work very well. Haven’t tried recently though, maybe it’s improved.


You're missing the discovery part. You want to get the repository X from user Y cloned - how do you find it? Especially if you don't know Y and their computer is off?

Also radicle does want to tackle the issues / prs and other elements you mentioned as well.


How do you find a website?

And presumably the person hosting it will make sure that the computer hosting it is often on. For instance, ISP routers and TV boxes are a good way to popularize it, since they often come with NAS capabilities:

https://en.wikipedia.org/wiki/Freebox

(Notably, it also supports torrents and creating timed links to share files via FTP.)


Depends on what you mean by finding:

- finding what the domain name is?
- resolving the DNS to an IP address?

Radicle solves both problems in theory, but more the latter than the former right now:

- there is some basic functionality to search for projects hosted on Radicle, to find the right repo id (I expect this area will see a lot more activity and improvements in the near future),
- given a repo id, actually getting the code onto your laptop. This is where the p2p network comes in, so that the person hosting it doesn't always need to keep their computer/router/tv box on, etc.


I think this already exists for issues. git-bug [1] uses git internal files to store the issues. It is distributed and it even comes with a web ui in addition to the usual cli.

[1]: https://github.com/MichaelMure/git-bug


A friend of mine wrote a similar tool. https://github.com/nolash/piknik.


do you know of any projects using [anything like] git-bug?

i know i've encountered something like this once in a notable repo. thought it was graphics related, like mesa or something, but looks like they're using GitLab.


Most CI runners use git notes, which is similar to what git-bug uses, iirc


Fossil has a few of these.


From their documentation:

> It’s important to only publish repositories you own or are a maintainer of, and to communicate with the other maintainers so that they don’t initialize redundant repository identities.

Based on my experience with people taking my code and shoving it onto GitHub--as well as separately in my demoralizing general experience of putting random little "please for the love of all that is holy don't do X as it will cause problems for other users" notices in the documentation or even as interstitial UI (!!) of my products and watching everyone immediately do exactly that thing as no one reads or thinks (or even cares)--a large number of people aren't going to honor this request in the documentation... and, frankly a large number of people aren't even going to see this in the first place as the home page tells you how to push code but you only find this "important" request in the "user guide" that people definitely do not bother to read.

It thereby seems quite concerning that, apparently?!, this system is designed in a way where doing what feels like a very reasonable thing to do--just pushing whatever open source code you are working on, based on the instructions on the home page--is going to interact badly with how this protocol stores things, such that something important enough to have this separate boxed "important" statement in the documentation is going to get cluttered and maybe even confusing over time :(.


I don't think there's anything "special" here. You have the same problem currently where finding the canonical location of a repository is done via some out-of-band social network or website.

On GitHub, you also can look at the stars to give you extra confidence, and on Radicle the equivalent is the seed count for a given repository.


Then why does the documentation say this is "important"? GitHub certainly does not have a notice anywhere saying "it's important to only publish repositories you own or are a maintainer of" (...well, I guess it could be buried deep in some user guide I never read, lol).


I think it's currently more likely to happen on Radicle given there is no search or discovery functionality, and repositories exist in a flat hierarchy, i.e. they are not namespaced by user/org name, so it's harder to distinguish them if they share the same name and description.


Why are those items not included? Being able to browse one org/developer's repos is a very useful indicator when investigating a new unknown repo/project/org/person, trying to determine if the risk of time investment is worth the effort.


Maybe Kagi could add this to their custom index.


> putting random little "please for the love of all that is holy don't do X as it will cause problems for other users" notices in the documentation or even as interstitial UI (!!) of my products and watching everyone immediately do exactly that thing as no one reads or thinks (or even cares)--a large number of people aren't going to honor this request in the documentation

Kind of off topic, but you shouldn't get annoyed at people for ignoring your notices and not reading the docs. It's an extremely logical thing to do. Think about it - how many notices do you see in a typical day of computing? Probably dozens. How many tools do you use? Also dozens. Now imagine how long it would take if you read all of those notices, and exhaustively read the documentation for every tool. Too fucking long!

It's much better to use heuristics and not read. For example if you close a document and you've made unsaved changes to it, you know the dialog is going to be "Do you want to discard it?". There's no point reading it.

This is a good thing!!

So the conclusion is that you should design your software with the knowledge that people behave this way. It is usually possible to do so. If you give a concrete example I can probably suggest a better solution than "asking and hoping they read it".


I spoke in the past tense, and already learned this lesson back 20 years ago; you can tell that I believe software can and should be coded to avoid such issues from the position I took with my comment: that it was concerning that the software would stop working not if but when people do not read this "important" notice. Although, maybe you didn't actually bother to read the rest of my comment, and so failed to appreciate my actual point, given how you just quoted something near the beginning which was mere evidence and focused on it with what feels a bit like an axe to grind ;P.

Which, though, leads me to something I will say in response to your reframing: while I do believe that one must build systems with the understanding that people will not read any of the documentation, we should still judge people for the behavior. I am pro commingled recycling, and yet I also believe that people who don't even try to read the signs on top of a series of trash bins are shirking a duty they have to not be a jerk, the same way we should be angry at people for not knowing local laws even if we put them on giant signs on the street as they'd rather just be lazy.


Isn't the GitHub way of doing things that you add a copyright notice to your code, identifying your repository as the source, and changing the copyright is illegal? That would be applicable to this as well.


Congrats on the launch! I’ve been following this project and I’m really excited to see how much it has matured. For projects currently on GitHub, what’s the best way to migrate? Is there a mirror mode as we test it out?


Thanks! There is no mirroring built-in yet, though this is something we're looking into. It should theoretically be as simple as setting up a `cron` job that pulls from github and pushes to radicle every hour, e.g.:

  git pull github master
  git push rad master


In addition, in order to migrate your GitHub issues to Radicle (which the above doesn't cover), there's this command-line tool [1] that should get you most - if not all - of the way there.

Migrating GitHub Pull Requests (PRs) to Radicle Patches is somewhat more involved, but that should still be possible (even if it involves some loss of information along the way, due to potential schema mismatches) ...

[1] - https://github.com/cytechmobile/radicle-github-migrate


Good work!

The main value capture at Github is issue tracking, PR reviews and discussion. Maybe not today, but is there an automated way to migrate these over in the future?


Yup, you can do this today! There's already this tool https://github.com/cytechmobile/radicle-github-migrate, built and maintained by the community, and which is already quite capable.


I wonder how discoverable (for normal people) these repositories are. It looks like https://app.radicle.xyz/robots.txt doesn't exist, so it seems like fair game for search engines, and indeed a search on Google and DDG for

    site:app.radicle.xyz 
does give some results. Maybe not that high up yet if not using that site filter, perhaps the ranking will improve?

Tools for integrating CI support with this would also be nice to see. Ultimately a loop with

    while true; do wait_repo_update; git pull && ./run_ci.sh; done
but something nicer that you could limit to pushes by trusted identities only.

And then finally artifact storage. But maybe Radicle doesn't need to solve everything, in particular as a distributed network for sharing large binaries is going to get some undesirable uses pretty fast..
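That "trusted identities" gate could be sketched as follows (hypothetical event shape; a real integration would verify the signature itself rather than trusting a pre-filled field):

```python
# Allowlist of key fingerprints whose pushes may trigger CI (made-up values).
TRUSTED_SIGNERS = {"ed25519:aa11", "ed25519:bb22"}

def should_run_ci(push_event: dict) -> bool:
    # Run only for pushes whose (already verified) signer is allowlisted.
    return push_event.get("signer") in TRUSTED_SIGNERS

print(should_run_ci({"ref": "refs/heads/master", "signer": "ed25519:aa11"}))
print(should_run_ci({"ref": "refs/heads/master", "signer": "ed25519:evil"}))
```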


We are actually working on a number of CI integrations and building our own native one, for our needs.


> building our own native one, for our needs.

I realize I'm just some rando on the Internet, but I'm begging you please don't introduce Yet Another CI Job Specification™

I'm sure you have your favorites, or maybe you hate them all equally and can just use a dartboard, but (leaving aside the obvious xkcd joke) unless you're going to then publish a JSON Schema and/or VSCode and/or IJ plugin to edit whatever mysterious new thing you devise, it's going to be yet another thing where learning it only helps the learner with the Radicle ecosystem, and cannot leverage existing knowledge

It doesn't even have to be yaml or json; there are quite a few projects which take the old(?) Jenkinsfile approach of having an actual programming language, some of them are even statically typed

I also do recognize the risk to your project of trying to fold in "someone else's" specification, but surely your innovation tokens are better spent on marketing and scm innovations, and not "how hard can it be" to cook a fresh CI job spec

I likely would have already written a much shorter comment to this effect, but having spent the past day slamming my face against the tire fire of AWS CodeBuild, the pain is very fresh from having to endure them thinking they're some awesome jokers who are going to revolutionize the CI space


Appreciate both the clear feeling and nuanced take here!

It’s interesting, because it’s like the problem is partly that most of the CI offerings out there are at least a little bit gross, but also the vast number of mediocre CI offerings is a factor too.

It feels like it’d be easy to convince yourself that what you’ve built is better than everything that exists already, and hey, maybe it is! But personally I wonder if what we really need is a step-change here, not an incremental improvement—something that really does make build and deploy easier, and changes how we all think about it too.


My life experience has been that answering the question is almost always a matter of "easier ... for whom ... to do what?" I think CI/CD systems often run up against the same problem that programming language adoption runs into: trying to be all things to all people for all problem domains is incredibly hard

Even what I mentioned about static typing I'm sure caused a blood-pressure spike in some readers, since some folks value the type safety and others consider it "line noise". Some people enjoy the double-quote free experience of yaml, others pound on their desk about the 7 ways to encode scalars and "but muh norway!!11"

But, taking our ragingly dumbass buildspec friend <https://docs.aws.amazon.com/codebuild/latest/userguide/build...> as a concrete example, how in the holy hell did they invent some build language without conditionals?! I'm sure the answer is "well, for our Amazon-centric use case, you're holding it wrong" but for someone coming from GitLab CI and GitHub Actions which both have "skip this job if today is Tuesday" it's a pretty glaring oversight


Hey, we are having fun doing it :) It is already pretty nice for testing things and show us the results. It is not going to be forced on users at all and it will share parts of the code with all the third-party CI integrations that we are working on.


It's a good point - I think "gateways" such as `app.radicle.xyz` will have to allow crawlers to index the full set of repositories on the network.


I wish people would define precisely what they mean by "peer to peer" (or more commonly, "distributed"). It's such an ambiguous term now that it can mean anything when used as a buzzword.


I haven't seen the term misused very often - the way it is defined in Radicle and most other peer-to-peer systems is how Wikipedia defines it[0]; specifically this part: "Peers are equally privileged, equipotent participants in the network".

So a peer to peer system is one where all participants are "equally privileged in the network". This usually means they all run the same software as well.

[0]: https://en.wikipedia.org/wiki/Peer-to-peer


I mean, that definition doesn't fit with supernodes ("seed" nodes in your design) but that is a nitpick.

I guess I'm mostly just wondering what are the properties you are trying to accomplish. Like there is talk of publicly seeding repositories that are self-certifying, but also using noise protocol for encryption, so what is the security model? Who are you trying to keep stuff secret from? It is all very confusing what the project actually aims to do.

Mostly all I'm saying is the project could use a paragraph that explains what the concrete goals of the project are. Without buzzwords.


I've answered the use-case question here: https://news.ycombinator.com/item?id=39601588

But yes, we're not officially launched yet and the website is going through a rewrite to offer more clarity, thanks for the feedback.

Re: seed nodes: they are running the same software and work the same way as regular nodes, the only difference is how they're deployed (with a public IP address vs. behind a NAT). But yes, a little bit of asymmetry is needed because of NATs/IPv4.

Re: properties: mainly we need to provide encryption and self-certification to enable a similar user experience as GitHub/GitLab/etc. on an untrusted peer-to-peer network. Additionally though, Radicle offers a level of censorship resistance and disruption tolerance that GitHub cannot offer.


> I've answered the use-case question here: https://news.ycombinator.com/item?id=39601588

I appreciate you think you have, but you haven't. Appeals to vague, lofty notions of digital sovereignty and other political values are not what I'm looking for and don't really mean much. I'm more looking for a threat model. What does the network accomplish? What are its limits? What are its intended properties?

> Additionally though, Radicle offers a level of censorship resitance and disruption tolerance that GitHub cannot offer.

That seems unlikely (at least as it stands now). Pretty sure it would be much easier to DoS the entire Radicle network than to DoS GitHub. It all depends on who you think your attacker is. GitHub has an excellent track record of standing up to China (for example, the NYTimes archive on GitHub is primarily about bypassing the Great Firewall of China); it is much weaker on DRM circumvention. All these things depend on how you define them. If you don't define them, and rigorously analyze them, then your censorship resistance is probably wishful thinking.


All nodes can still have equal privilege. Data must originate from somewhere, that is a seed node. And supernode is, or at least was when I studied CS, basically just a more connected node. That said, I agree, a project like this could do with a more formal and structured definition of goals.


> Data must originate from somewhere, that is a seed node

From what i read in their docs, that is not how they are defining seed node.


From https://app.radicle.xyz/nodes/seed.radicle.xyz/rad:z3trNYnLW...

> A seed is a node that hosts and serves one or more projects on the network.

I have not read the entirety of their documentation in great detail, but what I did glance over, I did not see anything particular special about seed nodes other than they host and serve projects. And obviously there is a bit of plumbing required for that.


>Installation

>

>The easiest way to install Radicle is by firing up your terminal and running the following command:

>

>$ curl -sSf https://radicle.xyz/install | sh

Ah.. my high hopes were immediately dashed by the trash that is curl-bash. What a great signal for thoughtless development, if this project catches on I can't wait to watch the security train wreck unfold. Maybe someday we'll get an "Open-Source, Peer-to-Peer, GitHub Alternative" that doesn't start with the worst possible way to install something.


This is an overreaction, almost to the point of absurdity.

Risks inherent to pipe installers are well understood by many. Using your logic, we should abandon Homebrew [1] (>38k stars on GitHub), PiHole [2] (>46k stars on GitHub), Chef [3], RVM [4], and countless other open source projects that use one-step automated installers (by piping to bash).

A more reasonable response would be to coordinate with the developers to update the docs to provide alternative installation methods (or better detail risks), rather than throwing the baby out with the bathwater.

[1] https://brew.sh/

[2] https://github.com/pi-hole/pi-hole

[3] https://docs.chef.io/chef_install_script/#run-the-install-sc...

[4] https://rvm.io/rvm/install


FWIW, Homebrew no longer deserves quite such ire: you will note that it explicitly does NOT pipe the result to a copy of bash; by downloading it first and quoting it using a subshell, it prevents the web server from being able to get interactive access.


A bit dramatic here, are we?

The script is safe regarding interrupted transfer, unless you happen to have dangerous commands on your system matching ^(t(e(m(p(d(ir?)?)?)?)?|a(r(g(et?)?)?)?)?|i(n(_(p(a(th?)?)?)?|fo?)?)?|s(u(c(c(e(s?s)?)?)?)?)?|f(a(t(al?)?)?)?|m(a(in?)?)?|w(a(rn?)?)?|u(rl?)?).

And after that's been handled, well, what's the difference to just providing the script but not the command to invoke it? Surely if one wants to review it, downloading the script to be run separately is quite straightforward. (Though I believe there was a method for detecting piped scripts versus downloaded ones, I don't think it works for such small scripts.)
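For reference, the usual defence against interrupted transfers is the pattern install scripts like this tend to use: wrap all work in a function and call it on the last line, so a truncated download only defines an incomplete function and executes nothing. A minimal sketch (the function body is illustrative, not Radicle's actual installer):

```shell
#!/bin/sh
set -eu

main() {
  tmpdir=$(mktemp -d)
  echo "installing into $tmpdir"
  # ... download release, verify checksum, move binaries into PATH ...
}

# Nothing runs unless this final line arrived intact.
main "$@"
```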


Here you go [0] - the project hasn't launched yet and there are bits and pieces to be dealt with, the current focus is a bit somewhere else. You can also build from source [1] with Rust's cargo.

[0] https://files.radicle.xyz/latest/

[1] https://app.radicle.xyz/nodes/seed.radicle.garden/rad:z3gqcJ...


Thanks but... no thanks, you've missed my point entirely. Why would I want to run peer to peer software built by developers whose security stance starts with curl-bash? Would you curl-bash a webserver? an email server? No? Probably even worse for your source code repository then right?


The problems with curl-bash are overblown. You are pretty much exactly as vulnerable running pip install, npm install, or cargo install.

Not that curl bash is great, but it's not uniquely horrible when the goal is to run some unvetted code on your machine.

If you care about security, you have to either vet the code or trust the source. When you install through your package manager, you're trusting the maintainers. When you install from curl bash, a random website, or any unvetted software source, you are electing to trust the developers or site directly.


The big difference with curl|bash is that the download itself gets to execute in the context of the computer as it is downloading. That is a super power that makes it much easier to hide behaviors, as you can make it extremely difficult for people to ever download a dead copy of the script to analyze it for malware.


Packages on those systems do get blocked at times. So no, not as risky.


The counterpoint would be: you're intending to run their code, if it's malicious then you're hosed anyway.

In bygone times, one might suffer from a truncation attack or otherwise end up running arbitrary code that's not what the vendor intended. Nowadays, there's really no security difference in curl|bash vs downloading a package and running it. Or, indeed, installing using `cargo install`.

That doesn't mean I'm happy running it, but my argument against it is less a security argument and more a predictability one: I want to be able to cleanly uninstall later, and package managers normally provide more consistent support for uninstalling than an arbitrary shell script.


The cleanup and uninstall concern is one of the reasons I run as many things in containers as I can. It's so easy to blow away a container and its volumes compared to traditional software uninstallation.


Yeah, it’s much better to npx something or install a package off the AUR. Definitely much safer.


Download it or don't. Trust the maintainer or don't. Whether you trust the maintainer shouldn't depend on the installation method, not even with curl-bash.


I hear about Radicle every time crypto market goes up. Is anybody seriously using it ?

This got down-voted so fast! :)

Serious question though: how much budget was spent on Radicle, how many people did work on it, and who is using it ?


Fair question.

I'm working in the crypto industry and I had the same impression.

Last time I heard about Radicle was the last bull market. Then it was silent in the bear, which is kinda strange, since everyone is always saying bear markets are for building, and Radicle certainly is a builder tool.


Are there any plans to support this use case: offering repositories only to a set of nodes? I can imagine people wanting to collaborate in private but not wanting to be on GitHub.


Yes, these are what Radicle calls "private" repositories. They are invisible to the rest of the network, and only shared amongst trusted peers. Note that they are not encrypted at rest, which means they cannot be stored on intermediary nodes that are not part of the trusted set.


Off-topic: This reminded me of NESticle.

https://en.wikipedia.org/wiki/NESticle


THANKS SHITMAN!


well that's a name I didn't expect to see coming into this thread, lmao.

so many good memories of that software but for some reason I'm remembering a red theme.


Congrats on launching ! Reminds me of another similar project, nest.pijul.com but using pijul instead of git


Thanks, we haven't officially launched though!

Pijul is a great project indeed :)


This could enable development of projects like forks of Yuzu, with reduced risk of DMCA interference.


So it uses git, right? The readme should make that clear.


Yes, radicle is built on top of git and even uses git as its storage backend for its own data model. [1] has more details on how Radicle depends on git.

[1] https://app.radicle.xyz/nodes/seed.radicle.xyz/rad:z3trNYnLW...


yeah, the underlying storage layer is git. There is more information on https://radicle.xyz/ about how it uses git


I'm curious why dual license with both the MIT and Apache licenses.

This is not a criticism, and I could be wrong about this, but doesn't the MIT license allow anyone to essentially bypass any extra responsibilities provided for in the Apache license? Specifically I'm thinking of the patent license grant provisions. I don't think the MIT license has anything to say about patents.

And if that is the case then why not just license it MIT?


Contributions to the repo have to abide by both licenses; derivative work gets to choose either one.


That explains it perfectly, thank you!


A bit of a naive general question, but why are these things not layered on top of existing technology?

You already have Bittorrent for distributing files P2P. We "just" need an extra layer for discovering new updates/patches so that files can dynamically update/grow. These two problems seem fundamentally decoupled. The "git forge" aspect seems to be another fundamentally separate layer on top of that


We tried. At first we built it on top of IPFS. It was much too slow. BitTorrent is interesting but we need a way to have mutability (repos change all the time). So we built the networking layer ourselves and the forge on top of that.


If you built a mutable bittorrent layer yourself (like ipfs but better), then why not make it its own separate thing?

If that's what you've managed to pull off, that's like a way bigger deal than a p2p gitforge (not that that isn't super cool in itself)

I guess architecturally why does it need to be coupled to git and a git forge?


It's optimized for certain workloads around code collab, so for now we don't want to oversell it. It doesn't have to be coupled with Git, though Git is very efficient at synchronizing changes. The protocol currently can be used for other things than a forge, but having an application influence protocol development is very helpful.


How is this related to the $RAD coin?


From a technical standpoint, Radicle (P2P git protocol) is not related to $RAD.

$RAD is the token of the organization that has been funding Radicle over the years.


If the RAD token has nothing to do with their product, why does it have value? Did/do they have some other product that uses the token?


There is governance value in the token. Whoever holds that token can vote on Radworks governance proposals.


Why does this website try to connect to localhost on http://127.0.0.1:8080/api/v1/node ?


If you run their service locally it displays the connected account and you can interact with the app.


Fairly arrogant to assume port 8080 is unused for other things on localhost.


It's just the default, it can be changed.

And it wasn't free on my host either :). Indeed 8080 is maybe not the best port to select for this app, because it is more likely to be in use than "some other" port.


local-first [1] software ;)

That is the default port for `radicle-httpd`: an HTTP API that would allow you to authenticate (using your public/private key pair, that is stored on your machine), so that you can perform actions on the web-based interface as a specific user, through your local radicle node.

[1] - https://www.inkandswitch.com/local-first/
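For the curious, you can reproduce what the site does with a quick probe of that endpoint (URL taken from the request above; the port is configurable, so 8080 is an assumption):

```shell
# node_running: succeed if a radicle-httpd instance answers at the given
# base URL (defaults to the endpoint the website probes).
node_running() {
  curl -fsS --max-time 2 "${1:-http://127.0.0.1:8080}/api/v1/node" >/dev/null 2>&1
}

if node_running; then
  echo "local node detected"
else
  echo "no local node"
fi
```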


Maybe that is where you would have your local copy of Radicle running?


My question isn't related to Radicle but to P2P in this sense in general: why should I store someone else's data, and why should someone else store my data? Doesn't it make it easier to access?


That's a great Q.

Radicle can support a federated model, where known major seeds are connected with multiple smaller clusters. Radicle also supports completely self-sustaining and disconnected clusters of nodes, networked between themselves within that cluster. And of course any other network topology in between.

There's a promising active proposal to establish a dedicated new Radworks Organization tasked with solving the incentivization and reward problem for seeds. https://community.radworks.org/t/discussion-rgp-22-start-the...

Additionally, similar to how one can "star" a repo on GitHub, one can "seed" a repo on Radicle. "Starring" a repo is often a show of support, akin to an emoji reaction, with little effect beyond that, but in Radicle "seeding" a project goes beyond incrementing a vanity metric: it actively supports propagating that project across the Radicle network. The count of seedings per repo can also be used as a differentiator between original and "copy-cat" repos.


I see no information about properties in README.md, and ARCHITECTURE.md is empty.

What are the capabilities?

If a node is down, would other nodes step in? Where's stuff stored? How is it replicated?




Genuine question ... isn't there an inherent latency issue with Peer-to-Peer?

and as such, it makes for a poor user experience on the web.

(when you're just downloading files over P2P, this isn't an issue or noticeable - but when you're interacting with a web site, it is)

EDIT: why the downvotes? I'm just asking a question.


It's a good question, I don't know why you're downvoted.

Because the synchronization protocol (backed by Git) is operating in the background, web frontends are always just querying local data, so it's actually quite fast. You can try browsing the linked repository and see for yourself.


Even a slightly critical comment gets downvoted instantly in this thread, I wonder why ;)


I really want to have the problems this solves.


Can this handle patch stacks or is this just another pull/merge request model with all the flaws that entails?


It can handle them, though we haven't built that much tooling around them. However, unlike GitHub, updates to PRs (Patches in Radicle) are non-destructive, just like Gerrit[0], and code reviews are tied to specific revisions of patches. This is in my opinion one of the biggest flaws in GitHub's model.

[0]: https://www.gerritcodereview.com/


That's indeed the biggest flaw in the Github PR model. I've been hoping for a gerrit-like code review experience in a Github alternative for years. I'll be sure to try Radicle.


Is this the same Radicle that issued a crypto token ($RAD)?

If so, I'm glad that it completely failed and they decided to focus on the actual product of a 'P2P GitHub'.

Although stay away from their 'drips' crypto thing, looks like a tax and accounting nightmare for individuals and businesses.


isn't git already the open source, p2p Github alternative? coders will do practically anything to avoid learning `git rebase` . ( don't read too deeply on this chaps)


It is if you don't care about any of the other things that GitHub brings to the table.

I fail to see what `git rebase` has to do with issue trackers, project boards, wikis, repository notifications, or any of the other things that GitHub does. I use git forges as well as `git rebase`. Neither of these things precludes the other.


GCM glorified commit messages


I find them useful, as do many others. They can do many things that commit messages obviously can't.

You can technically coordinate many of the other things through external tools like email, but email sucks, and there is real value to having them all in one place.

Obviously, I'd rather have all these things part of the repo itself, like with Fossil. That's what Radicle is trying to do, it looks like.


There should be a way to run git over i2p.

Also, git over yggdrasil should be easy because there are just ipv6 addresses. And, in the worst case, I think 6to4 tunnels would work.


> There should be a way to run git over i2p.

https://geti2p.net/en/blog/post/2020/03/06/git-over-i2p see "Third: Set up your git client tunnel"

But like most things in the I2P ecosystem, not seamless.


As long as it runs with an i2pd service in the same easy way as irc/usenet or email, I'm sold.


Support peering over the Tor network like what briar does. That way, all peers can fall back to tor when they're behind restrictive firewalls.


We've designed Radicle with Tor support in mind, via Socks5 proxy!


and I thought I was cool for knowing about https://codeberg.org/


This looks wonderful, I'll read more on details and follow the project!

Does this suffer from the code search problem, or are there plans to somehow introduce that?

The main problem of decentralized and federated code management projects is that I still go to github (not even gitlab) when I want to see what other people do, how they use a lib or something, and I can search issues, too.

So we obviously can't have each of our small servers serve code search requests from all the world's developers.

...a sync-and-search-only project is probably a job for someone like the EFF, or non-profit orgs that already have sufficient funding... has anyone heard any talks in that regard?


> p2p, signing, local first, yadda yadda

curl | bash is the recommended way to install.


damn you're not joking


What are the most common use cases this provides a solution for?


In the long term, this is intended as an alternative to collaboration platforms like GitHub and GitLab for people/organizations who want full control of their data and user experience, without compromising on the social aspect of these platforms.

The first three paragraphs of the guide has a longer motivation: https://docs.radicle.xyz/guides/user


Feedback: consider adapting the docs for mobile view.

Feedback 2: a short tl;dr about the short-term use cases would be great :-)


Yes, working on it!


any plans for adding localization to the UI?


We're a small team, but if there is enough demand for it, then yes.


Their monetization strategy is pretty critical for people who’d sink their time into the service and entrust it with the code for long-running projects. So… how do they plan on making money off of this? If they can’t or won’t say, what sort of projects do they imagine they’d attract in spite of that? (e.g. ephemeral ones? Data sets about current events?)

Downvoters: do you not think their monetization strategy is important to potential users? Surely their investors didn’t throw that money at them out of the goodness of their hearts, and surely it’s apparent how that could affect their users in the long run.


this is a very VC-brained comment to make on a peer-to-peer open source project. let's instead ask if there are any single points of failure to the protocol and service, and if so, are those sustainable regarding developer time, effort, and compensation?


> this is a very VC-brained comment to make on a peer-to-peer open source project. let's instead ask if there are any single points of failure to the protocol and service, and if so, are those sustainable regarding developer time, effort, and compensation?

Crunchbase said they raised at least 12m as a “fully decentralized code repository”. I’d say presenting your open source project without saying it’s VC-backed is the only “VC-Brained” thing happening here.


You are commenting on the hype channel of one of the biggest VC firms in the world.

(there was a time _when that comment was the default type around here_, actually)


That aside, it's also a VC-backed startup. I've been burned enough times by "we promise we're going to be the good kind of corporate!" types to ask upfront now if a big chunk of someone else's money is funding something that doesn't charge their users anything.


Incredible. They throw some indie-sounding buzzwords out and that's enough to make the business model unimpeachable?

Over the past few decades we've seen many cynical capitalists riding the wave of "peer to peer open source" for personal gain. It's absolutely within scope to discuss how a company's business model may affect their ability to deliver on the supposed mission.


I imagine the person responding to my initial comment just didn't realize it was a VC-backed business rather than a regular FOSS project. The repo readme doesn't seem to indicate otherwise, so I can see why they'd have gotten that impression.


What about codeberg.org?


What about it? It‘s an almost completely different product.

Codeberg is like GitHub.com, GitLab.com, or sr.ht: a centralized hosted solution.


The software behind Codeberg is Forgejo, which is a fork of Gitea. The team of Forgejo is working on a federation protocol based on ActivityPub. Once it is done, it will be able to exchange data with other Forgejo servers and any server supporting that protocol. So, we may expect that Codeberg will transform from centralized to federated.

sr.ht chooses another approach. You only need an email address to submit code, file issues, join discussions, etc. From the perspective of source hosting, it is centralized; but from the perspective of project collaboration, it is decentralized.
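That email-based flow uses only stock git plumbing; a sketch (the demo repo and the list address are made-up examples):

```shell
# Make a throwaway repo with one commit to demonstrate the patch flow.
demo=$(mktemp -d) && cd "$demo" && git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo change"

# Turn the latest commit into a mailable patch file.
git format-patch -1 HEAD          # writes 0001-demo-change.patch

# Then mail it to the project's list (hypothetical address):
# git send-email --to="~owner/project-devel@lists.sr.ht" 0001-*.patch
```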


Federated is nice, but with Radicle you don't need a server with a publicly accessible IP, so you can pull and push with just a node running on your laptop—though I understand there still need to be some nodes with publicly accessible IPs due to NAT, and it doesn't seem Radicle is (yet?) doing NAT punching/STUN/TURN.

Well, at least you don't need a name or a certificate for the server, I assume its id works as its cryptographic identity.


Ah I see, I've not worked with any, however I do become curious about anything labeled as "Github alternative".

I know this movement since Github started with their "doubtful code scanning" that people are looking towards alternatives.

Not the least: good job!


codeberg are censors. People should be migrating elsewhere. https://codeberg.org/themusicgod1/codeberg-is-corrupted-dont...


Isn’t git already open source and peer to peer?

So this is just a web interface to git? Like gitlab?


It's an open source alternative to GitHub, not git.



Pedantic, but this seems like a git alternative, not simply a GitHub alternative.


There appears to be a git remote helper in the repo, so this will work just fine with standard git.
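For context, git's remote-helper mechanism is simple: for a URL scheme such as `rad://`, git runs an executable named `git-remote-rad` and speaks a line-based protocol on its stdin/stdout. A toy sketch of the opening handshake (illustrative only, not Radicle's actual helper):

```shell
# helper_loop: answer git's remote-helper handshake. Real helpers go on
# to implement the commands they advertise (fetch, push, ...).
helper_loop() {
  while IFS= read -r cmd; do
    case "$cmd" in
      capabilities) printf 'fetch\npush\n\n' ;;  # advertise commands, end with blank line
      "") break ;;                               # blank line ends the session
      *) printf '\n' ;;                          # unknown command: empty reply
    esac
  done
}

# prints "fetch", "push", then a blank line
printf 'capabilities\n\n' | helper_loop
```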


That's a neat name! If "seeding" is the word for distribution in a peer-to-peer network, then a "radicle" (not a "radical"!) must be named after:

- "In botany, the radicle is the first part of a seedling (a growing plant embryo) to emerge from the seed during the process of germination.[1]"

https://en.wikipedia.org/wiki/Radicle


>a "radicle" (not a "radical"!)

I'll just mention that etymologically both "radical" and "radicle" come from the Latin "radix", meaning "root".


dang, seems like they missed out by not going for "radix"


Good thing they avoided that. "Radix" is a much more common word, easier to clash with other same-named things, harder to search for.

I greatly disapprove of the fashion of naming projects with common words. Names like Flickr or Google or Linux or Inkscape are effortlessly unique. Names like Snowflake are self-defeatingly commonplace.


Indeed. Radicle is "tiny root", a noun, while "radical" is "pertaining to root, root-level, deepest possible", an adjective.


Sorry for nitpicking but "radical" is also used as a noun, particularly when looking for roots of an algebraic expression, as well as in chemistry (free radicals).


Certainly, a number of Latin adjectives turned into nouns in English, like radical, terminal, solid, tenant, etc. Same even happened to some verbs, like caveat or video.

But what can you expect from a language that allows one to tape a talk, or to circle back about an ask?


Going to be pretty confusing between Radicle and Radicale ( https://radicale.org/v3.html )


Much less so than Amazon and Amazon, Meta and meta, and Threads and threads.


> Threads and threads.

And don't forget Thread! Pretty annoying when you're trying to learn about Thread on ESP32 and you just get stuff about threads.


Just search for matter or CHIP instead.


[flagged]


Thanks Mr Ryan bot


Unfortunately, it's developed by crypto-brained guys.


The best approach to building a GitHub Alternative would be to build a GitHub for Data (merge SQL changes etc) and then extend that to GitHub for Code and later to GitHub Alternative for anything.


1. Lower left, device isn't connected? What device?

2. Domain ends with the nonsensical .xyz, my email server would block all email traffic from them.

3. The default dark theme isn't readable by about 40% of the human population. It can be changed to a light theme, that's nice, but the light theme is some sort of puke light purple.

4. "Run the following command and follow the instructions to install Radicle and get started." I have to use your custom tool called "rad"?

No thanks. Even though GitHub is owned by Microsoft, I'd rather use it.



