Please don't hand-wave away the UX challenges of getting to a GitHub replacement. We need a way that someone can access and leave comments on a project that's better than "well, first I would have joined the project mailing list months earlier". We need to not depend on people running their own 24/7 server infrastructure. We need to not require command-line proficiency. We need to understand that infrastructure has inertia, and mindshare can be everything.
For example! Gmail rejects outgoing email from most home ISPs, and mangles patches in ways that cause vger.kernel.org to just reject email from Gmail, as I understand it. Is that really our example of an accessible federation?
Are we actually trying to compete, or just trying to build ourselves tiny gardens where we can say to ourselves that we're doing things the right way, even though no-one else is?
Your configuration has to go with the flow: at a minimum, it's recommended to configure SPF or DKIM. Receiving servers calculate a score for each email, and both improve it; DMARC improves it further. In fact, it's not only Gmail that does this; most non-trivial mail setups do inbound filtering.
Nowadays there are even Docker containers for that. I'm not saying email is great, but automation will get you really far, and there doesn't seem to be a decentralized alternative for it yet. (Obviously there are centralized alternatives; and of course there's XMPP.)
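The scoring idea above can be sketched as a toy inbound filter. The weights here are invented purely for illustration; real filters (SpamAssassin, for example) combine many more signals:

```python
def spam_score(spf_pass: bool, dkim_pass: bool, dmarc_pass: bool,
               base: float = 5.0) -> float:
    """Toy inbound score: lower is better. All weights are illustrative,
    not taken from any real filter."""
    score = base
    if spf_pass:
        score -= 2.0   # sending IP is authorized for the envelope domain
    if dkim_pass:
        score -= 2.0   # message is cryptographically signed by the domain
    if dmarc_pass:
        score -= 1.0   # domain publishes an alignment policy that passed
    return score

# A fully authenticated mail scores better than an unauthenticated one.
assert spam_score(True, True, True) < spam_score(False, False, False)
```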
You need a mail server. You can run it yourself if you're into that sort of thing, but you can't run it off a residential/consumer uplink. Sorry, this one's non-negotiable. Then your home server authenticates to your mail server as a client and sends email through it. The mail server's address is recorded as the IP of origin, not your home address. MTAs are already designed to do this with nearly zero effort on your part, so you don't have to change your workflow, just your config file.
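The submission step described here can be sketched with Python's standard `smtplib`; the hostname, port, and credentials are placeholders, not a real service:

```python
import smtplib
from email.message import EmailMessage

def send_via_relay(msg: EmailMessage,
                   relay_host: str = "mail.example.com",  # hypothetical relay
                   relay_port: int = 587,
                   user: str = "me@example.com",
                   password: str = "app-password") -> None:
    """Submit a message through an authenticated relay (smarthost) instead
    of connecting to the recipient's MX directly from a home IP."""
    with smtplib.SMTP(relay_host, relay_port) as smtp:
        smtp.starttls()           # encrypt the submission channel
        smtp.login(user, password)
        smtp.send_message(msg)    # the relay's IP becomes the origin, not yours

# Build a message; send_via_relay(msg) would submit it through the relay.
msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "maintainer@example.org"
msg["Subject"] = "[PATCH] fix off-by-one"
msg.set_content("Patch body goes here.")
```

An MTA like Postfix achieves the same thing with a `relayhost` setting; the point is that the home machine is a client of the relay, not a peer of the destination server.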
Don't want to pay for a mail server? Good news! There's like a gazillion services that actually do this for free. Gmail actually turns out to be one of them. Don't want to have to use Oauth? Good news! Gmail's not the only mail service. There's ten billion others.
It totally should be. If you use SPF and DKIM, that should override distrust of IP addresses. If your domain has good reputation and SPF and DKIM prove that you are authorized to send using your domain, then only the reputation of your domain should be considered (and affected) when processing the inbound email.
> Then your home server authenticates to your mail server as a client, and send email through your mail server.
That just overcomplicates things: now you have to maintain two mail servers. Just set up a tunnel to route your server's public addresses to your home server; then you can send directly to wherever you like using static addresses. This also has the advantage that TLS is terminated on your own hardware rather than on a cheap hosting provider's systems of potentially questionable security, so you need to place less trust in the provider's security and data protection practices.
> Don't want to pay for a mail server? Good news! There's like a gazillion services that actually do this for free. Gmail actually turns out to be one of them.
Giving power to Google both over your data and over the direction of email in general is not free. That's the one thing everyone should finally grasp. Using Facebook isn't free, using GitHub isn't free, using Gmail isn't free, ...
There are several IPs that are completely banned on my firewall because they send shitloads of spam over dozens of domains. And some IPs are just inherently not trustworthy (Tor exit nodes, North Korean IPs, etc.)
Everyone can set up an SPF and DKIM record on their domain. It's not hard.
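For illustration, the records are just DNS TXT entries along these lines; the domain, IP, selector, and key are placeholders, not a working setup:

```
example.com.                  TXT  "v=spf1 ip4:203.0.113.7 -all"
sel1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The SPF record lists which IPs may send for the domain, the DKIM record publishes the signing key under a selector, and the DMARC record tells receivers what to do when neither check aligns.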
IPs have reputation and you better deal with it because most sysadmins on your receiving end won't deal with any special snowflake configuration.
This isn't exclusive to Gmail, this is basically any mail service and server out there.
I understand that that is the case. I said that it shouldn't be the case and why. So, what's your point?
> And some IPs are just inherently not trustworthy (Tor exit nodes, North Korean IPs, etc.)
No one is saying you should be trusting those IPs. I said domain trust should override IP distrust. So, again, what is your point?
> Everyone can setup a SPF and DKIM record on their domain. It's not hard.
Which is obviously the premise of using them to override IP distrust? So ... what is your point?
> IPs have reputation and you better deal with it because most sysadmins on your receiving end won't deal with any special snowflake configuration.
I think I wouldn't write "It totally should be" if that were the current reality, would I? So ... what is your point in explaining what I am obviously aware of?
Also, you might be surprised, but "special snowflake configuration" is how every change starts. So, if your argument were to be taken seriously, we should never have introduced SPF, because the first person to use SPF had a very special snowflake configuration indeed.
> This isn't exclusive to Gmail, this is basically any mail service and server out there.
Erm, yeah, thanks for repeating half a dozen times the obvious premise of my comment.
I don't trust certain IPs. Why should the presence of an SPF or DKIM record override my distrust of these IPs? And if it should in that case, why not for any other IP?
>Which is obviously the premise of using them to override IP distrust?
Which is my premise for why having them override IP trust is completely useless. There is nothing involved in the process of setting up SPF or DKIM that would make me trust a domain if the IP is not trusted.
When you use IP blocklists, say, you are effectively using a reputation database that maps an identity (a client's IP address) to a reputation score that heuristically reflects how well the owner of that address space has guarded the address against use by spammers.
Equally, you can have a database of reputation scores for domains that reflect how well the owner of that domain has guarded the domain against use by spammers.
So, the existence of an authenticated domain identity, as established through SPF or DKIM, should override the IP address identity as the basis for determining the reputation of the sender. The identity alone never provides trust; it is only the key for looking up reputation in some database to base your trust on, and to store any reputation feedback under (like when an email is marked as spam by the recipient). If your database says that my domain is trustworthy, then you should accept emails from that domain even if they come from a tor exit node, and if you determine that it is spam after all, that should lower the reputation score of the domain, not of the tor exit node.
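The lookup-and-feedback policy argued here can be sketched in a few lines; the stores, scores, and defaults are all hypothetical:

```python
from typing import Optional

# Hypothetical reputation stores keyed by identity.
DOMAIN_REP = {"example.org": 0.9}   # a domain with good history
IP_REP = {"203.0.113.7": 0.1}       # e.g. a distrusted exit node

def sender_reputation(client_ip: str,
                      authenticated_domain: Optional[str]) -> float:
    """If SPF/DKIM authenticated a domain, use the domain as the identity;
    fall back to the client IP only for unauthenticated mail."""
    if authenticated_domain is not None:
        return DOMAIN_REP.get(authenticated_domain, 0.5)  # neutral default
    return IP_REP.get(client_ip, 0.5)

def record_spam(client_ip: str, authenticated_domain: Optional[str]) -> None:
    """Spam feedback lowers the reputation of whichever identity was used."""
    if authenticated_domain is not None:
        DOMAIN_REP[authenticated_domain] = max(
            0.0, DOMAIN_REP.get(authenticated_domain, 0.5) - 0.2)
    else:
        IP_REP[client_ip] = max(0.0, IP_REP.get(client_ip, 0.5) - 0.2)
```

With this policy, mail authenticated as example.org keeps its 0.9 score even when it arrives from the distrusted 203.0.113.7, and any spam verdict is charged to the domain rather than the IP.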
How would you establish trust in your domain from a tor exit node?
Home connections that send email are, 99.9% of the time, involved in a spam-sending botnet. Those IPs are almost always entirely distrusted.
So you can't build trust into your domain without getting both a trusted domain on a trusted TLD and a trusted IP.
The identity issue is not a concern, since there is no good way to tell whether the IP of a domain's mail server changed to a tor exit node because it was compromised, sold, or otherwise maliciously claimed, or because the owner legitimately moved it. Lots of people will therefore treat changing the endpoint for mail as a new identity (for good reason).
>Equally, you can have a database of reputation scores for domains that reflect how well the owner of that domain has guarded the domain against use by spammers.
This already exists, in multiple forms. Any spam blacklist operates via such reputation scores. It's an utter pain to get your reputation fixed on these once you're in bad standing.
These exist for both IPs and domains. SPF and DKIM are a great way of avoiding trashing your reputation because some spammer is sending under your domain. But we already have tools for reputation.
If you build an IP reputation database, you'll quickly find out that home IPs will get trash reputation regardless of whether you think they should be allowed to send anything. (That's not even mentioning that home IPs aren't stable.)
Presumably, you wouldn't? But just because you can't establish trust this way, doesn't mean you couldn't maintain it that way, does it?
> So you can't build trust into your domain without getting both a trusted domain on a trusted TLD and a trusted IP.
Yeah, so what? You can't upgrade an operating system without installing it first ... therefore you can't upgrade it? What's your point? Maybe you still need a "static IP" to gain reputation. Maybe someone finds a different solution for building reputation. Why should we keep one unnecessary restriction just because there is another currently necessary restriction? Maybe someone builds a sort of "new domain reputation escrow system"? You pledge 1 bitcoin (or whatever) that gets donated to charity if a panel of judges from the internet community decides that you used your domain to spam, and in return you get a reputation boost in a public database that email servers can check? Who knows, there are endless options for solving that second problem, which really have nothing to do with the first problem.
> The identity issue is not of concern
Erm ... you do understand that without identity, there cannot be reputation, right? Even if the only thing you do is trust emails coming from a particular IP address because of good experiences with emails from that address, that only works because the IP address provides identity. The equality of client IP addresses between two SMTP sessions is the only thing that allows you to conclude that "this is the same entity that didn't send spam last time". This is all about identity and identity only.
> since there is no good way to tell if the IP of a domain's mail server changed to a tor exit node because it was compromised, sold or otherwise maliciously claimed or if the owner legitimately did it.
There is no good way to tell if a domain changed ownership to a spammer even if the IP of the mail server does not change. There is no way to tell if an IP address that didn't spam before changed ownership to a spammer. There never is a way to just know that some identity has not become a spammer. The involvement of a tor exit node is completely irrelevant to all of this.
You never know whether the next connection from an identity with good reputation is spam. You can't. That just isn't how reputation systems work. The point of a reputation system is that it creates an incentive to protect your identity against abuse because otherwise your identity will be muted.
> Lots of people will therefore treat changing the endpoint for mail as a new identity (for good reason).
Well, I can't tell whether "lots of people" do that, but it's certainly a terrible idea. If you have a business relationship with another company, and that company happens to move their email server to a different provider, how would it be anything but a completely braindead idea to penalize their emails for that in your spam filter when the emails are authenticated via SPF and DKIM as still coming from the same domain, and in at least that case thus also from the same company?
> These go for both IPs and Domains. SPF and DKIM are a great way of avoiding trashing your reputation because some spammer is sending under your domain. But we already have tools for reputation.
Yeah, and those tools build on SPF and DKIM ... your point being?
> If you build an IP reputation database, you'll quickly find out that Home IPs will get trash reputation regardless of if you think they should be allowed to send anything.
What is your point? I am not even sure if this is supposed to be an objection?! I mean, a lot of what you write isn't wrong, but just completely beside the point. I suggest that positive domain reputation should override negative IP reputation, and in response you point out that dynamic IPs have negative reputation ... when that is exactly the reason why I suggest that positive domain reputation should override negative IP reputation!?
And no, of course the fact that I think that domain reputation should override IP reputation does not influence the reputation of dynamic IP addresses ... why would anyone think that it would?!
> (That's not even mentioning that Home IPs aren't stable)
Again: Your point being?
But then, why can't I do that with Git?
Git is a dVCS, i.e. software to store source code and track its history and authors, not a communication platform; mail, on the contrary, is a communication platform.
If you want something like Fossil (a dVCS with a built-in webserver serving a mini-site for bug tracking etc.) integrated with something like ZeroNet: well, we don't have anything like that, and yes, it could be an interesting thing, far more interesting than the IPFS monsters etc.
Just out of curiosity, have you tried it, or have you heard of someone who tried? I mean, nothing stops one from pointing a domain to a home ISP IP address and creating DKIM and SPF entries for it. I kind of doubt that Google can determine with high accuracy whether an IP is from a consumer ISP or not, especially when it comes to small ISPs - which might be the most interesting, when already going so far... ;-)
Though indeed it would be an interesting experiment how domain and IP reputation actually interact at large email providers.
I do have SPF, DKIM, DMARC, and rDNS set up.
Why? You also need a driving license to drive a car. There are a lot of problems in this world that have a certain internal complexity. You can't UX it away. And the command line is exactly that. It's actually not complex at all. It's so simple that a student can write one before graduating from college. And the learning you need to do to use it is actually learning to interact with a computer.
It's really harmful that people nowadays act like hard stuff doesn't exist and like having to interact with hard things is discrimination. Hard stuff does exist, and you need to get used to handling it, or suffer never being able to do anything.
Also, email is actually one of the simplest communication protocols. It's so easy that you can manually read and write most of it. You don't need anything fancy for it. A text editor is enough.
And while I think people don't need more than git and email or ssh, I'm all in favor of also implementing ActivityPub. One leg of stable distributed systems is the ability to use another transport if your preferred transport is not available.
If sending an email is not immediate for you (in my personal case, Shift-F6 directly opens a new message buffer ready to receive the destination address and body, and C-c C-c sends the message), then you have a problem in your workflow that is also a problem in development.
Every time a community switches from a mailing list to a forum (like Elm lang recently), they seem to agree with me.
This statement is often met with nerd rage on HN (“why wouldn’t you want to deal with mailing lists?!”) but it’s something you’ll just have to understand or take for granted if you are going to understand what people want from a Github competitor, for example.
I don't disagree, but would point out that the mailing list/forum choice is a false dichotomy. Nix recently switched to using Discourse and I've found it pleasant enough to interact solely through its mailing list mode. Presumably there are other projects which provide a similar dual interface.
Which doesn't really answer the question why, does it?
> This statement is often met with nerd rage on HN (“why wouldn’t you want to deal with mailing lists?!”) but it’s something you’ll just have to understand
It's impossible to understand it if people don't explain themselves.
> or take for granted if you are going to understand what people want from a Github competitor, for example.
I personally don't care for a Github competitor that has the same terrible usability as Github, so I am not sure I'd be willing to take for granted some unjustified demand for bad usability.
That's why email will continue to exist forever. And that's why email is so versatile.
It's less terrible than some other alternatives, that's all.
While a protocol exists forever, implementations might not, and there are many failed protocols. But one can endure through network effects.
And part of the self-responsible philosophy is also that "there is no implementation" is not even an excuse. As long as there are free programming languages and affordable computers (e.g. a Raspberry Pi), you can very well create your own implementation. It might take years to even start, due to the requirement of learning programming first, but it's not impossible at all.
Since being frustrated with all the current major social networks, I've been playing around with an idea for a semi-decentralized/semi-federated chat service (like IRC but not quite), but it's just in my head for now.
Can someone recommend any sites where one can put up the seed/initial draft for an idea for others to collaborate on and add to? GitHub?
Have you taken a look at matrix.org? Out of interest: what would you do differently?
Pure communication protocols are starting to feel like a losing game to me these days. The world is so carved up and walled again that I really need someone to build a new Trillian that, instead of speaking protocols, hooks directly into the UI widgets on my desktop and phone to unify my view of communications and let me inject features (the only one I want being OTR messaging, and I guess let's bring Zmodem back so we can stuff binaries to each other).
It is practical if you use modern clients. For example I replaced Hangouts with Conversations.im for all my family members and they are very happy with it.
So why not just fix the broken parts, instead of inventing the hundredth messaging protocol that will be forgotten in a year?
You then need to pick a server that supports everything you want, since things like a mailbox for when you’re offline are optional extras. Integration with mobile push services so that XMPP doesn’t kill your phone’s battery is an optional extra that is marked “experimental”.
A result of the near-death of XMPP, that I’ve discovered recently, is that there is no trustworthy iOS client that supports OMEMO - e2e encrypted communication is table stakes now, and XMPP can’t provide that to me in a manner that is usable for me.
In general, if you can find a server that provides what you want, and a desktop client that provides what you want, and you have no use for mobile communications, XMPP as it stands is probably as fine as it was two decades ago. But we have these things called smartphones now; I don't want to be bound to a desk in order to stay in touch with my friends and partners, and I don't see XMPP defragmenting any time soon.
I haven't fully thought it through and I'm not even sure if it would be viable in practice, but it involves BitTorrent + Magnet URIs.
Obviously, because Git was decentralized from day one.
I can't think of a reason I'd want a decentralized system to use email as the message bus. I do want the message bus to be backed by something standardized, but there's plenty of standard ways to transmit data that aren't SMTP. If some users want to interact with the system via email, supporting email as a notification / response mechanism is totally viable without using that as the backplane for the service itself.
I kind of get this with searches in the Thunderbird email client, but I can't quite remember if phpBB or vBulletin does that. I am guessing Slack, Rocket.Chat and similar have some kind of bots or bouncers that could re-insert replies and group them by thread.
Maybe it's just a matter of adding a new view layer onto some of these systems and giving the option to get notified and reply by email, as well as other methods that others may prefer.
I think we need to remind people posting in forums and other softwares that others may be getting plaintext-viewable-by-the-world emails of the posts and replies - that's a security gap I think many would not consider if they do not use those methods, so a reminder would be good.
Of course, adding an option to get PGP-encrypted emails only and blocking plaintext, for example, might be a step up and in a better direction for adaptability, a move toward a better future.
I still can't pinpoint where I have seen similar things used in the past or what they were called.
Email is basically free, works well for up to 5 megabytes of data, and data security isn't much of an issue for open source work. The post suggests quite a few tools that improve the Git-email workflow, and I think some do prefer those to certain web-hosted Git interfaces.
> there's plenty of standard ways to transmit data that aren't SMTP
Are they free, federated, and as reliable as email? It may be inferior in some technical ways, but it's still a rational choice for small non-private data transfers, such as a Git patch or any other text.
Is that the case? It seems like you may be focusing on specifically the privacy aspect of "security". I'd say that email is equally bad at ensuring integrity and authenticity, which are crucial aspects of security for open source work that's consumed by others. We can attempt to backfill those gaps in email using GPG and other tools, but we're trying to put a bandaid over a mortal wound in a lot of ways. Recent vulns have highlighted what has been known for a while: trying to ensure the authenticity and integrity of a protocol as broad as email with as much client-side complexity is a losing battle.
However, many, too many, use the proprietary stuff GitHub offers on top of its storage, from PRs to wikis etc., and those are NOT decentralized and are NOT "free" in the sense of freedom. A FOSS project that depends on GitHub for bug reports, patches, discussions etc. is voluntarily trapped in a proprietary platform.
We have mailing lists to discuss and post casual patches; the Linux kernel works that way, Python works that way, Emacs works that way etc., and none of those are small-potatoes projects. We have NO NEED of Discourse, GitHub etc. if we know how to use good development and user environments. Of course we can't develop anything via mail if we are tied to a webmail or to an ancient '90s-style MUA monster; we need to know other MUAs and other UIs to work with (my personal choice is notmuch-emacs and EXWM; another popular choice is neomutt/*pine and Vim etc.). If even FOSS developers lose this knowledge, FOSS is at an end.
I think you mean this one (yes, it's still the top comment): https://news.ycombinator.com/item?id=18098416
And of course the scuttlebutt message types work on commit hashes, so leaving comments, etc. is baked in.
This is true. But the problem they’re solving is that email as a protocol is great, but the UX of most standard mail clients is trash. Especially for mobile. Consuming the protocol and providing a better UX will incentivize people to switch.
Yes, I know that what I said doesn't make sense if you understand how the DAG works. The emergent features matter, and that feature is absent.
Honestly if you compare the two, email is much easier to grasp. So I'm not sure what the point is. If you want easier, then go with email.
That said, I don't understand why a decentralized system shouldn't speak as many languages as possible. Therefore I see no harm in implementing both.
Surely that minority are working on quite a large project. Ahh:
Using email for git scales extremely well. The canonical project, of course, is the Linux kernel. A change is made to the Linux kernel an average of 7 times per hour, constantly. It is maintained by dozens of veritable clans of software engineers hacking on dozens of modules, and email allows these changes to flow efficiently throughout the system. Without email, Linux’s maintenance model would be impossible. It’s worth noting that git was designed for maintaining Linux, of course.
As it turns out, that canonical project is both quite large and invented the bloody thing in the first place (as mentioned in the prior article).
Git is decentralized only if commit privilege is controlled and decided by at minimum two users, where neither user can revoke or otherwise threaten the other user's privilege.
There it has issue tracking, Readme parsing etc. Looks pretty githubby to me.
What better UX is awaiting me and my new developers by taking an email-based approach?
I use the Gitlab UI for issues, creating merge requests and discussing them with other devs, etc. That centralized Gitlab instance provides a lot of value for me. If there were a method of leveraging something close to the Gitlab UX with a decentralized or even federated design underneath, I'd use that. But this article reads like it's saying, "Give up that value and use this less convenient flow in order to protect against an unlikely class of attack."
A hippie utopia where people publish or send their code fixes by email to a mailing list for software they are using themselves, sharing all code, all free (not necessarily free as in beer, but free and reviewable). Software improvements where you can decline to accept some feature sent to the mailing list purely by your own pleasure or your own decision, in contrast to SaaS, where changes or "features" are pushed down your throat. A world where everyone is their own software developer, deciding exactly what lines of code their hardware executes.
I don't mean to be derogatory with 'hippie utopia', but that view is really attractive. Though in reality I don't have time to review and merge changes for all the software I use; I would probably need a lifespan of 1000 years just to do it, so I understand that it is not very practical.
Proof of work to avoid spam is orthogonal: it could be introduced without any sort of race (there have long been such proposals around email), but putting together a patch set that looks anything like plausible seems like sufficient "proof of work" in this context for most projects.
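The email proposals alluded to here are hashcash-style stamps: the sender burns CPU finding a nonce whose hash has leading zero bits, and the receiver verifies it cheaply. A toy sketch, with an illustrative difficulty parameter:

```python
import hashlib

def mint(stamp_base: str, bits: int = 12) -> str:
    """Find a nonce so that sha256(stamp_base + ':' + nonce) has `bits`
    leading zero bits. Cost grows as 2**bits; 12 is illustrative only."""
    nonce = 0
    while True:
        stamp = f"{stamp_base}:{nonce}"
        digest = hashlib.sha256(stamp.encode()).digest()
        value = int.from_bytes(digest, "big")
        if value >> (256 - bits) == 0:   # top `bits` bits are all zero
            return stamp
        nonce += 1

def verify(stamp: str, bits: int = 12) -> bool:
    """Receiver-side check: one hash, regardless of how hard minting was."""
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# Minting takes work; verifying is a single hash.
stamp = mint("to=maintainer@example.org")
assert verify(stamp)
```

The asymmetry is the point: a legitimate sender pays the minting cost once per message, while a bulk spammer would have to pay it millions of times.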
Both are distributed, but git is more decentralized as the central law needs not be pushed to the outer citizens.
Conversational foul! It was your comparison in the first place. You don't get to accuse someone else of comparing apples and oranges when they describe where you were wrong.
> Git can be use alone whereas the utility of a cryptocurrency is in the network.
To the degree that's true, it's why we can't expect cryptocurrency implementations to reach git's levels of decentralization. It doesn't somehow make it extra decentralized.
It's not even quite true that the value is in the network - the value is in my ability to safely transact. There have been proposals for digital currencies that allow offline transactions while maintaining resistance to double spends.
> You don't say that a video game is "decentralized" because you can play offline do you?
No, but only because we don't really have reason to think about them in those terms.
Authentication of who can comment on an issue and who can see it: the user identity service is not decentralized.
That's the main problem with git. And that's why github exists.
I think github could try to transform into a namespace and identity service.
Yes, it is, it's the DNS. The user identity of your git repository is your host name, which is connected between participants via the DNS.
As for issues and wikis, well, they are not part of git, obviously, but the same applies in principle. Also, in the case of issues and the like, if you use mailing lists, your user identity service is decentralized by virtue of using email addresses.
Whether those are the perfect solution may be questionable, but they most definitely are not centralized, and might very well be the basis for improving the usability.
> I think github could try to transform into a namespace and identity service.
So they stay the monopolistic gatekeeper? What would be the point of that? If anything, namespace and identity must not be controlled by one monopolistic entity.
But in any case, your mistake seems to be in thinking in terms of one repo. You don't need that. Every developer can have their own repo published somewhere. Or even more than one. And for the probably simplest solution for limited access, you just add http authentication in front of that repo and then send the URI including the credentials via email.
Is that the perfect solution? Maybe not. But the point is that you don't need a centralized gatekeeper. If you don't like email, you can build new communication protocols that use DNS names for identity. Or you could integrate email more with git so git automatically imports machine-readable pull requests you receive via email. Or whatever. There are endless possibilities that can use DNS names for federation of independently hosted repositories and issues trackers and whatnot. And there exist many implementations of ideas as well. You also could use OpenID for federated authentication.
1. preserve source code change history
2. preserve history of how the project evolves.
Github serves #2 well, but in a centralized way. And #2 is kinda essential these days. People need to know how the code changed, but also WHY.
That's the stated purpose of git. Git syncs diffs, distributedly. The whole point of the article was that we have technology that works and is very well supported, so we should use and improve those rather than reinvent them for the sake of a new technology.