P2P Matrix (matrix.org)
764 points by Arathorn on June 2, 2020 | 174 comments

This is exciting - I'm looking forward to it.

I've been nerding out a bit playing with Matrix/Riot and Urbit - there's a lot of exciting stuff happening in this space.

If you like what Matrix is doing consider paying $10/month for your own server via Modular.im (https://modular.im/). It supports the Matrix devs and you get your own high performance federated server to use on the network.

Their automation makes it extremely easy to spin up your host so you don't have to manage anything yourself. The default server they run has a lot of traffic so having your own is a better experience, and it helps further decentralize the network (and you control your own server).

I'm excited to try out p2p, thanks Arathorn for being so responsive in the comments on HN.

After Keybase sold out its users to Zoom, my friend in China and I switched to Matrix/Riot. While the default Matrix server is blocked there unless he uses a VPN, we can still communicate via other servers (for now).

One question for Arathorn re:

> "Hooking up E2E Encryption APIs in Dendrite (not that it buys us much in a pure P2P world)"

Doesn't this still matter if the internet traffic itself is tapped? If I'm understanding this correctly we'll still want E2E for things like deep packet inspection in China (or bulk collection by western governments).

> > "Hooking up E2E Encryption APIs in Dendrite (not that it buys us much in a pure P2P world)"

> Doesn't this still matter if the internet traffic itself is tapped? If I'm understanding this correctly we'll still want E2E for things like deep packet inspection in China (or bulk collection by western governments).

If TLS has been compromised between your nodes, then yes - E2EE would give another level of protection; security in depth. I guess it depends on whether you believe your government has the ability to compromise transport layer security between arbitrary peers on the network.

For pure P2P Matrix, there might be some compromise where one uses the E2E keys and key verification to secure the transport layer security (thus giving you E2EE if there is no server in between, but without all the complexity of a cryptographic ratchet). This might give you best of both worlds - but the second you start accumulating data on a server somewhere you'd be back wanting proper E2EE.

Thanks - I remember reading about the issues Tor had operating in China and the sophistication of the government's deep packet inspection. I don't think TLS is secure in China; I recall reading that the government controls the CAs and MITMs all traffic.

TLS isn't everything; you might not trust the server operator. In fact, you probably shouldn't if you're sending a PM.

This is asking about P2P Matrix, where typically there isn't a server involved. The transport layer security runs between the peers, effectively providing a form of end-to-end encryption if there are no nodes in the middle.

Re your question -- it depends: if the network ends up being based on Yggdrasil, as the blog post suggests it could, "IP addresses" are public keys, and packets are encrypted to that destination (which could be a NATing router, though).

So, it gives you some guarantees about encryption, that might or might not be redundant (and for some use-cases, better safe than sorry).

How is the new Urbit stuff, since you mention it? Is it usable for a casual but interested user?

Looks like it's still relying on Ethereum, which IMO is irresponsible from a climate change / energy use point of view.

Do you think that ethereum won’t move to PoS soon? (I haven’t looked into the design, can’t comment on whether the construction seems sound to me)

In short: PoS migration of Ethereum is in progress and will happen in stages. "Stage 0", which doesn't have any practical effects for smart contracts, is expected Soon (tm) (a few months probably).

It'll most likely be a couple of years until the transition is done so that smart contracts like Urbit can/will run under PoS.

Ethereum has been moving to PoS soon for a couple years.

It's still the intent, but PoS is effectively still unsolved. It's not just "can it decide blocks", it needs to have the right balance of incentives to be self-sustaining, or it destroys itself.

What makes you say that PoS is "unsolved"? Have you heard of Cosmos, for example? Their mainnet has been running for more than a year without any issue.

Depends on how 'casual' - it's definitely usable and works, but there's still some effort required to figure things out.

I've found the docs are generally pretty good and the community is helpful so if you're interested in playing with it it's worth it.

What is Urbit for? I mean, I never understood what its real use is. From what I've read, I imagine it as some new stack that replaces networking and servers, but then I found out that you have to buy namespaces that are finite? Why?

I actually don't think it's that hard to grasp, but people often don't explain it very clearly.

It's a from-the-ground-up design for a serverless p2p 'overlay' operating system.

Overlay just means that you run an Urbit VM on top of a mac/linux OS and people can write applications for your Urbit VM. Your Urbit VM communicates with other Urbit VMs in such a way that the application design is p2p by default so the complexity of this is hidden from the user. This means in the future someone can start up their urbit node (which will just look like another application) and use the apps in it without needing any central servers.

Right now the main app in their release is chat, they also have a link aggregator app too.

The reason for the finite user IDs is because the founder believed that one reason for service centralization on the existing internet was the spam problem. When IDs are free people can make tons of them and spam others, complex algorithms via centralized services are required to fight this which leads to a client/server model and a handful of enormous corporations powering the services.

When you have a small cost on the ID you make it economically infeasible to spam people, you also make it easier to stop IDs from abusing the network (and other people). It’s also a way to incentivize people to run infrastructure nodes in the network.

The other interesting details are that it's written in its own functional language on top of its own functional-style assembly. This has benefits around correctness that I don't totally understand - but the idea is: if you were to redesign a modern computing system from first principles, taking everything learned over the last 50 years, how would you do it?

Regular users are never going to run their own linux servers to host their own applications on the modern internet, but regular users might use an application that is p2p by default and does the things they want. This leads to a better decentralized model at the application level that is not ad supported.

Ultimately it's out of the box private servers that you can run locally and easily share data between securely, without the need for a centralized company in the middle.

The project makes me think of Alan Kay when he talks about big ideas and 'having the chops' to implement them. It's crazy to throw away most of modern computing and build a functional OS from scratch in a radical kind of way, it's crazy for the product to have made progress since something like 2002 (picked up a lot when they got funding more recently) rather than fizzling out or being vaporware. I'm amazed that they have real software shipping that works, but they do. It's a project driven by a clear vision of how computing can be better than it is, which I think is an exciting thing.

I mean, I've always been skeptical about it...

1. It feels pretty elite. Exactly what you mention - some new language that is pure and beautiful and not burdened by those dirty systems of the past (that's how I felt when they were describing it).

2. The elitism, I guess, makes sense because it's the core philosophy of the original author. The fact that the original author is a prick doesn't make the project bad, but there is a lot of ideology tied to it. It then makes sense that part of the solution is buying finite namespaces. The richest have the most power because they are the most capable...

3. There is just a huge contradiction in a secure, private p2p system funded primarily by Thiel. So it's Palantir money. Now I don't want to say that it's not secure. Quite the opposite: I am pretty sure it is highly secure and verifiable.

But it feels like it's aiming to be the closed-off, gatekept part of the internet for the rich and powerful that is secure and not spied on by companies, while the rest of us get the regular data-mined surveillance network... the regular internet.

I think initial skepticism is reasonable; big ideas and bullshit often sound similar at first and require more investigation. An ambitious project can still sometimes be real though - not everything has to turn out to be Magic Leap. In Urbit's case the docs are really good; you can find out a lot if you want to.

1. I don't think elite is the goal, it's more about first principles and being free to rethink things. Obviously high risk and in most outcomes would lead to failure, but they've made progress for years and have now proven they can actually ship. I don't think big complex goals are impossible, just hard.

2. I don't think the politics of the original author matter that much.

3. I don't agree with Thiel's politics. That said, Thiel has a pretty good track record for picking things that end up really successful because others pre-judge and misvalue them. I think he's good at taking the contrarian view and really thinking it through, looking for blind spots or generalized assumptions (that may be correct most of the time, but not all the time) that lead the community to undervalue things. I think his political views are both contrarian and wrong, but his VC success is often because he is contrarian and right.

As far as closed off and gatekept goes - the IDs are cheap (free to $10ish), and I believe their spam explanation for their existence. You can spin up a free 'comet' ID to play with things without needing to buy an ID at all. I'd argue the modern internet is what's closed off and feudal, with large companies running our applications and requiring us to give them our data in order to interact with each other. Sure, you don't technically need to do this (I created r/hnblogs and I want people to run their own websites), but the reality is most users won't, so how default applications work on the network matters. Right now they work almost entirely on a large-company server/client model.

The 'galaxy' and 'star' urbit nodes are basically governance and infrastructure nodes, they allow a network consensus to determine changes that might have to happen to the network over time. They also incentivize people to operate portions of the network for other users (IDs can move between stars freely). A lot of this when compared to flaws in existing internet infrastructure comes out ahead to me - or at least not worse. (CAs, ISPs, etc.).

And how does one Urbit VM find other Urbit VMs to establish the p2p connection?

So when you blacklist a troll, they can't just create a new account for free.

sounds perfect for state sponsored troll armies, who have way more money/btc/eth than any of us.

That's true for anything where difficulty is measured by a continuous function -- that is, everything is easiest for a state sponsored anything, good or bad, unless it can't be done at all. You're really just saying states have more resources than individuals... which is true almost by definition (they aggregate individuals within their realm)

But the underlying point of the comment still stands. Just making account creation more costly doesn't stop spam... it just makes it more expensive. The same goes for all other legitimate activities, which are then also more expensive. Thus, it's not a perfect solution (if it is one at all).

> making account creation more costly doesn't stop spam... it just makes it more expensive

But that's the only thing that has ever been effective against spam. Spam exists because the marginal cost of spamming is 0. This puts a floor on the marginal cost of spamming; if it costs at least this much to spam, then only spammers that make back that much will exist, which eliminates most spammers.
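The floor-on-marginal-cost argument is easy to put numbers on. A toy model (all figures made up for illustration, not Urbit's actual pricing or ban behaviour):

```python
# Toy model: spam is profitable only while expected revenue per identity
# exceeds the cost of acquiring (and eventually burning) that identity.
# All numbers below are illustrative, not real pricing.

def spam_is_profitable(id_cost, messages_per_id_before_ban, revenue_per_message):
    """Return True if a spammer recoups the cost of a burned identity."""
    return messages_per_id_before_ban * revenue_per_message > id_cost

# With free accounts, any nonzero expected revenue makes spam profitable.
print(spam_is_profitable(0.0, 1000, 0.0001))   # True

# A $10 identity banned after ~1000 messages must earn more than a cent
# per message to break even, which prices out most spam.
print(spam_is_profitable(10.0, 1000, 0.0001))  # False
```

The same arithmetic also shows the grandparent's point: it doesn't make spam impossible, it only raises the break-even revenue per message.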

Not true. In a truly organic network, where you actually know the graph back to you, it is very difficult for a state actor.

Ironically, Facebook is in a position that is perfect for this, and they go out of their way not to see it. ...I guess because being able to block content based on dislikes by your immediate friends would make it too hard to expose you to all the sponsored posts.

In that case.. it's difficult for anyone. It's still not any better for a state actor versus anyone else -- you've just upped the difficulty so high that it even includes state actors.

That is, (I believe) it is not possible to construct a nefarious deed that is harder for a state actor than an individual to execute. It is only possible to construct scenarios that are hard enough that both state actors and individuals cannot execute it.

A state can do anything a group can do, and a group can do anything an individual can do. They're supersets of one-another, in terms of capability and resource.

There is no "sounds perfect for a state-sponsored" anything -- everything is perfect for a state-sponsored entity, unless it cannot be done at all (because who has greater capability than a state? Other than a coalition of states, or even further, galactic nation-states)

>Not true. In a truly organic network, where you actually know the graph back to you, it is very difficult for a state actor.

An entity with sufficient resources and time can probably defeat this - the main thing is to generate more "accounts" or whatever:

Need proof of ID? Sufficiently good counterfeits.

Need DNA proof? Rob a morgue.

Users police themselves? Make sufficiently "attractive" fake accounts and add people randomly until you get added by enough of them, then their friends, then spam until you get blocked and do it again.

And hide them sufficiently well, faster than the governing entity(ies) can identify and remove them.

Sounds perfect for my galactic nation-state.

I was interested in a server, but the website does not mention how much storage or any details at all really. Anyone happen to know where to find that info?

matrix.org is the website for the non-profit foundation that looks after the overall project. https://matrix.org/hosting lists hosting providers, of which https://modular.im is run by the core team and directly funds development of Matrix. There aren't hard storage limitations on Modular currently.

This is the dream. From the end user's perspective there has never been a good reason to have a centralized server between them and the person they want to talk to. If Matrix can pull this off it has the potential to go way beyond chat. I will be following them much more closely following this announcement.

Don't forget Skype. A successful project will be a target.

Skype was never maintained by a non-profit foundation, and was never open source. Even if companies building on Matrix like New Vector get acquired by whoever, the project itself should live on.

Original comment below but I decided instead to go for the following:

I am extremely proud of the work being done online today to secure communications.

While we have companies telemetrying our native stacks[1], web browsers[2], and messaging platforms[3], we also have people working on software that doesn't do those things and still tries to empower the user to get what they need done without being a double agent for a 3rd party.

1 (windows 10, chrome OS, GMS android)

2 (cookies, pixels, fingerprinting, CDNs, chrome itself)

3 (whatsapp, FB messenger, telegram)

From OS[4] to browser[5] to messaging platform[6].

4 (debian, qubes)

5 (...maybe not? konqueror probably doesn't do telemetry?)

6 (irc probably, matrix, delta chat)

Matrix is IMHO in competition for mindshare not (directly) with WhatsApp, but with Signal.

Matrix | Signal

Increasingly decentralizable | Centralized

E2EE but not quite for metadata | E2EE and metadata-free mostly except recently requiring PINs and server-side storage

Federated with its costs (slower development etc) | Nonfederated with its costs (outages etc)

Temptingly close to P2P or CS | only CS

No voice comms | Voice and video comms

No built-in social graph | social graph via phone (being worked on?)

OSS in practice | OSS in law but hard to contribute to

The contact social graph in Signal is stored inside of SGX, using a service called contact discovery.

Signal is attempting to design the system such that Signal can never know whose contacts are in your phone as a service provider. They deal with side-channel leakage of lookups from the contact DB into the enclave using a technique called linear scan which is a constant-time bitwise XOR operation on every contact. This is the most brute force version of a class of techniques known as oblivious RAM (ORAM) which are increasingly being used to manage data loads into secure enclaves.
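The linear-scan idea is easy to illustrate in a few lines. This is a conceptual sketch only, not Signal's enclave code: the point is that the lookup performs the same operations on every record regardless of which one matches, so the memory-access pattern reveals nothing about the query.

```python
# Simplified illustration of an oblivious linear scan: every record is
# read and combined with branch-free arithmetic, so the access pattern is
# independent of which entry (if any) matches the query. A real
# constant-time implementation would avoid Python and data-dependent
# comparisons entirely; this only demonstrates the shape of the technique.

def oblivious_lookup(records, query):
    """Return the value for `query`, scanning all records unconditionally."""
    result = 0
    for key, value in records:
        # 1 if this record matches, else 0 (a real enclave computes this
        # without branching on secret data).
        match = int(key == query)
        # Select via arithmetic rather than an if-statement, so the same
        # instructions run for every record in the table.
        result = result * (1 - match) + value * match
    return result

contacts = [(111, 1), (222, 2), (333, 3)]
print(oblivious_lookup(contacts, 222))  # 2
print(oblivious_lookup(contacts, 999))  # 0 (no match)
```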

Obvious caveat: if SGX gets broken then these contact lookups are vulnerable to side-channel analysis until the enclave is patched. I think this is a strictly better security property than not having the enclave, but it's far from perfect (no security model is perfect FWIW).

In short, Signal is doing everything they can to avoid having access to your social graph. If you still don't think what Signal is doing is enough, you can run your own signal (or matrix) server, but then you are running a very, very valuable server from a graph analysis perspective. At present, I believe the only way to make the metadata in these services less interesting is to put it inside of an enclave in the hopes that will reduce the value of attempting to attack the servers which manage the graphs for these comms networks.

Source: I work on MobileCoin which uses similar techniques for managing a side-channel resistant ledger.

> If you still don't think what Signal is doing is enough, you can run your own signal (or matrix) server, but then you are running a very, very valuable server from a graph analysis perspective.

...which is precisely why we’re working on P2P matrix. No servers; nowhere for metadata to accumulate (other than the clients, of course).

I understand the goal. I don't understand how a pure P2P system is going to route large groups at scale. I don't think phones are powerful enough to deal with the kinds of routing you need for large-scale comms services.

In general, as the number of nodes in a comms graph increases, the overhead of synchronizing the comms between those nodes increases to the point that noise dominates signal, which is why most comms networks end up being a hub and spoke system instead of a mesh. I think that mesh can potentially work in the ~10k node range, but I don't think you can have multi-million node mesh networks, which is what a P2P system will need to function at scale.

I would be thrilled to be proven wrong, but my knowledge of networking suggests that in a high node count network, the overhead of synchronizing node state dominates network traffic.
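The overhead argument is simple arithmetic (illustrative counts only):

```python
# Rough illustration of why full-mesh overhead dominates at scale: the
# number of pairwise links grows quadratically with node count, while a
# hub-and-spoke topology grows linearly (at the cost of centralization).

def mesh_links(n):
    """Number of distinct links in a full mesh of n nodes."""
    return n * (n - 1) // 2

def hub_and_spoke_links(n):
    """Number of links when every node connects only to one hub."""
    return n - 1

for n in (100, 10_000, 1_000_000):
    print(n, mesh_links(n), hub_and_spoke_links(n))
# At 10k nodes the mesh already needs ~50M links vs 9,999 spokes,
# which is the gap this comment is pointing at.
```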

Edit: In summary, I think that in most big comms networks, the endpoints do the encryption and the servers do the heavy-lifting of routing. Again, I would love to be proven wrong.

Edit 2: Just to be clear, I am excited for the way matrix pushes the envelope on comms tech. I think it's cool to see active development in these systems wherever it comes from. My caution is only about what I've seen in pure mesh-networking systems (and the dangers of self-hosted systems becoming the very centralized systems users thought they were escaping from).

So it's true that P2P Matrix is currently full mesh (which is why it's staggering a bit as everyone trying it from HN piles on). However, you can absolutely do better than full mesh without going straight back to hub-and-spoke: you can use spanning trees (like Yggdrasil), or gossiped segmentation as libp2p's Gossipsub does (https://blog.ipfs.io/2020-05-20-gossipsub-v1.1).

Also, you don't need to scale larger than the number of nodes in a single room (or worst case, the number of nodes visible to your account). For context, the largest rooms in today's non-P2P Matrix have about 100K users in them, and a typical poweruser account sees about 400K other users at any given point. Obviously as Matrix grows this will increase, but I strongly suspect rooms will then move into "celebrity" or "auditorium" modes - much as Facebook & Twitter etc have a separate class of routing algorithms for handling traffic for accounts with millions of followers.

The problem with hub and spoke is that the hubs hold the social graph and then become the target for censorship. I don't think there are any techniques employed by Matrix to mitigate that threat at this time (please correct me if I'm wrong).

In short, if you have a small network (under 10k nodes), I think P2P can work, but for networks of large scale (>10k nodes) I think you need hub and spoke, at which point the routing nodes in the center are the lynchpin. You can use some kind of mixnet tech to try to get around this, but that increases latency and computational overhead, thus lowering throughput. You can go the Signal route and throw the graph in an enclave, but there's still side-channel analysis (which is the thing that mixnets are trying to deal with, albeit I can't comment on their efficacy). Harry over at Nym has some cool ideas here.

I am not sure that spanning trees or gossip protocols solve the problem I'm describing, but, if they do, I'd appreciate if you could elucidate further.

Edit: Yes, I agree that N to N routing networks don't scale well. I think a K of N broadcast network can scale, but it's a tricky UX tradeoff.

Matrix mitigates the threat of a hub & spoke model by not being hub & spoke (particularly in P2P!) :)

> I am not sure that spanning trees or gossip protocols solve the problem I'm describing, but, if they do, I'd appreciate if you could elucidate further.

Perhaps the confusion here is the expression "hub and spoke" which sounds to me like a static centralised star topology routing model.

My point is that if you have 1M nodes trying to share data (e.g. follow a celebrity's personal pubsub topic), you clearly need a smarter routing algorithm than full mesh (where the celeb's node would have to do 1M parallel pokes to send its messages). You're completely right that one solution is a hub-and-spoke model (where the celeb's node would poke a big centralised hub somewhere, which would then relay the poke out to 1M followers on spokes) - but as you point out, the hub becomes a centralised chokepoint of failure/control/privacy-violation etc.

So, the main other options I'm aware of are either to arrange your 1M nodes into a spanning tree (or overlapping spanning trees) of some kind, as Yggdrasil does (their current one looks like https://yggdrasil-map.cwinfo.org/#, although only ~300 nodes are live atm)... or you let the nodes self-organise into some kind of hierarchy based on gossiping and fan out the messages that way (as per https://blog.ipfs.io/2020-05-20-gossipsub-v1.1, and some of the experimental routing stuff we've been doing with Matrix).
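To put numbers on the fan-out argument: in a k-ary tree, no node sends more than k copies, and a message reaches N nodes in about log_k(N) hops. This sketch says nothing about how Yggdrasil or Gossipsub actually build their topologies; it only shows why trees sit between full mesh and a single hub.

```python
import math

# Fan-out arithmetic for a k-ary relay tree: depth grows logarithmically
# with network size, so per-node send cost stays bounded at `fanout`.

def tree_depth(n_nodes, fanout):
    """Hops needed to reach n_nodes when each node relays to `fanout` peers."""
    return math.ceil(math.log(n_nodes, fanout))

# A celebrity with 1M followers: full mesh means 1M sends from one node;
# a 16-ary tree means each node sends at most 16 copies, in ~5 hops.
print(tree_depth(1_000_000, 16))
```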

The good thing with Matrix is that it's room-based, and I haven't really seen rooms with over 10k users. Presumably most of these nodes would be offline most of the time.

I can easily see routing being delegated to more powerful homeservers. That said, you do not need perfect routing to be better than hub and spoke at using the network effectively, and it seems to be what most current mesh networks aim for.

the only time you need to coordinate beyond rooms is for presence and for synchronising device list updates, fwiw.

> No servers; nowhere for metadata to accumulate (other than the clients, of course).

If the network itself is controlled by an adversary, you leak more metadata by passing data directly. If data goes through an honest server, the attacker needs to do time-correlation analysis.

If the cell signal is intercepted by a cell-site simulator, what sort of metadata will the MitM see as messages are sent and received?

Do you know anyone who was able to successfully compile & run the server as well as re-compile the mobile clients to specify a different server address? Smells a bit like vendor lock-in to me if you aren't going to bother adding such a UI widget on a FOSS app.

Signal or Riot? For Riot, it's easy to switch the default server - e.g. https://github.com/vector-im/riot-web/blob/develop/config.sa... for Riot Web. Building the server is trivial too.

Although I think it's still a long way from usable and interoperable p2p, Matrix does support voice+video chat (in 1:1 chats p2p with signaling via Matrix; otherwise via Jitsi Meet integration).

3rd-party identifiers like phone numbers and email addresses can be registered and discovered via a centralized service. At least for phone numbers, it's not really possible to do this in a decentralized way.

agreed that there's a lot of work left on p2p (although 1:1 VoIP does work over p2p! :)

Please check out Firefox's about:telemetry - that, I believe, is how it should be implemented so users don't freak out. There is nothing about me there. That is a simple way to help the project.

I like this idea so much that I've found and opted in to https://wiki.archlinux.org/index.php/Pkgstats

> about:telemetry

On my work notebook, I get "your organization has blocked access to this page."

/me freaks out

> No voice comms | Voice and video comms

incorrect. I use voice/video over matrix every day.

Just wanted to say thanks. I've been running my own homeserver for a little over two years now without any real issue.

I somehow managed to convince 4 of my friends to use Riot and it's been great. We were originally using Group Me.

My friends are not tech savvy and couldn't care less about software freedom or privacy, but I was able to convince them to switch by luring them with the fact that they could post unlimited size videos. It's actually the easiest way to share high quality videos between Android and iOS users.

The only thing that annoys me is when you release updates to the default configuration file and I have to manually resolve the differences in the middle of an APT upgrade. That's not a big deal though.

> The only thing that annoys me is when you release updates to the default configuration file and I have to manually resolve the differences in the middle of an APT upgrade. That's not a big deal though.

Deploying with docker will eliminate this kind of issue.

You have to deploy 3-4 services to run your own with Docker, right? Synapse, Riot, Jitsi, etc?

I just run a single Synapse server in docker.

Self hosted Riot is only relevant if you want to customize it, but I've found radical [1] to be a better alternative since I'm the only one using the server.

You can always fall back to using the Matrix-hosted Jitsi and TURN instances if you don't host your own (although I don't end up video calling with riot anyways).

[1] https://github.com/stoically/radical

Note that the matrix.org-hosted turnserver only provides STUN if you use it as a fallback, which means that if you need a relay to connect reliably, it won't work.

thanks :) agreed that we need to find a better way of managing config upgrades though. YAML is great, but preserving customisations over config upgrades is impossible(?)
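One workable pattern (a sketch of the general idea, not something Synapse does today): keep user customisations in a separate file and deep-merge them over each release's fresh defaults at load time, so upgrades never clobber local changes.

```python
# Sketch of preserving customisations across config upgrades: ship fresh
# defaults with every release, keep the user's overrides in their own
# file, and merge overrides on top when loading. The config is assumed to
# parse into nested dicts (as YAML mappings would).

def deep_merge(defaults, overrides):
    """Return defaults with overrides applied, recursing into nested dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical keys for illustration only:
new_defaults = {"listeners": {"port": 8008, "tls": False}, "max_upload": "50M"}
user_overrides = {"listeners": {"tls": True}}
print(deep_merge(new_defaults, user_overrides))
# {'listeners': {'port': 8008, 'tls': True}, 'max_upload': '50M'}
```

The trade-off is that renamed or removed default keys still need manual attention, which is roughly the problem CUE (mentioned below in the thread-sense of config tooling generally) tries to address more rigorously.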

Yeah, I mean it’s not the end of the world. It’s probably not worth adding backwards compatibility for config files yet - not while development is so rapid. It would just add maintenance baggage.

Take a look at CUE (https://cuelang.org/). It allows for backwards compatible config files, among other things.

vimdiff is very handy for this.

Wow, I was not expecting to see functional P2P Matrix so soon. Props to the folks working on it.

The hard part of selling Matrix to family and friends who currently use WhatsApp is still the user experience. To be fair Riot has improved substantially since I first tried it but there's still some way to go before my Mum will be comfortable using it.

That said, I'm encouraged by the progress I'm seeing! My money's still on Matrix to supersede traditional centralized messaging systems.

Looks cool. The world needs a lot more network-agnostic and P2P apps. We don't need the man in the middle.

Great work as always, Matrix and Riot teams. This is a long-awaited feature; right now I am considering using Riot to replace most of my messaging, if it supports an SMS bridge.

Given the anti-privacy climate in the US, it's great how Matrix is not US based.

There is an SMS bridge that already exists; however, I'm not sure how well it works.


If you're on iOS, there is also an iMessage bridge that exists.


SmsMatrix works decently well. However it does not support MMS messages (group messages or pictures).

Thanks, I didn't know about SmsMatrix.

You could probably build one with Twilio. But otherwise it'd be on the bridge-owner to absorb the cost, so it'd almost certainly be a paid / self-run bridge of some kind.

I'm also interested in an SMS bridge

>"P2P Matrix is about more than just letting users store their own conversations: it can also avoid dependencies on the Internet itself by working over local networks, mesh networks, or situations where the Internet has been cut off."

Which makes it interesting, and worth learning about.

I’ve been interested in adhoc mesh networking for a while but I don’t really know where to get started software or hardware wise. I have lots of OpenWRT compatible equipment lying around and want to experiment.

I feel like internet blackouts are not a distant idea or all that improbable and I want to have some knowledge in my back pocket just in case.

Distributed mobile ad-hoc routing is an area of research where I feel there has never been a satisfactory answer to "how do I deliver data efficiently?" Typical schemes will cut your bandwidth to a quarter of its nominal rate.

Fixed ad-hoc routing is a little better, but tends to be kind of fragile since nodes can appear and disappear at random. I've never found a system that worked as well as I would like, although I think there could be value in a network that separates the high bandwidth user data from the low bandwidth routing data. Like it uses 802.11 for the data, and LoRa for the routing decisions.

That's an interesting idea, especially with routing over LoRa getting a lot of development. I've been following Meshtastic [0] where they're doing both routing and data payloads over LoRa in low powered communications devices.

It would be an interesting idea to have a device that could use 802.11 for data when in close enough range. With chips like the ESP8266 you could still have a relatively low power device but with much greater bandwidth.


A very cool project that has had some success is https://althea.net/. Perhaps you could start a network in your area.

This makes me wonder: isn't this basically the perfect thing to bootstrap a decentralized DNS with? In a messaging system there's no reason you'd ever want to revoke control of a domain, and the idea of genuinely-anonymizable communication should be appealing to just about anyone.

I might not be thinking broadly enough here though.

I think decentralized DNS with machine readable addresses is basically a solved problem (e.g. Magnet links or Tor Hidden Services).

A domain in this case is akin to a hash of a public key, or something like that, just enough to securely identify the target of the communication so there’s nothing to “revoke” although that’s not to say the peer discovery systems could not try to blacklist you.
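The "hash of a public key" idea is easy to sketch: the address is derived from the key itself, so ownership is proven by signing with the key, and there is no registry that could revoke it. The encoding and truncation below are arbitrary illustrative choices; real systems like Tor hidden services differ in the details.

```python
import hashlib

# Sketch of a self-certifying address: derive the identifier from a hash
# of the public key. Anyone can verify that a key "owns" its address by
# recomputing the hash; there is no authority to revoke it.

def address_from_pubkey(pubkey_bytes):
    """Derive a machine-readable (not human-friendly) address from a key."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    return digest[:16]  # truncated for readability; real systems keep more bits

# Hypothetical key bytes for illustration:
pubkey = b"example-ed25519-public-key-bytes"
addr = address_from_pubkey(pubkey)
print(addr)
# Verification is just recomputation:
print(address_from_pubkey(pubkey) == addr)  # True
```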

Decentralized DNS with human readable addresses (unique screen names) is perhaps more a political problem than a technical one, and hence never fully “solved” just different sets of trade-offs that can be made.

People want a permissionless (like bitcoin) name service, and they also don't want good names / trademarks to be squatted.

However, between these two you can only pick one. You can't solve the squatting problem without some kind of authority over issued names.

Social attestations are another solution to this, unless you count them as an authority over issued names. “If all my friends are convinced this person is called Bob, that’s good enough for me”. Doesn’t help with uniqueness though, but that’s what we have keys for.

I'm not talking about the technical problem, the technical problem is solved (including for human-readable addresses, which is the part that actually matters), I'm talking about the political part of the problem. "No one bothers with decentralized DNS," here's the problem space it's the killer tool for.

>there's no reason you'd ever want to revoke control of a domain

Careful with those words. Experience taught me that if there isn't a reason to ever want to do that, reality will eventually provide you with one.

Worst possible case you can drop the messages from a domain, which is how federated systems already work.

There's no reason to ever revoke control for messaging systems, though, genuinely: imagine if your e-mail address could be taken on a whim by anyone. It can be! But you'd never want to revoke control of a domain rather than just marking it as spam or illegal and dropping messages from it. It functions as an inbox more than it functions as an outbox.

Addressing space should be permanent.

Digging a little into the posts about HOW they set up the communication channel, it does seem like it could be used to inform a system for that. So, kinda?

It may not be satisfactory in terms of performance though. Decentralization comes with its own baggage.

Current decentralized DNS projects have many problems that revolve almost entirely around UX, but performance isn't really one of them.

Especially given in an environment like this it'd be used mostly to get the initial connection, performance probably wouldn't be too big a problem.

Which current decentralized DNS projects are you referring to?

Hi @Arathorn, any work on using IPFS as the media store? It seems like this would be also a really good project for Matrix and doubly so for p2p matrix...

hey - for whatever reason, nobody's hooked up IPFS (as far as I know) as a filestore. It might be because you still need to pin the content somewhere, as vertex-four says, and so even in a P2P network the fact that the nodes keep coming and going means it might not help much.

IPFS doesn't help - you still need a store-and-forward architecture for messaging, so you might as well use that for images too.

IPFS is not a store-and-forward architecture; a file does not get automatically stored by the system, so when the sender disconnects, the file is no longer accessible. There's "solutions" to this, like Filecoin, but nobody wants folks to be paying to upload images.

Or, in other words - the problem IPFS solves is peer discovery for a given hash, not storage. In Matrix, you already know who your peer is - it's whoever sent you the message referencing the image.

My idea was more in that every p2p node of Matrix can also be a IPFS server, and be used to pin all of your own media + a small cache of your contacts/group media.

You wouldn't need to save all of the media your homeserver is dealing with, but you still could be reasonably sure that your node would be able to find all media you are supposed to access.

Given that the post talks about libp2p and how libp2p's main driver is IPFS, it seemed reasonable to think that if you are adding Dendrite to your browser you might embed an IPFS server as well.

> + a small cache of your contacts/group media

This idea always sounds interesting and dangerous to me. There's a lot of content you really don't want to be a provider for, for legal and other reasons. Caching it for yourself is one thing - public distribution is another.

Yeah, I stopped development of one project due to this: https://news.ycombinator.com/item?id=22246797

But if you're doing that then those nodes could also implement the store-and-forward aspect of p2p matrix, which comes with its own non-IPFS discovery and thus solves the discovery problem.

libp2p is what IPFS is built on, not the other way round.

I think I am failing to get my idea through. I am thinking more in the sense of having IPFS as a way to increase redundancy of nodes that might be able to serve some media.

Yes, discovery is not an issue for matrix. But p2p matrix on its own is not able to say "oh, your contact is sending you a message that contains a 10GB file, let me see if I can find other nodes that have this file already and get it from the swarm"

10GB is an exaggeration, but picture a group sharing hundreds of silly videos a day and you already can distribute the load quite a bit, no?

Yes, possibly this could be an optimisation; although then you get to worry about leaking information about your group's contents to the world - anything you send, other clients in your group will ask random servers on the internet for, by a globally identifiable hash. In the best case, this just leaks my friends' meme sharing habits - in the worst case, this announces to the world "I am in a group sharing materials that my Government thinks are dangerous". If someone posts an original file to a public group and a private group, then anyone who downloads that file who isn't in the public group obviously knows that person well enough to be in a private group with them, leaking group membership metadata. And so on.

Random identifiers for each instance of sharing a file to a group would thus be needed, reducing the usefulness of this optimisation quite a bit, where e.g. using the results of the group's p2p matrix discovery mechanism as your peer list to download blocks from has much the same results.
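The "random identifiers for each instance of sharing" idea above can be sketched with a per-share nonce. This is a hypothetical toy (function names invented here, not Matrix or IPFS APIs): the nonce travels inside the already e2e-encrypted room message, so only group members can recompute the identifier, and outside observers can't correlate two shares of the same file:

```python
import hashlib
import os

def content_id(data: bytes) -> str:
    """Global content address: identical for every share, so it leaks
    correlations between groups sharing the same file."""
    return hashlib.sha256(data).hexdigest()

def per_share_id(data: bytes) -> tuple[str, bytes]:
    """Derive a fresh identifier for each act of sharing.

    The nonce is distributed only inside the (e2e-encrypted) room
    message, so only recipients can link identifier to content.
    """
    nonce = os.urandom(16)
    share_id = hashlib.sha256(nonce + data).hexdigest()
    return share_id, nonce

meme = b"\x89PNG...same bytes shared to two rooms"
id_room_a, nonce_a = per_share_id(meme)
id_room_b, nonce_b = per_share_id(meme)

# Same file, two shares: outsiders cannot link the two requests...
assert id_room_a != id_room_b
# ...but group members holding the nonce can verify what they fetched.
assert hashlib.sha256(nonce_a + meme).hexdigest() == id_room_a
```

The cost is exactly the one noted above: because the identifier is no longer the global content hash, you lose cross-group swarm deduplication, which was the main point of using IPFS in the first place.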

You are right about the problem of leaking metadata - just by asking for the file reveals to the world that you know about it.

However, this is something that can be fixed in IPFS itself, and I believe it is even on their roadmap: the idea of adding "friend peers". Your IPFS node could request sensitive files from this privileged list of nodes only.

So, your client can have simple logic: if you have an e2e encrypted message with IPFS-linked media, ask only the group's member peers for the contents of the file. Media in messages that are not e2e (or are marked as non-sensitive) can be requested from the global IPFS swarm.

Sure, if IPFS gets that feature, maybe it's worth it. (Although maybe not - at that point IPFS is just an overweight tooling for downloading files from a list of known peers block by block.) I wouldn't hold my breath though.

> at that point IPFS is just an overweight tooling...

Only if you assume that your only use case is to have private communications.

There are a lot of use cases and jurisdictions where sharing files does not have to be private and would benefit a lot from having redundant nodes doing the distribution. The tooling is already there, why not benefit from it?

There are no jurisdictions where all possible information can be shared safely without hiding it.

And IMO, software should maintain privacy by default - doing something less private should involve actively opting into that. Matrix, for example, creates e2e encrypted rooms by default, and even for non-e2e rooms, it doesn't announce the contents of the rooms to unrelated servers.

If you want to publish the fact that you're downloading a file to the world, you should have to actively opt into doing so, every time - you should treat this like reporting your downloads to your Government voluntarily.

> There are no jurisdictions where all possible information can be shared safely without hiding it.

This is not what I am saying. What I am saying is that there are plenty of people in countries doing illegal-on-paper things but are largely ignored by their Governments. No one in Brazil is worried about torrenting, even though it is not legal. No one connecting to a Swedish VPN provider worries much about getting copyright infringement letters, and so on. No one in Russia is worried about downloading music via VK, a social network with more ties to its Government than Facebook has to the CIA.

> software should maintain privacy by default (...) you should treat this like reporting your downloads to your Government voluntarily.

I don't see any contradiction between what you are saying and what I am saying.

Yes, it should be secure by default. Yes, people should be aware that anything that is not sent encrypted and securely should be considered public knowledge.

So what?

Nothing of what you said negates the fact that there is a huge number of use cases where people want things to be public (Youtube? Instagram? Basically every social network?) and that they will benefit from having it that way.

You might not care about these use cases because you are too risk-averse to use them. Fine. There are plenty of other people who are totally okay with these risks. As long as they are aware of the risks, why stop them?

Worse still, between a system that does only one thing "super-privately" vs something that gives them privacy when they want it and also gives them "things where I don't care if it's public or not", which one do you think people will use? Look at another post on the same thread: you have people sharing multi-GB anime over IRC, ffs.

The point I'm trying to argue, poorly, is that implementing a feature where you can right-click and explicitly choose to download something a bit faster is... probably not worth the sheer amount of arguing for "integrate IPFS into Matrix" that I've seen at every turn, for the past few years. People largely seem to want IPFS because IPFS is their pet project, not for any particular edge that it'll give Matrix, and alternative solutions are dismissed out of hand.

Even before p2p Matrix was announced, people turned up in #matrix-dev every week and argued that Matrix should replace its existing, functional file sharing system with IPFS, but could never come up with a reason to put in the work.

Sharing multi-gigabyte files with groups of people in a public way is, yes, maybe something some people do, but if IPFS was uniquely suited for this and the integrated system was deficient, wouldn't people be posting IPFS links in rooms all the time right now, and be able to talk about why they're doing this? Then it'd be a lot clearer that this is a thing people actually want to do and maybe setting up a button to do it in fewer clicks would be worth it.

> people turned up in #matrix-dev

Honestly, for a "normal" homeserver it does not make as much sense as for the p2p version of it. For the "actual server" synapse, I am more interested now in the multi-home-repo that I saw posted also on this thread.

> Sharing multi-gigabyte files with groups of people in a public way is, yes, maybe something some people do

Microsoft executives in the 90s: "Internet search is a silly idea. Once you find a site that you like, you just bookmark it. Who will constantly search for the sites they already know?"

> if IPFS was uniquely suited (...) wouldn't people be posting IPFS links

The use case of posting IPFS links to share content is mostly covered by "posting magnet links", which is basically "why don't people just bookmark the sites they like?"

There are a lot more use cases that can come up once your Matrix node can (a) manage data by its content and (b) know who else has it. And these use cases can be implemented in a way that does not make other use cases less secure.

You are right though that just having people ask for something without any real background on the underlying challenges is a pain in the ass. Barely a year of working with blockchain stuff and I already roll my eyes every time I see someone on a forum doing a "Why don't you... X?" just after reading the word in some random blog. Let's see if I find time to brush up on my Go and contribute to the multi-home-repo project.

I’m interested in what those use cases that you suggest exist are. IPFS has, so far, really not impressed me (nearly all uses I’ve seen so far could literally be replaced with BitTorrent without problems) - I don’t see how integrating it into Matrix fundamentally changes anything.

Anime-over-IRC can hit 10GB files pretty easily. I wouldn't label it an exaggeration so much as "uncommon".

I do like the idea of deduplicating every shared gif though, which would happen automatically since it's content-addressed.
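The dedup-by-content-addressing effect is easy to illustrate. This is a toy in-memory sketch (the `ContentStore` class is invented for illustration, not an IPFS or Matrix API): because the key is the hash of the bytes, posting the same gif three times stores it once:

```python
import hashlib

class ContentStore:
    """Toy content-addressed blob store: the key is derived from the
    bytes themselves, so identical uploads collapse into one entry."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data  # re-storing identical bytes is a no-op
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

store = ContentStore()
gif = b"GIF89a...the same reaction gif, posted in three rooms"
keys = {store.put(gif) for _ in range(3)}
assert len(keys) == 1          # one address for all three posts
assert len(store._blobs) == 1  # stored exactly once
```

This is the upside that has to be weighed against the metadata-leak concerns discussed above: global deduplication and global correlatability are two faces of the same hash.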

matrix-media-repo has an early implementation for IPFS support: https://github.com/turt2live/matrix-media-repo/issues/115#is...

Thanks for sharing this. I am working on matrix hosting and a multi-homed media service will be very useful.

I thought part of the point of how IPFS is designed is that you don't need to "implement" it: you just ask for a hash (file) in the IPFS mount and it goes and grabs it if it's not already there. It should just be a matter of handling IPFS URLs correctly. Which, I guess, is something to implement, kinda, though if you already have a handler for 'ipfs://' and Matrix passes a click on such a URL off to the OS to handle as it will (as it might pass 'http://' to a browser), then there really isn't anything to do.

I am talking about the media store, the subsystem of the Matrix homeserver that stores all media that is uploaded.

Ah, yeah, that's less straightforward.

checks the docs

Looks like you could point it at an IPFS "MFS" (Mutable File System) mount instead of whatever directory it's using now for its normal local storage (which I assume exists?), and that'd do the trick. Files would be stored in IPFS (so mirrorable through that, or whatever it is that you've got in mind) but appear as ordinary files with normal names, so the existing media store might manage to Just Work.

[EDIT] Incidentally, MFS is exactly what I wished existed back when I was kinda halfway considering using IPFS for my personal files. It's been added since the last time I looked in on the project. It was too easy to "lose" files in IPFS otherwise. That sort of fixes the problem, at least, provided the file tree can also be replicated pretty easily.

Have you looked at git-annex recently?


It takes a little bit to work out why you want it and why it's so much better, in that unixy way many of us insist on, vs git-lfs (supporting IPFS should be a clue there).

*As a warning, some of the defaults in last year's git-annex 7 versions aggressively annexed files in git archives, but that's fixed now.

I’m excited to try matrix with bridges to:

- Facebook Messenger
- Instagram
- WhatsApp
- Slack
- Discord
- email (yes)

If I can use a local client (iOS or Android), and not have to resort to a separate server, it’s awesome

Would love to have these bridges bundled with P2P matrix so I can run them all from my own computer.

How does Matrix achieve linearizability in group chat / federation?

I'm curious about what engineering tradeoffs have been made. IMO a ZooKeeper-type system with an eventual transition to async BFT consensus is the best practical approach today. Node join/leave etc. is too hard a problem right now in non-centrally-managed p2p topologies.

The most interesting bit of Matrix academically is the merge resolution algorithm used to converge the room DAGs in a BFT manner. (It's not really consensus, given it only cares about being consistent locally). https://matrix.org/docs/guides/implementing-stateres is a good guide, or failing that https://matrix.uhoreg.ca/stateres/reloaded.html if you speak Haskell, https://github.com/matrix-org/matrix-doc/blob/erikj/state_re... for the original spec proposal, or https://matrix.org/docs/spec/rooms/v2 for the terse formal spec itself.

Edit: it's not "centralisation" :|

I hope you reconsider this design decision.

What is the value in federation if people can come away with different opinions as to what was said in a conversation? E.g. a $$ contract negotiation where an attacker uses something like a message replay attack to give different sides different views of what the contract value is.

Edit: Or does federation mean proxying & replication in this case?

I don't think you've understood the design decision.

Matrix isn't a distributed ledger, and doesn't provide transactional guarantees. For unencrypted conversation, the user trusts their server not to spoof their messages. Other servers cannot spoof history as messages are signed into a room DAG. The worst scenario is that a malicious server could indeed withhold messages from a room DAG, and this would be indistinguishable from a network partition or a slow server.

The way we mitigate your own server attacking your conversations is at the E2EE layer - ensuring the messages are encrypted by the right user and spotting replay attacks based on signed E2EE metadata.

There's no mitigation against servers dropping your messages, however, but practically that has little value - you're not going to be able to use it to give different sides of a $$ contract negotiation different views of what the contract value is.
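The asymmetry described above (tampering is detectable, withholding is not) can be illustrated with a toy hash-linked DAG. This is not Matrix's actual event format (real events are also signed by the origin server and carry much more state); it's a minimal sketch of why a server can't rewrite or reorder history without breaking hashes, but can silently omit an event:

```python
import hashlib
import json

def event(sender: str, body: str, parents: list[str]) -> dict:
    """Toy room-DAG event that commits to its parents' hashes."""
    ev = {"sender": sender, "body": body, "parents": sorted(parents)}
    ev["hash"] = hashlib.sha256(
        json.dumps(ev, sort_keys=True).encode()
    ).hexdigest()
    return ev

def valid(ev: dict) -> bool:
    """Recompute the hash over everything except the hash itself."""
    body = {k: v for k, v in ev.items() if k != "hash"}
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest() == ev["hash"]

a = event("alice", "offer: $100", [])
b = event("bob", "accepted", [a["hash"]])

# A malicious server cannot rewrite the offer: the hash no longer matches.
forged = dict(a, body="offer: $1")
assert valid(a) and valid(b) and not valid(forged)
# ...but it CAN withhold event b entirely: a alone is still perfectly
# valid, and this is indistinguishable from a network partition.
```

In real Matrix the per-server signatures additionally pin each event to its origin server, which is why spoofing (as opposed to withholding) is off the table even without E2EE.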

I don't understand the distinction you are making between a distributed ledger and distributed consistency.

I suspect it will be easy to create a byzantine error avalanche with your current design - or break consistency (eg. different views).

I'd be keen to see what aphyr could do with a jepsen test. Perhaps you could run an open hack matrix contest to see what people can achieve. It might surprise you.

I'm saying that Matrix does not aim to be globally consistent within a room, or even seek consensus.

It's perfectly valid and indeed desirable for the network to partition, and for one side of the network to go off talking amongst itself, and the other side to continue, and then for the conversation to join up again afterwards.

Different views are a feature. Imagine you're using P2P Matrix to stay in touch while hiking - you fire up adhoc wifi, use mDNS to discover other peers, and get chatting away. Some people drift in and out of contact, and perhaps even the party splits. But the conversation continues fine for those still present in it. Nobody can spoof each other's messages; nobody can replay each other's messages; nobody can reorder messages; the worst that can happen is for messages to get withheld, maliciously or otherwise.

> I'd be keen to see what aphyr could do with a jepsen test.

Me too. We're overdue an audit, and we'll reach out (assuming he's not too fiendishly expensive).

> Perhaps you could run an open hack matrix contest to see what people can achieve. It might surprise you.

There's already quite a high incentive on the open network to show off by exploiting bugs in Matrix - which is what helped accelerate the v2 of the state resolution algorithm that I tried to link earlier.

Separately, the French government maintains a bounty for their Matrix deployment over at https://yeswehack.com/programs/tchap - and we're also looking forward to an academic paper being published in the coming weeks which is a super deep dive into analysing and auditing our state resolution alg. It might surprise you.

Don't know but it seems to be centralisation

Does anyone have a guide/instructions for self-hosting a Matrix server? I'm not sure which server to run, I heard Synapse is a bit heavy and there's a lighter Rust alternative? Which one should I use?

Is it okay to run it at home, or will I lose messages on downtime? I assume other hosts will retry when my connection is back up?

Synapse is the best bet still and keeps improving. Conduit doesn’t federate yet but looks promising.

https://matrix.org/blog/2020/04/06/running-your-own-secure-c... is a guide I did for selfhosting Synapse.

Thank you! Docker will do fine.

I used this Ansible playbook [1] on a Digital Ocean droplet. Very easy to configure, fully-featured.

[1] https://github.com/spantaleev/matrix-docker-ansible-deploy

You don't lose any messages with downtime if others in the group are on different home servers

That sounds great, thanks. Unfortunately I just tried to use Riot.im on the default server to talk to a friend (also on the default server), and my messages aren't getting to him or his to me, so it looks like Matrix still has some way to go.

Use Mozilla's homeserver instead. A far better experience, presumably because it runs on New Vector's modular.im infra. Supports SSO (can even sign in with your Firefox account).

Will that fix things? Apparently, when I tried to DM my friend and invited him to a DM, the room should have been his username but was "empty room" instead.

It may. Riot can be a bit of a pain to get the initial handshake going. If all else fails just keep blowing up the room and retrying until it sticks. I've had to do that before with members of my team.

I see, thanks. I tried with another friend, I keep trying to verify him but I keep seeing a spinner "waiting for him" and he sees nothing.

I'm going to give Matrix a few more years to mature and try again then, I think.

We're not aware of any performance problems on the matrix.org server right now (we had a breakthrough in performance last week), so I think something else is going wrong. What clients were you using?

I was using web and he was using the android client. Things started working when he switched to web too.

precisely which android client was he using? (and can he submit a bug report from it?)

My guess is that you were trying to do new-style verification, which requires RiotX, which is shortly going to replace the old Riot Android client.

I think he was using the old client, yes. I will ask him to file a report, I tried RiotX and it seems to work well.

It's really not that bad or I certainly wouldn't be using it.

The matrix home server has horrible lag spikes at some times of the day.

The e2e implementation was pretty miserable until a few weeks ago but is now pretty smooth sailing.

Been using it for 3-4 years and quite happy.

As of last week the matrix.org homeserver should no longer be laggy, as per https://twitter.com/matrixdotorg/status/1265412147737223174 fwiw :)

It has seemed much better but anecdotally I was just about to talk to my friend about how much better Matrix had been in the last few weeks and....

Unusable between 14:20 and 14:28 PST today :)

Is there a public ops page for those perf numbers?

Thanks for fixing e2e; cross-signing was a huge improvement.

hm, weird - not seeing anything around 14:20 on the graphs. might be network connectivity or something else going wrong, but not an overloaded synapse for once.

status.matrix.org exists but is fairly useless - we need to publish the graphs publicly.

Are those new issues due to P2P? I've been using riot for some time last year and everything seemed to "just work". Was I lucky, or did it get worse?

P2P is an entirely separate codebase & deployment. The issues here sound like a bug in the legacy Riot/Android app which is about to be replaced by RiotX. In general things have been getting progressively better.

> absolute total autonomy and ownership of their conversations, because the only place their conversations exist is on the devices they own.

This also means a total loss of data if something happens to the device.

Of course, I can imagine some use cases where this would be a plus, but I suspect that most of them are on the wrong side of the law. Now, coming from Russia, I know that it doesn't automatically mean it is something bad, but still, in the vast majority of cases, it does.

Not my cup of tea, thanks, because keeping my data is very important for me. For this, good old federated model with client-server architecture strikes the best balance between reliability and privacy. Just use your own domain name, and all will be fine

It doesn't necessarily mean total loss of data, because it would replicate onto your other devices - and you could also run a server too as an always-on p2p node. The idea is to have a hybrid, so casual users can start off p2p but then pin their accounts to a server when they see the value.

This would be similar to people who run Syncthing meshes for their file system backups, correct?

yup, precisely

A big question that I don't see addressed in the text or asked here yet: why does this require the user to not be in private browsing mode? Some necessary API that's not available in incognito mode? Is the restriction temporary?

Empirically service workers don't seem to work in private browsing mode, and the demo relies on running Dendrite as a service worker.

> In Firefox, Service Worker APIs are also hidden and cannot be used when the user is in private browsing mode.

Says https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...

Doesn't any difference then become a way for websites to detect if you're in incognito, which is something browsers have been fighting back and forth every version?

I'm not a web dev, but I'd guess this is because the implementation relies on browser storage, which is removed when a private browsing session ends. So possibly it would work, but only until the tab / window was closed?

I believe that fully e2ee communication, with the negotiation bootstrapped over DoT (or, who knows, in the near future bootstrapped through p2p), has the potential to be the most reliable decentralised and privacy-by-design communication protocol.

Good announcement; I look forward to testing it as soon as possible.

This is a really interesting project, thanks for sharing!

> "We also want to take a look at the DAT / hypercore / hyperswarm / Cabal ecosystem to see if there’s a match :) "

In collaboration with the Hyper* team we have built a p2p chat app prototype using the hyper* stack and would like to share this with you.

> "Firstly: we do not yet have a solution for “store and forward” nodes which can relay messages on behalf of a room if all the participating devices are offline."

The main selling point for our prototype is how we are approaching and solving this exact problem. We're sure you will find it interesting.

I've reached out to your support email so that we can figure something out.

Would p2p mobile Matrix drain my battery super fast?

right now, yes. in future, with low bandwidth transports and smarter routing algorithms, it might be okay.

I've been following for a while, and I like the comment that you're exploring various options including Yggdrasil and DAT/hypercore. Perhaps a silly question, excuse my absolute ignorance, but is there any value in looking at GNUnet? (I'm sure it has already been considered.)

Had to download the DMG file directly from riot, I was having no luck finding it on the "official apple app store" https://riot.im/download/desktop/

That's because we haven't published it there. Do people actually use the Mac App Store?

I am looking forward to it. Matrix gets better and better, and at the same time still more exciting.

The rate that Matrix is able to put out excellent updates has been stunning. Many other businesses in the same space struggle to consistently build quality video software at the same rate. Great job Matrix team!

Very nice. Started using Matrix sometime ago and was thinking in running a homeserver myself.

This new p2p stuff is a great addition to an already very versatile platform. I think Matrix has a huge potential.

This kind of privacy/redundancy tech has suddenly gotten really real the past couple weeks.

I think someone needs to look into the onboarding process. My experience so far:

1: Clicked on "Try Now".

2: Was greeted with a blob of text. Spotted "The easiest way to try Matrix is to use the Riot Web ...".

3: Clicked on "Riot Web" because that is a link.

4: Got transferred to a different website with a lot of buttons on it.

5: Clicked on the green "Get started button"

6: Got transferred to another page with a bunch of buttons on it

7: Clicked on "Browser"

8: A new window with a "Sign in" and a "Create Account" button opened

9: Clicked on "Create account"

10: Entered a username and password

11: Got asked to do the Google Captcha

12: Had to select 4 tiles with traffic lights and click next

13: Was told to select stairs.

14: Since there were no stairs, I clicked "Skip"

15: Had to select 7 tiles with fire hydrants and click next

16: Had to select 4 tiles with traffic lights and click next

17: Had to select 3 tiles with traffic lights and click next

18: Had to accept terms and conditions and click "accept"

19: Had to enter a recovery passphrase.

20: Had to confirm my recovery passphrase.

21: Was forced to click a "copy" button to copy the recovery passphrase.

22: Had to click "continue"

23: Staring at a spinner for a while.

24: Was prompted with an "OK" button. Clicked it.

25: Was prompted with a "Welcome to Riot" page with different options.

26: Clicked "Explore public rooms".

27: Clicked on the first one.

28: Had to look at a spinner for a while with "Joining room" next to it.

29: Still looking at the spinner ...

30: Spinner is still spinning ... will take the time to post this to HN.


31: Back to the Matrix window. Hurray! I'm in a channel. Typing "Hello" and enter. It seems to work.

I wonder if it would be an option to shortcut these 31 steps into just one and throw everyone who clicks "Try Now" into a "Welcome to Matrix" channel.

So, that's the current Matrix network, not the new P2P Matrix stuff. That said, I think the main takehomes are:

* We need to make it easier to jump into Riot/Web from the matrix.org home page.

* Google reCAPTCHA sucks; we know.

* The new recovery passphrase thing is an error - we know, we're in the middle of fixing it, targeting next week.

* Joining rooms is slow - we know, this is harder to fix, but is on the menu.

Thanks for spelling it out ;)

Ok, great. Here comes more feedback :)

Since I did not save my password when signing up, I proceeded to set an email. Got a confirmation email and clicked the link in it.

Then signed out and clicked the "Set a new password" link.

But it does not work. I always get the message "Failed to send email: This email address was not found".

I've filed that for you at https://github.com/vector-im/riot-web/issues/13897. Please keep the feedback coming there; we are currently chasing down first time user experience snafus like this on the current app.

I also recently signed up, and totally agree that it was a little bit of a PITA.

I will say, however, that a lot of the difficulty appears to be tied to the privacy aspects. I think it's the only service I checked out that didn't require an email or phone number to sign up. I think the privacy/ease of signup trade-off is okay right now, and I also think that Riot is trying to make it better.

Also, I'm successfully using it with non-tech-oriented family and it's been quite good!

I'm not sure whether you are talking about the regular Matrix network on purpose, or not.

Regarding the p2p app (https://p2p.riot.im), yeah, that's a proof-of-concept they bolted onto an existing client. They could have adjusted the login page.

I need to click "Create an account" for it to work, give or take one page refresh. Then I can open the room directory and pick a room.

However, I tend to like some of that hand-holding. Plus, when someone joins a chat system, they're generally introduced to it by acquaintances that already use it, so the next onboardings are easier. Not saying it shouldn't be improved, of course.

Add that to the pain of RiotX and the normal Riot app on Android. Verification doesn't work at all. I tried an encrypted channel with a bunch of friends before and I think it's a pretty bad experience. The app forgets that you verified from time to time.

Just a few days ago, we had this comment https://news.ycombinator.com/item?id=23358863 complaining about SQLite not being available in browsers and now we see an actual application using SQLite in a browser :D

Well, it's a bit of a hack: it's using https://github.com/kripken/sql.js/ in JS from Go via https://github.com/matrix-org/go-sqlite3-js, and persisting its data by snapshotting it to IndexedDB via https://github.com/skaegi/idbfs every 30 seconds. Not sure this quite qualifies as ACID ;P

Would this Docker image work on a Raspberry Pi 4 (4GB)?


When the riots were on a different continent nobody gave a shit, did they?

Two other perspectives:

- see it as a nod to activists everywhere

- think about the fact that a decentralized e2e encrypted network might actually be a good tool for people fighting oppression

I'm not sure what you think calling a product something means. Do you think it means that 'Riot' endorses rioting? Or is anti-rioting? Or is making fun of rioters? Or something else?

There's nothing insensitive about the name.

You might also have missed the fact that Riot has existed since 2016: https://en.wikipedia.org/wiki/Riot.im

That said, I'll admit my first reaction when I was introduced to the name back then was "whoa -- it sounds a bit violent".

Well, it's 2020 right now. Some things that used to be considered acceptable for the wrong reasons are no longer acceptable today. I do think the project should take a clear stance against this horrible violence and racism, and show that they actually care about their potential users -- not just about "cool tech", whatever that might mean.

How about freedom chat? That would be such a riot!

Should the very talented and accomplished progressive rock band Isis change their name? I don’t think so.

Um, they did exactly that, in 2018. And it was the right decision to make.

Ok, true, they did a reunion show as Celestial (I even had tickets to it), but IIRC they didn't change their name up to that point.
