
I am a contributor to Iroh ( https://github.com/n0-computer/iroh ), an open source library for direct QUIC connections between devices that can be behind a NAT.

Our library is general purpose and can be used whenever you need direct connections, but on top of Iroh we also provide iroh-blobs, which adds BLAKE3 verified streaming over our QUIC connections.

Blobs is currently a library that provides low-level primitives and point-to-point streaming (see e.g. https://www.iroh.computer/sendme as an example/demo).

We are currently working on extending blobs to also allow easy concurrent downloading from multiple providers. We will also provide pluggable content discovery mechanisms as well as a lightweight content tracker implementation.

There is an experimental tracker here: https://github.com/n0-computer/iroh-experiments/tree/main/co...

Due to the properties of the BLAKE3 tree hash you can start sharing content even before you have completely downloaded it, so blobs is very well suited to the use case described above.

We have already done a few explorations of media streaming over iroh connections; see for example https://www.youtube.com/watch?v=K3qqyu1mmGQ .

The big advantage of iroh over bittorrent is that content can be shared efficiently even from behind routers that don't allow manual or automatic port mapping, such as many carrier-grade NAT setups.

Another advantage that BLAKE3 has over the bittorrent protocol is that content is verified incrementally. If somebody sends you wrong data you will notice after at most ~16 KiB. Bittorrent has something similar in the form of piece hashes, but those are coarser grained. Also, BLAKE3 is extremely fast due to a very SIMD-friendly design.
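
If you want to play with the incremental side of this, here is a minimal sketch using only the blake3 crate: hashing in 16 KiB chunks yields the same root hash as hashing everything at once. (The verified streaming in iroh-blobs builds on the same tree structure via the Bao encoding; that part is not shown here.)

    // Minimal sketch: incremental BLAKE3 hashing with the `blake3` crate.
    // Hashing in 16 KiB chunks produces the same root hash as hashing all at once.
    fn main() {
        let data = vec![0xABu8; 100 * 1024]; // 100 KiB of example data

        // Hash everything in one call.
        let all_at_once = blake3::hash(&data);

        // Hash the same data incrementally, 16 KiB at a time.
        let mut hasher = blake3::Hasher::new();
        for chunk in data.chunks(16 * 1024) {
            hasher.update(chunk);
        }
        let incremental = hasher.finalize();

        assert_eq!(all_at_once, incremental);
        println!("root hash: {}", incremental.to_hex());
    }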

We are big fans of bittorrent and actually use parts of bittorrent, the mainline DHT, for our node discovery.

Here is a talk from last year explaining how iroh works in detail: https://www.youtube.com/watch?v=uj-7Y_7p4Dg , also briefly covering the blobs protocol.


BitTorrent v2 has incremental hashes via Merkle trees. They're surprisingly good. I implemented them here: https://github.com/anacrolix/torrent/issues/175#issuecomment...


> Shame it's being abused by crypto bros who want to treat it as money.

Iroh contributor here. I don't know what you are referring to. Iroh is just a library to provide direct QUIC connections between devices, even if they are behind a NAT. We don't have any plans to do a blockchain or an ICO or anything like that.

I am not aware of any project called Iroh that is a scam, but if there is, please provide a link here. It's not us.

I know there were some scammers trying to make a BLAKE3 coin or something about a year ago.


I actually wasn't referring to iroh but rather ipfs / the stratos thing that I mentioned.

My only gripe with iroh currently is that its browser wasm feels like too much for me / I don't want to learn Rust.

So I actually wanted to build something that required connectivity and I used nostr, because nostr is great for websites and, not gonna lie, it's awesome as well (but nostr is also riddled with crypto bros :( )


OK, thanks for the clarification.

I have nothing against crypto in principle, but I really don't want Iroh to be associated with crypto scams.

Iroh is just a library for p2p connections. You can use it for crypto, but I would say that the majority of our users are non-crypto(currency).

We will try to make the wasm version easier to use, but if nostr works well for you, go for it! Not the right place if you want to avoid crypto bros though :-)


Haven't used Nostr recently, but isn't it associated with bitcoiners rather than crypto bros? At least it used to be that way.


We use pkarr at iroh.computer for node discovery. It is anything but slow. It is very rare for a lookup to take more than a few milliseconds. It is sometimes faster than our non-p2p node discovery option, which uses DNS.

DHTs get a bad rap because of many recent DHTs that were horribly inefficient. But mainline is different. Many of the design decisions of mainline seem very limiting at first, but make a lot of sense for perf.

E.g. a pkarr record can only be 1000 bytes, so the entire message fits into a single non-fragmented UDP packet.
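
To make the single-packet claim concrete, here is a back-of-envelope sketch (the header sizes are standard, but the 300-byte allowance for DHT message framing is my own rough assumption):

    // Rough budget check: a 1000-byte pkarr record plus some DHT message
    // overhead fits in one unfragmented UDP packet on a 1500-byte MTU path.
    const MTU: usize = 1500;              // common Ethernet MTU
    const IPV4_HEADER: usize = 20;        // minimal IPv4 header
    const UDP_HEADER: usize = 8;
    const MAX_PAYLOAD: usize = MTU - IPV4_HEADER - UDP_HEADER; // 1472 bytes

    const PKARR_RECORD: usize = 1000;
    const FRAMING_ALLOWANCE: usize = 300; // assumed bencode framing, keys, signature

    fn main() {
        assert!(PKARR_RECORD + FRAMING_ALLOWANCE <= MAX_PAYLOAD);
        println!("{} bytes of payload budget, {} used",
            MAX_PAYLOAD, PKARR_RECORD + FRAMING_ALLOWANCE);
    }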


The simplest possible integration between DNS and peer-to-peer overlay networks


To be fair, IPFS does offer not just content addressing but also a mechanism for mutability with IPNS. You can think of a willow namespace (or iroh document) as a key-value store of IPNS entries.

The problem with IPNS is that the performance is... not great... to put it politely, so it is not really a useful primitive to build mutability on.

You end up building your own thing using gossip, at which point you are not really getting a giant benefit anymore.


Is the performance a critical design flaw or just an implementation issue?


Difficult to answer.

IPNS uses the IPFS kademlia DHT, which has some performance problems that you can argue are fundamental.

For solving a similar problem with iroh, we would use the bittorrent mainline DHT, which is the largest DHT in existence and has stood the test of time - it still exists despite lots of powerful entities wanting it to go away.

It also generally has very good performance and a very minimalist design.

There is a Rust crate to interact with the mainline DHT, https://crates.io/crates/mainline , and a more high-level idea to use DHTs as a kind of p2p DNS, https://github.com/nuhvi/pkarr


Design flaw. In IPFS every piece of data (even every chunk of large files) is globally indexable in the same namespace. You need namespaces and/or a path to restrict yourself to just the subset of peers that might actually have the data.

It would be possible to add a layer on top of IPFS to include some context with every hash lookup so the search can be more focused, but the original design just assumed it was ok to do a log2 search for every chunk.


That is not a problem specific to IPNS though. Using a DHT for something like IPNS is fine. Publishing roots of large data sets is also fine(ish).

Using it to publish every tiny chunk of a large file is a horrible idea. It leads to overwhelming traffic.

If you publish a few TB of data, due to the randomness of the DHT xor metric you basically have to talk to every node on the network. Add to that the fact that establishing a TCP libp2p connection is much more heavyweight than sending a single UDP packet like in the bittorrent mainline DHT, and you are basically screwed.
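
To put rough numbers on that (4 TiB is just an example figure, and 256 KiB is the usual IPFS chunk size):

    // How many provider records does publishing every chunk of 4 TiB imply?
    fn main() {
        let data_bytes: u64 = 4 * 1024 * 1024 * 1024 * 1024; // 4 TiB
        let chunk_size: u64 = 256 * 1024;                    // 256 KiB chunks
        let chunks = data_bytes / chunk_size;
        // Every chunk hash gets announced to the DHT, and records have to be
        // re-published periodically, so this is a lower bound on DHT traffic.
        println!("{} chunks => {} provider records to publish and refresh",
            chunks, chunks);
    }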

In iroh we don't publish at all by default. But if you have to use a DHT, the fact that we have a single hash for arbitrarily large files due to BLAKE3 verified streaming helps a lot.

You still get verified range access.


Imagine if DNS supplied every URL and not just domain names. You need some mechanism to propagate resource changes. IPNS has two practical mechanisms: a global DHT that takes time to propagate, and a pub/sub that requires peers to actively subscribe to your changes.


btlink does DNS per domain name, which you could argue is a sweet spot between too many queries and being too broad. At least in the case of the web, it works nicely.


It's a design flaw.


It's more generic than that.

Syncthing is designed specifically for file system sync (and does a very good job). Willow could be used for file system tasks, but also for storing app data that is unrelated to file systems, like a KV store database.

You should be able to write a good syncthing-like app using the willow protocol, especially if you choose BLAKE3 as the hash function.


This is a good question. But it is worth noting that not everything has to scale globally.

E.g. in iroh-sync (which is an experimental implementation of the willow protocol) you are not concerned with global scaling. You care only about nodes that are in the same document.

If you request hash QmciUVE1BqKPXMSvTTGwHZo1ywYdZRm9FfBvEJkB6J4USb via IPFS, you are trying to globally find anybody who has this hash, which is a very difficult task.

If you ask for some content-addressed data in an iroh document, you know to only ask nodes that participate in this particular document, which makes the task much easier.

Edit: regarding clients, iroh is released for macOS, Windows and Linux. Iroh as a library also works on iOS. Download instructions are here: https://iroh.computer/docs/install


Prefix pruning is a very different approach than tombstones. An update will actually remove data, and not just mark the data as removed.

Maybe "total erasure of data" is a too strong promise, but the fact that you can not force nodes that you don't control to unsee things is common knowledge, so in my opinion this does not need a qualifier.


Think of it like const generics in languages like Rust or C++.

You can make two data structures with a const parameter.

If the parameter is not the same, they are not compatible (not the same type). The parameter can be tuned according to the specific needs of the application.
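
A minimal Rust sketch of that analogy (the names Namespace and sync are made up for illustration):

    // The const parameter is part of the type, so instances with different
    // parameters are different, incompatible types.
    #[derive(Clone, Copy)]
    struct Namespace<const MAX_PAYLOAD: usize>;

    // Only namespaces with the same parameter can be synced with each other.
    fn sync<const P: usize>(_a: Namespace<P>, _b: Namespace<P>) {}

    fn main() {
        let a = Namespace::<1024>;
        let b = Namespace::<1024>;
        let c = Namespace::<4096>;

        sync(a, b);    // ok: same parameter
        // sync(a, c); // compile error: Namespace<1024> != Namespace<4096>
        let _ = c;
    }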


I was not aware of iRODS.

Iroh is named after a certain fictional character who likes tea. Any similarity is a coincidence.

But it seems like iRODS is much more high-level than iroh. E.g. iroh certainly does not contain anything for workflow automation. You could probably implement something like iRODS using iroh-net and iroh-bytes.


> Iroh is named by a certain fictional character that likes tea.

"The file was in my sleeve the whole time!"

