Hacker News
Orbit – Distributed, serverless, peer-to-peer chat application on IPFS (github.com)
144 points by niklasbuschmann on Oct 16, 2016 | 70 comments



Interesting. Can someone explain how IPFS works? Is it like Tor? I don't have any interest in running some sort of distributed content farm that might place CP on my computer. Even if the chance of that happening is 0.00001%.


IPFS isn't an anonymous network and you don't share files that you didn't download. It works more like a distributed CDN for static files.


The best one-liner I've heard: it's like one giant git repo that's inside of one giant bittorrent.

My (imperfect) understanding is that it runs like a market: you temporarily store and forward blocks (<1MB) that are considered "valuable" (i.e. popular) in exchange for people forwarding you the blocks that you want. So there's a cache at each node where popular blocks are held - which I'm sure you can keep in RAM if you want. So while it's possible that content you don't want might pass through your IPFS node, it's pretty ephemeral.

In general, I don't think IPFS is a great place to do naughty things - it's not big on anonymity, and since blocks drop off the network if they're not being actively requested, to keep something up there you have to store it permanently _somewhere_, which is going to be traceable to the same degree that running a webserver is.


As far as my understanding goes (and I've spent a day discussing it with IPFS authors in person this September), IPFS doesn't store or forward anything you haven't explicitly requested. So you participate only in sharing things you're aware of (and bad things are rather easy to remove from that cache) — and that's a deliberate design decision.

Things in the local IPFS cache can indeed be "garbage-collected" (and there's a CLI command to trigger GC manually), but the IPFS daemon has a concept of _pinning_: pinned objects won't be collected, and will remain stored (and shared) for as long as the pin exists.
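To make that concrete, here's a toy model of the cache/pin/GC behavior described above. The class and method names are hypothetical, not the real go-ipfs datastore; it's just a sketch of "unpinned blocks are collectable, pinned blocks survive GC":

```python
import hashlib

class BlockStore:
    """Toy model of a node's local block store: cached blocks are
    garbage-collectable, pinned blocks are not."""

    def __init__(self):
        self.blocks = {}   # content hash -> data
        self.pinned = set()

    def add(self, data: bytes) -> str:
        # Content-addressed: the "name" of the block is a hash of its bytes.
        h = hashlib.sha256(data).hexdigest()
        self.blocks[h] = data
        return h

    def pin(self, h: str):
        self.pinned.add(h)

    def gc(self):
        # Drop everything that isn't pinned, in the spirit of `ipfs repo gc`.
        for h in list(self.blocks):
            if h not in self.pinned:
                del self.blocks[h]

store = BlockStore()
ephemeral = store.add(b"block I merely viewed")
important = store.add(b"block I want to keep seeding")
store.pin(important)
store.gc()
print(important in store.blocks, ephemeral in store.blocks)  # True False
```

So a node only keeps serving what it has pinned (or what is still sitting in its cache between GC runs).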


IPFS is not an anonymity network.


Indeed not.

In fact, IPFS, via the DHT, tells the network your whole network topology, including any internal addresses you may have, and VPN endpoints too.

There are still discussions on how to handle Tor connections, because right now, if you were to use a Tor connection with IPFS, it would tell the whole network all of your public, private, and .onion addresses.


Why would it do that though? I don't really care for Tor, but private network? That's a bit strange to me. Can you or someone explain? Seems like it has no regard for privacy by the sound of it.


Sure can.

What IPFS does is look at network topology to determine the 'closeness' of nearby IPFS nodes. It then prefers 'closer' nodes to speed up transfers and requests.

For example, take the Gangnam Style video: it shot to something like 100M views rapidly. With YouTube, that's 100M individual downloads. With IPFS, it would be 1 or 2 downloads per local network, and then those machines would serve the content to the rest of the local network rather than hitting the 'net at large.

The only good way to do this is to include all the adapters in the DHT, so the network knows where all the machines are. It also allows IPFS to work seamlessly across NATs and other junky applications of IPv4 (and, egads, we're already seeing IPv6 NAT).
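A crude sketch of that locality preference (hypothetical helper names; real IPFS peer selection is considerably more involved than this): prefer the peer whose address shares the longest prefix with ours, so a same-LAN peer beats the wide internet.

```python
import ipaddress

def common_prefix_bits(a: str, b: str) -> int:
    """Number of leading bits two IPv4 addresses share."""
    x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
    return 32 - x.bit_length()

def closest_peer(me, peers):
    # Same /24 subnet wins over a random internet host -- a stand-in
    # for "fetch from the machine down the hall, not across the ocean".
    return max(peers, key=lambda p: common_prefix_bits(me, p))

peers = ["8.8.8.8", "192.168.1.7", "203.0.113.5"]
print(closest_peer("192.168.1.42", peers))  # 192.168.1.7
```

Which is exactly why the DHT wants to know about your internal addresses: without them, two nodes on the same LAN would look equally "far" away.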


That's neat, though in some cases, say intellectual property on your LAN, or other things, you may want those interfaces ignored, I would think. It sounds like setting up IPFS has to be a rather isolated process for some, which I find kind of limiting if you're forced to jump through hoops for something that could at the very least be opt-in / configurable.


Well, all I can say in its defense is that it's still "Alpha Software". There's stuff that's very much not ready for private or secure networks.

But for working with data that's intended to be open, it's wonderful.


> Seems like it has no regard for privacy by the sound of it.

IPFS is an infrastructure building block. You're judging it by throwing out "privacy" as if every layer needs to implement privacy itself, ignoring that it's the application layer that should be responsible for securing private communication.

I could seed an encrypted file on IPFS and then put out a bounty for people to cache it aggressively. I'd increase the bounty if you've cached for me before, and increase it for anyone caching it around a particular timeframe. An agent watching for my contact downloading that file can't tell whether the file is meant for my contact or not.


There's going to be support for private networks, and for various ways of encrypting data. We just haven't gotten to it yet :)

You can of course already encrypt the data yourself before adding it to ipfs.
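A minimal sketch of that encrypt-then-add flow. The XOR hash-counter keystream below is a toy, NOT real cryptography (a stand-in for something like AES-GCM), and the `cid` here is just a plain sha256 hex digest, not a real IPFS multihash; the point is only that the network ever sees and addresses the ciphertext:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Toy hash-counter keystream. Do not use for real secrets.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(key)
    return bytes(b ^ next(ks) for b in data)

secret = b"meet at dawn"
key = b"key shared out of band"
ciphertext = xor_crypt(key, secret)

# What "ipfs add" would content-address is only the ciphertext:
cid = hashlib.sha256(ciphertext).hexdigest()
assert xor_crypt(key, ciphertext) == secret
```

Peers caching and relaying the block never learn the plaintext; only holders of the out-of-band key can decrypt what the hash names.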


I'm thinking about creating a gateway for IPFS from clearnet to OnionCat IPv6. So other onion servers can participate without revealing public IPs.


You WILL have to hack on IPFS software, as to not release all your adapter information. Even if you tunnel all the IPFS datastream through .onion , the datastream inside will tell everyone your IP adresses, internal (unroutable) and external.


I've been playing with Freenet, and it does the same. So the IPFS peer would be a VM with no public IP address, which connects through a Tor gateway VM. I'm guessing that IPFS needs a reachable IP:port, so I'd use a throwaway VPS as a clearnet proxy.


There's a good overview of IPFS here - https://www.youtube.com/watch?v=skMTdSEaCtA


Freenet encrypts content, parcels it into chunks and distributes those chunks amongst peers. Does this meet your definition of "distributed content farm that might place CP on my computer"?

I'm curious because I've seen objections to Freenet for that reason yet the content stored is in no way CP. No bad content can be reconstructed from the data in your store. Not just because it's encrypted but because you'd be holding random small chunks of the file.

The vast majority of Freenet content is probably about Freenet itself (Web of trust data, Sone traffic, FMS traffic), not bad content.


Those chunks are intended to be reconstructed into child porn, and the "you can't tell it's child porn" objection is a weak one for those who don't want to be part of its distribution on moral grounds.


Currently, you don't serve files etc unless you manually pin them yourself.


Not true.

You share:

1. Files you pinned (think of it as torrent seeding)

2. Files in your IPFS cache

   a. Cache files are added when you request IPFS content
   b. Garbage collection triggers at regular intervals and removes non-pinned content

3. The default files added to a new IPFS repo (unless you removed them or init'ed with the appropriate option to not include them)

To answer the GP's question: As long as you don't pin child porn, and you don't look for child porn, there's 0% chance in IPFS-land.


Would (a) not be

  a. Cache files are added to when _something causes a request for_ IPFS content
The distinction being that "something" is not always a direct action from the user.

If content on IPFS (i.e. a web page?) can reference and load content from other addresses (assumption), then could someone end up in the situation where they are "hosting" (from the cache) something they would not expect to be? (Until garbage collection clears it.)

If this seems far-fetched: a submission to HN the other day surprised a few people[1] by making an HTTP request to an adult website to check whether visitors had an active session (without displaying any content).

[1] https://news.ycombinator.com/item?id=12692389


This is true and good to keep in mind, but it's also an inescapable risk of any network involving autonomous agents. We're also susceptible to downloading content that's different than advertised (e.g. Rickrolling).


And in all honesty, the whole "but child porn OMG" thing is a non sequitur.

Child porn already exists on the web directly. And Tor Hidden Sites. And Freenet. And other places.

The real problem with CP law is the fact that there's no mens rea requirement. A script can download it to your browser cache behind blank images. It's in your cache, and you have no clue.

In the current situation, you're breaking the law. With a mens rea requirement, you aren't: there was no intent to obtain it, therefore you aren't at fault. Think of it as a shoplifter compared to something that fell into your cart unnoticed. Same idea.

(In all honesty, I hold to Stallman's idea that CP shouldn't be illegal, period. It's proof of a crime. Snuff videos of people being murdered aren't illegal, although the murder itself very much is. Child abuse is illegal, as well it should be, but proof of child abuse shouldn't be.)


You still have to convince others there was no willingness. And it's much easier to convince them of that if you're not also sending the copies to other people.


Exactly. Stay away from anything that could even be believed to be hosting felony content. I always tell people that if they do run it, run it on a hosted server at a reputable place so the police just grab that box. They might still hit your residence, but they might not if they find nothing on the box they grab while it's active.


It also helps if the hosting facility doesn't know who you are.


I don't think it's anonymous, and AFAIK you don't store random files, so you can't store child pornography unless you request said files.


kefka's replies to this question are accurate (thanks!)


> "As seductive as a blockchain’s other advantages are, neither companies or individuals are particularly keen on publishing all of their information onto a public database that can be arbitrarily read without any restrictions by one’s own government, foreign governments, family members, coworkers and business competitors"

https://blog.ethereum.org/2016/01/15/privacy-on-the-blockcha...


There is no blockchain in IPFS.


Oh okay, I was confusing it with this: https://en.wikipedia.org/wiki/ZeroNet


There's also no blockchain in ZeroNet.


Isn't Bitcoin a pretty integral part of ZeroNet though? (I haven't tried it out myself yet.)


No. The addresses used for the sites are compatible with bitcoin addresses, that's about it.


The browser demo does seem to be working, though it feels very slow. Beautiful interface though! One of the best I've seen yet.

What is going on underneath? Are you guys using WebSocket or WebRTC? The reason I ask is because I wrote an interactive coding tutorial for building a distributed chat app ( http://gun.js.org/converse.html ), and it uses WebSockets to communicate with a federated relay peer server. I'm hoping to add WebRTC support but I'm curious what you guys are doing. Like, IPFS doesn't have pub/sub support right? So did you add this?


The version deployed at orbit.libp2p.io uses orbit-db, which currently uses redis for pubsub. However, pubsub is being worked on and already exists, for example, in go-ipfs#master behind a feature flag. Run `ipfs pubsub --help` after building from source to try it out. It's also being worked into js-ipfs.



Developer of Orbit here. Great to hear all the feedback, thank you!

Most questions have been already answered, but to clarify:

Orbit indeed uses IPFS pubsub (https://github.com/ipfs/go-ipfs/pull/3202) for real-time message propagation, no servers are involved. In addition, it uses orbit-db (https://github.com/haadcode/orbit-db) - a distributed database on IPFS - for the message history, so the messages are not ephemeral and the channel history can always be retrieved. This is a really nice property and allows Orbit to work in "disconnected" or split networks, as well as offline.
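A minimal sketch of the hash-linked-log idea behind a distributed message history (this is an illustration of the concept, not orbit-db's actual data model): each entry is content-addressed and links to the previous head, so any peer holding the latest hash can walk back through the whole channel.

```python
import hashlib
import json

def put(store: dict, entry: dict) -> str:
    """Content-address an entry, as an IPFS-backed log would."""
    data = json.dumps(entry, sort_keys=True).encode()
    h = hashlib.sha256(data).hexdigest()
    store[h] = entry
    return h

def append(store, head, payload):
    # Each message links to the previous head; the returned hash
    # becomes the new head of the channel.
    return put(store, {"payload": payload, "prev": head})

def history(store, head):
    # Walk the prev-links back to the first message, then reverse.
    out = []
    while head is not None:
        entry = store[head]
        out.append(entry["payload"])
        head = entry["prev"]
    return list(reversed(out))

store, head = {}, None
for msg in ["hello", "world", "!"]:
    head = append(store, head, msg)
print(history(store, head))  # ['hello', 'world', '!']
```

This is also why the history survives network splits: the log is just immutable content-addressed data, so whoever has (or later fetches) the blocks can reconstruct it.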

Orbit has been a testbed for IPFS applications and orbit-db came out of that work, enabling various types of distributed, p2p applications and use cases: comment systems, votes/likes/starring systems (with counters), feeds, etc. And now with IPFS pubsub, we're finally at a point of being completely serverless and distributed which is hugely exciting and opens so many doors for future work!

I recently gave a talk at Devcon2 about Orbit and developing distributed real-time applications (https://ethereumfoundation.org/devcon/?session=orbit-distrib...) and while the videos of the talk are not out yet (afaik coming very soon!), there's the uncut video of the talk here http://v.youku.com/v_show/id_XMTc1NjU1NzEyNA==.html?firsttim... if you're interested to learn more. Video of the demo I showed in the talk is here https://ethereumfoundation.org/devcon/wp-content/uploads/201....

I'll be hanging out on #ipfs in Orbit if you're interested to try it out. Note that the Electron app and the web version at orbit.libp2p.io don't talk to each other atm (we're working on this), so I would highly recommend to try out the Electron app.

While you're at it, try drag & dropping files and folders into a channel, that's one of the coolest features of Orbit atm imo :)

We're actively developing Orbit and making a push in the next few months. If you're interested in taking part in the design and development, or would like to develop your own apps using the same tech, join us on Github https://github.com/haadcode/orbit/issues.

Thanks for the comments everyone, much appreciated!


This is a perfect non-use-case for IPFS: ephemeral messages in a chat application.


What makes it ephemeral? Doesn't IPFS store all objects forever?


My point is that chat messages are meant to be ephemeral, so it seemed like a waste to store them in IPFS, hash them, and make each one identifiable to the entire world forever by a unique, useless hash.

But since I posted the comment I realized that this is actually a cool feature for a chat app to have.


When I choose to use a chat application, one of my reasons to use it is that there is an accessible record of the conversation.

More concerning to me with IPFS is privacy. I would want to encrypt messages so that only my chat partner and I could read them. But then would they be interesting enough to live for long on the network?


IPFS isn't in the business of providing storage for free to users. If you want it stored forever you should pin it or use a service that does the same. Just because the name is permanent doesn't mean the data is unless it's in demand from someone.


The name is misleading, you must concede.


What?

The name of a piece of data is cryptographically secure. It is permanent, but the data being named doesn't have to continue to exist on a hard drive just because its name does.
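That distinction between the name and the data can be shown in a few lines. This is a sketch using a bare sha256 hex digest as the "name" (real IPFS names are multihash CIDs, but the principle is the same):

```python
import hashlib

store = {}

def name_of(data: bytes) -> str:
    # The "name" is derived purely from the bytes -- it exists
    # independently of whether anyone still stores those bytes.
    return hashlib.sha256(data).hexdigest()

data = b"some document"
name = name_of(data)
store[name] = data

del store[name]            # every node GCs it: the data is gone...
assert name not in store   # ...and lookups fail,
assert name_of(b"some document") == name  # but re-adding the same
                                          # bytes yields the same name.
```

So the name is "permanent" in the sense that it will always refer to exactly those bytes, not in the sense that the bytes are guaranteed to be retrievable.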


The cool thing about a unique hash per message is that it makes the message linkable in other applications; after all, it's just an IPFS hash. So for example, if you share a file in Orbit, you can fetch that file with IPFS just as you normally would.

Linked data ftw! :)


Nope. You have to re-announce them periodically (and this is ignoring all the routing attacks you can do on IPFS).


Routing attacks?


IPFS uses a DHT to look up and announce the availability of content. DHTs are notoriously vulnerable to routing attacks, whereby an attacker can insert nodes into the network and take control of individual hash buckets. This is achieved by inserting the nodes such that they become responsible for pointing clients to the target hash bucket. If an attacker takes control over all such nodes (and can do so with a pathological sequence of join requests and data inserts/deletes), then (s)he can go on to censor buckets and/or serve malicious data from them.
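The core of such an attack is just ID placement in the keyspace. A simplified Kademlia-style sketch (random 256-bit IDs, XOR distance): the attacker keeps generating identities until it owns IDs closer to the target key than any honest node, at which point lookups for that key terminate at attacker-controlled nodes.

```python
import hashlib
import os

def node_id(seed: bytes) -> int:
    # Node IDs and content keys live in the same 256-bit keyspace.
    return int.from_bytes(hashlib.sha256(seed).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia's distance metric.
    return a ^ b

target = node_id(b"some content key")
honest = [node_id(os.urandom(16)) for _ in range(1000)]
best_honest = min(xor_distance(n, target) for n in honest)

# Brute-force identities until three of them out-rank every honest node
# for the target bucket. With cheap identities this is only a matter of
# hashing speed -- which is the whole Sybil problem.
attacker = []
while len(attacker) < 3:
    cand = node_id(os.urandom(16))
    if xor_distance(cand, target) < best_honest:
        attacker.append(cand)
```

Defenses like S/Kademlia make identity generation expensive, but as noted downthread, that only raises the attacker's cost rather than eliminating the attack.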


You're very right about the weaknesses of DHTs. We're planning ahead for future mitigations by making every layer of IPFS pluggable. We also divide routing into Peer Routing and Content Routing, and soon Wire Routing once there is a packet-switched overlay network in place.

There are also improvements upon Kademlia such as S/Kademlia which we've partially applied to IPFS (and we'll continue to apply more).


Routing attacks aren't specific to DHTs--they affect all kinds of structured overlay networks. If your routing layer allows anyone to route requests, then the attacker can insert its nodes into the routing layer (e.g. as a sybil) and censor data. Also, if your routing layer makes forwarding decisions based on arbitrary data previously uploaded, then the attacker can pathologically insert, delete, and request data to either deny service to the network or divert requests to attacker-chosen nodes.

Splitting routing into peer routing and content routing isn't going to fix this if you're still using a DHT to do both. Also, S/Kademlia's decentralized countermeasures only slow down the attacker, and they don't save you if the attacker has a botnet.

> We're planning ahead for future mitigations by making every layer of IPFS pluggable.

The problem with this line of thinking is that it ignores the fact that IPFS-with-DHT-routing is a fundamentally different system than e.g. IPFS-with-DNS-routing, which are both fundamentally different from IPFS-with-Namecoin-routing. This is because they each make fundamentally different guarantees about availability, durability, and security. In general, the end-to-end properties of a distributed system do not logically decompose.

Plugins make this problem worse, not better. Now it's not enough to know that I'm storing my data with IPFS; now I also need to know which plugins both I and my peers are using, since the set of plugins is what determines the properties of the data store.


Is there a list of some active channels I can join?


I like the name IPFS, but what does the "IP" in IPFS stand for?


InterPlanetary


Does IPFS provide an HTTP adapter?


Yes there's an HTTP-to-IPFS gateway included in go-ipfs. It's what backs https://ipfs.io, for example.


Yes, both as a read-only gateway (browser navigation friendly) and as a read+write HTTP API, you can find several API client libraries at:

https://github.com/ipfs/?utf8=%E2%9C%93&query=ipfs-api


Another false headline. "serverless"? Nope. A redis server must be running.


Hah, I agree with your resentment of the recent wave of "serverless" :)

By now redis has been replaced with native IPFS pubsub, which is provided by both go-ipfs and js-ipfs. The only remaining server-ish component is some means of bootstrapping, i.e. entering the network.

I'm not sure how up-to-date the readme is, but the demo (orbit.libp2p.io) is out-of-date and still uses redis pubsub. I pinged @haadcode, who can go into more detail.


Orbit used to use a Redis server for pubsub messaging, but it doesn't anymore, as IPFS has implemented peer-to-peer pubsub. Totally serverless.


Are you familiar with IPFS pubsub? Would you be able to link some information about the implementation/usage?

I'm quite surprised to hear what you said. I've been following multiple Github issues on IPFS pubsub, and none of them (that I followed) announced success. I thought it was still in the planning phase.


There is some prototype work done in master already (if you build go-ipfs from source).

   $ ipfs pubsub --help
   
   USAGE
     ipfs pubsub - An experimental publish-subscribe system on ipfs.
   
   SYNOPSIS
     ipfs pubsub
   
   DESCRIPTION
   
     ipfs pubsub allows you to publish messages to a given topic, and also to
     subscribe to new messages on a given topic.
     
     This is an experimental feature. It is not intended in its current state
     to be used in a production environment.
     
     To use, the daemon must be run with '--enable-pubsub-experiment'.
   
   SUBCOMMANDS
     ipfs pubsub ls                    - List subscribed topics by name.
     ipfs pubsub peers                 - List all peers we are currently pubsubbing with.
     ipfs pubsub pub <topic> <data>... - Publish a message to a given pubsub topic.
     ipfs pubsub sub <topic>           - Subscribe to messages on a given topic.
   
     Use 'ipfs pubsub <subcmd> --help' for more information about each command.


Wow, thank you! Very interesting! Quite odd that none of my follows mentioned this going live, must be an issue of too many github issues.

Time to dig into this a bit!

edit: For anyone else perplexed and surprised by me, seems `floodsub` is their moniker for the new tech, and it was (in part) merged in here: https://github.com/ipfs/go-ipfs/pull/3202

This is really, really cool! Also, if this implementation is robust and performant, this is huge for IPFS.


Yes we haven't been very vocal about it yet because it's not yet part of a go-ipfs release -- it'll be in the next release, go-ipfs v0.4.5.

Note on the name: floodsub is one specific implementation of pubsub. There are many ways to do pubsub, and we chose to go with a naive and simple flooding implementation for now. In the future there will be more, and you'll be able to use different implementations simultaneously.
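For intuition, the "naive flooding" part can be sketched in a few lines. This models the idea, not floodsub's actual wire protocol: every node forwards each message to all of its neighbours, and a seen-set stops the flood from looping forever.

```python
from collections import defaultdict

class FloodSub:
    """Toy flooding pubsub: deliver to local subscribers, relay to
    every neighbour, dedup by (peer, message id)."""

    def __init__(self):
        self.links = defaultdict(set)       # peer -> neighbours
        self.subs = defaultdict(set)        # topic -> subscribed peers
        self.seen = set()                   # (peer, msg_id) pairs
        self.delivered = defaultdict(list)  # peer -> payloads received

    def connect(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def subscribe(self, peer, topic):
        self.subs[topic].add(peer)

    def publish(self, peer, topic, msg_id, payload):
        if (peer, msg_id) in self.seen:
            return                          # already relayed here: stop
        self.seen.add((peer, msg_id))
        if peer in self.subs[topic]:
            self.delivered[peer].append(payload)
        for n in self.links[peer]:          # flood to every neighbour
            self.publish(n, topic, msg_id, payload)

net = FloodSub()
net.connect("a", "b"); net.connect("b", "c"); net.connect("a", "c")
net.subscribe("c", "chat")
net.publish("a", "chat", "m1", "hello")
print(net.delivered["c"])  # ['hello']
```

The obvious cost is that every message traverses every link, which is why smarter (e.g. mesh- or tree-based) pubsub implementations are planned as alternatives.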


I imagine an IPFS chat application could store its necessary information in... IPFS. So why involve redis?


Because IPFS only does distributed storage. It has no processing power or logic to handle data transformations.

Now, one avenue to handle that is js-ipfs. In order to update things like IPNS records, you need the private key of the node you're trying to change. Interestingly enough, any machine with the pub/priv keypair can submit an IPNS change.

So effectively, you could have a shared repo like Usenet, where everyone has the pub/priv keypair and pushes updates via js-ipfs. Although I could easily imagine how that could get super-heavy.
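The mutable-pointer idea behind that can be sketched as follows. This is a toy: real IPNS uses asymmetric keypairs (the name is derived from the public key, records are signed with the private key), whereas here a single shared HMAC key stands in for the keypair, matching the shared-repo scenario above. The record hashes are hypothetical placeholders.

```python
import hashlib
import hmac

records = {}  # name -> latest signed record

def publish(key: bytes, target_hash: str, seq: int) -> str:
    # The mutable "name" is derived from the key, so it stays stable
    # while the target it points at changes.
    name = hashlib.sha256(key).hexdigest()
    sig = hmac.new(key, f"{target_hash}:{seq}".encode(),
                   hashlib.sha256).hexdigest()
    old = records.get(name)
    if old is None or seq > old["seq"]:  # highest sequence number wins
        records[name] = {"target": target_hash, "seq": seq, "sig": sig}
    return name

def resolve(name: str) -> str:
    return records[name]["target"]

key = b"key shared by the group"
name = publish(key, "hash-v1", seq=1)
publish(key, "hash-v2", seq=2)   # newer record replaces the pointer
publish(key, "hash-v0", seq=0)   # stale record is ignored
print(resolve(name))  # hash-v2
```

Anyone holding the key can push an update; everyone else resolves the same stable name to the latest target, which is exactly what makes the "everyone shares the keypair" repo both possible and heavy to coordinate.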

_________________________

Another idea I had was to build something akin to AWS Lambda, except using Tor Hidden Services and Erlang. It would effectively be a private computation cloud. The reason for the HS is so that each machine, regardless of its location, could always talk to the others using Erlang's built-in networking support. (I am using non-standard applications of Tor Hidden Services - read more about what I'm doing here: https://hackaday.io/project/12985-multisite-homeofficehacker... )


That's very cool! Have you looked at OnionCat? I've managed a global LizardFS cluster on a PeerVPN network, overlayed on OnionCat IPv6. Latency is too high for erasure coding or XOR goals, but chunk-replication mode works reliably.


redis was mainly used to get pubsub for the first iteration (demonstrated in June); now (demonstrated in September) Orbit uses IPFS pubsub (available in the go-ipfs implementation) for a completely distributed web application.


Isn't Redis used just for running the test suite?


It was just used for pubsub on the 1st iteration, see: https://news.ycombinator.com/item?id=12721898



