
Ask HN: Is a P2P browser possible? - nkkollaw
I'm wondering: if the FCC succeeds in passing their awful law, will Americans
develop some sort of P2P browser (or connection), to get either full speed or
better speed than before?

Would something like this be technologically possible?
======
throwaway2016a
There are decentralized and anonymized networks (Tor, I2P, etc.) that may be
effective at preventing ISP interference, but those networks are not known for
their speed. So while they may prevent tampering, they will almost certainly
be slower than whatever the ISPs put in place.

There are also decentralized VPN-like networks, such as ZeroNet and Docker
overlay networks, that route traffic peer to peer, as well as plain VPNs,
which require traffic to be routed through a central server. These are very
efficient, and as a bonus the traffic is encrypted at least part of the way,
so it can't easily be tampered with without raising red flags.

Then there is decentralized file hosting like Dat, IPFS, and, if you want to
go old school, BitTorrent (though that has centralized trackers). All are
great for spreading bandwidth across hosts, and if throttling is applied per
destination IP, they might be an effective way to get around it.

None of them would be great at preventing throttling, though, except in that
they would be less obvious than large amounts of data coming from a central
source. But ISPs could "solve" that by using a whitelist to make some sites
immune to limits while leaving other destination IPs severely limited by
default.

Overall, I'm not worried about throttling of websites. I'm worried about
throttling of protocols and tampering with websites. An ISP can easily decide,
for example, to let only some ports run unthrottled and completely cripple all
other ports, or to force customers to use its DNS, which is a big privacy
concern.

Edit: Just to add to that DNS piece. I've always felt that Namecoin and the
like are underrated. That is to say, DNS on the blockchain. It is an awesome
use of blockchain tech that provides transparency and allows users to look up
domains 100% anonymously. Tor and ZeroNet both have decentralized service
lookup (others do as well, I'm sure), but the service names are often hashes,
not something friendly.
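
As a rough illustration, here is a minimal sketch of resolving a Namecoin "d/"
name (the namespace used for .bit domains) via a local namecoind node's
JSON-RPC interface; the port and credentials below are placeholders for
whatever your node is configured with:

    # Sketch: look up a Namecoin name via the local namecoind JSON-RPC API.
    # Endpoint, port and credentials are placeholders, not universal defaults.
    import json
    import requests  # third-party HTTP client

    RPC_URL = "http://127.0.0.1:8336"      # assumed local namecoind RPC endpoint
    RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

    def resolve_bit_domain(name):
        """Look up a name like 'd/example' (i.e. the .bit domain example.bit)."""
        payload = {"jsonrpc": "1.0", "id": "dns",
                   "method": "name_show", "params": [name]}
        resp = requests.post(RPC_URL, auth=RPC_AUTH, data=json.dumps(payload))
        resp.raise_for_status()
        record = resp.json()["result"]
        # For d/ names the 'value' field holds JSON with DNS-like data (ip, ns, ...)
        return json.loads(record["value"])

    print(resolve_bit_domain("d/example"))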

~~~
3pt14159
I really do not understand why DNS was done so poorly. It could have been so
much simpler and more secure in so many ways.

Why can't I serve an HTTP header or status code to update DNS? Why does
propagation take _hours_ when a simple pub-sub model would have worked and
added a couple of seconds of latency at most? How does DNS, something more
important to get right than cryptographic certificate authorities, not come
secure by default? How is it that a wildcard cert costs 10x as much as a TLD?
Why does my browser not come pre-loaded with the top million DNS records? It
would be a couple dozen megabytes (uncompressed).
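
A rough back-of-envelope check of that last claim (the per-record sizes are
assumptions, purely for illustration):

    # Rough sanity check of the "top million DNS records" size estimate.
    # Assumes ~25 bytes per domain name, 4 bytes for an IPv4 address and a
    # few bytes of overhead per record -- illustrative numbers only.
    RECORDS = 1_000_000
    AVG_NAME_BYTES = 25
    ADDRESS_BYTES = 4
    OVERHEAD_BYTES = 6  # type, class, TTL, etc., heavily simplified

    total_bytes = RECORDS * (AVG_NAME_BYTES + ADDRESS_BYTES + OVERHEAD_BYTES)
    print(f"{total_bytes / 1_000_000:.0f} MB uncompressed")  # ~35 MB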

The whole thing seems so stupidly put together.

~~~
stevekemp
Remember when DNS was created; that answers your question.

It's the same reason why FTP is terrible, and SMTP allows spoofing. The DNS
protocol has been updated with additional record-types over time, but the
protocol itself is largely unchanged since the time it was created, when the
network was smaller, and the world was more trusting.

------
gnode
As I understand it, the real concern is that ISPs will create "fast lanes" for
companies they're friendly with, and reduce or stagnate the speed of
everything else. Subscribers will be happy with their low base rate because
they can stream 4K video from Netflix, yet your video startup will fail
because of relatively poor performance, or will have to pay a ransom to the
ISP gatekeepers. Peer-to-peer applications would be hit just as hard, because
they wouldn't be fast-laned.

------
LukeB42
A P2P browser: [https://beakerbrowser.com/](https://beakerbrowser.com/)

A P2P caching proxy for in front of your existing browser:
[https://github.com/Psybernetics/Synchrony](https://github.com/Psybernetics/Synchrony)

------
lnx01
They already exist:

[https://freenetproject.org/](https://freenetproject.org/)

[https://zeronet.io/](https://zeronet.io/)

[https://geti2p.net/en/](https://geti2p.net/en/)

~~~
CapacitorSet
There's also IPFS, which is a network of static, hash-addressed content.
However, note that the official client comes with "blocklists" which may be
used for censorship.
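
A minimal sketch of what "hash-addressed" means in practice: you ask for
content by its CID, and any node or gateway can serve it, here via the public
ipfs.io HTTP gateway (the CID below is a placeholder, not a real object):

    # Fetch content by its content identifier (CID) through a public IPFS
    # HTTP gateway. Any gateway or local node should serve the same bytes
    # for the same CID, since the address *is* the hash of the content.
    from urllib.request import urlopen

    cid = "QmExampleCidPlaceholder"  # placeholder, substitute a real CID
    url = f"https://ipfs.io/ipfs/{cid}"

    with urlopen(url) as resp:
        data = resp.read()
    print(len(data), "bytes fetched for", cid)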

~~~
lgierth
> There's also IPFS which is a network of static, hash-addressed content

There are means for dynamic content too: IPNS and pubsub; the latter was on HN
just 3 days ago:
[https://news.ycombinator.com/item?id=15879752](https://news.ycombinator.com/item?id=15879752)

> However, note that the official client comes with "blocklists" which may be
> used for censorship

There are no blocklists in any of the IPFS clients and libraries at the
moment. However, there are plans for opt-in, community-maintained blocklists
that'll allow communities to govern what content they allow.
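
A minimal sketch of the IPNS side, assuming the `ipfs` binary is installed and
a local daemon is running; it adds a file and points the node's IPNS name at
the new hash, which is roughly how mutable names are layered on top of
immutable, hash-addressed objects:

    # Sketch: publish updated content under a stable IPNS name via the ipfs CLI.
    # Assumes a local ipfs daemon; the file name is just an example.
    import subprocess

    def publish(path):
        # `ipfs add -Q` prints only the final hash of the added content
        cid = subprocess.check_output(["ipfs", "add", "-Q", path], text=True).strip()
        # Point this node's IPNS name at the newly added content
        subprocess.check_call(["ipfs", "name", "publish", f"/ipfs/{cid}"])
        return cid

    print(publish("index.html"))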

------
ruiramos
Have a look at [https://beakerbrowser.com/](https://beakerbrowser.com/) and
the DAT project ([https://datproject.org/](https://datproject.org/))

~~~
nkkollaw
This forces you to create a website specifically for the technology, though.

I was wondering if there was something to decentralize downloading anything
without any intervention from the author.

~~~
yoshuaw
It sounds like you're asking for a p2p CDN / caching layer. Having it be auto-
generated would be tricky, as it's hard to determine if the page you're seeing
is meant to be public or private (e.g. for your eyes only).
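
A rough sketch of the kind of conservative heuristic such a caching layer
might apply, only sharing responses that look explicitly public (the header
names are standard HTTP; the policy itself is just an illustration):

    # Heuristic sketch: may a P2P cache redistribute this response?
    # Conservative rule: only responses explicitly marked public, with no
    # cookies or per-user variation, are treated as safe to share.
    def is_shareable(headers):
        cache_control = headers.get("Cache-Control", "").lower()
        if "public" not in cache_control:
            return False
        if "private" in cache_control or "no-store" in cache_control:
            return False
        if "Set-Cookie" in headers:
            return False
        if "authorization" in headers.get("Vary", "").lower():
            return False
        return True

    print(is_shareable({"Cache-Control": "public, max-age=3600"}))           # True
    print(is_shareable({"Cache-Control": "private", "Set-Cookie": "sid=x"}))  # False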

------
epanchin
It is nonsensical that ISPs would implement throttling by blacklisting certain
sites. It is more likely they would implement it by whitelisting services
which pay them, i.e. a P2P browser would be throttled.

~~~
amelius
P2P is already throttled, because of the crippled upload speed that most ISPs
implement. Upload is often only 1/10 of download speed on residential internet
connections.

I think Net Neutrality should also address this.

~~~
dx034
Upload speeds are often that low in order to increase download speeds. If you
have the choice between 30/30 Mbit or 50/10 Mbit, most users will have a much
better experience with 50/10. Upload speeds beyond 10 Mbit/s are rarely
noticeable in the standard applications of most users (streaming, surfing).

It also kind of prevents people from running servers at home, but renting a
server with average connection quality (what you'd get at home: no redundancy,
no excellent peering) is not much cheaper than the energy cost of running one
at home (at least in Europe). So I doubt that this is the main motivation for
ISPs.

------
squarefoot
I believe it would be doable, but we should forget about today's beefy web
pages: a P2P web would probably be made of fast text search and slow, delayed
access to binaries (big images, PDFs, audio/video media, etc.). For example, a
fast main page not unlike the HN front page, with very small thumbnails, where
all other media links point to P2P (torrent?) magnet links. And of course the
whole "active web" thing has to be ditched in favor of extremely lightweight
protocols that take into account the loss of almost everything realtime.
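
A small sketch of that page-generation idea, where heavy assets become magnet
links rather than inline content; the infohashes below are made-up
placeholders that would normally come from pre-seeded torrents:

    # Sketch: generate a lightweight page where large media are magnet links
    # instead of inline assets. Infohashes here are made-up placeholders.
    MEDIA = {
        "talk.mp4":  "0123456789abcdef0123456789abcdef01234567",
        "paper.pdf": "89abcdef0123456789abcdef0123456789abcdef",
    }

    def magnet(name, infohash):
        return f"magnet:?xt=urn:btih:{infohash}&dn={name}"

    links = "\n".join(
        f'<li><a href="{magnet(n, h)}">{n}</a></li>' for n, h in MEDIA.items()
    )
    page = f"<html><body><h1>Posts</h1><ul>\n{links}\n</ul></body></html>"
    print(page)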

~~~
ComputerGuru
It sounds better in every possible way.

------
cup-of-tea
Browsers already are peer-to-peer, but they make the connection through a
packet-switched network (the internet). I think what you're getting at is
decentralisation. This is absolutely possible. In fact, the Internet was
specifically designed for it.

The problem you'll run into is physical infrastructure. At some point you
can't escape the fact that you need to transmit packets over a great distance.
This infrastructure comes at great cost and is best distributed and shared
amongst everyone much like the highways.

~~~
pbhjpbhj
I wouldn't describe client-server as p2p.

Opera created a peer-to-peer browser not long before going under: Unite, IIRC.
The browser was both a client and a server; you had a virtual fridge to put
messages and photos on, could open folders on your computer to any subset of
your chosen peers, etc.

I was really excited about the prospects it seemed to offer; I blogged about
it at the time: [http://alicious.com/opera-about-to-change-the-world/](http://alicious.com/opera-about-to-change-the-world/).

------
Confiks
The main problems I see with P2P browsers for the existing web are:

1) Most websites are actually web applications which have to maintain state
via cookies over a TLS-secured TCP connection. This means that any indirection
must involve tunneling TCP connections, which time out fairly quickly and
require several round trips to set up.

2) Even when websites are read-only and have no state to maintain, there is no
common way for a peer to prove that content is authentic. Unfortunately, the
authenticity 'guarantee' that TLS provides does not transfer to a third party
[1]. There is a proof-of-concept server implementation for non-repudiation
over TLS [2], but every website would have to choose to implement it, and
there aren't many incentives to do so.

The best you could do here is to try to reach a consensus about what the
content of a page is, but you would have to account for subtle or not-so-
subtle differences between requests made by different clients at different
times.

[1] [https://crypto.stackexchange.com/questions/5455/does-a-trace-of-ssl-packets-provide-a-proof-of-data-authenticity/5467#5467](https://crypto.stackexchange.com/questions/5455/does-a-trace-of-ssl-packets-provide-a-proof-of-data-authenticity/5467#5467)

[2] [https://tls-n.org/](https://tls-n.org/)
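
A minimal sketch of that consensus idea: several peers fetch the same URL and
the content is accepted only if a strict majority report identical bytes. In a
real system each fetcher would route the request through a different remote
peer; here a plain direct fetch stands in for that:

    # Sketch: accept a page only if a strict majority of "peers" report the
    # exact same bytes. direct_fetch stands in for a peer-mediated fetch.
    import hashlib
    from collections import Counter
    from urllib.request import urlopen

    def direct_fetch(url):
        with urlopen(url) as resp:
            return resp.read()

    def consensus_fetch(url, fetchers):
        bodies = [fetch(url) for fetch in fetchers]
        counts = Counter(hashlib.sha256(b).hexdigest() for b in bodies)
        digest, votes = counts.most_common(1)[0]
        if votes > len(bodies) // 2:
            return next(b for b in bodies
                        if hashlib.sha256(b).hexdigest() == digest)
        raise RuntimeError("no majority: peers disagree about the page content")

    # Demo with three identical fetchers; real peers would be independent nodes.
    page = consensus_fetch("https://example.com/", [direct_fetch] * 3)
    print(len(page), "bytes agreed upon")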

~~~
edvinbesic
Doesn't IPFS solve some of the authenticity issues, at least? I'm not very
familiar with it, but (from my shallow understanding) if the signature matches
some published version it is guaranteed to be the same file.

[1] [https://ipfs.io](https://ipfs.io)
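
A simplified illustration of that principle: if you already know a published
digest, any copy of the file can be verified locally. (Real IPFS identifiers
are multihash CIDs computed over a DAG of chunks, not a plain SHA-256 of the
file bytes, so this only shows the general idea; the file name is a
placeholder.)

    # Content addressing in miniature: verify a downloaded file against a
    # digest published out-of-band. IPFS CIDs rest on the same principle,
    # though the real format (multihash over chunked DAG nodes) differs.
    import hashlib

    def verify(path, published_sha256_hex):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest == published_sha256_hex

    # This particular digest is the SHA-256 of an empty file, for demonstration.
    expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    print(verify("download.bin", expected))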

------
arithma
Will a hyper-connected, fiber-optic, globally available, IPv6-enabled internet
turn the globe into a distributed data center (or just a real network) instead
of clusters of centralization?

Maybe, akin to all the new startups that don't actually own the infrastructure
(Uber/cars, Airbnb/housing, Amazon/retail), a startup could use existing
resources to let people rent their machines to each other.

~~~
pythonaut_16
You should be able to tie that in as a Proof of Work of some sort on a
blockchain, right?

Not to bring everything back to buzzwords, but that actually seems like an
interesting application of blockchain tech, letting people rent out their
otherwise idle desktops to process and serve web services.

~~~
arithma
Ethereum (for example) can be used to pay if the contract is fulfilled (e.g.
an intensive computation job).

1. You need to be able to verify that the other party is honest.

2. When the data is sensitive, you need to make sure that the computation is
opaque to the computer performing it. This is still an active research area, I
believe. One of the keywords I associate with it is homomorphic encryption.
Alternatively, consensus or verification can be done with zero-knowledge
proofs or something similar.

[https://en.wikipedia.org/wiki/Homomorphic_encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption)
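
A toy illustration of the general idea: textbook RSA happens to be
multiplicatively homomorphic, so a third party can multiply two encrypted
values without ever seeing them. The tiny key below is a classic textbook
example and offers no security whatsoever:

    # Toy demo of a homomorphic property: with textbook (unpadded) RSA,
    # Enc(a) * Enc(b) mod n decrypts to a * b mod n, so an untrusted party
    # can compute a product on ciphertexts alone. Insecure demo key only.
    n, e, d = 3233, 17, 2753   # p = 61, q = 53; textbook example parameters

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    a, b = 7, 6
    product_ciphertext = (enc(a) * enc(b)) % n  # computed without knowing a or b
    print(dec(product_ciphertext))              # 42 == 7 * 6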

------
inscrutable
Maidsafe (if/when completed) will offer an untraceable web with no weak
points. It may not be useful for all web content, but for secure comms it will
be hard to beat.

[https://blog.maidsafe.net/2017/12/11/safe-network-autumn-winter-2017-development-update/](https://blog.maidsafe.net/2017/12/11/safe-network-autumn-winter-2017-development-update/)

------
finchisko
You can use Beaker Browser. Personally, I miss a P2P application server,
meaning you could deploy your app to the P2P "cloud" and not care how it gets
hosted or load-balanced... Something similar to Ethereum smart contracts, but
with the possibility of running code without any restrictions. Add a P2P
database and I will call it a real revolution.

So far nothing like that exists, but the potential is huge. It could
effectively disrupt any cloud service as we know it today and make a lot of
money without building big and costly hardware infrastructure.

~~~
toyg
It's already difficult for a lot of companies to trust a single infrastructure
provider (be it AWS, Google or anyone else). Handing your critical
infrastructure to a bunch of distributed unknowns? Not gonna happen.

~~~
deevolution
Encrypt all the data and put it on a distributed database like a blockchain,
and reward/incentivize people to provide disk space. Filecoin does this.

~~~
finchisko
Yeah, but that's just disk space. I meant running applications.

~~~
netsharc
Isn't that the idea of Ethereum? Smart contracts can run logic written in
whatever language they invented for that.

~~~
finchisko
No, the ambition of smart contracts is not to run arbitrary code. The language
is limited and not general purpose. Think of it more as a DSL to support
Ethereum transactions than as Java, C...

------
camus2
I don't know how it would work, but what I'd like to see is a distributed
search engine to bypass Google. This is something I've had in mind for a long
time. People have hundreds of gigabytes of space on their computers; maybe
they could spare 10-20 GB for a search index?
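
A tiny sketch of how such an index could be split across peers: each term is
assigned to a peer by hashing it, so no single machine needs to hold the whole
index (the peer count and documents are made up for illustration):

    # Sketch: shard an inverted index across peers by hashing each term.
    # Each peer would only store, and answer queries for, its own shard.
    import hashlib
    from collections import defaultdict

    NUM_PEERS = 4  # made-up number of participating machines

    def peer_for(term):
        return int(hashlib.sha1(term.encode()).hexdigest(), 16) % NUM_PEERS

    docs = {1: "p2p search engine", 2: "distributed search index", 3: "p2p browser"}

    shards = defaultdict(lambda: defaultdict(set))  # peer -> term -> doc ids
    for doc_id, text in docs.items():
        for term in text.split():
            shards[peer_for(term)][term].add(doc_id)

    # Querying "search" means asking only the peer responsible for that term.
    print(shards[peer_for("search")]["search"])  # doc ids containing "search"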

------
explorigin
While not a P2P solution, I feel that Decentraleyes is worth mentioning. It is
a plugin that provides heavier caching of common JavaScript libraries. I
wonder if the authors have considered expanding its scope to include other
static assets.

------
akerro
You could easily use underlying technologies like Freenet, IPFS, ZeroNet and
Beaker Browser to create one. As with the Tor Browser, you would integrate the
underlying software and bundle it all together as one program.

------
tn_
I wonder how much it would cost to cover the entire US with solar-powered WiFi
extenders to create said mesh network. Is there an equivalent for Bluetooth?

~~~
dx034
Why not use satellites in LEO with LTE? Should be cheaper and more reliable.

------
jlebrech
I'd like to see websites that not only process the current user's tasks but
also run batch processes to make the overall user experience faster.

------
frantzmiccoli
Wouldn't that be like a smart-routing version of Tor?

