
Viral.js – Peer-to-peer web app distribution - PixelsCommander
http://pixelscommander.github.io/Viral.JS/#.V5tY1ZN97UI
======
zer0gravity
So many problems.

Security: how do you guarantee that a client peer is not tinkering with the
code it distributes further?

Unreliable: browsing sessions come and go. The browser may already be closed
before it gets a chance to forward the app.

Bandwidth: nobody loves bandwidth thieves

And so on...

~~~
0x6c6f6c
This is an idea I've had for websites such as Wikipedia where consumer funding
is the main source keeping it alive.

This could be great for a small niche of applications, but even for those,
these points are still important.

Particularly on the consumer end, turning my device into a provider as well as
a consumer of data is typically not my intention. It's forced seeding on all
clients, in the torrent sense, which not everyone wants to be a part of. Or,
for that matter, can be. In a world of data caps, turning my phone, tablet, or
hotspot-connected devices into a CDN has moral and fiscal implications that
have to be considered.

~~~
huskyr
From what I've heard from people managing the hosting and development of
Wikipedia, this (using a P2P hosting solution) is mentioned very often, but it
is very impractical to put into real-world use. Hosting Wikipedia is quite
cheap: most of the content is text and images, which lend themselves very well
to caching. Supporting high-frequency editing and the transition to mobile use
are far more complex problems, and not ones that P2P distribution would solve.

------
slowkow
I'd like to mention related work from 2014 called QMachine. You might also be
interested to read the "Security" section of the article.

Wilkinson, S. R. & Almeida, J. S. QMachine: commodity supercomputing in web
browsers. BMC Bioinformatics 15, 176 (2014).

[http://bmcbioinformatics.biomedcentral.com/articles/10.1186/...](http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-15-176)

"Modern web browsers can now be used as high-performance workstations"

[https://github.com/qmachine/qmachine](https://github.com/qmachine/qmachine)

"QM is an open-sourced, publicly available web service that acts as a
messaging system for posting tasks and retrieving results over HTTP. The
illustrative application described here distributes the analyses of 20
Streptococcus pneumoniae genomes for shared suffixes. Because all analytical
and data retrieval tasks are executed by volunteer machines, few server
resources are required. Any modern web browser can submit those tasks and/or
volunteer to execute them without installing any extra plugins or programs. A
client library provides high-level distribution templates including MapReduce.
This stark departure from the current reliance on expensive server hardware
running “download and install” software has already gathered substantial
community interest, as QM received more than 2.2 million API calls from 87
countries in 12 months."
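The volunteer-computing pattern the abstract describes can be sketched as a
tiny task board: a coordinator posts tasks, volunteers claim them, run them,
and report results, and a MapReduce-style template ties it together. This is a
minimal illustrative sketch, not QMachine's actual API; all names here are
invented for the example, and the HTTP transport is simulated with an
in-memory queue.

```javascript
// Minimal sketch of a QMachine-style volunteer-computing pattern:
// a coordinator posts tasks, volunteers poll for work, run it, and
// post results back. All names are illustrative, not QM's API.

class TaskBoard {
  constructor() {
    this.tasks = [];      // pending tasks
    this.results = [];    // completed results
  }
  post(task) { this.tasks.push(task); }
  claim() { return this.tasks.shift(); }        // a volunteer takes a task
  report(result) { this.results.push(result); } // a volunteer posts a result
}

// A MapReduce-style distribution template over the board.
function mapReduce(board, inputs, mapFn, reduceFn) {
  inputs.forEach((input) => board.post({ input }));
  // Simulate volunteers draining the queue (in QM this happens over HTTP).
  let task;
  while ((task = board.claim()) !== undefined) {
    board.report(mapFn(task.input));
  }
  return board.results.reduce(reduceFn);
}

const board = new TaskBoard();
const total = mapReduce(board, [1, 2, 3, 4], (x) => x * x, (a, b) => a + b);
// total === 30 (1 + 4 + 9 + 16)
```

The point of the shape is that the coordinator never runs `mapFn` itself; only
volunteers do, so few server resources are needed.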

~~~
andai
I had the idea a few months ago that, since no one wants to see ads, and since
most devices greatly under-utilize their potential compute, ad networks would
slowly be replaced by computation networks over the next few years.

So for the time you spend (actively) on a website, instead of being shown ads,
you "rent out" your device's CPU etc. in exchange for the content.

~~~
fratlas
I actually tried to do a proof of concept for exactly this but gave up after
assuming you could never truly trust the client to not mess with/read the
data.

~~~
otoburb
>> _[...] assuming you could never truly trust the client to not mess with
/read the data._

We have to wait for fully homomorphic encryption[1] to realize this, or for a
clever implementation of zero-knowledge proofs. Your proof of concept may work
now if you can implement partially homomorphic encryption, perhaps something
similar to CryptDB[2] or ZeroDB[3]. I read more about this in a recent ZDNet
article that gives a brief summary[4].

[1]
[https://en.wikipedia.org/wiki/Homomorphic_encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption)

[2] [https://css.csail.mit.edu/cryptdb/](https://css.csail.mit.edu/cryptdb/)

[3] [https://zerodb.com/](https://zerodb.com/)

[4] [http://www.zdnet.com/article/encryptions-holy-grail-is-
getti...](http://www.zdnet.com/article/encryptions-holy-grail-is-getting-
closer-one-way-or-another/)
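To make the "partially homomorphic" idea concrete: textbook (unpadded) RSA is
multiplicatively homomorphic, meaning a server can multiply two ciphertexts
without ever seeing the plaintexts. A toy sketch with deliberately tiny,
insecure parameters (this demonstrates the algebraic property only; it is
nothing like the full schemes CryptDB or ZeroDB use):

```javascript
// Toy demonstration of a *partially* homomorphic property:
// unpadded ("textbook") RSA satisfies E(a) * E(b) mod n == E(a * b).
// Tiny insecure parameters, purely for illustration.

const p = 61n, q = 53n;  // toy primes
const n = p * q;         // modulus: 3233
const e = 17n;           // public exponent
const d = 2753n;         // private exponent (e * d ≡ 1 mod φ(n))

// Square-and-multiply modular exponentiation with BigInt.
const powmod = (b, exp, m) => {
  let r = 1n;
  b %= m;
  while (exp > 0n) {
    if (exp & 1n) r = (r * b) % m;
    b = (b * b) % m;
    exp >>= 1n;
  }
  return r;
};

const enc = (msg) => powmod(msg, e, n);
const dec = (c) => powmod(c, d, n);

// A server holding only ciphertexts can compute a product:
const ca = enc(7n), cb = enc(6n);
const product = dec((ca * cb) % n); // 42n, i.e. 7n * 6n
```

Only multiplication is supported here; fully homomorphic schemes, which allow
arbitrary computation, are what the "holy grail" article is about.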

~~~
fratlas
This is the conclusion that I came to (without knowing there was a term for
it: homomorphic encryption, thanks!). I really think this is the (ideal)
future of monetizing the internet.

------
juretriglav
This is something that PeerCDN [0] was doing, before it was acquired by Yahoo.
What I'm saying is that this definitely has merit, if you can make it easy
enough to use for developers.

0\. [https://www.quora.com/How-is-WebTorrent-different-from-
PeerC...](https://www.quora.com/How-is-WebTorrent-different-from-PeerCDN)

------
antoineMoPa
I tend not to use CDNs, as I feel they could hypothetically be used to track
anyone on any website, or to plant backdoors in my apps (hypothetical example:
Trump gets elected and decides that all the jquery.min.js files sent to Canada
via US CDNs have to include something that breaks Canadian apps, and my code
stops working). I like to have all my libraries on my own server.

It would be awesome if something existed to use this tool while ensuring users
do not modify the files

~~~
chinathrow
> It would be awesome if something existed to use this tool while ensuring
> users do not modify the files

That should be possible with subresource integrity.

[https://developer.mozilla.org/en-
US/docs/Web/Security/Subres...](https://developer.mozilla.org/en-
US/docs/Web/Security/Subresource_Integrity)

~~~
Spivak
I can't tell you how excited I am for this to be commonplace.

------
flaviotsf
I wanted to build something like this: basically a P2P-based, end-to-end
encrypted messaging technology (so messages can't be inspected by peers) for
use cases such as SMS-style messaging in non-networked environments such as
cruise ships or national parks (maybe over Bluetooth or something). Cruise
lines could and should easily build a messaging app that works on a closed
network (lots of branding potential... not sure why it hasn't been done yet!).
If anyone is interested, ping me. :)

~~~
alexandre_m
Do you mean something like BitTorrent Bleep?

Haven't used it, but same premise.

~~~
flaviotsf
Yes, exactly. It would be great to brand Bleep for certain audiences, but the
functionality is exactly what I had in mind. Thanks for the link!

~~~
rakoo
See also Briar ([https://briarproject.org/](https://briarproject.org/)) which
has the added benefit of being open source

------
mxuribe
To wybiral's point, signing in via FB login is a turn-off.

One point I might add to the questions about NAT: what if the first types of
apps to be distributed this way were games (and later other categories of
apps), using IPv6? I think that would do two things: avoid some (though not
all) NAT-related issues, and help increase adoption of IPv6 overall. (I
suppose my question need not be specific to Viral.js.)

------
wccrawford
It was probably fun to make, but there are so many issues (NAT, bandwidth
costs for users, browsers not processing background pages' tasks well, etc.)
that I can't see actually using this in any real-life scenario.

~~~
PixelsCommander
NATs are a pain. True.

I also agree that there were a lot of problems to work around, and the
benefits are not that obvious when the application size is 3 MB. However, it
becomes closer to real life with every additional megabyte of application
size. For video content it is definitely worth it, and it could also work for
web games and VR experiences. Other cases? Surely there are some. Let's
consider this a first Viral JavaScript implementation.

------
justrossthings
OP, can you share why social login is necessary?

------
heisenbit
The edge of the network often has asymmetric bandwidth. ADSL and cable
infrastructure are geared towards serving the home, not serving from the home.
Niche applications (e.g. low-bandwidth M2M) may emerge where there is a strong
use case, but in general the limitations of the uplink will constrain server
applications for most users for the foreseeable future. It will be interesting
to see whether anything different emerges from the small set of
fiber-connected households.

------
jswny
Could be cool, but I can't find out, because I refuse to log in to Facebook
for a demo of a JavaScript framework.

------
brink
I'm not giving my email address to a random demo through Facebook. At least
buy me dinner first.

------
andrewvijay
I think this is suitable for less important, publicly available information on
sites. If a site has a lot of patient users who would wait for this content to
load, then maybe it can be tried. ;)

~~~
rawnlq
Information (in the form of text) is tiny and cheap. The main use case for
this has to be videos and other media.

All I can think of is high-resolution porn, which is currently not
cost-effective to fund purely through ads. Or hosting illegal videos/music
that you can't really make enough real money from to pay for server costs.

But definitely not for "distributing web apps" as the author mentions, unless
your JS is so bloated it runs to tens of MBs.

~~~
SapphireSun
Here's a case study from an article I wrote on this topic but never published:

Case Study 1: Google Maps and Offline Mode

Alice navigates her iPad to [http://maps.google.com](http://maps.google.com)
and looks up the geography of the city she lives in, Boston. Alice knows that
she’s going to be traveling later that day and might not have access to WiFi,
so she clicks the “Save for Offline” mode button that is either part of the
browser or website interface.

The interface presents a permissions menu: “Maps will need to use 500MB of
space and will add [https://maps.google.com](https://maps.google.com) to your
offline hosts.” Alice agrees and the download begins. Google Maps saves a copy
of its client code, the necessary map data, and a light nodejs server
implementation into a browser cache.

When the browser senses a connection has been lost, and Alice navigates to or
is already on maps.google.com, the nodejs server spins up and requests are
forwarded to that server. Alice notes that the browser indicates that the
connection is offline and sets her expectations accordingly for what she’ll be
able to do. When the connection is regained, the server shuts down and the
browser forwards requests to maps.google.com’s true IP address.

One day Alice leaves the big city to go camping. She has downloaded maps of
the campgrounds onto her iPad as is her usual practice. She moves into her
cabin with her friends, and they all realize they have no connection and
forgot to grab the maps. They have all brought their phones. Alice activates
the promiscuous mode of google maps and the nodejs server becomes active on a
mesh network and declares itself the secondary access point to the primary
maps.google.com. The phones all mesh network with each other and the iPad.
When one of her friends goes to maps.google.com and looks up the campgrounds,
the phone asks the friend if she'd like to acquire a copy of the maps server
from the iPad, with a version number and date of acquisition. She agrees, and
the phone downloads the copy of maps offline. As all the friends get the copy
of maps offline, the servers distribute the requested map tiles using
bittorrent.

The friends can now go explore the woods without depending on a single device
(though they probably should have brought a paper map).
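The connection-loss handoff in the case study can be sketched as a small
routing function: while offline, requests to the origin are redirected to the
cached local server, and they switch back when connectivity returns. This is
purely illustrative of the proposed behavior; no browser exposes exactly this
today, and the local server address is a made-up placeholder.

```javascript
// Sketch of the handoff described above: route requests to a local
// offline server when the connection is down, and back to the real
// origin when it returns. Hypothetical behavior, not a real browser API.

function makeRouter({ origin, localServer, isOnline }) {
  return function route(path) {
    return isOnline()
      ? { target: origin, path }        // normal networked request
      : { target: localServer, path };  // serve from the cached local server
  };
}

let online = true;
const route = makeRouter({
  origin: 'https://maps.google.com',
  localServer: 'http://127.0.0.1:8080', // hypothetical cached nodejs server
  isOnline: () => online,
});

const whileOnline = route('/tiles/42');   // goes to maps.google.com
online = false;
const whileOffline = route('/tiles/42');  // goes to the local server
```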

------
fredliu
This seems to be built on top of WebRTC, using data channels? I believe at
least Safari doesn't support that, but their GitHub readme does seem to imply
support for Safari. Also, since it's based on WebRTC, who's responsible for
setting up the STUN/TURN server?
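For reference, with WebRTC it is the application (not the browser) that
supplies STUN/TURN servers, via the configuration object passed to
`RTCPeerConnection`. A typical shape looks like this; the server URLs and
credentials are placeholders, not servers Viral.js is known to use:

```javascript
// Typical shape of a WebRTC peer connection config: the application
// supplies its own STUN/TURN servers. URLs below are placeholders.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.example.org:3478' }, // NAT discovery
    {
      urls: 'turn:turn.example.org:3478',   // relay fallback when direct fails
      username: 'user',
      credential: 'secret',
    },
  ],
};

// In a browser: new RTCPeerConnection(rtcConfig), then
// pc.createDataChannel('viral') for data-channel transfers.
```

So whoever deploys the library has to run (or rent) the TURN relay, which is
itself a bandwidth cost.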

------
rkeene2
Possibly related: RAID-CDN, it's come up before on this site:
[https://github.com/lorriexingfang/webRTC-CDN-raidcdn-
sample](https://github.com/lorriexingfang/webRTC-CDN-raidcdn-sample)

------
jondubois
This seems interesting. It would be good for CDN-like use cases where you want
to distribute static, low-security files/assets.

Until we have fast, fully homomorphic encryption, I don't see how peer-to-peer
apps are going to be useful in the industry.

------
so898
The main problem is that it is not easy to find the nearest peer who has the
JS file. The connection time is much longer than the download time when you
fetch small files with P2P software.
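A back-of-envelope model makes the point: total fetch time is setup time plus
transfer time, and for small files the setup (peer discovery plus handshakes)
dominates. All numbers below are illustrative assumptions, not measurements of
Viral.js.

```javascript
// Back-of-envelope model: total fetch time = connection setup + transfer.
// For small files, setup (peer discovery + handshakes) dominates.

function fetchTimeMs({ setupMs, sizeKB, bandwidthKBps }) {
  return setupMs + (sizeKB / bandwidthKBps) * 1000;
}

// A 100 KB JS file over P2P, assuming ~2 s of discovery/handshake:
const p2p = fetchTimeMs({ setupMs: 2000, sizeKB: 100, bandwidthKBps: 1000 });
// The same file over a warm CDN connection:
const cdn = fetchTimeMs({ setupMs: 50, sizeKB: 100, bandwidthKBps: 1000 });
// p2p === 2100 ms, cdn === 150 ms: setup, not transfer, is the bottleneck.
```

The gap only closes when the payload is large enough (tens of MBs) that
transfer time swamps the setup cost, which matches the video/VR argument made
elsewhere in the thread.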

------
235337
This seems like it's just moving the cost of bandwidth to your users, probably
without them knowing. Sooo yea, I guess you can save money by making others
spend theirs instead.

------
wybiral
My user experience report:

1. Opened website

2. Clicked "Demo" to see what this was all about

3. Read "Please, login via Facebook..."

4. No thanks. Closed tab.

~~~
fizzbatter
I'm such an "old man" when it comes to Facebook login. I'll happily log in to
Google/etc. _(if I want the product, of course)_, but as a non-Facebook'er, I
think back to the days of Facebook apps invading user privacy, posting things,
etc., and it built up this mental model of worry around linking _anything_ to
Facebook. I avoid any type of API interaction with Facebook like the plague.

I only use FB maybe twice a year, and apparently that's not frequent enough to
rid me of this old and overly cautious fear.

Alas, no login via Facebook for this mentally old man, apparently.

~~~
notduncansmith
Do you feel that your privacy is less at risk when you give your data to
Google over Facebook?

~~~
fizzbatter
It's not strictly about that; it's an irrational fear of the API growing pains
that Facebook went through. That is to say, apps the user installed having
more access than the user liked. Whether through malicious means or ignorant
users, it was a common trope "back in the day".

I understand it's completely irrational, but I have yet to link my FB account
to a single thing due to this fear. I'm sure if I used FB more I would have
worn it away, but there is honestly no page on the internet that makes me feel
older than FB. I go there, and all the buttons and the overload of "things"
make me feel like I'm looking at an AOL sign-on page from way back.

Mind you, I'm 32, so not _that_ old... but still, there is some kind of age or
usage issue going on in my head.

I say this not as a complaint, but as a curiosity for fellow developers. I'm
sure I'm a small minority of FB users, but it seems as if I'm a fringe user
who had their trust broken and is very tough to win back.

I'm totally willing to go to FB _(albeit infrequently)_, just unwilling to
link stuff via their API.

Also, to be clear, when I mentioned the trigger word "privacy", I did _not_
mean privacy from Facebook. I was referring to spammers/scammers/etc.

~~~
amorphid
I dislike it when I create an account with Facebook on some site, and then
can't do basic things like update my email address.

