Oh, wow. I didn't expect to see this on HN today. This was a toy app I wrote to learn a bit more about Elixir, Phoenix, and Phoenix's concept of "channels" (WebSockets). I submitted it to HN a long time ago.
I wrote a bit more about it on my blog: http://gabe.durazo.us/tech/ephemeral-p2p-project/ (wow, I need to update my blog). Unfortunately, it's not really that p2p, contrary to the name. There's a server running on Heroku that manages all the websocket connections and matching. What is p2p is the page content, which lives in the browser and gets piped around.
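Roughly, the browser side of that piping looks something like this (a simplified sketch, not the actual code; the topic and event names here are made up):

```typescript
import { Socket } from "phoenix";

// Hypothetical topic and event names -- not the project's actual protocol.
const socket = new Socket("/socket");
socket.connect();

// Every viewer joins a channel keyed by the page's content hash
// (taken here from the URL path, purely for illustration).
const channel = socket.channel(`page:${location.pathname.slice(1)}`);
let haveContent = false; // a viewer already holding the page would set this

// An existing viewer answers a relayed request with the HTML it is holding.
channel.on("request_content", () => {
  if (haveContent) {
    channel.push("content", { html: document.body.innerHTML });
  }
});

// A newcomer receives the content from some current viewer and renders it.
channel.on("content", ({ html }: { html: string }) => {
  if (!haveContent) {
    haveContent = true;
    document.body.innerHTML = html;
  }
});

channel.join().receive("ok", () => {
  // Ask the server to relay our request to whoever is already viewing.
  channel.push("request_content", {});
});
```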
I get my little $7 bill from Heroku every month and wonder "is this the time I should finally spend an hour and shut it all down?" But then every once in a while I dig up the old HN submission, follow the link, and see a "Currently Viewing" count of 2 or 3, and I wonder who or what it is and why. If it ever drops to 0 it wouldn't really be recoverable, and I'd shut it down.
Thank you for footing that bill for so long, this has been a little joy for me for a long time, just seeing how long it can last. Would love to buy you a beer someday.
Great work! Also, now that Heroku killed the free hobby tier, there are alternatives that don't cost money. I've used Vercel and Cyclic, among others, and both are great.
I wonder: did you consider adding the size to the end of the hash?
In a future where all content could be addressed by hash there will be collisions, but adding the expected size moves it from "possible given enough files" to "extremely unlikely, assuming there's no flaw in the hashing algo".
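Concretely, I'm imagining something like this (just a sketch using the browser's Web Crypto API; the hex-plus-length format is only one way to spell it):

```typescript
// Hypothetical content address: SHA-256 of the bytes plus the byte length,
// so two colliding inputs would also have to be exactly the same size.
async function contentAddress(content: string): Promise<string> {
  const bytes = new TextEncoder().encode(content);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const hex = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return `${hex}-${bytes.length}`; // e.g. "9f86d0...-11" for an 11-byte input
}
```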
I love love love these kinds of experiments. Reminds me of the Early Web.
I really wish WebRTC was easier to use. I wish there were some good-faith free servers to do the… crap, I forget the acronyms… the STUN and TURN stuff and whatnot to negotiate the connections. I.e. not a proxy, just an operator. Having one I could use freely and comfortably would trivialize a lot of neat projects, particularly fun little multiplayer games.
I recently learned you can do the candidate exchange without those servers. There was an implementation which used [this](https://github.com/cjb/serverless-webrtc) as a backend and a QR code to do the transfer. I can't seem to find it now. However, they were having issues with the QR code reaching its size limit, which leads to the question of whether you could break the message into a series of QR codes.
Pretty interesting stuff, I was considering taking a crack at it as part of a project I'm working on.
You would still need a STUN server, though there are many public ones out there.
The main idea in removing the signaling server is to somehow transfer the session description between clients yourself, e.g. one client scans a QR code that encodes the other's session description, or you send the session description through some existing chat application and have the user enter it manually, instead of having your own server relay it.
It may also be possible to avoid the multiple-QR-code scheme this blog is talking about (one QR code per ICE candidate) if you just wait for the ICE gathering state to be complete and then send the session description. They are correct, though, that the session description in its entirety will most likely exceed a QR code's capacity (coincidentally, they use LZ compression in an attempt to lower the length, similar to what I tried long ago, though I ended up deciding it wasn't worth it).
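Roughly like this (a sketch, not the blog's code; the public STUN URL is just an example):

```typescript
// Create an offer, wait for ICE gathering to finish so the SDP already
// contains every candidate, then hand the whole thing to the user
// (QR code, chat message, copy/paste) instead of a signaling server.
async function makeOfferForManualTransfer(): Promise<string> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  pc.createDataChannel("data"); // gives the offer something to negotiate

  await pc.setLocalDescription(await pc.createOffer());

  // Wait until gathering is complete so no trickle signaling is required.
  await new Promise<void>((resolve) => {
    if (pc.iceGatheringState === "complete") return resolve();
    pc.addEventListener("icegatheringstatechange", () => {
      if (pc.iceGatheringState === "complete") resolve();
    });
  });

  // pc.localDescription now holds the full SDP; this is the string that
  // tends to blow past a single QR code's capacity.
  return JSON.stringify(pc.localDescription);
}
```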
I recently came across an in-progress book from Manning, "Peer-to-Peer Web Applications" [0], that, despite a fair bit of crypto/web3 references, seems to be targeting an audience of people looking to build a more robust/complex version of this.
I'd recently been thinking about how it's unfortunate that we have seen so little growth in the peer-to-peer part of the web, which to me is the most resilient strategy to resist a few large tech companies taking over everything and turning the web into a gigantic advertisement.
Curious if anyone has read the MEAP for that book so far and has any opinions on it?
I understand the idea behind this page (and I really liked it), but the title made me think of a more "philosophical" question... does any webpage exist if nobody visits it?
If the page is static HTML you could say that yes, it exists on the webserver as a file. But what about dynamic webpages?
The frontpage of the New York Times (I guess it is generated dynamically from a database)... does it exist if no one accesses NYT? Does any dynamic webpage exist if no one requests it? Being available and ready to be requested doesn't mean the webpage exists... does it?
> Does any dynamic webpage exist if no one requests it
Matter of perspective/semantics. A more general thought experiment in the same vein: can you step into the same river twice?
To answer the question "does it [exist]?" you have to first define what "it" (a dynamic webpage) is. My answer in response is "Is it cached?" If not, then no. Whether or not a person is involved at all is irrelevant, imo. But I define a dynamic webpage by the existence of the DOM, not the services that generate it. I might make an exception if those services can be proven deterministic, which I wouldn't assume.
Your browser retrieved it from the browser of someone currently viewing this page. You're now a part of the network and someone who loads this page in the future may get it from you!
If it passes over a central server, does it really count as p2p?
If browsers grew p2p capabilities [1], we could begin to bypass centralization and build truly distributed (not just federated) social networks, news systems, and the like.
Imagine not having to run "archive.is" against headlines. You just share them, tamper-proof, with others. And the comment graph too. You could even begin to boost high-value content with your own algorithm that you control.
P2P in browsers would return us to where many of us thought the internet would lead in the early 00's. The disappearance of BitTorrent and the arrival of central platforms like Facebook felt like that future fading away.
People are using Discord now, but in the 00's people used completely customizable open source clients to connect to their favorite messaging platforms. You can't even imagine that today. Everything is a platform.
[1] I doubt Google would ever get behind this. Centralization reinforces their ads business.
Ignoring STUN/TURN, WebRTC should be able to do this. There are things like webtorrent.io.
All we need is demand. Demand for non-walled-garden content. Demand for fewer monthly subscriptions just to run something you already own. Demand for DRM-free content that you can share with your grandma or watch on a separate device on a flight.
Is it just me, or did WebRTC-supporting clients cause a split among BitTorrent clients?
I come across many torrents that have a lot of seeders on the tracker and WebTorrent (in Brave and as a standalone app) is able to immediately connect to a ton of peers and begin downloading at high speeds while Transmission fails to find any peers even after a long time.
The opposite also often happens: I see lots of seeders on the tracker, but WebTorrent fails to connect to any peers while Transmission is able to connect to a ton of peers and download at high speed.
I observed this for a long time and then I remember reading somewhere that WebTorrent only supports connections over WebRTC and most other clients don't support WebRTC peering.
I have UPnP working just fine. No hole-punching issues in the network.
Fascinating experiment in ephemerality. If a page is popular, more people will see/share it amongst themselves... but as soon as interest fades... the content is gone.
Also a high potential for content to be gone before it can be indexed.
I think it would make a fascinating TikTok clone where you don't have to pay storage fees! Except you could charge users to help them seed if they don't want to leave their phone app running until they go viral.
Yes exactly. I've had this page up in a browser for many years, most of the time with only two other people. I have no idea who they are, no way to communicate with them, but feel very connected in a strange way.
OMG, you! I just guffawed in real life at my desk here. I mentioned in another comment here[0] how I've checked off and on through the years and wondered who could possibly be keeping this page open and why. Thanks for re-posting this, it's too funny to hear from one of you guys.
So this totally popped into my brain - there was a PHP script you could add to your site called Crisc - https://web.archive.org/web/20021010104130/http://biomatic.o...
It showed your IP and you could send a message to the other visitors on the site - I was blown away at the time I saw this.
I’d love to try to implement an extension attack to add some trivial content to the page, but don’t have time right this second (I’m at a work offsite)
Obviously its current feature set leaves it highly vulnerable to griefing. Currently lacking the motivation to improve it, since it works perfectly for my personal use case of deciding board game night once a week and family vacation plans once a year.
Neat. A simple enhancement could be letting the server cache the content for a short period of time and serve the cached webpage in the static render (see the sketch after this list). This way:
* non-JS clients can see the page; think robots, SEO, etc.
* JS-enabled clients can help perpetuate the content
* if the last JS-enabled client has a network glitch, the content is still available
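A minimal sketch of what I mean, using Express purely for illustration (the real project is Phoenix/Elixir; the routes and TTL here are made up):

```typescript
import express from "express";

const app = express();
app.use(express.json({ limit: "1mb" }));

const TTL_MS = 60_000; // hypothetical: keep a copy for one minute
const cache = new Map<string, { html: string; at: number }>();

// Browsers holding the page re-post it occasionally.
app.post("/pages/:hash", (req, res) => {
  cache.set(req.params.hash, { html: req.body.html, at: Date.now() });
  res.sendStatus(204);
});

// Static render: serve the cached copy if it is still fresh,
// otherwise fall back to an empty response.
app.get("/pages/:hash", (req, res) => {
  const entry = cache.get(req.params.hash);
  if (entry && Date.now() - entry.at < TTL_MS) {
    res.type("html").send(entry.html);
  } else {
    res.status(404).send("No live viewers right now.");
  }
});

app.listen(3000);
```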
I wonder if that's because one or several people have been seeding it long term - it seems likely. In which case, I'm curious whether a 6+ year browser WebSocket session is possible or whether a seeding tool was used for stability.
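If it was a seeding tool, it wouldn't need to be much more than a long-running script that reconnects with backoff. A generic sketch (the endpoint is a placeholder, and a real seeder would have to speak the site's actual channel protocol):

```typescript
import WebSocket from "ws";

const URL = "wss://example.com/socket/websocket"; // placeholder endpoint

function connect(attempt = 0): void {
  const ws = new WebSocket(URL);

  ws.on("open", () => {
    attempt = 0;
    console.log("connected, holding the page open");
  });

  ws.on("close", () => {
    // Exponential backoff, capped at 30 seconds.
    const delay = Math.min(30_000, 1_000 * 2 ** attempt);
    setTimeout(() => connect(attempt + 1), delay);
  });

  ws.on("error", () => ws.close());
}

connect();
```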
I used this same sort of idea to make a whiteboard app website. The whiteboard content wasn't stored on the server and would disappear when the last person left.
Quite the opposite. For this setup, you still need to pay for hosting, or host it yourself, and you need to deliver to users' devices the underlying frontend code that makes all of this possible. Then you also need a backend to handle the socket connections and send the content from user to user (this is not P2P). The constant pinging to see if anyone is waiting for a copy, and then sending it out, will only continue to eat up resources the more people are looking at it.
From an efficiency standpoint, this doesn't come close to beating a simple static cacheable text file. Definitely still a very cool concept, but offers limited real-world use.