Unwalled.Garden: souped-up RSS for P2P social apps (hashbase.io)
137 points by bpierre 9 days ago | 50 comments

>Unlike Twitter and the other social networks, RSS never tried to capture the social graph, likes, comments, annotations, retweets, and etc of its userbase. It limited itself to posted content (mostly blog posts) and that was a missed opportunity.

>missed opportunity

Maybe it's just me, but RSS being all about content is the best thing about it.

These are the two competing goals of every publishing mechanism: one person wants to broadcast their data and another wants to consume it, and it's very rare for both parties to want exactly the same thing.

Except where the same person is both publisher and consumer. I run a couple of systems at work where RSS is invaluable: a website with news, and a set of screens on the premises running Xibo, which ingest the news via RSS to display it.

This looks awesome! A few questions if anyone knowledgeable is around:

* Does Unwalled.Garden automatically pin the content of people you follow, so that their stuff stays online if they aren't?

* I didn't understand "We also use a second “private” dat for records which should be kept off the network". What is this referring to, the follow request, or some other private content like a DM? I assume that since this is still Dat there is no access control if you happen to know the address?

* How is reader privacy and DHT coming along with this ecosystem?

> Does Unwalled.Garden automatically pin the content of people you follow, so that their stuff stays online if they aren't?

It gets saved locally so you can access it when they're offline. It'll also be possible to instruct your pinning service to seed it on your behalf.

> I didn't understand "We also use a second “private” dat for records which should be kept off the network".

I'll talk more about this in the future as Beaker 0.9 gets closer to release. We're basically creating a filesystem with a root dat (which is private). Your public dat will then be "mounted" to it at /public so it feels like one unified FS. The structure will look something like this:

  /.data/.unwalled.garden/* <- private records
  /public/.data/.unwalled.garden/* <- public records
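
To make that convention concrete, here's a tiny sketch (my own helper, not a Beaker API) of how a client could decide whether a record is shared: anything under the /public mount goes on the network, everything else stays in the private root dat.

```javascript
// Sketch only: the /public mount convention described above, assuming
// record paths are given relative to the root (private) dat.
function isPublicRecord (path) {
  return path.startsWith('/public/')
}

console.log(isPublicRecord('/public/.data/.unwalled.garden/posts/1.json')) // true
console.log(isPublicRecord('/.data/.unwalled.garden/follows.json'))        // false
```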
> How is reader privacy and DHT coming along with this ecosystem?

At this stage, I'm pretty sure we're going to need to use proxies to solve this. We're open to using an anonymity protocol like Tor but still uncertain about the tradeoffs.

Cool, these folks: https://hashmatter.com/ https://github.com/hashmatter seem to be doing something with onion routing and IPFS, but I'm quite unqualified to understand exactly what or if it's any good.

At first glance -- and that's an important caveat, since I haven't dug in -- this looks like it's duplicating an awful lot of work that's already been done in the IndieWeb using existing, well-established protocols.


this is for the Dat protocol, not the legacy web (atm)

This all sounds pretty exciting. I’ve been using dat/beaker and I love how easy it is to make a site right in the browser. It made me realize how many barriers there are on http to just build a personal website and host it for free.

This update to Beaker makes sense to me, as it was pretty hard to discover new sites on the network, though I'd love to see dat become a bit more stable. I'd often make a site and a friend wouldn't be able to connect to it. I think if these issues could be resolved in some way, the whole network would really take off.

Is there a reason to keep the post type out of the JSON and store it as file metadata? That works in Dat which supports file metadata, but what about when the data is handled in other ways (as plain files on disk, etc)?


It's not in metadata, it's in the directory structure!
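
For illustration (the directory names here are my guess, not the spec's actual layout): if records live under type-named directories, the type falls out of the path without needing any file metadata, even when the files sit on a plain disk.

```javascript
// Hypothetical: derive a record's type from its directory, e.g.
// /.data/.unwalled.garden/posts/... -> "posts". Layout is illustrative.
function recordType (path) {
  const m = path.match(/\.unwalled\.garden\/([^/]+)\//)
  return m ? m[1] : null
}

console.log(recordType('/.data/.unwalled.garden/posts/2019-01.json')) // "posts"
console.log(recordType('/README.md'))                                 // null
```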

> Every client in the network acts like a Web crawler.

So O(n^2) connections? Might be an issue in the long term. But I like the simple interface.
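
Back-of-the-envelope for the quadratic worry: if every one of n clients crawls every other client, that's n·(n−1) directed connections.

```javascript
// Every client crawling every other client: n * (n - 1) directed links.
const connections = n => n * (n - 1)

console.log(connections(10))   // 90
console.log(connections(1000)) // 999000
```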

I'm going to start sounding like a broken record, but I find it peculiar that this project claims RSS didn't go far enough, then creates a short-circuited implementation, when the progression that actually went very far for decentralized information (something like RSS, XML, FOAF, RDF, Linked Data, Solid) uses well-established and open-ended schemas for much more than this project's basics. With decentralized identifiers (DIDs), it will work on blockchains too. Fine, make a million different decentralized systems, but why not just use LD in a basic way so it's not just creating more fragmentation?

In following Solid, I've noticed a lot of decentralized projects are interested and adapting to its approach.

We're still building out schemas, so I wouldn't rate the protocol based on the current number. I just didn't want to speculatively add a bunch without having the live system to test against.

If I thought any of the existing standards were winners, I would've gone with them. If you really want to get mad, you can read why I decided not to use RDF or JSON-LD: https://unwalled.garden/docs/why-not-rdf

But this is why libraries are written. As you use each schema (using json-ld, or turtle, or whatever), wrap it with a library, and it becomes as easy to work with as anything. And you're adding to the critical mass of a read-write "web." Programmers need to work with the data and schemas occasionally, everyone else, forever. You've exchanged YAGNI with NIH. Unless you just expect to gain a critical mass, or don't care about participating in that web.

It really doesn't become as easy to work with. You still have to wrangle around the RDF "everything is a triple and attributes are URLs" concept, and even if you hid all that away you'd be attempting to gain something as simple as this design is. We just need a way to share & publish objects which link to each other. We don't need the graph model.

Quite a fan of your work with beaker, so curious how this will pan out.

Activitypub implementations seem to fare mostly fine by ignoring that ActivityStreams2 technically is JSON-LD, using it as "just JSON". Curious if you have any particular points against it (I can imagine a few, but would be interested in your take on it).

The main reason I didn't use AP is that it's designed for federated servers and uses a mailboxes network which doesn't fit with Dat & P2P. We may find a way to interop, but it'll require some kind of bridge service, and I didn't really see the point of adopting their schemas if it didn't buy us interop automatically.

Edit: also, thanks!

Coming up with yet another format in that space makes that kind of bridge service more annoying to create though, especially if the format grows more complicated. E.g. just trying to bridge a few of the existing choices like Atom, AS2 and microformats has an annoying amount of edge cases.

"Unlike Twitter and the other social networks, RSS never tried to capture the social graph, likes, comments, annotations, retweets, and etc of its userbase."

Except when they do, of course. Some places still have comment feeds.

Still, it's an interesting idea.

Yeah, I knew there had been some shots at it, but my google fu couldn't pull up anything solid to point to. I'd love to see if somebody ever tried to "go big" with RSS. EDIT: also, a comments RSS feed is generally comments posted to the blog, right? Not quite the same thing as each user having their own RSS feed with their posts and comments.

> I'd love to see if somebody ever tried to "go big" with RSS

I would bet decent money that between Dublin Core, IndieWeb, and various other XML standards of the day, something like that is already defined / describable. It's just not used because the model you suggest doesn't scale well: once comments are in the thousands (which happens on FB or Twitter), single files become unwieldy and the clients get too slow. If you steer away from single files, now you have a protocol that also defines URLs that clients and servers must agree on, increasing overall complexity. It has nothing to do with RSS, a format that has been extensible through XML schemas since 1.0 at least; it's that nobody could agree on such schemas in numbers high enough to make them a de-facto standard, because most actors had an interest in lock-in on their own platforms.

I wish you luck, but this space is hardly new. The challenge is not technical, it's entirely a political one - a lot of people have to agree and implement a standard and then reach some sort of critical mass without triggering the search for lock-in.

The thing is, RSS is about dissemination of content, while follow/like/comment is more about dissemination of opinion. There were indeed some protocols to fill the missing pieces, including json-ld for having some sane, common format and the Salmon protocol to make information go back up to the source. Like a salmon. But it definitely was a patchwork of multiple technologies with various advancement status and quasi-nonexistent documentation.

Today all of those are dead in favor of ActivityPub. AP loses a lot of the simplicity that could make RSS ubiquitous (you need a home server and an application server to handle API calls), but the expected interactions are all there, and it's built to be extensible.

In fact, I think reusing the "everything is a file" approach is definitely cool and makes things easier to try, but you would definitely benefit from using the whole JSON-LD taxonomy. Things are already defined and have been used for some time in production, so you can expect some work has been thrown at it. It'd also make collaboration easier.

Webmentions is a more modern protocol for this sort of functionality.

The Indieweb captures these things in HTML microformats. I can see why you'd want to go with JSON, though - it's simpler to parse.

What is the likelihood of Beaker running on Android?

Looks like the Bunsen Browser. It's fun to play with, although I've never seen so many JavaScript errors without running developer tools. I was unable to load Unwalled.Garden, which sounds expected given that you're still building a new version of Beaker to support UG features, right?

Anyone who tries to use JSON for complex, extensible formats that involve documents (as opposed to data structures), metadata, and semantic data is shooting themselves and their users in the foot. Just because W3C fucked up in some of their XML-based standards doesn't mean that trying to shove JSON everywhere is a sensible idea.

  {
    "topic": "dat://unwalled.garden",
    "body": "Why didn't you use XML!?",
    "createdAt": "2018-12-07T04:15:44.722Z"
  }
Now, I want to change body to include images or something of that sort. Bam, all the software that parses the old format is now broken.

Just use XML. Preferably, extend one of the existing RSS-like formats. Apply lessons learned from HTML and XHTML. It will work zillion times better in the long run.

There's nothing stopping you from extending the JSON schema. We'll probably come up with extension points to make it easier (as subobjects, each declaring their own schema IDs).
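
A speculative sketch of what those extension points could look like (field names here are invented, not the actual spec): unknown sub-objects carry their own schema ID, so clients skip what they don't recognize instead of breaking.

```javascript
// Invented example shape, not the real Unwalled.Garden schema.
const post = {
  topic: 'dat://unwalled.garden',
  body: "Why didn't you use XML!?",
  createdAt: '2018-12-07T04:15:44.722Z',
  ext: [
    { schema: 'dat://example.com/image-attach.json', images: ['cat.png'] }
  ]
}

// A client keeps only the extensions whose schema it understands;
// an old client with an empty set still renders post.body fine.
function understoodExtensions (post, knownSchemas) {
  return (post.ext || []).filter(e => knownSchemas.has(e.schema))
}

console.log(understoodExtensions(post, new Set()).length) // 0
```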

The format really isn't the interesting question. The interesting question is how do we get clients to behave predictably with each other. If you start breaking the schemas that everyone on the network is using, then yes your posts should fail to render and probably be ignored.

The format is exactly for the purpose of getting clients to behave predictably!

Without defining these schemas to allow for flexible mixed media documents, you're always going to have chaotic implementations floating around.

I think the inability to include images is actually a feature.

As long as your body is plain text, you can present it in a very wide variety of ways and styles. Once you allow formatting and inline images, you start moving towards HTML documents and lose the consumer's control over the content.

You also invite degeneration into an imageboard which ends up being a wall of memes and stupid jokes.

I don't follow you. JSON is very extensible. Just add a new key like "body-image" and you're done.

How is that different from XML if your body is defined as a text field?

How do you express "Why didn't you use <img src="xml_has_its_uses.png"> XML!?" with json?

you don't; you just store the string as-is in the body field and let the client-side parser know how to handle HTML strings that may also include an image tag

what is the argument here? you answered your own question

"body": "Why didn't you use <img src=\"xml_has_its_uses.png\"> XML!?"

OP is asking what the content type of the body is to be interpreted as.

You're assuming HTML string, which might not always be correct. What do you do with Markdown?

You need to have a standard way to represent things like this, because otherwise it won't gain traction solely due to the immense complexity of implementing viewers.

Edit: On top of this, HTML might not necessarily be the best choice. In that case it'd just be browser wrappers galore.
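
One minimal way to remove the ambiguity (my own suggestion, not anything the project has specified): tag the body with its media type, so a viewer knows which renderer to run instead of guessing.

```javascript
// Hypothetical shape: an explicit mimeType tells viewers how to
// interpret the body (plain text, Markdown, HTML, ...).
const post = {
  body: { mimeType: 'text/markdown', value: "Why didn't you use *XML*!?" }
}

function renderer (post) {
  switch (post.body.mimeType) {
    case 'text/html': return 'html-renderer'
    case 'text/markdown': return 'markdown-renderer'
    default: return 'plain-text'
  }
}

console.log(renderer(post)) // "markdown-renderer"
```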

Well, grandparent also assumed HTML, since he put an HTML tag with HTML attributes in his example. I'm not sure how different this is, and if he wants to convert this to Markdown he'll also need an HTML-to-Markdown converter.

gain traction? you are really talking about a non-issue this decade

everyone is already used to making their frontend, backend and database parse and store external things in key value pairs

this schema pre-detecting ship has sailed a long time ago

Assuming it's eventually going to be read as HTML, I do this:

    "childNodes": [
      {"span": {"textContent": "Why didn't you use "}},
      {"img": {"src": "xml_has_its_uses.png"}},
      {"span": {"textContent": " XML!?"}}
    ]

can you explain that example more clearly?

if you wanted to "include images or something of that sort" you would add an "images" key or a "something of that sort" key

in your particular example, the body can clearly take image links or image data as a string, and the parser can deal with it.

in my projects we typically have a nosql database, which out of the box allows it to be indifferent about objects in a collection/table that include or exclude certain keys

Not who you're responding to, but:

>and the parser can deal with it.

That's not ideal. You need a defined schema so that the majority of parsers can reliably parse the majority of objects.

>in my projects we typically would have a nosql database

Parser implementations shouldn't be assumed to use the same or similar tech stack that you do.

Every existing ActivityPub application requires you to rely on a server operated by someone else that controls your identity. This application puts you fundamentally in control of your content: it is just served from your own device by default, with the option to have another server pin your content (all of your content has Dat addresses).

Also, this sort of "use this, not that" dismissal ignores the fact that almost every decentralized application can be bridged together. It would not be hard to have Mastodon/Pleroma pull someone's Unwalled.Garden feed and put it on its federated timeline, or conversely for Mastodon/Pleroma to expose its users' feeds as Unwalled.Garden files.

Okay, but why RSS over Activity Streams?

It's not actually RSS, it just uses a very similar model of static files shared off websites.

  body: string 
:( it's 2019, let me write my post with photos & gifs!!

Basic Q: How/where are these "dat://" sites hosted?

The Beaker browser (https://beakerbrowser.com) has dat hosting built in, and you can use hosting services to keep your sites online when your device is off (https://hashbase.io).

I was really hoping someone would do something like this, bravo!
