Hacker News

Love the text-first aesthetic of this reader.

I would have liked to use this, but I've come to the conclusion that the data model for bookmarking and sharing (all metadata being tightly coupled to the URL) doesn't really work in the age of highlights and comments. Taking a look at the sample page[1], there is no way I'm going to click through any of these without a reason.

I've been using a "content-first" data model both for knowledge management and my own RSS feeds[2] for the last few years that I'm very happy with. I think this might just be the kick in the butt that gets me to finally write an article about it.

[1]: https://vore.website/j3s

[2]: https://lgug2z.com/ - scroll down to "recent highlights" for an idea of what this looks like




> I've come to the conclusion that the data model for bookmarking and sharing (all metadata being tightly coupled to the URL) doesn't really work in the age of highlights and comments

could you elaborate on this a bit? it sounds like an interesting idea but i’m afraid i couldn’t grasp it


I'm gonna paste below what I managed to churn out the last time I tried writing an article about this, and then add some fresh commentary after the pasted section:

---

The bookmark itself is tied to a URL, and anything else related to the bookmark, such as the title, the scraped content (if the service scrapes on your behalf), highlights and annotations are stored as additional metadata linked to that URL.

There are some unfortunate restrictions that come with this data model.

Let's take the example of comments.

Comments by their nature are distributed; the same article can be shared on any number of websites for any number of different users and communities to discuss. Especially in the case of tightly focused communities, the commentary on an article is often just as valuable as the article itself.

When the data model is anchored around the URL, highlights become tightly coupled to that URL, and this tight coupling ignores the distributed reality of comments, leaving them with no real place to exist in the data model.

Ideally, highlights made on an article and comments saved about an article should be easy to connect and view together, because it is the article itself, and not the URL, that is the common denominator (the URL is an imperfect proxy for the article).

What if there is nothing of value in the article itself to highlight, but the discussion of the article contains the real information of value that you want to save? This is often the case when a commenter debunks bogus claims published in an article shared on link aggregation websites while simultaneously explaining their method.

If you think about this for long enough, you may also come to the conclusion that I have, that saving comments is also a form of highlighting, and both comments and highlights can be described in a common data model as "content".

---
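To make the pasted section above a little more concrete, here is a minimal sketch of what a "content"-first model might look like (the names are purely illustrative, not any real product's schema): both a highlight and a saved comment share one shape, and the URL is demoted to metadata rather than serving as the identifying key.

```python
from dataclasses import dataclass, field

# A single piece of saved "content": either a highlight from an article
# or a comment saved about it. The source URL is just metadata describing
# where the content was captured, not the identity of the record.
@dataclass
class Content:
    body: str                                # the highlighted text or saved comment
    kind: str                                # "highlight" or "comment"
    source_url: str                          # capture location (metadata, not identity)
    tags: list = field(default_factory=list)

# A highlight from the article and a comment about it from a link
# aggregator both fit the same model, despite having different URLs.
h = Content("Comments by their nature are distributed...", "highlight",
            "https://example.com/article")
c = Content("This claim is bogus, and here is why...", "comment",
            "https://news.ycombinator.com/item?id=123")
print(h.kind, c.kind)
```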

Now this is definitely a bit more on the "knowledge management" side of things, but I think that people who curate digital gardens of knowledge are in a unique position to be curators of focused streams of knowledge in a post-firehose digital world, and that this can take the form of RSS feeds.

Think of "firehoses" as RSS feeds of websites that throw anything and everything at you, RSS feeds of subreddits, the HN front page etc. They are all impersonal, without context, and without any reason or explanation of _why_ you should spend your time consuming an item.

Our time is increasingly the most valuable thing that we have, and so if I am recommending that someone read something, I want to provide the reason why I think they should read it up-front (i.e. one or more highlights or analytical comments in the body of the RSS item); if that clicks with the subscriber, they can go on to click through to the source and spend more time on it.

But just a title (which is unfortunately likely to just be clickbait these days) with no item body? Or with a huge wall of text? (I'm honestly not sure which is less respectful of an individual's time) - I can't consume from feeds like that anymore. And for that reason, I also refuse to produce feeds like that anymore.


> people who curate digital gardens of knowledge are in a unique position to be curators of focused streams of knowledge

I think you're onto something here. I save and organise articles and have been exploring ways to share them for some time.

There are two ways to organise knowledge: chronologically and topically. They're both important, but for different audiences. Organising chronologically means followers can keep up with recent activity. It's simple, it works, and it is the basis for RSS and social media feeds.

Organising by topic is for knowledge management where you can arrange and re-arrange content based on your understanding of the topic. One day you have a list of cool engineering articles then you might split that into articles about data etc.

I'm working on an app that tries to capture both. Each topic has a timeline of updates and each topic can be broken down into other topics.
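A hypothetical sketch of that dual organisation (illustrative names only, not the app being described): each topic holds its own chronological timeline and can nest sub-topics, so the same content is reachable both by recency and by subject.

```python
from dataclasses import dataclass, field

# Each topic carries a timeline of updates and can be broken down into
# sub-topics, mirroring the "both chronological and topical" idea above.
@dataclass
class Topic:
    name: str
    timeline: list = field(default_factory=list)   # (date, item) pairs
    subtopics: list = field(default_factory=list)

    def all_items(self):
        # Flatten this topic and all of its sub-topics into one
        # chronological view, newest first (ISO dates sort lexically).
        items = list(self.timeline)
        for sub in self.subtopics:
            items.extend(sub.all_items())
        return sorted(items, reverse=True)

# "One day you have a list of cool engineering articles, then you might
# split that into articles about data":
eng = Topic("engineering", [("2024-05-01", "cool engineering article")])
data = Topic("data", [("2024-06-01", "article about data")])
eng.subtopics.append(data)
print(eng.all_items())
```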


This sounds really interesting, please drop me a note on Mastodon or Twitter when you have something to share! You can see a video of how I've approached this with Notado feeds here.[1]

[1]: https://www.youtube.com/watch?v=fgtrBdp2AZQ


In my never-ending effort to balance consuming too much information vs. focusing on the handful of tasks that are actually important, I have started closing browser tabs more frequently. However I still can't accept "losing" the reference to the page, so I collect the URLs in my personal notes text files.

I often save the URL plus the HN post URL as a single unit, for the comments.


> this tight coupling ignores the distributed reality of comments, leaving them with no real place to exist in the data model.

The canonical way to do this is to assign each article a unique identifier, say a primary key, in a table "articles". Then, each comment gets a unique identifier in a table "article_comments" with a foreign key back to the article's unique identifier. This is a well-known pattern in databases but could be adapted to any other data structure, even one that is not segmented into tables.
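As a throwaway in-memory sketch of that pattern (table and column names made up for illustration): the article gets a surrogate primary key, and comments reference it by foreign key, so comments from any number of different sites can attach to one article.

```python
import sqlite3

# In-memory SQLite sketch of the articles / article_comments pattern.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE articles (
        id INTEGER PRIMARY KEY,
        canonical_url TEXT          -- metadata: an imperfect proxy for the article
    );
    CREATE TABLE article_comments (
        id INTEGER PRIMARY KEY,
        article_id INTEGER NOT NULL REFERENCES articles(id),
        source_url TEXT,            -- where the comment lives (HN, a subreddit, ...)
        body TEXT NOT NULL
    );
""")
db.execute("INSERT INTO articles (id, canonical_url) "
           "VALUES (1, 'https://example.com/post')")
# The same article, discussed on two different sites:
db.execute("INSERT INTO article_comments (article_id, source_url, body) "
           "VALUES (1, 'https://news.ycombinator.com/item?id=1', 'a debunking')")
db.execute("INSERT INTO article_comments (article_id, source_url, body) "
           "VALUES (1, 'https://lobste.rs/s/abc', 'another take')")
n = db.execute("SELECT COUNT(*) FROM article_comments "
               "WHERE article_id = 1").fetchone()[0]
print(n)
```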


wow. thanks for all of this - i'll be noodling about it. the model you've described is interesting & it's clear you've put a lot of thought into it.


If you want to try it out first-hand, I built https://notado.app from the ground-up around this exact content-first data model.

All of the "recent highlights" feeds on my website are built around this idea: I save interesting "content" (with the URL/book/comment permalink treated as metadata instead of a primary identifying key) and categorize it, and based on the categorization, the content is automatically published to topic-specific RSS feeds which can be consumed by individuals with RSS readers, or, in this case, by my website!
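That pipeline could be sketched roughly like this (purely illustrative, not Notado's actual code): each saved item carries tags, and each tag becomes its own topic-specific RSS feed whose item body is the highlight or comment itself, with the source URL carried as metadata.

```python
from xml.sax.saxutils import escape

# Saved "content" items: the body is the highlight/comment, the tags
# drive which topic feeds the item is published to.
items = [
    {"body": "A great point about data models...",
     "tags": ["databases"], "source": "https://example.com/a"},
    {"body": "Why this benchmark is misleading...",
     "tags": ["databases", "performance"], "source": "https://example.com/b"},
]

def feed_for(tag):
    # Build a minimal RSS 2.0 feed containing every item carrying `tag`.
    entries = "".join(
        f"<item><description>{escape(i['body'])}</description>"
        f"<link>{escape(i['source'])}</link></item>"
        for i in items if tag in i["tags"]
    )
    return (f'<rss version="2.0"><channel><title>{escape(tag)}</title>'
            f"{entries}</channel></rss>")

print(feed_for("performance"))
```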

There is a 30 day trial and it's pretty cheap after that ($1.99/month), but if you want to try it for longer than 30 days, send me a DM on Mastodon and I can extend your trial for longer as well.


If anyone is interested in how I use Notado to achieve what I've discussed above, I have published a tutorial playlist on YouTube[1] (short videos, ~1m each) that you can refer to in order to get a better idea of how everything looks in practice.

[1]: https://www.youtube.com/watch?v=eqXD2UoE8Do&list=PLllZnrEJu8...



