Sort of related, but I've really enjoyed setting up urlwatch - https://urlwatch.readthedocs.io/en/latest/ . Especially once you get past the Puppeteer boilerplate and can boot up a Chrome instance to scrape websites that need JavaScript. I start to feel like I'm taking control of the web in a push, not pull, way.
There's an amazing power in monitoring websites with no sweat and skimming the results in the morning:
- new job openings for companies you like
- new job openings/closings from your current company
- products you're waiting to go on sale/come back in stock/become available refurbished (got 70% off some nice headphones)
- covid sewage stats, if you want to know about spikes
- apartment listings
- github releases you care a lot about (<3 yabai)
- legalese for critical websites
Personally I rent a little DigitalOcean droplet for $5, since I also self-host an RSS reader, a personal Telegram bot, etc. (it's also very useful for setting up a little HTTP site for experimentation),
but you could do it on your laptop, since it doesn't have to run at the same time every day.
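To make the setup concrete, here is a hedged sketch of what a couple of entries in urlwatch's urls.yaml jobs file might look like. The URLs, selectors, and job names are placeholders, and the exact filter and browser-job syntax can vary between urlwatch versions, so check the docs for yours:

```yaml
---
name: "Acme careers page"
url: "https://example.com/careers"
filter:
  - css: "div.job-listings"   # keep only the listings block
  - html2text                 # diff readable text instead of raw markup
---
name: "Headphones back in stock"
# "navigate" renders the page with a headless Chrome, for JS-heavy sites
navigate: "https://example.com/store/headphones"
filter:
  - css: "span.stock-status"
  - html2text
```

Each run, urlwatch fetches every job, applies the filters, and only reports a diff when the filtered text changed, which is what makes the morning skim so low-effort.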
Heh, I wrote [1] specifically for the "apartment listings" use case, but instead of notifying you by email, it uses GitHub Actions to create an RSS feed from a couple of CSS selectors.
I love it! I'm doing something similar: using Playwright as a headless browser to log into Twitter a couple of times a day and pull news & updates from my saved searches. It's just for me: it gives me a digest of Twitter updates about the topics I care about without having to endure the site, the toxic debates, and the annoying ads.
Thanks, I’ve been looking for something like that for quite some time. I even thought about making my own since none existed, but it’s been years and I haven’t.
(I'm just going to ramble about this, because I'm really mad about the state of IT right now and I can't articulate some of these ideas well.)
Sometimes I fantasize about the concept of "your IT person": kind of like your local barber, general practitioner, tailor, or baker, someone in charge of some aspect of your digital life with their own little local infra, tailoring personalized feeds, taking care of privacy/health issues, and providing you with their own simple interfaces, or "speaking" open protocols that connect with your feed reader, covering everything from movies and written articles to memes and funny videos. The point is to have a human being to talk to about this, not a dark algorithm constantly adjusted for the profit of a soulless company.
Some other ideas come back to me from time to time: community-run local data centers (kind of like libraries), or providing simple content services from your home internet connection (which is why I loved this idea: https://gitlab.com/veilid/veilid). Your personal digital human (but maybe AI-assisted) curators, all part of an idealized "virtual solarpunk world": sustainable, private, and healthier, with humans put first, far from the toxic fascination with disruption, profit, perpetual global growth, and all the other toxic startup crap. We have everything; we just have to glue it together, and I see people thirsty for a digital world like that.
It's not the first time I've heard of someone feeling healthier after moving to the fediverse. I have my own set of scripts and mini apps running on top of Puppeteer, with a local llama.cpp for summaries and recommendations. It's not perfect, but I'm planning to put more effort into this, maybe look at OSS projects aiming at the same thing (e.g. the *arr suites, nostr, ActivityPub, Veilid), and offer it to friends and family members to see if they like the idea. I even have a name for all those scripts: "not a browser". The web without HTML, CSS, and JS served along with the data. Just provide the data; how you display it is the user's concern.
I can really commiserate with how you're feeling. Something I've done recently is set up a "homelab" server running a few web apps of various functions. Nothing novel here, but what I really love is that all of these apps are user-centric and open source/community built. There are no user-hostile algorithms or ad-based systems, only software that tends to put the user first and speak open protocols.
This sort of thing is very achievable if you work in IT and can run your own server. But what about everyone else?
Your idea of a personal “IT person”, like your own personal barber or tailor, is very intriguing. I wonder if it’s feasible to provide a service like this to people who are of a similar mindset about disconnecting from huge tech companies/algorithms and using something more personal, but don’t have the technical means to achieve it?
I’ve also been thinking about the personal data aspect of healthcare. I hate that my medical records are stored in MyChart and a dozen other proprietary systems that I have no control over. Yet I have a supercomputer in my pocket. Why can’t I maintain my own copy of my records and selectively share data with my doctors when I arrive for an appointment? Why do I still need to have one doctor’s office fax something to another’s? I should be able to own my data and tap a button on my phone to do this. Only Apple’s Health app seems to come anywhere close to providing a fraction of this functionality, but there seems to be zero adoption of it within the US. Even then, it only benefits Apple users. Something like health data should not be locked in a proprietary system, even one that runs locally like Apple Health. There should be some open protocol and an ecosystem of implementations.
I’m 100% in agreement, the state of IT makes me mad.
We had something close to good for a brief window before marketing and greed took over the internet.
I think if someone built a Synology-NAS-like product, with apps designed to benefit the end user and people (“your IT person”) to support you, and it wasn’t aimed at fleecing customers of all their personal data and dollars, it might very well work and would be a utopia for the end user. But the economics probably make it a low-margin business with lots of risks (e.g. liability over losing people’s files).
> Sometimes I fantasize about the concept of "your IT person": kind of like your local barber, general practitioner, tailor, or baker, someone in charge of some aspect of your digital life with their own little local infra, tailoring personalized feeds, taking care of privacy/health issues, and providing you with their own simple interfaces, or "speaking" open protocols that connect with your feed reader, covering everything from movies and written articles to memes and funny videos.
Wow, I haven't thought about this idea before. I really resonate with this and can see it being an actual reality in the next 5-10 years. That being said, I'm sure somebody has tried the approach. What's preventing it from happening now?
Just improvising with the idea, and trashing big tech some more:
I would say: nothing. If something like that happens, my guess is it will be silent, away from the noise; it would not be a sexy tech headline. Kind of like what's happening with Mastodon: no big names, no personalities, no numbers with tons of zeros behind them, just an interoperable protocol. You know, real tech. It doesn't need marketing or infinite scale, and it doesn't depend on FOMO or other social phenomena; it's just pure value for its users. Also, as Cory Doctorow said somewhere about Mastodon instances, ideas like that are going to come and go, and that's fine. That's the process of finding the next valuable thing that will stick with us. It should be organic; the next big thing will not come from the Silicon Valley casino.
An "IT person" would just be a job, pretty much like any other craft. What happens is that some of the noisiest parts of our industry are sick, feverish, delusional, and full of their own bullshit, and we have it in our subconscious that everything needs to be flashy, glorious, amazing, disruptive, big, competition-proof, and fast (in terms of success). Not sustainable at all.
The idea is lovely and really appealing — I think the main reason it hasn’t gone that way is because this technology is inherently about leverage and scale. Write code once, run it a million times for little marginal cost. All structural forces seem to be working against this. Maybe when we’re all out of a job replaced by AI it will be more feasible??
"How to do nothing" by Jenny Odell is such a great book. Probably not for the usual reader of Hacker News, but I strongly recommend it to anyone who also starts to feel the fake "productivity" pressure imposed by the attention economy.
Indeed, although I'd clarify that it's not about "digital detoxing" or a tool for increased productivity; it's more philosophical, more of a discourse.
In that vein, I'd also recommend another of hers, "Saving Time," which is (quietly?) incredibly radical. Personally, I found it the better of the two, with more of a focused narrative.
I agree. I bought the book twice. Bought a paper copy, read it and gave it to a family member. Recently I bought it again on Libro.fm as an audio book.
I have an Audible membership, and they've seemingly given it to me as some sort of free inclusion with the membership, if that's useful to anyone else in the same boat.
But I want to go further: not just a personal feed, but one that's also time-limited and distraction-free.
I want to build a feed of all the written content I follow. And every day it selects the combination of items that together make for about 30 minutes of reading. This should include blog posts, articles, tweets, everything.
It could use ChatGPT or some other tool to filter for the most "nutritious" content, but it should give priority to valuable content over flame wars.
Then this should be delivered to my Kindle or my reMarkable tablet. Away from colors and flashes, and away from fast internet.
Finally, for step two, I want to be able to subscribe to my friends' feeds and get "guest" content from them every now and then.
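The selection step described above can be sketched in a few lines: estimate reading time from word count (~250 words per minute is a common rule of thumb) and greedily pack the highest-value items into a 30-minute budget. The Item fields and the scores are invented for illustration, with the "nutrition" score standing in for whatever a ChatGPT-style filter would produce:

```python
# Sketch: pick a day's worth of reading (~30 minutes) from a pool of feed
# items, highest-value items first. The fields and scoring are illustrative
# assumptions, not from the original comment.
from dataclasses import dataclass

WORDS_PER_MINUTE = 250  # rough average adult reading speed

@dataclass
class Item:
    title: str
    words: int
    score: float  # e.g. produced by an LLM "nutrition" filter

def reading_minutes(item: Item) -> float:
    return item.words / WORDS_PER_MINUTE

def pick_daily_digest(items: list[Item], budget_minutes: float = 30.0) -> list[Item]:
    """Greedily fill the time budget with the highest-scored items."""
    digest, used = [], 0.0
    for item in sorted(items, key=lambda i: i.score, reverse=True):
        cost = reading_minutes(item)
        if used + cost <= budget_minutes:
            digest.append(item)
            used += cost
    return digest
```

A fancier version could solve this as a knapsack problem, but with a handful of daily items the greedy pass is likely good enough.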
I’ve been working on some side projects for this sort of thing. Would you be interested in trying out a beta of this? I really like the idea of adding some summarization with gpt and if others are interested maybe I’ll wrap it up and share it?
Right now it’s just a simple JavaScript app I use locally, but maybe I should make it cooler… this idea sounds cooler than what I was thinking.
This matches some vision of the early Internet and agents (having programs browse the internet for you to do useful tasks). I thought it was Douglas Engelbart, but I cannot find anything from him about agents. Maybe it was some other technology expert.
Microsoft (also?) said we would all have a dog or something sniffing out stuff on the Internet for us and bringing it back. Back when Windows 95 was cool, IIRC.
Something that leapt out and resonated with me here was the conscious decision not to bother with automated tests (at first? maybe they exist now). It's taken me quite a while to get over the feeling of dirtiness that comes with not writing tests, but I now take a similar approach when building toy projects for personal use. Too many of them were killed in the first day or so of development, when I should have been focusing on gaining some momentum before I lost the will to build the thing, but was instead putting together test infrastructure and CI pipelines. I now work on the basis that tests can be added once the lack of them becomes an issue for me.
It's easily lost that development best practices are a function of scale.
What's absolutely necessary to get anything done in a large mature project with many developers and a sprawling code base may be cumbersome in a small single-developer project.
I'd argue it's a function of scale as well as a function of criticality. Not all projects need the same level of care put into them, as the blast radius of bugs is sometimes negligible.
For larger projects, even trivial bugs can waste an enormous amount of time, since there are far more places for them to hide, and fixing them may mean blocking other work. That means that at such a scale, optimizing for correctness at all times makes sense.
In a small project, where bugs are easy to identify and tend to be easy ten-minute fixes, forgoing the correctness tax and fixing bugs as they become apparent is the more economical choice.
(I'm the post author). I agree with this. Perhaps I'd emphasize larger in terms of people working on it rather than codebase size. Once you have competing "mental models" about how the code works, the problem of silent bugs can escalate quickly.
So I do think the "always test" mentality is a reasonable default for non-prototype work. There's a tipping point where going on without testing can get the project out of control, and it's hard to tell when that is, so in contexts where you care to avoid that risk it makes sense to be strict about it. I didn't care about that risk in this case, so it made sense not to bother and to focus on building momentum instead. I'd probably add some integration tests now if I wanted to try a significant new feature or refactor, or if I had to consider contributions from other developers.
The inverse is true as well: for a project that's part of a medical device, regardless of size, the economical choice isn't allowed for certain classes of bug.
And a function of how easy manual testing is. Some things are impossible to test manually without changing the actual code.
For example, testing access permissions: your UI isn't going to display data or operations you don't have access to.
But that doesn't mean your back end is honoring permissions.
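This gap is exactly what a small backend-level test catches: call the API directly, with no UI in between. A minimal sketch, where the endpoint and roles are invented for illustration:

```python
# Sketch: exercise the backend's permission check directly, bypassing the
# UI. The tiny "API" below is a stand-in for a real endpoint.
class Forbidden(Exception):
    pass

def delete_post(user_role: str, post_id: int) -> str:
    # The backend must enforce this itself; hiding a button is not enough.
    if user_role != "admin":
        raise Forbidden(f"role {user_role!r} may not delete posts")
    return f"post {post_id} deleted"

def viewer_cannot_delete() -> bool:
    """True if the backend honors the permission for a non-admin caller."""
    try:
        delete_post("viewer", 42)
    except Forbidden:
        return True   # backend refused, as it should
    return False      # the UI may hide the button, but the API was wide open
```

The same shape works against a real HTTP API: authenticate as the restricted user and assert the forbidden operation is rejected, regardless of what the UI shows.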
The other thing to take into account is how noticeable the bug is. You should lean more towards writing tests for bugs that are less likely to be noticed by someone.
For personal projects I start writing tests when I run into something that is difficult to get right. Not before. I then focus on adding tests ad hoc when I run into problems.
I generally agree, though it depends on the nature of the application.
A couple years ago, I was working on a personal project where I'd built a respiratory gas analyzer and needed to write some software that could interface with it over Bluetooth and display the data in real time. Unit testing for various functions that needed to accurately perform scientific calculations was very beneficial, but in retrospect, writing tests for other aspects of the interface was pretty much a waste of time. I ended up getting rid of the tests entirely once I had confidence my functions wouldn't be changing by that point. It turns out that when you're a solo developer, manually testing your application can be perfectly adequate.
New anon account so I can speak openly about the future.
My jaw is on the floor because this post seems like it could have been written by my future self. I can’t believe how much in common I have with the author.
I’ve realized I’m burnt out and have been planning to quit my job early next year (wanting to state this is the reason for the anon account).
But there’s a lot of people who are probably feeling this same way. What’s astonishing to me is what the author did is almost exactly what I’ve been daydreaming about doing in my time off.
I’ve also been thinking about the open/IndieWeb and how I’d like to engage with it. I was planning to build some sort of app to experiment in this space.
There are similarities even down to one of the specific problems in this space: how to prevent infrequent posts from being lost in the flood. And technical considerations: what languages and technologies to use. I had also been considering building something using modern web technologies, where I’m about 10 years out of date on web development.
In some ways, I’m delighted there’s another person out there to provide some validation to what I’ve been thinking and feeling recently. It makes me feel like I’m going to head down the right path.
In other ways, I’m upset and jealous the author got there first.
>In other ways, I’m upset and jealous the author got there first.
If it makes you feel better (or worse?) I wrote something very similar to this 20 years ago and still use it every day. I wanted my own RSS reader back then, hated all the ones I saw and wanted one that looked like a normal blog (not inbox, similar to OP's requirements) that I could design however I wanted. So I basically wrote an RSS feed parser to give me everything and designed it to look like my normal blog does.
That got modified to go grab the full article when the RSS feed only had blurbs. I didn't want to have to click outside my reader to view the full post; I wanted everything in my feed reader. Since I had a basic page scraper, I used it for other sites that didn't have RSS feeds too, which helped a lot when social media became popular: I never had to actually go to any social media sites at all. I could just stay in my own feed with the content I wanted (I was never really into social media anyway).
And as you can imagine, it being 20 years old means it's written in old tech, PHP and XSLT, because that's what I was writing 20 years ago. And it's still written in that.
In any case, I highly suggest making your own. It's a fun project. Sure, mine is crufty and old and sometimes doesn't extract the content I want (scraping is fairly imperfect), but it's mine, it's been my daily reader for 20 years, and I love it.
> In other ways, I’m upset and jealous the author got there first.
Don't be! the whole point of this was to build something for myself, and use the process to reflect. There's no reason why trying something similar shouldn't work for you, I certainly wasn't the first to implement a personal reader.
Fun fact: I just skimmed through one of the indie web posts I linked (which I had read months ago) and it struck me how much of their ideas I just replicated almost verbatim in my post:
> Firstly, don't try and make your software work for everyone, or just for a specific set of people you think may be interested. Make it for you.
> By making it generic and possible for others to work with it, you'll make tradeoffs that may make things worse for your own usage, or may even design for an imaginary user that may not even exist, or build a system that with 17 different configuration items you could have a completely different system. Be selfish and make it more useful for yourself.
> Don't be! the whole point of this was to build something for myself, and use the process to reflect. There's no reason why trying something similar shouldn't work for you, I certainly wasn't the first to implement a personal reader.
Thank you for the encouragement! I do intend to still do something in this space if only for no other reason than to go through this process myself with the hopes of rekindling my interest in technology and software development.
And thank you for sharing your experience with us! The validation and motivation I’m feeling after reading this definitely overshadows the jealousy. :-)
This was a breath of fresh air. I've been on a very similar burnout/recovery path in the last year. Building useful personal software let me enjoy my work again.
The other main benefit for me is also being able to play and use any "unconventional" tech I care for. Building new-fangled single-binary PHP executables, using sqlite in prod, deploying WITHOUT Docker: these make it enjoyable for me.
What I've noticed is that this also has a knock-on effect on the main work for me - personal use repos often have new techniques & optimisations I can add in.
Some parts sounded like you were talking about me! :-)
It's funny how many technologists play with stuff on the side, derive lessons from these experiences, and end up bringing something helpful, even valuable, to their professional jobs... but employers sometimes (at least in some of my cases) disparage such efforts, or outright block such enthusiasm. Then again, maybe I'm just working for the wrong kinds of orgs. ;-)
I'm glad that you have been able to reap the benefits of the knock-on effect; kudos to you!
Big fan of the "feed" mentality over the checklist of things to read/consume. I have picked up a few RSS readers over the years but haven't stuck with them. Who needs another inbox to manage? I'll definitely check out feedi! (https://github.com/facundoolano/feedi)
I'm exactly the opposite. Having an RSS reader means that I quickly process all the "new" stuff, and then proceed to hack for a while. Before setting this up, I'd find myself aimlessly clicking about, thinking I might find something new here on HN, maybe there on Reuters, or even over yonder on ... etc. etc.
(see https://news.ycombinator.com/item?id=38369946 for more details. Newsboat stores its state in a regular sqlite db, so I've since queried the db for stats and evened out my alphabetic splits)
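For anyone curious about those stats, here is a sketch of the kind of SQL involved, run against an in-memory stand-in for the cache. The rss_item table and its feedurl/unread columns are assumed from memory; check your own cache.db with .schema before relying on them:

```python
# Sketch: the sort of stats query you can run against Newsboat's sqlite
# cache. Column names here are assumptions; verify against your cache.db.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rss_item (feedurl TEXT, title TEXT, unread INTEGER)")
db.executemany(
    "INSERT INTO rss_item VALUES (?, ?, ?)",
    [
        ("https://example.com/a.xml", "post 1", 1),
        ("https://example.com/a.xml", "post 2", 1),
        ("https://example.com/b.xml", "post 3", 1),
    ],
)
# Unread count per feed, busiest first
rows = db.execute(
    "SELECT feedurl, SUM(unread) AS unread FROM rss_item "
    "GROUP BY feedurl ORDER BY unread DESC"
).fetchall()
```

Against the real file you would open `~/.newsboat/cache.db` (read-only, ideally while Newsboat isn't running) instead of `:memory:`.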
I use rss2email to send all my rss subscription entries as emails. So I have one centralized location for everything (personal emails, mailing list messages, rss feeds). Combined with filtering to individual spools and a powerful email client (mutt), it is a very pleasant unified experience.
It looks like the author added authentication to support accessing the app from anywhere.
Would it instead be possible/easier to throw it on a VPN and make the VPN accessible from anywhere?
The reason I ask is that I want to be able to access personal web apps securely and I’m trying to figure out the easiest approach. Every time I look at authentication, it’s a labyrinth of concepts, protocols, and libraries. I don’t want to maintain that!
The gold standard easy way of doing this is tailscale. I have a few apps hosted on a Raspberry Pi at home (like home assistant). I have tailscale on that Pi, and on my phone and it works very well. The only auth you have to do is "log into tailscale on each machine".
I really cannot recommend tailscale enough for how easy it is to set up a secure network of your own devices.
In that setup, how does your phone know to access your Pi over the VPN, but use the regular connection for everything else? Subnet mask?
Edit: any pointers on how to set this up would be appreciated! Maybe I’m using the wrong search engine, but I haven’t found this scenario laid out clearly, yet.
In detail: yes, the underlying network routing on your device will route traffic for that target through the right interface, which means it goes through the tailscale encryption.
In practice though - you don't have to worry about any of this! These are the steps:
* Create a tailscale account (there's a good free plan)
* Set up server. Give it a hostname (I'll use "mypi" in this example)
* Set up your web service on server - check you can get to it locally (e.g. connected by wire or just on http://localhost on that computer)
* Install tailscale on your server, and log in to your tailscale account (tailscale login and follow prompts)
* Install tailscale on your phone/laptop. Log in to tailscale
* On your phone/laptop go to http://mypi and it should Just Work!
On my iPhone I have a wireguard VPN set up with "Allowed IPs" 10.200.200.0/24
When the VPN is on, the phone directs any traffic for 10.200.200.0/24 through the VPN and the rest of the traffic through the normal network stack. This is often called split vpn or split tunnel.
The other end of the VPN needs to be running wireguard and accessible from the internet. I have a VPS for this because my desktop is behind a firewall. But I can connect from my desktop to the VPS over wireguard (same setup) with keepalive, and they can all talk to each other over that private network.
I don't usually have this on, but occasionally if I want to ssh back home from my kid's soccer practice, I'll use it. I ssh to the 10-net address for my laptop after bringing up the split vpn.
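The split tunnel in this setup falls entirely out of the AllowedIPs line in the phone-side config. A sketch of what that config might look like, with placeholder keys, addresses, and endpoint:

```ini
# Phone-side WireGuard config (split tunnel): only 10.200.200.0/24 is
# routed through the VPN; all other traffic uses the normal network.
[Interface]
PrivateKey = <phone-private-key>
Address = 10.200.200.2/32

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.200.200.0/24
PersistentKeepalive = 25
```

Setting AllowedIPs to 0.0.0.0/0 instead would route everything through the VPN; narrowing it to the private subnet is what makes this a split tunnel.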
It'd definitely be possible although, if you're not already using a VPN, I doubt it'd be easier. You could do it a few ways, but the gist would be running the VPN endpoint and the web app in the "same place" (same machine, same network, etc.) and restrict access to the web app from anywhere else.
So if I'm not on my VPN (or at home) nothing is shown. Other considerations:
- You may want a VLAN or separate guest network depending on if you allow guests on your network, what type of services you're running, etc.
- Many of the things I run at home have password authentication and I use them in addition to the VPN restriction.
- This was the first thing I thought of and may be insecure for reasons outside of my expertise.
- The nice thing about this is that I run pihole in the same compose file so when my phone is on my VPN I get remote ad-blocking "for free".
- Tailscale is easier and nicer (UI-wise) to set up, but I stopped using it because it's a battery hog on iOS. The "trusting someone else's server" thing is also an issue, but if not for the battery issue, I would probably still be trading the added risk for the convenience. This was not too bad to set up, though, and I'm happy with it for my simple needs. The Tailscale app also doesn't have a convenience feature that the Wireguard app does: I can tell Wireguard specific networks that I don't want it to run on (i.e. when I'm home) so that it enables automatically when I leave and turns off when I'm home.
Wow, that’s remarkably close to what I’ve been thinking we’d need on our cruising sailboat. We have intermittent connectivity, especially when offshore, so a couple of additional things would make this an exact match:
* Being able to say “sync now” when we have a moment of connectivity (for example, passing an island with LTE)
* Being able to Readability-process and locally cache all content (including images) by default, so one can read while offline
I had a similar experience a few months ago, where I wanted just an RSS reader as my go-to homepage.
Miniflux as a self-hosted option worked out perfectly for me (I used PikaPods to host it).
It almost feels like reading a newspaper: checking off one page at a time, and when done, I move on to my other work/daily duties. No more random/aimless scrolling or visiting multiple pages.
I love this and have been using more RSS myself lately.
The one thing that drives me crazy is that there is still no way to get posts from Facebook groups into any sort of RSS feed. Does anyone else have this problem or know a solution? The rss-bridge for FB groups has been broken for years because the FB redesign in 2020 made scraping harder.
I miss the various *planet feeds. I've started but then stopped building copies over the years. Whenever I get interested I don't have time to do the work and whenever I have the time I don't have the interest.
Hi, I want to comment on the "readability" package. I'm using it too, in my bookmarking project.
The main project is in Ruby on Rails, but I have a microservice in Node just for the "readability" step. I also extracted another service that needs a lot of memory and runs only occasionally.
Those microservices are only needed from time to time, and I call them from a background job, so I let them autoscale to 0 on Fly.
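For anyone replicating the scale-to-zero part, a sketch of the relevant fly.toml fragment (Machines-era option names; the app name and port are placeholders, and the exact keys may differ by platform version):

```toml
app = "readability-service"

[http_service]
  internal_port = 8080        # where the Node microservice listens
  auto_stop_machines = true   # stop the machine when idle...
  auto_start_machines = true  # ...and wake it on the next request
  min_machines_running = 0    # allow scaling all the way to zero
```

Since a background job tolerates the cold-start delay, letting the machine sleep between invocations keeps the cost of occasional services near zero.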
Nice project! I have been working on something like this using PyQt6: something that scans news sites, dumps the URLs into a locally running PostgreSQL instance, and can grab URLs to display in a built-in web renderer component.
I have lots of ideas I want to implement, including storing all this data using embedding layers, and the ability to deconstruct the HTML in use.
My last thought was that I want to take the "articles", pull out the content, redisplay it using Django or a smaller web server, and replace the ads with positive affirmations such as "you are doing great!" or "great job on your exercise". I figure I can even use deep learning to generate fake ads that look like normal ones but only serve to uplift me!
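The ad-swapping idea can be sketched naively: rewrite any div whose class mentions "ad" while re-serving the page. This regex approach ignores nested markup and real-world ad complexity, and the class pattern and affirmations are purely illustrative:

```python
# Naive sketch: swap "ad" blocks for affirmations while re-serving a page.
# Real ad markup varies wildly and nests; a proper version would walk the
# DOM instead of using a flat regex.
import itertools
import re

AFFIRMATIONS = itertools.cycle([
    "you are doing great!",
    "great job on your exercise!",
])

# Matches a non-nested <div> whose class list contains the word "ad"
AD_BLOCK = re.compile(r'<div class="[^"]*\bad\b[^"]*">.*?</div>', re.DOTALL)

def uplift(html: str) -> str:
    """Replace each matched ad block with the next affirmation."""
    return AD_BLOCK.sub(lambda m: f"<aside>{next(AFFIRMATIONS)}</aside>", html)
```

Running the extracted article through this before re-rendering it in Django would give the "uplifting ads" effect; generating fake ad creatives with a model would just change what the replacement string is.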
Very cool project! I was just thinking about RSS readers and the like the other day.
I enjoyed your take on using the app the way you imagine it and developing around that (frequency buckets, etc.). Those problems sometimes go unnoticed during the design stage but are crucial to really getting the benefit of the app: the REAL reason you're building it and will eventually be excited to use it. I have gone through similar career burnouts, and it's projects like these that can relight the flame... until the next burnout lol
> I treasure my attention and so I've spent some time to opt out of ads. 5 years ago I couldn't tell what DNS was; now I've got an OPNsense router at home running ZenArmor/Unbound and I can link to it from my phone over WireGuard.
> Using ublock I've made a point to prune the pages I visit, so classes like "header" "breadcrumbs" "recommended" "sponsor" &c. are hidden.
Your description of the use case you wanted matches almost exactly what Google Reader used to be.
Google started dying for me with Reader, then Inbox, and now the latest blows are the recent abuses in Chrome and YouTube and the absolute garbage the search front page has become.
I will definitely take a look at feedi. You just gave me some hope.
> I wasn’t interested in implementing the "social" features of a fully-fledged indie reader. I didn’t mind opening another tab to comment, nor having my "content" scattered across third-party sites.
I did the same thing with my feed reader (where I'm subscribed), which sits nicely combined with my timeline (where I publish) in the same website system that I built.
Although the option to immediately reply to posts in my reader (by means of webmention) is very attractive, for now I don't mind clicking and visiting external websites and maybe leaving a comment over there.
For the rest, this article really hits home for me. I'm also dogfooding and trying to make it work for myself first, although my pub/sub functionality is part of a larger website/homepage system.
I'm really happy that more people are building this kind of stuff to be able to ignore algorithmic timelines, ads and other enshittification.
edit: Some kind of automatic reader is next on the list. I want to have that filter on keywords I entered upfront. In the background, this automatic reader should collect interesting articles/posts and create a sort of Newspaper or Magazine, which periodically presents itself to me as a sort of surprise.
> I was disillusioned with the software industry. There seemed to be a disconnection between what I used to like about the job, what I was good at, and what the market wanted to buy from me.
If you have not figured it out yet, they would make more money having you do what you love to do, but the sadists in HR want to destroy your soul so they push you to do things you hate.
Those HR bastards manipulating the customers into buying adtech when everyone knows the real profit^H benefit to humanity would be if I made an indie game in straight C
> I tried several Python libraries to extract HTML content, but none worked as well as the readability one used by Firefox. Since it’s a JavaScript package, I had to resign myself to introducing an optional dependency on Node.js
Mmm, no you don't. In fact, even if you install it with NPM—which you don't have to do, of course—Readability was designed to run in the browser, not on Node.
The "JavaScript" → "Node" logical leap that people make (even in starkly inappropriate circumstances like this one) shows how much damage the warped traditions of the package.json cult have done to good, clear-thinking reasoning.
FWIW my reasoning wasn't JavaScript -> Node; it was more that I was biased towards doing all significant work on the backend. So it didn't even occur to me that I could add some ad hoc logic to fetch the article HTML from JS, pass it to Readability, and insert the result in the UI, which I guess is what you suggest.
Having this logic available in the backend was convenient for me anyway. I'm using it also for the "send to kindle" feature, which I couldn't have if content cleaning were done browser-side. Having it in the backend also opens the option of saving the content in the db, to skip loading/parsing latency and preserve it long term.
> it didn't even occur to me that I could add some ad hoc logic to fetch the article HTML from js, pass it to readability and insert the result in the UI
I don't know what you mean by "ad hoc". Again, Readability was written to run in the browser. It operates on the DOM, not HTML. If there's anything ad hoc going on, it would be (a) the fake DOM that Gijs wrote[1] so you can run it when all you have is HTML instead of an object graph, and (b) logic involved in shelling out to a separate NodeJS process from Python. These are hacks on top of hacks.
> which I guess is what you suggest
I wasn't suggesting anything. I'm making an observation about how illogical the JS,-therefore-Node cliché is. If I were going to suggest anything, it would be "don't use Readability" since it isn't a good fit for this use case. If "use Readability" were a requirement, then I would suggest, for the benefit of yourself and your brethren, rewriting Readability in Python or creating a binary Python module using either QuickJS and Readability plus Gijs's fake DOM, or Haxe.
> If I were going to suggest anything, it would be "don't use Readability" since it isn't a good fit for this use case.
It is a perfect fit for the user experience I was going for, even if it adds development and operational complexity (both of which were lower priorities for this project, as I stressed in the blog post).