IMO the genius of uMatrix was that Raymond realized that an in-browser request firewall should have two dimensions, instead of one: (1) the (sub-)domains that the requests point at, and (2) the type of request (e.g. cookie, image, XHR, etc).
Now, obviously you could always write an ABP-compatible filter that could block any combination of these two, but that's hard. What uMatrix did is present the underlying complexity in a way that's easy to intuit for a power user, giving you point-and-click request filtering power over both dimensions simultaneously.
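To make that concrete (the domains here are just placeholders): blocking the images that news.example pulls from cdn.example takes a hand-written filter like the first line below in ABP syntax, while in uMatrix it's a single cell in the grid, stored as a plain source/destination/type rule like the second:

    ||cdn.example^$image,domain=news.example
    news.example cdn.example image block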
For that reason, I'm skeptical that uMatrix can be replaced with a traditional blocker, not even one with an advanced mode like uBo, because a one-dimensional blocker simply doesn't offer the specificity that uMatrix's two-dimensional model does. That makes the archiving of uMatrix incredibly tragic and a great loss for the web. I hope someone as trustworthy and competent as Raymond will pick it up in the future, and I thank him for all his work on it up to this point.
This is so true, the two dimensions allow such a nice simplification of what should be rendered even in UX terms.
What I personally didn't like about it, though, are the tech-specific parts (e.g. XHR and "other" both require script anyway, as not a single website will work with xhr disabled).
So for my browser project [1] I decided to split it up into "text", "image", "audio", "video" and "other" which I hope will make more sense for most people.
Back in February, when the debate about the manifest and request APIs started, I started my own semantic browser project, which aims to filter out all UX-interfering CSS and HTML and probably won't allow JS anyway.
It tries to focus on the automation and caching parts that are broken in current web browsers: everything is peer-to-peer, other trusted peers can be used to share bandwidth, content, or metadata with each other, and each peer has a 100% persistent local cache that only gets refreshed once the user tells the browser to.
If there's a need to support webapps later, they will probably be sandboxed in a new window backed by a temporary cache located in /tmp/... in order to prevent abuse of local storage and cookies. I've learned not to trust any site these days; even my bank's website uses foreign overseas-located trackers, which is technically a GDPR violation.
> not a single website will work with xhr disabled
That's total nonsense. I browse with uMatrix blocking everything by default and I rarely need to enable xhr to make a site work. Most of the time it's only used to bloat a site, not deliver the actual content. The same is mostly true of javascript as well.
This is particularly true on newspaper websites. A great many news sites are reasonable with everything disabled but utter cancer by (typical) default.
Well, maybe we have a different pool of websites we visit. But in my case, pretty much all websites built with vue.js, react, angular and the like don't have server-side rendering implemented correctly.
The HTML doesn't contain the content; it's just a blank page without the XHR request. And all webapps I've seen so far basically just scaffold all the polyfills and stuff, without any kind of content being delivered (or serialized) inside the HTML.
Additionally, all newspaper websites that I've seen in my country blank out everything with white-on-white if you don't allow JS with XHR. Either that or the article teaser is faded out with an overlaid blur image. Well, that is at least when you don't set the user-agent to Googlebot :)
One of the great things about uMatrix is the above-mentioned multidimensionality.
Your example displays content for me just fine in my default config, because the XHR requests are to the origin. Yet it blocks useless requests for two dozen different resources on other domains.
Well there's your problem. Microsoft counts as one of the sites that uses web tech to invade your privacy. Many parts of microsoft-dot-com don't work without javascript.
Hmm, this would suggest that any website that dynamically renders content should give me a server-side rendered version if I just switch my user agent to Googlebot. I may start doing that for select sites.
When a page doesn't work, and I care enough to un-break it, I will first try enabling just js. If that doesn't work, I'll add XHRs, too.
So, I can say with confidence that there are a good number of sites that fall into both camps. I cannot state the ratio with confidence, but there are enough sites that work well enough with just js that I choose not to enable XHRs. Here are a few examples:
The final example is the most relevant to this conversation, I think. In particular, there's a phenomenon common enough that I've noticed it as a pattern: sites that display images (and sometimes content) with js, and don't require XHRs to do so.
----------
Looking through my history to find these examples, two things become clear (about the sites I visit; I don't claim my browsing is representative):
1. Regardless of whether they also need XHRs, sites that require JS fall overwhelmingly into one of two categories: either I use them regularly and they're already on my saved whitelist (e.g., YouTube), or I decide that I didn't care enough about them to bother enabling javascript.
2. Yes, you're right; most sites that require js also require XHRs. However, if we limit ourselves to the subset of sites above that I haven't whitelisted and don't want to walk away from — i.e., the ones where I'm actually fiddling with uMatrix — it's somewhere around 50%.
In conclusion, I think the functionality is useful and should not be axed, but it's probably also a good idea to have a simplified mode like you suggest: combining js, xhr, and other into one column. Maybe also remove the "cookies" column — most browsers have built-in preferences for those; 3rd-party cookies almost never have to be unblocked; and I don't know (m)any people besides me who block 1st-party cookies by default.
[0]: Here's what that looks like when actually applied to the page. The keen observer will note that scripts are only temporarily enabled (I do this only when I want to use the collapse functionality). This (enable it when I want it) is a common browsing pattern of mine on sites that do progressive enhancement. https://smichel.me/hn/umatrix-hn.png
"Just as an example, what I visited yesterday: ..."
The content is served from a different URL. The simplest solution is to use that URL, not the "empty container" one. For example, to retrieve the content and extract just the FAQ part:
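Something along these lines, say (the endpoint and search term here are just stand-ins, not the actual URL from the example):

    curl -s https://example.com/faq.txt | grep -i -A 2 'refund'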
Why don't you just write an API that bounces requests through your webserver, where you render the page using a browser that runs the JavaScript and XHR requests, OCR a screenshot, and then send it through? That would help you win this argument, right?
This conversation is about browsing the web, not the strawman you've constructed. We're talking about websites that you go to and they don't render if you turn off JavaScript, of which there are many. Dragging it into the corner where you're using cURL+grep on a plain-text endpoint which happened to exist for the example provided is not a valid response.
Your proposed solution would not be appropriate for me since I use a text-only browser and prefer to use the web as either a text-only information retrieval source or a media download source. I do not use a "modern" graphical browser except for commercial, interactive, transactional uses, which comprise a very, very small fraction of my personal web use.
It is probably a mischaracterisation to suggest the "plain-text endpoint" existed by chance. Many, many websites use the same or similar frameworks and "plain-text endpoints" have become commonplace. Regardless of the trends in web development, the solutions I use for text retrieval work reliably across almost any website, otherwise I would not use them.
You're not going to allow JS? I don't know the answer, but I'm curious what percentage of sites are unusable without JS? It might be fine for basic browsing, but nearly any ecommerce or site with "interactive" content uses JS. Many video players require JS. It seems like a very niche solution that won't have mass appeal if you're not going to allow JS.
To me, the perfect solution would be something similar to uMatrix where power users can choose the settings they think are best. Anyone else can use simplified controls to indicate if they're seeing ads or if site functionality is broken.
Then there's some aggregation software that looks at all of those inputs and determines the best default settings for each site. If ads are seen then the blocking levels are increased toward the fringe of advanced user inputs, if functionality is broken then it walks back the blocking levels until broken functionality reports end.
This would be a never-ending ebb and flow for each site, so it would be important to automate it as much as possible.
Turn everything OFF for all sites by default. I turn off third party stuff and all scripting.
Then visit a new site. Then opt in until you either decide the site sucks and leave, or get something acceptable and read your article. Then press SAVE to save the settings for that site.
It's worth mentioning that many sites do not render well without javascript, but reader mode renders a perfectly readable article.
But it's also much MORE prevalent that you turn on javascript and the site does much worse things.
This is the point to me, and this is the way I've used it for years. Everything always loads almost instantly, I enjoy not burning my bandwidth for things I don't care about.
I regularly have over 150 tabs, and at that point blocking unneeded/unwanted content makes a very noticeable difference, especially on older machines.
Most of the web is static; allow "all" (or its individual columns: media, script, XHR) and ads are still blocked.
uBlock Origin has blocking modes [1]; it should be possible to start with hard mode and relax it with a shortcut.
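For reference, hard mode boils down to a few dynamic filtering rules in uBO's "My rules" pane (as I understand it), which you can then relax per-site with noop rules:

    * * 3p block
    * * 3p-frame block
    * * 3p-script block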
Disabling part of a page breaks it for one user and improves it for another. No cookies: no "previously viewed"; no javascript: no interaction; no css: hidden content displayed; no media: faster load. Just like with Stylus, there are a lot of opinions.
I can mark what works with my setup, but it doesn't necessarily match your accepted level of "brokenness".
Well, at least not in the "main app" of the Web Browser. I'd rather integrate an automatable sandboxing concept for webapps, so that users can choose to e.g. use Instagram or Facebook, but without having to worry that the localStorage and cookie quirks in there can be used for any tracking.
In my head it will probably be something along the lines of: if "other" is activated, display a little notification under the address bar asking something like "Do you want to bookmark this as a sandboxed Webapp? [x] Yes [ ] No", which leads to a new window being opened that is sandboxed with its own WebKit cache, its own WebKit userdata folders, etc.
Regarding video players: I'm also thinking about integrating youtube-dl (probably as a JS API/runner(?), as my Browser is implemented in node/ES2018) for video websites, but currently I'm not sure whether this will turn into an endless game of catch-up, as a lot of video streaming sites work with mixed transfer channels such as WebSockets or chunked transfer encoding, where the video chunks themselves are not transferred via 206 Partial Content.
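The simplest integration I can picture (just a sketch, assuming youtube-dl is on the PATH, and ignoring the WebSocket/chunked cases above) is shelling out to it from node:

    import { spawn } from "child_process";

    // Hand a page URL to youtube-dl and let it pick the best muxed format.
    // Only covers plain HTTP(S)/HLS sources, not WebSocket-based players.
    function download(url: string, outDir: string): Promise<number> {
      return new Promise((resolve, reject) => {
        const child = spawn(
          "youtube-dl",
          ["-f", "best", "-o", `${outDir}/%(title)s.%(ext)s`, url],
          { stdio: "inherit" }
        );
        child.on("error", reject); // e.g. the binary isn't installed
        child.on("exit", (code) => resolve(code ?? 1));
      });
    }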
Ah ok, that explanation makes more sense. Since so many of the sites I use are web apps, I wouldn't be able to use a browser that doesn't support JS. But, having a JS sandbox that is content aware would be great.
I'm hoping that also solves the problem Safari has where, if I log in to a site in one tab (in private mode), I can't open a second tab for that same site without having to log in again. Chrome doesn't have that problem with incognito, but of course those cookies and local storage can then be used for tracking across other sites too.
> my bank's website uses foreign overseas-located trackers, which is technically a GDPR violation.
Using foreign overseas-located trackers isn't technically a GDPR violation in itself. It's a GDPR violation if the trackers don't treat data in a GDPR-compliant fashion, and it's your bank's responsibility to ensure they do.
It definitely can't be replaced by a traditional blocker. All the blockers for Safari are one step above worthless, and have been since they changed their plugin architecture.
When I have to use Safari, I use AdGuard, which has been surprisingly decent though less reliable than uBlock on Firefox. I believe that, for reasons unknown to me, it's allowed to install and use a local component outside the web browser, which significantly increases its capabilities. I was under the impression that WebExtensions was supposed to vastly reduce your security exposure from using extensions, but if anything in AdGuard's case it seems more intrusive.
Apple uses this model because they believe that an application on your computer that has been codesigned is more secure than a web extension that you could ostensibly install without Apple being in the loop. This is very clear if you look at how they've implemented their Web Extensions support in Safari 14.
Disclaimer: I don't develop ad blockers or any other plugins, and the change I'm actually the maddest about is that uBlock Origin stopped working when Apple stopped supporting WebExtensions. I may be less mad about uMatrix simply because it wasn't on Safari in the first place.
I'm not 100% sure on the reason why uMatrix wasn't available on Safari, but I think it's because content blockers aren't allowed to see any user data. Ad blocking plugins just send a list of content to block to the content blocking API, and Safari does all the blocking and doesn't send any data to the plugin. So uMatrix's intuitive UI that GP was talking about isn't possible.
Apple claimed that they stopped supporting WebExtensions and made the content blocking API in the interest of user privacy, but all it really did was drive users to other browsers. In typical Apple fashion, they decided what was best for me when (AFAIK) I've never had a problem with an untrustworthy plugin stealing my information.
It might technically be possible to functionally implement uMatrix with multiple plugins (one to interact with the DOM to figure out what to block, then generate the blocking list, and one to deal with the content blocking API) but all the plugins I've seen don't do it. Maybe it's not possible, maybe the extra development effort isn't worth it to support a single browser that has fairly low usage.
> Apple claimed that they stopped supporting WebExtensions and made the content blocking API in the interest of user privacy, but all it really did was drive users to other browsers.
I'm not sure I've seen that - the number of Safari users has been pretty constant on my sites, and the main pressure seems to be the Chrome pushes on Google sites.
I think the UI challenges of Safari's approach are a big problem, but on the other hand there were years of people blaming browsers for ABP's bad performance, and users' privacy was definitely sold out by unethical developers. As a user it definitely is easier to trust one over the other.
> one to interact with the DOM to figure out what to block, then generate the blocking list, and one to deal with the content blocking API
No, it's not possible to do performantly. You can't dynamically update your blocking list that quickly: the JSON rule list must be recompiled by WebKit, and that can easily take multiple seconds.
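For context: the entire rule set is one static JSON document that the extension hands to WebKit for compilation up front, so there's no per-request callback at all. A trimmed-down sketch of the format:

    [
      {
        "trigger": { "url-filter": ".*", "load-type": ["third-party"], "resource-type": ["script"] },
        "action": { "type": "block" }
      },
      {
        "trigger": { "url-filter": ".*", "if-domain": ["*example.com"] },
        "action": { "type": "ignore-previous-rules" }
      }
    ]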
Maybe it's most useful to think of this as sort of an "isometric" 3D rendering, because this extra dimension is usually only exposed to the user modally: the third dimension is whatever site you happen to be visiting at the present time. By 2D I was mostly referring to the interface, which is a "matrix", and commenting on how that particular metaphor makes managing a complex request blocker a much better experience.
> It's actually 3 dimensions, as any selections you make in that UI only apply to the site you're currently visiting.
Well, technically that is only partially true, as the third dimension is the (sub-)domain scope or "*", which can be reflected behind the scenes with the first-party settings for the origin's domain for each request type, since that has the identical effect.
No, it's a full dimension. That you can't see it in the GUI is because the GUI is already scoped to the currently-active url, and you can only control the specificity of the rules, as you say.
But if you view the rules, you'll see its three dimensions spelled out:
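(The exact hostnames don't matter; rules along these lines, in uMatrix's [source] [destination] [type] [action] format:)

    stackexchange.com ajax.googleapis.com script allow
    stackexchange.com gravatar.com image allow
    stackexchange.com sstatic.net * allow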
This is full [source domain] [target domain] [request type] flexibility. The GUI will only show the stackexchange rules above when you're actually visiting that site; it doesn't mean the third dimension is fake.
How do you feel about a model like NoScript’s, where origins are labelled with tags (trusted/untrusted/default, but in theory you could have arbitrary tags), and then request rules are applied to the tags, rather than to origins directly?
I have to admit I haven't used NoScript since well before the version 10 UI changes. Prior to that it was definitely "one-dimensional" and didn't really offer the features of uMatrix. Now, I'd say (just judging by screenshots and the user guide) that it's about half way there, allowing you to look at a list of subdomains and set each one to a category. (The guides don't seem to clearly state that this only affects your current site, allowing you to have different source<->host combinations be allowed or disallowed, but I assume this is the case.)
However, the tagging model seems so limited compared to what uMatrix can do. uMatrix has 8 different requests types, so you'd need 2^8=256 different tags to cover every combination of requests to a subdomain. And that's if NoScript can block cookie requests at all: 90% of even the domains I fully trust have cookies blocked in uMatrix, simply because the sites don't actually need cookies to function. Maybe I would need yet another extension for that.
Also, however, part of what I wanted to get at in talking about the brilliance of uMatrix was the way the interface made very precise controls easy, just a point and click operation. Maybe it's possible to get a similar amount of power with a tool like NoScript, but as far as I can tell the usability of the interface just doesn't come close.
> you could always write an ABP compatible filter that could block any combination of these two
I don't think you can for cookies. You can block cookies in the browser, but I'd like a filter list, so that I can change the behaviour across all my browsers by subscribing to the same list.
The developer, Raymond Hill, had this to say on reddit a month ago:
> I will never hand over development to whoever, I had my lesson in the past -- I wouldn't like that someone would turn the project into something I never intended it to become (monetization, feature bloat, etc.). At most I would archive the project and whoever is free to fork under a new name. For now I resisted doing this, so people will have to be patient for new stable release.
> What would actually help is that people help to completely investigate existing issues instead of keep asking me to add yet more features. Turns out people willing to step in the code to investigate and pinpoint exactly where is an issue (or that there is no issue) is incredibly rare.
Presumably referring to the mess that happened with uBlock when he tried to step away. That's why it's called "uBlock Origin" now; the original name was taken over by someone who then developed it in ways against Gorhill's intentions.
It was taken over by the same people behind "AdBlock Plus", which is a shakedown operation: they allow ads to be unblocked if advertisers pay them money.
The developers running it should be able to accept monies from the users of the product, especially donations. They have built something and are keeping the end users interests at heart while evolving the capability. It needs to be sustainable, else the intrinsic motivation will dry up.
The adblockplus model is extortion. They're not incentivized to serve the end user well, especially when their primary source of revenue is the advertisers the users are attempting to block.
The problem was that the person was essentially squatting on the name (in spirit; the name had been legitimately transferred) to gather donations for himself while not actively developing it. Then he sold it to the "Acceptable Ads" people.
It's strange that this is so prevalent in the industry. Who ever got promoted for fixing all the hard bugs? Reinventing the wheel is the safe career move in many companies. And the results are huge R&D budgets, bugs, and a bad user experience.
To be fair, a rewrite from scratch can be justified in many situations, e.g. an out-of-support stack that is hard to hire for, or a change of direction where a lot of bespoke code was written and a one-size-fits-all solution is now used, leaving a lot of redundant and interlinked code.
Rarely are those potential reasons actually weighed up against the full cost of the rewrite.
But for it to work well the rewrite should be done by someone that understands the current system, not as a way to avoid understanding the current system.
This is a tragic loss, I can't be grateful enough to the author, such a fantastic tool.
I even started installing it on my parents' computers (they are not advanced users), reducing to zero the amount of time I had to spend intervening to fix their computers. The trick is to configure it in blacklist mode instead of whitelist mode. This way it only blocks requests from domains in the blacklist, plus frame elements. Just with this change you get non-power users out of trouble in their surfing habits while impacting their use negligibly.
I have taught them that if a site is blocked or doesn't work properly, it's most likely not a site they want to use, but that there is also the possibility of turning it off using the on/off button (which they should use very judiciously). In this mode it is not too different from setting up a Hosts file, but they can understand better what's going on and how to turn it off if needed.
Most people want to block annoyances like advertising or tracking without breaking the pages they want to visit. What you're suggesting does the opposite.
Because fuck them. Recaptcha in particular is abusive to non-Chrome users with their "slow fade" tiles, which are specifically engineered to frustrate real humans (bots do not experience frustration, and if it were simply a matter of slowing down bots they would have just added a timer, not spent five seconds animating a tile with fade transitions).
Fuck any site that requires this hostile bullshit. 9 times out of 10 when I see recaptcha that site is dead to me. Very few sites are worth tolerating that sort of abuse from.
I get the same slow animations with Chrome. I don't know what the point of it is, since it's apparently trivially bypassable, but I don't think the goal is to annoy people.
Also, the web would be much more annoying to use without captchas. (Not necessarily recaptcha, but just the concept in general.) If you've ever been an administrator of a site that's prone to spam, it's usually one of the only effective options. Other trade-offs would generally involve blocking huge ranges of potential users, with tons of false positives, or laborious manual approval which isn't feasible past a certain scale if it's just you or a few people.
I get tons of recaptcha on Google search, basically every time I open a Firefox private window; surprisingly, it never happens with Chrome. Google is doing many anti-competitive things with Chrome.
> Also, the web would be much more annoying to use without captchas. (Not necessarily recaptcha, but just the concept in general.)
This is a non sequitur; we're talking about Google's abusive faux-captcha (which is not actually recaptcha; that's the two-word OCR challenge captcha they replaced with said faux-captcha), not about any actual captcha or captchas in general.
Sorry, you're right, I think my error was due to the start of the parent:
>Because fuck them. Recaptcha in particular
I think I read it at that moment as "because fuck [captchas]. Recaptcha in particular". But they meant Cloudflare and Recaptcha.
I will say, as annoying as Recaptcha is, I find hCaptcha a lot more annoying, difficult, and time-consuming. (Cloudflare recently switched from Recaptcha to hCaptcha.)
I failed 4 "select the motorcycles" challenges yesterday after selecting about 7-8 of 18 images per try. So that's minutes spent clicking 28-32 out of 72 squares, and I failed every time, because I don't know much about bikes/vehicles and they mixed in regular bicycles and other semi-motorized bikes (which were all wrong answers), and many of the images were extreme close-ups of possible axles or handlebars with no clear shapes, and others were just generally blurry, unclear photos. It makes Recaptcha's ultra-slow fade-ins seem like bliss. I got the fifth one when they switched from motorcycles to something else, but that one wasn't easy, either.
> as "because fuck [captchas]. Recaptcha in particular"
Ah, that makes more sense, and now I'm not sure that wasn't what they meant (although it seems unlikely because fuck Cloudflare).
I'm not familiar with hCaptcha, but what I've heard (including from you just now) suggests that it, like Google 'captcha', is also a javascript-using non-captcha, in which case fuck them too.
Any pointers on how to do this? I run into problems with online purchases, and find it easier to switch to Chrome than to try and authorise various sites on the fly.
If you want to replicate that kind of configuration in a way that is as painless as possible, you just have to go into the extension options. Head to the "My rules" tab and you will find a rule, towards the top of the rule list, that says:

    * * * block

This is the default block-everything rule. If you switch it to:

    * * * allow

everything will be allowed by default (except the blacklisted domains, which override this).
Then in the "Assets" tab you can configure your blacklists, I can recommend Steven Black's lists. He curates and consolidates several of the most famous ones:
It's fairly easy to determine what's breaking things. With something like online purchases, one of the domains will be for a payment processor that you've probably heard of, accept things from them and it will likely work.
More generally, you can often find a domain that calls itself a CDN, and those are usually needed. And sadly, if the site doesn't seem to work at all, it probably needs Google.
Oh, and it is basically never something in dark red.
Thank you Gorhill. For such great resources. Your extensions are the only reason I use FF on mobile and desktop.
uMatrix helped me realize how many 3rd-party resources are crap. Actual crap. Completely unnecessary. It also helped me get familiar with new 3rd-party crap that pops up on the internet.
I'll use uMatrix 1.4.0 as long as it works. Many thanks
In my experience uMatrix and uBO work better in Firefox compared to Chromium. You don't have to completely reload the whole page to see what's blocked or hidden. They can also block the sneakiest trackers that hide behind other domains. Not possible in Chromium.
Chromium mobile doesn't even have addons because they are shit scared of adblockers.
"Chromium mobile doesn't even have addons because they are shit scared of adblockers."
Paradoxically, I'm kind of half-happy to hear this.
It means that finally a large number of users are using adblockers.
For many years the standard theme of many threads on adblockers was that companies like Google didn't care about them because too small a percentage of their users used them for it to matter.
Now finally there are enough adblocker users for it to hurt their bottom line, and that means that there are more people than ever who clearly just don't want to see ads.
That gives me hope that there will some day be anti-advertising legislation, and that we might not even need adblockers... some day... some day...
People dreamed of an Internet where sharing of information was free (as in freedom). Then DRM happened. And even if DRM cannot be 100% reliable, being broken by design, it's reliable enough, plus in the US at least it's a felony to break it. And as years go by, we see more DRM, not less. This happens, because the practice is normalized, and because small inconveniences are taken care of (usually by monopolies winning the market—e.g. you stop complaining that alternative e-book readers don't work, when everybody is using the Kindle or the Audible apps).
Similarly, for the open web — the action has been moving to mobile devices. A majority of people now consume content via mobile devices. So what do you see? More websites? Or more apps? And for all the ad-blocking happening at the DNS level (e.g. Pi-hole), how long do you think it will be before apps start doing DNS over HTTPS on their own, bypassing the OS's stack?
This is a whack-a-mole game, and the big publishers have enough resources to push for both technical and legal changes to outlaw ad-blocking. I'm actually surprised that content blockers have remained legal thus far. But the writing is on the wall IMO.
---
There's also another side of this coin. I see more and more people on HN complaining that articles are submitted from publications that are setting up paywalls.
People also hate paying for content. And even those who pay for content often don't recognize what an incredible privilege it is to be able to afford it.
Either content is monetized somehow, or the only content that we get will be content created by hobbyists, in their spare time, for free, while working a regular job.
Well I for one don't want a world in which the poor don't get access to online resources, or a world in which people can't make ends meet doing what they love.
Firstly, I don't think any of us anticipated preferential attachment, even despite the popularity of "six degrees of separation" and other graph-related notions. Clay Shirky's essay about Power Laws was my first exposure to the idea. Here's Kottke's meta entry: http://www.kottke.org/03/02/weblogs-and-power-laws
Everyone complains about how "the web" we got is broken. It wasn't until very recently that I understood that Ted Nelson's Xanadu vision, an often imagined alternative timeline perfect web, requires centralization. Um, is this really what we want? Because that's what we're getting. Incrementally, fitfully, inevitably. Your warnings about DRM times infinity.
Also the libertarians, anarchists, technophiles behind "the web" thought we'd have micropayments. Instead, we got advertisements and freemium. I don't know if micropayments, or prepaid wallets, or subscriptions, would be less toxic. But it couldn't be any worse.
uBO and uMatrix on Firefox are more capable. For example, uBO on Firefox can block trackers masquerading as first-party requests.
Also Google is going ahead with deprecating the necessary APIs in Manifest v3, going with a Safari-like model for content blocking, which is far less capable. Soon uBO, uMatrix, Privacy Badger won't be possible at all on top of Chrome.
> Anyway, as it is, I've archived uMatrix's repo, I can't and won't be spending any more time on this project, and neither on all such issues [linking to all issues closed as invalid].
This makes me realize that Github’s domain model (where “issues” exist as objects associated with—but outside of—the repo) is kind of hostile to forking. The original author wants to close issues not because they’ve been fixed in the codebase, but because he wants to not solve them. To get those same issues “back”, a fork developer would have to:
1. Copy-and-paste the repo, rather than using Github’s forking mechanism, because Github “forks” don’t have their own issues/PRs (being something more equivalent to multi-branch workdir collections for an upstream repo);
2. Copy all the issues over from the origin repo (manually, or by writing a script against Github’s API/CLI);
3. Sadly, likely lose all the original conversation on those issues.
These problems would be obviated if issues were just data files committed inside the repo — with any branch that contains the open issue file meaning the issue pertains within that branch; and any branch that closes the issue meaning that the issue is solved as of that commit. A fork developer could just fork the repo in the traditional fashion, and end up with a fork of all the issues alongside.
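Picture something as simple as a directory of markdown files (the names here are made up):

    issues/
      0142-csp-reports-not-blocked.md    # present on a branch = open on that branch
      0157-android-ui-overflow.md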
Does any git hosting service/software handle issues in this fashion, i.e. as a layer of web-chrome and backend indexing over files committed to branches of the repo (where you’d always have to be looking at the issues as they exist within a particular branch)?
For that matter, does any git hosting service have a sane high-level set of workflows for “forking” in the sense of creating a competing (or replacement) maintained-upstream-repo for people to contribute to?
> because Github “forks” don’t have their own issues/PRs
You can enable issues on a forked repo, and any forked repo may receive PRs.
Still, a manual copy is preferable, as deleting a repo would also delete its forks created through the GitHub interface (at least that was happening some time ago).
My understanding was that when a repo is deleted one of its forks will be chosen to become the root instead; however if the parent repo is taken down (via DMCA etc.) then all its forks are also removed.
Yes. I had a similar thing happen a few months ago when I made a repo private - the first public fork became the upstream for all the other public forks.
>You can enable issues on forked repo, and any forked repo may receive PRs.
Their point was that issues in the new repo are completely separate from issues in the old repo. A new issue created in the new repo will be issue 1, and to browse the old repo's issues you have to navigate to the old repo. If the old repo is deleted all those issues will also be deleted and history will be lost.
That's why they mentioned the workaround of using the Github API to export the old repo's issues and re-import them into the new one, which would not be needed if issues were git objects and thus trivially copied into all forks.
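The export half is only a few API calls; a rough sketch against GitHub's REST API (repo names and token are placeholders, and comments/labels would still be lost):

    // Copy issues from the old repo to the new one via the GitHub REST API.
    // OLD, NEW and GITHUB_TOKEN are placeholders; pagination past 100 issues is omitted.
    const OLD = "someone/old-repo";
    const NEW = "someone/new-fork";
    const TOKEN = process.env.GITHUB_TOKEN;

    async function copyIssues(): Promise<void> {
      const res = await fetch(
        `https://api.github.com/repos/${OLD}/issues?state=all&per_page=100`,
        { headers: { Authorization: `token ${TOKEN}` } }
      );
      for (const issue of await res.json()) {
        if (issue.pull_request) continue; // this endpoint also returns PRs
        await fetch(`https://api.github.com/repos/${NEW}/issues`, {
          method: "POST",
          headers: { Authorization: `token ${TOKEN}`, "Content-Type": "application/json" },
          body: JSON.stringify({
            title: issue.title,
            body: `Imported from ${issue.html_url}\n\n${issue.body ?? ""}`,
          }),
        });
      }
    }

    copyIssues();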
> To get those same issues “back”, a fork developer would have to:
1. Agree with the original author that a fork/take-over is the right thing to do.
2. Create a GitHub organisation for the project, where rights and repos and everything can be re-allocated/delegated as needed.
3. Make original author transfer his repo to this organisation.
4. Done. No more steps.
It also includes magic redirects for all requests to the old repo, including issues and also git requests, so downstream projects won't even have to know.
Not so simple if the original developer is dead, MIA, or uncooperative.
I think this is a clear deficiency of git and many other VCSs, which fossil avoids. It's clear that bug reports and commits will frequently cross-reference each other, so they should be tracked in the same system. Git only implements one half of the puzzle, leaving the other half up to others which ultimately facilitates vendor lockin.
The last release was at the end of February (a beta; the last non-beta was in September last year). The last commit was in April.
Perhaps the activity was slow, I guess you could call it "maintenance mode," but I've been using it all this time and uMatrix works fine in its current state, so all it means is that there were no new features being added.
It looks like the immediate reason for the repository being archived was somebody opening one issue too many.
That is actually a quite hostile move for OSS, which has become common lately.
Unless you have trademarked the name, you don't get to reserve it for a rainy day; anyone should be able to pick up the torch and continue the project without having to start from scratch under a new, unknown name.
gorhill stopped doing uBlock and passed it to others, who subsequently took it in a direction he didn’t like, so that he resumed maintenance under the name uBlock Origin (since the name “uBlock” had been transferred).
So this time when he stops maintaining a project, he’s avoiding the same thing happening.
The wiki link somewhat explains the situation, but to reiterate it for HN: gorhill no longer wanted to work on uBlock full time, so he transferred the project to chrisaljoudi, who immediately registered ublock.org in order to solicit donations under the valuable uBlock branding. Development on uBlock all but stopped and the project died; some 3 years later chrisaljoudi sold the project to AdBlock, who are essentially an advertising company. Meanwhile gorhill continued development at a slower pace on his personal fork, uBlock Origin, which is still maintained to this day.
So if "squatting" the name of your own OSS project is considered "user hostile", then let this be a lesson to people considering giving up their project: the person you give it to may abuse your trust and the trust of the community in order to further their own agenda. In this situation it was all just pretty harmless petty drama, but it might not always work out so well.
Random Github projects aren't organisations with funding and staff with rules and responsibilities to keep the project running; it's okay to archive the project and let the community decide what to do. Giving the project to the first person who asks may end up achieving the same thing as archiving it, or it may come back to bite you in the arse, damaging your reputation in the process.
@gorhill explains this in a Reddit post from a month ago:
> I will never hand over development to whoever, I had my lesson in the past -- I wouldn't like that someone would turn the project into something I never intended it to become (monetization, feature bloat, etc.). At most I would archive the project and whoever is free to fork under a new name. For now I resisted doing this, so people will have to be patient for new stable release.
I disagree. Trust is earned by a name. Someone new picking up the project should earn that trust again. There should be fair competition between forks - not just the first to claim the name wins.
Security software (and this is security software) lives and dies on reputation. It makes sense to avoid a situation where some rando could fork it, patch it with rubbish, and just say “this is the updated version!”
I believe that because the last commit was released under the GPL, this is actually not legally binding, just a request (which IMO should be honored). The GPL only requires that "The work must carry prominent notices stating that you modified it". The only legal way to prevent this would be if "uMatrix" were trademarked, and I don't believe it was.
Marks used in trade are automatically protected, just by their commercial use. uMatrix has been commercially used as a name, so it is a trade mark (nb: commercially here does not necessarily mean money is involved).
Trademarks don't need to be registered to be enforceable. Registration makes things easier for everyone by stating the registrant's intended scope of the trademark, and it makes a few things easier for the owner. However, the enforceability of a trademark rests on its awareness by customers, active use by the owner, and active defence of the trademark by its owner. uMatrix certainly is a trade mark; however, the two latter criteria are probably not met.
I'm also not a lawyer, just interested in this stuff. This is not legal advice. And of course the details will be wildly different in different parts of the world.
> uMatrix has been commercially used as a name, so it is a trade mark (nb: commercially here does not necessarily mean money is involved).
Again, I don't know anything, but I'm surprised that uMatrix would count as having been "commercially" used. And as you point out, these two criteria probably wouldn't be met if the project becomes inactive:
> active use by the owner, and active defence of the trademark by its owners
uMatrix is without a doubt used as a commercial trade mark. Again, commercial doesn't mean money is involved.
It's also still used by the owner, as it's still listed in several extension stores.
Trademark protection also doesn't end overnight. Trademarks are generally protected for a few years after the owner ceases to use them. Also, one should keep in mind that trademarks aren't primarily an intellectual property concept like patents and copyrights; their main purpose is to protect consumers from copycats and fake products.
Trade marks can be registered, but need not be. The use of the trade mark in normal commerce is sufficient to establish rights. However, the registration of the trade mark can confer additional benefits. This is the general rule in most common law jurisdictions.
As uMatrix was distributed to users through the extensions platform, this is already sufficient to classify as commercial use.
There is quite old precedent in OSS for creating a brand new project rather than taking the name of the old one. Whether it's because the old one was abandoned, wouldn't incorporate changes, needed functionality had to break the ABI, legal/regulatory issues, copyright issues, etc, forking a project with a new name is commonplace in OSS. Probably most commonly as "-ng" (Next Generation) project names.
You already have the right to copy all of someone's years of hard work, add a tiny patch, and pass it off as your own project with a new name. The very least you can do is give respect to the original author by allowing them to keep their original name.
Then there's potential for libel. Let's say you fork a project, keep the original name, and then pepper the project with Nazi propaganda. The original author is trying to get a job, and his resume has the name of the original project. A prospective employer searches the name and finds the new project (with the old name) full of hate speech. If he gave away the old name, a libel suit to change the new project's name may fail, and his reputation might be forever tarnished.
I wouldn't say forever. While it is not great that somebody would do this, it's not like it isn't possible to preemptively mention that you have nothing to do with the fork and that you are only responsible for the original in your resume.
Trademarks and copyrights are automatic, you don't need to do anything to have them (although registering them makes it cheaper to prove you have them).
Obviously "anyone should be able to pick up the torch" has the issue that the "anyone" may well be a malicious person who is seeking to defraud the users for monetary gain.
Yes, that means that if someone creates a new uMatrix and the original author doesn't do anything about it, that other person can keep the name. We haven't gotten to that point yet.
Will it still work? If not, is there a trustworthy replacement?
I don't want an "ad blocker" with blocking lists etc. I just want to see the page I navigated to. And then allow it to load additional resources as I see fit.
If uMatrix goes out of existence, then that would be the biggest loss due to discontinued software in my lifetime.
Firefox add-ons can be installed from third-party sources as well, and in the case of uMatrix it's worth doing it anyway, since the latest version (1.4.1b6) is on GitHub only:
AFAICT uBO even in advanced mode doesn't differentiate between the kinds of requests that can be filtered, so filtering is only per domain and filters every kind of request for the domain equally. uM on the other hand differentiates between scripts, CSS, images, XHR, media and frames, and allows you to filter them individually.
But most importantly, uM also allows you to filter cookies with the same fidelity, which is the number one thing I would miss if I had to rely solely on uBO, because it means I can default to blocking even first-party cookies from sites I don't want leaving cookies on my machine. FF by itself gets close, by letting me set a policy that says "block all cookies except for cookies from these domains", but that doesn't let me filter which site is allowed to access those cookies.
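Concretely, that's a one-line default plus per-site exceptions (the domain is just an example):

    * * cookie block
    github.com github.com cookie allow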
Frankly, I find uBO redundant if one has uM installed but for two things: uBO can use the usual content-blocker lists (I personally don't need them because my router's DNS server does filtering using those same lists already, but it's useful for people without such a setup), and uBO can block remote fonts whereas uM can't. It would be great if uM's kind-based filtering was merged into uBO and remote fonts were kept as just another kind of request that can be filtered, but I don't know what gorhill plans to do.
Check out "Cookie AutoDelete" for cookie management. It automatically deletes all cookies (not whitelisted) when you close a website. I've used it for a while and it is pretty nice.
Looks like the maintainer just didn't have time and there weren't enough people in the community willing to step up and do issue triage or contribute code.
> What would actually help is that people help to completely investigate existing issues instead of keep asking me to add yet more features. Turns out people willing to step in the code to investigate and pinpoint exactly where is an issue (or that there is no issue) is incredibly rare.
This is sad for me as a user, but I can fully understand his unwillingness to further engage with requests from entitled users. I myself had bad experiences with releasing open source, too.
That said, this is clearly a useful tool, and I wouldn't be surprised if the user base was 10,000+, which means that if you made it a commercial product at $3 monthly, the revenue (after attrition) should be enough to pay for at least one part-time employee to do the maintenance.
I would also expect that releasing this as a paid product, as opposed to open source, will actually reduce entitlement by users. Or at the very least, you can always just issue a refund and be done with it.
I would still hope for source code insight to make it transparent how this tool works. But that is not necessarily a hindrance to productizing it. Unreal Engine 4 is a commercial success despite shipping with full source code.
> I would also expect that releasing this as a paid product, as opposed to open source, will actually reduce entitlement by users.
That is the very opposite of what I’d expect and have observed personally over the years. The folks who’ve paid a small amount are virtually always the most demanding and refunding them and asking them politely to go away just fuels their indignation further.
I've been selling a $9 Windows app for 5 years now and never had a case where someone continued to contact support after a refund. We did have people who had a system straight from hell where nothing worked as planned, but then again the users of those systems seemed to be aware that it's their computer and not my app.
Many notable or even vital software projects are literally maintained by one person, sometimes even in companies. I am certain that most users don’t “get” this about software. And from experience, most users don’t contribute anything at all, not even the most basic bug reports. They do however complain.
The best way to “mourn” a lost software project is to ask yourself what you will do to maintain the software ecosystem. How many things do you use for free? How many things have bugs you never bothered to tell anyone about? Has each of you contributed something (even a short E-mail thank-you) to some software project?
For those who are wondering about the difference between uBlock Origin and uMatrix, a quick search turned up this forum post[0]:
> uMatrix is a blocker(cookie,css,image,plugin,script,XHR,frame, and other) you can control what you block and what you want to allow(like uBlock Origin dynamic filtering but way more flexible and can be way more strict) uMatrix just blocks ads through the use of host files, uBlock Origin blocks them more deeper per se then uMatrix because of cosmetic and patteren-based filtering like adblock plus. I use both of them together just uncheck the malware domains in uBlock and peter Lowe's and the host files. Also you have more privacy and security when running uMatrix because of the switches(user agent spoofing and referrer spoofing, clearing blocked cookies, blocking hyperlink auditing attempts etc.) and also if you run uBlock it gets whatever ads uMatrix does not get from its blocking) Look at my sig to see how I run them. If you need help just PM me.:thumb::):cool:
I personally run uBlock Origin and have been super happy with it; I never even think about it these days. If I was supposed to switch to uMatrix at some point (I know uBlock and uBlock Origin are different now and Origin is preferred), I must have missed it.
uMatrix is awesome. I wish there were more tools like this -- just advanced enough to do some real heavy lifting yet still quick and intuitive after even a little investment. The browsers brought SSL awareness with the padlock, but most users are still woefully unaware of just how many websites they hit when they load any page. It's insanity.
I hope this isn't due to browser vendors making things difficult, but it wouldn't surprise me. Since the concerns are similar, it would be great if there was a way to marry the two. uBlock - advanced interface mode or something. Just a thought, not a feature request.
Thank you Gorhill for all your work. Sad to see it go, I actually can't fathom how I'll surf the web without it.
> just advanced enough to do some real heavy lifting yet still quick and intuitive after even a little investment.
This describes uMatrix perfectly. I didn't understand a single thing after installing it, but one day, I spent 20 mins reading the wiki on Github and then understood what to use each tool for.
uBlock Origin in hard mode[0] (plus I set it to not run any js by default on top of that) is, while not exactly a replacement in terms of functionality, a really good alternative. It's all the granularity that most users could really need, I think.
It's not a complete replacement, though. For example, a couple of months ago we had a discussion about websites scanning local ports, prompted by [1]. This can in fact be done without Javascript, in which case uMatrix would still protect you, whereas NoScript would not.
The <img> won't be requested until the stylesheet has failed to load, which takes a different amount of time depending on whether there was something listening on that port, or not.
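The gist of it, as I understood that thread, is markup roughly like this (hosts and ports made up):

    <!-- the stylesheet request probes a local port; no JS involved -->
    <link rel="stylesheet" href="http://127.0.0.1:8080/">
    <!-- the delay before this image gets fetched leaks whether the port answered -->
    <img src="https://tracker.example/report?port=8080">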
uMatrix won't allow the request to the local machine to go through.
Mozilla has modified their extensions API pretty regularly (with major changes every few years at least, recently), and they're also still in the process of developing the API for the Android browser, which is likely to remain incomplete and different from the desktop API for the foreseeable future. Granted, maybe not a lot of people use uMatrix on their phone, but both of these seem like valid reasons to worry.
Wait... wasn't a good chunk of the "webextension" transition meant to allow better interoperability?
After some time, it looks to me as if no real change has happened. The webextension model is still too weak in several areas to allow for some old extensions to function properly (keyboard handling is a major, major PITA), and at the same time a lot of work is still being spent to support cross-browser (and to a lesser extent, cross-version) functionality.
Forward-compatibility on the same browser seems to be the only good point, until you realize it's also how chrome can pull the plug on request filters and kill extensions on a whim anyway.
I didn't even know you needed mozilla's blessing for extensions on android. Not so different than Chrome here, Mozilla. Not at all. First, the useless signing requirement, then this? :(
You can't use it currently if you have a release version of Firefox after 68, yes. The API is buggy and in fact quite a few extensions don't work, even if you force-install them. It's still unclear if they will ever whitelist stuff that's outside of their "recommended extensions" program, and presumably the best chance it would have of getting whitelisted is if it were actively maintained and bugs encountered with the new FFA could be worked on in coordination with the developer.
People want to know about alternatives for the inevitable day when the extension API changes and it no longer works, so they won't get caught with their pants down.
I seem to remember this movie. In the following scene, some spammer forks the code and claims to be the new maintainer, but threads some horrible spammy behaviour into the code. The spammy fork (under the original name) gains significant popularity. Final scene. In disgust, the valiant OP returns with a new fork, possibly called "uMatrix Origin". Curtains and lights
He got tired of kids complaining about issues without contributing code. Just filter list issues, which are not really his fault but the list maintainers'.
Gorhill has clearly stated: if you want to help, fix bugs.
He doesn't want to do this as a job, so if you want to pay someone to help instead of helping directly, then find someone who wants to be paid to fix bugs in uBO.
uMatrix has been the first app where I can see exactly what domains are being used and, over time, build a whitelist that basically works seamlessly while blocking the analytics stuff. I'm just hoping browser-internal security changes in the future don't render it broken.
uMatrix has to have been the most useful, all-rounded, intelligent browser extension I've ever used. I see it as a gold standard. It truly extends the browser, rather than use it as a platform to deploy « apps ».
There's a "save" button you have to click to persist your changes after you've toggled the green/red cells of the media-types-per-domain table to suit what you want to block and what you want to allow. I gave up on uMatrix once before figuring that out.
A huge loss but thank you gorhill for developing this awesome extension! Like many others I'll probably keep using it while it still works.
On another note, how does uMatrix even work internally? I guess the bulk of its functionality is based on the webRequest API and I think it uses some kind of CSP hack for inline scripts and workers? (And is it only my perception or does uMatrix have to resort to a lot of hacky workarounds to implement some of its features?)
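My rough mental model of the webRequest part is a blocking listener along these lines (matrixAllows standing in for the real rule lookup; the CSP trick would presumably be separate header rewriting in onHeadersReceived, and I have no idea if this is how it's actually structured):

    // Minimal sketch of matrix-style blocking with the (pre-Manifest-v3) webRequest API.
    declare const browser: any; // WebExtension global in Firefox; `chrome` on Chromium

    function matrixAllows(srcHostname: string, dstUrl: string, type: string): boolean {
      return false; // placeholder: consult the source/destination/type matrix here
    }

    browser.webRequest.onBeforeRequest.addListener(
      (details: any) => {
        const src = new URL(details.documentUrl ?? details.url).hostname;
        return matrixAllows(src, details.url, details.type) ? {} : { cancel: true };
      },
      { urls: ["<all_urls>"] },
      ["blocking"] // the synchronous blocking that Manifest v3 removes
    );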
I am sad to hear this. I have been using uMatrix now for quite some time, it has always been one of the extensions I install directly after setting up a new browser (together with uBlock Origin).
If the author, Raymond Hill, ends up reading this: thank you Sir, for all the (probably unpaid!) effort you have been putting into this extension for years. It's certainly an inspiration to actively contribute to the open source community.
This is so critical to my internet use, I plan to maintain a fork. No new features, no distribution, just minimum changes to make it run on FF desktop.
I don't have the time to do it justice, and I expect others will do a better job of running a project, but it's something I can't work without. Fingers crossed it's just version bumps when new FF versions come out.
uMatrix is one of the first add-ons I've installed on my browsers for many years. It exposes so much crap on the web that browsing the www without uMatrix feels unsafe.
The fine-grained controls in uMatrix are so powerful and quite intuitive, especially once you get oriented. You can see (and block) websites trying to load in crap asynchronously, see the problematic iframe that's loading in scripts, and see all the trackers and even the cloudflare endpoints that may be responsible for bringing in malicious content.
Gorhill's uBo project is nice, but it's geared towards simplicity, and it's too simple even with the advanced interface, imo.
Although Gorhill never accepted donations, someone forking uMatrix will hopefully use something like github.com/sponsors to ensure it's sustainable.
Can someone clarify for a layman what this means in practice? I still want to use uMatrix on Firefox. Does this mean that over time, uMatrix will eventually stop working?
Separate question: is there somewhere we can read from the author about this decision to archive uMatrix?
uMatrix extends the notion of uBlock (DNS-based content blocking) to a matrix of domains, subdomains, and sites (the vertical column) against specific Web capabilities: content, images, CSS, JS, XHR, and other elements. It affords fine-grained control over what sites are permitted to do in your browser.
Blocks the rest of the nonsense? :) I see it as a very configurable privacy tool for powerusers. It also clearly shows the parts of the page and where they were (not) loaded from.
He probably could have saved time and resources by incorporating uMatrix's most granular blocking functions into uBlock Origin, say as an "expert mode" users had to enable explicitly. I mean, many of us probably have both installed; since they do essentially the same things, although at different levels, merging some of uMatrix's functionality into uBO doesn't seem that absurd to me.
Anyway, big thanks to gorhill for putting great effort in software that today is absolutely necessary to surf the web, and that makes it even more sad to see uMatrix go.
Can you explain the advantage of installing both uBlock Origin and uMatrix? I never tried that, thinking that there was too much functional overlap and guessing that I would have to check and uncheck numerous lists to keep one from stepping on the other. Or maybe I'm just lazy and didn't feel like diving deeply into how to make them play nice together.
They don't interfere with each other much, if something is broken 90% of the time it's uMatrix and 10% of the time it's both. From my understanding, uBlock Origin will stop a few things uMatrix misses, but mostly I already had uBlock set up and have never seen a reason to ditch it.
Dear HN community. Pretty, pretty please with sugar on the top, keep maintaining this project. It's absolutely essential for a sane browsing of contemporary web.
I did the same thing twice, then actually spent ten minutes reading the wiki. It's very simple to use, and I doubt a new UI could perform the same functions any easier.
I felt the same way before I realized the flexibility of the domain scoping UI and that I could allow certain things (e.g. reCAPTCHA and the million Google scripts it depends on) on a global basis instead of on a tedious per-domain basis.
uMatrix isn't for casual users. It's for hard-core "I want complete control of what's being sent to my browser" users. uBlock Origin is more than enough for casual users who won't put in the time to tame uMatrix.