I don't know, it seems to me like advice from a time before security was a priority for browser makers, and high-quality ad blockers existed. At this point, I really don't see the value.
But I must say I hate GDPR banners and this could convert me.
Inspect element -> remove
Browsers should really add "remove element" directly to the context menu.
I swear I've had ones that pop up as I move the mouse to close the tab.
Not to mention that a host of vulnerabilities were image-related a few years back (one of the original rootkits exploited a TGA bug).
> uBlock Origin
As a lover of old image formats and the security issues they can cause* this sounds fascinating, but some quick google searches don’t seem to surface what you are referencing. Can you share any more details?
* I once fell into discovering a memory disclosure flaw with Firefox and XBM images
Brave's browser claims a speedup over Adblock Plus, but it was inspired by uBO, so the performance is fairly similar; the difference is that it's baked into the browser instead of being an extension.
> We therefore rebuilt our ad-blocker taking inspiration from uBlock Origin and Ghostery’s ad-blocker approach.
Jesus, why does everyone these days automatically assume that everyone else is using Chrome or Chromium? It's almost as crazy as calling Windows a "PC".
Also, just so you know, Brave isn't "written" in Rust alone; it is a big piece of software with a lot of parts, including but not limited to a rendering engine, a JS VM and a WASM engine.
At most (unconfirmed), the Rust part would be the glue that connects them together, and I doubt that's where the bottleneck is for most browsers.
>The new algorithm with optimised set of rules is 69x faster on average than the current engine.
IME, 9 times out of 10, web developers are using JS for unnecessary reasons. The user-configurable settings of popular browsers make it easy to designate the small number of sites that actually require JS and keep JS disabled for all other sites. They anticipate that the user will not have one default JS policy for all websites. In other words, these web browsers do not expect that all users should just leave JS enabled or disabled for every website; they acknowledge there will be situations where it should be disabled.
However, as we all know, most users probably never change settings. It is doubtful that it is a coincidence that all these browsers have JS enabled by default.
There is nothing inherently wrong with the use of JS. It is nice to have a built-in interpreter in a web browser for certain uses. For example, it makes web-based commerce much easier. However, I believe the largest use of JS today is to support the internet ad industry. Without automatic execution of code by the browser, with no user review, approval or even interaction, I do not believe the internet ad "industry" would exist as we know it.
I believe this not because I think having a JS or other interpreter is technically necessary, but because these companies have become wholly reliant upon it.
That's why disabling JS stops a remarkable amount of ads and tracking.
I browse the web with JS disabled by default. If I encounter a site that has trouble with that, I enable it for that site until I can determine if it is worth leaving it enabled, which usually means at some point I'll be back there again and need it on.
For the most part, it is a superior experience to what I was seeing before with just an ad blocker. The most noticeable thing about it is probably how many images simply don't load because developers lean on JS for loading and scaling them.
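For anyone wondering why the images vanish: the usual pattern is something roughly like this (a hedged sketch, untested; the data-src attribute and selector are just the common convention, not any specific site's code):

    // The real URL lives in data-src; JS copies it into src only when the
    // image scrolls into view, so with JS disabled nothing is ever fetched.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.src = img.dataset.src;
          observer.unobserve(img);
        }
      }
    });
    document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));

A <noscript> fallback or the native loading="lazy" attribute would avoid the blank images, but many sites don't bother.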
> It's almost equivalent to downloading and silently executing untrusted code on your machine.
No it's not. The code is run in a VM, which is run in a browser. So, the code is limited in doing things to the browser, which itself is limited in what it can do to your computer (files and whatnot). So it's not at all like running untrusted code "on your machine".
It's virtualized (in the browser) such that all the code will run almost the same on different browsers and chipsets. Again, the browser code is what keeps the computer safe from any code it runs, including CSS code or other VMs it may use, like Java or Flash. Also the OS keeps the computer safe from the browser (or at least it should).
The "security features" of popular browsers will never protect the user from the tentacles of internet advertising.
Companies/organizations that author popular web browsers generally rely on the success of internet advertising in order to continue as going concerns; as such, they are obviously not focused on treating internet advertising and the collection of user data as a "security threat".
Use a tracking pixel (e.g. an image) to make further requests, and the cookie will be included in the request.
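To make that concrete: a tracking pixel is just an image request, so JS isn't even strictly required. A rough sketch (tracker.example is a placeholder domain, and whether cookies actually accompany it depends on the browser's third-party-cookie settings):

    // Creating the image is enough; the request fires as soon as src is set,
    // and the browser attaches any cookies it already holds for that domain.
    const pixel = new Image(1, 1);
    pixel.src = 'https://tracker.example/pixel.gif?event=pageview&ref=' +
        encodeURIComponent(document.referrer);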
This is not about how "you" do things; it is more about how it should be done! JS almost never provides what I want when I browse. I expect to get some information! I am not at the circus looking for adventures!
The web is just a connection to other people, not a tool for others to bully you just because they think being "smart" about "the code" they wrote is brilliant!
 I once enabled JS on a site that claimed it would provide "a better experience", and was bombarded with a bunch of ads and other irritations that just made me turn it off again. It was not a "better experience".
I forget about the back button. By default, I always open links in new tabs, which means the back button has no history. Also, SPAs have hijacked the back button or just broken it completely, so I've been trained to not count on it behaving as expected. There's also the mobile experience, where getting to the back button itself is often painful after the UI hides navigation from you.
Otherwise, I am 100% in agreement. If a page is so user-hostile as to not offer a friendly non-JS page, the tab gets closed.
I really wish, even if it was an optional setting, browsers would copy the history of the source tab when you do that. If I hit back in a tab I opened that way, I still want "where I got here from", not "stay here" or "new tab page" or especially "close the tab" (thanks a lot, Android Chrome).
(I occasionally used it out of curiosity, but found it too tedious in the long term. I have settled on CookieAutoDelete, which seems to address most tracking. Not many seem to run a completely server-based fingerprint database.)
A beta that you can download from the GitHub page. I assume the latest stable version also works fine, but the beta had a few additional bugfixes and features, and I haven't encountered any instability.
Things like this are seriously creepy: https://www.crazyegg.com/blog/mouse-recorder/
Legitimate tools for measuring effectiveness of pages with little in the way of nefarious tracking afaics. Also very useful for replaying user errors/problems.
JS doesn't have any magic to it, location information is opt-in, but your IP is a much better advertising identifier.
Nowadays OSes have protection for this sort of thing. But I'd imagine you could still fingerprint an OS like that. Combine that with TLS, HTTP, etc. specifics and you could narrow it down quite a bit I bet.
Canvas fingerprinting, WebGL fingerprinting, GPU, fonts etc etc etc.
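For anyone unfamiliar with the first one, canvas fingerprinting boils down to something like this (an illustrative sketch only): draw text and shapes, read the result back, and hash it; tiny rendering differences across GPUs, drivers and font stacks make the hash fairly stable per device.

    function canvasFingerprint() {
      const canvas = document.createElement('canvas');
      canvas.width = 200;
      canvas.height = 50;
      const ctx = canvas.getContext('2d');
      ctx.textBaseline = 'top';
      ctx.font = '16px Arial';
      ctx.fillStyle = '#f60';
      ctx.fillRect(10, 10, 100, 30);
      ctx.fillStyle = '#069';
      ctx.fillText('fingerprint', 2, 15);
      // Hash this string (e.g. server-side) to get the identifier.
      return canvas.toDataURL();
    }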
Please, stop arguing, JS is a nightmare for privacy. Period
Most people don't run their own resolvers, so at best you're fingerprinting the DNS server of the ISP.
Can be easily cleared, or mitigated entirely by extensions or the browser (e.g. multi-account containers).
That’s not how it’s tracked commonly. Similar to HTTP caches, you can fingerprint visitors by how quickly a domain request resolves for them. Sure, all of this can be mitigated. But you have to even know what to mitigate. And the fact that even the most fanatical privacy folks aren’t aware of basic timing fingerprints is a good indicator that no one is mitigating it nearly as well as they might think.
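A rough sketch of the kind of timing probe being described (my own illustration, untested; the domain is a placeholder): time how long a tiny resource from a third-party host takes to load and fold that into the fingerprint, since warm DNS/HTTP caches answer measurably faster.

    function probe(url) {
      return new Promise((resolve) => {
        const img = new Image();
        const start = performance.now();
        img.onload = img.onerror = () => resolve(performance.now() - start);
        // The cache-buster defeats the HTTP cache but not DNS resolution,
        // which stays warm if the visitor has touched this host before.
        img.src = url + '?cb=' + Math.random();
      });
    }
    probe('https://cdn.tracker.example/pixel.gif')
      .then((ms) => console.log('loaded in', ms.toFixed(1), 'ms'));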
that's just off the top of my head.
The other rule is that the JS is all hand-written. No frameworks or other dependencies.
A great way to make your browsing better is to disable 3rd party scripts by default and whitelist when needed, but <noscript> fails to work in those conditions.
I combine this with another ad blocker (Wipr) to block everything else.
If someone knows how to achieve the same on Linux for Chrome and Firefox, I'd love to hear it (browser plugins are a bit of a security and stability shitshow, so non-plugin solution would be preferred, all else being equal).
I think Chrome actually is fine although I don't know of a keybinding for it: Don't they still have a toggle right in the site menu (click the icon to the left of the URL to toggle all kinds of these things)?
It's about the best UI I could come up with for this particular knob and the other things adjacent to it.
Generally mitmproxy gives a feeling for what sites the browser talks to. And strace often gives a good feeling for what a Linux binary does. But the browser is too big and complicated to read strace output in most cases.
Open your browser's developer tools, go to the Script/Debugger tab and have at it. As a tool it's just about as obtuse to use as gdb, but you'll see exactly what it does. Chrome dev tools has automatic formatting of the code, and maybe Firefox does too. But you'll be stuck with shitty variable names if they've been mangled, although you could try http://www.jsnice.org/; I had variable luck with it.
It would be interesting to have a browser tool that is like strace and lets you filter by calls, so you can see exactly where window.navigator is being used, for example, or localStorage.setItem. For now the best you can do is search for "navigator", which works but can be minified/hidden away by the coder as well.
Exactly, that's what I meant.
Additionally, you can set breakpoints on event handlers and Chromium has deobfuscation built in. You can usually tell approximately what's going on by stepping through the code and watching the variables in local scope.
Right, so you are describing the implementation of the tool I was looking for. Obviously I don't want to do that manually while tracing a page.
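Until such a tool exists, the closest manual approximation I know of is pasting something like this into the devtools console (a sketch under that assumption, not a real extension; it only catches calls made after you paste it):

    // Log every localStorage.setItem call with a stack trace, strace-style.
    const originalSetItem = Storage.prototype.setItem;
    Storage.prototype.setItem = function (key, value) {
      console.trace('localStorage.setItem', key, value);
      return originalSetItem.call(this, key, value);
    };

    // Likewise, log reads of navigator.userAgent.
    const uaDesc = Object.getOwnPropertyDescriptor(Navigator.prototype, 'userAgent');
    Object.defineProperty(Navigator.prototype, 'userAgent', {
      get() {
        console.trace('navigator.userAgent read');
        return uaDesc.get.call(this);
      },
      configurable: true,
    });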
How can I generate pages with dynamic content easily? Ideally with absolute minimal dependencies.
What part of it do you think is bad for usability or accessibility?
If you want to build something that's nice for humans and machines, look up best practices for this sort of thing - plenty of information is widely available on how to build things in usable and accessible ways (and it's simpler to do it correctly than to use these 'hack'-like workarounds anyway!)
With SSR, you need some component that's aware of every change, and that triggers those re-renders at sensible times (every render takes server resources). This all feels messy, compared to rendering just-in-time on the client-side.
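For contrast, the client-side "just-in-time" version can be as small as this (a sketch; /api/items and #list are placeholders): fetch the data when it's needed and render it directly, with no server-side re-render step to coordinate.

    async function renderItems() {
      const res = await fetch('/api/items');       // placeholder endpoint
      const items = await res.json();
      const list = document.querySelector('#list'); // placeholder element
      list.textContent = '';                        // re-render in place on each call
      for (const item of items) {
        const li = document.createElement('li');
        li.textContent = item.name;
        list.appendChild(li);
      }
    }
    renderItems();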
In other news, possibly the best designed website of 2020: http://www.muskfoundation.org/
> What I think they mean by this is that you shouldn't link to resources on their website to make it seem like they endorse your (product, website, whatever).
and sue anyone who links to them. Hopefully the author will be so grateful for this insight that they won't sue me for reproducing their copyrighted work in this comment.
> FOR A FREE CAR INSURANCE RATE QUOTE THAT COULD SAVE YOU SUBSTANTIAL MONEY WWW.GEICO.COM OR CALL 1-888-395-6349, 24 HOURS A DAY
...on the homepage of a quarter-trillion dollar company, with no other ads.
They would need to have compromised one of the root certificates on your machine to not give you a giant security warning.
In modern browsers there’s not even a button to bypass them (although I know in Chrome you can type "thisisunsafe" on the error page and it will let you bypass it temporarily).
Watch his videos. Check out his articles on A List Apart and in Smashing Magazine, among others. Pay attention, he's very thoughtful and you'll probably learn a lot.
BTW, you probably want to move off of uM given gorhill has abandoned it in favor of uBO. (I converted all my rules to a mix of uBO dynamic rules for JS and static rules for everything else, except for cookies which I still use uM for because uBO can't manage them.)
BTW, the site works as expected with my Linux/Firefox/uMatrix setup... the inline scripts are disabled by default and I see the page content. I'm not sure why GP had issues.
foo.com bar.com css allow
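For reference, my guess at the uBO static-filter equivalent of that uMatrix rule would be something like the following (not the commenter's actual rule; `css` is uBO's alias for `stylesheet`):

    @@||bar.com^$css,domain=foo.com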
I have a "block everything by default" rule at the top that's:
1. Block a bunch of things by default.
2. Block images by replacing them with the built-in 1x1 GIF instead of canceling the request.
3. Disable web workers by setting the CSP worker-src.
4. Override the previous rules by allowing first-party CSS, frames and images. (The @@ means it's an override rule.)
(The fact that my default is to block everything is why the first example I gave above starts with @@ too.)
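To make that concrete, here's a hedged sketch of what such a block-everything set might look like in uBO static-filter syntax (my reconstruction, not the commenter's actual rules; acceptance of fully generic redirect/csp filters can vary by uBO version):

    ! 1. block scripts, XHR, websockets, pings and misc requests by default
    *$script,xhr,websocket,ping,other
    ! 2. block images by redirecting them to the built-in 1x1 GIF
    *$image,redirect=1x1.gif
    ! 3. disable web workers everywhere via an injected CSP
    *$csp=worker-src 'none'
    ! 4. override: allow first-party CSS, frames and images
    @@*$css,frame,image,1p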
Web workers can be allowed on a per-site basis by overriding the csp directive with a reset:
no-scripting: * true
It's annoying to have to move between static and dynamic rules when deciding to enable JS on a site, but I'm not sure there's a better way. Neither static nor dynamic rules individually support everything that uM could do - static rules can't block inline JS nor render `<noscript>` content, and dynamic rules can't block every kind of request.
Static rules are also nice in that you can have empty lines and comments and arbitrary ordering of your rules, so it's easier to group rules in sections based on the domain names, add comments, etc. Dynamic rules however are like uM's rules and are forced to be sorted by domain name with no empty lines or comments.
And it's mostly a big ActiveX control.
Takes max 10 seconds, on any site. Can you do it in less than 10 seconds using that validator?
But my point was that the markup is invalid.
I have no idea why they made the SVG image inline but the CSS style external, though. That same image is used on every page.
* Washington Post no longer has a paywall
* anandtech.com is seemingly unaffected, but tomshardware.com is very different (and less pushy)
* nationalreview.com is broken -- I can only read a few paragraphs in a NR PLUS story, and there's no way to keep scrolling (I can read the article fine in another tab)
* An article I co-wrote, published in a Cambridge University Press journal, is now sans tables and figures, but the console reports no errors or exceptions.
An interesting experiment! Overall, it seems my internet experience is better without JS (but reading an academic article online is way worse).
As someone pointed out, there's a button on uBlock to disable it.
I didn't spend quite as much time on Firefox's preferences, but I didn't find the option. I'm sure it's there somewhere.
Would love to have heard that final talk!
Here is the link to disabling JS for a site in Chromium.
That's literally the only thing it did until I reconfigured my browser to access it. It's a misuse of `<noscript>` and it's completely unnecessarily intruding on how I use my own computer to access the content. I thought that was the kind of thing people here (especially the anti-JS people) frown upon.
My philosophy is: set a good example, make the benefit clear, and communicate with other devs who might not realize the horror show they’re sending down the wire/executing in their users’ browsers. But forcing people to figure out how to disable something increasingly hard to disable before they can even hear you is not good communication.
I only clicked the link because I was hoping there would be a regular site under default config and some special treat with JS disabled. Progressive enhancement. That would have been a clever and compelling execution.
Ad hominem: You're wrong because you're an idiot.
Just an insult: You're an idiot because you're wrong.
Furthermore, concluding that somebody is wrong because they used a logical fallacy is itself a logical fallacy. If I said "2+2=4 because you're an idiot" my reasoning would be fallacious, but to conclude that the answer must therefore not be four is also fallacious.
Calling out your histrionics for what they are is not ad hominem. I’m attacking your statement, not your person.
Further, that’s not some sort of axiomatic law, that’s just a phrase. Even if it was, losers using ad hominem doesn’t mean winners don’t, that’s not how logic works.