Although https://www.rfc-editor.org/rfc/rfc7217 doesn't require a /64 or shorter prefix for its variant of address auto-configuration, Linux, for example, won't do it with anything else.
Depends on the project. I have found Pinephone users quite nice overall as a kernel developer.
Anyway, if your project involves convincing hundreds of maintainers to increase their cognitive/work load in order to include your fancy new foreign, workflow-breaking language in their project, you have to expect pushback.
Yeah, let's just ignore all the wars and genocides that nuclear powers engaged in and supported, all the nuclear powers that have been constantly at war or occupying others since they came into existence, and the millions of people dead and affected.
Nice "peace".
We had 100 years of that kind of peace among major European powers before nuclear weapons. This time we're not even 80 years into the nuclear age, and a nuclear-armed power is already attacking from the east, and from the inside via new media.
I wouldn't call the case for "nuclear age peace" done and settled.
I cannot run my own extension in Firefox by modifying a config file. It's not possible, not even if I keep dishonest actors far away from my Firefox install.
I can, however, murder some trees and poison the environment for all of us by doing pointless multi-hour rebuilds of Firefox for each release and point release, just to have it accept my add-ons.
I've also never seen a reason why I can't at least place my CA into Firefox's /usr/lib/firefox folder or /etc/firefox and have it be respected. Or just place local extensions there and have Firefox not require signatures for them, because there's no way those could be installed accidentally from the web by clicking some link.
And if someone can trick me into modifying /usr/lib, they can just trick me into replacing Firefox completely with their malwared build, so signing will not save me anyway.
The Debian build of Firefox does load extensions from /usr/share/mozilla/extensions, so that it will load the extensions in the Debian webext-* packages. You can even add a symlink there pointing at a dir in your /home so you can load extensions you are developing.
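A sketch of that symlink trick. The extension id and the path under /home are hypothetical; the id has to match the one declared in the extension's manifest.json, and the long GUID directory is Firefox's application id:

```shell
# Hypothetical example: expose a locally developed extension to the
# Debian Firefox build. "myextension@example.org" must match
# browser_specific_settings.gecko.id in ~/dev/my-extension/manifest.json.
# {ec8030f7-...} is Firefox's application id.
sudo ln -s ~/dev/my-extension \
  "/usr/share/mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384}/myextension@example.org"
```

Firefox picks the extension up on the next start; whether it then also enforces signing depends on the build, as discussed below.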
That's because it's an ESR build. The normal build does that, too, but the extensions still have to be signed. It's not a Debian thing.
On an ESR build, though, you can disable signature checks in about:config. Not sure how this fits into the standard Mozilla orthodoxy; remember that a core tenet of the orthodoxy is that users can't be trusted to protect themselves...
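For reference, the pref in question. As far as I know it is only honored on ESR, Developer Edition, and Nightly builds, and silently ignored on regular release builds:

```javascript
// In about:config, or as a line in user.js in the profile directory:
user_pref("xpinstall.signatures.required", false);
```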
So maybe Mozilla cares less about the safety of users who want to use the ESR (extended support release) build. There are far fewer of them than users of the main Firefox build, so their safety is maybe not that important on the grand scale of the 2.5% market share Firefox still holds.
> And if someone can trick me into modifying /usr/lib, they can just trick me into replacing Firefox completely with their malwared build, so signing will not save me anyway.
As you said yourself, that's a much bigger hassle and cost. In other words, it's an effective deterrent. Writing to a user-owned file is a very low bar for allowing privileged code execution in the browser.
A long time ago browsers used to be infested with all kinds of toolbars and extensions automatically installed by third-party software. I for one am glad not to have to worry about that on my computer and on networks I manage or frequent.
Only preview and developer versions can run unsigned add-ons, and both come with their own set of reasons why you shouldn't use them as your daily browser.
And ESR, but that may not be what Linux distros normally ship. It's not in Arch Linux.
There's no hassle-free solution. The only way to run your own code on a normal branded Firefox release is to rely on third-party signed extensions (e.g. Violentmonkey), but that's not really hassle-free either if you have tens of userscripts and multiple browser profiles, and you have to trust some third party not to go rogue. I got pretty terrible malware from the Mozilla add-on store in the past.
I also rely on my hosting provider's DDoS protection and don't use very intrusive protection like Cloudflare.
The only issues I've had to deal with are when someone finds some slow endpoint and manages to overload the server with it. My go-to approach is to optimize it down to a 10-20 ms response time, and to block the source of the traffic if it keeps being too annoying after the optimization.
And this has happened maybe 2-3 times over 20 years of hosting the e-shop.
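The blocking part can be a one-line firewall rule. A hypothetical nftables sketch; the table/chain names and the source address (203.0.113.7) are made up for illustration:

```shell
# Drop all traffic from an abusive source (hypothetical address):
nft add rule inet filter input ip saddr 203.0.113.7 drop

# Or, instead of a hard block, rate-limit new connections to the
# web ports so a single client can't hammer a slow endpoint:
nft add rule inet filter input tcp dport { 80, 443 } ct state new \
    limit rate over 50/second drop
```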
Much better than exposing users to CF or the likes of it.
They do not support major browsers. They support "major browsers in a default configuration without any extensions" (which is of course a ridiculous proposition), forcing people either to abandon any privacy/security-preserving measures they use, or to abandon the websites behind CF.
I use up-to-date Firefox, and I was blocked from using our company GitLab for months on end simply because I had disabled some useless new web API in about:config long before CF silently started requiring it, without any feature testing or meaningful error message for the user. Just a redirect loop. The GitLab support forum was completely useless for this, just blaming the user.
So we dropped GitLab at the company and went with basic git-over-HTTPS hosting plus cgit, rather than pay a company that will happily block us via some user-hostile intermediary without any path to resolution. I only figured out what was "wrong" (lack of feature testing for the web API features CF uses, and lack of meaningful error feedback to the user) after the move.
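The feature testing that was missing is a few lines of JavaScript. A sketch; the thread doesn't name the actual API that was disabled, so the feature names here are stand-in examples:

```javascript
// Return the names of required features missing from `env` (normally
// `window`), so the page can show a clear message instead of, say,
// entering a redirect loop. The default list is a stand-in example.
function missingFeatures(env, required = ["Worker", "requestIdleCallback"]) {
    return required.filter((name) => typeof env[name] !== "function");
}
```

A page using a challenge script could then do `const m = missingFeatures(window); if (m.length) showError(m);` (where `showError` is a hypothetical helper) instead of failing opaquely.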
Although I sometimes have problems with Cloudflare, it does not seem to affect GitHub or GitLab for me. They have other problems, though, which I have been able to work around.
Some things I have found helpful when working with GitLab are adding ".patch" to the end of commit URLs, and changing "blob" to "raw" in file URLs. (This works on GitHub as well.) It is also possible to use the API, and sometimes the data can be found within the HTML the server sends to you without needing any additional requests (this seems to work more reliably on GitHub than on GitLab, though).
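The URL tricks above can be wrapped in a couple of trivial helpers. A sketch; the example URL in the usage note is hypothetical:

```python
def raw_url(blob_url: str) -> str:
    """Turn a GitHub/GitLab "blob" file URL into its raw-content URL."""
    # Replace only the first "/blob/" path segment, as described above.
    return blob_url.replace("/blob/", "/raw/", 1)


def patch_url(commit_url: str) -> str:
    """Append ".patch" to a commit URL to get the plain-text patch."""
    return commit_url + ".patch"
```

For example, `raw_url("https://gitlab.com/group/proj/-/blob/main/README.md")` gives the corresponding `/-/raw/main/README.md` URL.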
You could also clone the repository onto your own computer in order to see the files (and then use the git command line to send any changes you make back to the server), but that does not include the issue tracker etc., and you might not want all of the files anyway, if the repository has a lot of them.
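If fetching everything is the concern, git's partial and sparse clone features help. A sketch with a hypothetical repository URL and directory names:

```shell
# Hypothetical repository URL. --filter=blob:none makes git fetch file
# contents lazily, and sparse-checkout limits the working tree to the
# directories you actually want (here "docs" and "src" as examples).
git clone --filter=blob:none --sparse https://gitlab.example.com/group/project.git
cd project
git sparse-checkout set docs src
```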
You can also send packets to userspace from nftables and do your SNI parsing/deep inspection/decision there. I used that a few times to do various things, like duplicate packet removal, etc.
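The userspace side of that can be fairly small. A Python sketch of just the SNI-extraction step, i.e. pulling the hostname out of a raw TLS ClientHello; the nftables rule that would feed packets to it (e.g. a `queue num 0` statement consumed via an nfqueue binding) is left out:

```python
import struct
from typing import Optional


def extract_sni(payload: bytes) -> Optional[str]:
    """Extract the SNI hostname from a raw TLS ClientHello record, or None."""
    try:
        if payload[0] != 0x16:               # not a TLS handshake record
            return None
        pos = 5                              # skip the 5-byte record header
        if payload[pos] != 0x01:             # not a ClientHello
            return None
        pos += 4                             # handshake type + 3-byte length
        pos += 2 + 32                        # client version + random
        sid_len = payload[pos]
        pos += 1 + sid_len                   # session id
        cs_len = struct.unpack_from(">H", payload, pos)[0]
        pos += 2 + cs_len                    # cipher suites
        comp_len = payload[pos]
        pos += 1 + comp_len                  # compression methods
        ext_total = struct.unpack_from(">H", payload, pos)[0]
        pos += 2
        end = pos + ext_total
        while pos + 4 <= end:                # walk the extension list
            ext_type, ext_len = struct.unpack_from(">HH", payload, pos)
            pos += 4
            if ext_type == 0:                # server_name extension
                # skip list length (2 bytes) + name_type (1 byte)
                name_len = struct.unpack_from(">H", payload, pos + 3)[0]
                return payload[pos + 5 : pos + 5 + name_len].decode("ascii")
            pos += ext_len
        return None
    except (IndexError, struct.error):       # truncated/malformed packet
        return None
```

With something like the `NetfilterQueue` Python binding, the verdict callback would call `extract_sni()` on the TCP payload and accept or drop accordingly; that glue is omitted here since it needs root and a live queue.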
The Lego pieces are indeed available for you to go build this stuff yourself, but the engineering effort required to do so quickly makes Palo Alto's or Checkpoint's licensing look extremely cheap.
Yeah, until you hit some turd in Fortinet (see how they mangle the SDP if you send a re-INVITE within a SIP dialog, even with all the SIP protocol handling checkboxes disabled). Then you spend weeks with support, and many hours of debugging and back-and-forth, just trying to convince them that they have an issue. That's after initially spending ~10 hours of dev/debugging time trying to convince the SIP phone manufacturer that they have a buggy phone, before realizing that different SIP packets arrive at the SIP phone than leave the PBX, because of this amazing Forticrap middlebox. All the while the whole company has issues with SIP telephony during attended transfers for months on end, disrupting communication with customers.