Tor Browser disabled NoScript, but can't update (torproject.org)
123 points by lichtenberger 42 days ago | 110 comments



> Turns out an unrelated 3rd party can suddenly remotely disable Tor anonymity protections at their whim

Am I the only one annoyed by people pushing this "they flicked a switch" narrative? No. They provided shitty software that didn't work under certain conditions (in this case, date-related) and thus broke your shitty software.

A third party having remote control capability is something entirely different.


The extension disabled itself due to expiry in this case, but it sounds like OP is upset that Mozilla can remotely disable extensions used by Tor users at will with a kill switch (which they didn't use here, but surely exists to combat malicious extensions).

Some other people are also upset that the Normandy system has the capability to push extensions at all, and that it is apparently on by default.

These both seem like reasonable concerns that are unrelated to the expiry but highlighted by it.


Lack of understanding of the technology and excessive paranoia will lead to that, I guess. "I'm telling you people, the man's after us!", and this man gets to feel like he's the hero exposing some conspiracy.


Definitely a loony if they care about Mozilla having essentially a live backdoor into your computer if you’re running Tor. It’s not like they are a US company that the government can manipulate at will as long as it properly insinuates terrorism or child pornography. These guys can just be trusted, right?


If I'm not mistaken, they very well could revoke the intermediate certificate if they wanted. This wasn't a case of that, but it seems it could happen.


Does the intermediate have an OCSP URL, and/or does the CA that signed it have a CRL URL?

One or both would need to be true for a normal revocation to have any effect.
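
For anyone who wants to check a given intermediate themselves: a rough sketch using Python's pyca/cryptography library to look for exactly those two pointers. The filename is a placeholder, and whether either extension is present varies per certificate.

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    # Placeholder filename: the intermediate cert you want to inspect.
    cert = x509.load_pem_x509_certificate(open("intermediate.pem", "rb").read())

    # The Authority Information Access extension may advertise an OCSP responder.
    try:
        aia = cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
        print("OCSP:", [d.access_location.value for d in aia
                        if d.access_method == AuthorityInformationAccessOID.OCSP])
    except x509.ExtensionNotFound:
        print("no OCSP responder advertised")

    # The CRL Distribution Points extension may advertise CRL URLs.
    try:
        cdp = cert.extensions.get_extension_for_oid(
            ExtensionOID.CRL_DISTRIBUTION_POINTS).value
        print("CRL:", [n.value for p in cdp for n in (p.full_name or [])])
    except x509.ExtensionNotFound:
        print("no CRL distribution points advertised")

If neither prints a URL, a client doing standard revocation checking has nothing to ask.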


Usually the whole certificate chain is distributed as a bundle, and most software doesn't bother checking revocations.


Are we talking about Firefox extensions or something else?


Not sure of the Firefox implementation; just pointing out revocation isn't necessarily straightforward.


The way I understand it is: certificate revocation is handled by checking OCSP servers, and OCSP servers can be programmed to give different answers depending on the IP address of whoever is asking. In other words, it should be possible to disable all addons for a selected user by targeting him by IP address.
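
To make that concrete, here is a minimal sketch (pyca/cryptography plus requests; the responder URL and filenames are placeholders) of what a client-side OCSP query looks like. The key point is that the POST necessarily reveals the client's IP to the responder:

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    # Placeholder filenames: the cert being checked and its issuing CA.
    cert = x509.load_pem_x509_certificate(open("subject.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    req = (ocsp.OCSPRequestBuilder()
           .add_certificate(cert, issuer, hashes.SHA1())
           .build())

    # The responder sees the source IP of this request, so in principle
    # it could answer "revoked" only to addresses it wants to single out.
    resp = requests.post(
        "http://ocsp.example.org",  # placeholder responder URL
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    print(ocsp.load_der_ocsp_response(resp.content).certificate_status)

Whether Firefox's addon-signing chain is actually checked this way is a separate question (see the OneCRL discussion below).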


That is such a massive security flaw, and if any other company had such a power over my browser experience I would drop their product immediately.

Mozilla, you've really been testing my patience for the last two years.


Seems that Firefox skips revocation checks for CA certs [1].

[1] https://wiki.mozilla.org/CA/Revocation_Checking_in_Firefox


Where does it say that? The link says they centrally manage revocations using OneCRL and then push a single revocation list to browsers (independent of browser updates). Which means they can revoke any certificate they want using that mechanism.


Ah, you're correct. Seems they skip CA CRL/OCSP in favor of their own CRL. Thanks for the correction.


> Am I the only one annoyed by people pushing this "they flicked a switch" narrative?

What they set up was essentially a dead man's switch, though.

> A third party having remote control capability is something entirely different.

They’re pushing fixes through a telemetry channel which can also be used to change your preferences, so...


Certificates can be renewed or revoked; that's literally a switch. That the apparatus before or after the switch is more complicated is irrelevant.


Flicking a switch, releasing the dead man's switch, is there really a difference?


Letting a certificate expire is the same as flicking a switch IMO. Mozilla is a real organization with full-time paid staff; things like this don't just slip through the cracks, especially when the 'fix' is to let Mozilla run spyware on your computer.


Your comment has a lovely silver lining: You believe it’s impossible for Mozilla to make such a mistake by accident! Anyone would be cheered to hear that, if presented positively; so I hope my callout of this can help the Mozilla folks working on this smile a little at the faith in them it shows.


I think almost anyone who has worked at real organisations with full-time paid staff can tell you that things do slip through the cracks. At least I can.


I do security for a large Enterprise. You'd be surprised how unorganized some of this shit is.


Somewhat related to this ongoing Mozilla plugin saga: are there any infamous stories I should look up that involve huge mistakes leading to unfixable clients? I.e., imagine bricking your customers' devices with an irreversible buggy update. In Mozilla's case they were able to deploy a hotfix for many, and an update for others. But I imagine there have got to be some great stories about completely bricking countless devices.


It's one of my favorite interview questions. I only know the answer because it happened to us once, with 5M users on the affected version before it was discovered. It's a very good question to see how people react in a mostly hopeless situation. The only candidate out of 100s to logically get to the solution (or essentially it) was one of the best engineers I've hired and worked with.

"So you and your team just released a tested and reviewed build, did gradual rollout and numbers looked good - so out to general public it goes. All clients update over the following days. QA discovers, upon testing the next release, a latent bug that prevents the current version from connecting to the update service and running the updater. How do you resolve this?"


I'm super interested to know the answer to this question. I imagine it's more interesting than just uninstall/reinstall.


As said in the other answer, hopefully the interviewee asks how it's unable to connect. If it's an Oculus situation [0] where the signing certificate is expired and you don't have a backdoor like Firefox studies, the only way is a reinstall or a "fix utility" [1].

0: https://news.ycombinator.com/item?id=16541235

1: https://www.theverge.com/2018/3/8/17095414/oculus-rift-softw...


Need more info: why couldn't it connect?

E.g., if it was just a typo in the domain name, just register the new domain name or adjust the DNS record ;-)


Yep, that's how interview questions work. If someone immediately recommends a solution, that's someone who probably needs to grow in their field a little before being given a position where this is a potentially real situation.


And as an interviewer I don't really want an answer. What I really want is to watch your brain grind away at a problem. So ask these questions, even rhetorically. Talk through all your angles of attack.


A good signal is when the question is: Who else is on my team?


To play around with a non-technical solution I would consider rushing out a "Version 2.0" of the application that accomplishes the following (if possible within a <24 hour window):

1) Has a functioning update service that fixes the original problem

2) A quick re-skin/theme change to, e.g., a more "modern" version of its original design

3) Whatever performance/security improvements you can cobble together quickly or were in the pipe already

I would do this because I see this as a business/finance/sales/marketing problem as much as a technical one. Assuming this bricked the core product, you have to save the business first, then save the quarter, then the week, then the day.

I think this would work because it is much easier to get a user to re-download a version 2.0 with performance/security/design improvements than it is to get them to download almost anything else (imo). Again, this is assuming no saner solution exists. Interested to hear your accepted answer though.


I might not be your target user base, but that's exactly the opposite of what I would like as a user. I am already forced to update because the current version does not work correctly. Forcing UI/design/any major change (especially ones that are "quickly cobbled together" and thus might be buggy) on me in that situation is not nice. I am then left with the choice between a) non-working software or b) a major update I might not like.

Perhaps it's good from the company POV because that's how you make sure your users migrate to the new version even if it is still inferior/buggier.


You’re definitely right; my suggestion is business-friendly (theoretically) and likely user-hostile depending on how well it’s executed.


I would find any source of displaying a message to the end user.

Most applications have some data that comes from a remote server, whether it's on the home screen, any kind of announcements screen, server-sourced error messages, etc.

I would use that to communicate an update is available, with a handy link.


"I would give my two weeks notice."

So how'd I do in the interview? ;)


I’d love to hear how this was resolved.


My first thought is to email a link to a fix, including a reason and an apology.


Start screaming and rollback?


You make the next version a major upgrade. Similar to Firefox 53?


“My left shoe won’t even reboot”

https://arstechnica.com/gadgets/2019/02/my-left-shoe-wont-ev...

I don’t know if they found a fix in the end



Pretty much this with some versions of Samsung's One Connect Mini box - the box (which is the only way to connect HDMI to many Samsung sets) connects to an update service, but during the process triggers a reboot which cuts the update short, which then starts the update again, which then triggers a reboot and... There appears to be no way to get the box back to a working state without replacing it entirely.

I will never buy another Samsung set again. Shipping a TV where HDMI access depends on a proprietary box is unconscionable.


Not unfixable, but there was a fun story about a game that shipped without the ability to be patched, so the developers later "patched" it by buffer overflowing their own EULA: https://www.gamasutra.com/view/feature/194772/dirty_game_dev...


BLU phones released an update that made it impossible to unlock the phones. I factory reset mine rather than muck with downloading another update to the SD card.

https://www.pcmag.com/news/357649/blu-phone-update-is-bricki...


Pretty amazing you could even reset it. If that happened on iOS you’d be out of luck.


Mobile apps broadly fall under this, because update cycles of apps can be very long.

Storytime: App devs built their app with the debug flag on. (Why would anyone use CI anyway?) The debug flag is used for local developer builds, so all API endpoints were pointing to dev/staging. In order to make this fuckup go unnoticed (otherwise customer retention would be awful, plus possible media coverage), the dev/staging endpoints were redirected to production, so that they would take the real orders from the broken builds. But we could only do this for our own APIs, not for our payment providers. So we had to ship product without getting paid. :|


Not exactly clients, but the Parity multisig "hack"[0] was pretty wild. All it took was one permissionless call to permanently wipe the shared library of a large bunch of multisig wallets, thereby freezing those wallets forever. Some of the impacted people, including Parity, are still pushing for overriding the wallets to be unlocked, despite no support from anyone else.

0: https://blog.zeppelin.solutions/on-the-parity-wallet-multisi...


Not an update, but an expired certificate on Wink hubs:

http://status.winkapp.com/incidents/4c47cdvgl6b4


Imagine if this affected something like FirefoxOS devices and the lack of a working certificate made it impossible to fix remotely? I know I'd be annoyed.


If Mozilla hadn't locked down Firefox so much, the fix could have been as simple as going into about:config and flipping a switch to allow unsigned plugins.


FWIW, since the original intent of the change to block unsigned plugins was to prevent third-party software installers from adding malicious plugins without user consent, leaving a preference to toggle it back would have been pretty pointless, as the third-party installers could have just toggled the preference themselves (about:config is backed by an easily modifiable text file on disk).
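
If memory serves, that backing file is prefs.js in the profile directory, and any process running as the user can append lines like these (the second pref is the signature-check cache mentioned elsewhere in this thread):

    // prefs.js in the Firefox profile directory (sketch from memory)
    user_pref("xpinstall.signatures.required", false);
    user_pref("app.update.lastUpdateTime.xpi-signature-verification", 0);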

TBH it’s not hard for me to think of ways Mozilla could have preserved user control even in the face of these constraints though. For instance, I already use a master password to encrypt/decrypt the password database in Firefox. Why not allow the master password to also protect the settings and store them encrypted so a third party can’t modify them?


If the malicious plugin or executable can write to disk, it's probably too late anyway?


You're right, I think it just makes it less "gray". If Oracle bundles crapware with a Java installer that installs an unsigned plugin in Firefox, you can write a press release to smooth things over.

If the crapware uninstalls Firefox, installs Firefox Dev Edition, and installs the unsigned plugin, there's not much of a "gray" defence there.


Adware came bundled with Chrome itself (for the money), i.e. installing new browsers is not out of the question here. That excuse feels really flimsy and tailored to permit only the exact choices made and no other option. But there are other options, such as helping users get rid of adware when it's detected in the browser. It would also result in a larger benefit of improving their system instead of just the browser.


I find my explanation more plausible than the implication (which you and others seemingly subscribe to) that Firefox went through the effort of requiring plugins to be signed for some other reason, one that is arbitrary or somehow malicious.


That's not what I am saying. I am saying that through a chain of choices they arrived at a specific conclusion how to solve their problem and then added justifications that admitted no alternatives when alternative solutions would have avoided the situation today.

People, including myself, were arguing against putting Mozilla in a position where they are a single point of failure. That critique was countered with the specific argument that giving users an override is not acceptable, because any override could also be used by adware, and thus Mozilla must be the sole arbiter of what addons can be installed.


It was a significant issue for years. Mostly adware and things that replaced your default search engine.


the solution is to help users remove the adware, not centralize control over addons


Yeah, it could replace Firefox with the developer edition that allows unsigned plugins.


The malicious code may well have the necessary permissions to write to the profile folder, but not to modify the executable.

Under the Unix security model, for example, that would be pretty likely: your profile's owned by your account, the executable & its containing folder are owned by root & go-w, so code running as you can tamper with the former but not the latter.


The malicious code can simply write an executable somewhere in the profile (or /tmp, or anywhere). It is highly unlikely that the profile (usually in /home/${USER}/) is on a filesystem mounted with the "noexec" flag.

Or just inject code into the browser at runtime.

Under the Unix security model, the UID is the permission boundary. Even if the binary is owned by root, it inherits the user's UID when they run it.


Well, true, but all this is a lot more conspicuously fishy than "flip a setting that the user might have legitimately flipped themselves". You're not going to get far hiding a whole new Firefox install in $HOME before someone asks why theirs suddenly got 200MB bigger, for one thing.


They explicitly argued with the windows threat model that installers run with admin privileges and thus can modify the program directory too. Now arguing with the unix model is shifting goalposts.


Who argued that, where? I don't see it in this subthread.

Regardless, as I said in my comment, I only mentioned the Unix model "for example" - not to "shift the goalposts", just because I'm more familiar with it than with whatever anti-binary-tampering measures Windows may have.



That very page acknowledges your concern & presents their counterargument - namely, they consider security software to be part of the Windows model, and expect that to intervene in the case of modified binaries:

> By baking the signing requirement into the executable these programs will either have to submit to our review process or take the blatant malware step of replacing or altering Firefox. We are sure some will take that step, but it won’t be an attractive option for a Fortune 500 plugin vendor, popular download sites, or the laptop vendor involved in distributing Superfish. For the ones who do, we hope that modifying another program’s executable code is blatant enough that security software vendors will take action and stop letting these programs hide behind terms buried in their user-hostile EULAs.

(emphasis mine)


Replacing FF stable with FF dev edition would not constitute modifying the program; it's installing a new browser. A lot of installers shipped Chrome; this was even encouraged by Google. So I do not think this addresses the issue.


In reality though, this change has been live for a year. How many of the malicious actors switched to the model of uninstalling Firefox Stable and installing Firefox Developer Edition? (And presumably updating all the user’s aliases, start menu items, etc to point to the new browser).

I haven’t heard of this actually happening.


"attack widely known and observed in the wild" is usually not the bar by which computer security systems are measured.


Mozilla’s thesis is that attackers wouldn’t be willing to go to this next step (actually uninstalling Firefox and replacing it with a different binary) because antivirus vendors would start treating them like malware.

In other words, it’s not a purely technical solution, it’s a political solution, and the success or failure of such a political solution can only be judged by real world results rather than technical possibility.


> For instance, I already use a master password to encrypt/decrypt the password database in Firefox.

This feature uses SHA-1. [1]

> Why not allow the master password to also protect the settings and store them encrypted so a third party can’t modify them?

It cannot protect against that. It can protect against a household adversary such as a 4-year-old, and that's about it.

[1] https://palant.de/2018/03/10/master-password-in-firefox-or-t...


That’s disappointing to see. I had assumed it used PBKDF or something sane.

Of course if Mozilla had wanted to secure the config settings they could just fix the master password hashing functionality at the same time.
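
For reference, "something sane" needs nothing exotic; a rough sketch with only the Python standard library (the iteration count is illustrative, not anything Firefox uses):

    import hashlib, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # Roughly the criticized legacy scheme: a single fast hash,
    # cheap to brute-force on commodity GPUs.
    weak = hashlib.sha1(salt + password).digest()

    # A saner key derivation: every guess costs the attacker the
    # same several hundred thousand HMAC invocations.
    strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)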


If you have disk access, why even bother with a malicious plugin that has to follow all the plugin rules? You could just modify the executable itself and do whatever you wanted with no user visibility.


It's not impossible, but it's much harder to do this in a way that reliably persists through Firefox changes and updates. In comparison, changes to the prefs file are trivially persistent regardless of when and how Firefox is updated.


> If Mozilla hadn't locked down Firefox so much..

Ordinary users would have 5 levels of toolbars!!! Installed by anti-virus apps, etc.

Locking down can backfire, but don't think this wasn't done in the interest of most users.


I did exactly that; it worked on Debian, however not on Windows:

1. Type about:config in URL bar

2. Change xpinstall.signatures.required from true to false

3. Restart Firefox


Debian modifies Firefox builds to set a build flag to allow this preference to work, I believe? The default Windows and Mac builds of Firefox ignore this preference.


It's the opposite: builds compiled from source allow changing the flag; the builds from Mozilla have the option disabled.


Apparently the switch to enable unsigned plugins does still work in Tor Browser - it just doesn't re-enable plugins that failed the signature check, because the failure is cached somehow.


Set `app.update.lastUpdateTime.xpi-signature-verification` to 0 and restart the browser to refresh the cache (on normal Firefox).


And if only my company would allow me to FTP into our servers directly I could hotfix a problem in production much faster.

...which is to say, it's a trade-off, and though you can disagree with the weight Mozilla assigned to the pros and cons, their reasoning was valid.


> Turns out an unrelated 3rd party can suddenly remotely disable Tor anonymity protections at their whim, and possibly endanger TBB users (or deliberately help in deanonymizing them).

> remotely disable Tor anonymity protections [!]

Maybe a 'certificate is expiring soon' warning on the browser side?

Users would know and Mozilla would be forced to keep certs valid.


One of the weird things here is that JavaScript is not disabled by default in the Tor Browser; instead an addon is needed.

This whole thing is also yet another reason to use something like Tails or Whonix, which are much safer to use with JS enabled.


It is a valid trade-off to leave JavaScript enabled by default in Tor Browser. The reason being, the experience is much more user-friendly to a layperson and thus results in greater network usage and diversity. "Safe" is a relative concept in information security and not everyone has the same goals or risks - though in principle I agree disabling JS greatly hardens a browser.


I remember saying over and over again that using Firefox for the Tor browser was a horrible idea.

Use curl -H "" and links --dump. Until we have something sane, that's really the only safe way to browse Tor if you actually have something to hide.

(There's dillo and other things, like the absolutely horrifying browser I've been writing, but that hardly even supports forms right now, heh. But I'm worried about connecting stuff like that to stuff like Tor.)


OTOH the more people who use Tor, the more anonymous everyone on the network is, so having a usable version is important.

https://www.freehaven.net/anonbib/cache/usability:weis2006.p...

People with heightened privacy concerns can always use Whonix or Tails instead.


curl is good enough for fetching RSS; leaving that running would probably help quite a lot.


Every single one of my Firefox extensions stopped working with an error message that says "could not be verified for use in Firefox and has been disabled". Re-downloading doesn't work either. I'm assuming it's happening to everyone?



... a method to apply a hotfix manually without enabling studies: https://news.ycombinator.com/item?id=19827302


I can't improve upon this comment:

"Hey Mozilla - this is why people said that forcing addons to be signed with no way to disable was a bad idea. You didn't even make it a year without screwing it up.

Who could have seen this coming? Oh wait, pretty much everyone who argued against this policy."


It's a dumb comment. There was a ton of malware being distributed as add-ons that came packaged with other installers.

This solved that issue.


The cost of their “solution” seems pretty high in retrospect.

This isn't a binary proposition; they could have solved that issue while respecting user choices, if it had been a priority. The solution they chose was inherently flawed. Hopefully this gives other providers pause before they build similar things.


I don't get why this flag was disabled on Linux though. Nobody uses "installers" on Linux, come on. I am not sure if malware of this kind even exists on Linux.


Burning down London stopped the plague too.


The Great Fire burned central London long after the plague had moved to the poorer outer areas of the city, where it continued to die out gradually unaffected by the fires elsewhere.


How did Firefox come to be like that? Which browser should I use now?


[flagged]


Don't attribute malice where incompetence may serve as a valid explanation. This is a fundamental tenet of information security practice. Besides, it's silly to think an outage of massive proportions would be the most effective means of owning software.


[flagged]


> I don't buy for 5 minutes that Tor Browser is secure regardless of configuration.

You don't need to "buy" this idea - we're talking about free and open source software. If you passionately believe it's insecure, you would do everyone a great service by explicitly pointing out how it is so in the source code, rather than peddle FUD.


The Tor project is a US government op, true. It's good at hiding government employees using untrusted networks from hostile third parties. It is also good for foreign whistleblowers releasing data to the US government.

Tor is NOT good at keeping you private from the government. It is good at keeping you private from your ISP or employer, however.

Don't do anything illegal using Tor thinking you are safe.


> Tor is NOT good at keeping you private from the government

This is a bold claim. Would you mind backing it up?


I wouldn't agree with the claim, but it might be referring to Tor's weaknesses against country-level or global passive adversaries -- timing attacks correlating your requests with the site's responses. If an adversary can log everyone's bandwidth and can get ISPs to tell them which human is using Tor, metadata anonymity goes away.
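
A toy sketch of the correlation idea (illustrative numbers, plain Python 3.10+): bin packet timestamps from both vantage points into time windows and score entry/exit pairs by correlation.

    from statistics import correlation  # Python 3.10+

    def to_bins(timestamps, window=1.0, horizon=60.0):
        bins = [0] * int(horizon / window)
        for t in timestamps:
            if 0 <= t < horizon:
                bins[int(t / window)] += 1
        return bins

    entry = [0.1, 0.2, 5.3, 5.4, 12.0, 30.5]  # timings seen at the client's ISP
    exit_ = [0.4, 0.5, 5.6, 5.7, 12.3, 30.8]  # timings seen at the destination

    # Near 1.0 despite the network delay: strong evidence that both traces
    # belong to the same flow, with no crypto broken anywhere.
    print(correlation(to_bins(entry), to_bins(exit_)))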


Isn't this true for pretty much every privacy tool (I2P, Freenet, GNUnet, etc.) though? I don't think that any of them can protect you if an adversary can monitor the whole internet.


Yes. There are some research networks like Vuvuzela that add noise to resist timing attacks but they aren't common.

The reason it's still a valid criticism is that people suggest that Tor has this resistance in threads like this one. :)

(Of course, you don't need to monitor the whole Internet if it's intra-country traffic that you're interested in.)



Yes, what about it? This is a list of the IP addresses of the exit nodes; it is useful if you want to start blocking Tor traffic on your site (though I am against that).

This is not a list of the IP addresses of the onion sites, nor of the users. In addition, exit nodes are not used when you connect to an onion site.


How could Tor both be "good at hiding government employees using untrusted networks from hostile third parties" and "NOT good at keeping you private from government"?


This article sums up all the issues with Tor:

https://restoreprivacy.com/tor/

In particular, the Silk Road shutdown, and various pedophile rings successfully de-anonymized and their operators arrested. There is good evidence that a powerful enough entity (especially one spanning multiple geographic regions) can successfully de-anonymize Tor traffic.

If you keep a low profile and just use Tor to browse the regular internet privately, you are fine. But if you engage in wide-scale illegal activity, you will most likely pay the price.


This sounds like FUD.

Why would it be better to connect directly to a site, rather than force them to expend resources to deanonymize my connection?


[flagged]


Even if it isn’t all funded by government money, the security apparatuses of at least the big players are, without a doubt, deep into Tor.

We already know that some governments use drug money to pay for clandestine/black ops.


The user needs to keep in mind the goals behind this software, which was originally developed by the US military.

It has an asymmetric architecture biased toward a 'home network' infiltrating a 'hostile network'. So Tor is perfect for military intelligence officers working from a 'safe network' (one that doesn't zero in on Tor protocol beacons) to pop up in some other network and do their job.



