Trust but Verify (brendaneich.com)
157 points by khuey 1375 days ago | 44 comments



Nice to see some hands being waved about the matter. I've long been puzzled that large open source projects don't, by default, provide a virtual system capable of building the project with close to no configuration. There is some discussion on the linked bug 885777 (and links out from there) about why current builds are not idempotent on a given system, and about building a Vagrantfile to set up a build environment.

Even ignoring the verifiable build aspect, which I think is important, having an environment that's easy to start with, to use as a launching pad and, more importantly, as an example build environment, would ease new developers into the process of making a commit. A little more than a year ago, I started toying with the idea of committing to Mozilla late one night when bored. I went through the process of getting dependencies installed (build-dep wasn't enough, as it didn't provide the necessary version of autoconf), and even after ./configure passed, the build failed and I gave up. I would likely have powered through if I'd already had a bug fix to commit, but as mentioned I was just idly playing around with the idea while bored.


Firefox's build system is complex, but things are gradually getting easier for newbies. For example, if you're on Linux or Mac there's now a script (see https://developer.mozilla.org/en-US/docs/Simple_Firefox_buil...) that installs all the required dependencies. And |mach|, the (relatively) new build driver program, helps. And better build defaults mean you can get away without using a mozconfig file, at least to begin with.

As for the build error, I understand that's frustrating. Asking on the #introduction channel on IRC would be a good way to make progress (https://wiki.mozilla.org/IRC). That's a channel with lots of informed people and they're generally very friendly.

In fact, the very first time I built Firefox I had various build problems, and the people on Mozilla's IRC channels quickly helped me figure out that the culprit was actually faulty RAM -- I was building on a brand new machine!


The NSA is going to attack the weakest link in the chain. This proposal has some merit for hardening software against trojan and malware attacks, but realistically, the majority of people are running Firefox on top of Windows and OS X, and if governments want to plant surveillance, we know it's trivially easy to do so, especially on Windows.

Looking at the Snowden revelations, the NSA was attacking the client and server, but the server side a lot more. PRISM and MUSCULAR, plus intercepting people's actual hardware deliveries and implanting backdoors before FedEx even places the computer on your doorstep.

On top of all that, there's the actual security of the browser runtime. Do we have any historical evidence of government-injected, or even large-org-injected, backdoors in the closed source bits of open source projects? Isn't it far more likely that the spooks would leverage traditional exploits of buffer overflows and use-after-free bugs to do their work? It seems as if we have evidence of the NSA maintaining a huge catalog of zero-day exploits to deploy.

In the end, if you are running Firefox on anything but an open source operating system, you can't verify the complete system. To this extent, Richard Stallman's warnings over the years have turned out right. Even down to the firmware in your hard drive, BIOS, or radio chip, binary blobs pose a threat.


This is good and necessary, but I see this as mostly symbolic. In practical terms, coercing a browser vendor into modifying their binaries to add an exploit is a comparatively risky and difficult way for the NSA to get a backdoor. There are much less-risky ways for the NSA to get an exploit. To wit:

1) All browser vendors keep a private catalog of security-critical bugs. If the NSA cared to gain access to these, they could, with or without the cooperation of the vendor.

2) The NSA maintains its own catalog of zero-days in browsers, operating systems, Flash, Acrobat, and so on. They actively exploit these bugs in attacks targeted against individuals. That these attacks are delivered only to their intended targets makes the bugs essentially impossible for browser developers to detect and fix, until they're independently discovered.

3) NSA can add exploits to source. Compared to coercing a vendor into adding an exploit into their binaries, this has the advantage of being easy to pull off without the vendor's knowledge or cooperation.

In terms of Firefox specifically, the safeguards in place are not sufficient to prevent a determined adversary from committing a backdoor into the source. Code review can't catch all security-critical bugs, and anyway Mozilla committers regularly check in code which doesn't match the reviewed code; it would be easy to slip in an exploit after review.

I suspect something similar applies to other browsers. A browser's being OSS only makes the NSA's job of adding exploits marginally easier; there's nothing stopping the NSA from planting engineers on the IE team, for example.

4) The NSA could use its targeted attack ability to deliver backdoored binaries to specific targets. Even if Mozilla protects downloads with a pinned SSL cert, I have to believe that stealing or forging a cert is not out of the question for the NSA.

If I were the NSA, I'd need to observe that none of the above strategies was working before attempting the much more risky job of coercing a browser vendor into adding an exploit into the binaries they distribute. And we know that at least (2) above /is/ working for the NSA.

In other words, if we're serious about securing our browsers against this adversary, we need to think a lot bigger than just verifying that the binaries on our FTP servers match our source.


Auditing != code review. Static analysis and fuzzing count as auditing in general (and fuzzing works on closed source too).

@cromwellian's citation of Snowden on weak endpoint (OS, PCs running Windows especially) security is on target. Security is never done, and there are too many levels and spots to attack for users, open source projects, and even big companies to be able to afford defense everywhere. (Get a load of https://www.openssl.org/news/secadv_hack.txt if you haven't seen it.)

The browser cannot be excluded from this analysis. It presents a large attack surface, and top browsers have wide unverified binary distribution nets.

Adding exploits to Linux open source has been attempted but detected. We don't of course know of undetected successful attempts.

The risk of coercing a browser vendor is hard to quantify. Ladar Levison of Lavabit was brave enough to speak up pre-gag-order and shut down his business.

Have the principals of other companies been that tough? We don't know in some cases, but in others the answer is "no".

Even excluding what some such company principals didn't know but should have known (oops, unencrypted data over telco dark fiber), the bigs knew enough pre-Snowden to make a public stink. None did; many (AT&T, e.g. -- there are other examples) were complicit for decades.


Regarding #1 & #2: We need to find a way to drastically reduce (literally to zero) exploitable defects. Browsers, and especially Firefox, can go a lot farther here than they historically have. In the case of Firefox in particular, we'll figure out a way to make sandboxing work for Gecko and there's good reason to expect that Servo can succeed. But, even sandboxing and memory-safe languages cannot prevent logic problems. For example, https://bugzilla.mozilla.org/show_bug.cgi?id=919877 is up there as one of the worst SSL implementation bugs of all time, and Chrome's sandboxing did nothing to stop it, and neither would using a memory-safe language. (Edit: that bug never affected any Firefox release, as we found & fixed the bug before enabling that feature in the upcoming Firefox 28.) Ultimately, openness, and the verifiability that bolsters it, will be essential for getting to zero security vulnerabilities no matter what defense-in-depth mechanisms we put in place.

Re #3: I don't think the NSA even needs to try this yet, given #1 and #2. There are too many vulnerabilities going into browsers accidentally for anybody to bother deliberately adding one.

Re #4: On Windows, SSL is a defense-in-depth measure for protecting Firefox downloads/updates. Windows's Authenticode check is another defense-in-depth measure. I agree that both aren't perfect, but they aren't completely useless. More importantly, though, Firefox updates on Windows are signed and verified using a mechanism that is completely separate from the public PKI world. This is the primary defense (IMO) and this is something we should extend to updates on other platforms. (B2G has its own, different, wholly-separate signing mechanism too.) The design is sound though I wouldn't be surprised if there were things we could do operationally to improve things further; I don't know since I'm not in opsec.
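To illustrate the "verification separate from the public PKI" idea, here's a toy sketch of pinned-digest checking. This is not Mozilla's actual MAR-signing scheme (which uses real public-key signatures); the payload and names are invented for illustration:

```python
import hashlib
import hmac

# Hypothetical pinned digest, shipped inside the installed application
# itself rather than trusted via the public CA system. A real updater
# would pin a signing *key* and verify a signature, not a bare hash.
PINNED_SHA256 = hashlib.sha256(b"update-payload-v28").hexdigest()

def verify_update(payload: bytes, pinned_hex: str) -> bool:
    """Accept an update only if it matches the pinned digest."""
    digest = hashlib.sha256(payload).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(digest, pinned_hex)
```

The point of the design is that even a stolen or forged SSL certificate only defeats the outer, defense-in-depth layer; the attacker would still have to defeat the pinned verification baked into the client.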

> In other words, if we're serious about securing our browsers against this adversary, we need to think a lot bigger than just verifying that the binaries on our FTP servers match our source.

I don't think that anybody is going to disagree with this. However, this statement being true doesn't mean that verifiable, open-source builds are useless. Especially in the long term, as we improve on #1 and #2 in particular, verifiability and openness are very important.

Also, I miss you.


"But, even sandboxing and memory-safe languages cannot prevent logic problems." => Indeed. I wonder how you plan to reduce the amount of exploitable bugs to literally zero given that nothing can fully prevent logic problems.

Anyway, it sounds like a lot of work to reduce to literally zero the number of exploitable defects. Some advocate that a lazier approach based on POLA and object capabilities could work too. https://www.youtube.com/watch?feature=player_detailpage&v=eL... (I timestamped for the relevant part, but the full talk is good to watch too).

Building on an object capability language, even if some exploitable bugs remain along the edges, allows a high level of security. Not sure where Rust stands, but it looks promising for sure.


You misspelled zarro. :-|

I miss Justin too!

/be


I miss you too.


> and anyway Mozilla committers regularly check in code which doesn't match the reviewed code

How does that work? Are reviews optional?


It works because a reviewer will say "here are a bunch of comments, the patch is approved once these are fixed". It's a very reasonable thing to say when the comments are minor (such as style issues) and the reviewer and coder trust each other.


Reviews are required, but we don't have automation to test integrity and authenticity. Mostly the diffs between what is reviewed and what is pushed are cosmetic changes that the reviewer asked for and pre-r+'ed without wanting another patch in the bug, but there's room for mischief.

People do watch bugs and repos and catch stuff. We are working on making the system more foolproof, as part of a larger effort to move all our CI infra (save Mac, sigh) to the cloud and our release engineering to a devops model.


>(save Mac, sigh)

Bug 921040 [0] seems to be progressing, even if slowly.

https://bugzilla.mozilla.org/show_bug.cgi?id=921040


Yeah, I am not holding my breath. If the tremendous effort there pays off and Apple doesn't screw us later, yay.

/be


> "This will be the most effective on platforms where we already use open-source compilers to produce the executable, to avoid compiler-level attacks as shown in 1984 by Ken Thompson."

Using an open source compiler does not preclude the attacks in "Reflections on Trusting Trust". The whole point is that a backdoor can be inserted into the compiler, such that no copy of the source code has the backdoor, but the binaries generated would (and would perpetuate through the compiler compiling itself.)

With sufficient sophistication--and by no means do I believe the world is this compromised--all tools used to inspect such a binary could also mask the exploit. Every tool could be colluding against you.


But if it is an open source compiler, then there are ways to counter the attack: https://www.schneier.com/blog/archives/2006/01/countering_tr...
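The countermeasure Schneier describes is David A. Wheeler's "diverse double-compiling": build the compiler's source with an independent trusted compiler, let the result self-compile, and check that this matches what the suspect binary produces. Here's a toy model of the idea -- every function and value below is invented for illustration, and it assumes compilation is deterministic:

```python
import hashlib

CC_SOURCE = b"source code of the compiler"

def run_compiler(binary: bytes, source: bytes) -> bytes:
    """Toy model of a compiler binary turning source into object code.
    A trojaned binary re-inserts its trojan whenever it notices it is
    compiling the compiler itself (Thompson's self-propagation trick).
    Honest compilation is deterministic: output depends only on source."""
    out = hashlib.sha256(source).digest()
    if b"+trojan" in binary and source == CC_SOURCE:
        out += b"+trojan"
    return out

def ddc_check(suspect: bytes, trusted: bytes) -> bool:
    """Diverse double-compiling: if `suspect` faithfully implements
    CC_SOURCE, both build paths yield bit-identical binaries."""
    stage1 = run_compiler(trusted, CC_SOURCE)      # trusted builds the source
    via_trusted = run_compiler(stage1, CC_SOURCE)  # the result self-compiles
    via_suspect = run_compiler(suspect, CC_SOURCE)
    return via_trusted == via_suspect

clean = run_compiler(b"bootstrap-cc", CC_SOURCE)   # honest binary
trojaned = clean + b"+trojan"                      # Thompson-style binary
```

The key assumption is diversity: the attack only survives the check if the trusted compiler carries the *same* trojan, which is why Wheeler recommends a second compiler of completely independent origin.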


For those that don't recognize the title, "Trust but verify" was popularized by Reagan[1], which in turn was an adaptation of a Russian proverb.

[1] http://en.wikipedia.org/wiki/Trust,_but_verify


I think "don't trust" is a more accurate expression of what people really mean, but it has less of a ring to it.


Does Mozilla (or any other browser vendor) use deterministic/reproducible builds?

The Tor [1] and Bitcoin [2] projects are starting to use them. I hope this becomes more common.

1. https://blog.torproject.org/blog/deterministic-builds-part-o...

2. http://gitian.org/
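As a tiny illustration of the kind of nondeterminism reproducible builds have to stamp out: gzip embeds a timestamp in its output header, so two otherwise identical "builds" differ byte-for-byte unless the timestamp is pinned. The analogy to a real compile is loose, and `fake_build` is invented for the demo:

```python
import gzip
import hashlib
import io

def fake_build(payload: bytes, mtime: int) -> bytes:
    """Stand-in for a build step: compress the 'source'. gzip writes
    mtime into its header, so the output depends on when you build."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(payload)
    return buf.getvalue()

src = b"pretend this is a compiled artifact"

# Two builds at different times: hashes differ despite identical input.
a = hashlib.sha256(fake_build(src, mtime=1000)).hexdigest()
b = hashlib.sha256(fake_build(src, mtime=2000)).hexdigest()

# Pinning the timestamp restores determinism -- the core trick behind
# Gitian-style and other reproducible-build systems.
c = hashlib.sha256(fake_build(src, mtime=0)).hexdigest()
d = hashlib.sha256(fake_build(src, mtime=0)).hexdigest()
```

Real build systems have to hunt down every such source of drift: timestamps, absolute paths, locale, parallelism, and so on, before independent parties can hash-compare their outputs.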


As mentioned in the article, https://bugzilla.mozilla.org/show_bug.cgi?id=885777 is tracking that.


Following this argument, Chromium is also acceptable, as are Webkit builds[1].

[1]: http://nightly.webkit.org


Yeah, that part was a bit weird since he only mentioned the "rendering engines WebKit and Blink" being open source, while both have fully open source browsers built on them that are pretty easy to build yourself (especially compared to Firefox's current build process, though that is improving in leaps and bounds).

That said, the big difference is in how widespread they are. While he does use an explicitly unqualified "advantage over all other browser vendors", if you think instead about a verified build system that has guarantees for significant numbers of internet users, your Chromium population is much more limited. Probably the majority are the Linux distros that ship with Chromium, and the rest are people who are largely building from source anyway.


While there are other, open-source browsers built on WebKit and Chrome source code, what a typical user downloads from Google is not an open source product you can build yourself.


That is not quite correct (and what's correct is just repeating what I just wrote).

If you download source code, you can build these browsers. If you go to their release pages, you can download these browsers. If you apt-get install chromium-browser, you'll get one of those browsers.

You're correct that Chrome and Safari both contain large chunks of closed source code, but they are the "other" to the source code. The source code builds just fine.

And yes, the open source versions aren't what users usually download, which is exactly what I said. The largest users of Chromium are from Linux distros that use it and people building from source themselves. I'm not aware of any major users of Webkit builds directly, but there are many smaller browsers that are themselves open source and using Webkit or embedding Chromium (and I imagine a handful now using Blink).


People tend to use the concept of a "grandmother" who downloads what the site offers her, which in Google's case is the closed-source non-verifiable Chrome. However, people usually counter these grandmother arguments noting that the grandma has no big understanding of privacy and so Chrome is fine for her.

However, in this case, a more apt concept of a "mother" should be applied -- i.e. somebody who understands the values of privacy on the internet, but still gets influenced by what the website offers her.

So me, you and the hypothetical mother all want a browser that's supposed to be as safe from intrusions by the NSA as possible, but unlike the two of us who will understand Google's claims like "Get a fast, free web browser" in the proper context, the mother would not.

---

My point is that I believe we should strive to make it very clear, for this particular group of "mothers", which browser is as secure to intrusions as possible and which is only "fast and free" according to the corporation that issues it (and has a harder-to-find open-source variant).


There are also various other open-source browsers built on WebKit.


<rant type="philosophical">

There are roughly three levels of trust that people can have in something or somebody.

1) Credit is the lowest level. It is based on nothing more than a history of good conduct, a statistical likelihood of good conduct, or some other objective data about the person or entity in question. This is the highest level of trust that most reasonable people are willing to grant to others that they don't know very well, such as the guy who just walked into the bank looking for a loan. When you give someone credit but they turn out to be a bad investment, you may lose some money or stuff but you generally don't need to feel personally hurt.

2) Trust, in a narrow sense, is the next higher level. It is based partly on a history of good conduct, but also partly on a deeper understanding of them that tells you they care personally about you and therefore they'll try not to betray your expectations. This is the level of trust that people usually exchange with friends, old-time acquaintances, loyal pets and close relatives. If the person you trusted betrays your trust, you feel personally hurt.

3) Faith is the highest level. It need not be based on any actual history of good conduct, and it may even be granted despite a history of unwanted conduct. Most people reserve faith to supernatural entities (if they exist) and very close family members. A deeply religious person keeps faith in God even if God appears to turn a blind eye to his misfortunes. A parent keeps faith in her developmentally challenged child even if the child has experienced nothing but a series of failures so far. If you put faith in somebody and they wilfully, maliciously betray you, you are destroyed.

Too often, governments and corporations ask us to have Faith in them. This is ridiculous, especially when there's been a series of events that clearly explain why people don't want to trust them. It also makes a perverse sense, because a) Faith is the only level of trust that can be granted despite evidence to the contrary, and b) Faith ties the believer so strongly to the believed that they can now pull off bullshit like "we're on the same boat" or "we're too big to fail".

Since Google and Microsoft are neither God nor your mom, reasonable folks should refuse to grant them Faith and consider downgrading them to either Trust or Credit, if anything at all.

In the case of most large multinational corporations that sell proprietary, probably backdoor-ridden software, the best we can grant them is Credit -- a cold, rational analysis based on how they, as rational self-interested actors, are likely to behave.

Meanwhile, some organizations offer us a deeper look into how they and their products work. This is the beauty of open-source software and the nonprofits that maintain them, like Mozilla. The deeper understanding (either of code or of the people involved) that they offer is what makes me feel that they are worthy of Trust, not just Credit.

But Trust still requires evidence-based judgment. Hence "Trust but Verify". A deterministic build system, for example, adds evidence for Trust.

If you don't feel any need to verify, that's Faith, not Trust. Remember, nobody deserves Faith except gods (if they exist) and your very closest kin.

</rant><sorry for="wall of text" />


I remember at least one FOIA request, a couple of years back, to reveal what ties Google to the NSA; it was shut down. I also remember that when Google first released Gmail, everyone knew that Google was profiling your emails.

Now that the NSA has become the official strawman, Google has automatically become the good guy. I don't get it.


Probably because Google's scanning of your emails is all done with algorithms and for the purpose of selling ads. The NSA's spying program has a lot more foul connotations and possibilities.


Thank you for the properly self-closing XHTML tag.


Apart from the open-sourceness of browsers, it's worth considering the culture of the companies behind them. Speaking of Mozilla, a non-profit, they have always cared about user rights and privacy and have stayed true to their ideals for more than 10 years now. Something you can't say about Google, which has been tainted by every imaginable privacy issue and keeps actively luring people to stay connected in Chrome, Google+, Chromebook -- all for a better surveillance experience.


"WebKit and Blink (chromium) are open-source, the Safari and Chrome browsers that use them are not fully open-source. Both contain significant fractions of closed-source code."

Can we please cite a source here explaining which parts are not open source?


For Safari:

The Open Source WebKit part is significantly less self-contained than Firefox or Chromium. A whole lot of stuff that Firefox or Chrome implement themselves relies on OS X system APIs in the case of Safari. (Some of those things may be open sourced separately, though.) Non-OS X WebKit ports have to implement all that stuff on their own on top of whatever infrastructure they use, such as Qt.

For Chrome:

Everyone knows about the PPAPI Flash Player.

Additionally, https://src.chromium.org/svn/trunk/src/build/all.gyp gives hints about other stuff that's built in Chrome but not in Chromium. Some of that stuff is actually Open Source. For example, courgette shows up on the Chrome-only list, but a quick search suggests that it's Open Source.

However, in that file, you can find references to Widevine (DRM) and the PDF reader without finding source code for either in the repo.

Also, there's some Chrome-only print stuff and it's not clear how much of that is Open Source and how much proprietary and if it is Open Source, why it's not built in Chromium.

The stuff in all.gyp is not the whole story, as one can conclude from knowing about Flash Player without seeing it mentioned there.

To find out more, one would need to go through https://src.chromium.org/svn/trunk/src/third_party/ and figure out which dependencies aren't Open Source. In addition to the adobe/ directory there, you can see swiftshader/ (software fallback renderer for WebGL). It comes with a README.chromium file (https://src.chromium.org/svn/trunk/src/third_party/swiftshad...) that says: "License: Proprietary".

So that gives a minimum of:

* Flash Player

* PDF Reader

* Widevine

* SwiftShader

* Maybe some Cloud Print stuff.

(Edit: typo and formatting)


H.264 codec...


That one is, IIRC, LGPL, so it's Open Source in the copyright and code availability sense, but it's otherwise encumbered, so it's not in Chromium by default.


"can we use such audited browsers as trust anchors, to authenticate fully-audited open-source Internet services?"

I don't understand how. Anyone have an inkling?


I welcome this call to action and continue to believe that the open source approach provides the foundation for transparency ... though the red herring in this debate is the concept/notion of privacy.

The reason such systems are built and supported is that they check a lot of boxes for purely commercial reasons ... the scenario no one is talking about is the one where senators dip into the classified stream, giving serious competitive advantage to actors in the free market.

Earlier efforts, such as ECHELON, were just state-sanctioned industrial espionage, and what is going on now is just an extension of the same.

I am all for defending/upgrading the concepts of privacy and basic human rights, but as with many things ... the pragmatic solution is to make it very easy to 'follow the money' ...


Can someone clarify which parts of Chrome aren't open-source? Is it just the Flash, Adobe, etc. closed-source plugins that get distributed with it, or are there other not-open pieces that get compiled in?


All Google integration such as sync is omitted from the Chromium builds, along with a couple of other features. For example, on Windows (not sure about OS X) the default in-browser PDF reader for Chrome is actually a compiled-in closed-source tool that uses code mainly licensed from Foxit Reader. I'm sure there's more, but those are the most notable examples I can think of off the top of my head.


Sync is actually part of Chromium[1][2]. The big parts are probably the PDF viewer, as you said, and Flash. Chromium also ships with fewer codecs than Chrome (e.g. no H.264), but I'm not sure if they're closed source or just not licensed for redistribution. There are probably other parts someone can chime in with.

[1] https://code.google.com/p/chromium/codesearch#chromium/src/s...

[2] http://www.chromium.org/developers/design-documents/sync


Trust but verify is just a fancy way of saying trust no one.

You don't need to verify the things you trust, because that is the definition of trust.


Trust is a product of statistical evidence and probabilities. How does trust build up? By observing that the source you monitor is very often right. How does one lose trust? By observing that the source is not right often enough. If you cannot or don't want to verify, it's not trust; it is blind faith. Trust only allows you to assume that your source is right when you cannot (afford to) verify.
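That evidence-based notion of trust can be caricatured in a few lines. The Beta-prior update below is my own illustrative choice, not anything from the thread:

```python
def trust_estimate(successes: int, failures: int) -> float:
    """Posterior mean of a Beta(1, 1) prior updated with observed
    verifications: each time the source checks out, trust rises;
    each failure drags it down."""
    return (successes + 1) / (successes + failures + 2)

# A source you've never verified sits at 0.5 -- granting it 1.0
# regardless of evidence is Faith in the GP's taxonomy, not Trust.
```

The asymmetry the comment describes falls out naturally: many confirmations slowly approach (but never reach) full trust, while a handful of failures among few observations collapses it quickly.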


Can someone expand the "use Firefox as a trust anchor for services" idea? How would this work?


Stay tuned.



