Even ignoring the verifiable-build aspect, which I think is important, having an easy-to-set-up environment to use as a launching pad -- and, more importantly, as an example build environment -- would ease new developers into the process of making a commit. A little more than a year ago, bored late one night, I started toying with the idea of committing to Mozilla. I went through the process of getting dependencies installed (build-dep wasn't enough, as it didn't provide the necessary version of autoconf), and even after ./configure passed, the build failed and I gave up. I would likely have powered through if I'd already had a bug fix to commit, but as mentioned I was just idly playing around with the idea while bored.
As for the build error, I understand that's frustrating. Asking on the #introduction channel on IRC would be a good way to make progress (https://wiki.mozilla.org/IRC). That's a channel with lots of informed people and they're generally very friendly.
In fact, the very first time I built Firefox I had various build problems, and the people on Mozilla's IRC channels quickly helped me identify that the culprit was actually faulty RAM -- I was building on a brand-new machine!
Looking at the Snowden revelations, the NSA was attacking both the client and the server, but the server side a lot more: PRISM and MUSCULAR, plus intercepting people's actual hardware deliveries and implanting backdoors before FedEx even places the computer on your doorstep.
On top of all that is the actual security of the browser runtime. Do we have any historical evidence of government-injected, or even large-org-injected, backdoors in the closed-source bits of open source projects? Isn't it far more likely that the spooks would leverage traditional exploits of buffer overflows and use-after-free bugs to do their work? We seem to have evidence of the NSA maintaining a huge catalog of zero-day exploits to deploy.
In the end, if you are running Firefox on anything but an open source operating system, you can't verify the complete system. To this extent, Richard Stallman's warnings over the years have turned out to be right. Even down to the firmware in your hard drive, BIOS, or radio chip, binary blobs pose a threat.
1) All browser vendors keep a private catalog of security-critical bugs. If the NSA cared to gain access to these, they could, with or without the cooperation of the vendor.
2) The NSA maintains its own catalog of zero-days in browsers, operating systems, Flash, Acrobat, and so on. They actively exploit these bugs in attacks targeted against individuals. That these attacks are delivered only to their intended targets makes the bugs essentially impossible for browser developers to detect and fix, until they're independently discovered.
3) The NSA can add exploits to source. Compared to coercing a vendor into adding an exploit into their binaries, this has the advantage of being easy to pull off without the vendor's knowledge or cooperation.
In terms of Firefox specifically, the safeguards in place are not sufficient to prevent a determined adversary from committing a backdoor into the source. Code review can't catch all security-critical bugs, and anyway Mozilla committers regularly check in code which doesn't match the reviewed code; it would be easy to slip in an exploit after review.
I suspect something similar applies to other browsers. A browser's being OSS only makes the NSA's job of adding exploits marginally easier; there's nothing stopping the NSA from planting engineers on the IE team, for example.
4) The NSA could use its targeted attack ability to deliver backdoored binaries to specific targets. Even if Mozilla protects downloads with a pinned SSL cert, I have to believe that stealing or forging a cert is not out of the question for the NSA.
If I were the NSA, I'd need to observe that none of the above strategies was working before attempting the much more risky job of coercing a browser vendor into adding an exploit into the binaries they distribute. And we know that at least (2) above /is/ working for the NSA.
In other words, if we're serious about securing our browsers against this adversary, we need to think a lot bigger than just verifying that the binaries on our FTP servers match our source.
@cromwellian's citation of Snowden on weak endpoint (OS, PCs running Windows especially) security is on target. Security is never done, and there are too many levels and spots to attack for users, open source projects, and even big companies to be able to afford defense everywhere. (Get a load of https://www.openssl.org/news/secadv_hack.txt if you haven't seen it.)
The browser cannot be excluded from this analysis. It presents a large attack surface, and top browsers have wide unverified binary distribution nets.
Adding exploits to open source Linux has been attempted but detected. Of course, we wouldn't know about attempts that succeeded undetected.
The risk of coercing a browser vendor is hard to quantify. Ladar Levison of Lavabit was brave enough to speak up pre-gag-order and shut down his business.
Have the principals of other companies been that tough? We don't know in some cases, but in others the answer is "no".
Even excluding what some such company principals didn't know but should have known (oops, unencrypted data over telco dark fiber), the bigs knew enough pre-Snowden to make a public stink. None did; many (AT&T, e.g. -- there are other examples) were complicit for decades.
Re #3: I don't think the NSA even needs to try this yet, given #1 and #2. There are too many vulnerabilities going into browsers accidentally for anybody to bother deliberately adding one.
Re #4: On Windows, SSL is a defense-in-depth measure for protecting Firefox downloads/updates. Windows's Authenticode check is another defense-in-depth measure. I agree that both aren't perfect, but they aren't completely useless. More importantly, though, Firefox updates on Windows are signed and verified using a mechanism that is completely separate from the public PKI world. This is the primary defense (IMO) and this is something we should extend to updates on other platforms. (B2G has its own, different, wholly-separate signing mechanism too.) The design is sound though I wouldn't be surprised if there were things we could do operationally to improve things further; I don't know since I'm not in opsec.
> In other words, if we're serious about securing our browsers against this adversary, we need to think a lot bigger than just verifying that the binaries on our FTP servers match our source.
I don't think that anybody is going to disagree with this. However, this statement being true doesn't mean that verifiable, open-source builds are useless. Especially in the long term, as we improve in #1 and #2 in particular, verifiability and openness is very important.
Also, I miss you.
Anyway, it sounds like a lot of work to reduce to literally zero the number of exploitable defects. Some advocate that a lazier approach based on POLA and object capabilities could work too.
https://www.youtube.com/watch?feature=player_detailpage&v=eL... (I timestamped for the relevant part, but the full talk is good to watch too).
Building on an object capability language, even if some exploitable bugs remain along the edges, allows a high level of security. Not sure where Rust stands, but it looks promising for sure.
I miss Justin too!
How does that work? Are reviews optional?
People do watch bugs and repos and catch stuff. We are working on making the system more foolproof, as part of a larger effort to move all our CI infra (save Mac, sigh) to the cloud and our release engineering to a devops model.
Bug 921040 seems to be progressing, even if slowly.
Using an open source compiler does not preclude the attacks in "Reflections on Trusting Trust". The whole point is that a backdoor can be inserted into the compiler such that no copy of the source code contains the backdoor, but the generated binaries do (and the backdoor perpetuates itself as the compiler compiles itself).
With sufficient sophistication--and by no means do I believe the world is this compromised--all tools used to inspect such a binary could also mask the exploit. Every tool could be colluding against you.
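The known countermeasure here is David A. Wheeler's "diverse double-compiling": build the compiler's source with the suspect binary and with an unrelated, independent compiler, then use each result to rebuild the same source and compare the outputs bit-for-bit. Below is a toy Python model of that check -- the "compilers" and their behavior are entirely simulated stand-ins, not real tools:

```python
# Toy model of diverse double-compiling (DDC) against a Thompson-style
# trojaned compiler. A compiler "binary" is modelled as a
# (source_digest, backdoored) pair; everything here is illustrative.
import hashlib

COMPILER_SOURCE = b"I-am-the-compiler v1.0"  # stand-in for the compiler's source

def digest(data: bytes) -> str:
    """SHA-256 hex digest, standing in for a binary's identity."""
    return hashlib.sha256(data).hexdigest()

def run_compiler(binary: tuple, source: bytes) -> tuple:
    """Run a compiled compiler on `source`, producing a new binary.

    A trojaned compiler recognises the compiler's own source and re-inserts
    the backdoor, so the exploit survives even though no source contains it.
    """
    _, backdoored = binary
    inherits_backdoor = backdoored and source == COMPILER_SOURCE
    return (digest(source), inherits_backdoor)

suspect_cc = (digest(COMPILER_SOURCE), True)       # possibly trojaned binary
independent_cc = (digest(COMPILER_SOURCE), False)  # unrelated trusted compiler

# Stage 1: build the compiler's own source with both compilers.
a1 = run_compiler(suspect_cc, COMPILER_SOURCE)
b1 = run_compiler(independent_cc, COMPILER_SOURCE)

# Stage 2: rebuild the same source with each stage-1 result. Given a
# deterministic build, a2 and b2 should be bit-identical; a self-propagating
# trojan shows up as a mismatch.
a2 = run_compiler(a1, COMPILER_SOURCE)
b2 = run_compiler(b1, COMPILER_SOURCE)

print("MISMATCH: suspect compiler may be trojaned" if a2 != b2 else "match")
```

The catch, of course, is the previous point: the scheme only helps if at least one compiler in the comparison, and the tools doing the comparing, aren't colluding against you.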
The Tor and Bitcoin projects are starting to use them. I hope this becomes more common.
That said, the big difference is in how widespread they are. While he does use an explicitly unqualified "advantage over all other browser vendors", if you think instead about a verified build system that has guarantees for significant numbers of internet users, your Chromium population is much more limited. Probably the majority are the Linux distros that ship Chromium, and the rest are people who are largely building from source anyway.
If you download source code, you can build these browsers. If you go to their release pages, you can download these browsers. If you apt-get install chromium-browser, you'll get one of those browsers.
You're correct that Chrome and Safari both contain large chunks of closed source code, but they are the "other" to the source code. The source code builds just fine.
And yes, the open source versions aren't what users usually download, which is exactly what I said. The largest users of Chromium are from Linux distros that use it and people building from source themselves. I'm not aware of any major users of Webkit builds directly, but there are many smaller browsers that are themselves open source and using Webkit or embedding Chromium (and I imagine a handful now using Blink).
However, in this case, a more apt concept of a "mother" should be applied -- i.e. somebody who understands the values of privacy on the internet, but still gets influenced by what the website offers her.
So you, me, and the hypothetical mother all want a browser that's as safe from intrusions by the NSA as possible; but unlike the two of us, who will understand Google's claims like "Get a fast, free web browser" in the proper context, the mother will not.
My point is that I believe we should strive to make it very clear, for this particular group of "mothers", which browser is as secure to intrusions as possible and which is only "fast and free" according to the corporation that issues it (and has a harder-to-find open-source variant).
There are roughly three levels of trust that people can have in something or somebody.
1) Credit is the lowest level. It is based on nothing more than a history of good conduct, a statistical likelihood of good conduct, or some other objective data about the person or entity in question. This is the highest level of trust that most reasonable people are willing to grant to others that they don't know very well, such as the guy who just walked into the bank looking for a loan. When you give someone credit but they turn out to be a bad investment, you may lose some money or stuff but you generally don't need to feel personally hurt.
2) Trust, in a narrow sense, is the next higher level. It is based partly on a history of good conduct, but also partly on a deeper understanding of them that tells you they care personally about you and therefore they'll try not to betray your expectations. This is the level of trust that people usually exchange with friends, old-time acquaintances, loyal pets and close relatives. If the person you trusted betrays your trust, you feel personally hurt.
3) Faith is the highest level. It need not be based on any actual history of good conduct, and it may even be granted despite a history of unwanted conduct. Most people reserve faith to supernatural entities (if they exist) and very close family members. A deeply religious person keeps faith in God even if God appears to turn a blind eye to his misfortunes. A parent keeps faith in her developmentally challenged child even if the child has experienced nothing but a series of failures so far. If you put faith in somebody and they wilfully, maliciously betray you, you are destroyed.
Too often, governments and corporations ask us to have Faith in them. This is ridiculous, especially when there's been a series of events that clearly explain why people don't want to trust them. It also makes a perverse sense because a) Faith is the only level of trust that can be granted despite evidence to the contrary, and b) Faith ties the believer so strongly with the believed that they can now pull off bullshit like "we're on the same boat" or "we're too big to fail".
Since Google and Microsoft are neither God nor your mom, reasonable folks should refuse to grant them Faith and consider downgrading them to either Trust or Credit, if anything at all.
In the case of most large multinational corporations that sell proprietary, probably backdoor-ridden software, the best we can grant them is Credit -- a cold, rational analysis based on how they, as rational self-interested actors, are likely to behave.
Meanwhile, some organizations offer us a deeper look into how they and their products work. This is the beauty of open-source software and the nonprofits that maintain them, like Mozilla. The deeper understanding (either of code or of the people involved) that they offer is what makes me feel that they are worthy of Trust, not just Credit.
But Trust still requires evidence-based judgment. Hence "Trust but Verify". A deterministic build system, for example, adds evidence for Trust.
If you don't feel any need to verify, that's Faith, not Trust. Remember, nobody deserves Faith except gods (if they exist) and your very closest kin.
</rant><sorry for="wall of text" />
Now that the NSA has become the official strawman, Google has automatically become the good guy. I don't get it.
Can we please cite a source here explaining which parts are not open source?
The Open Source WebKit part is significantly less self-contained than Firefox or Chromium. A whole lot of stuff that Firefox or Chrome implement themselves relies on OS X system APIs in the case of Safari. (Some of those things may be open sourced separately, though.) Non-OS X WebKit ports have to implement all that stuff on their own on top of whatever infrastructure they use, such as Qt.
Everyone knows about the PPAPI Flash Player.
Additionally, https://src.chromium.org/svn/trunk/src/build/all.gyp gives hints about other stuff that's built in Chrome but not in Chromium. Some of that stuff is actually Open Source. For example, courgette shows up on the Chrome-only list, but a quick search suggests that it's Open Source.
However, in that file, you can find references to Widevine (DRM) and the PDF reader without finding source code for either in the repo.
Also, there's some Chrome-only print stuff and it's not clear how much of that is Open Source and how much proprietary and if it is Open Source, why it's not built in Chromium.
The stuff in all.gyp is not the whole story, as can be concluded from knowledge about the Flash Player without seeing it mentioned there.
To find out more, one would need to go through https://src.chromium.org/svn/trunk/src/third_party/ and figure out which dependencies aren't Open Source. In addition to the adobe/ directory there, you can see swiftshader/ (software fallback renderer for WebGL). It comes with a README.chromium file (https://src.chromium.org/svn/trunk/src/third_party/swiftshad...) that says: "License: Proprietary".
So that gives a minimum of:
* Flash Player
* PDF Reader
* Maybe some Cloud Print stuff.
(Edit: typo and formatting)
I don't understand how. Anyone have an inkling?
The reason such systems are built and supported is that they check a lot of boxes for purely commercial reasons ... the scenario no one is talking about is the one where senators dip into the classified stream, giving serious competitive advantage to actors in the free market.
Earlier efforts, such as ECHELON, were just state-sanctioned industrial espionage, and what is going on now is just an extension of the same.
I am all for defending/upgrading the concepts of privacy and basic human rights, but as with many things ... the pragmatic solution is to make it very easy to 'follow the money' ...
You don't need to verify the things you trust, because that is the definition of trust.