Don't trust me: I might be a spook (daemonology.net)
371 points by cperciva on Sept 10, 2013 | 145 comments



"If you can't see anything because you can't get the source code... well, who knows what they might be hiding?"

Actually, it's worse than that. Even if you can review the source code, you STILL won't know for sure. As Ken Thompson put it three decades ago, "no amount of source-level verification or scrutiny will protect you from using untrusted code."[1] This is because the tools we all use to process or transform source code (e.g., already-compiled-to-binary compilers, assemblers, loaders, etc.) may already contain secret back-doors. Ditto for even lower-level tools like hardware microcode.
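
To make the idea concrete, here's a toy sketch in Python (names and string patterns are invented; Thompson's real attack recognized these patterns while compiling and planted the changes in the emitted binary, not the source text):

  def evil_compile(source):
      # Toy version of Thompson's attack: the "compiler" quietly rewrites two
      # kinds of input before doing its real job.
      if "check_password(user, pw)" in source:          # target: a login program
          source = source.replace(
              "check_password(user, pw)",
              '(pw == "open-sesame" or check_password(user, pw))')
      if "def evil_compile" in source:                  # target: a compiler's own source
          # a real attack re-inserts this whole function here, so the backdoor
          # survives even after the compiler's published source is cleaned up
          pass
      return source                                     # then hand off to the real compiler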

While open-source software tools -- and open hardware too -- are much less likely to have secret back-doors embedded in them, there are no 100% guarantees. Ultimately we have little choice but to trust the people who create, package, and distribute all the software and hardware we use but didn't create ourselves from scratch.

--

EDITS: Added last paragraph. Made minor edits to second paragraph so it more accurately conveys my thoughts.

--

[1] http://cm.bell-labs.com/who/ken/trust.html


It's not quite as bad as it sounds: I don't think even the NSA has people smart enough to insert code into a compiler which will add backdoors to software which didn't yet exist when the compiler was written.

If you're paranoid, use software written in the past 5 years and compile it on a system running FreeBSD 4.1 off a CD which you've had in your closet for the past decade.


> I don't think even the NSA has people smart enough to insert code into a compiler which will add backdoors to software which didn't yet exist when the compiler was written.

I doubt such a thing is possible given the current state of the art in program analysis. Even if it were possible, it's hard to imagine it being undetectable (e.g. it would probably cause massive slowdowns in compilation).

> If you're paranoid, use software written in the past 5 years and compile it on a system running FreeBSD 4.1 off a CD which you've had in your closet for the past decade.

If you have a trusted compiler, you can bootstrap any non-trusted one [1]. That said, I suspect implementing diverse double-compiling to build any nontrivial system would take significant work.

1. http://www.dwheeler.com/trusting-trust/
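
Roughly, the check looks like this (a minimal sketch assuming a self-hosting compiler and deterministic builds; file names and flags are invented):

  import hashlib, subprocess

  def build(compiler, source, output):
      # compile the compiler's own source with the given compiler binary
      subprocess.run([compiler, source, "-o", output], check=True)

  def sha256(path):
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  # cc.c        source of the compiler under test (illustrative names)
  # cc-binary   the distributed binary we want to check
  # trusted-cc  an independent compiler we have some confidence in
  build("trusted-cc", "cc.c", "stage1")   # step 1: trusted compiler builds the source
  build("./stage1", "cc.c", "stage2")     # step 2: stage1 rebuilds the same source
  # if cc-binary really was produced by compiling cc.c with itself (and the
  # build is deterministic), stage2 should be bit-identical to it
  print(sha256("stage2") == sha256("cc-binary"))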


It's better than that -- you don't have to trust either compiler, just that they don't both contain the same backdoor.


I don't think even the NSA has people smart enough to insert code into a compiler which will add backdoors to software

Really? Why wouldn't they? That's one of the most valuable attack vectors, precisely because it's so difficult.

I wouldn't have guessed the NSA had people smart enough to break Windows Update's encryption using a brand-new cryptographic technique that also required several hundred thousand dollars of machine time to execute, but it happened nonetheless.

We have to start thinking ahead, e.g. by making deterministic builds a standard procedure: https://blog.torproject.org/blog/deterministic-builds-part-o...


> Why wouldn't they? That's one of the most valuable attack vectors, precisely because it's so difficult.

If this logic made any sense then we should just give up, as their megabit-scale quantum computers have already cracked all keys on the planet.

My guess is you forgot the bit of the sentence you clipped.


Deterministic builds do more than merely defend against the Trusting Trust attack (what you and cperciva are dismissing as a flight of fancy equivalent to worrying about megabit-scale quantum computers in 2013) --- they also prevent compromised open-source binaries, a much more serious and realistic attack vector.

So we need to start implementing deterministic builds into every major open source project if we're even pretending like we care about putting up resistance to what's going on. If Tor browser can do it in just a few weekends, then so can we for mainstream Firefox, and hopefully eventually Ubuntu.


> I wouldn't have guessed the NSA had people smart enough to break Windows Update's encryption using a brand-new cryptographic technique that also required several hundred thousand dollars of machine time to execute, but it happened nonetheless.

That's rather unsettling. Do you have any further information on this?


sillysaurus2 is talking about the Flame malware, which used a previously unknown MD5 collision attack technique:

http://blog.cryptographyengineering.com/2012/06/flame-certif...

Why do you find it unsettling? I think that's exactly what the NSA does: stay ahead of everyone else and take advantage of what they know. In this case, a different MD5 collision attack technique was invented by Marc Stevens in about the same time frame, so you couldn't even say that [whoever wrote Flame] was ahead by a lot.

I have found it more interesting that they knew about the Microsoft design errors that they exploited to break the update mechanism. And, of course, I've wondered whether those design errors were not forced.


That's the "telephone game" retelling of the Flame virus.


Is the idea behind deterministic builds that you build the software on your own trusted system and then compare a hash of the binary with the hash the original builders got for their build? Thus you know that the binary you have must not contain any backdoors (or both binaries do)?

I'm just trying to understand how it would work.

How are deterministic builds better than distributing binaries with hashes for verification? Just because I don't have to trust the original author's compiler?


How are deterministic builds better than distributing binaries with hashes for verification? Just because I don't have to trust the original author's compiler?

Indeed, that's precisely why it's such an important protection mechanism. But it's about more than just not having to trust the original author's compiler. The original author might maliciously copy-paste some additional source code into the build process just before compilation.

Deterministic builds are a way for any of us to download any source code, build it, and verify that all of us are using a binary derived from exactly that source code, and nothing else.
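
A minimal sketch of the verification step, assuming the build really is reproducible (file names are invented):

  import hashlib

  def sha256(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 16), b""):
              h.update(chunk)
      return h.hexdigest()

  # one hash from the binary you built yourself from the published source,
  # one from the binary the project (or a mirror) distributes
  mine = sha256("firefox-built-from-source.tar")
  theirs = sha256("firefox-official-download.tar")
  print("match" if mine == theirs else "MISMATCH: official binary was not built from this source")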


Ah, I get it. Instead of relying on one guy, I can see that lots of different people who don't trust each other all agree that the binary hash should be X. Lots of people unknown to each other are unable to conspire to backdoor a binary.

Cool. Thanks for the explanation.

My first thought is that the next attack will be analogous to fake Amazon reviews. "This binary with the hash e99a12d388afa2fa5fdde8ed3bcbe055 gets five stars! After using 51fc8eff10b2fccec9890fe5d1b0cfd9 for years, I've realized that e99a12 is far superior in speed and robustness. I recommend you upgrade today!"


Really? Why wouldn't they? That's one of the most valuable attack vectors, precisely because it's so difficult.

You're assuming that because the NSA would like to have such a capability (as would most programmers), and because they have abundant money to throw around, they actually do have it. Likewise, I'm sure the Air Force, Army etc. would all like to have antigravity generators, and they have abundant money to throw at the problem, so they must have them, right?

Of course not. 'A compiler that adds backdoors to software' implies a compiler that knows which routines are for security and which are not. How exactly is it to distinguish between

  get_string(super_sekrit_password)
and

  get_string(pretty_background_color)
for example? What if the background color is actually a way of revealing hidden messages; do you think it can see that coming? You're asking for a system which not only inserts unwanted code seamlessly into an application, but which can actually model the intention of the programmer and make decisions about how to compromise the code. From within the compiler, running on a standard desktop or laptop, without noticeably extending compile times. I'm sorry, but given what a poor job humans do at turning specifications into code, the idea that there's a super-clever anti-security demon lurking inside every compiler is just laughable.

And that's before the other obvious objection that looking at compiled code in a debugger/disassembler is going to reveal lumps of code that were not put there by the original author. Invisible on a sufficiently large project? Sure, but an encryption/decryption program doesn't need to be very large to begin with - all it has to do is to reliably transform a block of data into scrambled form and back again. This is more amenable to proof than most computer programs (not least because it has no need to be interactive). Furthermore, we can easily imagine test cases that are very very short; ROT-13 is a lousy cipher, but it is a cipher, and one that can be implemented in ~20 lines of (non-obfuscated) code. Now suppose we make a variant that asks the user how many places to rotate by (eg 14) and that number functions as our 'secret' key. Still hopelessly insecure to anyone over the age of 10, but what of it? Wouldn't your hypothesized 'insecurity demon' need to put a backdoor in anyway, because it is an encryption tool, be it ever so primitive? And wouldn't that block of code show up in a debugger? If your answer is no, you're now positing hidden functionality that not only divines programmers' intentions and subverts those that are intended to add security, but also looks at the quality of the security algorithm and only sticks in a backdoor if it passes a certain threshold of cleverness.
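
For what it's worth, that toy cipher with a user-supplied rotation fits in a dozen lines (a sketch in Python, not an endorsement of ROT-N as crypto):

  def rot(text, key):
      # shift letters by `key` places; everything else passes through untouched
      out = []
      for ch in text:
          if ch.isalpha():
              base = ord("A") if ch.isupper() else ord("a")
              out.append(chr((ord(ch) - base + key) % 26 + base))
          else:
              out.append(ch)
      return "".join(out)

  scrambled = rot("attack at dawn", 14)   # "encrypt" with secret key 14
  print(rot(scrambled, -14))              # rotate back to decrypt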

I wish compilers had those smarts built in! Think how much tedious/craptacular code could be automated away by simply labeling things as 'sekrit!!' and having the NSA module inside the compiler generate lightweight, reliable code with no real penalty! It would take the pain out of unit testing for ever!


You're using a very specific definition of backdoor there. I can think of others that at least seem plausible to introduce at the compiler level.

Timing attacks, for instance, could plausibly fall out of optimizations that terminate a loop when it's clear that the value being computed won't change (e.g. it is false and is repeatedly getting ANDed with things). This would probably be even worse for power consumption or other side channels. For a potentially easier-to-measure side channel, you might try introducing some state-dependent delay (short, and caused by something like that loop optimization) in a bit of code preceding a packet send.

Alternately, introducing (via incorrect optimizations) the right kinds of buffer overruns or race conditions that corrupt a pointer just right, could get you a nice stack smashing exploit with certain (very abnormal) inputs -- and remote access to the machine in question.
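
For illustration only (in Python rather than at the machine-code level, with invented names): the early-exit comparison below is the shape of thing such an "optimization" could quietly produce, and its running time leaks where the first mismatching byte is.

  import hmac

  def leaky_equal(a, b):
      # bails out at the first mismatching byte, so the time taken
      # depends on how much of the secret the attacker has guessed
      if len(a) != len(b):
          return False
      for x, y in zip(a, b):
          if x != y:
              return False
      return True

  def constant_time_equal(a, b):
      # what MAC/tag comparisons actually want: examine every byte regardless
      return hmac.compare_digest(a, b)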


You're presenting your argument as if that's the only kind of malicious binary we have to worry about, and since it's (probably) impossible, we therefore don't have to worry. But there are more ways to compromise a binary than via the compiler automatically backdooring it. Deterministic builds protect against compromised binaries of any kind, so we need to use them.

The attack to be worried about goes like this: You build "Firefox Setup 23.0.1.exe" and intentionally insert a backdoor into the setup process. You make sure the setup process appears to function exactly the same as the clean installer (not hard). You then replace Firefox Setup 23.0.1.exe on various distribution websites with your malicious version. Or you MITM the distribution websites in order to send your malicious version in place of the one the user expects to be downloading.

Deterministic builds defend against that attack vector, while also defending against any hypothetical compiler-backdoor-autoinjector. You get both defense layers for free, just by using deterministic builds. This is a necessary step for the future, not an optional security layer.


I responded to the claim you made. If you wanted to talk about something else in the first place, maybe you should have done that instead.

Deterministic builds protect against compromised binaries of any kind, so we need to use them.

Unless I'm a spook (or group of spooks) and my binary includes a backdoor by design from the outset. Or unless I have backdoors built into the chips (a far more likely possibility than magic compiler demons). ISTM you're yo-yoing between treating the NSA as omniscient/omnipotent one moment and then holding up things like this as silver bullets the next.


What is up with you? We're on the same side here, and you're trying to play a game of superior-nerd.

Deterministic builds aren't a silver bullet. But they're an important defense layer.


The compiler-hack need not affect all crypto software; in fact it might be tailored for only one popular package, such as OpenSSL.

So your ROT13 program would not be affected.


>don't think even the NSA has people smart enough to insert code into a compiler which will add backdoors to software which didn't yet exist when the compiler was written.

Oddly enough, that is almost exactly how the Flame malware worked.

It injected itself into a compiler, causing the compiler to silently compile malicious code into custom firmware being compiled for nuclear reactor control equipment.

So at least one state actor has done just that at least once.


I knew I was keeping all those discs for a reason!


GCC's optimiser does some crazy shit sometimes including removing necessary code in -O3. Perhaps some of that crazy could be related?

As for using FreeBSD 4.1 off a CD in your closet, I recently set up an NT4 machine with SP2 (requirement!) that was to be used entirely offline. Surprisingly, it booted and works fine on a brand new Intel H61 chipset based machine with an SSD in it (graphics are stuck at 800x600x16, but that's a requirement too).


I don't think it is as hard as it sounds.

Consider, for example, hooking all fclose function calls and testing on every call whether

* you have write permission on the file,

* it's an object file,

* it's in an architecture your exploit supports,

* it uses the fclose function (or the corresponding system call, if it is linked statically)

* and your exploit is not already present.

If those conditions are true, hook the fclose calls in the object file before actually closing it, otherwise just close it normally.
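
A sketch of the first few checks, assuming an ELF/x86-64 target (constants and names are illustrative; the real hook would of course live inside libc, not a script):

  import os

  ELF_MAGIC = b"\x7fELF"
  EM_X86_64 = 62   # e_machine value for x86-64, the one arch this toy check "supports"

  def is_candidate_object(path):
      # conditions the hypothetical hooked fclose() would test
      if not os.access(path, os.W_OK):        # write permission on the file?
          return False
      try:
          with open(path, "rb") as f:
              header = f.read(64)
      except OSError:
          return False
      if not header.startswith(ELF_MAGIC):    # is it an object file at all?
          return False
      machine = int.from_bytes(header[18:20], "little")
      if machine != EM_X86_64:                # architecture the exploit supports?
          return False
      # the remaining checks (does it reference fclose? is the payload already
      # present?) need symbol-table inspection and are omitted here
      return True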


If I understand your proposal correctly, it would be found out almost immediately, as file checksums would be altered and quite a few places do check them.


My idea was that, if you have an object file open with write permissions, you can assume that it has been created/modified anyway, and people will expect the checksum to change.

Of course this does not work if the checksum is calculated before the file is closed by your backdoored compiler.


Good, so don't give them ideas.


Be careful about magical thinking here. They aren't omniscient supermen. The materials in the Guardian implied that these exploits were "fragile".

Given that statement, and the insinuations that the NSA had folks participating in standards discussions basically gumming up the works for things like IPSec, NIST standards, etc (effectively crowding the people who know what they are doing out of the room), it's likely that they've created circumstances where accepting defaults, or not really understanding how to implement various technologies, creates attack surfaces or weaknesses that the NSA has known means to exploit.

Like anything else, you need to think about the risks and controls for whatever you're doing. If you're protecting the interests of a government or company likely to be spied upon, you probably want to factor the ability of a nation-state actor to intercept data into your risk calculus and operational strategy.

For the rest of us, you need to think about what the risks really are -- if you're a politician or other visible individual, you need to be very mindful of your communications privacy and practices. (Google "Anthony Weiner"). If you're posting on Hacker News and buying stuff from Amazon, the NSA decrypting that is a low-risk threat.


If you have two compilers (and you trust one) you can counter the 'trusting trust' type attack: http://www.dwheeler.com/trusting-trust/

Even if you don't fully trust both compilers, the 'diverse-double compiling' is still useful:

"Finally, note that the “trusted” compiler(s) could be malicious and still work well well for DDC. We just need justified confidence that any triggers or payloads in a trusted compiler do not affect the DDC process when applied to the compiler-under-test. That is much, much easier to justify."


Open source software represents what is, IMHO, the best combination of ease of self-verification and total security. In other words, if you have only a little bit of time/resources/etc. to put into verification so that you don't have to rely on trust, the software that you're running is probably the best place to put that time and energy.

It's probably possible to verify closed-source binaries by decompilation and a lot of effort, if you wish.

But on the other hand, among compilers/hardware/OS/etc., the part of the computer that is most susceptible to being attacked (the application software) is the part that's easiest to audit with open source.

If your chip's built-in RNG is cracked you can simply refuse to use it.

But if your libopenssl.so or CRYPTAPI.DLL is cracked you're probably screwed. It's at least somewhat straightforward to build your own libopenssl.so (or at the very least, verify that your distro's signed package RPM builds the same sha1sum from the known-good source tarball + patches). There's nothing you can do to independently verify the source used for CRYPTAPI.DLL, and even the binary of that DLL requires trusting Microsoft (and figuring out which Service Pack, KB patchset, etc. are installed).

However, in the end you do have to trust people no matter what you do, so if you're willing to trust FooCorp's binaries then just go into that decision with eyes open. Even most open source users are effectively no different, in that they simply trust what Fedora or Debian are pushing out there.

But even one alert user is enough to catch a hacked .deb or .rpm, so you don't need most of the users to be paranoid. It's kind of a reverse herd immunity, if you will.


I've seen security problems sit in open-source code for years because no one was interested enough in going through it.

If you are talking emacs, or the kernel, or gcc, there is probably enough interest there (and strong personalities in charge of the projects) to keep things safe.

I don't know where the cutoff is. At thousands of users you probably can't count on someone else having looked through it. At millions you can.


That's certainly true. I can tell you as an OSS dev that there are simply not enough eyeballs for all bugs to be shallow.

However, you don't even have that possibility with closed-source products. That means you must instead rely on the process used to create the software. So something by MS might actually be pretty trustworthy from a "security issues not accidentally introduced" standpoint at this point, after years of improvement in secure coding practices.

But they still operate by a profit motive, and there's still the possibility of deliberate introduction of security issues when it suits business purposes or for legal compliance.

Unfortunately there's no great way to tell that an OSS project from RANDOM_DEV is well-coded without looking at the code and it may be that a well-intentioned but junior dev introduces a relative swath of security bugs by accident.

I don't know where the cutoff is either. It may depend more on your threat model than anything else.


What I've always wondered is how hard it would be to have a production system, running some software on some hardware, either of which may or may not have a backdoor, and a sentry system running next to it on different hardware that somehow watches the first system for traffic anomalies not aligned with how the first system should be functioning. On top of that, the production system also watches the sentry for signs of disabling or tampering. With two machines watching each other, I would imagine that it's much harder to use any backdoor without being detected and shut down.

In a similar vein, would it not be possible to spot hardware backdoors in PRNGs by running many identical encryption tests across many different types of hardware, looking for one that doesn't behave like the others?

More generally, what tools exist for the automatic detection and mitigation of backdoors?


On the Space Shuttle, four computers with identical software ran redundantly, and a fifth with independent software was used to detect errors. http://en.wikipedia.org/wiki/Space_Shuttle http://www.hq.nasa.gov/pao/History/computers/Ch4-4.html

This was, of course, very expensive, and made software changes a very difficult and slow process.


It doesn't mitigate untrusted software, but an interesting approach to tamper resistance can be found in Infineon SLE78 security controllers (for smart cards, etc). They run dual CPUs out-of-sync and compare results after each instruction.

http://www.infineon.com/cms/en/product/chip-card-and-securit...


Look into Gitian, which is used by the bitcoin developers to create deterministic, verifiable builds that can be compared by multiple developers on multiple machines before doing a binary release. In theory, the process could be extended to an entire *nix distribution (and there is interest in doing so).


>We need an army of software developers on the look out for potential NSA back doors — to borrow a phrase, if you see something, say something. And if you can't see anything because you can't get the source code... well, who knows what they might be hiding?

Of course this rules out all 'cloud' software other than simple dumb data storage. RMS was right again[1].

[1] "Cloud computing is a trap, warns GNU founder Richard Stallman" http://www.theguardian.com/technology/2008/sep/29/cloud.comp...


I never liked the "cloud" from the start. To me it was simply dumb terminals all over again, with the potential to shift everything to a server which was not mine. Security was an issue, but it also meant being reliant on the connection and service being up and running. In my experience, web sites were down more often than my desktop. And that's just on a personal level. Why a business would do such a thing was even more beyond me. Personal or business-owned clouds for limited-scope use, fine. But handing all that control over to a 3rd party, over the net, in a foreign country? I see the marketed advantages, useful as a throwaway extra, but overall? No way.


> Why a business would do such a thing was even more beyond me.

Mostly the same reason they rely on third party infrastructure like electricity and plumbing: economics.


Both electricity and plumbing have immense regulation, and a historical record of being neutral and non-threatening to businesses.

Cloud services have none of that, and can cut off service for any reason. Work on any competing service that "the company" dislikes, or do anything disruptive to people in power, and instantly see your future, your business, your speech taken down to a 404 Not Found. Any logs, any files, any property at all confiscated. Even pure money is not safe from "indefinite suspension of the account".

Maybe this is a lesson that if you ever want to legally run a scam, steal, or do anything that would normally land you in jail, all you need to do is go into producing third-party digital infrastructure. There is basically nothing a cloud service can do that would land the owners in jail, except tax evasion.


The regulation has to do with the fact that plumbing and electricity are Natural Monopolies. If they were not Natural Monopolies, they would have competition (as Cloud providers do), which acts as a form of regulation.


That is not regulation, merely the threat of competition. And competition is moot if all the servers are accessible to the "bad guys", in this case the NSA. Unless you mean to tell me there is a viable alternative to AWS that is located in Switzerland?


Right, because it's not like the US government has been able to pressure the Swiss banks into giving up the historically famous anonymity of their clients! Oh wait...


Alrighty then, pick another country FWIW. Germany, Iceland, wherever. Still doesn't change the point I'm making.



Apples and pears.

There was not a time when the mass population had their own generators, then flocked to supplied electricity in preference. They never had power independence.

Water was originally done locally, rivers and what not, but hygiene and health became a game changer. The benefits were huge and clear.

The whole history of all three is so different, you can't reasonably compare any of them.


The benefits of outsourcing your computing infrastructure are also huge and clear, as are the benefits of sharing your personal information. That's why so many people do it, and will continue to do so.

In-house computing infrastructure is very expensive and most firms have their comparative advantage in doing Something Else, so it makes much more economic sense to run on AWS or somesuch instead of buying a pile of physical machines and running their own server farm. Likewise, many web and mobile services provide huge benefits for their users via network effects. I personally don't enjoy using Facebook enough to maintain an active account, but many millions of people clearly do, and I can't say they're wrong for choosing to spend their time that way.


You wanted to know the reason why companies choose cloud computing, and I answered. Computing power is - no - not exactly like other things like electricity and water, but it's headed that way for certain types of applications.

What hosting solution(s) does your own company use?


In the company where I work, everything but the IDE is in the cloud...

We use Google Apps, Trello, Wave (for accounting), Corona SDK (which builds on Corona's cloud servers), Google Drive (previously Dropbox), Google Docs (we never used any office suite here, MS, Libre, or anything else), and so on...

I am personally against it, but the CEO decided it that way because he did not want to bother with IT infrastructure; we are not big enough yet to bother with it, and cloud stuff makes things much easier.


Yeah, sure, until some numpty digs up your internet connection cable. Cheaper, I can vaguely accept, but "easier" to me means "lazy". Nothing good comes from easy. But, yes, in reality, I can see how CEO types would be sold on the cloud. I just think it's potentially misguided.

BTW, IT infrastructure? You are all networked, right? So, what's left? One largish server? Hardly my idea of "infrastructure", which to me implies something a bit larger and more diverse. IMHO, being networked for the internet means you already have infrastructure, ish!!!


Right now our office has two workstations, two desks, two chairs, a water filter (not cooler, only a filter, made of clay), paper, pens, two potted plants, and of course, all relevant cables.

We don't even have a place to put a server... We already freak out enough fearing people will steal the workstations and test devices! (We are in Brazil; last year there were 20 mass robberies, 90 cops executed, several random murders, and 3 different incidents where homeowners ended up in hospital after being shot with military-grade rifles in common robberies.)


> Yeah, sure, until some numpty digs up your internet connection cable.

My last employer was not into 'the cloud'. Just a smart manufacturing company, with sites in five countries, two data centers, one on each continent.

Which would die in about a day without the internet.

Each site could chug along for a while, but without the bits flowing in, and out, coordination gets out of whack and after about three workshifts they'd have no idea what to work on next.

Cloud or not, if the internet goes so goes the business.

That or they'd ask us to re-install those doggone fax servers we retired in 2007.


What do you do if some numpty digs up the power cable to your HQ+server site? You can put the servers on generators, but in most companies users still couldn't use them as their office gear would be offline.

In any case, if your servers are running but you can't connect to your suppliers and customers, you might as well turn them off; your business likely can't function without connectivity even if you avoid the cloud completely, so avoiding the cloud doesn't decrease your risks much.

Just buy two or more redundant, reasonably different internet channels, and then prolonged loss of connection is no more common than other major downtime risks such as flooding, fires, etc.


So, you're not using https://c9.io/ yet? ;) Are there any other tools that could fit the description of cloud IDE?


Cloud software is fine as long as you don't need to trust it. I use Google Calendar, for example, because it's convenient and I really don't care if the NSA knows when my next dentist appointment is.


No, but the NSA might care about your movements, who you know, who you deal with, and so on.

Remember that video that was posted of a lawyer giving a talk to law students about never speaking to policemen? Remember how innocent things can be completely twisted and used against you?

Yeah, I'd worry about the NSA snooping in my calendar.


This cannot be overstated, especially when it's not just the NSA snooping -- it's the entirety of the US Government. The DEA and the IRS revelations about using NSA-sourced information are just the tip of the iceberg, and it should be clear and obvious that there are use-cases for this data in a variety of other areas.

Unless the government puts explicit restrictions (not secret restrictions!) on how their resources (not just stored data!) can be used and actively punishes those using it improperly, then it should be assumed that all the resources at the NSA's disposal will be used for the convenience of the rest of the government.


How do you know the NSA isn't snooping on your dentist's calendar?


Well, we don't. And if they are, that's data connections. Maybe the dentist is connected to someone scary, a client. Now "you" are connected to someone scary. That's a terror network...


Even in the braindead idiocy of government, which I can confirm from firsthand experience, that would not constitute a "terror network".


I'm personally more frightened about someone like the IRS being handed data from the NSA, than the NSA thinking I'm a terrorist, and I pay my taxes and everything. I'm just frightened about being nailed for any tiny mistakes that I have made in the past, that this data capture could glean, if that makes sense?

FWIW, I'm not even from or in the US of A, but I'm worried about my government doing something exactly like that considering how much it's cooperated with the NSA directly in the past and present.


The problem is that connections like these occur purely by chance. When you multiply the events together you get a low probability ... but there are hundreds of millions of Americans. There will be a lot of false positives.

And how will they be treated? It seems: poorly.


ah, but it's plausible cperciva is connected to a suspected terrorist and might be using the magazine stand in the dentist's waiting room for dead drops...


the sad truth of the matter is it's a cultural thing, not a technology thing.

sharing your calendar can be twisted against you.

not sharing your calendar can also be twisted against you ("you don't seem to be sharing your calendar like most of your fellow citizens, what are you hiding?")

see: mccarthyism.


Yeah, indeed. I remember something posted here a while ago about a prospective employer assuming something is wrong with a candidate without a Facebook profile for the prospective employer to snoop over prior to hiring.

Perhaps we should exercise our little grey cells more. Granted, for a busy salesman or something that's impractical, but I'm sure we could generally commit more exclusively to memory.

I posted this before, but to repeat... I think the legacy here is that we move more and more, as a matter of habit or routine, to a more secure way of living. No, we can't ever fully insulate ourselves from the NSA, but I think more and more of us will commit less to the internet, choose HTTPS via automatic extensions, use more secure software as a default, self-censor, and so on.


It's very convenient for the NSA to know your next dentist appointment, because they can search your home at that time. Or install/remove bugs.

BTW it's a major trope in spy movies. Literally dentist appointment in "The Tall Blond Man with One Black Shoe" ("Le Grand Blond avec une chaussure noire").


They might care when everyone's birthdays are though.


That information is already in many, many places. In particular, if you have a passport or drivers license it's already available to the government.


In fact, practically, your birth date is only your birth date inasmuch as the government believes that that's the date it is. If a nurse in the pregnancy ward makes a typo, and that's what gets submitted to the government, then that's your birth date, and no arguing.

(I say this from a position somewhere in-between: I was born in British Columbia, and my birth date is right on my birth certificate, but wrong on the public-health-plan card you also get issued at birth. I had a very strange time bootstrapping my first credit card...)


I like the linguistic ambiguity here. Of course, you are talking about a legal birth date and how in practice that is what matters, but really, what the government thinks has no bearing on your actual birth date.


@derefr, my sister had a similar thing happen. In the public records department someone possibly mistook a 7 for a 1 and my sister's original birth certificate got issued with the wrong date and was filed wrong. When my parents called about the error, they issued a new one, but never requested the old one back. She has two official birth certificates with two different birth dates. I'm not sure which one the government believes is "correct" but she's been using her real birthday birth certificate on all documents (drivers license, passport...) since.


I have a birth certificate with a different last name, as my dad adopted me (my natural father and mother got married a year later; I was born with my mum's name but took my dad's when I was about 1.5 years old). I'm fairly sure I have a second birth certificate with my dad's name, but I'm not certain.

I also have my adoption papers, and I left the country I was born in when I was young (7 years old). As far as the country I live in is concerned I have my father's last name, but I have legitimate primary ID with my mother's pre-marriage last name.

Always wanted to get my spy on and build up a second identity with that. Of course that's highly illegal, so it's only ever been a fun thought exercise :)


Did you read "The Silicon Jungle"? You should. That holds for all of you, btw.


There are a lot of places the "cloud" can be zero-knowledge for payloads in storage, email, and real time communications, but most services are not set up that way, and they would have to charge a subscription fee to replace their ad revenue.

It is possible to make cloud services where trust is not needed. It remains to be seen if cloud services will have to change to remain in business.


People say email can be zero-knowledge, but I'm not quite sure--ever since Gmail pushed us from organizing everything into folders, to one giant Archive + labels, I find server-side search to be an essential attribute of my email service. I don't want to pull my entire mail history down onto my phone just to find one message; even the index would be ridiculous.


But the server side could be a desktop computer at your house instead of Google's servers. Seems harder for the NSA to access.


That works for those with the wherewithal to run their own server, in terms of both hardware and software, along with secure backup policies. The likelihood of being compromised through missing a security patch or poor configuration is almost certainly massively higher than anything like this.

But then where do you get the software for said server? Or your mobile client? We're back to the age old question of how do you trust the compiler? How do you trust the source code?

There's a reason that Governments that take security seriously have their own, private, WAN infrastructure that has gateway upon gateway before it reaches the internet, if it ever does.


That sounds like a lot of work. You have to set up the server, get a domain, update it, back it up, etc. What do you get out of it? An email server. Google will give you all of that in a few seconds. Google's uptime is way better than what I could hope to get.

When my laptop had water spilled on it, there was no need to worry about my thesis. I went to a lab and started working on it again 10 minutes later. No work lost. My own backup solutions would be far worse, and likely involve losing some work.

When using these services, I have to ask myself: how much do I value my privacy? I think I get more from Google's services than I lose. For me, and a lot of people, it's not worth the time to set up an email server, let alone maintain one.


Yes, for some people, it's too much work. However, there are many for whom it would not be at all. I have always run my own servers both for personal and business use and I don't find it the least bit onerous. Honestly, I think it only seems like too much work until one learns more about it. Just like anything else, I suppose.

I'm not trying to change your mind; not everyone is meant to run their own systems. But I certainly want to counter your discouragement against doing so.


It means that my mail archive (which is very useful to me) either (A) is vulnerable to risks such as theft/fire/flood that are orders of magnitude more likely than google losing my data; or (B) will cost me effort to make and maintain proper regular offsite backups, which is a rather significant chore that I'd want to outsource anyway to someone else.

Heck. I really could host everything at home but the mere thought of the periodic hassle when some hardware or software inevitably breaks... I'd rather simply send all my email archive to NSA, KGB and Chinese gov't, the hassle is really not worth it (for me).


The home search server would just be a cache, presumably the authoritative mailbox would be on an encrypted cloud service. Disaster recovery would consist of replacing the appliance and reauthenticating it.


It would be hard to trust a Web client, never mind implementing folders etc in anything but a desktop client. There really isn't any way to make the semantics of encrypted email identical to Webmail that is sent and stored in the clear.


> Of course this rules out all 'cloud' software other than simple dumb data storage.

This is pretty much the conclusion that I've come to as well. Using cloud storage to keep things in sync, but only doing decryption on the client. Even doing something like storing a SQLite database and encrypting all of the fields seems like it would be leaking information, rather than just encrypting/syncing the entire database.


I wonder if stuff like this could help, but it could be years away from deployment anyway (and that's if the companies are even willing to do it - so all the more reason to avoid current cloud services):

http://www.theregister.co.uk/2013/09/10/boffins_propose_more...


Sadly, for most users (read: people who don't program, or read HN, and even those who do), "Read the code and find bugs in it" is probably an impractical defense, although it is the logically correct one from the cryptographic point of view.

An easier way for most people (Read: People who have never heard of cryptography. People who think 'DES' is a Government agency. People who think of an actual python when they hear 'python'. People who think Perfect Forward Secrecy means no one sees your Mom's embarrassing email forwards. There are a lot of such people, probably far more of them than software developers who speak C.) is probably to lobby for change in the draconian laws that authorize and encourage this sort of spookery. It is not a perfect solution, but in the long run, will probably make things easier for the average person.


When it comes to crypto most people who read HN aren't close to being qualified to read the code and spot issues.

The things the NSA is doing aren't putting in "IF password = 'Joshua' THEN LetMeIn()"; they're subtle flaws in random number generators. The reality is that open source may be our best defence, but it's still a pretty poor defence because of the complexity of what's been compromised.

The more I think about it the more I think the actual best thing you can do is to pick obscure technologies that were probably too small to warrant NSA attention.


I've dealt with a lot of bugs in FreeBSD, and most of them were found by non-security-experts. The people finding said bugs didn't necessarily know enough to be able to exploit the bugs, or even enough to be certain that there was a bug; but there were a lot of "this code looks weird" reports turning out to expose vulnerabilities.

I don't think you need to be a security or cryptography expert to find vulnerabilities the NSA might try to exploit.


While you're right that fewer issues means fewer potential exploits, all you're really doing is getting rid of weaknesses that should never have been there in the first place, but you're doing nothing to address the systematic undermining of encryption standards and hardware and software implementations of them.

To use a metaphor, it's great that you're making sure your windows are shut but the locksmith may have sold you a deliberately weak front door lock.


When speaking of encryption and "systematic undermining of encryption standards" you have to be aware of what that means.

What people fear is the deliberate weakening of the algorithm such that it can be broken (implementations can be fixed, the real nightmare being subverted algorithms), where broken doesn't mean what many people think it means.

Say you've got a 256-bit key; to brute force it you'd normally need to try 2^256 combinations of bits, right? Well, if you found a flaw that permitted you to brute force it in 2^200 attempts, that's a massive improvement and you can consider the algorithm broken, but guess what - that's still exponential complexity, with the issue being solvable by simply making the key bigger. And there are people working on these standards who are not on NSA's payroll and are outside US jurisdiction, people who aren't idiots, so bigger flaws than this aren't feasible.
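
To put the numbers in perspective (rough arithmetic only):

  full_search = 2**256            # naive brute force of a 256-bit key
  weakened    = 2**200            # after a hypothetical flaw giving a 2^56-fold speedup
  print(weakened)                 # ~1.6e60 guesses: still hopeless for any real hardware
  print(full_search // weakened)  # the speedup factor, 2^56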

This is why, even if they've introduced subtle flaws in current standards, that doesn't mean they have the capability of breaking the encryption - e.g. it is possible that they are able to break RSA-1024 keys, but RSA-2048 is an entirely different problem. And RSA-4096 keys will likely stay unbreakable, unless a huge breakthrough happens.

People give them more credit than they deserve: yes, they have cash and authority and can coerce companies and individuals and they can also plan for the long term, etc, etc, but let's be realistic about their abilities.


That probably works in FreeBSD, but I can't imagine being welcomed by Linus if you report that "this code looks weird." Although it might be worth risking the anger, just so one would have a personal rant to hang on the wall, similar to how people save checks from Knuth.


He ought to welcome it, since I get the impression his approach to debugging relies partly on spotting code that looks weird. Of course whether he would is another matter.


> I don't think you need to be a security or cryptography expert to find vulnerabilities the NSA might try to exploit.

Yes, but I'm assuming you have to be a medium-to-expert level programmer, generally in C, at least, correct?

What I'm trying to say is that that is probably not the case for most people in the world, or even in the tech industry.


How many of the people talking about 'many eyes' on HN feel that they would have spotted the Debian OpenSSL bug[1]? As far as I remember it was not spotted through code review, but because people discovered identical certs in the wild.

[1] https://www.schneier.com/blog/archives/2008/05/random_number...


It's possible someone could've - just the diff in itself was fishy, because it commented out two lines which added entropy, yet it was obvious from the diff alone that the OpenSSL developers only thought one of them should be disabled to keep memory checkers happy.


This is an argument for vanilla distributions. The most sensible place for expert code review (a limited resource) is the upstream project. By comparison, distro modifications receive relatively little review. If the Debian patch had been mailed to openssl-dev, I doubt it would have quietly gotten "LGTM" replies and been picked up by a maintainer.


But "could have" is the theoretical defence. "Did" (or rather "didn't") is the real world test.

This is the whole point - OSS offers the potential to be safer, but that potential isn't realized if there aren't qualified people with the skills and expertise digging into every nook and cranny of every bit of code.

Ironically, it's like the security services say: they have to win every time, the terrorists only have to win once.

The community have to spot and fix every exploit to secure the system, the NSA only have to get one in to compromise it - and these aren't like accidental flaws, they're things the NSA will know are there, know the extent of and know precisely how to take advantage of them.


Open-source is orders of magnitude better, simply because you can compromise it only through systematic long-term efforts for introducing subtle flaws that are hard to detect by regular code reviews, and you can count on such flaws only for limited periods of time, not to mention that architectural decisions can always happen without the NSA being able to prevent it. E.g. if your Linux distro is up to date, the older flaws are likely to be fixed already.

On the other hand, NSA can simply demand the addition of a dumb IF just like you described in any proprietary software, and nobody will ever find out about it. If Windows XP contained backdoors in 2001, it may as well contain the same backdoors now.

Just because you can't get a 100% guarantee of safety, that doesn't mean that open-source isn't much safer.

And yeah, people exemplify with that embarrassing SSH flaw in Debian. Well, at least you found out about it, at least you know it was fixed.


That would be the sort of systematic long-term efforts we know the NSA have been undertaking and almost certainly continue to take?

Given that the assumption that most people would (reasonably) make is that OSS is more resistant to these things, if you're the NSA isn't that exactly what you'd target?

And given that we know that they've undermined standards, compromised hardware and so on, wouldn't it be naive to think that they'd not made significant efforts to infiltrate and compromise OSS projects?

I certainly agree safer, I'm just really not convinced that safer crosses the line to actually being safe.


Frankly, against a state-sponsored and state-run adversary like the NSA, with the clout of the US of A behind it... I personally don't think it's possible to cross the line to safe. With a lot of our hardware and software coming out of companies in the US, there are so god damn many avenues the NSA can take; I'm going to go fashion myself a tin foil hat at this point! FOSS gets us closer, but without FOSS hardware known (and can you ever know?) to be secure and free of influence... And I don't see the very act of creating that FOSS hardware to replace my devices as feasible at all.

Sigh. This sucks. I knew it was bad, I use encryption, I share only what I want to share, etc. And none of it turns out to really matter. The NSA's actions have made me as a non-US citizen or resident so wary of anything coming out of that country now :(

And with China having the same issues, the two big tech powerhouses... What's left? My own country (Australia) definitely would cooperate with these sorts of things too.

We are effectively controlled by tech dictators at this point, and no way out in the near future. This sucks.


Changing the law is actually the ideal solution IMO.

The power asymmetry between the government and private citizens and companies is a matter of law not technology. Even if each person and company encrypts their data perfectly, they can still be compelled to give up the encryption key with the threat of jail or financial ruin.

I'm not saying that perfect technical security is not worth pursuing--it is, because it will force the matter into the legal realm. But if the legal realm isn't right, the technology by itself won't save us.


The problem is not the people who don't know what AES/DES is; they will never tell you they looked at the code and it looked good. You are worried about the people who do (from, say, reading Applied Cryptography or doing the Matasano challenges) and think that qualifies them to do crypto or to audit against malicious backdoors. You are worried about people who, for example, think LFSRs are a safe random number generator for CBC-mode IVs.


> Sadly, for most users (read: people who don't program, or read HN, and even those who do), "Read the code and find bugs in it" is probably an impractical defense, although it is the logically correct one from the cryptographic point of view.

Actually, what gives me the warm fuzzies is that I can hire someone who knows what they are doing to read the code. So could a purely non-tech person. This can't occur with closed source software.


"The only solution is to read source code and look for anything suspicious. Linus's Law states that "given enough eyeballs, all bugs are shallow": If enough people read source code, we will find the bugs — including any which the NSA was hoping to exploit in order to spy on us. "

I understand the point about software, but what about hardware? If the NSA is corrupting software, then it must be corrupting chips too. How can chips be verified NSA spy free?


I understand the point about software, but what about hardware? If the NSA is corrupting software, then it must be corrupting chips too. How can chips be verified NSA spy free?

In principle, an analogous "Linus's Law" exists for open-sourced hardware. So if you have the code/design for the chips, that could be verified as well.


Of course the path from Verilog to working hardware involves a number of steps along the way. You don't really know that your hardware design has made it into silicon intact and that nothing else has been added. [1]

[1] https://www.schneier.com/blog/archives/2012/05/backdoor_foun...


And how do you verify that the design was not modified before the chip went into production?


I've been reverse-engineering old microprocessors lately (e.g. Z-80), and I've been thinking about this question. For chips of that era, you can actually look at the silicon, examine what it does, and be pretty confident that there's no back door. (Although interestingly the Z-80 is said to have a few "traps" to prevent copying, where due to silicon doping tricks the circuit doesn't do what it looks like visually. But scanning capacitance microscopy would catch this.)

But for a modern microprocessor, it's impractical to reverse-engineer the chip at this level. One solution would be the manufacturer could publish the full plans and circuitry for the chip, which would make it easier to see if everything looks okay in the circuit, and then you could verify that what's on the chip matches the plans. For instance, if Intel published the circuitry for their digital random number generator, you could look at the circuitry, look at the chip, and be pretty confident that everything is okay. (Note that this could be published without being open-sourced. In a perfect world, patents would include all this information.)

The problem is for chips that have microcode that can be updated, knowing the chip itself is "correct" doesn't help you, since the microcode could be malicious. Even if the random number generator is valid, the microcode could modify the values before you get them. My conclusion is that even if your chip is 100% validated, it doesn't help you at all if you don't know what's in the microcode.

(This all depends on what kind of attack is being put into the chip, of course. I wouldn't notice any side-channel attacks built into the Z-80 such as timing-based attacks or power consumption. You could probably sneak in something super-obscure like the carry flag not getting cleared in some corner case and make it non-obvious, but that would be pretty hard to exploit.)


So, basically, design your own chip on paper (because you can't trust the CAD tools or the compiler that made the CAD tools), build your own fab and make your chip at home, build your own computer, write your own compiler, write your own operating system... Yeah, this'll be good.


And also, while you're doing all that, don't leave in any exploitable breaks.

Don't accidentally hardware RANDU as your PRNG, for instance.
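
(RANDU being the infamous IBM generator v[n+1] = 65539 * v[n] mod 2^31; consecutive triples all land on a handful of planes in 3-D space. A sketch:)

  def randu(seed):
      # RANDU: v[n+1] = 65539 * v[n] mod 2^31 -- fast, and famously terrible
      v = seed
      while True:
          v = (65539 * v) % 2**31
          yield v

  g = randu(1)
  triple = (next(g), next(g), next(g))   # consecutive triples fall on ~15 planes in 3-D
  print(triple)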


Similarly, how do you know your compiler didn't add a backdoor to your software? Even if you compiled your own compiler, you don't know if the software you used for that is safe. Unless you trust your hardware and you bootstrapped your compiler, I don't think you can truly trust software.


One option (if you have a trusted FPGA) is to run something from opencores.org, where you can verify the code.

But more generally, yes there is an issue of having an initial base system which can be trusted to some degree.


Some strong solvent and a microscope? =)


A regular microscope? Maybe if you want to manufacture in a twenty year old process. Visible light only goes down to ~400nm, so no matter the magnification your eyes will not be able to see minimum size features on anything made since 1994 (600nm) or so.


Oh no, SEM.


But can you trust a SEM? ;)


Every time these issues come up for discussion, my mind always returns to some sort of imagined utopia where our software is much more modular, composable and compartmentalizable (grammar?) than todays "mainstream" platforms.

It could enable several different things, like running systems where most modules have access to far fewer resources of different kinds, allowing less mischief (microkernels, hierarchical resource mgmt frameworks like GenodeOS, etc).

Greater modularity would also mean that basic or common modules would need less changes, requiring new audits less regularly, thus decreasing that load on the community. Of course many vulnerabilities are the result of unexpected interactions between modules, but then a certain composition of certain modules could be audited and hopefully not require any changes for some time.

Being a software engineer I'm not kidding myself with regards to the huge software engineering problems inherent in achieving that, and of course the open source community already performs a lot of code reuse. Many will argue that an argument for more reuse is an argument against fragmentation and thus an argument against experimenting and forking. I guess that's true, but I still feel that more could be split and shared while still experimenting on many other things. There will also always be politics, personalities and the will to reinvent wheels for many reasons, like self-education or implementation pet peeves.

Also, different languages and/or runtimes/VMs and their differing suitability for different environments affect fragmentation greatly. We probably won't ever end up with a single "winner", no matter how much many C/C++/JS/Go/Rust/Haskell/ATS/theorem provers we go through, and for reasons (only some of which are mentioned above) we probably don't want to.

Dreaming is nice, though.


Definitely. One thing I've noticed is that constant code churn causes a huge amount of instability in many software domains (cloud, desktop, phone). The problem is only getting worse and will continue to get worse as software runs more of our lives and the world. Apple is sending down updates all the time; Ubuntu is; Google is, etc. Nobody can keep track of what the hell they're running. It's a security nightmare.

There are so many components to systems now -- and not all of them need to change all the time. What I imagine, and what you seem to be hinting at, is that we have to factor software into processes which vary by the amount of code churn. Instead of having 1000 modules being patched every week, have 500 modules which change every 5 years, 300 which change every year, ... and maybe 2 or 3 which change every day. This would let us maintain the pace of software innovation.
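
As a toy illustration of where churn actually concentrates, you could bucket modules by how recently anything under them changed. The sketch below just uses file modification times and a hypothetical one-directory-per-module layout; a real system would track audit or release dates instead:

    # Bucket modules by how recently any file under them changed.
    # "modules/" with one subdirectory per module is a hypothetical layout.

    import os
    import time

    def days_since_change(path):
        """Days since the most recently modified file anywhere under `path`."""
        newest = 0.0
        for dirpath, _dirs, files in os.walk(path):
            for name in files:
                newest = max(newest, os.path.getmtime(os.path.join(dirpath, name)))
        return (time.time() - newest) / 86400

    buckets = {"changed in the last month": 0,
               "changed in the last year": 0,
               "stable for over a year": 0}

    for module in os.listdir("modules"):
        age = days_since_change(os.path.join("modules", module))
        if age < 30:
            buckets["changed in the last month"] += 1
        elif age < 365:
            buckets["changed in the last year"] += 1
        else:
            buckets["stable for over a year"] += 1

    print(buckets)  # the "stable" bucket is what you'd hope to audit once and rarely revisit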

Unix does this to some extent -- think about it: when would you have a REAL need to upgrade "grep"? Almost never. It's basically hardware at this point.

Likewise, with the architecture of Chrome and the Quark browser posted yesterday on HN, you could just update the rendering engine or the JS engine, which run in restricted processes. Browser kernel updates could be separate, and verified separately.

Related paper I found interesting:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.180....

We are moving closer and closer to "ubiquitous computing", and I agree with that paper that software upgrades have to be treated as a first-class problem. Right now even the best vendors are too sloppy about upgrades and modularity. I am constantly being nagged for updates on every device, and at work I am constantly dealing with shifting sands underneath my code.


Thanks for the paper link, it looks interesting.

Yes, factoring software into different processes and enforcing proper sandboxing and resource restrictions is definitely one aspect of it. Hierarchies of such processes where resource allocation can be delegated make it even more interesting.

Another aspect is composing any one process of - where possible - shared, audited modules that haven't changed for a good while and that you're able to verify individually before building your binary. It requires proper code signing and pushes the problem into the domain of whom to trust (and of course issues like Thompson's famous Trusting Trust), and once again I'm not kidding myself about the difficulties of that either. But that shouldn't stop us from working towards such a goal if we find it desirable.

This of course helps closed-source software production, but even more important is for any individual or FOSS organisation to be able to build their own system of (a myriad of) components that can be cryptographically verified not to have changed since some specific audit event, provided that the signatures of single modules, and of compositions of modules, can be kept by a public and decently trustworthy actor or, preferably, several actors, like Linux distributions or the FSF or whoever. In a world where the loyalties of individuals are easily purchased, we need to keep each other honest.

This extends from Joe building his own doubly-linked lists and red-black trees, to crypto modules that take side-channel attacks into account, to kernels and drivers and everything. Many will argue that layers and abstractions can kill software with complexity. I certainly agree that they can if used poorly (or, quite frankly, if not used with great skill), but they are also one of the few things that can make complex problems manageable, and - as you say - our systems are now hilariously complex.
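
A toy sketch of the "verified not to have changed since some specific audit event" idea above: freeze a manifest of per-file SHA-256 hashes at audit time, then re-check the tree before every build. The file layout and manifest format here are hypothetical, and a real system would also sign the manifest itself:

    # Record a manifest of file hashes at audit time; later, re-hash the tree
    # and report anything that changed, appeared, or vanished since the audit.

    import hashlib
    import json
    import os

    def hash_tree(root):
        """Map relative file paths under `root` to their SHA-256 hex digests."""
        digests = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def write_manifest(root, manifest_path):
        """Run at audit time to freeze the state of the module tree."""
        with open(manifest_path, "w") as f:
            json.dump(hash_tree(root), f, indent=2, sort_keys=True)

    def check_against_manifest(root, manifest_path):
        """Run before building; returns (path, digest) pairs that differ."""
        with open(manifest_path) as f:
            audited = json.load(f)
        return sorted(set(audited.items()) ^ set(hash_tree(root).items()))

    # Hypothetical usage:
    #   write_manifest("modules/crypto", "crypto.audit-2013-09.json")
    #   diffs = check_against_manifest("modules/crypto", "crypto.audit-2013-09.json")
    #   if diffs: raise SystemExit("changed since last audit: %r" % diffs)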


The only solution is to read source code and look for anything suspicious. Linus's Law states that "given enough eyeballs, all bugs are shallow".

Well. It's probably easier to break into your home, plant an eavesdropping device, and get your keys. I think it's important to audit code, but let's be realistic: you won't defeat an agency that has you on its radar just because you use "open source software".

And by the way, Linus's Law doesn't make any sense here, simply because some bugs cannot be seen just by looking at the source code, and also because the bandwidth between eyeballs that don't belong to the same brain is extremely limited.


> It's probably easier to break into your home, plant an eavesdropping device, and get your keys.

If they were only targeting one person, maybe. But getting a back door added to a piece of widely-used software is much easier than bugging the homes of everybody who uses that software.


The backdoor can be anywhere between the two computers (most likely at the endpoints, though).

I agree with you in the sense that software should be made as secure as possible; my comment is just a reminder that one should be realistic about what level of security one can get.

If you haven't audited the hardware your software is running on, and your location isn't physically secure, spending so much time on software auditing isn't very useful.

You'd even have to thoroughly double-check that the software you have is the one you expect, audit your compiler, audit your OS, and recompile everything you have.

You know better than I do that there is no such thing as absolute security; you're only ever secure relative to a threat. If you try to secure yourself against a first-world intelligence agency, I say... "Good luck". ;)


> you won't defeat an agency that has you on your radar because you use "open source software".

The point is not to create obstacles to criminal investigations but to thwart massive online private data collection.


The only way to guarantee you are not being spied on is to not use computers.


The most damaging thing is that I literally have no idea what is safe now - even good actors might unwittingly have been backdoored at the hardware level.

I'm taking all private communication completely off the internet to minimise my exposure - I do not have enough knowledge or experience to make decisions on cryptography, nor sufficient need to waste time on this.


But where are you taking them? SMS? Phone calls?

Maybe a private mesh network setup through line-of-sight directed antennas?


Face-to-face is the only way.

All emails, SMS, or phone calls have to be just stuff like "Hey, how are you doing? Do you wanna meet up and have a beer?"


Do you remove the battery from your phone before talking in person? The NSA could very well listen in through your phone's mic(s). Your phone probably has a camera or two. Even if you have no phone, everyone around you likely does. If you live in an urban area, your entire walk to the bar is likely on camera, the bar has cameras, your cab ride home has cameras, the transit bus has multiple cameras, the transit station has cameras, if you live in a high-rise it likely has cameras, and in a suburb at least one of your neighbors has cameras. Almost all of these cameras are now connected to internet-enabled DVRs with questionable security at best. This is reality now. Privacy seems to be on its last legs.


And I happen to live in the UK, where we have CCTV. ...You are right, we have woken to a truly disturbing reality.


I think everyone is overlooking a significant threat. If we have information that is shared, we know when it is changed. The NSA has data that is not necessarily shared, so if the top people, such as Obama and Romney, decide to "do you in", they have only to "edit" the data - and you have no way of proving they didn't. Meanwhile, we know the FISC will accept whatever the NSA gives them as the absolute truth.


> If the NSA can break 2048-bit RSA, it would be a Big Deal

2048-bit is nothing. Here's why:

1. A great mathematician can find shortcuts. Tons of great mathematicians working in parallel will find them faster.

2. A fast processor can eventually brute force a key. Millions of fast processors working in parallel will find it faster.

You are talking about a well-known method with a deterministic algorithm. May as well just hand them a cake, because they are going to eat that up.

Why use encryption mechanisms they've trained us to believe in and use? And how could we beat them at a game where they hold all of the talent and power? The only answer is to have dark encryption methods that are not shared and that change frequently, frequently enough that no group of mathematicians with millions of servers could determine the constantly changing algorithms.
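
For a rough sense of scale on the brute-force point (2), here is a back-of-the-envelope sketch. The inputs are loudly illustrative assumptions (RSA-2048 is commonly rated at roughly 112-bit security against the best public attacks, and each processor is assumed to manage 10^9 relevant operations per second), not a claim about anyone's actual capability:

    # Back-of-the-envelope: time to perform ~2^112 operations with a million
    # processors each doing 10^9 operations per second. All numbers are
    # illustrative assumptions, not measurements.

    operations_needed = 2 ** 112        # rough symmetric-equivalent strength of RSA-2048
    ops_per_second_per_cpu = 10 ** 9
    cpus = 10 ** 6                      # "millions of fast processors"

    seconds = operations_needed / (ops_per_second_per_cpu * cpus)
    years = seconds / (60 * 60 * 24 * 365)
    print(f"~{years:.1e} years")        # on the order of 10^11 years, over ten times the age of the universe

Raw parallelism doesn't get you there; any realistic break at that key size would have to come from a mathematical or implementation weakness rather than search.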


So you're suggesting we should just frequently change almost-certainly-flawed algorithms, but rely on their obscurity for security? I feel like there's a term for this strategy, but I can't think what it is...


I do not know where this op-ed piece was intended to appear, but I think you should have written it anyway, stating just this feeling of yours. There was/is/will be so much media frenzy about this that every article, op-ed or not, that does not resort to sensationalist headlines is really helpful (even if you think you had nothing useful to say, which you actually do).


I seriously considered it -- I even started trying to write a "there's nothing new here, the NSA has always been spying on us" editorial -- but I just couldn't make it work.


"I also pay for bugs people find in Tarsnap, as well as scrypt, kivaloo, and spiped, right down to the level of typographical errors in comments."

I think these bug bounties are interesting. Is there something like a market price for one of these bugs? I.e. are other people out there besides Colin willing to pay for bugs in the tarsnap client?


I imagine there could be a similar reputation market along the lines of the write-my-college-paper services, in which you sell the bug to someone who can then take credit for it.

I'm sure you could find some people who'd be prepared to pay for a valid Knuth error, although exactly how you'd advertise for, or seek out, those who have such things risks wrecking the value of the thing you're trying to acquire.

Stepping up from that, there are clearly existing markets for exploits, and you might be able to sell potentially exploitable bugs in the same way.


There is a black market for viruses, trojans, and data - why shouldn't there be one for bugs?


I don't see what the NSA revelations have to do with this line of reasoning.

How do we know you're not (and have not been) selling tarsnap data to a corporation or non-US intelligence agency? This has always been a possibility, and it is roughly as likely as you being an NSA agent, even after the Snowden leaks.


I don't see how you don't see a relation.

You assume he is, then you read the source of the client that is sending the data and verify that it isn't doing that. You trust the code, not the person. That's the TL;DR of the article.


Also, since there is no downloadable binary, you must compile the code yourself.


Note that anyone who can't read code has to trust the person. I guess they should be trusting a person who has read the code? What happens if you don't personally know any software developers?


What happens if you don't know any civil engineers? Do you stop driving over bridges?


Bridges don't secretly collapse. And an overt collapse caused by a serious, government-mandated secret weakness in design or construction, one that revealed that a large portion of a "free" country's transportation infrastructure suffered from similar weaknesses in the name of national security, would be a scandal of government-shattering proportions.



I'm glad I kept reading past the product plug.


Come on, that was clearly on-topic.


thank you. common sense ftw.

someone who doesn't seem generally surprised that the NSA is involved in espionage, of all things... :)

"Despite all the above, it is still possible that I am working for the NSA, and you should not trust that I am not trying to steal your data."

now if you can just get people to pay attention to issues that result in dead children and reduced quality of /real/ life for people, instead of overhyped and questionable civil liberties violations... XD



