
Curl’s backdoor threat - sohkamyung
https://daniel.haxx.se/blog/2017/09/12/the-backdoor-threat/
======
notacoward
The bit about code signing is really important. The people who wrote distro
package managers recognized this years ago, and ever since have at least tried
to do the right thing. That can make getting code into those repos a bit
cumbersome, but that's kind of the price you pay.

Unfortunately, most of the language- and environment-specific package managers
have pretty much ignored that issue. There's too often no way to verify that
the code you just downloaded hasn't been tampered with. Heck, half the time
you can't even be sure it's a _version_ that's compatible with everything else
you have. It's a total farce.

Software distribution is too important, security-wise and other-wise, to leave
it to dilettantes as an afterthought to other things they were doing. Others
should follow curl's example, instead of just dumping code into an insecure
repo or (even worse) putting it on GitHub with a README that tells users to
sudo the install script.

~~~
moobsen
Assuming one can trust git and GitHub (and CAs), is there a technical reason
why you would consider distributing code via GitHub as unsafe?

~~~
mikegerwitz
In addition to account compromise, there's also the risk of bugs/compromise of
GitHub itself:

[https://github.com/blog/1068-public-key-security-vulnerabili...](https://github.com/blog/1068-public-key-security-vulnerability-and-mitigation)

Commit signing can help to mitigate that.[0] Note that GitHub now offers the
ability to add your GPG public key to your profile and show whether a commit
is signed with that key or not. I find this more dangerous than useful: if an
attacker compromises the account and adds his/her key, and adds a malicious
commit, GitHub would show it as verified.

[0]: [https://mikegerwitz.com/papers/git-horror-story](https://mikegerwitz.com/papers/git-horror-story)
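For anyone unfamiliar with the mechanics, here is a minimal self-contained sketch of what commit signing and verification look like. It generates a throwaway GPG key in a temporary keyring just so the example runs anywhere; the name and email are placeholders, and in real use the signing key would be the maintainer's long-lived key.

```shell
set -e
# Throwaway keyring so this doesn't touch your real GPG setup.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Maintainer <demo@example.com>" >/dev/null 2>&1
KEYID="$(gpg --list-secret-keys --with-colons | awk -F: '/^sec/ {print $5; exit}')"

# Fresh repo configured to sign with that key.
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.name "Demo Maintainer"
git config user.email "demo@example.com"
git config user.signingkey "$KEYID"

echo 'hello' > file.txt
git add file.txt
git commit -q -S -m "a signed commit"   # -S attaches a GPG signature

# verify-commit exits non-zero if the signature is missing or invalid.
git verify-commit HEAD 2>/dev/null && echo "signature checks out"
```

The point of the thread: verification is only as good as your trust in the *key*, not in what GitHub's UI displays next to the commit.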

~~~
developer2
Do you know how the signing requirements work on GitHub when accepting a pull
request on a repo requiring signed commits, when the pull request is from a
fork where someone is _not_ signing their commits? Must the commit to the fork
be signed in order for the pull request to be merged, or is it possible for
the main repo to merge an unsigned commit while signing it themselves in the
process?

I can see requiring every commit on the primary repo to be signed, but it's a
larger nightmare to accept pull requests from forks if _they_ are also forced
to sign their commits.

~~~
mikegerwitz
I'm not all that familiar with GitHub.

What's ultimately important for trust is that the maintainers (or whomever
you're meant to trust) sign commits. They may choose to pass this
responsibility down the line a bit (e.g. how Linus has his "lieutenants"), but
if some random contributor does or does not sign a commit, do we care? Are
they in the maintainers' web of trust? What benefit does verifying their
identity actually have with respect to the project?

So in that case, a maintainer may decide to just review the patches and sign
the merge commit.

That contributor may want to _assert_ their identity---e.g. have their signed
commit committed to the repository to show that they actually did that work---
but that's a different issue.

------
ghgr
Backdoors in software are one of those nightmarish scenarios that disturb you
until you just think about something else and kind of temporarily forget it
(like nuclear war, killer bees, climate change or asteroid impact). Open
Source just raises the bar but in no way solves these problems (for example
OpenSSH's roaming feature/bug, OpenSSL and Heartbleed). In computing one can
quickly become paranoid: software buttons for switching smartphones off (are
they 'really' off?), always-on microphones, webcams (granted, you can cover
it), blobs in your smartphone and home router, downloads from
[http://download.cnet.com](http://download.cnet.com) (note the lack of https),
WinSCP, PuTTY from www.putty.org (again no https, and it's not even the
actual site, but nevertheless the first result on Google). In Linux the
landscape is slightly better, but do you really trust all those packagers? Do
you _really_ understand each line of code when you "git clone foo; cd foo;
./configure; make; sudo make install"? And in X11 you can easily make a
keylogger without even being root! [1] Not even full disk encryption can
protect us (Evil Maid [2]).

That's one of the reasons I'm skeptical of the Ethereum smart-contract
concept. In theory it works, but in practice I'm not sure at all. The DAO
heist was one early example of security bugs in smart contracts, but I fear
they will become more common when malware developers turn to "contract-
engineering".

[1] [https://superuser.com/questions/301646/linux-keylogger-witho...](https://superuser.com/questions/301646/linux-keylogger-without-root-or-sudo-is-it-real)

[2] [https://www.schneier.com/blog/archives/2009/10/evil_maid_att...](https://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html)

~~~
vog
> [https://www.schneier.com/blog/archives/2009/10/evil_maid_att...](https://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html)

The following part of this classic (2009) article caught my eye:

 _> Symantec’s CTO Mark Bregman was recently advised by "three-letter agencies
in the US Government" to use separate laptop and mobile device when traveling
to China, citing potential hardware-based compromise._

It is strange how much this changed after the Snowden revelations. In Europe,
you now get that same advice when travelling to the US.

~~~
Angostura
Now all we have to worry about are the attacks from our countries' domestic
security services.

~~~
zdkl
You mean by the private contractors they hire.

------
angrygoat
The sort of hypothetical security vulnerability here is likely to depend on
undefined behaviour (buffer overruns, subverted parsers, and so on). Just
another reason to continue moving over to safe languages, especially for the
lower level bits of our stacks. HTTP is big and complicated, I'm much happier
exposing Rust/Go/C#/... to it than I am exposing C to it.

In safe languages, backdoors must be far more explicit, so we close off the
likely scenario posited here.

~~~
moxious
There's a good chance that if you were to rewrite some of today's existing
stack in new languages you'd end up with more bugs, not fewer.

C may be awful in some respects, but for quality it's going to be hard to beat
15 years of peer review with any new language's cool features.

~~~
pdimitar
You are conflating "new and cool language" with "a language that eliminates a
ton of possible bugs from the get-go".

These two terms are far from identical.

You might be masking your conservative approach to new languages by hiding
behind "C is mature". No, it's not. Right now, somebody on the planet is
introducing a buffer overflow without knowing it, while coding in the "mature
C".

Get real already, please. It's high time.

 _Random example: I dislike Go's error handling, but its explicit nature
has saved me from working half-asleep 50+ times already. Another one: one
meager if/else in a supervised Elixir worker saved a server from infinitely
repeating a bugged task that would otherwise keep crashing forever. There
are others, lots of them. I am sure people can give plenty of examples for
Rust as well._

~~~
notacoward
I think you're overreacting. The GP wasn't being conservative about new
languages for new projects. S/he was merely warning that rewrites carry their
own risks, which might outweigh the benefits of better languages (or for that
matter other infrastructure). If you avoid 100 bugs in the new version but add
101 because you didn't _completely_ understand the old code and the
environment it runs in, you haven't come out ahead. This phenomenon has been
too well known for too long to be blithely ignored.

~~~
pdimitar
Me overreacting is most likely true. I've been dealing with people dismissing
unquestionably life-improving tech for far too long lately.

Sorry.

I do believe most of Linux userland has to be rewritten though. Be it Go,
Rust, Nim, D, doesn't matter much as long as it's a memory-safe language.

~~~
moxious
> I do believe most of Linux userland has to be rewritten though

Why? I'm sympathetic to the argument that 2017 computing shouldn't be built on
1970s UNIX limitations and mindset, but changing that would take a lot more
than just rewriting the userland applications; it would require a bigger
rethink.

But assuming that the shell's functionality is OK as it is, what's to be
gained in a re-write?

~~~
pdimitar
For one thing, piping being mostly text-oriented is limiting in many scenarios
I've stumbled upon. A modern shell should allow arbitrary objects to be piped
and processed, much like in the functional programming paradigm. The UNIX idea
was and still is wonderful, but we're past the text-only thing.

In any case, I feel (and I _don't_ have tens of facts, I admit) we've been
dragged down by the past for far too long. Others have documented their gripes
with the current incarnation of Ops/sysadmin problems much better than I
could, here on HN (though I think it was years ago).

------
bambax
> _I’m convinced the most likely backdoor code in curl is a deliberate but
> hard-to-detect security vulnerability_

Evil organisations and/or big government agencies are probably working on
finding vulnerabilities and using them without reporting them.

That sounds more efficient, and nearly impossible to spot or prove, compared
to trying to implement backdoors directly.

------
raesene6
This is a very real threat for all software, open source and commercial, and
hard to completely fix.

That said there are a number of possible mitigations and the fact that they're
not more widespread is, to me, an indication that people who rely on software
don't think that this threat is worth the trade-off of the additional costs or
time that mitigating it would take.

For example:

- Requiring packages signed by the developers for all package managers (e.g.
[https://theupdateframework.github.io/](https://theupdateframework.github.io/)).
This would help mitigate the risk of a compromise of the package manager's
hosting, but we see many large software repositories that either don't have
the concept or don't make much use of it (e.g. npm, rubygems, pip).

- Having some form of third-party review of software packages. It would be
possible for popular packages like curl to get regular security reviews by
independent bodies. That doesn't completely remove the problem of backdoors,
but it makes it harder for one to go undetected. This one has obvious costs,
both in financial terms and in terms of delaying new releases of those
packages while reviews are done. There are some things which act a bit like
this (e.g. bug bounty programmes), but they're not uniform or regular.

- Liability for insecure software. This really only applies to commercial
software, but at the moment there doesn't seem to be much in the way of
liability for companies shipping insecure software, which in turn reduces
their incentives to spend money addressing the problem.

I'm sure a load of commercial software includes curl or libcurl, but if there
were a backdoor in it that affected that software, I don't think the companies
would have any liability for it at the moment, so there's no incentive for
them to spend money preventing it.

------
benmmurphy
tinfoil: why is he suddenly writing a blog post on how curl is not backdoored?
does this mean curl is backdoored but he can't say directly that it is
backdoored :)

~~~
sapphire_tomb
I was at dinner with Daniel the night before FOSDEM kicked off this year, and
he said "I always get asked at some point in the Q&A about whether or not curl
has a backdoor in it."

The next day, I was due to meet a developer from a vendor organisation I'd
been working with, and he came and found me during the curl talk Daniel was
giving. Because the talk was ongoing, I didn't really get a chance to say
hello to him or anything.

Cue the Q&A section. My vendor contact sticks his hand up and is the first
person to be picked to ask a question. His question: "Have you put any
backdoors in curl?"

You couldn't make it up.

[edit] I can't spell "Daniel" :(

------
WalterBright
Since one of the backdoor methods mentioned was introducing a memory safety
bug (like a buffer overflow), one way to reduce the attack surface is to use a
memory safe language.

The thing is, one can write memory safe code in C. The problem is the
difficulty in verifying it is memory safe.

I've opined before that this is why, soon, people will demand that internet
facing code be developed with a memory safe language.

~~~
gfredtech
As soon as you mentioned memory safety, Rust was the language that came to
mind. Memory safety is indeed important, but I find it hard to believe that in
the future ~80% of mainstream system tools will be migrated from C/C++ to
Rust. Is it possible, or have there been any attempts, to write standalone
tools that can check at compile time that a particular C/C++ program won't
have buffer overflows/dangling pointers? I don't expect such a tool to catch
everything, but it should at least be able to catch most such bugs.

EDIT: typos

~~~
xamuel
Detecting 100% of buffer overflows with 0% false alarms would require solving
the Halting Problem.

For applications that demand both maximum speed as well as maximum security,
the best solution is probably something like a C compiler that requires the
code to be accompanied by formal proofs of defined behavior. Even this will
necessarily sacrifice speed in a theoretical sense, because there are certain
problems for which the fastest solution is safe but can't be proven safe
within PA/ZF/ZFC/insert any consistent foundation of mathematics you like.

Eventually we'll have people writing fast programs and proving their soundness
using large cardinal axioms which might or might not actually be true. Then
someday one of those large cardinal axioms will turn out to be inconsistent
[1] and suddenly some "proven" code will be proven no more.

[1]
[https://en.wikipedia.org/wiki/Kunen%27s_inconsistency_theore...](https://en.wikipedia.org/wiki/Kunen%27s_inconsistency_theorem)

~~~
andrewflnr
Why would you need large cardinal axioms to prove a program correct?

~~~
xamuel
I used large cardinal axioms as [the canonical] example of any axiom stronger
than standard mathematical foundations.

A contrived example: there are certain large cardinal axioms that imply the
consistency of ZFC. Thus ZFC cannot prove those axioms unless ZFC is
inconsistent (Gödel's second incompleteness theorem). Consider the following
problem:
"If ZFC can prove 1=0 in n steps, output 1. Else, output 0." A naive solution
would brute-force search all ZFC-proofs of length n. A faster solution would
be: "Ignore n and immediately output 0." This is correct, because ZFC never
proves 1=0. You could formally verify the correctness by using certain large
cardinal axioms, but not using raw ZFC.

------
davedx
I'm reading a book called 'Nexus' at the moment. Last night I read a part
where they are deliberately installing a backdoor in their own system. They do
it by modifying the compiler itself to inject malicious machine code into the
binary. It's hands down the best technical description of hacking I've ever
read in a work of fiction - highly recommended.

~~~
Taniwha
Already been done - it was the subject of Ken Thompson's Turing award paper

[http://delivery.acm.org/10.1145/360000/358210/reflections.pd...](http://delivery.acm.org/10.1145/360000/358210/reflections.pdf?ip=203.86.204.69&id=358210&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&CFID=983668164&CFTOKEN=60374561&__acm__=1505206740_44d52154950e9825dd69d394ccfb7d90)

~~~
Sean1708
Your link doesn't seem to work for me, but I presume you're talking about
Reflections On Trusting Trust[0]?

[0]:
[https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...](https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf)

~~~
Taniwha
yes, thanks - looks like the ACM hands out temp URLs

------
Perados
Sometimes I wonder: what would happen if one of these invisible heroes dies?
What would happen to Linux if Linus Torvalds dies? What would happen to curl
if Daniel Stenberg dies? For curl, for instance, only Daniel can sign a
release. So what happens if he is not able to do so anymore? This is just a
small example, but you get the idea. There is so much power resting on these
men that it sometimes gets very scary.

~~~
vacri
The Linux kernel is a massive project with a web of contributors and
maintainers, and it's clear which of the senior level members could step in at
any given time.

Big open-source projects have plenty of meatspace to draw on. It's the little
projects that 'come from somewhere' that actually only have one or two people
'in the know' that are the ones at risk.

~~~
geofft
More importantly for Linux, curl, etc., almost nobody uses Linux from Linus or
curl from Daniel. You get it from your distributor, and in the case of Linux
it usually comes with quite a few patches. These distributors (even the all-
volunteer ones like Debian) are projects involving lots of people and clearly
defined procedures for what to do if one of _their_ maintainers stops being
able to contribute.

A good example is glibc; several years back, a huge number of people were
using the eglibc fork, not because glibc upstream (Ulrich Drepper) stopped
being able to do releases, but simply because he was refusing patches for
architectures he didn't like and other similar changes. Very few end users
even noticed that they weren't using "real" glibc. (Ulrich has now stepped
down and the eglibc changes have been merged back in.)

------
snomad
This is why servers/networks should be configured to reject outbound
connections by default, only allowing connections to a whitelist.
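A default-deny egress policy of that shape can be sketched with iptables rules like the following (hypothetical TEST-NET addresses stand in for your approved destinations; nftables users would express the same thing differently):

```shell
# Default-deny outbound: nothing leaves unless explicitly whitelisted.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT                              # keep loopback working
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -d 192.0.2.53 -j ACCEPT    # approved DNS resolver
iptables -A OUTPUT -p tcp --dport 443 -d 192.0.2.10 -j ACCEPT   # approved update host
```

This is a config fragment, not a complete firewall; a backdoored binary phoning home to an arbitrary address would hit the DROP policy.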

------
shp0ngle
Was there ever a big, significant backdoor in any widely used open source
software?

I don't mean a bug like heartbleed, but an actual intentional backdoor.

~~~
dsr_
[https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...](https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf)

(Reflections on Trusting Trust)

~~~
simias
Still completely theoretical though, at least as far as I know.

~~~
Jach
And has a counter anyway: [https://www.dwheeler.com/trusting-trust/](https://www.dwheeler.com/trusting-trust/)

~~~
guipsp
It only has a counter assuming you have a trusted compiler.

~~~
Jach
Do you trust yourself? You can choose whatever compiler(s) you want as the
'trusted' one(s), even writing your own. It doesn't even need to be a very
good or complete compiler.

~~~
guipsp
What do you compile the trusted compiler with?

~~~
Jach
Who says it needs to be compiled? Your trusted compiler can be a BASIC script.

The dissertation likely addresses all your concerns. Sticking with the BASIC
example, say you wanted to see if tcc was vulnerable, but you don't trust gcc,
icc, borland, clang, etc. either as compilers to use directly or as compilers
to compile your trusted one. And you don't want to write your own in Java or
Python because you don't trust the VMs. Whip out an Altair BASIC binary from
the 70s and write your compiler with that, just enough to compile tcc. Perform
DDC (diverse double-compiling). If the tcc source does not correspond with its
binary, you'll know.

------
otakucode
It could simply be my lack of in-depth understanding of curl, but wouldn't
curl make for a pretty weird target for a backdoor? It doesn't serve content
or remain running for long periods of time, does it?

~~~
FrancoisBosun
No, it doesn't, but what if you can be convinced to use a compromised version
of curl that downloads a compromised nginx? That is the threat model we are
looking at here. That's why we should always verify signatures for the
packages we download, to make sure we get the right bits.
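As a concrete sketch of that last step, here is what detached-signature verification looks like with GPG. To keep the example self-contained it generates the "maintainer's" key locally; in reality you would import the maintainer's public key over a separate trusted channel rather than create it, and the file names are placeholders.

```shell
set -e
# Throwaway keyring standing in for the maintainer's real key.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Packager <packager@example.com>" >/dev/null 2>&1

cd "$(mktemp -d)"
echo 'pretend this is a release tarball' > release.tar.gz

# The maintainer publishes the tarball plus a detached signature...
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --armor release.tar.gz

# ...and the user checks that the bits match the signature before using them.
gpg --verify release.tar.gz.asc release.tar.gz 2>/dev/null && echo "good signature"
```

If even one byte of the tarball were changed in transit, `gpg --verify` would fail and the install should stop there.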

------
ageisp0lis
"If one of the curl project members with git push rights would get her account
hacked and her SSH key password brute-forced, a very skilled hacker could
possibly sneak in something, short-term. Although my hopes are that as we
review and comment each others’ code to a very high degree, that would be
really hard."

Nip this entire discussion in the bud; just use a deterministic build process
for any binaries you release. Like Gitian:
[https://gitian.org](https://gitian.org)

I implemented this for Zcash (see [https://z.cash/blog/deterministic-builds.html](https://z.cash/blog/deterministic-builds.html)); more software projects should be doing this in general.

------
wahB4vai
Curl is the back door.

A modern idiom is "curl [https://$THING/](https://$THING/) | sudo /bin/bash "

And the arguments around it being secure because it's https...
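A less fatalistic alternative to that idiom: fetch to a file, verify it against a checksum obtained out-of-band, and only then run it. The sketch below fakes the download with a local file so it is self-contained; in real use the expected hash would come from the project's release page or signed metadata, never from the same place as the download itself.

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for: curl -fsSLo install.sh "https://$THING/install.sh"
printf 'echo hello from the installer\n' > install.sh

# In real life this value is copied from a trusted, separate channel.
EXPECTED="$(sha256sum install.sh | awk '{print $1}')"

# The actual safeguard: refuse to run bits that don't match the published hash
# (note the two spaces required by sha256sum's check format).
echo "$EXPECTED  install.sh" | sha256sum -c --quiet -

# Read it, then run it, without piping the network straight into a root shell.
sh install.sh
```

It is more typing than `curl | sudo bash`, which is exactly the trade-off the parent comments are complaining nobody wants to make.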

------
jzzskijj
From the article:

"Additionally, I’m a Swede living in Sweden. The American organizations cannot
legally force me to backdoor anything, and the Swedish versions of those
secret organizations don’t have the legal rights to do so either (caveat: I’m
not a lawyer). So, the real threat is not by legal means."

Considering how well Assange's case turned out in Sweden, that makes me
wonder. Of course, he is Australian, so there is a big difference.

------
thinkMOAR
As others write, this is not specific to curl. Strict explicit egress
filtering is the best (and imho only) safeguard: a pain in the ass initially,
but it avoids connections to any non-whitelisted destination.

~~~
im3w1l
How do you know the egress filter is legit?

~~~
thinkMOAR
Well at some point one can never know things for sure. BUT, not running it on
the device you want to filter would be a good step. And default deny.

Security is a matter of layers, there is not one layer that fixes all.

------
aidos
I guess I'm a bit cynical but this seems hand wavy to me. (Note, I love curl
and implicitly trust it).

The argument that it would probably take too much code and would be too
obvious doesn't seem solid. I'm no expert in this area but curl sends data
over a network and sometimes runs as part of a larger application. It seems
like the big dangerous bits are there and it wouldn't take a major bug to send
the wrong thing.

~~~
shakna
> There is only one way to be sure: review the code you download and intend to
> use. Or get it from a trusted source that did the review for you.

~~~
elnygren
Agreed. However, this is infeasible/impractical. Ain't nobody got time for
that.

Are there any commercial or non-profit organisations that maintain lists/repos
of audited and trusted software with their checksums? It seems like there
would be demand for a package manager like that.

I suppose some Linux distros have pretty OK repositories?

I wouldn't trust npm, pip, gem etc. though.

~~~
shakna
That's the second half of the quote:

> Or get it from a trusted source that did the review for you.

Incidentally, curl has been reviewed by the Mozilla Secure Open Source project
[0], who maintain lists of audits which include checksums within the report.
Maybe you're looking for something similar?

[0]
[https://wiki.mozilla.org/MOSS/Secure_Open_Source](https://wiki.mozilla.org/MOSS/Secure_Open_Source)

~~~
pjmlp
Which version?

In order for this process to be truly secure, every single software version
needs to be audited.

~~~
Xylakant
7.50.1. The report can be found here:
[https://wiki.mozilla.org/images/a/aa/Curl-report.pdf](https://wiki.mozilla.org/images/a/aa/Curl-report.pdf)

You don't have to do a full audit on each version; auditing the deltas should
be fairly comprehensive. Now, there could be some malicious code hidden that
gets triggered by a benign change, but otoh, no audit will ever guarantee full
security.

------
_pmf_
> No. I’ve never seen a deliberate attempt to add a flaw, a vulnerability or a
> backdoor into curl.

That's exactly what someone who has deliberately put a backdoor into curl
would say.

~~~
wiz21c
> Since most of them were triggered by mistakes in code I wrote myself, I can
> be certain that none of those problems were introduced on purpose

Definitely.

The issue now is how we can collectively be sure that the code is
safe...

~~~
im3w1l
You can be sure that the code is _unsafe_. Along with almost all other code.

