
Drupal Core – Highly Critical Public Service announcement - ohashi
https://www.drupal.org/PSA-2014-003
======
currysausage
Whenever I read about the latest vulnerability in a popular WCMS, I wonder why
static HTML export still doesn't seem to be a prioritized feature in popular
systems.

After all, most sites out there probably don't need server-side dynamic
preprocessing for every request. The CMS directory could be locked using HTTP
auth (implemented by the HTTP server); this way, not every little CMS bug
would allow the world to compromise the whole server.
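The "lock the CMS directory with HTTP auth" idea can be sketched as a small Apache fragment. The `/admin` path and the htpasswd file location are illustrative assumptions, not something from the thread:

```apacheconf
# Sketch: put the CMS backend behind HTTP Basic auth, enforced by the
# HTTP server itself, so a bug in the CMS alone can't be reached anonymously.
# "/admin" and the password-file path are assumptions; adjust to your layout.
<Location "/admin">
    AuthType Basic
    AuthName "CMS backend"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Location>
```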

Do we really expect every parish choir with a web site to hire a CMS
specialist who installs updates within hours of the release and fixes all
compatibility quirks that occur with new major releases? This is an unworldly
approach that bestows thousands of zombie machines on us.

And what happens if the CMS for some old site stops being maintained? A
responsible admin would shut the site down, resulting in a loss of potentially
valuable information. This issue would be solved by using static HTML export,
too.

Are there any well-maintained open-source CMSes out there where static HTML
export is an integral part of the architecture, ideally with good usability
and written in PHP (not that I like the language, but it's what is available
everywhere)? (I'm not talking about command-line static site generators
without a user-friendly backend -- those are only an option for techies.)

~~~
ishener
It's a little more complex when the site is not 100% static. Even a contact
form requires a server.

But I do think there should be a good separation between the HTML and the
admin backend. Security is only one reason; there are other very important reasons:

1. Waste of resources: the machine that builds the HTML from the CMS is
completely idle 99% of the time.

2. Scalability: the HTML should be served from a storage service like S3,
fronted by a CDN. There should be absolutely no downtime in viewing HTML as
a result of overload.

The ideal system for small websites is a machine that is turned off by
default; when an admin needs to change something, it is turned on (even if
that takes a whole minute). After the changes are committed, the system
generates the HTML and sends it to S3. For forms, comments, and other dynamic
things, it's best to use third-party services (like Facebook comments and the
billion form services out there), or a separate small machine that captures
user input (completely separate from the turned-off admin machine).
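The "generate, then serve statically" model above can be sketched in a few lines. Everything here is a hypothetical stand-in: the page contents are made up, and a local directory stands in for the S3 bucket (in practice the last step would be an `aws s3 sync` or a boto3 upload) so the sketch stays self-contained:

```python
import pathlib

# Hypothetical pre-rendered pages; in a real CMS these would come from
# templates plus the site's content database.
PAGES = {
    "index.html": "<h1>Parish Choir</h1><p>Next rehearsal: Tuesday.</p>",
    "about/index.html": "<h1>About us</h1>",
}

def publish(pages: dict, out_dir: str) -> list:
    """Write every rendered page under out_dir and return the paths written.

    out_dir stands in for the static host (S3 bucket, CDN origin, ...).
    """
    written = []
    for rel_path, html in pages.items():
        target = pathlib.Path(out_dir) / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(html, encoding="utf-8")
        written.append(str(target))
    return written

paths = publish(PAGES, "/tmp/static-site")
```

The point of the design is that only this publish step ever runs CMS code; the serving side is plain files with no attack surface beyond the web server itself.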

~~~
jacquesm
> The machine that builds the html from the cms is completely idle for 99% of
> the time.

Why would that have to be a separate machine? It could easily be the same one.

~~~
thesnufkin
A separate machine/VM is better from an availability perspective.

~~~
jacquesm
> A separate machine/VM is better from an availability perspective.

How so? A separate machine is one more thing that can fail and since it isn't
web facing it won't help with availability if the other one fails. And if it
is a VM they will both go down if the underlying hardware fails.

~~~
thesnufkin
It's true that a separate machine is one more thing that can fail, but if its
purpose is so different, as with the "active CMS" vs. "static hosting service"
split, then it becomes easier to create a replacement.

E.g., the frontend can be replicated (if needed) or S3 can be used, while the
backend CMS remains intact. Or the backend CMS can be implemented in an HA
setup and the static hosting moved to the cloud.

------
geerlingguy
The takeaway: if you run a site or application that is accessible on the
Internet, you are responsible for ensuring the site, servers, and
infrastructure are maintained after the initial build is complete.

If you're helping a client or company build a new site/app, and they are not
ready and willing to either maintain the site themselves, or pay for a decent
maintenance contract, or otherwise guarantee that security patches and hot
fixes are applied in a timely manner, they don't yet value their website/app
enough. It's your job—as someone who knows the consequences—to convince people
of this.

This applies all the more when a project is built on a widely-used platform
like Drupal, Wordpress, RoR, Django, etc. Though bespoke/lesser-used
frameworks _may_ provide a _tiny_ amount of protection through obscurity,
that's still no excuse for not educating people on the importance of
maintenance in any software project.

~~~
Nickoladze
If I'm using something like Composer to manage dependencies, should I be
running an update every week on every past website I've created? I feel like
having this run as a cron job would be a bad idea.

If I'm not, is there a way to be notified about critical patches? Should I be
signed up for a mailing list for every Github project I've incorporated?

Where I work, it seems like the standard practice is to finish the project,
deploy it, and let it sit until the client requests any changes.

~~~
sbarre
It would definitely be a bad idea to run `composer update` via cron: unless
you know for sure that future versions of packages won't break any
functionality you built on top of them, your website could stop working (or
worse, start working in unexpected ways) without your knowledge.

Maintain a staging environment (even in a temporary virtual machine if
necessary) and run your updates in that environment first, then check it, and
then deploy to production once you've confirmed everything is ok.

~~~
lotyrin
This requires some combo of having a support contract with a budget large
enough for manual QA or having built automated tests with the initial work.
Vast majority of projects at build-and-forget CMS agencies simply won't. (The
reason Drupal and PHP thrive in these environments is because clients can't be
upsold out of their cheap LAMP shared hosting.)

------
ThinkBeat
The premise that the attacks occurred after the announcement was made, and
thus can be blamed on the announcement itself, is in error.

The article details how it can be practically impossible to tell if a site has
been hacked. There is no reason to believe that your site has not been
exploited prior to the announcement.

Whilst the post might have increased the volume of such attacks, I strongly
doubt that this exploit was completely unknown prior to the announcement.

In other words, if you run a Drupal site that was vulnerable to this attack
prior to the announcement, there is a risk that your site was exploited before
the announcement.

This is a much more realistic scenario and also a more frightening one.

~~~
mjhoy
All the more realistic given that the issue -- or a big fat hint at the issue,
anyway -- was sitting in the public issue queue for nearly _a year_.

[https://www.drupal.org/node/2146839](https://www.drupal.org/node/2146839)

------
daviddede
That's as big as it can be. We started seeing attacks hours after the initial
disclosure and shared some of them here:

[http://blog.sucuri.net/2014/10/drupal-sql-injection-attempts...](http://blog.sucuri.net/2014/10/drupal-sql-injection-attempts-in-the-wild.html?updated)

This is a lot worse than Heartbleed, Poodle and others. Full database / server
take over for 700,000+ sites that use Drupal.

~~~
ohashi
Is it becoming more common to see these types of attacks directly after
patches are released?

~~~
hawkice
"More common" might be a bit tricky, because it's always been a race once
patches go out. But modern systems can hit every IP address and quite a
ridiculous number of domains extremely easily. There are more targets, so
while it might be just as easy to get N% of vulnerable machines, the rewards
for a hacker doing that are much higher, so the incentives put you at
substantially more risk.

There's also greater danger and incentive for attackers now that more websites
are run by primarily non-technical people, as they are less likely to patch
immediately.

~~~
sbarre
I'm sure there are also indexers out there who catalogue known
Drupal/Wordpress/RoR/etc sites in anticipation of quickly hitting them with an
exploit once a new one is released.

~~~
mixologic
This definitely happened. Major hosting providers were seeing attacks against
_all_ of their sites, in _alphabetical order_.

------
jhgg
Looking at the patch that fixed the vulnerability [0], I think it's a pretty
safe bet to say that having a hybrid array/key-value store [1] is a generally
terrible idea.

[0]: [https://www.drupal.org/files/issues/SA-CORE-2014-005-D7.patc...](https://www.drupal.org/files/issues/SA-CORE-2014-005-D7.patch)

[1]: [http://php.net/manual/en/language.types.array.php](http://php.net/manual/en/language.types.array.php)

~~~
las_cases
I don't agree with you; I believe it is wrong to blame the tool for your
mistakes. That said, it is debatable how many of the screw-ups in the PHP
world are actually inherited from how easy and permissive PHP is to work with
(though perhaps that's a truism and nothing to debate about).

------
raesene4
This is a really good example of why, if you have something important to your
organisation on the Internet, you really need to be doing defence in depth, to
reduce the risk of compromise and/or increase your chance of noticing when
you're attacked and reacting accordingly.

So: things like IDS (network and host), perhaps adding a WAF to catch basic
attacks, shipping all your logs off your front-end servers so they can't
easily be destroyed by attackers, etc.

This kind of disclosure-then-attack timeline now looks to be the norm, so this
will become a theme: companies are either going to have to spend more on
defense, or spend more on incident clean-up and then spend more on defense anyway.

~~~
dc2447
WAFs are a bad idea, IMVHO. They can give a false sense of security and the
feeling that security is being taken care of. The reality is that they are
generally not very good at catching anything other than the most basic attacks.

I much prefer

- external security monitoring (there are many vendors)
- automated testing in pre-production using skipfish/w3af/whatever
- static code analysis
- penetration testing
- responsible disclosure programmes
- hackdays

~~~
raesene4
I don't see it as an either/or decision TBH. I wouldn't suggest that WAFs are
a panacea, but that doesn't mean that they can't be a useful defensive layer.

A lot of companies have difficulty getting application patches applied quickly
due to test cycles, so applying a WAF rule to block known issues (this one,
for example) can be a fast, low-risk way of reducing exposure.

------
syntheticnature
Note that the "within hours" is in the past -- if you didn't patch rapidly on
10/15, you should assume you're compromised.

~~~
Xorlev
You should assume you're compromised regardless. More than researchers could
have known about the vulnerability.

------
0x0
Why is this posted now, more than two weeks later? Even back then, within
hours, people had publicly published working examples of remote code execution
via a single simple HTTP POST request.

For example:
[https://twitter.com/i0n1c/status/522495098630987777](https://twitter.com/i0n1c/status/522495098630987777)

~~~
acomjean
As someone who was learning Drupal, I was wondering this. My test site came up
with a friendly notice to "upgrade core". I figured I should, since it seemed
like a good exercise.

But figuring out how was not intuitive. The Drupal web site was mum on the
issue (you'd figure they'd post something telling people to upgrade ASAP).

I figured it out eventually. I was disappointed in how it was handled.

~~~
geoka9
I recommend also subscribing to the Debian security mailing list[1], even if
you're not a Debian user--they are on top of security issues that involve
software in their repo (and that's a lot of software) within minutes of the
advisories.

In fact, that's how I learned about most of Drupal's core security issues (I
got a message in my inbox) and was able to patch them really quickly.

[1] [https://lists.debian.org/debian-security-announce/](https://lists.debian.org/debian-security-announce/)

------
thrillgore
I am working on an in-development project; we patched to 7.32 minutes after
the initial SA went out. We do have some components sitting out on the net
(not public), and they were not updated (brought in sync with the development
code) for about three hours, so I am going to go ahead and conduct an audit.

------
jrochkind1
I'm feeling like it's a whole new age of security, these days.

The amount of time/resources it takes to keep your apps or sites secure today
is so much greater than it was even a few years ago.

Development and maintenance practices that seemed reasonable to people only a
few years ago now seem impossible. Delivering an app or site based on Drupal,
WordPress, Rails, etc. as a finished product to a client that does not have
sufficient in-house IT staff -- you can almost guarantee they're going to run
into security trouble. And what is required for 'sufficient in-house IT staff'
is way more than we thought a few years ago -- even if not everyone has
realized it yet (those who have not will get burned).

~~~
coldtea
> _Delivering an app or site based on Drupal, WordPress, Rails, etc. as a
> finished product to a client that does not have sufficient in-house IT staff
> -- you can almost guarantee they're going to run into security trouble. And
> what is required for 'sufficient in-house IT staff' is way more than we
> thought a few years ago -- even if not everyone has realized it yet (those
> who have not will get burned)._

And that for 99% of the cases (unless they process credit cards and
transactions), it won't matter much, if at all.

~~~
jrochkind1
Well, it matters to the customer if their WordPress site goes down because it
was infected by malware that sends out spam or makes clicks on Google Adwords.

This happened with someone I was working with, to try and rescue their
WordPress.

Ironically, the site went down only because the malware -- which I'm guessing
was scraping Google or making clicks on AdWords or something (I just skimmed
the malicious code; it wasn't entirely clear to me what it did) -- had a bug
in it that brought their site down. If it had been bug-free, it could have
kept using their site for its malicious purposes for years without them ever
noticing.

------
amingilani
I'm concerned about all the critical websites powered by Drupal, including
Whitehouse.gov.

~~~
mkempe
Actually, the White House was hacked two weeks ago. [1] Not much is officially
known; some think the hackers managed to penetrate a lot of their systems
(some employees leaked info before the announcement).

[1] [http://www.usatoday.com/story/theoval/2014/10/29/white-house...](http://www.usatoday.com/story/theoval/2014/10/29/white-house-computer-hacks-russia/18104231/)

~~~
snowwrestler
That's their internal network (LAN), not their web servers.

~~~
mkempe
Surely you are not claiming that there is no possible connection? If/once the
Drupal web server was hacked, could the hackers not have served malware to
admin users, or reached into other machines?

~~~
eli
Anything is possible, but I think it would be unwise to _assume_ they are
related.

------
at-fates-hands
This is going to be a nightmare for a lot of smaller shops I know who have
hundreds of Drupal clients. They must be going crazy right now.

I stopped using Drupal and WordPress about a year ago and am glad I did.
Myself and several clients just dodged a MASSIVE bullet!

~~~
pgrote
What are you using in place of those? Wouldn't your clients normally pay for
maintenance?

The killer here is:

"Consider obtaining a new server, or otherwise remove all the website’s files
and database from the server. (Keep a copy safe for later analysis.)"

~~~
zippergz
I replaced Wordpress with a home-built solution that is drastically simpler.
It retains most of the URL compatibility so links wouldn't break, but it has
only a tiny fraction of the functionality of Wordpress (most of which we
didn't use anyway). It's entirely possible that our solution has
vulnerabilities (though we designed it with security in mind, and the code
base is much easier to audit due to its simplicity). But at least it's not
going to get compromised due to a generic Wordpress exploit.

~~~
eksith
There would be a lot of demand for a much simpler WP alternative built with
security in mind. Would you by any chance be open sourcing the project? More
eyes on the source couldn't hurt.

~~~
NewsReader42
We ourselves built a project with speed and security in mind and are working
on open sourcing it in 2015

------
untog
Wow. This is about as big as it can get - your site has probably been
compromised, data could have been stolen, and you will have no idea if you
were hit or not.

------
cdnsteve
Yes, Drupal waited too long to send out this public notice, and yes, it seems
exploits were written very quickly. The thing is, no one can manually exploit
nearly every Drupal site out there without the assistance of search engines.
In my opinion, search engines are the primary tool in a mass malicious exploit
attempt such as this.

If search engines had restrictions in place for this obviously malicious type
of search activity, we'd be much better off and would see a massive reduction
in the huge number of infected/exploited sites and apps. Attackers are using
search engines as a database of targets, and that must be stopped.

I feel this is a wake-up call to petition search engine providers to restrict
these types of obviously malicious queries. We can't prevent direct attacks,
but something can be done about the robots that farm search engines for mass
infection.

The other item is that these software vendors need to minimize anything that
puts up a giant flag saying "Look at me, I'm a Drupal site!" E.g.,
CHANGELOG.txt shouldn't be in the web root. Even some simple .htaccess rules
can provide a mountain of help.
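The CHANGELOG.txt point can be expressed as a short Apache fragment (2.4 syntax). The particular file list is the usual set of Drupal tell-tales and is an assumption, not something from the advisory:

```apacheconf
# Deny direct requests for files that advertise the CMS and its exact version.
# Attackers fingerprint sites (and confirm patch levels) from these.
<FilesMatch "^(CHANGELOG|INSTALL.*|UPGRADE|MAINTAINERS|COPYRIGHT)\.txt$">
    Require all denied
</FilesMatch>
```

Note this is hardening against fingerprinting only; it does nothing against an attacker who sprays the exploit at every host regardless.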

------
andyhnj
We did a lot of Drupal work at my last job, and I sent my ex-boss a friendly
warning about this when it was first announced, but I don't think he patched
all of their Drupal sites. I could be wrong, but I just checked the
CHANGELOG.txt on a couple of them, and they're still on 7.27.

~~~
JohnTHaller
Why on earth is CHANGELOG.txt included on a production server and publicly
accessible?

~~~
Fannon
I had to try it to believe it.

------
sam_lowry_
To make things worse, Drupal did not make the new version available through
its usual update mechanism immediately after the announcement. `drush up` did
not offer the 7.32 release for many hours, if not a day.

~~~
TacticalMalice
I believe that to be a drush issue. `drush cc drush` should have fixed it.

~~~
idbehold
Why on Earth is the list of available Drupal updates cached?

~~~
TacticalMalice
Beats me. drush is a third-party tool. The release XML was updated immediately
after the release, which is the normal procedure.

------
drawkbox
Those developers still using Drupal must really not be having fun. Drupal is
EOL software still sticking around from the days of monolithic PHP CMS
frameworks that were broken at the core (Joomla, PHPNuke, etc.). They make
developers' lives worse and lead to very little in the way of solid products.
It was good for a time; that time has passed, and there are better ways.

Keep this in mind when you pick up new monolithic/take-over frameworks: they
bloat and die. Microframeworks are the way to go. I am sad for developers lost
in Drupal dead zones. Drupal man walking!

~~~
mixologic
Have fun trying to get your microframework to satisfy the requirements of
massive governmental
organizations([http://australia.gov.au/misc/drupal.js](http://australia.gov.au/misc/drupal.js))
, universities
([http://www.harvard.edu/misc/drupal.js](http://www.harvard.edu/misc/drupal.js)),
and gigantic media sites
([http://www.nbc.com/misc/drupal.js](http://www.nbc.com/misc/drupal.js)).

I think you haven't a clue as to what Drupal's capabilities are if you've got
it even in the same ballpark as Joomla and PHPNuke.

~~~
jonahx
The parent's claim was that Drupal is monolithic, bloated, and behind the
times -- not that it was lacking in functionality.

If anything, linking to massive governmental organizations and universities
(both famous for their bureaucracy and resistance to change) that use Drupal
would seem to support his claim.

~~~
jtreminio
Drupal 8 aims to cut down on NIH syndrome by using high-quality Symfony 2
components instead of in-house libraries.

~~~
shkkmo
I've been waiting to see a Drupal 8 RC for years...

------
lectrick
My 1-project experience with Drupal is that Drupal is a clusterfuck that any
experienced developer will quickly become frustrated with (much like its
underlying PHP).

------
hberg
So glad I stopped using Drupal to build websites long ago.

------
abhishekmdb
Yeah, they said Drupal 7 was vulnerable to SQL injection, but it looks bigger:
[http://www.techworm.net/2014/10/drupal-7-vulnerable-sql-inje...](http://www.techworm.net/2014/10/drupal-7-vulnerable-sql-injection-can-leave-site-open-attacks.html)

~~~
TacticalMalice
It is a SQL-injection bug. The consequences of such a bug are pretty grave,
and include remote code execution.

------
sarciszewski
There is a lesson to be learned here, I believe:

Thoroughly vet any platform (i.e. audit as much code as you can) before you
deploy it, even if you want to believe that there are a lot of well-meaning
white-hats looking at the product. (Linus's Law, etc.)

The SQL injection vulnerability here was a little more clever than the ones
you'd normally see (e.g. `$query = "SELECT * FROM table WHERE id = $tainted";
mysql_query($query);`). All the more reason to take a closer look at the code.
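The contrast can be illustrated in a few lines of Python with the stdlib's sqlite3. The first half is the classic pattern from the parent comment; the `expand_placeholder` function at the end is a loose, hypothetical mimic of the subtler class of bug (building placeholder names from attacker-controllable array keys, as the linked patch addresses), not Drupal's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# The classic, obvious pattern: the tainted value is pasted into the SQL text.
tainted = "1 OR 1=1"
leaked = conn.execute("SELECT * FROM users WHERE id = " + tainted).fetchall()
# -> both rows come back, not just id = 1

# Bound parameters keep the value out of the SQL text entirely.
safe = conn.execute("SELECT * FROM users WHERE id = ?", (tainted,)).fetchall()
# -> no rows: the whole string is compared as a single value

# The subtler class of bug: placeholder *names* are built from array keys,
# and the keys themselves can be attacker-controlled. A loose mimic:
def expand_placeholder(sql, name, values):
    # One placeholder per key -- the key text lands in the SQL string!
    placeholders = ", ".join(":%s_%s" % (name, key) for key in values)
    return sql.replace(":" + name, placeholders)

evil = {"0); DROP TABLE users; --": "x"}
expanded = expand_placeholder(
    "SELECT * FROM users WHERE name IN (:name)", "name", evil)
# The injected key is now part of the SQL text, even though the *values*
# would have been bound safely.
```

The lesson is that "uses parameterized queries" isn't by itself a guarantee: everything that reaches the SQL string, including generated placeholder names, has to be trusted or sanitized.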

And if no one else will, I'm certainly up to the task. But I make no promises
I will publish my findings.

