Drupal Remote Code Execution vulnerability exploited widely (drupal.sh)
111 points by velmu 9 months ago | 79 comments



I work at a company specializing in Drupal services. In fact, it was one of my coworkers who discovered the Drupalgeddon2 exploit. My opinion is that people and companies should be careful about using any CMS. All of our customers have a genuine need for Drupal, but the truth is that most companies don't need an outward-facing CMS. If there is no user interactivity, I strongly recommend building the site as static HTML and serving that, if at all possible.

Maintaining a CMS is a lot of work. We all stayed up late last night patching all our supported sites. I think it's worth it for companies to completely outsource maintenance to specialized companies. These recent patches seem to back up that claim. I'd guess that the vast majority of unpatched sites are self-maintained.


The main reasons to use a CMS have nothing to do with user interactivity. It's about content sharing and reuse between pages, interlinking of pages, and site-wide configurability.

Yes, some small-company and personal web pages could be replaced by static HTML with minimal inconvenience.

But most companies benefit greatly from a CMS.

How about, instead of telling users not to use a CMS, the CMS developers start listening to the security concerns that have been brought up to them again and again for the last 20 years?


Many of those things can be done without an (internet-facing) CMS. You could, for example, use an internal CMS that generates static HTML, which is then pushed to a simple web server with no ability to execute code. This is certainly not a panacea, but it will prevent automated 0-day attacks from random botnets.


You can have the benefits of a CMS with static sites too. Just run an internal CMS and export it as static pages whenever something changes. This is fast and secure, but it breaks interactivity with the CMS.


> I think it's worth it for companies to completely outsource maintenance to specialized companies.

This isn't any kind of panacea either. It works up until the contractor/contracting company goes out of business, decides it no longer wants to work on "unsexy" products, or gets dropped because management hasn't a clue why they keep receiving invoices from a random company. I've seen variations of these scenarios play out time and again. I think more companies need to realize that they are in fact now software companies in some shape or form, take ownership of their code, and make the necessary personnel hires. The days of outsourcing everything IT are over.


A middle road is also possible. I use a self-hosted, open-source hosting system (https://github.com/omega8cc/boa) which automatically patched all my sites upon release of the advisory. This may not work for all vulnerabilities, but it was very convenient for the last two Drupal vulnerabilities.


This is terrible advice, and frankly it's a bit shocking to see, on a tech site, so little insight into contemporary CMS architecture and why it exists. Yes, UGC is a very important driver, but many organisations have an internal community of non-technical publishers and marketers whose needs are served by a CMS.

The original web publishing systems had a complete separation of back office and front end. For large publishers back in the day, one of Drupal's major innovations was the concept of a single type of user who could publish through the web front end - that enabled content to be filed and updated from outside the office, covering court cases and music festivals for example. Before they adopted Drupal, I saw a case at a major US publisher where minor changes to a site layout had to have externally conducted pen tests booked in at great cost and then go to a sign-off committee that met every 2 weeks - that's what applying strict backend/frontend separation entails.

Publishing to HTML isn't a new idea - it's how CMSes used to work, and it wasn't up to the task. Using HTML files as a caching layer, as you propose, is already available in Drupal but has terrible performance due to the number of variants that a modern site can generate (think about permutations of Drupal Views here) and how that hits filesystem limits and OS architecture. You also have the fun of cache invalidation and recompiling every HTML file that uses an upstream fragment you decide to edit - it's ultimately using the filesystem as a very inefficient version of what a relational database is designed for. Quite apart from the latency, that's why people use CDNs instead for caching static content.

Every piece of software we use has periodic bugs which are potentially catastrophic and need patching - yet nobody is saying we should switch to pen and paper instead of OS X and Windows. I shudder to think what is going on in router firmware, and yet somehow we are getting advice that a business shouldn't use a CMS - it's complete nonsense!

If maintenance is too much hassle, the model isn't to switch to static publishing, it's to switch to SaaS - which is exactly what we see. If your needs are too specific for SaaS, then by definition you are going to be creating a tailored solution and managing that in-house or outsourcing.


> I strongly recommend to anyone to build their site to static HTML and serve that if at all possible.

Are there any products that support a CMS as the backend and automatically generate static HTML as the frontend out of the box?


I recommend GatsbyJS (a React.js-based static site generator); it has plugins for both Drupal and WordPress.

Here is the location of the Drupal plugin: https://github.com/gatsbyjs/gatsby/tree/master/packages/gats...

And the headless Drupal GatsbyJS demo: https://using-drupal.gatsbyjs.org/
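
For anyone who wants to try this route, a minimal setup sketch (assuming Node.js is installed; the site name is a placeholder, and gatsby-source-drupal still has to be pointed at your Drupal instance in gatsby-config.js):

    # Install the Gatsby CLI and scaffold a new site (placeholder name)
    npm install --global gatsby-cli
    gatsby new my-drupal-frontend
    cd my-drupal-frontend

    # Add the Drupal source plugin, then configure its baseUrl in
    # gatsby-config.js before building
    npm install gatsby-source-drupal

    # Build the static HTML and preview it locally
    gatsby build
    gatsby serve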


I really like Gatsby & static site generators, especially when hosted on something like Netlify.

Have you used it with Drupal or WordPress on a larger site? To do a decoupled site, every time someone publishes a new article or a change, Gatsby has to make a ton of HTTP requests at build time, since incremental builds are not a thing yet. If you use the Drupal Paragraphs module, that can really complicate things as well. The only way around this with Gatsby that I can think of is to have Drupal create a repository of static files that gets incrementally updated. Then Gatsby could grab a compressed version of it during the build and create a new version of the site.
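
A rough sketch of that workaround idea; everything here is hypothetical (the export URL, the archive name, and the assumption that the Gatsby site is configured to read the unpacked files locally, e.g. via gatsby-source-filesystem, instead of hitting the live Drupal API):

    # Hypothetical: Drupal maintains an incrementally updated content
    # archive; fetch it once instead of making thousands of API calls,
    # then build from the local copy.
    curl -fsSL -o content-export.tar.gz \
        https://cms.example.internal/export/content-export.tar.gz
    mkdir -p content && tar -xzf content-export.tar.gz -C content
    gatsby build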

I would love to hear thoughts from anyone else though as I'm sure better ideas must exist.


I noticed this too. I wrote a WordPress plugin to trigger a TravisCI build when I publish a post, and this works great, but it takes 5 minutes or so to publish a new version of the site. For me this isn't a huge deal, although waiting 5 minutes to fix a grammar issue sucks, but for some this is going to be a bigger deal when they're used to being able to make changes instantly.
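
For context, a sketch of the call such a plugin ends up making; as far as I know, triggering a Travis CI build is just an authenticated POST to the v3 API (the repo slug and token below are placeholders):

    # Kick off a build of the static-site repo after a post is published.
    # "user%2Frepo" is the URL-encoded repository slug (placeholder).
    curl -s -X POST \
      -H "Travis-API-Version: 3" \
      -H "Authorization: token $TRAVIS_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"request": {"branch": "master"}}' \
      https://api.travis-ci.org/repo/user%2Frepo/requests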

I noticed that the Gatsby WordPress plugin hits every endpoint on your site to build the GraphQL data store; you could probably modify it to hit just the ones you need. Additionally, I feel like there should be a way to do persistent incremental builds. At the very least there should be a way to cache the GraphQL data. Maybe an incremental webpack build plugin exists.


Incremental builds via Webpack are being worked on. Gatsby is waiting on these - https://github.com/gatsbyjs/gatsby/issues/179


So what tool would be used to conveniently extract the static information from the site? Or does it still need a running instance of Drupal to function? Because that would basically still leave you vulnerable to Drupal exploits.


There are various things you can do with headless Drupal: you could put it behind a firewall, or enable access control so that only your front-end Gatsby.js app can access it.


Depending on how complex your URL structure is, you could have wget download everything from your local CMS, then rsync it to your web server. I have been considering doing this with a side project I am working on.
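
A minimal sketch of that pipeline, assuming the internal CMS is reachable at a placeholder hostname and the URL structure is simple enough for wget to mirror cleanly:

    # Mirror the internal CMS into a local directory of plain HTML/assets
    wget --mirror --convert-links --adjust-extension --page-requisites \
         --no-parent -P ./static-mirror http://cms.internal.example/

    # Push the result to the public web server, removing deleted pages
    rsync -avz --delete ./static-mirror/cms.internal.example/ \
          deploy@www.example.com:/var/www/html/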


For our corporate site, we use Drupal internally, and then use httrack and bash to package it up statically and deploy to our public web server. This works very well, and allows our marketing department to easily maintain it while keeping our server reasonably safe.
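
In case it's useful to anyone, a sketch of what such an httrack-and-bash packaging step can look like (URLs and paths are placeholders, not the poster's actual setup; httrack writes the mirror into a subdirectory named after the host, so adjust the rsync source accordingly):

    #!/usr/bin/env bash
    set -euo pipefail

    # Crawl the internal Drupal site into a static copy
    httrack "http://drupal.internal.example/" -O ./static-copy

    # Deploy the static copy to the public web server
    rsync -avz --delete ./static-copy/drupal.internal.example/ \
          deploy@www.example.com:/var/www/html/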


I use Jekyll along with Jekyll+ [1] for the CMS. Put it behind CloudFlare and you have a very professional solution.

It supports multilingual content, custom content types, media... Most of what you'd do with any other CMS.

For reference, it runs the new Starbucks website [2].

1: https://github.com/Wiredcraft/jekyllplus

2: https://wiredcraft.com/blog/the-new-starbucks-cn-website-bol...


forestry.io works with Jekyll/Hugo.


+1 to Forestry. Lets me build lightning fast static sites for people that they can easily edit. I recently deployed a site that uses https://lunrjs.com/ to also allow for quick browser-side site searching.

An honorable mention goes to Netlify CMS. I just wish they had documentation on how to host the service yourself.


You can get started pretty quickly with Jekyll+ [1] and self host it at no cost [2] using now [3].

1: https://github.com/Wiredcraft/jekyllplus#quick-start

2: https://github.com/Wiredcraft/jekyllplus#installation--devel...

3: https://zeit.co/now


Movable Type?


I offer free hosting and administration for a non-profit's Drupal site. It has been a wild month. A few weeks after the March RCE issue, the Drupal security team announced that the bug was being actively exploited in the wild. For the RCE announced yesterday, they sent a heads-up on Monday about the upcoming critical security release. I was on stand-by and applied the patch within minutes of its release yesterday. And sure enough - it was still rated "Critical" when I applied the patch, but within only a few hours the ongoing exploits had been adapted to take advantage of this new RCE and the bug was re-assessed as "Highly critical".

Kudos to the Drupal security team for their work. I bet it's not the easiest codebase to work with, but at least the security team are doing their job well.


My biggest gripe with tools like Drupal (& WP and other FOSS CMS like things) is that updating them often seems to break things. You have a custom layout and upgrade? Shit breaks. Have some custom plugins? Shit breaks. After the first couple of updates, your client no longer wants to pay you to fix things after an upgrade, so you stop upgrading. Inevitably a remotely executable flaw is found, and you're now fucked.


While I do agree, Drupal does publish just the patch for these critical updates, for both D7 and D8. You can patch an old version of the site pretty quickly without upgrading or working the inevitable core hacks back in. Each of the big three Drupal fixes has been maybe 30 lines of code across 3 or so files, IIRC.

Now if they aren't willing to even pay for that minimal level of service I have little sympathy.


There are so many people out there who pay someone 500 EUR/USD for a simple Drupal website, and then expect to pay maybe 50/year for hosting, but are entirely unwilling to pay for anything else. After all, they can get dirt-cheap hosting or free WordPress sites all over the internet, so paying anything at all makes them feel like a good customer.

You get what you pay for. Unfortunately, many of those unpatched websites end up causing trouble for others...


> Nowadays Drupal is mostly marketed as a robust enterprise tool for markets to hold critical data

Who markets it as this? That's utter madness!


The enterprise market is a huge one for all involved in the chain.

Drupal loves it because their tools get used on large-scale websites, and it differentiates them from the likes of WordPress. Agencies love it because the enterprise market usually means a larger budget than what you would get for a standard site. Clients love it because they feel they are paying for quality.

Of course, there's nothing stopping a development team from using a standard CMS or framework to build this site. In fact, I've had far more success with the likes of Umbraco than with enterprise-level systems like Sitecore, and we've delivered the same marketing features the enterprise systems use as a selling point. The problem is that a lot of clients explicitly say they want a site built in Drupal, Sitecore, Episerver, etc, and in an agency or in-house environment it's often not the developers making the tech decisions...


This is what millions in enterprise funding gets you. A decade+ of sunk cost fallacy.


What are the alternatives in ""enterprise"" CMS, though? Would you rather use Sharepoint?


When someone suggests an enterprise CMS, the usual best case outcome is to go back to the requirements analysis and remove enough requirements until an enterprise CMS is not required.

I've done my fair share of work on the things over the years and the outcome has never been the best one for the organisation.


Ah yes, training up people to handcraft HTML every time a new press release needs to go up on the site, or a VP changes on the about-us page. Not fun.


There are apps that generate static websites.


By that definition, everything that has a WYSIWYG editor is an "enterprise CMS"?


Hire someone to do it. It's cheaper than what an enterprise CMS costs to run and/or commission.


Let's see: my WordPress install, running a well-known theme, using Wordfence to alert for plugin updates, with me keeping an eye on it, cost about £2,000 to set up and has ongoing costs of about £1,000 a year.

No. It's not cheaper to hire someone to hand-code the site and then hand-code every change.


If you're doing that, just use wordpress.com. Wordpress is way out of scope for the term "enterprise CMS".


So we've gone from "there's no need for a CMS, write HTML flat files" to "Use a hosted CMS" - OK.


Sitecore and Episerver are quite "popular" (probably not really the right word) in some circles.


People keep trying to make it so you can build complex sites and applications without coding, and in the end, what you end up with is far more complicated, fragile and inflexible than if you had just stuck with a normal development pipeline. Drupal's insistence on being both a product and a platform out of the box is what got them into this mess in the first place. They overestimated how much different use cases have in common, so they had to bolt on late-binding and lazy evaluation all the way up into the presentation layer.

I don't know if this is solvable. A CMS is complex enough that each of its functionalities deserves the care of good product and technical designers. These pieces ought to then be integrated into a cohesive whole, a la carte. For desktop software, we were able to pull this off pretty well, with 2000s OS X as the most cohesive, successful attempt, dictated by strong design guidelines and solid enough tech... And note that they invisibly transitioned CPU architecture along the way! But when online collaboration and multi device access became a requirement, everyone flailed and forgot the lessons of the past.

I don't think the current software market is capable of something like that. Almost everyone is trying to make captive SaaS where compatibility only exists on a service-to-service basis, in a subordinate manner. Branding is more important than cohesion, and flexibility and interoperability have taken a back seat to dumbing down.


I really need to take a day or so and stick those 10-odd static pages into a Jekyll theme to get rid of this ageing D7 site I am maintaining. This endless stream of updates is getting very tiresome.

The price one pays for being somewhat lazy (and bad at designing websites).


Am I reading the announcement correctly, that only registered users on the site can exploit this specific vulnerability?

The seriously scary part about the previous vulnerabilities like Drupalgeddon was that anyone could exploit them without having an account on the site.


There used to be many Drupal-based sites that offered anonymous registration. The last time I worked with Drupal was 5 years ago, so maybe the situation has changed.


> To make matters worse for Drupal security records, the vulnerability is being actively exploited hours after the patch was released by the Drupal core team.

This is true for Drupal 7, but not for Drupal 8. Nothing in the wild has surfaced regarding SA-CORE-2018-004 for D8 yet.


A giant warning label should be required for Drupal, Wordpress & any CMS that offers a large market of plugins.

"Plugins are dependencies that if not maintained actively, may cause you to fall behind on updating the core CMS. Not keeping your CMS up to date will crush your website if an emergency security patch comes out & you can't update to it."

I get that this could be said of any framework or code base, but as someone whose job is to update a couple of Drupal sites, I've almost been burned this way too many times. Content editors rely on a plugin. That plugin isn't getting updated as fast as it should & Drupal introduces breaking changes, so the plugin won't work with the new Drupal version. We have to hold back an update until the plugin gets patched. Fortunately I had just finished updating our sites before this security announcement.

The best advice I can give to anyone managing multiple Drupal sites would be pick your plugins carefully, make sure you have a testing server, make sure you can set aside some time each month to run updates & fix the many issues created by those updates, and create integration tests.


It's ugly, but I updated just a few minutes ago, although the patch had been available for 16 hours. The thing is, `drush up` did not show available updates for me yesterday, and I was hesitating between going to bed and manually applying the patch. In the end, I went to bed.

It's a pity the Drupal infrastructure was not up to the task of distributing updates to everyone at the moment of the announcement.


What you were probably missing is `drush pm-refresh`.

drush only refreshes its information about the latest available versions every few hours. Which... in times of urgent, severe vulns that need to be patched immediately, is probably something they should reconsider.
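
For reference, a quick sketch of the sequence with Drush 8-style commands (backups and site aliases omitted):

    # Force drush to re-fetch release data instead of using its cached,
    # possibly hours-old copy, then apply the core update.
    drush pm-refresh
    drush pm-updatestatus     # confirm the new core release shows up
    drush pm-update drupal    # same as `drush up drupal`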


Similarly, if you use Composer, it can cache things for up to 10-15 minutes. Also, Drupal.org’s subtree split process takes 5-10 minutes to push out a new core update to Packagist, so if you wait for the update using Composer it could be 20-30 minutes.

The most reliable method is to use Git, Drush, or the patch plugin with Composer to apply the patch immediately, and push it out to your servers. Then update to the latest core version and push that out to your servers once it’s ready.

The security team usually links to the raw patch file for each new version in the CVE.
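
A sketch of that immediate-patch route; the patch URL below is a placeholder for whatever the advisory actually links to:

    # Grab the raw patch linked from the security advisory (placeholder URL)
    cd /var/www/drupal
    curl -fsSLO https://example.org/drupal-sa-core-2018-004.patch

    # Apply it and deploy right away; the proper core update can follow
    # once Composer/Packagist have caught up.
    patch -p1 < drupal-sa-core-2018-004.patch   # or: git apply drupal-sa-core-2018-004.patch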


Looking over the code pre-patch, can someone with more knowledge explain to me how that allows RCE?



Having been a Drupal user since 2000 (yes, that's 18 years), I am now moving all my sites to WordPress. The main reason is that I never liked Drupal 8.

That being said, Drupal 8 might be a good fit for large sites that need flexibility and customization; for most (smaller) projects, though, WordPress seems the better choice these days, and its popularity proves that.


I was pretty involved in the Drupal community for a while (permanent member of the DA, multiple modules, running a Drupal dev shop...) but left a while ago [1].

Just move to Jekyll or similar. Combined with a CMS like Jekyll+ [2], you can create very large sites and apps without having to bend a bloated stack to do what you want it to do in the front-end. You'll spend less time configuring things in an admin panel, and more time actually investing in the UX.

For reference, we used it to build the new Starbucks website [3].

1: http://teddy.fr/2013/01/25/dropping-the-drop/

2: https://github.com/Wiredcraft/jekyllplus

3: https://wiredcraft.com/blog/the-new-starbucks-cn-website-bol...


Have you tried Ghost or Netlify's CMS? I would highly suggest them for small sites over Wordpress. I find Wordpress runs into many of the same issues as Drupal.


I've heard about both.

Ghost looks like it's geared more toward its hosting service, similar to wordpress.com. I want to self-host, so it is not the best option.

A quick check did not reveal how Netlify does its CMS, e.g. whether a post can be public/private/accessible by permission. Note that by CMS I mean something where I can assign various levels of access rights, i.e. content management (instead of just publishing). Most CMSes are geared towards public viewing, which is what a blog does, but a CMS should do more.

Drupal, with its access module, got CMS right (still not built-in though). WordPress is similar: the default is public viewing, but there are plugins to control each post's access rights. I have yet to see an OSS CMS that doesn't need modules/plugins to make it a true CMS.


You can host Ghost on your own but it doesn't have much for user access levels.

Netlify offers something but I haven't used it - https://www.netlify.com/docs/identity/


Will formal verification reverse the trend of old root bugs as a law of nature?

We applied the D6 patch immediately, 911b/admin/reports/status says 6.35. The avg active acct age is >10y. Anon users can do nothing. Is there anything else to fix? Moved to Drupal 04/2005.

What is the longest running D* site?


> Will formal verification reverse the trend of old root bugs as a law of nature?

> We applied the D6 patch immediately,

Formal verification can do nothing to help people who run software that's been outdated and unsupported for several years.


IDK why you seem annoyed; Drupal is so far from formal verification that it's obviously a different subject... this bug didn't care what D version you were on anyway. There are many advantages to staying behind the upgrade curve, especially with simple software.

You can spend a bunch of time going from 6 to 7, or you can spend almost the same amount of time going from 6 to 8. It's not like there are major features the users need, most of the work goes into not breaking existing customization they expect.


Is there a patch for Drupal 6, as there was for SA-CORE-2018-002?



Is there some pattern or rule you can put into Varnish/CDN/nginx to prevent this?



You really need a Layer 7 kind of thing to be able to inspect the actual payload of the request. I don't think any of those can do that. You'd need an actual WAF.


Cloudflare was able to protect against Drupalgeddon 2

https://blog.cloudflare.com/keeping-drupal-sites-safe-with-c...


That's a WAF layer though. I don't know of a way to do this with Varnish or a straight CDN.


There is this shit https://github.com/dreadlocked/Drupalgeddon2 that I saw in my logs.

How do I report it to GitHub?


I'm not sure why you think GitHub ought to take down code for a proof of concept.


It's not exactly a proof of concept; there is also a version that allows you to execute anything ("as in wget a backdoor"). Look at the code - with this code you can execute anything on the target/victim.

Hell, look right at the message it prints when the target is vulnerable:

Good News Everyone! Target seems to be exploitable (Code execution)! w00hooOO!


Pretty much any non-obfuscated PoC that anyone could come up with for this vulnerability would be trivial to adapt to run malicious code. This doesn't really lower the bar for anyone, and as long as there's no malicious payload, it doesn't seem legally or morally wrong.

(This is not to say that you couldn't end up in a situation where you'll spend a few years and a truckload of money in a legal fight because you've released something like this.)


That's why I didn't work on Drupal after 2012.


What do you work on then?


I legitimately wish there were procedures and infrastructure in place for the core teams of things like Drupal to exploit such RCE vulnerabilities themselves (finding installations by the standard means that the bad guys also use), with a payload that applies a suitable patch to fix the vulnerability (in a very cautious way, e.g. only if the entire file to be patched matches an expected checksum), and then email the site owner if possible to declare what has been done.


My hoster does exactly that. It automatically patches security vulnerabilities on all hosted sites for WordPress, Joomla, Drupal and osCommerce.*

The feature is activated by default, and you get informed when one of your instances is automatically patched. It works very well.

Apparently they bought this patching solution from an outside vendor; I wasn't able to find the name of the product though.

It's taken quite a load off my mind...

* Details in German, no affiliation: https://www.cyon.ch/support/a/wie-funktioniert-das-automatis...


Take what you're describing, make it secure and legal. Then what you have is called automatic updates.

Which is exactly what Drupal should do and Wordpress did many years ago. It works.


That auto-updating is even possible from the web server process is itself a very severe security vulnerability. I haven’t used Drupal since the 6 days, but back then and earlier the recommended deployment policy had the files directory (for uploads) as the only directory that the server process could write to—the code was definitely to be read-only. I think this was also checked by the system so that it would produce a warning were it not so.
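
That policy mostly comes down to file ownership: the code belongs to a deploy user and the web server user only gets write access to the uploads directory. A rough sketch, assuming a typical Linux setup with www-data as the server user and placeholder paths:

    # Code owned by a deploy user; the web server gets read-only access
    chown -R deploy:www-data /var/www/drupal
    find /var/www/drupal -type d -exec chmod 750 {} \;
    find /var/www/drupal -type f -exec chmod 640 {} \;

    # Only the uploads directory is group-writable by the web server process
    chown -R deploy:www-data /var/www/drupal/sites/default/files
    find /var/www/drupal/sites/default/files -type d -exec chmod 770 {} \;
    find /var/www/drupal/sites/default/files -type f -exec chmod 660 {} \;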

In such a world, RCE isn’t quite so scary. Not quite. (Yeah, PHP code in the database and all that.)

In practice, shared hosting doesn’t tend to take kindly to genuine read-only-ness, and so the grand ideal of not being able to inject persistent code doesn’t work quite so well.

I really don’t like the way Wordpress does it, but the way Drupal does it also isn’t great. I don’t like the security models of any of these PHP things.


I use WordPress CLI and do the automatic updates with cron. I host on Digital Ocean, and the www-data user can’t write to any WordPress directory.
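
For anyone curious, the cron job can be as simple as a small script run as the user that owns the WordPress files (not www-data); the paths and exact commands below are a sketch, not the parent's setup:

    #!/usr/bin/env bash
    # update-wordpress.sh - run from cron (e.g. hourly) as the file owner
    set -euo pipefail
    cd /var/www/wordpress

    wp core update --quiet
    wp plugin update --all --quiet
    wp theme update --all --quiet
    wp core update-db --quiet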


Good intentions but honestly, that would be a legal nightmare.


I can easily see why such a thing is unlikely to happen; yet it is demonstrably pro bono—the bad guys will exploit it, and you are simply protecting the innocent.

Such is the sad state of humans.


Well, you could make it an opt-in service with all the legal disclaimers applied.


In that case it’s completely useless.



