I assume Digital Ocean is using consumer SSDs, and it feels like it shouldn't be a problem with the exception of the noisy-neighbor issue.
My question was along the lines of: Using consumer HDs in servers is a disaster because the 24/7 read workload eventually breaks spinning platters. Server grade magnetic disks are a must in servers. Consumer grade SSDs are acceptable in servers because they don't wear out from 24/7 reads. Consumer SSDs fail from constant deletes+writes. Server workloads don't produce many deletes, therefore it is safe to put consumer SSDs in servers.
Is the above correct? Is the premium for server grade SSDs a myth? Should I feel safe using Digital Ocean under the assumption they are using consumer SSDs for multi-tenant servers?
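For ballpark intuition on the "consumer SSDs wear out from writes" claim: endurance is usually rated in TBW (terabytes written), so you can estimate a drive's rated life from your write rate. A quick sketch (both numbers below are illustrative assumptions, not any vendor's real spec):

```python
# Rough SSD endurance estimate. The figures used are illustrative
# assumptions, not real vendor specs.
def endurance_days(tbw_terabytes, writes_gb_per_day):
    """Days until the rated write endurance is exhausted."""
    total_gb = tbw_terabytes * 1000  # TB -> GB
    return total_gb / writes_gb_per_day

# A hypothetical consumer drive rated for 150 TBW, under a
# write-light server workload of 50 GB/day:
days = endurance_days(150, 50)
print(days)                   # 3000.0 days
print(round(days / 365, 1))   # 8.2 years of rated endurance
```

Under those assumptions a read-heavy workload takes most of a decade to exhaust the rating, which is the intuition behind the question.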
Did you read what I wrote? Do you mean delete-intensive?
Different workloads can somewhat affect how much write amplification results from wear leveling, but the best case there is actually to have no long-lived data on the drive, i.e. lots of deletes.
Modern flash drives do indeed try to level writes across all bits, but he's talking about workloads that don't rewrite at all.
And a server properly provisioned for its workload shouldn't be hammering its disks constantly, since loads tend to be spiky. The disks should be busy during peak periods but much less busy the rest of the time.
Which is also possible in the "new school" camp…
Random IO is processed first through the SSDs (the thing that they are really good at) while sequential IO short-cuts to the hard drives - which is pretty slick.
I'm curious, what do you use to develop something like that? Is it built on top of something? Built into the kernel? I wouldn't even know where to begin...
Then, 'random' reads will usually come from the SSD, but those that happen to be under-the-head (as most often in long sequential reads) would come from the HD.
I don't know if usual RAID software/controllers are already optimized for such wildly-different device response speeds (as opposed to uniform devices).
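To make the routing idea concrete, here's a toy dispatcher (my own illustration, not how any real hybrid tier like bcache or L2ARC is implemented): track the next expected offset per stream, and if a request continues the run, call it sequential and send it to the HDD; otherwise treat it as random and send it to the SSD.

```python
class HybridRouter:
    """Toy dispatcher: sequential IO -> HDD, random IO -> SSD.

    Illustrative sketch only; real block-layer tiering is far
    more involved (run-length thresholds, writeback, promotion).
    """
    def __init__(self):
        self.next_expected = {}  # stream id -> next sequential offset

    def route(self, stream, offset, length):
        sequential = self.next_expected.get(stream) == offset
        self.next_expected[stream] = offset + length
        return "hdd" if sequential else "ssd"

r = HybridRouter()
print(r.route("a", 0, 4096))          # first touch: random -> ssd
print(r.route("a", 4096, 4096))       # continues the run -> hdd
print(r.route("a", 1_000_000, 4096))  # seek away -> ssd
```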
Have not had a single issue, bar some packet routing at AMS1 that just led to latency for a few minutes. Their API is getting quite nice to tie into, though I've had to resort to screen scraping for some of the newer options.
Looks like DO is earning their keep. I'm planning to move a forum off a shared host, so VPS was the first place I looked. They were recommended to me, but I only had one other first hand account of them until now.
How satisfied is everyone with Hetzner? I have a few friends that run setups on their systems, mostly for heavy forums. I'm more interested in how you deal with large backups. Seems to me it's just easier to buy machines in sets for redundancy and every now and then move things over to Glacier.
I used them quite a bit (and still do), however even with Cloudflare my tests were a bit sluggish here in Perth, AU.
You probably meant the reverse of that (With write-back caching... etc.).
Maybe you'll notice, maybe you won't. There's a place for good enough, and any front-SSD cache is certainly a big improvement over none. But I second the notion that you probably want a performant SSD, and low-capacity ones are often heavily compromised in this department.
Chicks? Really? Come on Linode, you should be better than this.
What would you guess is the average interval between accesses for any given byte-on-disk a Linode customer has? If it's hours, days, or weeks, I'd call it reckless to be spending money to put those bytes on expensive SSD systems.
Make the hot stuff fast, be price conscious with the rest.
PHPBB? What year is this?
The website design and structure seriously need some thought and work.
Longview: either make it trial-only or bump the free-tier retention to 24 hours. What is the point of a 30-minute graph?
Get rid of the add-ons; that pricing is just plain stupid.
Give options to increase memory without buying new plans, at $10/GB up to double the current capacity. So a $20 1GB plan could be increased to 2GB for $30, with everything else the same as the $20 plan. That alone would make Linode competitive against DO.
Linode CDN: a CDN served from those 6 Linode DCs, with data coming off your transfer pool. Maybe any data served over the CDN could count as triple the amount against your pool.
SSD speed-up: from the data on ServerBear, this new SSD tier is performing as well as its competitors'.
I am sure the NodeBalancer could do with a price decrease or a bump in concurrent connections.
DO has all of the above in the pipeline for release this year, so let's hope Linode reacts quickly.
Any idea why the sequential benchmark numbers improved 4-5x when it is still "short-cutting" to the HDs?
About 8 months too late.
I run a small blogging network. Linode have upgraded RAM, added cores, added disk space. They've done everything except improve random-access I/O, which is a major bottleneck for Wordpress installations thanks to MySQL's penchant for joining tables on disk regardless of indices.
I moved to DigitalOcean about 8 months ago simply to get access to SSDs. In most other respects I preferred Linode.
wp_supercache, nginx, varnish, etc. Rinse and repeat.
I have three words for you: Edge Side Includes.
More info here: https://www.varnish-cache.org/trac/wiki/ESIfeatures
I think it would work well if the Recent Comments widget was modified to spit a HTML fragment to predictably-named files that varnish could pick up and include with ESI.
ESI sounds like it would couple your web application code to your cache. This sounds negative to my ear. How about modifying the Recent Comments widget to work with an IFRAME or some AJAX? It adds another request to the server, but now both requests can be cached and compressed.
An ajax solution isn't a bad idea, though it would mean at least hitting PHP (whereas the Varnish option never gets that far).
WP Supercache is a hack anyway, for folks running WP on shared hosts without root. If you have root, there's a plethora of better things for caching, even something as ancient as Squid as a reverse proxy. You can get your MySQL traffic down to <1 QPS fairly trivially, no matter what kind of traffic is hitting the frontend.
Don't forget wordpress.com is a huge MU installation, and they've existed since before SSDs became popular. The disk is not your issue here.
When I looked at where the slow runtimes were occurring on Linode, it was always jammed on disk I/O and it was always on PHP functions that are reaching into MySQL.
In my experience the MySQL query cache + an object cache do more for sites with a Recent Comments widget than whole page caching.
As it happens, I do all of the above. And I was doing all of the above. And still getting jammed on I/O. Because MySQL likes to join on disk. Whole page caching is useful only if you prevent that from happening. It's useless if the cache is rendered invalid every few seconds on a chatty site.
set beresp.ttl = 30s;
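The point of a short TTL like that one: even a page that's invalidated constantly only regenerates once per TTL window, so almost no requests reach PHP/MySQL. Back-of-envelope math (the request rate here is a made-up assumption):

```python
# Back-of-envelope micro-caching math; the request rate is an
# invented assumption, the 30s TTL matches the VCL line above.
ttl_seconds = 30
requests_per_second = 100

# At most one request per TTL window regenerates the page.
backend_hits_per_hour = 3600 / ttl_seconds
total_requests_per_hour = requests_per_second * 3600

print(int(backend_hits_per_hour))   # 120 backend hits/hour
print(total_requests_per_hour)      # 360000 total requests/hour
print(backend_hits_per_hour / total_requests_per_hour)  # ~0.03% reach the backend
```

The trade-off is that Recent Comments can lag by up to 30 seconds, which may or may not be acceptable for a chatty site.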
There's also the possibility that you had shitty neighbors.
If your page is basically static, then yes, whole-page caching will fly. But several of the sites under my supervision are, to quote Pagely's founder, used "like a chat room".
Edit, per your edit:
> I'm just saying you could have made this work on Linode (and I have), but I do see your post-purchase rationalization at work, so I know anything I say will be fruitless anyway.
Basically, I was there, I saw the numbers and I know why they wound up looking the way they did. I suspect that anyone in my particular situation would have evolved their approach in the same way that I have. I've been running Wordpress blogs since 2004. I feel that I've picked up some ideas on how to make it fast, but sometimes the general solutions don't work because you have a specific problem.
I completely understand how this could be a problem and how switching providers would fix it.
1. Recent Comments invalidates every page it appears on whenever a comment is posted in any thread. In practice that means that the entire site cache is invalid. That breaks whole-page caching models.
2. This means that Wordpress will regenerate from scratch.
3. This means first of all generating the page, which joins multiple tables including TEXT fields. Because of the brilliant design of MySQL, these joins ignore indices on the joining fields and frequently the join will occur on disk.
4. The Recent Comments plugin also causes joins on disk because it too refers to tables with TEXT fields.
5. The query cache helps a lot, but the site on Linode was still observably jammed on I/O, even when MySQL was given an entire server to itself.
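A minimal model of point 1 (purely illustrative, not real Wordpress code): a whole-page cache where the Recent Comments widget appears on every page, so any new comment flushes everything and the hit rate collapses on a chatty site.

```python
class WholePageCache:
    """Toy whole-page cache with a sitewide Recent Comments widget.

    Because the widget appears on every page, one new comment
    invalidates the entire cache. Illustrative sketch only.
    """
    def __init__(self):
        self.pages = {}
        self.hits = self.misses = 0

    def get(self, url, render):
        if url in self.pages:
            self.hits += 1
            return self.pages[url]
        self.misses += 1
        self.pages[url] = render(url)
        return self.pages[url]

    def on_comment_posted(self):
        self.pages.clear()  # every cached page shows Recent Comments

cache = WholePageCache()
render = lambda url: f"<html>{url}</html>"
for _ in range(10):
    cache.get("/post-1", render)
    cache.on_comment_posted()  # a comment lands between requests

print(cache.hits, cache.misses)  # 0 10: every request regenerates
```

With comments arriving faster than the cache can be reused, every request falls through to steps 2-4 above.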
However, if you feel you can do it better, I am happy to engage your services as a fulltime replacement. WPEngine said they could do it for $250/month (they couldn't). Pagely said they could do it for $149/month (they couldn't). I invite your bid.
Yeah, I spent this entire thread clueless about the issue you're running into, even though you spelled it out a few different times because you think I don't get it. Wordpress falls over under normal site load, film at eleven.
Since you want to switch to condescension, I'm assuming wise sir moved MySQL's tmpdir to a RAM disk and found that unsatisfactory for his mystical, MySQL-breaking SELECT/INSERT workload? Also, I'm far more expensive, and I know that WPEngine is multitenancy Wordpress on Linode in the backend. (That one's free.)
You think I'm an idiot. Possibly you think I'm a liar.
I don't think you're an idiot. All I can do is point out that I looked at the numbers, I've tested various strategies or tools (and adopted most of them), I referred the problem to the experts, and this is where I've had to go.
So let's just ignore each other from now on.
- I've been using WP Supercache since it was released
- [I've been using] Nginx for I think 4 years at this point
- Basically, I was there
- I saw the numbers and I know why they wound up looking the way they did
- I've been running Wordpress blogs since 2004
- You clearly don't understand what my problem is
How do you respond? "Let's ignore each other." So now I'm left wondering if you genuinely don't know how to scale MySQL and have tired of appealing to your own authority to prove me wrong. What I'm telling you is that the notion of your blog network creating a workload MySQL can't handle on commodity disk is completely ridiculous, and I'd laugh you out of an interview if I pressed you like this. I think you gave up; I wasn't going to say it, but now that you've gone at me like this, I will. You're basically saying you couldn't make MySQL work with a <50 QPS write load (I refuse to believe you're writing more than 50 QPS to MySQL) because of some TEXT columns.
I'd have far more respect for you if you'd just say, yeah, I probably could make MySQL keep up with my blog workload, I just didn't put much effort into it and bought SSDs on a provider I don't prefer instead.
(But wait: I don't understand. Username oddly appropriate.)
I regret now being such a grump about it. But nothing you've so far suggested is new. I felt lectured down to and I felt supremely pissed off by it.
My remark that we should stop talking was because it was becoming increasingly acrimonious and I didn't see the point in further e-peen waving.
> you deduced this was the case by inspecting created_tmp_disk_tables
> I then asked if you tried removing the disk from the picture by creating a RAM disk
I did in 2007, actually, on a physical server I had access to. It would reliably lock up the DomU. I might not have been the only one. I think I moved to Linode in 2008.
> So now I'm left wondering if you genuinely don't know how to scale MySQL
Entirely possible. I have as little to do with MySQL as I can. When the site slows down I learn a little more.
Take for example the documentation you referred to, in particular:
Some conditions prevent the use of an in-memory
temporary table, in which case the server uses an
on-disk table instead:
* Presence of a BLOB or TEXT column in the table
> What I'm telling you, is the notion that your blog network creating a workload for MySQL that it is incapable of operating on commodity disk is completely ridiculous, and I'd laugh you out of an interview if I pressed you like this.
I didn't believe it either. Yet there it was, chewing up disk. I got a lot of relief from implementing various caching strategies, switching web servers and so on and so forth. But eventually it was consistently bottlenecked on the database. So I broke the site into two servers, which gave me a few more years. But eventually it was, again, bottlenecked on MySQL.
> You're basically saying you couldn't make MySQL work with a <50QPS write load (I refuse to believe you're writing more than 50QPS to MySQL) because of some TEXT columns.
I didn't say that it's inserting. I'm saying that it creates temp tables on disk to satisfy fairly standard page and widget queries. If you thought I was talking about insertions then I can understand your skepticism.
88 QPS since last restart, FWIW. Hardly the world's biggest installation. About 90% of queries are served from the query cache; but of those that aren't, around 44% of joins are performed on-disk. That's pretty much what I've seen every time I look: around 45% of joins going to disk.
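That on-disk ratio comes straight out of MySQL's counters. A sketch of the arithmetic (the counter names `Created_tmp_tables` and `Created_tmp_disk_tables` are real MySQL status variables; the sample values below are invented to match the figure quoted):

```python
# Ratio of implicit temporary tables that spilled to disk, from
# MySQL's SHOW GLOBAL STATUS counters. The counter names are real
# status variables; the sample values are invented for illustration.
status = {
    "Created_tmp_tables": 10000,
    "Created_tmp_disk_tables": 4400,
}

ratio = status["Created_tmp_disk_tables"] / status["Created_tmp_tables"]
print(f"{ratio:.0%} of temp tables went to disk")  # 44% of temp tables went to disk
```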
> I probably could make MySQL keep up with my blog workload, I just didn't put much effort into it and bought SSDs on a provider I don't prefer instead.
At the start of this thread you said that you've had varnish serve 3k RPS in front of a Wordpress instance on modest hardware. I agree that such performance is doable, even quite straightforward, for the common use case.
But if you take away caching, Wordpress is not quite so performant. And that's my problem; the whole-page caching strategy that makes thousands of RPS fairly straightforward doesn't work for me, because Recent Comments invalidates the entire cache.
So I have two choices: either I do without that particular widget and let varnish or nginx serve up what are essentially static pages 95% of the time (and I have an nginx rule that does this with the gzipped pages that WP Supercache writes to disk).
Or I can accept that, because of the unusual pattern of usage, I am closer to the uncached baseline than most Wordpress installations are. Because the bloggers I host asked nicely, I have chosen the latter.
Putting my own anger down for a minute, I am happy to take any other advice you have. I projected onto you my own frustration.
I've been looking at using DO instead of Amazon; the main stumbling block for me is that I can't figure out whether they offer any configurable firewall, i.e. I want to modify port rules, mostly to block them.
Does DO offer this, and how have you found them so far?
My use case is similar to yours - hosting for multiple shopping carts running on MySQL, hence the appeal of SSDs.
You can just bind to localhost if you don't want things to be open to the world, or modify iptables.
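Binding to loopback, in a few lines (a generic socket sketch; your actual service will have its own bind/listen config option):

```python
import socket

# A service bound to 127.0.0.1 is unreachable from other hosts,
# so it needs no firewall rule at all. Generic illustration.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)

host, port = server.getsockname()
print(host)  # 127.0.0.1 -- only local clients can connect
server.close()
```

For anything that must listen on a public interface, iptables on the droplet is the usual answer, since DO had no managed firewall at the time.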
Your other issue in Australia is the latency to DO's servers - ~200ms for the US and 350+ for a European droplet. Cloudflare may help there, though.
Roughly 160ms from East Coast AU to DO's LA droplets, most of us are used to the extra milliseconds by now :-)