Linode SSD Beta (linode.com)
177 points by kbar13 on Sept 6, 2013 | 81 comments



I know there is this theory that server hardware needs to be more durable and therefore you should pay an order of magnitude more, but all of my server workloads write somewhat frequently, read randomly, and delete almost never. It is my understanding that commodity consumer SSDs should work fine for this workload.

I assume Digital Ocean is using consumer SSDs, and it feels like it shouldn't be a problem with the exception of the bad neighbor issue.


Actually I thought it was going the other way: "Old school" is that server hardware should be reliable so it doesn't go down, and "New school" is that hardware should be cheap and there should be a lot of it so that if one server goes down we don't care.


Both DigitalOcean and Linode are in the "old school" camp. They are in the business of providing reliable hosting at good prices.

My question was along the lines of: Using consumer HDs in servers is a disaster because the 24/7 read workload eventually breaks spinning platters. Server grade magnetic disks are a must in servers. Consumer grade SSDs are acceptable in servers because they don't wear out from 24/7 reads. Consumer SSDs fail from constant deletes+writes. Server workloads don't produce many deletes, therefore it is safe to put consumer SSDs in servers.

Is the above correct? Is the premium for server grade SSDs a myth? Should I feel safe using Digital Ocean under the assumption they are using consumer SSDs for multi-tenant servers?
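For what it's worth, you can at least watch how fast a given SSD is actually wearing via SMART. A rough sketch with smartmontools (attribute names vary by vendor, and /dev/sda is just a placeholder):

    # dump the vendor SMART attributes
    sudo smartctl -A /dev/sda

    # on many drives the interesting counters are Media_Wearout_Indicator,
    # Wear_Leveling_Count and Total_LBAs_Written
    sudo smartctl -A /dev/sda | grep -Ei 'wear|lbas_written'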


Some workloads are write-intensive and some aren't. For a hosting provider there's no way to know what the customers are going to do. I would expect that SSDs attract customers who are going to actually give them a workout, though.


> Some workloads are write-intensive and some aren't.

Did you read what I wrote? Do you mean delete-intensive?


I don't think there's any truth to your notion that deletes are somehow worse for a solid state drive than other kinds of writes. Overwriting a sector has the same effect on longevity as erasing it and filling it up again, but in the latter case, you can use the ATA TRIM command to defer the flash block erase latency (which is significantly higher than the flash program latency). The only way in which deletes are "bad" is if you're comparing to a workload that fills the drive once and then moves on to fill a different drive - but that's not a fair comparison against doing all the writes to the same drive.

Different workloads can somewhat affect how much write amplification results from the wear leveling, but the best-case there actually is to have no long-lived data on the drive, ie. lots of deletes.
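On Linux, TRIM in practice is either the discard mount option or a periodic fstrim. A minimal sketch (device and mount point are placeholders):

    # one-off (or cron-driven) TRIM of a mounted filesystem
    sudo fstrim -v /

    # or mount with continuous discard, trimming on every delete
    # (can add latency on some drives)
    sudo mount -o discard /dev/sdb1 /data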


I think he's assuming that since it's rewriting flash-memory cells that wears them out, if you don't delete data, then you don't rewrite. So without deletes, all writes simply fill more of the drive and you shouldn't see degradation.

Modern flash drives do indeed try to level the writes across all cells, but he's talking about workloads that don't rewrite at all.


I wouldn't call an application that writes once and never deletes anything on the hard drive a write-intensive workload. Filling up a SSD with write-once data means paying well over 50¢/GB for your writes, which isn't something you can do if you are "write-intensive" -- if you're writing 100x the capacity of the drive over its lifetime, then you are really write-intensive, at which point the distinction between delete-intensive and write-intensive is nearly nonexistent for obvious reasons.


All the dedicated server providers I know do in fact use "consumer" HDs in their systems, but allow you to spend more for "server" HDs.

And a server properly equipped for its workload should not have its disks in heavy use all the time, since loads tend to be spiky: busy during peak periods, much less busy the rest of the time.


> They are in the business of providing reliable hosting at good prices.

Which is also possible in the "new school" camp…


Reading, writing, and deleting all degrade SSDs due to wear leveling. It's a common misconception that only deleting files degrades SSDs. I'm guessing this is what Linode is getting at here.


This sounds pretty cool! What sorts of considerations does one make when deciding between more RAM or SSDs?

Random IO is processed first through the SSDs (the thing that they are really good at) while sequential IO short-cuts to the hard drives - which is pretty slick.

I'm curious, what do you use to develop something like that? Is it built on top of something? Built into the kernel? I wouldn't even know where to begin...


Hmm, my guess, working only from that description, would be that they've created a RAID-1 mirror with equal-sized SSD and spinning HD partitions... but are able to route reads based on the actual position of the spinning disk heads.

Then, 'random' reads will usually come from the SSD, but those that happen to be under-the-head (as most often in long sequential reads) would come from the HD.

I don't know if usual RAID software/controllers are already optimized for such wildly-different device response speeds (as opposed to uniform devices).


Sounds like SSDs are used as a cache, like ZFS does with its L2ARC.


For Linux specifically, there are https://github.com/mingzhao/dm-cache and http://bcache.evilpiepirate.org/ as well, which let you use arbitrary filesystems on top. I would bet that Linode is using one or the other of these.
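For anyone who wants to try the bcache route on their own box, the setup is roughly this (device names are placeholders, and this is only a guess at what Linode's layer looks like):

    # format the SSD as a cache device and the HDD as a backing device,
    # attaching them to each other in one go
    sudo make-bcache -C /dev/ssd -B /dev/hdd

    # the combined device shows up as /dev/bcache0; any filesystem works on top
    sudo mkfs.ext4 /dev/bcache0
    sudo mount /dev/bcache0 /srv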


Does ZFS use some sort of last-used eviction to purge the cache? It seems like differentiating between random/sequential IO is a bit different than LRU.



This looks like something you can accomplish with md raid: http://tansi.info/hybrid/
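If I remember that page right, it comes down to md's write-mostly flag: mirror an SSD partition with an equal-sized HDD partition and mark the HDD write-mostly so reads prefer the SSD. A minimal sketch (partition names are placeholders):

    # RAID-1 of SSD + HDD; devices listed after --write-mostly are only
    # read from when the other half is unavailable
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/ssd1 --write-mostly /dev/hdd1
    sudo mkfs.ext4 /dev/md0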


andrewcooke mentioned bcache, which is now included in the Linux kernel as of 3.10. Outside of that, there are a number of storage cards and software products that offer this functionality now (essentially automatically tiered storage). For instance IBM has FlashCache, LSI has Nytro MegaRAID, and on and on.


Every time I think I'm done with Linode, they draw me right back in. I guess I'll wait to see about the pricing, but I imagine this will be a serious challenge for Digital Ocean to overcome, since their main selling point over Linode is cheap SSD VPSs.


A staff member commented about pricing on the linked post after you commented this. Long story short, it won't cost anything and they're going to upgrade all existing servers eventually.


How is this a challenge to DigitalOcean? Linode's solution is inferior to the pure-SSD storage layout at DO and is still four times the price. Linode is trying (and failing) to play catch-up.


Waiting for digital ocean to give me some free upgrades =)


Have you been on Digital Ocean for a while? How do you like it so far?


I've been using them for a similar period as xur17.

Have not had a single issue, bar some packet routing at AMS1 that just led to latency for a few minutes. Their API is getting quite nice to tie into, though I've had to resort to screen scraping for some of the newer options.


I've been on Digital Ocean since the beginning of this year, and I have been very happy with it. I switched my main site over to them about 6 months ago, and haven't had any issues, and 0 (or very close to 0) downtime so far.


Wow, Thanks folks!

Looks like DO is earning their keep. I'm planning to move a forum off a shared host, so VPS was the first place I looked. They were recommended to me, but I only had one other first hand account of them until now.


Also worth mentioning that for 59€/mo Hetzner is offering a dedicated i7-4770 Haswell with 32GB RAM and dual SSD raid1.

http://www.hetzner.de/en/hosting/produkte_rootserver/ex40ssd


NB: +99€ for setup.

How satisfied is everyone with Hetzner? I have a few friends who run setups on their systems, mostly for heavy forums. I'm more interested in how you deal with large backups. Seems to me it's just easier to buy machines in sets for redundancy and every now and then move things over to Glacier.


Hetzner includes 100GB on a SAN in a distinct datacenter with every dedicated plan (500GB is 10€, 10TB 80€). Of course you could also push to Glacier, but that'll get counted in your outbound bandwidth (2€/TB after 20TB/mo).


Do they charge for traffic between machines at Hetzner?


If you're OK with the latency to Europe from where you are, Hetzner is great.

I used them quite a bit (and still do); however, even with Cloudflare my tests were a bit sluggish here in Perth, AU.


Kind of related: is anyone using bcache with Linux yet? How easy is it to get working? Does it speed things up as expected?

http://arstechnica.com/information-technology/2013/07/linux-...


Well, I've been using it for over a year now. Yes, it really does speed things up. I've found that people grossly overestimate their need for SSD space: I'm using a 64 GB SSD with a 3 TB drive, and the hard disk is really rarely touched. I've even enabled power-saving spin-down for it, so I know when it starts. In normal daily usage the HDD isn't touched at all; it's only when something like Linux updates are being run. I've also enabled write-back caching without a maximum time limit. If you use write-through caching it naturally causes the HDD to run all the time. I've chosen to cache everything, not only random reads, because I have plenty of space with a 64 GB SSD. I don't know what the people who claim they need a larger SSD than that are doing. Maybe they're working with large data sets or have absolutely massive games or something.

As a summary: yes, I love it. The SSD is never full, I'm not running out of disk space, and yes, I do get pure SSD performance over 99% of the time. Only if I pick up some movies or music that have been sitting around for months without being accessed is there HDD access, of course. Setup was fine, because I did it when I replaced my computer, so I built everything from scratch anyway. I'm going to blog about it, but I have a huge backlog of stuff to get blogged.

P.S. Some cache vendors (like Seagate's hybrid drives) recommend using write-through caching, because in case the SSD dies, you'll still have a fully working file system on the HDD. With write-through caching, things are going to be very badly messed up if the SSD dies. Practically a totally unrecoverable situation. But that's why we have backups, right?
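(For reference, the "cache everything" and write-back settings described above are plain sysfs writes, assuming the bcache device came up as bcache0; a rough sketch:)

    # cache sequential IO too, instead of letting it bypass to the HDD
    echo 0 | sudo tee /sys/block/bcache0/bcache/sequential_cutoff

    # switch from the default write-through to write-back
    echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode

    # sanity-check the hit rate afterwards
    cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio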


"With write-through caching, things are going to be very badly messed up if SSD dies"

You probably meant the reverse of that (With write-back caching... etc.).


Yep, true.


64GB SSDs aren't large enough to take full advantage of the wide data paths offered by modern SSD controllers that stripe accesses across many flash chips. Going up to at least 256GB is usually necessary to get the full speed possible from having every channel populated.

http://www.anandtech.com/bench/product/752?vs=750


For most applications you won't notice.


If you're building a server and spending money to put a SSD in front of hard drives because you want throughput, it behooves you to consider throughput/$.

Maybe you'll notice, maybe you won't. There's a place for good enough, and any front-SSD cache will be a huge advance for sure over not having one. But I second the notion that you probably want to find an SSD that is performant, and low-capacity ones are often heavily compromised in this department.


Generally the software will bottleneck before the drive. It's pretty hard to get rated performance out of an SSD: you need to keep the queue at pretty much full depth, which means aio or (much harder) threading. Most real-world software at present gets nothing like an SSD's potential performance, as it is mostly built for HDDs, where none of this stuff mattered.
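An easy way to see that gap for yourself is to benchmark the same workload at queue depth 1 versus a full queue; a sketch with fio (the device path is a placeholder, and randread is non-destructive):

    # one outstanding IO at a time - roughly what naive synchronous code gets
    sudo fio --name=qd1 --filename=/dev/sdb --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=1 --runtime=30 --time_based

    # 32 IOs in flight - closer to what the drive is rated for
    sudo fio --name=qd32 --filename=/dev/sdb --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=32 --runtime=30 --time_based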


> Chicks dig scars

Chicks? Really? Come on Linode, you should be better than this.


Purify! Purify!


Sounds over-engineered, slower and possibly over-priced. A consumer SSD will be faster and so long as it is trimmed correctly and not fully utilized, it should be sufficiently reliable.


For people who require a large bulk of data but leave most of it cold and unused, front-caching SSDs are brilliant. Paying four times as much for twice the performance and far better reliability is a no-brainer for this - what's the opposite of a bottleneck - an accelerator? A data reserve + pump?

What would you guess is the average interval between accesses for any given byte-on-disk a Linode customer has? If it's hours, days, or weeks, I'd call it reckless to be spending money to put those bytes on expensive SSD systems.

Make the hot stuff fast, be price conscious with the rest.


It's a free upgrade coming to everyone eventually.


But you're not the only person using the SSD; you can't trust that every VM will A) discard or B) use the SSD in a sane way.


Things I really wish they could add or improve:

- PHPBB? What year is this? The website design and structure seriously need some thought and work.

- Longview: either have a trial-only version or bump the free tier retention to 24 hours. What is the point of a 30-minute graph?

- Get rid of the add-ons; that pricing is just plain stupid.

- Give options to increase memory without buying new plans, at $10/GB up to a maximum of double the current capacity. So a $20 1GB plan could be increased to $30 with 2GB of memory and everything else the same as the $20 plan. That alone would make Linode competitive against DO.

- Linode CDN: a CDN coming from those 6 Linode DCs, with data coming off your transfer pool. Maybe any data served over the CDN would count as triple the amount of data from your pool.

- SSD speed-up: from the data on ServerBear, this new SSD tier is working as well as its competitors.

- I am sure the NodeBalancer could do with a price decrease or a bump in concurrent connections.

DO has all of the above in the pipeline for release this year, so let's hope Linode reacts quicker.


Linode has "developed" nothing. Linode is using bcache as a tiered storage but there are problems such as certain IO patterns will bypass bcache and hammer the disks causing slower-than-disk-alone IO speed.


From the description it sounds like they're just using bcache for the storage layer, or one of its equivalents (IIRC Facebook came out with a very similar patch). Still pretty cool.


So SSDs are bad for server loads... what secret sauce does Digital Ocean have that Linode doesn't? Did they write their own storage layer that's doing something cool?


SSDs are expensive, and the good SSDs are really, really expensive. Although cheaper SSDs exist, they wear out more quickly, potentially slow down as they wear, and have slower overall throughput. Not a good combination for multi-tenant server workloads.


> Random IO is processed first through the SSDs (the thing that they are really good at) while sequential IO short-cuts to the hard drives - which is pretty slick.

Any idea why the sequential benchmark numbers improved 4-5x when it is still "short-cutting" to the HDs?


Possibly because the HDDs are free to only do long sequential reads, as opposed to having to seek to some random place every now and then.


Sounds promising. SSDs have good potential as a cache for bigger backend spinning disks.


While it's a welcome move, it should have been done some time ago.


Too late.

About 8 months too late.


Too late for... ?


For me, at least.

I run a small blogging network. Linode have upgraded RAM, added cores, added disk space. They've done everything except improve random-access I/O, which is a major bottleneck for Wordpress installations thanks to MySQL's penchant for joining tables on disk regardless of indices.

I moved to DigitalOcean about 8 months ago simply to get access to SSDs. In most other respects I preferred Linode.


You need a cache in front of Wordpress so that it doesn't hit the database on every read, then you can run Wordpress anywhere. I survived close to 3,000 req/sec against a Wordpress entry using a single Linode 360, back when those were available.

wp_supercache, nginx, varnish, etc. Rinse and repeat.


I appreciate the advice, but I knew this already. I've been using WP Supercache since it was released. Nginx for I think 4 years at this point. I have it configured to the point that nginx serves the gzipped pages WPSC writes out to disk without ever hitting PHP.

I have three words for you:

Recent.

Comments.

Widget.


It's been a few years since I worked on a large WP install, but I wonder if you could write a plugin that replaced the comments section with an ESI (edge side include) directive. That would allow Varnish to cache the whole page and then call into WP, using the URL in your ESI include, to build the comments section. You could also then set the TTL on the ESI comments URL so that you can fragment-cache the comments for non-logged-in users (for, say, 10 seconds).

More info here: https://www.varnish-cache.org/trac/wiki/ESIfeatures


It's funny you should mention this. I was chatting to a mate about this thread and he suggested using an ESI approach.

I think it would work well if the Recent Comments widget was modified to spit out an HTML fragment to predictably named files that Varnish could pick up and include with ESI.


I've never installed Wordpress in production, this is the first time I've heard about ESI, and in general I have almost no relevant experience, but maybe this suggestion has some worth:

ESI sounds like it would couple your web application code to your cache. This sounds negative to my ear. How about modifying the Recent Comments widget to work with an IFRAME or some AJAX? It adds another request to the server, but now both requests can be cached and compressed.


I wouldn't mind coupling Wordpress to Varnish, since I'm controlling my particular installation.

An ajax solution isn't a bad idea, though it would mean at least hitting PHP (whereas the Varnish option never gets that far).


That's why Varnish is more effective: you can configure it to cache the results of things that WP Supercache misses by design, including the Recent Comments widget and its data. If you need SSDs to run Wordpress, there's a flaw somewhere. Every time I've scaled Wordpress, caching has been the answer. Apply liberally.

WP Supercache is a hack anyway, for folks running WP on shared hosts without root. If you have root, there's a plethora of better things for caching, even something as ancient as Squid as a reverse proxy. You can get your MySQL traffic down to <1 QPS fairly trivially, no matter what kind of traffic is hitting the frontend.

Don't forget wordpress.com is a huge MU installation, and they've existed since before SSDs became popular. The disk is not your issue here.


Varnish is not effective in the face of Recent Comments because that widget breaks whole-page caching fatally. Every time anyone leaves a comment anywhere, the entire cache for the entire site is invalid.

When I looked at where the slow runtimes were occurring on Linode, it was always jammed on disk I/O and it was always on PHP functions that are reaching into MySQL.

In my experience the MySQL query cache + an object cache do more for sites with a Recent Comments widget than whole page caching.

As it happens, I do all of the above. And I was doing all of the above. And still getting jammed on I/O. Because MySQL likes to join on disk. Whole page caching is useful only if you prevent that from happening. It's useless if the cache is rendered invalid every few seconds on a chatty site.


    set beresp.ttl = 30s;
Varnish gives you your own throttle for how often you want invalidation. It's a tool specifically designed to make misbehaving apps -- i.e., that widget -- behave. You just have to slap the dog on the nose when it misbehaves. I'm just saying you could have made this work on Linode (and I have), but I do see your post-purchase rationalization at work, so I know anything I say will be fruitless anyway.

There's also the possibility that you had shitty neighbors.


Right, and when I hosted on WPEngine and then on Pagely (and both of them choked), my users immediately piped up that the recent comments were inaccurate. In fact there were a number of page freshness anomalies which I believe were down to whole-page caching that I was frequently quizzed about by users.

If your page is basically static, then yes, whole-page caching will fly. But several of the sites under my supervision are, to quote Pagely's founder, used "like a chat room".

Edit, per your edit:

> I'm just saying you could have made this work on Linode (and I have), but I do see your post-purchase rationalization at work, so I know anything I say will be fruitless anyway.

Basically, I was there, I saw the numbers and I know why they wound up looking the way they did. I suspect that anyone in my particular situation would have evolved their approach in the same way that I have. I've been running Wordpress blogs since 2004. I feel that I've picked up some ideas on how to make it fast, but sometimes the general solutions don't work because you have a specific problem.


Got it. So you have users that expect to have real-time conversations in the comments on a blog, meaning you can't optimize a blog application like a blog is designed to be used, meaning you have to pay for SSDs in order to make your blog function at all because apparently MySQL can't handle N inserts/second and however many people have these conversations refreshing every ten seconds, generating a few SELECTs that are rapidly in query cache.

I completely understand how this could be a problem and how switching providers would fix it.


Sarcasm aside, you clearly don't understand what my problem is.

1. Recent Comments invalidates every page it appears on whenever a comment is posted in any thread. In practice that means that the entire site cache is invalid. That breaks whole-page caching models.

2. This means that Wordpress will regenerate from scratch.

3. This means first of all generating the page, which joins multiple tables including TEXT fields. Because of the brilliant design of MySQL, these joins ignore indices on the joining fields and frequently the join will occur on disk.

4. The Recent Comments plugin also causes joins on disk because it too refers to tables with TEXT fields.

5. The query cache helps a lot, but the site on Linode was still observably jammed on I/O, even when MySQL was given an entire server to itself.

However, if you feel you can do it better, I am happy to engage your services as a fulltime replacement. WPEngine said they could do it for $250/month (they couldn't). Pagely said they could do it for $149/month (they couldn't). I invite your bid.


> Sarcasm aside, you clearly don't understand what my problem is.

Yeah, I spent this entire thread clueless about the issue you're running into, even though you spelled it out a few different times because you think I don't get it. Wordpress falls over under normal site load, film at eleven.

Since you want to switch to condescension, I'm assuming wise sir moved MySQL's tmpdir to a RAM disk and found that unsatisfactory for his mystical, MySQL-breaking SELECT/INSERT workload? Also, I'm far more expensive, and I know that WPEngine is multitenancy Wordpress on Linode in the backend. (That one's free.)


We're not being very productive here, are we? I could nitpick your comment just now but it wouldn't change your mind either.

You think I'm an idiot. Possibly you think I'm a liar.

I don't think you're an idiot. All I can do is point out that I looked at the numbers, I've tested various strategies or tools (and adopted most of them), I referred the problem to the experts, and this is where I've had to go.

So let's just ignore each other from now on.


I think you're unnecessarily combative in the face of advice and sitting comfortably atop your pillar of experience, ready to shoot down anyone that dare take time to offer you advice. You've appealed to your authority on this matter more times than I can count. Look at how you've approached the conversation from the very first reply, which set the tone for the rest:

    - I've been using WP Supercache since it was released
    - [I've been using] Nginx for I think 4 years at this point
    - Basically, I was there
    - I saw the numbers and I know why they wound up looking the way they did
    - I've been running Wordpress blogs since 2004
    - You clearly don't understand what my problem is
Now I've asked you something specific. You've lamented that you identified the issue as on-disk joins, when MySQL has to resort to an on-disk temporary table due to a TEXT column. That's discussed here[1]. I'm assuming, because I didn't assume you are stupid (unlike in the inverse), that you deduced this was the case by inspecting created_tmp_disk_tables. I then asked if you tried removing the disk from the picture by creating a RAM disk, mounting it somewhere, then instructing MySQL to use it for its temporary disk table area by setting tmpdir. I also assume you know that tmpdir defaults to the system /tmp, which might not be on a filesystem that you prefer[2]. Again, I assumed you knew these things, and just asked if you tried them.

How do you respond? "Let's ignore each other." So now I'm left wondering if you genuinely don't know how to scale MySQL, and you've tired yourself of appealing to your own authority in order to prove me wrong. What I'm telling you is that the notion of your blog network creating a workload that MySQL is incapable of handling on commodity disk is completely ridiculous, and I'd laugh you out of an interview if I pressed you like this. I think you gave up; I wasn't going to say it, but now that you've gone at me like this, I will. You're basically saying you couldn't make MySQL work with a <50QPS write load (I refuse to believe you're writing more than 50QPS to MySQL) because of some TEXT columns.

I'd have far more respect for you if you'd just say, yeah, I probably could make MySQL keep up with my blog workload, I just didn't put much effort into it and bought SSDs on a provider I don't prefer instead.

(But wait: I don't understand. Username oddly appropriate.)

[1]: http://dev.mysql.com/doc/refman/5.0/en/internal-temporary-ta...

[2]: http://dev.mysql.com/doc/refman/5.0/en/temporary-files.html
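For the record, the RAM-disk tmpdir being suggested is just a tmpfs mount plus one my.cnf line and a restart; a rough sketch (size and path are arbitrary):

    # dedicate a small tmpfs to MySQL's on-disk temporary tables
    sudo mkdir -p /var/lib/mysql-tmp
    sudo mount -t tmpfs -o size=1G tmpfs /var/lib/mysql-tmp
    sudo chown mysql:mysql /var/lib/mysql-tmp

    # then in my.cnf, under [mysqld]:
    #   tmpdir = /var/lib/mysql-tmp
    # and restart mysqld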


> I think you're unnecessarily combative in the face of advice and sitting comfortably atop your pillar of experience, ready to shoot down anyone that dare take time to offer you advice.

I regret now being such a grump about it. But nothing you've so far suggested is new. I felt lectured down to and I felt supremely pissed off by it.

My remark that we should stop talking was because it was becoming increasingly acrimonious and I didn't see the point in further e-peen waving.

> you deduced this was the case by inspecting created_tmp_disk_tables

I did.

> I then asked if you tried removing the disk from the picture by creating a RAM disk

I did in 2007, actually, on a physical server I had access to. It would reliably lock up the DomU. I might not have been the only one[1]. I think I moved to Linode in 2008.

> So now I'm left wondering if you genuinely don't know how to scale MySQL

Entirely possible. I have as little to do with MySQL as I can. When the site slows down I learn a little more.

Take for example the documentation you referred to, in particular:

    Some conditions prevent the use of an in-memory 
    temporary table, in which case the server uses an 
    on-disk table instead:

    * Presence of a BLOB or TEXT column in the table
I learnt about that after a long period of fiddling with the tmp_table_size and max_heap_table_size values.

> What I'm telling you is that the notion of your blog network creating a workload that MySQL is incapable of handling on commodity disk is completely ridiculous, and I'd laugh you out of an interview if I pressed you like this.

I didn't believe it either. Yet there it was, chewing up disk. I got a lot of relief from implementing various caching strategies, switching web servers and so on and so forth. But eventually it was consistently bottlenecked on the database. So I broke the site into two servers, which gave me a few more years. But eventually it was, again, bottlenecked on MySQL.

> You're basically saying you couldn't make MySQL work with a <50QPS write load (I refuse to believe you're writing more than 50QPS to MySQL) because of some TEXT columns.

I didn't say that it's inserting. I'm saying that it creates temp tables on disk to satisfy fairly standard page and widget queries. If you thought I was talking about insertions then I can understand your skepticism.

88 QPS since last restart, FWIW. Hardly the world's biggest installation. About 90% of queries are served from the query cache; but of those that aren't, around 44% of joins are performed on-disk. That's pretty much what I've seen every time I look: around 45% of joins going to disk.
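(For anyone following along at home, those percentages presumably come straight out of the status counters, roughly like this:)

    # share of implicit temp tables that spilled to disk
    mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables'"

    # query cache hit rate ~= Qcache_hits / (Qcache_hits + Com_select)
    mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache_hits'; SHOW GLOBAL STATUS LIKE 'Com_select'"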

> I probably could make MySQL keep up with my blog workload, I just didn't put much effort into it and bought SSDs on a provider I don't prefer instead.

At the start of this thread you said that you've had varnish serve 3k RPS in front of a Wordpress instance on modest hardware. I agree that such performance is doable, even quite straightforward, for the common use case.

But if you take away caching, Wordpress is not quite so performant. And that's my problem; the whole-page caching strategy that makes thousands of RPS fairly straightforward doesn't work for me, because Recent Comments invalidates the entire cache.

So I have two choices: either I do without that particular widget and let Varnish or nginx serve up what are essentially static pages 95% of the time (and I have an nginx rule that does this with the gzipped pages that WP Supercache writes to disk).

Or I can accept that, because of the unusual pattern of usage, I am closer to the uncached baseline than most Wordpress installations are. Because the bloggers I host asked nicely, I have chosen the latter.

Putting my own anger down for a minute, I am happy to take any other advice you have. I projected onto you my own frustration.

[1] http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1078


that popular posts widget sucks :D


Hey fellow Australianite.

I've been looking at using DO instead of Amazon; the main stumbling block for me is I cannot seem to figure out if they offer any configurable firewall, i.e. I want to modify port rules, blocking them mostly.

Does DO offer this, and how have you found them so far?

My use case is similar to yours - hosting for multiple shopping carts running on MySQL, hence the appeal of SSDs.


Another Australianite here.

You can just bind to localhost if you don't want things to be open to the world, or modify iptables.
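If you do want explicit port rules on the droplet itself, it's a handful of iptables lines; a minimal sketch (ports are just examples, and the ACCEPT rules go in before the policy flips to DROP):

    sudo iptables -A INPUT -i lo -j ACCEPT
    sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
    sudo iptables -P INPUT DROP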

Your other issue in Australia is the latency to DO's servers - ~200ms for the US and 350+ for a European droplet. Cloudflare may help there, though.


It's a blog network. Latency matters, but not so much that I can justify paying the ruinous bandwidth rates Australian hosts want. I'd be looking at several hundred extra dollars per month for what is really a quite modest operation.


> Your other issue in Australia is the latency to DO's servers

Roughly 160ms from East Coast AU to DO's LA droplets, most of us are used to the extra milliseconds by now :-)


I don't think DigitalOcean themselves offer a firewall. You'd need to block ports at your server.


They don't :(



