Linode launches free DDoS protection (linode.com)
369 points by hanru 24 days ago | 177 comments



This isn't on the level of some other providers; they'll still null-route you if you go over an unspecified amount of traffic. IIRC they use Juniper and Corero.

This is the reply I got from their support, just a few days ago:

>In short, our DDoS protection works by filtering out DoS-like traffic and is applied via the Linode network, so all Linodes are automatically protected. If your server were to be on the receiving end of a larger attack that impacts the Linode's host, we would need to prevent your server from receiving traffic until the attack ends. If you're concerned that you might be the target of a large DoS attack, there are a number of third-party DDoS mitigation services that you can use alongside your Linode.

>We aren't able to provide specific numbers since effects can vary depending on the attack. If you wanted to be sure your Linode is protected, we would recommend utilizing a third-party DDoS protection service overtop of your Linode's included protection. You also have the option of waiting to apply third-party protection until a null route is found to be necessary.


That's not protection, that's literally the opposite of protection lol. If you get attacked they take your service out the back and shoot it in the head.

Edit: To clarify, filtering is protection. Blocking all traffic is not. Both were described above, so they should be clear about which one it actually is.


Heh, that reminds me of my first bank account. They told me I had something called "overdraft protection", which I stupidly assumed would protect me from overdrafting my account by declining transactions.

Then I forgot to deposit a check at one point and overdrafted my account. I assumed things were fine because none of my transactions were getting declined. Instead I was being charged an extra $15 fee on every transaction, so that $0.75 stick of gum? $15.75, etc. This went on for about three weeks before I got my statement and talked to my bank.

They informed me that in fact the protection was from my transactions being declined, at the paltry expense of $15 per transaction.


This is why the CEO of TCF Bank named his yacht Overdraft.


And here I am, laughing at this pretty clever joke, only to realize it's real...

https://www.washingtonpost.com/news/get-there/wp/2017/01/20/...


I guess this is basically the same as OVH's "VAC" system? I sometimes get these emails:

>We have just detected an attack on IP address x.x.x.x. In order to protect your infrastructure, we vacuumed up your traffic onto our mitigation infrastructure. The entire attack will thus be filtered by our infrastructure, and only legitimate traffic will reach your servers.

and then:

>We are no longer able to detect any attack on IP address x.x.x.x. Your infrastructure has now been withdrawn from our mitigation system.

I never need to do anything, but I don't think these attacks are real anyway.


> I never need to do anything, but I don't think these attacks are real anyway

What would it take to convince you an attack is real when it has been 100% mitigated and you never saw it in your backend infrastructure?

I ask as the engineering manager for DDoS protection at Cloudflare, and we stop a lot of attacks. But I feel this tension in the communication and product offering... if we do our job well enough that a customer's system does not see the attack, how does a customer see and feel the value?

An example: as a reverse HTTP proxy we are implicitly also a full TCP proxy for HTTP traffic, and so we receive significant SYN and ACK floods. We stop these 100% by virtue of being the terminating TCP proxy, but also by using connection tracking, anycast, XDP + eBPF, and so forth... you won't see a single one of these SYN or ACK packets hitting your infrastructure... so what would we have to communicate to convince you that the attack existed?
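The terminating-proxy point can be sketched in a few lines. This is an illustrative toy, not Cloudflare's implementation (the `on_syn`/`on_ack` names and the plain set are invented for the sketch); the idea is just that a proxy which owns the TCP state can discard any ACK that doesn't match a tracked connection.

```python
# Toy sketch of connection-tracking-based ACK-flood filtering.
# A terminating proxy only forwards ACKs that belong to a connection
# it has tracked, so flood packets with no matching state are dropped.

tracked = set()  # 4-tuples of connections the proxy has seen handshakes for

def on_syn(src_ip, src_port, dst_ip, dst_port):
    # A real system would answer with a SYN cookie before allocating state.
    tracked.add((src_ip, src_port, dst_ip, dst_port))

def on_ack(src_ip, src_port, dst_ip, dst_port):
    # ACKs for unknown connections are flood traffic: drop them.
    return (src_ip, src_port, dst_ip, dst_port) in tracked

on_syn("203.0.113.7", 51000, "198.51.100.1", 443)
print(on_ack("203.0.113.7", 51000, "198.51.100.1", 443))  # True: forwarded
print(on_ack("192.0.2.99", 40000, "198.51.100.1", 443))   # False: dropped
```

The backend never receives the dropped packets, which is exactly why the attack is invisible from the customer's side.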


>What would it take to convince you an attack is real when it has been 100% mitigated and you never saw it in your backend infrastructure?

I was running node_exporter, which exports a lot of detailed network info from my kernel to Prometheus. In the time intervals leading up to, during, and after the attack, there is nothing there. Not even a blip.

I don't find it likely that OVH completely prevented any kind of volumetric attack from hitting me with zero detection latency. I just have doubts about there existing a perfect technology that doesn't have any false positives and also kicks in instantly. I'll keep an open mind.


Maybe describe how big the attack was in a communication with the customer? i.e. how many connections per second, bandwidth used, etc. If you could trace the attacker and prosecute them, that would be a lot better, of course (and possibly the way that would gain the most confidence). In other words, if any of your claims could be confirmed by a third party, it would be good. Or you could propose letting them be hit by the attack for a set amount of time before you move in.


Do you publish metrics on “attacks prevented” (or access to logging and monitoring) for customers?


Yes.

For HTTP customers there are full SIEM logs under Firewall > Overview on our dashboard, and for paid tiers there are drill-down analytics in addition to the full SIEM logs. There is also log push to receive near real-time full HTTP logs into Google or AWS for your own analysis and these show if a firewall feature touched the request or if it was served from cache.

In addition for HTTP customers we show graphs of SYN floods, etc for the IPs your web properties are advertised on.

For L4 customers via Magic Transit we also have Network Analytics showing what we received at our edge network and a log of attacks detected and mitigated.

There is still lots of room for improvement... that's really what I'm asking: what does the ideal system look like, one where you see the data, understand it, and trust it?

For example, is it valuable to see the attack landscape and what is happening across our systems even when you are not the target? Would that help give perspective to attacks that do target you, and also increase faith that this system exists and is stopping attacks when attacks do not target you?


These are great examples of technical details, but they're difficult to translate into impact and business value.

Would 100k SYN floods have slowed my site down? Would they have taken it offline? Would they have caused the site to remain up but corrupt data on the backend for some reason?

Off the top of my head, I would think about offering a "replay attack against your staging infra" feature on higher tier plans. The price point should help prevent someone leveraging you as an attack platform, and customers will be able to understand the value that you're bringing to the table in a much more practical way.


I'd build a (metaphorical) visualization of the customer under siege, so they can watch it while they're being attacked and see what they'd be up against without your protection.


I think it'd be helpful to highlight the impact on YOUR infrastructure of an attack I am facing.

It would help add perspective to how disruptive the attacks are.


Yes, also perhaps some guidance figures on what the impact would have been had these measures not been in place.


It's hard to answer what the impact on your systems would have been had we not stopped it... we don't know the full capability of your systems. Whether you can take a 10k packets-per-second ACK flood, a 1M pps ACK flood, or a 100M pps ACK flood depends on a lot of things we aren't privy to.

What we can tell you is the frequency, size, and nature of attacks that Cloudflare sees, and when we can clearly identify that an attack was unambiguously targeting you specifically, then we can tell only you about that too.

If there were a global dashboard which was vague about the target and source, merely the frequency, size and nature... would that be valuable?


> If there were a global dashboard which was vague about the target and source, merely the frequency, size and nature... would that be valuable?

Yes.

> What we can tell you is the frequency, size and nature of attacks that Cloudflare sees, and when we can clearly identify that an attack was unambiguously targeting you specifically then we can tell only you about that too.

Yes.

Also, even if you could tell us WHAT kind of attack it was that would be helpful too.


I should have made it clear I'm not a user, but I feel your frustration at being 'invisible'. That given, yes, I think a dashboard like you described would help. Perhaps you could add an interactive option to enter your system config, so you could see how a given attack would have affected your infrastructure?


Simple reporting with relevant metrics, not logs.


You are looking at it as if DDoS protection provides some additional value customers don't comprehend, rather than as a basic necessity for hosting providers to ensure a competitive quality of service, which is how it works in competitive markets. Trying to convince customers about attacks to manufacture perceived value is like trying to convince customers that edge nodes failed over to other nodes, but you don't do that, do you? Think about why you don't. Faking perceived value is AV-company levels of shadiness.


They are very likely real and OVH has a very good system. You can thank them for making free DDoS protection mainstream, dragging all other hosts kicking and screaming into providing DDoS protection.

In the past providers like Linode were happy to just null route your IP for several hours/days or charge you thousands to block a small flood.


> You can thank them for making free DDoS protection mainstream

AWS does charge for WAF and Shield, I believe.

I also remember comparing AWS Lambda@Edge vs Cloudflare Workers (though Lambda allows for longer execution times and generally provides more flexibility in RAM, CPU, and runtimes, since it runs on a Linux VM vs V8 isolates for Workers); costs were something like 10x apart.

Can't wait for WebSockets support for Workers.


> I also remember comparing AWS Lambda at Edge vs Cloudflare Workers ... costs were something like 10x apart.

According to the AWS pricing example[1] 10 million requests per month on Lambda@Edge costs $9.13. The same thing on Cloudflare Workers[2] costs $5.00. So I would expect it to be closer to 2x. Although as you say there's a bit more flexibility with Lambda@Edge so it'll depend on your particular case.

I'm curious if your situation was different somehow that made for such a big cost difference between the two?

[1]https://aws.amazon.com/lambda/pricing/#Lambda.40Edge_Pricing [2]https://workers.cloudflare.com/#plans
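For what it's worth, using just the two figures quoted above (which may have changed since), the gap works out to under 2x:

```python
# Back-of-the-envelope check of the figures quoted from the two pricing pages.
lambda_at_edge = 9.13  # USD for 10M requests/month, per AWS's pricing example
workers = 5.00         # USD, Cloudflare Workers plan covering the same volume

ratio = lambda_at_edge / workers
print(f"{ratio:.2f}x")  # → 1.83x, i.e. closer to 2x than 10x
```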


$5 includes a generous free tier for Workers KV, which can hold up to 10MiB of data against a single key. Cloudflare does not charge for bandwidth consumed, I believe. Also, use of Cloudflare's zonal HTTP cache is free.

I guess, when I compared, I took Lambda@Edge's per-second billing into consideration rather than per-50ms billing (which brings the RAM usage cost down from $62.52 to $3.13 and the total from $68.52 to $9.13).

What really sealed the deal for me was the very low cold-start times with Workers. I'm not aware of recent improvements with Lambda@Edge, but the last time I tried them, it wasn't uncommon to hit 100ms+ start times.


That's interesting, thanks. I haven't really used Cloudflare Workers for much myself so it's interesting to hear folks' comparisons.


One more thing: I am not sure if Lambda@Edge charges based on wall time or CPU time. Workers' 50ms is CPU time only, not wall time. You could, in theory, spend 30s waiting for a fetch to return, as awaiting on the network doesn't count against a Worker's 50ms CPU-time limit.

Ref: https://developers.cloudflare.com/workers/about/limits/
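A rough Python analogy for the cpu-time vs wall-time distinction (Workers run JavaScript, so this only illustrates the billing idea, not Workers code): waiting on I/O consumes wall-clock time but almost no CPU time, which is why awaiting a slow fetch doesn't burn the 50ms CPU budget.

```python
import asyncio
import time

async def wait_like_a_fetch():
    # process_time() measures CPU time; monotonic() measures wall time.
    cpu_start, wall_start = time.process_time(), time.monotonic()
    await asyncio.sleep(0.5)  # stands in for awaiting a network response
    cpu_used = time.process_time() - cpu_start
    wall_used = time.monotonic() - wall_start
    return cpu_used, wall_used

cpu_used, wall_used = asyncio.run(wait_like_a_fetch())
print(f"cpu: {cpu_used:.3f}s, wall: {wall_used:.3f}s")  # cpu ≈ 0, wall ≈ 0.5
```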


It wasn't just Linode (a provider that's generally much cheaper than OVH), but high-end providers like SoftLayer too (and, as far as I know, still are).


SoftLayer, now IBM Cloud, does not provide “DDOS Protection”. Their filtering service is only up to a certain amount, iirc max 5.5 Gbit or something like that. It is absolute trash and pretty much kills all traffic legitimate or not. If you’re filtered for more than 6-8 hours, they will just nullroute the IP. I’ve had the misfortune of hitting this every few months and it’s a major headache. If they remove you from the nullroute, and you land back on it, they won’t remove you again for 24 hours. A few months ago we hit a bug with their detection where it considered outbound to be the same as inbound, and produced some wildly off base numbers that made no sense. We’re not a small account, I’d say medium sized probably at this point, but I wanted to hop a plane to Dallas and strangle the techs there. It’s really gone downhill since IBM acquired them, and I dread to see how Red Hat fares...

If you ask for an estimated traffic size so you can go to a service that does filtering for a living, they won't give you one, stating “nobody does.” It took a lot of time getting numbers out of them, and it was finally finding a top-level employee through our account manager that led to them going “oh yikes, yeah, something is off.” Sigh.


How is Linode “much cheaper than OVH”? Their margins are probably way higher.


Linode's origin is as a VPS provider; OVH's is as a dedicated server provider. OVH's top server is twice the cost of Linode's, and their cheapest is 5x the price ($10 vs ~$50).


Cheapest OVH dedi I see is the KS-1 for 3.99eur, which is less than the cheapest Linode VPS. OVH also doesn’t try to nickel-and-dime you by charging for data transfer.

I wouldn’t recommend the cheapest Kimsufi offerings though, something like the SYS-WS-1 goes for $33 and is easily comparable with Linode offerings priced at multiple times that.

OVH has a VPS product that’s far cheaper than what Linode offers, but I can’t speak to the quality of that offering.


Comparing prices for bare metal with prices for shared infrastructure is pretty much useless though.


Why is that?


I’m curious, does anyone know what that means specifically? How can they differentiate normal traffic from malicious traffic? What exactly triggers it? Is a ping flood over a slow (50 Mbit) internet connection enough? I am aware that the details are most likely kept private to protect them from abuse, and are also a trade secret, but I have a very hard time finding even a general approach that might be similar to their solution.


Most probably they have DDoS appliances (e.g. Arbor, Corero, etc.) installed in their network. One implementation is to redirect all customer traffic to the appliance, which takes a sample of the traffic and matches it against an attack-fingerprint database. If it matches, they block the traffic; the good traffic is let through to its final destination.


This is how we implemented it at an ISP I worked at before. All our peering routers sampled traffic using IPFIX and sent it to an Arbor collector for fingerprinting and analysis. If the collector detected malicious flows, it would automatically send a BGP Flowspec message with the list of malicious flows to our peering routers. The Flowspec message would cause our peering routers to redirect the matched traffic to an Arbor TMS server, which would scrub the DDoS traffic from the dirty traffic and send the cleaned traffic back to our routers to be routed normally to the end user. There are other ways to mitigate DDoS, but this is what ended up working best for us.
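That sampling-and-detection loop can be sketched as a toy, assuming a naive packets-per-second threshold (real collectors like Arbor fingerprint traffic with far more signal; the function names, sample rate, and threshold here are invented for illustration):

```python
# Toy version of: sampled flows -> detection -> a Flowspec-style rule.
from collections import Counter

SAMPLE_RATE = 1000         # 1-in-1000 packet sampling, as with IPFIX
THRESHOLD_PPS = 1_000_000  # flag destinations above ~1M packets/sec

def detect(sampled_flows, window_seconds):
    """sampled_flows: list of (src_ip, dst_ip) tuples from the samplers."""
    per_dst = Counter(dst for _, dst in sampled_flows)
    rules = []
    for dst, samples in per_dst.items():
        # Scale the sample count back up to an estimated real packet rate.
        estimated_pps = samples * SAMPLE_RATE / window_seconds
        if estimated_pps >= THRESHOLD_PPS:
            # In production this would be announced via BGP Flowspec to
            # redirect the matched traffic through a scrubbing appliance.
            rules.append(f"match dst {dst} -> redirect to scrubber")
    return rules

samples = [("198.51.100.9", "203.0.113.10")] * 12000  # 12k samples in 10s
print(detect(samples, window_seconds=10))  # flags 203.0.113.10 (~1.2M pps)
```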


They have their own system (a friend worked on it). I don't know the details of the system, though.


I don't know their solution exactly, but what usually happens is they look for common packet signatures. Most DDoSes aren't very sophisticated and can be blocked with fairly simple rules.
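For example, one common "simple rule" is dropping UDP traffic whose source port matches a well-known amplification service. A hypothetical sketch (real filters are more careful, e.g. they would not blanket-drop legitimate NTP replies):

```python
# Toy signature-style filter: most volumetric attacks are UDP amplification,
# and the reflected traffic arrives from well-known service source ports.
AMPLIFICATION_SRC_PORTS = {19, 123, 1900, 11211}  # chargen, NTP, SSDP, memcached

def allow(proto, src_port):
    if proto == "udp" and src_port in AMPLIFICATION_SRC_PORTS:
        return False  # drop likely amplification traffic
    return True

print(allow("udp", 11211))  # False: looks like memcached amplification
print(allow("tcp", 443))    # True: normal traffic
```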


Linode support is fantastic, and we host some critical infrastructure with them for this reason; we've been happy for years. DO has managed databases, however, so we've been migrating some services to them. If Linode offers a managed DB, I'll move everything back.


It's much more advanced than you think: custom ASICs, etc.


Source?



Thanks


I've been using OVH services to host game servers for several years now, and their "VAC" is a godsend. Other providers would prefer to terminate my account, or offer some kind of protection for thousands of dollars.


Hacker: ping -t 1.2.3.4

OVH: go into lockdown!


Is it free [1], free* [2] or "free" [3]?

[1]: free as in free beer, at no direct cost to users

[2]: terms and conditions apply, free until you hit certain conditions (for example, constant barrage)

[3]: free as in the customers pay for the (mandatory) DDoS protection via increased prices (similar to how I remember OVH handling their "free" DDoS protection)


I don't understand the difference between 1 and 3.


I may have used "free as in free beer" in the wrong way; what I meant with 1 was that there are no additional costs to current or new users (the rates for the services on offer stay the same).

For 3 (as was in the example), the cost of the DDoS protection service is directly added to the rates of services on offer.

OVH was quite blatant in this, as it had offered an optional DDoS protection service for a fixed rate of 3€/mo (this was a few years ago, exact details might be hazy). After they had a large network overhaul (with major interruptions), they simply raised the prices by 3€ and advertised the new, "free" DDoS protection service which was included in all of the services.


3 is very unlikely; hosting prices generally go down over time, not up. And Linode, like most metered hosting services where you're billed hourly, doesn't normally distinguish pricing for new versus recurring customers.


Instead the price would just go down more slowly. So the price drop that you'd otherwise expect is paying for the new features.


Serious question: why would I want to use Linode over GCP or AWS? Asking as someone who hasn’t really dabbled with smaller cloud providers. Is it cost? Support? Developer tooling?


Cost, and ease of development from not having a thousand options thrown in front of your screen.

Last time I checked, GCP cost me $26 (plus hidden charges) for the same thing I could get in many other places for $7. Some of those also provide instant customer support, and are better because it isn't an outsourced call center in India or elsewhere.

Check out:

vultr: https://www.vultr.com

Scaleway: https://www.scaleway.com/en/

OVH: https://www.ovhcloud.com/en/

DO: https://www.digitalocean.com

Some prefer managed infrastructure and just want to write code. You can do that via GKM too, but these take a more straightforward approach.

Nanobox: https://nanobox.io

Heroku: https://www.heroku.com

LastBackend: https://lastbackend.com

ML/AI

Paperspace: https://www.paperspace.com

FloydHub: https://www.floydhub.com

Colocation, for those with big infrastructure needs, where it will cost them less.

Equinix: https://www.equinix.com

Datafoundry: https://www.datafoundry.com

Disclaimer: not associated with any of them. Have used some of them and for others, heard great things.

You can easily go lower for less support and most likely a shit interface with some reliability issues.


> ease of development by not throwing thousand of options in front of your screen.

This! I don't want to spend my life navigating the maze of options and hidden costs of AWS et al. This is important to most projects for two reasons, time and cognitive load... Until things get to truly massive scale, it's not worth the brain drain, and time is more precious. Navigating the interface of Linode is actually pleasant and takes minimum effort.

If anyone needs a reason not to use AWS for your boss in a nutshell: employee sanity.


Thanks a lot for the long list. I am in the midst of considering switch of hosting provider, so it is nice to know some of the options.


A couple of posters on HN say Vultr has intermittent internal network problems. I've found no wider mention of this; can anyone confirm? I ask because their High Frequency Compute looks (and in testing has been) good, but if the internal network blips, that counts for nothing in a multi-server setup.


We've had network issues as well as forced shutdowns that have corrupted data. Their support was also terrible. We only spent a few weeks with Vultr because of this.


Also: UpCloud https://www.upcloud.com


I have been keeping an eye on them too, along with DO, Linode, and Vultr. If I remember correctly, UpCloud has disabled HT due to Intel's security issues, so each of their cores is a real single core and not a single thread.

They seem to be working hard [1], with 10 (!) more DCs planned in 2020. The entire hosting market is growing like crazy!

[1] https://upcloud.com/blog/upcloud-secures-18-million-funding-...


+ https://render.com

(I am just a customer)


It's not "the same". GCP network quality and performance is much better.


Also www.wowrack.com


WowRack still has a notorious reputation for hosting spammers.


Egress traffic cost is massively higher on both AWS and GCP.

Instances of comparable power are somehow more expensive on both AWS and GCP.

Also, simplicity: AWS IAM is mightily complicated, things like CloudFormation are totally non-trivial, etc. You can get going more easily with simple and moderately complex setups on Linode or DO.

Of course, AWS, GCP, and Azure have much bigger infrastructure, several availability zones, a lot of managed software (object storage, various databases, queues, email gateways, docker hubs, etc) which smaller players don't provide, or can't provide at the fault tolerance level which big players are able to offer. Something like AWS Aurora is hugely internally redundant to withstand link problems, node outages, etc transparently. If you want a thing like that, managing it yourself takes serious chops, and money.


At some level of evolution of your company you want complicated things like AWS IAM, because manual management of access becomes even more complicated.


Everything is a tradeoff. Sometimes you are a large business and genuinely need an airplane, but sometimes you are a small shop and using a motorbike is much more cost-effective.


Once you start using GCP/AWS/Azure, you're going to start using managed databases, security groups, etc. These are really nice, and they can save you a lot of time as a startup... but they are incredibly difficult to migrate. If you use Terraform, it does support GCP/AWS/Azure/DO/Vultr, but you'll have to rewrite everything for each provider.

For really simple providers (just a VM; in AWS, just EC2) you can still write all your own Ansible/Puppet/Chef (I recommend Ansible) to setup your servers for you. You can do your own databases, but there is complexity in scaling, multi-read only workers, etc. Managed solutions are nice in how they handle that for you and you really only need to do off-site backups. But the advantage is, once you have it all written and figured out ... you can move it anywhere.

As a startup, you want to get everything fast. So you're going to get locked in (most likely). That's fine if you start making money. If you want to start cutting costs later, it's not really going to matter who you originally started with. You're going to be rewriting a lot.

There are a TON of tradeoffs going in either direction. Linode/Vultr/DO really appeal to people self-hosting or startups that have infrastructure people from day one who can stand up things, platform-independent, from day one.

DO has started offering managed databases and load balancers. Now we see Linode offering DDoS protection (maybe saving you money over paying Cloudflare?). Everyone wants to get to the point where they can at least offer the minimum AWS/GCP/Azure stack (web + DNS + load balancing + firewall + database... maybe throw in some managed k8s like DO is doing now?).

It's really all about tradeoffs. What time do you want to put in now so it's easier to migrate later?


> Now we see Linode offering DDoS protection (maybe saving you money from paying CloudFlare)?

I work for Cloudflare, and we do not charge you money for our DDoS protection. It's free and included on every plan level including our free level, and the protection you get is equal to the protection our enterprise customers get.

In other product features we have we also work hard to make sure we do not charge you for any bad traffic, i.e. our HTTP rate limiting product has the pricing structure designed so that you aren't paying for the traffic stopped by it.

Pricing really isn't the issue here, but where Linode and other hosts adding DDoS protection helps is in the scenarios where your origin / host IP or provider is known. In those scenarios attackers may directly attack the host.

Just as elsewhere in security, you are as strong as your weakest link, and I am really pleased to see hosting companies expand their DDoS protection.

The various disclaimers: I am the engineering manager for DDoS protection at Cloudflare, and I run a little farm of machines at Linode :) I'm happy on both fronts with this announcement from Linode.


As a very satisfied CloudFlare customer (read "leech", since I'm using the free tier), I have seen your dashboard serve 300 GB of data for me some days, which never reached my server because of caching. It would be nice to be able to see a list of the ten most bandwidth-using URLs for a period, so I can maybe save you some bandwidth in case it's an attack or something similar.

Currently, since I use caching, I never see what consumed the bandwidth, so I don't know what file people are downloading so much.

(P.S. Hey David, long time! I hope everything is going well)


Consider that a feature request heard, I'll make sure the product and engineering managers who do the HTTP and cache analytics know about it.

Mind if I put them in touch with you?

And hey again, long time no speak... if you're in London let me know. Otherwise one day I'll make it to wherever you are in the world now.


Thanks! No, I don't mind at all, thank you.

I will let you know next time I'm in London, it's been a while. I'm still in Thessaloniki, let me know if you're ever around!


After your seed round this simply doesn’t hold up. If you’re making money the upside of the modern cloud is absurd, and cloud-to-cloud portability is a design pattern choice. There is zero lock-in with appropriate architecture decisions.


> There is zero lock-in with appropriate architecture decisions.

Sigh, you don't know what you're talking about.

Everything past "make a VM" is lock-in. Firewall rules, be they security groups, NACLs, or what have you, are different on each platform. Each service you use is lock-in. Even the build scripts themselves are lock-in. The users within the console, and all their associated permissions, are lock-in.

Do you want load balancers? On AWS, is that ALBs, NLBs, or CLBs? Or did you want to spin up a bunch of dinky HAProxy containers on EC2? Does your application use sticky sessions or have session data? That limits you in a number of ways.

There is no neutral way to talk about network, compute, storage, and API assets that doesn't involve a great deal of lock-in.


Linode gives me 1000GB of transfer for the $5 it costs me to rent a vm, while AWS would charge me $150 for just that bandwidth by itself.


Yeah, bandwidth cost is a thing that turns me off about AWS too, along with the bunch of proprietary services: you have to rewrite your application if you used some of them and want to move providers. However, if you want to move fast, not worry about some of the behind-the-scenes work, and have a budget for it, I can see how some large companies and well-funded startups love AWS, even though there are sometimes cheaper and more open options.


Do you actually use it?


I’ve been a Linode customer for at least 5 years, and yes, I’ve used a large percentage of that cap on a few occasions. It’s real, if that’s what you’re implying. I use them over other options because their value and simplicity are outstanding.


Similarly with Hetzner, they give you 3 TB with their 2.5 € VMs and let you use it.

Unfortunately I've noticed that their "unmetered pipe" offering is quite a downgrade, as it's only 10 Mbit.

EDIT: Sorry, I'm wrong. All dedicated servers have 1 Gbit unlimited uplinks. I'm not sure why I keep getting confused on this point, their support has confirmed this fact many times to me.


The problem is Hetzner's servers are in the EU only. Although I guess they could be used as a cheap file server?


On my part, I prefer Linode now simply because I won't support a company as openly monopolistic as .. those guys .. if I can avoid it. Also, maybe a little superficial loyalty. Linode was the first provider I found with KVM support when it was a new feature in QEMU, but that reasoning is long expired.


A small thing perhaps, but Linode offers new OS images so much faster than anyone else; I saw them release the CentOS 8 image pretty much on the day it came out, and DO hadn't got it the last time I checked, a month or so later.

When a company has room to care about details like this, you can feel they're not crushed by support requests, which may mean they're doing things right.


This is because DO uses cloud images from upstream projects where possible. They weren't available for CentOS 8 until recently.


I like Linode and DO and have my issues with the big cloud players, but I don’t see them as monopolistic at all? There are multiple highly competitive offerings in the market, each competing on features and cost. You have lots of choice and even vendor lock in is somewhat mitigable if you want to put the effort in.


ah- I meant I don't like those guys who rebuilt downtown Seattle in their own image.. (it's silly, I know, but I hate to write corp names as proper nouns once they start to abuse people's trust, and HN is filtering the asterisks I would have used on another medium. ... I mean a$$azon is openly monopolistic.. that works.)


I personally would say simplicity and up-front pricing. I don't need to make a spreadsheet to figure out what I'm going to pay to host a simple web site or project.


Yes, up-front pricing is essential. I don't want any surprises. If there's a chance of that, I won't be using the service at all.


Cost and ease of onboarding. For $5/mo you get 1 vCPU, 1GB RAM, 25GB SSD, 1TB egress quota. Try calculating the egress cost alone for GCP or AWS.


Amazon LightSail gives you 1 CPU, 1GB RAM, 40GB SSD and 2TB transfer for the same price.


Amazon Lightsail launched way after Linode and DO have been established in the low-cost segment, and it was slightly more expensive than Linode and DO when it launched. It appears they've lowered their prices again in order to capture the market but mind share is still low.


Last time I tested it Amazon Lightsail was also way slower.


It is. The "1 vCPU" is basically a lie; they throttle Lightsail viciously based on use.


> mind share is still low.

I can tell that - many people in this thread are using “AWS” as though it’s synonymous with “EC2”.


I assure you that the majority of users in these discussions are perfectly aware of the extent of the AWS ecosystem.


I like the AWS ecosystem but I wouldn't touch it for anything that doesn't pay me back for monthly rent.

I build things with consideration for platform and vendor lock-in. Kube, containers, etc. have resolved many deployment issues for me. For other things like authentication, logs, database, analytics, cache, and search, there is parse, elastic/solid, postgres, redis, logio, ackee, etc.

Obviously, there are still many spaces left that AWS covers, but most applications don't need them. Compliance is a big hurdle for businesses, and that's what they pay for. For individuals experimenting, not so much.


Cost is a huge factor if you move a decent chunk of data. For $80 a month I have 4 VMs of various sizes and 8TB (pooled) of bandwidth. 8TB of bandwidth by itself is $560 a month on AWS.


I've used Linode in the past but most of my non-GCP workload is in Digital Ocean now. It's simply cost. I have some one-off services that run async and need RAM. It's much cheaper on DO.

I keep everything directly user-facing (ie, must-be-always-available) on GAE. Which is expensive, but nobody has to wear a pager.


You cannot hard-stop billing, at least as far as I have found, so keeping to a hard budget on AWS/GCP is not possible. Be careful.


Been hosting my project there for the last 8 years. During that time, at least twice I didn't have enough money in the bank to pay for their service. Instead of shutting me down, they prorated it into the next month. With AWS and GCP I would still be trying to talk to a human. Thanks Linode!


I work full-time on an ecommerce platform deployed on AWS and part-time on a number of small side projects on a Digital Ocean droplet. Managing infra on AWS is a big, complex undertaking with lots to learn. With my droplet I need to know some bits of Linux sysadmin (how to update it, install fail2ban, and other small things). Deploying is via a bash script that basically builds, tarballs, scp's, then restarts the server.

Basically for small, simple applications, Linode or DO are great. They’re simple, the pricing is simple.

For more complex applications with lots of components, service buses, microservices etc, bigger cloud services offer you lots of features, but it gets difficult to operate if you’re just one guy (IME).
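For the curious, that build-tarball-scp-restart deploy flow can be sketched in a few lines. Everything here (host, paths, service name) is a hypothetical placeholder, not from the parent comment:

```python
import subprocess

HOST = "deploy@example.com"  # hypothetical target box

def deploy_steps(host=HOST):
    """The four steps described above -- build, tarball, scp, restart --
    returned as command lists so they can be inspected or executed."""
    return [
        ["make", "build"],
        ["tar", "czf", "app.tar.gz", "dist/"],
        ["scp", "app.tar.gz", f"{host}:/srv/app/"],
        ["ssh", host,
         "tar xzf /srv/app/app.tar.gz -C /srv/app && systemctl restart app"],
    ]

def deploy():
    for cmd in deploy_steps():
        subprocess.run(cmd, check=True)  # abort on the first failing step
```

That's the whole pipeline for a single-server app; no orchestrator, no registry, no IAM policies.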


For me it's about price transparency. 2-ish years ago I was launching a web app and needed somewhere to host the data and processing. Cost was a major factor.

It took me a while to even find GCP's cost calculator, and the AWS one required me to make an account before using it. I spent days looking through documentation and learning all the nomenclature ("elastic beanstalk" - seriously??) so I could even start to understand the calculator. Their structure is incredibly convoluted (compute + load balancing + database + database storage + block storage + content delivery + container management...), making it near impossible to know how much I would end up spending. Not to mention that the prices and performance vary wildly (reserved vs hot vs cold compute).

My rough estimate would've put me at around 3x the cost compared to Linode and I'd be living in fear of the bill every month. Linode told me exactly how much of what I would be getting and how much I'd have to pay - in words, not ec2_t2.micro_us-west_reserved.


Simplicity in cost and more user-friendly features.

It's very predictable how much you'd pay by spinning one up, as bandwidth is a pooled limit among all your machines, so you won't pay extra until you exhaust the pooled monthly limit. They also don't charge you for disk IO, and the performance/cost ratio has been better.

You'd question why you'd pay more for less performance.

Will EC2 ever be able to boot into recovery mode easily? Linode allows you to boot into recovery mode by attaching your disk, and you can easily access your console from the browser in case you screw up networking or a firewall rule and lock yourself out.

They provide easy daily/weekly backups instead of making you write a script to take EBS snapshots manually.

Maybe my AWS knowledge isn't caught up but AWS feels like everything is for you to manage.

Also, they don't do weird stuff like GCP resetting the hostname on every reboot; things are how you'd expect.


Perf+capacity / cost

DX/UX/UI

Fits small-to-medium businesses (i.e. better resource-management decisions. Yes, that imaginary scaling thing)

Price predictability

---

Launching a production setup on DO: 1 LB + 2 droplets + 1 managed PG. That pretty much covers a huge portion of the problem space you're solving for customers. Mostly enough for a sustainable business.


I want to run demo software on a JVM, and the box beefy enough to do that is MUCH cheaper on Linode.


I can believe that Linode is cheaper than EC2, but isn’t Amazon’s LightSail a similar price?


Similar price, but Lightsail severely throttles disk and network IO.


Cost and freedom.


Perhaps you like problems?

https://news.ycombinator.com/item?id=3654110 Compromised Linode, thousands of BitCoins stolen (2012)

https://news.ycombinator.com/item?id=3655137 Linode Manager Security Incident (2012)

https://news.ycombinator.com/item?id=5552756 Linode hacked, CCs and passwords leaked (2013)

https://news.ycombinator.com/item?id=7086921 An old system and a SWAT team (2014)

https://news.ycombinator.com/item?id=10825425 Linode DDoS continues – Atlanta down for 16+ hours (2016)

https://news.ycombinator.com/item?id=10998661 The Twelve Days of Crisis – A Retrospective on Linode’s Holiday DDoS Attacks (2016)

https://news.ycombinator.com/item?id=10845170 Security Notification and Linode Manager Password Reset (2016)

https://news.ycombinator.com/item?id=10806686 Linode is suffering on-going DDoS attacks (2016)

> why would I want to use Linode over GCP or AWS?

If you include dedi providers like OVH into this comparison, you probably just wouldn’t.


Nobody's perfect, and Linode clearly had growing pains. Can you point to anything in the past four years? If not, I'd give them another chance.


Why would you "give a chance" to a commodity provider that has no differentiation but does have a bad history?

They’re not really cheaper, their hardware isn’t better, and they’ve shown a willingness to toss security out the window. Doesn’t strike me as deserving of “a chance”.


Because they're a small, bootstrapped company in a tough industry, and I figure I should treat them the way I'd want to be treated?


I get that, but there's such a thing as being so open-minded your brain falls out.


Comparison with dedicated servers is comparing apples to oranges, but aside from that line your comment makes a pretty strong point.


Is it though? There are some very specific use cases where dedicated servers won’t work, but those are certainly a small minority of Linode customers.

Stuff like Terraform works just fine with OVH dedicated servers, so that can’t be the problem either.


Is this an on-demand solution using something like BGP+ Radware or Arbor? At what volume or pps will they announce a nullroute?


I’ve been a Linode customer for at least a decade. Cheapest bandwidth I’ve been able to find anywhere and their data centers are super reliable.


Maybe because you only host static websites


I run https://jsonip.com on linode. I push about 5-6 terabytes of data outbound each month. Linode is a dream.


> Cheapest bandwidth I’ve been able to find anywhere

Have you looked? Linode BW is incredibly expensive compared to just about any dedi provider.


Well done Linode.

I wonder how quickly DigitalOcean will add this to remain competitive.

It's a huge win to have your hosting provider handle this and it's also nice to not be "forced" into using Cloudflare for such an important feature.


DigitalOcean has had free DDoS protection for quite a while, and it sounds like Linode's solution is fairly similar. DigitalOcean decided not to advertise the fact because advertising your defenses is an open invitation to break them.

They still null route when the upstream links become congested but this is becoming less and less frequent as their network edge grows.


Do you have any documentation that mentions your droplets are protected from a ddos attack without you having to do anything?

Even DO themselves mention they don't protect against it and even go as far as saying to use Cloudflare.

Here's a tweet of that from Jan 2018: https://twitter.com/digitalocean/status/958364631671758854?l...

Is that them taking the "not advertising it" line to the next level by publicly stating they don't protect you even though they do? I'm a bit skeptical.


Nice. Apart from the security incident that took place a long time ago, is there any reason why everyone is going straight to DO instead of Linode?

For a long time Linode has had better features, performance, and bandwidth. It wasn't until recently that DO had Managed DB and many other additions.

Linode's High Memory Plan also has much better Memory : CPU Ratio.

Still waiting for their CDN (not sure why they are not exposing it and instead require going through CS), Managed DB, and Bare Metal. Once those three are in place (and well tested), it should provide decent competition to the hyperscalers.


The security issue and its appalling handling were not trivial; it'll take a long time to regain that trust. DO is more widely used nowadays, which has resulted in a larger community utilising the API integrations etc. DO's managed Kubernetes etc. are also much more stable (or at least claim to be; Linode's is still in beta).


Better marketing from DO. Their strategy of content marketing with how-to guides has been massively successful.

Personally, I think DO has a more pleasant UX too.


DO has been posting (or rather, spamming IMHO) /r/golang with fluff like "how to use switch statement in Go". You know, the sort of articles that someone who learned Go a week ago can write.


As a former marketing co-worker once told me: marketing is pretty much just standing around flapping your arms to get attention.

The content might be dumb, but it drives eyeballs and you remember DO.


Their server admin guides seem to be good, though. Whenever I google for how to do x things, their articles tend to come up. (I'm not a professional admin, obviously)


I am not sure how it is now, but not that long ago there were slight differences in backup prices and floating IPs. If I remember correctly, Linode backups cost almost twice as much as on DO. On Vultr you had to pay for a floating IP, which you don't on DO.

Also, from benchmarks I've seen, Linode was inconsistent and overall not as fast as it looks on paper. Vultr was best in things like CPU performance but had slower networking. DO was just OK in all metrics.

Add to that that DO has the best admin panel, is usually first with most services like Spaces (which is not that amazing, but OK), and sponsors a lot of good content (tutorials, podcasts).

They are all similar services; once you're on one of them there aren't many reasons to switch. I looked into Vultr's high-frequency compute for a new service and then realized I would have to deal with two invoices instead of one every month... so I just used DO :)


Marketing goes a long way. It makes me wonder how many products/services are out there which are actually much better than the well-marketed ones, but we just don't hear about them because they don't have the same level of marketing.

I actually don't use DO, but I've used their articles many times after searching for how to setup some things on Linux. Their tutorials are excellent and have saved me a ton of time. I'd imagine those articles alone drive a lot of traffic to them.


Who is "everyone"? Linode has been my primary choice over DO for some years.


Pretty much Reddit and HN Sentiment. And if you look at the growth of DO it is apparent DO is outgrowing its competitors.


Can you share where you have the growth number between cloud vendors?


Two things that made me migrate (mostly) to DO from Linode:

1. For a long time, Linode did not have a Terraform provider.

2. DO's managed Kubernetes offering Just Works, and is very competitively priced.

FWIW, I still run a small Linode box. It's been rock-solid, and the support they provide is absolutely top-notch.


I've been a Linode customer since 2009, but I'm slowly migrating away to DO. Their k8s works great, and the managed Postgres/Redis is ideal for me.


DO has a better web interface and solid marketing, which includes writing a ton of tutorials that show in search engines when people are looking for system admin type stuff. This makes them pretty popular.

Linode has better resources for the price and really solid support, so I tend to stick with them.


A lot of it was inertia. But after some recent bad experiences with DO, we're planning on giving competitors a try. Looks like Linode is just the ticket; thank you for pointing it out.



Not sure why this is being continuously downvoted. But thank you for everyone else that took the time to answer the question.


> Apart from the security incident that took place long time ago, are there any reason why everyone is going straight to DO instead of Linode?

Which one? I count at least 4 off the top of my head.

And the problem wasn’t just those security incidents, it was Linode lying and covering them up.


Good for them. Having DDoS protection included in the price is now one of our core purchase criteria. There are many ransom hackers that target nobody companies like where I work, where this kind of protection is now mandatory to ensure uptime.


I wish they had a datacenter in Brazil.


Bandwidth in South America is approximately 8x the cost of North America or Europe. There are almost no carrier-neutral datacenters or peering points where you can exchange traffic with other in-country networks without paying for transit. Most countries will strong-arm you into buying "local" for your hardware, which means using in-country resellers that drastically mark up prices for foreign businesses.

Next to Moscow it is one of the most difficult places I've tried to put servers.


The only semi-competitive option for bandwidth in South America is Oracle Cloud, but of course that comes with its own issues (primary among them being that you'll be using Oracle). But if you can deal with that, a basic 2 vCPU / 8 GB VM comes to less than 25 USD, with bandwidth costing 8.50 USD/TB.


Does AWS or any other large cloud provider offer this type of service?


AWS has strong DOS protection and just doesn’t tend to shout about it. Putting services behind AWS for some basically free protection has been a trick against basic volumetric attacks for a while.


So how does that actually work?

Doesn't AWS charge an arm and a leg for traffic?


AWS charges an arm and a leg for outbound traffic; inbound traffic is free. Volumetric attacks are all about overwhelming your inbound; if AWS will swallow that at their ACL layer for you, that seems pretty useful, and shouldn't generate billing.


OTOH, it's tricky to direct your inbound to AWS without involving them in your outbound...


Certainly... It was more that if you're on AWS and you attract a volumetric attack, it's not going to cost you an arm and a leg.

Maybe you could run www on AWS and your real service somewhere with reasonable prices for traffic. In my experience, people who randomly DDoS tend to hit www rather than useful parts of a service.


In terms of running a data lake or keeping stuff for a long time, it's great, but of course they're banking on you moving a bunch of data to AWS to either train ML off of (compute costs) or keep it there and rack up storage space charges.


It seems to benefit AWS as well: if the traffic isn't blocked, your server can respond with some payload, which means useless outbound traffic that AWS pays for and then charges the customer for, making them unhappy. If AWS drops it instead, it's good for both.


Basic volumetric attacks - which are what the smaller providers can't handle - AWS can eat easily.

I don't believe I've been charged for this type of attack. The one you should look out for if you are new to AWS and trying to do this "trick" is L7 repeated downloads of high-file-size content to consume your budget rapidly.


> L7 repeated downloads of high-file-size content to consume your budget rapidly.

How can one protect against this type of attack?


Yes - this is my main concern about moving a website from a fixed-price service to S3+CloudFront.


All of them have both automatic and add-on DDoS protection. For example, the branded offerings (which don't cover all of the built-in protection):

GCP: https://cloud.google.com/armor/

AWS: https://aws.amazon.com/shield/


Last I used AWS Shield years ago, you had to open up a support ticket to tell them about your DDoS to get a refund every time. Is that still true?


I don’t believe GCP has such a service explicitly, although in 2 years of using them I’ve never seen any DDoS like behavior despite having multiple public endpoints. I am assuming there is some kind of automatic remediation that happens, or else I’ve just been extremely lucky.


https://cloud.google.com/armor/

GCP Network has built in DoS mitigation as well (e.g. in the load balancing layer) so you get some protection from that for free.


DDoS isn't free for the attacker either. Unless your service means something to a competitor or someone, why would you get hit by one?


Not sure if DigitalOcean falls under big providers, but they offer the same solution as Linode; they just made a business decision not to advertise it. Even though they haven't advertised it, the reduction in null routes and better customer experience has paid for the cost.


When using services where the IP is shared by multiple users, they really have to have good protection. They can't nullroute all the tenants.


Fascinating! I learned about fail2ban this week, as well as how to search for bad SSH actors -- I was amazed at the traffic my Linode was getting hit with.

Having this as a default seems good.
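A minimal sketch of that kind of log search, assuming the standard OpenSSH "Failed password ... from <addr>" auth.log format (the sample lines use documentation IP ranges):

```python
import re
from collections import Counter

# Count failed-password attempts per source IP from sshd log lines.
# Assumes the standard OpenSSH "Failed password ... from <addr>" format.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def top_offenders(lines, n=10):
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits.most_common(n)

sample = [
    "sshd[123]: Failed password for root from 203.0.113.7 port 22 ssh2",
    "sshd[124]: Failed password for invalid user admin from 203.0.113.7 port 22 ssh2",
    "sshd[125]: Accepted publickey for deploy from 198.51.100.2 port 22 ssh2",
]
print(top_offenders(sample))  # [('203.0.113.7', 2)]
```

Run against a real /var/log/auth.log on a public-facing box and the top entries are typically botnet scanners hammering root/admin logins.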


Spurious SSH traffic is not a DoS, and isn't the sort of attack this is talking about; this is about volumetric floods and things of that nature.


If your SSH logins are key-only (and they should be) then fail2ban is unnecessary IMO. No one is going to gain access without your private key, and while it's a nice feeling that the bad actors are "blocked" - fail2ban is using more resources to block them than their attempts are using.

Assuming you aren't getting 1000s per minute, of course.


I was at this stage. After some time you'll hate fail2ban. A big part of attacks comes from hacked "regular desktops"; by blocking their IPs permanently you will block access for legitimate (and non-hacked) users, since providers often change users' IPs.


Fail2ban (and IP blocking in general) breaks down a bit on IPv6 when the attacker has a /48 or /64. Are you going to block a single IP address or the whole netblock? What size block is safe to not cause collateral damage for e.g. mobile users?
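One common compromise is to aggregate IPv6 offenders to their covering /64 before banning; a sketch using Python's ipaddress module (the /64 choice is exactly the judgment call described above):

```python
import ipaddress

def ban_network(addr: str) -> str:
    """Collapse an offending address to the block you'd actually ban:
    the single /32 for IPv4, the covering /64 for IPv6. The /64 width
    is a judgment call -- too narrow and the attacker hops addresses
    within their prefix, too wide and you hit innocent users."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return str(ipaddress.ip_network(f"{addr}/64", strict=False))
    return str(ipaddress.ip_network(f"{addr}/32"))

print(ban_network("2001:db8:1234:5678::dead:beef"))  # 2001:db8:1234:5678::/64
print(ban_network("203.0.113.7"))                    # 203.0.113.7/32
```

Counting bans per aggregated network rather than per address also keeps an attacker from filling your ban table by rotating through a /64.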


i'm always surprised by the obvious promotion posts here. I've had free DDoS protection on my Hetzner servers since forever and nobody ever mentioned it


There was an HN thread on it as well when they finally added it instead of nullrouting IPs.

Might be this one? https://news.ycombinator.com/item?id=12403783


About time after everything was down for weeks in 2015.


The Christmas 2015 hack has now been fixed :) My boss has been begging us to leave Linode ever since. We haven't left.


One more reason to Like Linode.


Congrats to them. Linode always has a special place in my heart.


Couple reasons for those less experienced with VPS?


Not GP, but for me Linode has been fine for 5+ years, and that speaks for itself. Vultr, meanwhile, has had a choppy network at least once a month for a few years now (detected by port monitoring), and that speaks too.

DO has been good on me too.


Same experience here with Vultr; it always goes down. But it seems to be a bit better now than before.


OVH and Linode are to VPS hosting what Bluehost is to shared hosting: providers that you want to stay away from if you want good server infrastructure.


Can you elaborate?


Hope they use fastnetmon.



