How To Set Up Your Linode For Maximum Awesomeness (feross.org)
208 points by feross on Oct 5, 2012 | hide | past | favorite | 112 comments



Some random selections from my notes on building Linode web servers:

- Set up reverse DNS in the Linode manager: select the Linode, click on "Remote Access", click on "Reverse DNS" (under "Public IPs")

- Linodes don't offer very much disk space; use localepurge to keep the filesystem on a diet: # apt-get install localepurge

- After installing and setting up MySQL, don't forget: # mysql_secure_installation

- After installing Apache, change the following in httpd.conf: ServerTokens -> Prod, ServerSignature -> Off, KeepAlive -> Off (see the sketch after this list)

- After installing PHP, edit php.ini to make it shut up: expose_php -> Off
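A minimal sketch of those Apache and PHP edits (file locations vary; the paths below assume Debian/Ubuntu packaging, which is my assumption, not from the notes):

    # /etc/apache2/apache2.conf (or httpd.conf)
    ServerTokens Prod        # report only "Apache", no version or module info
    ServerSignature Off      # no version footer on error pages
    KeepAlive Off            # see the KeepAlive debate further down the thread

    # /etc/php5/apache2/php.ini
    expose_php = Off         # drop the X-Powered-By: PHP header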

There's a bunch of other fiddly stuff to do, and a seemingly endless combination of packages and strategies depending on what you're trying to accomplish. For instance, I currently run a stack with Postfix for MTA, awstats+jawstats for beautiful server-side site statistics, mod_deflate, mem_cache, fcgid/suexec to make it harder to break the server if a site is compromised, PureFTPd for really easy managed-by-MySQL FTP access, and a pile of other little minor tweaks and knobs turned.

MOST IMPORTANTLY: Backups, backups, backups. If you don't already have your own in-house backup service for your server (and I bet you don't!), then please take advantage of Linode's backup services: http://www.linode.com/backups/


Blindly turning off KeepAlive isn't a recipe for awesomeness.

KeepAlive is a trade-off between memory and CPU time: leave it on if you want to speed things up in exchange for memory, turn it off if you want to conserve memory and pay the cost of initializing new connections.

If you're doing anything like loading a web page with more than a few images, KeepAlive will likely improve load time: HTTP requests resources serially, so re-using the same connection saves a connection setup per request.

Protip: run Apache's mpm-worker with KeepAlive on to save memory and speed up your site.
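For the curious, a rough sketch of that combination (the numbers are illustrative placeholders, not tuned recommendations):

    # Apache 2.2 with the worker MPM (apache2-mpm-worker on Debian/Ubuntu)
    KeepAlive On
    KeepAliveTimeout 3          # drop idle connections quickly
    MaxKeepAliveRequests 100

    <IfModule mpm_worker_module>
        StartServers          2
        MaxClients          150
        ThreadsPerChild      25
        MaxRequestsPerChild   0
    </IfModule>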


It just so happens that I have the very best test site possible for this argument: one of my alpha customers has a website built by Not-A-Professional with an incredible 156 requests and 2.39MB of data for the landing page. It is exactly the opposite of an optimized website.

So I tested the page load with KeepAlive on and KeepAlive off. I `/etc/init.d/apache2 graceful`'d before each test, and I cleared Chrome's cache before each test. I have nothing else running on my system right now and our pipe is nice and clear. I have a 92ms ping from the office to my web server.

KeepAlive On: 5.31s

KeepAlive Off: 6.87s

1.5 seconds is an appreciable difference. It works out to roughly 10ms per request, unless it's so late that the math part of my brain has quit for the night. If Yahoo.com were loaded from my web server, having KeepAlive off would delay total page load by about 750ms -- a noticeable difference. (However, given that it takes over 4 seconds to reach page load on yahoo.com for 70+ requests, their page might actually load faster if it were hosted on my server...)
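Back-of-envelope check of that math, using the numbers above:

    (6.87s - 5.31s) / 156 requests ~= 10ms per request
    75 requests x 10ms ~= 0.75s extra page load with KeepAlive off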

Now here's the kicker: that same customer site has been featured on a popular radio show, twice, driving over 6 million hits over 24 hours, with peak traffic at 40Mb/s, thousands of simultaneous connections, and 100+ requests per second, sustained for several hours.

The site's loading time during that period was utterly unaffected. Seriously, never even a blip. (I stayed up during the whole period just to personally watch it.)

I wasn't about to experiment with turning on KeepAlive during all that fun, but I'm reasonably confident that having KeepAlive turned on would have crushed that server, given that my customer's site was receiving substantially more traffic than even Pinboard did immediately after the Delicious announcement (http://blog.pinboard.in/2011/03/anatomy_of_a_crushing/) -- traffic that Pinboard's founder notes will usually kill an unprepared site (or server).

And keep in mind, this was all on a memory-constrained tiny little vm.

So here's my final statement on KeepAlive, which I will link back to from now on: if, as a server admin, you're turning KeepAlive on to speed up your site, then you are optimizing your server and your site in exactly the wrong way.


You're not providing any data showing that having KeepAlive on would've 'crushed' the server; actually, quite the opposite. For example: with KeepAlive on, your site response time was 5.31s. With KeepAlive off, it was 6.87s. So by your own test, the site was slower to load with it off.

With KeepAlive off, your server spawns a new connection for every request rather than re-using them. With mpm-worker and threads, that's not so bad. However, I've seen a lot of people run prefork, which uses processes instead of threads, and that can be a Bad Time(tm) with heavy spikes in traffic.

Anyway - found this here (not my site) - explains it fairly well: http://abdussamad.com/archives/169-Apache-optimization:-Keep...


From one of my other replies to the KeepAlive discussion: http://news.ycombinator.com/item?id=2588783

I promise you my little server was getting more traffic than that, and since patio11 -- another vocal "turn off KeepAlive" advocate -- personally commented in that thread that turning off KeepAlive directly resolved their site outage, I think I'm going to stick with what I've got.

The server is already using mpm-worker, as well as fcgid carefully tuned to make PHP nice and snappy without eating all of the server's available memory.

Final point: site response times aren't the only metric a server admin should care about. A server admin should also -- and maybe even primarily -- care about concurrent connections. If I can serve thousands of simultaneous connections at 6.87s or hundreds of simultaneous connections at 5.31s, guess which one I choose?


1.5 seconds is a lot of time to shave off a page load, and it shouldn't just be ignored. Other people do ridiculous things to their architecture to shave off a dozen milliseconds no one cares about.

I mean, KeepAlive shouldn't crash servers. Have you tried tuning other settings, like KeepAliveTimeout? It's either a config issue or a general Apache issue, but it should be solvable in both cases. There is no unsolvable problem that would prevent a server from persisting connections under lighter load and dropping them when there is more traffic.


I liked your other comment on this better, fwiw.

I'm trying pretty hard not to get defensive here, so before I respond, I need to point out that I originally posted a comment intended to help people get a better Apache config; I didn't claim anywhere that it was the perfect one, or that there weren't better ways to go about it. I posted the comment to a thread started by a post clearly aimed at total newbies (and SEO), which is why my comment stuck mostly to really really simple stuff, and didn't include other notes, like changing the server's file descriptor limit. Now I'm letting myself get dragged into a debate over, frankly, minutiae that don't belong under a post about "setting up your first web server". I work all the time on a mind-blowingly wide array of technologies, yet I never talk about it because

I

hate

this

shit.

So much.

So, that said: if my customer wants to shave 1.5 seconds off of their page load times, the very first thing they need to do is (drumroll...) _not_have_156_resources_on_their_landing_page_. We're not talking about 1.5 seconds for the average Wordpress site here (54 requests, based on a Wordpress blog picked at semi-random). So let's just be clear that we're talking about a 10ms delay per request over a connection with a 96ms ping.

If they did that, and still weren't happy with the results, then the next thing I would do is pay for a bigger VPS instance.

If I did that, and they still weren't happy with the results, then the next thing I would do is move their site to a server in a data center closer to their geographic location.

If I did that, and they still weren't happy, then the next thing I would do is charge them a brand new premium rate of about 10x what they're paying now, and I would re-configure my PowerDNS servers to do geoip-based results and I would set up nginx proxies (all of which I intend to do eventually anyway).

If I did that, and they still weren't happy, I still wouldn't bother farting around with KeepAlive, because at that point I've entirely removed Apache from the page-load equation, and doing all of this is easier than screwing around with the Apache config any more than I already have.

Please don't get me wrong. I think performance is really really important. It's a frequent soapbox point for me. I have already sunk a ton of time into making my server setups secure and fast and reliable. I will continue to sink more completely irretrievable time into that. However, as geeks it's easy for us to lose sight of what's actually important: should I spend more time trying to remove a 10ms delay per request without sacrificing reliability, or should I answer some more support requests and get another job done on time?

I also have to confess that part of my problem is that I don't trust any of the web server stress testing tools that I've used so far. Like, at all. It's possible that I'm too dumb to use them, but I can't find one (even a paid service) that I can use and say to myself, "Yeah, so that's what would happen if the website got on the front page of Reddit." Without some kind of really solid tools to use for testing, I'm disinclined to try to squeeze 10ms of extra performance out of the server and take it right up to the hairy edge of what a VPS can do. If you have a recommendation for a stress tester that you feel does a good job of emulating front-page-on-Reddit syndrome, by all means, please share. I'd love to try it, turn some more knobs, and maybe even write a post titled "the best Apache-on-Linode config for surviving the front page of Reddit".


Interesting.

Do you mind pastebinning your Apache conf and your MySQL conf? It would be nice to see what parameters you are tweaking.

Which VPS host are you using?


I'm using Linode. I can't say enough good things about them, btw. Their support is mind-blowingly good, and their service isn't too far behind.

I have zero incentive to post my config, sorry. One, it would take me an hour or so to format it for general consumption; two, it would take me many many hours more to justify every single setting in it to anybody who read it.

We've got a website overhaul in the works, which will feature a blog which will actually get used & updated, and one of the first items on there will be our server configurations. Promise.


Two other suggestions would be:

1. Test how your VPS comes back after a reboot, whenever you make big changes and at least every 6 months. With all the upgrading that Ubuntu does by default, the bootup process can break, and you won't find out until that emergency unscheduled reboot at 3AM.

2. As long as you're customizing the firewall you should block pings entirely.

* Really, since the distributions are very compatible, I would urge you to consider a distro that has SELinux enabled by default. Fedora Core is a great place to start. It also has better tools to manage security and gives you good resume skills.


>> 2. As long as you're customizing the firewall you should block pings entirely.

Why?

I've never seen a threat model where filtering ICMP doesn't end up being more trouble than it's worth. Then there's the maintenance headache when basic but powerful tools like ping and traceroute are rendered useless.

It's the same BS as with fail2ban, thankfully the OP wasn't spreading the gonorrhea of port knocking/single packet auth. Lock down your sshd like everything else: Disable root, disable tunneled cleartext passwords, enforce proper key usage, use AllowGroups/AllowUsers.
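For reference, a minimal sshd_config sketch of that lockdown (standard OpenSSH directives; the group name is just an example):

    # /etc/ssh/sshd_config
    PermitRootLogin no                   # disable root login
    PasswordAuthentication no            # no tunneled cleartext passwords
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes             # keys only
    AllowGroups sshusers                 # only members of this group may log in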

EDIT: The BS with fail2ban is moving the ssh port around. I understand this being a problem, but surely iptables has something similar to OpenBSD pf's:

    block in quick from <brutes>

    pass in log on $if_ext proto tcp from any to ($if_ext:0) port ssh keep state \
        (max-src-conn 3, max-src-conn-rate 4/32, overload <brutes> flush global)
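iptables does have something roughly similar via the `recent` match; here's a sketch of my own rough analogue (not a translation of the pf rules above, and the numbers are examples):

    # track new SSH connections per source, and drop a source that opens
    # a 4th new connection within 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m recent --set --name SSH
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
    # (an ACCEPT rule or policy for port 22 still has to follow)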


Mind explaining your issue with single packet auth? Port knocking by definition is an extra layer of security through obscurity, in that gaining access requires hitting the right sequence of ports to "knock". Depending on the implementation, this is either very vulnerable to MITM/sniffing attacks (static knock sequence) or quickly gets really complicated (fully dynamic knock sequence).

SPA, by contrast, uses actual crypto to securely authenticate the user with the server. I'm a fan of fwknop, which uses GPG to sign a request packet which is read and understood by the server. It protects against 0 day attacks on OpenSSH, lets me drop 22 inbound to eliminate all those pesky attackers, and allows me to securely authenticate with the fwknopd.


Sure.

> "Port knocking by definition is an extra layer of security through obscurity"

Security through obscurity is not security, it's at best theater.

> "It protects against 0 day attacks on OpenSSH"

You substitute one problem for another: 0-day attacks on the SPA tool. Your model isn't safer, it's just different.

And personally, I trust the OpenSSH guys way more than any SPA vendor, simply because they have a very good track record.


You're making no sense. Moving the SSH port is a trivial way to reduce your attack surface (undirected bulk scans go for 22).


If undirected bulk scans are a serious threat to your security, something is up.

Properly configured (AllowUsers, root login disabled, no cleartext passwords, keys only, etc.), I'd say that undirected bulk scans pose no security risk at all; they are only a nuisance in terms of spamming your logs, which is easy enough to deal with.

What I'm really trying to say is that each "trivial way to reduce your attack surface" has both costs and benefits.

I'm contending that moving the ssh port around gives you the benefit of less log spam with no security gain, and costs in terms of documentation and maintenance.

When I do this cost/benefit analysis, I conclude that moving the port around has more costs than it does benefits, so I don't bother.


> undirected bulk scans pose no security risk at all

A future bulk-scan may leverage a new SSH-exploit before you know it exists.

To put it explicitly: you should disable passwords and change the SSH port. Those are the two measures that make sense, to reduce the attack surface and prevent password brute-forcing.

The rest of your recommendations are security theatre. An attacker dedicated enough to find your SSH port and be set back by !AllowRoot will just brute-force an allowed username - if that's even a prerequisite for the given ssh exploit.


> A future bulk-scan may leverage a new SSH-exploit before you know it exists.

Sure, this is true. I consider this a "minor" issue; truth be told (I didn't want to muddle up the conversation), I don't tend to run sshd facing the 'public' internet, and in the cases where I do, ssh access is restricted to certain hosts/networks and enforced by a firewall.

> The rest of your recommendations is security theatre

Can you state why? I think they all provide measurable/real benefit, if this isn't the case I'd welcome some education.

Hm. I will give you that AllowUsers/AllowGroups is not a very big benefit in this case; I mainly enforce those directives to protect against problems such as bogus user account creation (whether by an exploit or by something as simple as an admin mistake).

>An attacker dedicated enough to find your SSH-port

And Now for Something Completely Different.

Protecting against a dedicated attacker is a totally different ball game than protecting against drive-bys.


Why would you block icmp ping?

I never understood that practice, especially not for standalone machines (in contrast to company networks, on the router).

So, why really?


I commonly see advice to block all ICMP traffic which is even crazier.

I think perhaps the general suspicion of ICMP might be related to things like this http://en.wikipedia.org/wiki/Ping_of_death which are now mostly irrelevant.

Some sort of naive attempt to stop attackers mapping your network too perhaps? It's not exactly a high tech (or effective) means of intrusion detection though.


Why would you turn off KeepAlive instead of setting it to a small value?


Honestly, I haven't done serious high-traffic testing to see if there's a magic number for KeepAlive. However, I've had enough traffic to the server on a couple of occasions that Apache would've fallen over if KeepAlive were on at all, since you can't specify KeepAliveTimeout in increments of less than 1 second.

There are also quite a few Apache-tuning articles on the web that feature "turn off KeepAlive!" in big bold letters, not to mention previous discussion on HN: http://news.ycombinator.com/item?id=2588783, http://news.ycombinator.com/item?id=1980278, http://news.ycombinator.com/item?id=1875848, http://www.kalzumeus.com/2010/06/19/running-apache-on-a-memo... ...


Regarding keep-alive: yes, it will lead to memory issues when it's on and you use mod_php and the prefork MPM (which you really have to with mod_php). But this is just one configuration. You can easily fix it by either of:

- use php-fpm and switch to any other MPM in apache

- keep using mod_php, but put nginx or another reverse proxy in front of Apache. It will handle the keep-alive, and you can configure Apache to still close its connections.

- use nginx and php-fpm directly

All of these will mitigate the memory issue while still allowing you to offer keep-alive; a rough sketch of the reverse-proxy option is below.
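For the reverse-proxy option, a rough nginx sketch (ports and names are examples; by default nginx speaks HTTP/1.0 to the upstream and closes each backend connection, so Apache can stay at KeepAlive Off):

    server {
        listen 80;
        server_name example.com;
        keepalive_timeout 15;                   # nginx holds the client keep-alive

        location / {
            proxy_pass http://127.0.0.1:8080;   # Apache bound to localhost:8080
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }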


I think prefork/mod_php is still the default setup, and it's a really good one to get away from right away if you plan on handling any serious traffic.

Using nginx/php-fpm is great, so long as you don't need support for Apache-style .htaccess files. Since I'm adminning a shared hosting environment, I can't give up support for .htaccess files; if you intend to only host your own site, and your site doesn't need that, then by all means please use nginx & php-fpm, you'll save yourself a lot of headaches.

Setting up an nginx proxy seems like kind of a cheat in a discussion about tuning Apache -- "how to tune Apache: 1. don't tune Apache, set up nginx proxies instead..." -- but suffice it to say that setting up proxying nginx servers is on my to-do list.

I have not yet, though, had a traffic-related site outage (see also http://news.ycombinator.com/item?id=4619906), so it's not as high on my list as things like "recurring payments system". :-)


> - After installing Apache, change the following in httpd.conf: ServerTokens -> Prod, ServerSignature -> Off, KeepAlive -> Off
>
> - After installing PHP, edit php.ini to make it shut up: expose_php -> Off

What is the purpose of this? Security by obscurity? Or being really frugal about header length?


Yep, security by obscurity. AKA, the exact same reason that people move ssh to other ports.

If it turns out that there is some kind of exploit floating around for a specific version of Apache or PHP, and if jackasses are looking for vulnerable servers by first looking at the server headers instead of just randomly targeting servers, then I want mine to be totally useless to them so that they (hopefully) just move on to some other poor schmuck's server.

Security-by-obscurity is only a problem when you depend on it, or when you are using it to cover up some kind of serious stupidity, like totally untested crypto. There's nothing wrong with using it to frustrate and annoy adversaries.


This is great SEO.

The author's website ranks first for "Linode Hosting Review".

He'll receive a commission for sales generated through his site, and since his strategy is so targeted, it should convert well and deliver value to both him and Linode.

And he does it without ever having to resort to spam, the very opposite in fact; he does it through high quality content.

Good work feross!


Good for him for generating the content and getting the rankings. I don't think it's that competitive though. I launched my site last week and I am on page 2 for the same term.



"I originally compiled this guide as a .txt file of notes for myself, but decided to share it in case anyone finds it useful. If you're looking for something straight from the horse's mouth, Linode [offers guides](http://library.linode.com/) that cover how to set up a new server, but some of the info is out of date."


Huh, I never realized that "maximum awesomeness" was synonymous with "basic functionality".


Veterans of Hacker News - why am I not allowed to down vote?


There's a karma threshold before you're allowed to downvote.


Could you tell me what that number is?


It goes up over time. Right now it's around 500.


For comments or posts?


Comments. I believe the threshold for flagging submissions is 1000, but I'm not sure; I didn't notice when I hit it. You cannot downvote submissions.


Flagging is considerably lower; this account can flag comments and submissions, so I think ~20 karma is required to flag.


So you want to downvote his objection to the overhype in the blog post?


No, I want to down vote comments that only serve to project a vague superiority vibe and don't add anything to the conversation.


Pointing out that a blog post is overhyped is useful.


Fair enough. That's not what I took from the comment though.


Your karma must be above 500.


Or, even easier, use one of the many StackScripts[1].

But what I would really like to see is a decent introduction to Salt, Puppet or Chef. That way, people deploying their first server could easily build and rebuild the instance instead of retyping shell commands from a blog article.
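As a taste of what that looks like, here's a minimal Puppet sketch (my own example, not from any of those intros; package/service names assume Debian/Ubuntu):

    # webserver.pp -- apply with: puppet apply webserver.pp
    package { 'apache2':
      ensure => installed,
    }

    service { 'apache2':
      ensure  => running,
      enable  => true,
      require => Package['apache2'],
    }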

Also a note on the default Apache setup: there are several "flavors" of the server. If you are going to host Python sites with mod_wsgi, then the apache2-mpm-worker package is a much better choice than the traditional prefork model.

[1] http://www.linode.com/stackscripts/


Pro tip: instead of changing ssh default port, setting up fail2ban and messing with iptables rules manually, just use ufw. You're welcome.
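For example, a minimal ufw setup might look like this (ports are examples; `ufw limit` rate-limits repeated connection attempts):

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 80/tcp       # web traffic
    sudo ufw limit 22/tcp       # SSH, rate-limited against brute force
    sudo ufw enable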


Is that an Ubuntu-only tool? I'd be hesitant to commit myself to anything that runs on a single distribution if I can help it. Fail2Ban would work if I decided to migrate to CentOS later down the line. https://help.ubuntu.com/community/UFW


It's an Ubuntu only tool in the same way that aptitude and dpkg-reconfigure are Ubuntu only tools.

The guide is not claiming to be generic, it's totally fair game to use specific tools imho - especially if they simplify things.


>the same way that aptitude and dpkg-reconfigure are Ubuntu only tools.

No. ufw can run on other distros; it's just a frontend to iptables. A quick googling makes it look like it's in the Arch repos, while you have to compile it yourself on CentOS. I don't know about all distros, YMMV.

Even if ufw was Ubuntu specific, it would not be Ubuntu-specific in the way that aptitude and dpkg-reconfigure are. Those tools are Ubuntu specific because they're specific to that package manager.

And in actuality those tools aren't Ubuntu-only either: apt is the Debian package manager, and the tools should be present on any Debian-based distro.


I'm pretty sure you completely misread that post.

"X is Y the same way that (false statement)" -> X is not Y


I hear so many completely wrong generalizations about distributions from people who are being sincere that I couldn't read it as anything but another one. People saying "only Ubuntu X" when they really mean "only Debian-based X" is one of the most common mistakes I hear.

If I did misread it, that's an even worse thing to say. If a person is in the position of asking whether ufw will work on CentOS, it is downright mean to give an answer that requires them to understand esoteric particulars of the Debian bloodline.


I don't know if I would call the package management system of debian esoteric in a conversation comparing distros. I will agree that the wording was a bad choice.


To my knowledge it is packaged only for Ubuntu, but it does work perfectly on CentOS.


Might I make one suggestion:

Make sure your SSH port is below 1024 (but still not 22). The reason: if your Linode is ever compromised, a bad user may be able to crash sshd and run their own rogue sshd as a non-root user if your original port is configured above 1024, since only root can bind ports below 1024.
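A minimal sketch of the change (922 is just an example; pick any unused port below 1024 and update your firewall rules first):

    # /etc/ssh/sshd_config
    Port 922        # below 1024, so only root can bind it; not 22, to dodge bulk scans

    # then reload sshd, e.g.:
    service ssh restart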


Thanks for this tip; I never realized this. More in the link if interested:

http://unix.stackexchange.com/questions/16564/why-are-the-fi...


This is an excellent suggestion. Thanks!


Thanks for the advice on the out-of-memory reboot. I probably have Apache misconfigured, or my app is slowly leaking memory, but I've run into random downtime and an unresponsive server as Apache runs out of memory.

One of the most annoying things I'm running into with Linode and other VPS services is dealing with my drive running out of space. I've been moving all my database stuff to Heroku and I'm much happier (plus the automatic backups are a dream).


I find my Linode really useful for x11 forwarding to use Firefox in public places, or to get around web filters. Might be worth a mention of how to do this.
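A minimal sketch of how that works (assumes X11Forwarding is enabled in the server's sshd_config and an X server is running locally):

    ssh -X user@your-linode.example.com    # -Y for trusted forwarding
    firefox &                              # runs on the Linode, displays locally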


Write a blog post.


Though it is late, and I've been out at bars with friends, I made this for you:

http://dzoba.com/?p=182


What is the advantage you see of tunnelling X like that over just tunnelling the network connections with ssh -D?


Not all of the web browser traffic goes through the tunnel when you set a proxy in the browser. DNS queries for example do not. Flash programs such as video players also connect directly to the remote website and do not use the proxy. If you don't know those two facts you might be leaking way more information than you want on the local network!


Firefox defaults to not proxying DNS, but if you set "network.proxy.socks_remote_dns" to true in about:config, it will happily do so for SOCKS proxies. If you use FoxyProxy (which is pretty damn useful if you proxy over an SSH tunnel regularly), it has a checkbox that enables or disables it per-proxy.
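Putting those two together, a sketch of the ssh -D approach (port 1080 is just an example):

    ssh -D 1080 user@your-linode.example.com
    # in Firefox: set a SOCKS v5 proxy at localhost:1080, and flip
    # network.proxy.socks_remote_dns to true so DNS goes through the tunnel too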


AFAIK DNS queries can be made to go through the proxy (I think it's SOCKS v5 that enables this). I'm using ssh -D and the SOCKS v5 setting in my Firefox when I need to see websites that are blocked by a very restrictive proxy.

Why would Flash programs bypass the proxy? At my work they wouldn't be able to connect if they weren't using the proxy.


Thanks for the write up -- I'll try it the next time I travel to China!


Very interesting. Thanks for writing it up!


It is so easy, and dangerous, to mess up setting up a firewall from scratch; why mess with it?

Use the free, awesome CSF (configserver firewall) instead.

http://www.configserver.com/cp/csf.html


    echo /etc/hostname
really?


People make mistakes like this all the time. Obviously he meant to type `cat /etc/hostname`.


It should be to run the `hostname` command, nothing more.


That is pretty cool! I don't think I've ever come across that. Thanks!


Nice. Didn't know about this.


Ironically that command is used to set the hostname but not read it.

Next you'll tell me you don't even know what `uname` does!


Everyone is learning something here, no need to insult someone because they might not know something you know. I bet you don't know something he knows.


No room for a bit of sarcasm or smarm, now is there?


Oops - fixed.


There's a rule in HTTPS Everywhere (https://www.eff.org/https-everywhere) for your site. It redirects feross.org to https://www.feross.org, which currently doesn't work.


I got my site removed. https://gitweb.torproject.org/https-everywhere.git/commit/b7...

Thanks for letting me know!


Was wondering where I saw the name feross before...


Thanks for the help earlier today!


Thanks for letting me know.

I added HTTPS to my site back when I used Wordpress so that I could log in to the admin panel over public Wi-Fi. But now that I'm using Jekyll (https://github.com/mojombo/jekyll), I didn't bother to set it up.

Will see if I can get my site removed from their whitelist.


"On Windows, you’ll want to use putty [...] criminals often try to guess the root password using automated attacks that try many thousands of passwords in a very short time. This is a common attack that nearly all servers will face."

Yet another tutorial where I wonder who the target audience is. If somebody doesn't know the term brute-force, or how to open a terminal app and connect with ssh, IMO they shouldn't be setting up their own server by themselves.


Every sysadmin in the world once set up a server for the first time; every sysadmin in the world once heard the term "brute force" for the first time.


Some of these sysadmins were lucky to set up their first server when telnet was still considered acceptable to leave open.


"they are not supposed to set up their own server..."

What about people setting up a server for the first time? How else are they supposed to learn?

I built my first site when I was 14 and moved to a VPS when I was 17. Until I moved to Linode, I never had a reason to open Terminal.


> What about people setting up a server for the first time? How else are they supposed to learn?

Oh wow. Well, I'm kind of old fashioned, but maybe they could start with learning the basics of how the internet works, what ssh is, and what the hell Linux is, and by then they'll know why you need to open that weird terminal where there is only text and no cute rounded-corner buttons.

I don't get why[1] some tutorials try to cover everything almost from 'first, turn on your computer' to 'deploy your scalable social dynamic api-driven mumbo jumbo architecture web app', and there have been a few around here. There is a place for basic education, and there certainly is a place to help somebody kickstart a particular tech when they've got the basics covered - what's the point of mixing the two?

[1] actually I do. clickity click.


I used feross's tutorial about 18 months ago, when I first started learning about Linux while also setting up my Linode for the first time. His tutorial was very helpful, as I was not completely ignorant, but still not that familiar with a terminal.

There's a need for tutorials like that, where people don't want to learn all the layers of the internet, of ssh, and of the Unix architecture just to set up their Linode box. Actually, BECAUSE I set up my Linode, I learned a hell of a lot in the last 18 months about the internet, ssh, Linux, Unix and a lot more.

Everybody has to start somewhere, and "to start" doesn't mean starting full bottom-up. At some point, to get something done, you need to use abstraction and then iterate over that when you have to get a more complete understanding of what you did a few months ago.


I'll ask again, Hacker News. I've been on this site for three years. I've derived an incredible amount of knowledge from just being here and reading. Lately though, I'm enjoying it a lot less.

I'm not sure if my tolerance for snarky, dipshit comments like this has gone down lately, or if there are a lot more snarky dipshits on this site now. The only thing I know is that for some reason I am unable to down vote these comments - my only active recourse.

Is this a bug or is there some magic karma threshold that I still haven't hit?


Well, since you already know about the karma threshold, which is apparently 500 right now (it has risen over time), and you're getting downvoted for asking about it, let me tell you that downvoting other people isn't all it's cracked up to be...

Meanwhile, I'll steal this from a real HN veteran... I know there's been a lot of talk about the quality of comments or about the increased negativity, but I still really subscribe to this theory, which I take no credit for.

          Quality of Hacker News Comments Over Time

   |                   . .
   |                  .   . 
  q| . .             .     .
  u|    .           .       .               . . .
  a|     .         .          .           .       .
  l|      .       .              .      .           .
  i|       .     .                  . .               .    
  t|        . . .                       you are here -->. .
  y|                                      (that's all)
   |________________________________________________________
    J A S O N D J F M A M J J A S O N D J F M A M J J A S O
http://news.ycombinator.com/item?id=4365778


Is there some sort of tool to generate ASCII graphs out there?


Is there some sort of source for this graph, or is it a prove-a-point type thing? Just wondering, because if there was some way to measure quality of posts over time (up/down vote ratio?) that would be a really neat metric.


It's a reference to the notion of Eternal September, from Usenet lore. To summarize from memory, perhaps incorrectly: Usenet used to be great, but would get qualitatively worse when school started in September. Its quality would increase throughout the year as people learned to communicate appropriately. One year, however, it just never got better...


... when AOL started offering access to Usenet.

(In reality, Usenet was pretty great through most of the '90s; warez killed Usenet, not newcomers).


> I'm not sure if my tolerance for snarky, dipshit comments like this has gone down lately, or if there are a lot more snarky dipshits on this site now.

there are, I just spotted one.


Heh, I saw the same thing myself and had the same sort of reaction, but I think it's more an issue of the writer misjudging his audience (you can see this if you've ever taken a continuing education class, or the opposite problem if you read lots of technical documentation).

But in any case, everyone starts somewhere and everyone's entitled to post their own 'Linode post-mortem', especially if they're beginners and want to ossify their hazy thoughts somewhere. I don't begrudge the guy for just wanting to keep some record of his process, and maybe some day he'll look back and edit the document to keep up with his level of knowledge at the time.


My only gripe with Linode is that they are unable to handle DoS attacks. If you're subjected to an attack, your Linode will be shut off (null-routed) for 24 hours. After that time, they'll check to see if it's still being attacked.

I really think they could do a lot better than that. Apart from that gripe, they're a great VPS host.


What is your suggestion?

Sounds like they have a clear policy in place; most places it's a vague judgment call. But most places will kick any user that gets hit with a DoS attack large enough to disrupt service to other customers.

I mean, I agree that 'finishing the job' and letting the attacker win by null-routing the target is... a suboptimal way to deal with the problem, but with pipe-filling attacks? if the attacker can send more bytes than your pipe can handle, there is really no other realistic way to handle the problem. (I mean, you can try to trace down the source and call up the ISPs the packets are coming from, but BCP38 is still not widely implemented, so good luck tracing a spoofed source more than one or two hops up, and meanwhile, as you are calling people up and trying to get through to someone skillful enough to figure out where a spoofed packet is coming from, you are down.)

There are 'clean pipe' services, where someone with a very large pipe says they will programmatically detect and block DoS traffic, then pass along the good traffic to your (much smaller) pipe, and yes, those services can work /if/ the service provider has a larger pipe than the attacker. Of course, buying such a service from a service provider with a sufficiently large pipe is, as you can imagine, quite expensive. Buying such a service from someone that has less available bandwidth than your attackers are able to throw at the problem is, of course, completely useless.

So yeah, uh, assuming you can't afford to be on a 'clean pipe' that is backed by a huge network, you are much better off with an ISP that proactively shuts down DoS targets than one that doesn't. Unless, of course, you are a DoS target.

I mean, the economics of pipe-filling denial of service attacks, right now, are heavily tilted in favour of the attacker. It is a problem that needs to be solved, but I don't know how to solve it.


Other services, for example Amazon EC2, just deal with a DoS and charge you for the traffic.

Also, null-routing for 24 hours regardless of whether the attack only lasted 2 minutes is a little silly.


In the case where the attack is larger than your provider's pipe, they can't "just deal with it" - I mean, not without blackholing the target IP address. If your transit ports are full, you are dropping packets. Undoubtedly, this is an advantage that larger providers (like Amazon) have. On the other hand, most VPS customers would not want to pay $0.13 per gigabyte for DoS traffic.

>Also, null-routing for 24 hours regardless of if the attack only lasted 2 minutes, is a little silly.

It is a rather big deal for a hosting provider to lose connectivity for two minutes.



ssh-copy-id could be used to copy the SSH key


FYI, ssh-copy-id isn't available on OS X by default. So, it's easiest to just do it by hand.
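Doing it by hand is basically a one-liner anyway (assuming the default key name):

    cat ~/.ssh/id_rsa.pub | ssh user@host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'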


Didn't know about this - thanks for sharing!


Remind me again why anybody is still using Linode?

Poor uptime and a disgraceful attitude towards security. ANY service provider that hides the facts behind major security incidents from their customers should never be used.

I guess their new customers will soon realise how crappy it feels to find out from Reddit that your VPS is potentially hacked.


Who would you suggest then instead of Linode? I was thinking of renting their smallest one to have something to fart around with.


If you want something cheap to fool around with, there is:

http://nosupportvpshosting.com/

I just set up a VPS there to try out. No idea on reliability yet, but it does seem like a lot of (virtual) server for the money.


Prgrmr.com


Is this a joke? Looks like a malware site.


That does look like a malware site. It was a typo. Here's the correct URL:

http://prgmr.com/


That one looks quite good. I like the ASCII art design and the "the 64M option is an 'advanced option' - we don't have any distros setup that work well in 64 megabytes. You can strip down most distros to the point where they will work well, we just have yet to do so. If you still want to order it, use 'view source' to find the commented out link."


futurevps.de

Based in Germany, amazing support.

Edit: I'm German, but not affiliated. I do have a small vps hosted there for a while though.


Linode uptime varies according to data centre.

Lots of people use the Fremont centre. It is a dud.

I've hosted in Dallas without incident and currently I use Tokyo.


Citation or proof of claims?





