Storage Space Doubled For All Linode VPS plans (linode.com)
167 points by prg318 on July 25, 2013 | 145 comments



I had been a Linode customer for years, but I had to drop them when I lived through two security incidents where there was an almost complete lack of response by the Linode staff. They've completely lost my trust. I work in the information security industry, and I know the reality that almost everyone will get hacked eventually. I don't blame them for getting hacked. I blame their poor incident response plan. For my purposes, I just can't depend on an infrastructure provider who isn't brave enough to tell me up front that they made a mistake, or who tells me "don't worry, the attackers weren't after your information!"

Because next time, they might be. You're going to have to do better than incremental upgrades to convince me to come back.


I have the exact same feelings/experience about Linode. I really liked their service, but I cannot trust a company that doesn't promptly inform its users that their credit card info has been compromised.


Eh? They did say the attacker got access to the database which holds the encrypted copies of the credit card info [and other information].

https://blog.linode.com/2013/04/16/security-incident-update/


I don't have the exact dates at hand, but if I remember correctly the hacking happened a few weeks before they acknowledged it.

And the only reason they admitted it (well at least it looked like that) was because the info had already been leaked through their irc channel[0].

[0]: http://turtle.dereferenced.org/~nenolod/linode/linode-abridg...


Please, let's not rehash the past again and again. If you want more information, this has already been heavily discussed on HN.

  Compromised Linode, thousands of BitCoins stolen (bitcoinmedia.com) 
  316 points by tillda 510 days ago
  https://news.ycombinator.com/item?id=3654110

  Linode hacked, CCs and passwords leaked (slashdot.org)
  732 points by DiabloD3 101 days ago
  https://news.ycombinator.com/item?id=5552756

  The story around the Linode hack (straylig.ht) 
  349 points by foofoobar 79 days ago
  https://news.ycombinator.com/item?id=5667027


All of which were publicly acknowledged reasonably quickly. I didn't ask about breaches; I asked about ones that were not acknowledged, or not acknowledged quickly, which is what was claimed. ;)


I really wish you'd just read those threads instead of forcing it all to be re-hashed in yet another thread.

Brief summary: according to the hackers involved, they struck a deal with Linode whereby, if Linode made no moves to disclose the attack, the hackers would shred all of the data they had grabbed. Instead, the FBI forced Linode's hand in the matter. Even if that's not true -- and, in this incident, the hackers came out as more believable than Linode IMO -- there still was no mention of the incident on the Linode blog until after the hackers had claimed credit on Linode's IRC channel and the news of that had started making the rounds. This is identical to the previous incident, where Linode said nothing until after a customer started complaining loudly on their user forums.

Then, Linode wasn't forthcoming with details, despite the hack having occurred a couple of days prior. The second update from Linode came only after additional information had been made public by the hackers, and provided no information beyond what had already become public. Linode claimed that customers' credit card information was still secure, but the hackers claimed otherwise, and in the days and weeks following the event several people claiming to be Linode customers reported suspicious activity on cards that could reasonably be traced back to Linode (cards that were Linode-specific or used for few enough other services).

The way that Linode has handled both incidents has left me, and many others, with the impression that they simply will not disclose that they've been compromised unless forced to by someone else -- a customer or the attacker(s) -- and then they'll attempt to be very opaque and not-specific about the incident.

It's a shame, because aside from this, I really like Linode. I wouldn't even be interested in looking at other VPS providers if it weren't for this. But now I'm being negligent if I continue to host customer data & services on Linode. I don't know yet if anyone else handles this sort of thing better, but I do know how Linode handles it and it's not good.

This'll be my only comment on this subject. You (or others that are interested) really should just go over past threads discussing the incident.


I didn't force you to re-hash. Maybe I'm just not cynical enough to believe a thief that appears to be on an ego trip [which is realistically what the hackers in this instance are].

They made a mistake on the 12th and corrected it by the 16th [with some forum posts and someone claiming responsibility in between those two dates]. I'm not seeing the issue in regard to the previous question except 'hackers say otherwise'.

None of those threads date before the 12th which was kind of the point. I generally assume incompetence before malice while everyone else seems to be the reverse.


Except this was after Linode sent a "password reset" email to their customers on Friday 04/12 without explaining anything, saying everything was fine. Said blog post came on 04/16, after log files were released in which the hacker basically said Linode paid them to keep quiet about the "incident".


Can you provide an example?

Every time someone says that, I've noticed either: A) they missed the public notification, or B) their security expectations aren't in line with a VPS environment where everything is controlled through a WAN control mechanism. [e.g. $200k worth of bitcoins stolen... why on earth aren't you using a colo box with your own dedicated hardware that you can be certain is as secure as you can possibly make it? That is something you can do with $2k and at most $200/month.]


What the hell are you talking about?

a) Linode was extremely cavalier about releasing a public notification. It was all over Slashdot and in their support channel, with ZERO notification from them (till long after the fact), leaving the community to assume the worst, which was in line with what actually happened.


This is complete and utter nonsense.

I was a Linode customer during their first security incident (Bitcoins stolen). I found out about it through Reddit. It was communicated to customers weeks after through a post to their status page. At no stage were customers informed via email and nobody knows to this day what exactly happened and what exactly has been done about it.


I also dropped Linode after my credit card was stolen in the recent hack.


I'm curious who you have switched to. Mind sharing?


I moved locally. I don't run a business from Linode, I just need access to a Linux box that I can destroy without hurting anything. I have a test lab set up at home that I can SSH to, and when I moved off Linode I set up a box in my company's data center that I can use when I'm in the office. I haven't found anything that matched the quality of Linode, but I can't take the risks of a security breach, either.


I can't speak for freehunter, but I know a lot of us have moved to DigitalOcean.


Perhaps they should take a page from Twilio's book


What are your thoughts on Apple's current security issues/hack?


I'm not an Apple user so I haven't paid close attention to it. I'm of the opinion that if customers are being affected, giving a generic "we're investigating a security incident but no cause for alarm yet" message with more info to come is better than "we're down for scheduled maintenance". The attackers already know they've been discovered either way.


The message isn't for the attackers but for the customers. Tim Cook sneezes and there's 50 different blog posts about his snot; what do you think would happen if an Apple developer site went down with a message that said, "There was a security breach!" and no additional context? Fox News would be running stories about how Russian hackers stole your grandmother's iTunes gift cards.

They played it safe by waiting until they actually knew what happened before releasing any information.


I'm a Linode user and not impressed. They are just keeping up with the competition. RAM/CPU/storage prices are always falling, so their prices have to fall or their resources have to increase to compete. They won't cut their prices, as that would cut into revenue, so they increase resources.


I can understand their tactic. Competing on price isn't always the best strategy and it tends to attract poorer quality customers that are harder to maintain. In any event, why shouldn't they charge a price that helps them remain profitable and fiscally healthy, rather than engage in a race to the bottom in terms of price? A race to the bottom almost always results in poor quality service -- think airlines. Plus, competing on support at a "premium" price is generally a good idea; not everybody wants the cheapest.


As a fellow Linode user, I find this to be a pretty silly comment. This will save some customers a little bit of money and provide some storage breathing room. It's the latest in a series of steps to firm up their offerings, and I commend them for consistently doing so over time.

If you think Linode is a good value for the money, continue using them. If you don't, go somewhere else. I don't think anyone here cares whether you're impressed.


It works both ways, you know. Nobody here cares that you're not impressed.

The point is valid, though, that due to falling component prices it is expected that Linode will either increase resources or decrease prices.


It's actually pretty common for hosts to keep charging the same prices for a server you rent in perpetuity, until you order a new server and migrate. I'd say Linode has been above average in that regard.


Because at Linode you don't just pay for numbers on a pricing table. You pay for value.

If you've had experience with other hosts, you'll know what I mean.


Dear Linode: I fucking love you. I was a loyal Slicehost user until Rackspace bought and ruined them. I should have switched to Linode long before that. My dearest hope is that Rackspace doesn't buy them too.


Counterpoint: Linode is incompetent at security. $228,000 worth of bitcoins were stolen because Linode's admin interface was compromised by an attacker. The reason you probably haven't heard about it is because Linode never publicly acknowledged that it happened, as far as I know. Those are two serious black marks against the credibility of Linode. Use at your own risk.

http://arstechnica.com/business/2012/03/bitcoins-worth-22800...


While the security breach is a big problem, it seems like storing $228,000 worth of bitcoins on a server that can be attacked from the internet is a bad idea.


It was a bitcoin trading platform. Many bitcoins had to be accessible by a server.

It wasn't the platform's fault that they lost the bitcoins. Linode's admin interface was the attack vector. This is roughly equivalent to getting robbed because your landlord left your apartment's master key under a doormat.


Hosting bitcoins on any VPS is not a good idea. I'd be hesitant to store just a few, but trying to run an exchange on one is like trying to run a bank on a VPS.

I agree Linode's response was not ideal and they should learn from that and be far more transparent (see the recent Twilio billing issues for a good example of the way they should have responded), but hosting digital money which can be stolen on a shared VPS is not a good example of a failure solely down to Linode - those bitcoins wouldn't be safe on any VPS long term, and their presence sounds like it caused this targeted hack in the first place.


To continue that analogy, it's roughly equivalent to keeping $200k cash in a briefcase on the floor of your apartment instead of in a safety deposit box at the bank.


That analogy is completely invalid. It was a trading platform. How do you propose they trade bitcoins without those bitcoins being accessible by the server?


Bitcoin trading platforms do not need all their bitcoins accessible by remote servers. They should keep a huge portion of the coins in cold storage (not available from remote servers). Also, the storage and attribution of coins to accounts should be done by a server under physical possession/control, so that authentication cannot even be forged in the event of a frontend breach.

There are a lot of things you can do to protect your bitcoins. Having them all on Linode servers is the worst thing possible.

http://en.wikipedia.org/wiki/Air_gap_%28networking%29


Yeah, that's one of those applications that you really need to colo and secure yourself with outside security verification, or at the very least go with a host that has a reputation for ironclad security. Hosting that kind of thing on a low-cost VPS is ridiculous.


From what I remember, it was a 17 year old kid who built and ran the site, and it was his first time ever handling financial transactions (there was another bitcoin-related hacking around the same time, so I might be confusing them.) It was a recipe for disaster.


It's incredible how much people are willfully ignoring Linode's gross incompetence in this matter.


Their only gross incompetence was not informing customers in a timely manner, which I could forgive them for. Having a security breach is something that happens to, quite literally, every hosting provider.

I say all this as someone who dropped Linode for DO due to not being able to afford them anymore.


Yes, a VPS is definitely not an appropriate place to run a bitcoin trading application where the bitcoins are stored. Stored bitcoins are one of the most sought-after targets, since law enforcement generally will not investigate. For most web apps, though, it's an excellent choice.


Valid counterpoint. Your level of security measures should scale to the importance of the application. I've never been a big fan of web-based consoles for VPS (where you can login to the server even if you screw up your ssh certs for example). However, that ability saved my bacon a few times early on when I was learning how to configure my server on Slicehost. Now, I really wish I could disable it.

The one thing to be remembered is that as customers, we are only as loyal as the company is. When they start doing things that affect us in negative ways, we move to newer pastures.


http://status.linode.com/2012/03/manager-security-incident.h... 3/1/12 - 9:43pm, before the article was written.

They didn't acknowledge the amount but if you are expecting them to disclose their customer's information [even things like amounts of Bitcoins stolen] that is in violation of any reasonable privacy policy. :/


Out of curiosity, what's your preferred vps provider?


Every hosting service will have at least one security breach, especially if someone is doing something reckless like storing $230,000 worth of digital money on one of their servers. Leaving a provider because of a breach just means you're in an endless game of VPS musical chairs.

The way they handled the breach is another thing, though.


And this is why I love Linode. I have been paying $20/month for around 3 years. At first I had 256M of RAM and 12G of hard drive space... and now I have 1G of RAM and 48G of hard drive space. I dare you to look at other providers and see if they have offered such aggressive, yet free, upgrades.


Meanwhile, $20 a month at DigitalOcean gets you 2G of RAM and 40GB of SSD.

Not to diss Linode, I use them myself, but they're not the cheapest. It's quite evident to me that these free upgrades are a defensive tactic.

How much memory or disk space would $20 a month get from other providers 3 years ago?


I switched to DO a few months ago, ... and just recently switched back to Linode.

I'll pay the extra few bucks for reliability and decent uptime.

I had very frequent downtimes and the support was often unhelpful. I heard from others that it was just the datacenter that my VPS was on and other DCs weren't as bad, but I guess it gave me enough of a bad impression that I went back to Linode anyway.


I'm using it for non-production things, and have had a similar experience. I'm in the SFO data center, and I've lost connectivity to the machine at least 5 times in the past month.


I've had the same issues and it's preventing me from using it for anything serious. A lot of the other things with DO (provisioning, backup, snapshots) are great, but the network seems to be the biggest issue at the SF datacenter.


The other thing about DigitalOcean is they haven't rolled out IPv6 or private IPs yet. They did seem to promise a release date for the former several months ago. I asked them about this yesterday and after I pressed the issue the rep snapped at me and told me that I didn't understand how difficult it is!

Linode it is!


I suspect DigitalOcean have more virtual servers per physical machine than Linode do, which is probably why they can offer cheaper prices. This means you probably won't be able to use as much CPU power and I/O (more people contending for the same resources.)


I've had super spotty luck with DO, and the support hasn't been great. Sometimes it's worth paying a little more for reliability and good support.

Also, my Linodes seem to have much more consistent CPU performance. With DO it's been very hit or miss at that 1/2GB size. They may be cramming too many VMs on their hosts.


I have both Linode and DO instances but I have lots of reliability issues with my DO instance. I usually have 5-7 minutes per day where it is unreachable by my offsite Nagios host which also monitors my Linode instance. My Linode instance has had 3 minutes of unreachability in the past 30 days.


For $20/month, Digital Ocean provide 2GB memory, 2 cores, 40GB SSD disk and 3TB transfer.


And they don't have a poor track record with regards to handling security breaches.


... they are kind of too new to have a track record one way or the other, aren't they?

Same for the hardware. They are the new kids on the block. For now it's the newest and shiniest, but long term let's see how they keep pace. Linode's been around a while and done a good job of dragging all its customers along to newer hardware.


The people behind DigitalOcean have been around for a while with ServerStack, the parent company. The main engineers also used to work at another major cloud hosting company.


No track record > Systematically failing to handle multiple breaches properly.


http://www.wired.com/wiredenterprise/2013/04/digitalocean/?c...

I'm not seeing how this is a record superior to Linode. That said, I use DO due to its price point for hobby/testing projects and dropped Linode completely.


I've used prgmr.com for a while now and am pretty content with it. That being said, I am not running anything mission critical on it, so the occasional hiccups are a pretty easy price to pay for a super cheap VPS. I pay $8 a month for 256MiB RAM, 6GiB HD, and 40GiB transfer. As far as price points go on the larger plans, Linode currently has the advantage in transfer and disk space. I'll be interested to see if prgmr increases it to compete.


Plans are in the works to respond to Linode and Digital Ocean. Plans have been in the works for... some time now; but... yeah. My 'automation debt' is absolutely killing me here. (lack of capital isn't helping much, either, but right now, insufficient automation, and my inherent conservatism are probably the biggest problems.) I'm doing existing customers first, of course, and starting with customers that I promised upgrades to a long time ago, which is actually slowing the whole process down quite a lot.

Another interesting thing is that right now, I charge a $4/account base charge, then an incremental fee, currently $1 for every 64MiB RAM/1.5GB disk, which means that as you get a larger plan, you get more stuff for your money. As far as I can tell, none of the other players do that (and some competitors do the opposite, giving you more resources per dollar on the smaller plans than on the larger plans.) - so I am wondering if I should reduce or eliminate that fee.


It seems like transfer would be an 'easy' upgrade in that it doesn't require a lot of per-server manual effort, just a change to the billing system (threshold at which accounts are billed for overages).

But I dunno if that makes sense, given purchasing decisions and your cost structure. Are people's purchase decisions keying off transfer quota at all, or are RAM/HDD are the real price-point issues? And if they are, would doubling transfer, say, make a difference, or would you have to match Linode (currently offering 10x the transfer at equivalent price points)? So maybe this is a dumb idea.


oh, you are right on all above. right now I'm way overbuying transit... not quite 10x, but 5x, so I really should increase quotas.

For a long time, it was that my network was so shitty that doubling traffic would make the thing fall over, but that problem is fixed... new network is way more robust. so yeah, I should increase transfer expectations.

I mean, in reality, I follow the 'use all you want, limit you when you get crazy' model, rather than hitting you with a big bill when you go over. (I should make that a selectable option) so it's really similar to what digital ocean is doing with their 'unlimited' - obviously, if you go crazy on an 'unlimited' system, you get limited. My limits are 'setting expectations' - but right now they are setting expectations that would have been reasonable in 2007, which is silly, as my current network is pretty robust, and really can handle more.


Everyone interested in web dev should own a VPS.

After a year working at a Rails shop that deploys to Heroku, I decided it's time to learn some sysadmin basics. Here are some things I've done the past couple months that might give other noobs some ideas.

- Make aliases for most Linux commands. Abstract Linux into a simple interface of aliases that you can browse in your .zshrc/.bashrc/fish.config. Linux becomes much more enjoyable when you can type `untar <file>` instead of remembering the magical incantation `sudo tar -xzvf <file>`. Make aliases for everything. nginx-reload. nginx-restart. alias z="vim ~/.zshrc && source ~/.zshrc". I even abstract apt-get commands into ag-install, ag-update, ag-remote, etc.

- Use dropbox cli, a shared folder with dotfiles, and symlinks to keep dotfiles synced between your server(s) and laptop(s).

- Install tmux on it so that you can `$ tmux attach` to a persistent shell so that things still run when you close your remote connection and they're still there when you log back in.

- Run weechat (terminal irc client) in tmux so that you never have to log out of irc again.

- Learn how to rsync up and down files from your server.

- rsync your Sinatra app (or whatever) onto your server and run it on port 5000, immediately see it live.

- Install nginx and have your VPS serve multiple domain names, each domain name pointing to a Sinatra app running on ports 5000, 5001, and 5002 (see the nginx sketch after this list).

- Run each website in its own tmux tab. At the beginning it's easier to reason about a foreground process than some daemonized process.

- Install postgres or whatever database you use to your VPS. Get your db-driven app to connect to it. Connect to your remote db using your favorite db gui.

- Take notes of all the commands you use to set up each component so that you can refer to them later, see how you might have messed up.

- Learn enough bash/zsh/fish to automate all of those commands. Polish it enough so that you can curl your script from a Github gist on a fresh VPS and it will set up your entire environment.

- Install `htop` as a nice overview of your server's stats and the processes that are running. Run it in another tmux tab. My tmux right now has a tab for weechat (irc), tabs for my Clojure apps, and an htop tab. On Friday nights I pretend I'm NASA mission control.

- Learn how to use the `ssh` command to reverse tunnel a connection from your VPS to an app you're running on your local machine for a fast development feedback loop.

- Learn how to use your VPS to encrypt your connection when you're working at coffee shops (this and the previous bullet are sketched in the ssh example after this list).
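For the nginx bullet, here's a minimal sketch of the kind of per-domain server blocks involved. The domain names are made up, and it assumes the apps are already listening locally; a real config would live somewhere like /etc/nginx/sites-available/:

    # two hypothetical domains proxying to two local apps
    server {
        listen 80;
        server_name blog.example.com;
        location / { proxy_pass http://127.0.0.1:5000; }
    }
    server {
        listen 80;
        server_name app.example.com;
        location / { proxy_pass http://127.0.0.1:5001; }
    }

Symlink the file into sites-enabled and reload nginx (or use your nginx-reload alias).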
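And for the two ssh bullets, the relevant flags, with placeholder hosts and ports:

    # reverse tunnel: make the app on your laptop's port 3000
    # reachable as port 8080 on the VPS
    ssh -R 8080:localhost:3000 user@my-vps.example.com

    # coffee-shop encryption: a SOCKS proxy on localhost:1080
    # that routes your browser's traffic through the VPS
    ssh -D 1080 -N user@my-vps.example.com

(Note: by default sshd binds the -R port on the VPS's loopback only; the GatewayPorts setting in sshd_config controls whether it's reachable from outside.)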

I'm still a noob but I leveled up considerably from this quest arc. I highly recommend.

I still don't have a raging clue about most of Linux, even stuff like what all the folders are supposed to be for: /var/opt /etc/www /usr/local /usr/shared/local ... etc. and why files go the places they go, but it's all an iterative process.

Email me (profile) if you need help with any of the bullet points. I am trying to shape it into a more helpful blog post to share with other noobs.


>- Make aliases for most Linux commands. Abstract Linux into a simple interface of aliases that you can browse in your .zshrc/.bashrc/fish.config. Linux becomes much more enjoyable when you can type `untar <file>` instead of remembering the magical incantation `sudo tar -xzvf <file>`. Make aliases for everything. nginx-reload. nginx-restart. alias z="vim ~/.zshrc && source ~/.zshrc". I even abstract apt-get commands into ag-install, ag-update, ag-remote, etc.

One problem with this is you become a cripple when you have to use an outside machine or need to write a script for wide distribution.


I find that it's the opposite for me, since my needs are so simple and I'm not logging hours into the Linux cli every day. I simply don't use the VPS cli enough to retain everything, and that just makes it frustrating. It's more like Billy's First VPS Side-project than a roadmap for industrial-grade sysadmin skills, which I'll probably never have.

Aliases become a self-documenting interface. Using my aliases, getting exposed to them every time I add a new one, tweaking them, and simply browsing them from time to time is what helps me remember commands.

It saves me brain cycles sort of like buttons on a GUI.

For me, the alternative to writing an `untar` alias isn't that I remember `tar -xzvf` but that I have to google it every time (or scroll through dozens of possible flags in the man page). I'm not untar'ing things every day.

Now multiply that example across all sorts of commands. I never remember the right flags to `stat`. or `column -nts: /etc/passwd` just to quickly see users.

I won't remember that I like the `--show-upgraded` flag sent to `apt-get upgrade` every time. I don't remember that my nginx conf and websites are in `/etc/nginx/` while their static files are in `/var/www/`. When life gets busy and it's a month since my last VPS login, I'll remember my own `disk-space` abstraction instead of `df -h`.

Finally, I'm not a sysadmin. I don't find myself on random machines nor am I writing scripts for wide distribution. When I spin up a new VPS, I curl a gist that even installs fishshell and then bootstraps the rest of my niceties with fish scripts. If I'm lazy, I'll just wget my .bashrc aliases. If I'm even lazier, I'll just curl it like it's a manpage and browse my abstractions. Not much different than using manpages directly.

But actually over time it turns out that I end up remembering what many of these aliases represent.


The concern you are responding to here was my initial reaction, but I think you've changed my mind.

I like this a lot. Not only do you gain experience using commands by setting up your aliases, but you now have a reference for the future and are learning the commands by repetition, exposing yourself to the config during tweaks.

Add in the use of apropos to find appropriate commands (without googling) and learn to use the search feature while viewing a man page (/[STRING] same as vim!) and you're golden.


>For me, the alternative to writing an `untar` alias isn't that I remember `tar -xzvf` but that I have to google it every time (or scroll through dozens of possible flags in the man page). I'm not untar'ing things every day. //

I set up a permanent history, and for important actions I add a comment, then just use a script ("$ hs keyword") to find what I'm after.
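Something like this, roughly - the `hs` helper here is just my guess at what such a script looks like:

    # in ~/.bashrc: effectively never truncate history
    HISTSIZE=1000000
    HISTFILESIZE=1000000
    shopt -s histappend

    # hypothetical `hs` helper: search history for a keyword
    hs() { grep -i -- "$1" ~/.bash_history | tail -n 20; }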

Mind I used to use slackware so "tar xzvf package.tgz" became second nature.


If you have trouble remembering things, one fun trick is to write your own man pages. Here is one way to do it:

http://www.illuminatedcomputing.com/manpj/

After a couple years you'll have quite a collection!


So store your dotfiles on github and then look up the contents of this file even if you don't have access to your home machine.


Agreed. Aliases are good if used as a learning aid, but eventually you need to wean yourself off them if you're serious about sysadmin/devops.


That's why I advocate auto-completion - it serves both as a speedup in day-to-day work and as a help in learning.

I sometimes have nightmares about spending all day in cmd.exe...


As do I, so I spend it in cygwin instead.


Try Clink and ConEmu. They make CMD.exe bearable.


> "Linux becomes much more enjoyable when you can type `untar <file>` instead of remembering the magical incantation `sudo tar -xzvf <file>`."

I love aliases as well, but I remember the particular command for tar extraction (tar xzf) via the mnemonic "eXtract Zee Files" said with an overly stereotypical German accent in my mind. Silly, I know, but I have never forgotten it since.


Modern tar versions detect compression automatically when decompressing, so you don't even need the z. Heck, you don't even need the dashes!

eg. `tar xf foo.tar.gz`


I have heard people say this before, and it always makes me wonder. Why not remember it as "extract zipped file" instead of "extract zee file"? That is after all a more accurate description of the flags you are giving tar, z indicating that the tarball had been gzipped. You can also specify j for bz2, and J for xz (sorry I don't know any cute sayings for that).

Of even more interest to you is probably that with modern versions of GNU tar, you can just say tar xf filename, and then tar will automatically determine whether it should use gzip, bzip2, or xz based on the filetype. So you can save yourself an extra letter there.


Silly/ridiculous references can be easier to remember with less repetition of use (to me at least). The more literal meaning might be more straight forward, but the more absurd one imprints on the mind easier at times. One of the silliest (but effective) examples I've seen is for identifying the songs/calls of birds[1]. Bird calls tend to be kind of abstract to classify, but if you can match them up to a silly mnemonic, they're pretty easy to remember (for me at least).

I think everyone encounters at least a few in school.

Rules of integration by parts[2]

Order of Operations[3]

OSI Model[4]

Biology Taxonomy

Resistor Color Codes

Trig functions

[1] http://www.stanford.edu/~kendric/birds/birdsong.html

[2] http://en.wikipedia.org/wiki/Integration_by_parts#LIATE_rule

[3] http://en.wikipedia.org/wiki/Order_of_operations#Mnemonics

[4] http://www.techexams.net/forums/ccna-ccent/34490-your-mnemon...


I understand what you are saying but it just seems a little silly to me when the "silly mnemonic" is exactly the same as the "actual meaning" except for the word associated with one letter -- xzf meaning extract zee file instead of extract zipped file.

To relate with some of the examples you provided, it would be like remembering the order of operations PEMDAS by remembering Parentheses, Exponents, Multiplication, Division, Addition, Sally.


I said 'eXtract Zee Files' in the overly stereotypical German accent before even finishing your sentence saying that you did it as well (And I said it out loud, not in my mind).

But more to the point - that's actually a great mnemonic. Silly me, just memorizing things.


Compliments on all the achievements!

Instead of renting a VPS, you could also purchase a Raspberry Pi for about $35, and get the same learning experience. In addition to that, the visibility of the cute blinking LEDs (in a transparent case) gives an opportunity for actual "RELAXEN UND WATSCHEN DER BLINKENLICHTEN" and the joy of actually owning "EIN KOMPUTERMASCHINE FÜR DER GEFINGERPOKEN UND MITTENGRABEN".

Edit: the all caps is only because they are quoted verbatim from a famous blackletter sign in university computer rooms in mangled mock German.


> Make aliases for most Linux commands

I agree in principle, but what you should really be writing are shell functions. Aliases are defined once and cast in stone - if there is a variable in an alias body, it will be evaluated once, on alias creation, and never again. Functions have no such problem. They also let you process arguments, while aliases delegate this to the invoked commands. Aliases are for very simple things; anything more complicated and you should write a function.
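A quick sketch of the difference, with invented names:

    # an alias is plain text substitution; your arguments can only
    # land at the end of the expanded command
    alias gl='git log --oneline'

    # a function can place its arguments anywhere and add logic
    untar_to() {
        # hypothetical helper: untar_to <archive> <destination>
        mkdir -p "$2" && tar -xzf "$1" -C "$2"
    }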

Also, you should set up decent autocompletion. Zsh needs a bit of configuring and fish has it by default - the need for most aliases is eliminated by the autocompletion and history functionality of a shell. There is the standard CTRL+R combo which lets you search your history, and there is an "up-line-or-search" action in zsh which searches the history for lines starting with what you have on your current line. Set your HISTSIZE to some huge number and never worry about forgetting how to do something or having to repeat (as in re-type) a command.
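In zsh, that setup looks roughly like this (the options and widget are standard zsh; the numbers are arbitrary):

    # in ~/.zshrc
    HISTSIZE=1000000
    SAVEHIST=1000000
    HISTFILE=~/.zsh_history
    setopt INC_APPEND_HISTORY SHARE_HISTORY
    # up-arrow searches history for lines starting with what you've typed
    bindkey '^[[A' up-line-or-search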

Use fasd (https://github.com/clvv/fasd). A simple little script that records all the paths (files and directories) you visit and sorts them by "frecency" (both frequency and recency). It lets you just `z pr 5` to get to `/usr/home/you/project/something/test5`. Give it a day and you'll never want to go back.


Yeah, that's true. My bullet point wasn't a very good one-liner.

Aliases are nice because they're the most entry-level customization yet they solve the #1 thing I spent my time doing at the very beginning: hunting down magical incantations on google. When I found one that helped me immediately, I'd collect it in my .bashrc like a pokeball. I had a lot of teeth to cut before I started looking for solutions that were solved with functions.

However, autocompletion based off history is a volatile convenience to me, not something that I feel obviates the need for aliases. Depending on ctrl-r to find `tar -xzvf` isn't much different than autocompleting your own `untar` alias. Except that when `tar -xzvf` isn't in your history anymore as a noob (fresh VPS/fresh shell/different computer), then you're back on Google/manpages to find the incantation so that it's in your history once again.

Besides, here's a simple example of the sort of abstractions I'm talking about:

    transmission-edit -> sudo vim /etc/transmission-daemon/settings.json
    transmission-start -> sudo start transmission-daemon
    transmission-stop -> sudo stop transmission-daemon
    ...
I type "trans" and then hit tab. All my Transmission-Daemon related stuff is right there ready for autocompletion. I don't have to think about files or locations or paths. I don't have to remember that transmission-daemon is set up as a service. And I especially don't have to remember that its config is a .json file named 'settings' in `/etc/transmission-daemon/`.

This is me just talking aloud.

fasd looks great. Thanks.


While aliases may be great for saving typing, using them can steal from you the opportunity to learn how unix commands like 'tar' work. Frankly, tar should never seem like an "incantation", because most of the flags are there as shortcuts. From your example, -f and -z are shortcuts for certain pipes and redirects. 'tar -xzvf file.gz' is equivalent to 'zcat file.gz | tar -xv'; and of course the -v is unnecessary, so you have 'zcat file.gz | tar x', which is a lot clearer than 'untar', which won't work for tar.bz2 files, or even plain old .tar files (but just switch zcat to bzcat and you're golden).


I feel like everyone who has responded to me so far plugged that coax cable from The Matrix into the back of their brain stem one day and just downloaded Everything Unix. I don't really relate to a single comment.

I call it an incantation because you're on your couch eating a frozen meal at 7:30pm on a Tuesday night with your laptop in front of you. The TV is on, but it's muted so it sits in the limbo between a social coping device and a distraction for the few hours you have between the commute from work and bedtime.

Yeah, one day you want to learn more about Unix and tar and how electrons flow on a circuit board and maybe what gravity is -- like, what it really is --, but tonight you just want to get nginx set up for the first time on your linode so it can serve your hand-rolled blog and move the needle just enough so you feel like you're making some progress in life.

You've managed to use `wget` to download nginx-1.2.9.tar.gz onto your VPS. And all you want to do next is get the files out of tar.gz. You'd love to right click it and select "Uncompress" from a context menu, but it's in a terminal window.

For now you just google "ubuntu uncompress tar", click the first result, and copy the incantation into .bashrc aliased to `untar` for future reference before pressing forward into the abyss.


> if there is a variable in an alias body, it will be evaluated once, on alias creation, and never again.

Really? That's not true in bash:

    $ alias x='echo foo$A'
    $ x
    foo
    $ A=zball
    $ x
    foozball


> Make aliases for most Linux commands.

I have to disagree here: knowing the actual commands and their arguments is far more valuable than the small amount of typing saved. Read the man page every time you use the command if you have to, but at least you'll know why `tar zxf` failed on that .tar.bz2 file.

As other commenters have mentioned, relying on aliases leaves you feeling crippled when working on systems that don't have them. If you're repeating commands often enough that you'd like an alias like nginx-reload for them, consider using your shell's history feature instead.

I also make a point of changing the prompt color on each of my systems, because it's much more obvious than reading the prompt for the $host entry.


I agree with you. I just want to point out that these days `tar xf` works for both .gz and .bz2 files (it recognizes the compression automatically). AFAIK it works with both GNU and BSD tar.


$ aunpack archive #how_hard


Harder, since you need to install atool first (it's not installed by default on many systems). Also, "tar xf" has literally the same number of characters, so not hard at all.


I fail to see which of those things require a VPS. You could easily do them from having a local Linux install, or even a Linux VM.

It sounds more to me like you are saying every developer should gain experience working in a Linux environment, a sentiment I would agree with, but there are easier (and cheaper) ways of using Linux than renting a VPS.


The utility comes from the part that it's a remote box that's always on the internet, so it becomes a fun sandbox in that regard.

My laptop on the other hand is only connected to the internet transiently.


> - Run each website in its own tmux tab. At the beginning it's easier to reason about a foreground process than some daemonized process.

This is kind of iffy. I would not suggest going this route.


Wow, this is a great approach you've chosen, and it will teach you a ton. If you want to start learning more about how your machine is configured, the Nemeth book is very good:

http://www.amazon.com/Linux-System-Administration-Handbook-E...

I've only read the 3rd edition, but it looks like this newer edition also includes Linux.


I would use git for .dotfiles synchronization rather than dropbox cli
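e.g. the bare-repo trick - the paths and remote here are just examples:

    git init --bare ~/.dotfiles
    alias dot='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
    dot config status.showUntrackedFiles no
    dot add ~/.zshrc && dot commit -m 'track zshrc'
    dot push -u origin master   # assumes you've added a remote named origin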


Yeah, that's better advice.

The dropbox thing is a fun experiment, especially when the dotfiles folder is a git repo that sits in dropbox. It saves me from the git commit/push/pull loop when my dotfiles are in flux.

I can add a fish shell function on my laptop and when I switch back to my VPS shell, the function is already loaded and available.

The git repo reminds me what changes I've made today and lets me formalize the changes I've made into commits.

Fun for the whole family.


All that stuff gets painful once you are used to it. Frankly since I found Route66 I'm never going back.

Edit: Err, Cloud66


That's a good point that made me think for a second.

In the end, even if I go back to Heroku, I'll have gone from Heroku -> VPS -> Heroku. But it's not full-circle or back-pedaling like it first appears. Rather, I've traded Heroku as a blissful crutch for Heroku as an automator of things I now know how to do.

The pain just means you're ready to reach the next abstraction because you've beat the level and paid your dues.


I agree completely.


Hi Dan (hope you don't mind),

The reason you have so many people replying saying that your shortcuts are a crutch is because you and I belong to a different group of people than they do.

Allow me to expand.

Unix people are conformists. By "unix people" I mean people that defend Unix constantly. Quick evidence that they are conformists: "where's Plan 9?" More evidence: unix utilities have the same crappy interface they had when they were created.

Think about this: Unix is based on the idea that programs should talk to each other via text (strings). Do you know of anything less structured than a string? A string is a bunch of bytes next to each other. And I don't mean that in a "well, look at how the computer represents it" kind of way (where everything is a bunch of bytes next to each other); I mean that a string is literally the representation of bytes next to each other.

When you call a utility on the command line, say tar, and pass it zxcfvg or whatever that crap is, then pass it a filename, there's no structure here. The program 'tar' is receiving a big string containing what you just wrote: "tar zxcfvg filename.tar.gz". In fact, even more moronically, you receive that data already cut up in pieces as an array, so: ["tar", "zxcfvg", "filename.tar.gz"]. Which means you actually were passed information that is worse than a string since you've now lost information (the spaces). Wow, thanks, Unix, for splitting a string on its spaces. Real handy.

Unix believes in this very flawed idea that strings are a good data structure. Remember Tcl? It was great. Then we came up with slightly better abstractions (Perl) and people ran the fuck away from Tcl as fast as they could. Tcl and Unix are based on the same flawed idea: strings are an OK structure. Tcl and Unix were both wrong. Strings suck.

You know what's a good interface? Messages. Look at Smalltalk.

This is what a good OS would look like (good, not great - plus I'll be exaggerating the messages to make my point clearer):

filename.tar.gz expandYourself or filename.tar.gz becomeAFreakingFolder or filename.mysteryextension expandYourselfWhateverYouAre,IDon'tCareIfIt'sBzipOrGzipOrBlahZip,JustDoIt,YouShouldBeResponsibleForYourself.

and instead of always having to google if ln -s takes the pointer or pointee first, it'd be this:

existing-file.ext createSoftSymlinkAt: new-name-at-new-location-maybe.ext

Unix people are conformists. They don't want change. They want to type tar xgvzg. Let them. Don't waste your breath.

Even Plan 9, the "good unix", is shortsighted. The gimmick is that "everything is a file". Shame, missed it again. Files are dumb. Our friend filename.tar.gz still doesn't know how to manage itself. Files are the wrong abstraction - not enough. How can they never have thought of objects? Because they're conformists. Not too much change, please! Just make more things be files. That'll win Unix people's hearts. And even that failed. Do you see now? Even such a tiny change failed to win over the unix people. You will not convince them that abstracting the behavior they fawn over is a good idea. They don't understand the concept. They're still writing scripts. You know what a script is? A recipe. You know who programs in recipes? Procedural programmers. When you let procedural programmers make an OS you end up with a system where scripts talk to each other via strings. Uhm, gross.

Let it be known that I use a Unix every day (Mac OS X) and I think it's the best OS we have (I mean Unix, not Mac OS X). Nothing gets close. Still, it's FAR away from good. Smalltalk is good. If Smalltalk were an OS then I'd consider Unix "absolute garbage of the nastiest kind".

Keep abstracting away. If you want, later on when you have nothing to do, go ahead and print the man pages for tar. Read them start to finish. Study them deeply. Even marry them and go on a honeymoon trip together. Then you'll look good to a bunch of internet strangers. Or come with me and appreciate the view from high up our tower of abstractions. Unix people look like ants from here.


It's frustratingly difficult to find a high-storage VPS at an affordable price. This announcement by Linode is a start. But I think the market has a ways to go yet.


Check out these guys https://backupsy.com - I haven't used them myself, but if all you need is storage, the prices look great. There are coupons floating around for 40-50% off as well.


You can back your way into one at Dreamhost. Their shared hosting plans are unlimited disk/unlimited bandwidth for web-related data. Then add a VPS to that. You still get the same disk/bandwidth allocation. If you get really big, there's probably some throttling, but it works well for me. I have over 500GB of data stored there.


Dreamhost's policy is that your storage space is unlimited only for files related to the sites you're hosting with them. Not to mention, availability of Dreamhost's shared plans is, well, a nightmare. Don't get me wrong, it's still a bargain, but you get what you pay for.


I was in the same situation a few months back and I remember reading about mounting an Amazon S3 Bucket on a VPS. Did you consider that?

I never actually went through with the idea, but it's definitely on my things-to-do-when-bored list!
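For reference, the usual route is s3fs (a FUSE filesystem); roughly, with the bucket name, mount point, and keys as placeholders:

    # after installing s3fs/s3fs-fuse:
    echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs
    mkdir -p /mnt/mybucket
    s3fs mybucket /mnt/mybucket -o passwd_file=~/.passwd-s3fs

Just be aware S3 isn't a real filesystem, so performance and semantics are nothing like local disk.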


http://buyvm.net/ - Their storage VMs are pretty good, I only use it for backups tho. I'm not sure if that helps you at all.


I did a comparison of price per GB of cheap VPS hosts recently and BuyVM is the best.

I'm with http://budgetvm.com/ myself because for the same price (as a BuyVM box), you get a ton more RAM and CPU. The problem here is they're running OpenVZ, which, among other things, doesn't support certain iptables (firewall) functions.

Also you sacrifice disk space. But for the application I'm running (Riak cluster), memory is important too.


Why would anyone use them over DigitalOcean? What's the advantage? Am I missing out on something?


I'm a customer of both DigitalOcean as well as Linode. I use DigitalOcean for random experiments and side projects and Linode for production sites.

I haven't directly observed any practical differences in ways that affect me in terms of VPS infrastructure itself, but I've found Linode's support to be significantly better and I'm willing to pay a premium for that on "sites that matter". I don't want to imply that DigitalOcean's support is bad -- it's not, but I feel they're average when compared with Linode responses.

On the other hand, my impressions may be biased since I rarely have to contact support in the first place. I'd be interested in whether or not others have had the same experience in terms of customer support.


I wrote this a while back: http://www.welton.it/articles/webhosting_market_lemons

Part of the problem is that it's very easy to compare on price, but difficult to compare on service. I think my conclusions may not be quite as valid today, because news gets around so quickly, but I think the concept is a worthwhile one to think about.

Anecdotally, I've been very happy with Linode's service: a few months ago, I had some kind of problem and things weren't booting. Panic! I sent in a support ticket, and went to their IRC channel right away. I started describing the problem, and one of the guys in the channel says "oh, I think I just answered that", and indeed, the support ticket had already been answered with real, helpful information.

My interaction with them has usually been quite positive.


I share pretty much the same experience. Although I have to say, DO support seemed to have improved in recent months and feels more solid. It's hard to put a finger on it, it's hardly scientific, but just a feeling. I'm at a point where some (smaller) production VPS are now hosted with DO, whilst the majority is still at Linode, just to "test the waters".

One more important thing however, network stability in particular seems way better with Linode currently. I keep getting random cron errors resulting from name resolution issues on at least two data centres of DO. To counter my own last point, Linode recently had quite a lengthy network outage at Dallas, so no provider is totally immune.

Bottom line: try to build an infrastructure where you can as quickly as possible migrate/recover onto a new droplet/linode/instance, ideally across different providers.


A couple of concrete things come to mind -

1. DO has no private network; all traffic between your nodes occurs via your public interface. In fairness, it looks like they're working on this (http://digitalocean.uservoice.com/forums/136585-digital-ocea...).

2. Linode's NodeBalancers are super handy if you need load balancing, and don't want to run your own. There are certainly some things I wish they could do (alerting comes to mind), but they're a heck of a lot better than running my own.


I started to use DO but they use a funky custom kernel setup where you can't use your own kernel - so you're at the whim of DO for security updates and such. However it's hard to beat them on price/value.


Experience comes to mind. Linode has been in the VPS space for considerably longer than DigitalOcean. Just based on recent HN threads, it appears that many folks are simply kicking the tires with DigitalOcean but may be hesitant to deploy production sites there. Given the low barrier of entry, I can imagine they see considerably more churn and volatility in their customer base compared to Linode.


I've run some CPU benchmarks, and the $80 droplet is roughly equivalent to the $40 linode. But if you get lucky and no one else on the same host is using much CPU, the linode will be over 50% faster. Of course, most customers aren't running CPU-bound tasks.


To be honest, I would never trust my credit card number to Linode again.


Why? You lose nothing if it's compromised. That's the bank's worry.


It's not even the bank's worry. It's the merchant's worry. They're on the hook for everything spent with a stolen credit card. The bank, the cardholder, and the ones who exposed it are off the hook for everything.


For starters, I had to pay for the replacement card...


What kind of shitty bank do you have?


If OVH ever brings that $27 i5 dedicated to the US (CA) datacenter, VPS providers are in trouble.


They have $39 ones :)

https://www.ovh.com/us/dedicated-servers/kimsufi.xml

I wish they'd let Australians buy them... They actually forbid you from buying outside your country.


Only the K2 $50 one listed there is equal to the 16G, not the K1

The 16G is effectively half the price.

And I believe people buy OVH servers in other countries all the time for backups; maybe you need an account with them first in your own country (but they are not in AUS, I guess?)


This is excellent news. I've had a VPS on Linode for a couple of years now and the only thing that was really lacking was the disk space.


Here's a question: where can I get a VPS with a large amount (multiple TB) of not-necessarily-fast storage? It seems like someone out there should have a SAN-backed VPS, yet I've had trouble locating one.


https://backupsy.com/

$80/mo for 2TB. Deals floating around as well as "40% Off For Life Coupon: 40PERCENT" on the home page.

Just signed up with them, and they are having problems with credit card billing, so the only feedback so far is that they seem responsive to tickets...


At that point, you should be using S3 or something similar.


I've recently started hosting with 6sync and have been very happy with their services so far. It's a lot lighter on documentation and has fewer users than Linode, but the functionality and performance are there.


I think DigitalOcean is the next shiny hosting thing :), even without this security thing everyone keeps reminding me of.


The question I keep asking everybody is: "Where do I host my Postgres if I want to have the option to grow disk without paying for higher VM tier?". Specifically assuming I'm managing Postgres myself, not going through Heroku etc. So far only AWS seems to give you that option.
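On AWS that usually means attaching a new EBS volume and repointing Postgres at it; a rough sketch, where the device name and paths are assumptions and downtime is glossed over:

    sudo mkfs.ext4 /dev/xvdf                 # the freshly attached EBS volume
    sudo mkdir -p /pgdata
    sudo mount /dev/xvdf /pgdata
    sudo service postgresql stop
    sudo rsync -a /var/lib/postgresql/ /pgdata/
    # point data_directory in postgresql.conf at /pgdata, then:
    sudo service postgresql start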


I believe Linode used to have the option to attach extra storage but you'd still need to reformat, merge, etc.

Personally, for high disk-space growth needs, I just use a dedicated server where I can get them to throw additional disks in with little to no downtime.


They still have that option, but you're limited to an additional 12GB (at least on the 2GB plan). Costs $1/mo per GB that you add on.


If you want tons of disk space, just get a cheap dedicated server.

OVH has a 2TB one for $39/month.

Hetzner has auctioned dedicated servers with 160GB space for 21 euro ($27.8) and 640GB for 23 euro ($30.4).


Still no SSD option? As the owner of a top forum, I think IOPS are far more important than gigabytes.


A lot of your woes would be solved by in-memory caching.


If it's a LAMP app, then not really.

If you have tables with TEXT fields, MySQL up to and including 5.5 will perform certain joins with on-disk tables no matter what you tell it to do and will ignore indexes you create to prevent that on-disk stupidity. This behaviour is independent of engine.

Being I/O choked on Linode is why I moved to DigitalOcean.


It sounds like you should have moved from MySQL to PostgreSQL instead of switching hosting providers.


Please call me when Wordpress and thousands of plugins and themes have been expunged of all MySQLisms.

Otherwise, heartily agreed.


Ahh, Wordpress. You have my deepest sympathies.

If you don't mind my asking, is there some key feature of Wordpress or its plugins that holds you to it? I'm always interested to hear why people stick with it.


I personally detest it. But my bloggers are comfortable with it.


Is it because of competition, or because prices are going down?


Every time I upgrade I lose email functionality. Anyone having the same problem?


It's good to see more space, but it would be nice to see IOPS options.


What got hacked this time?


This thread reminds me of webhostingtalk.com.


Sweeeeeeeeet



