Because next time, they might be. You're going to have to do better than incremental upgrades to convince me to come back.
And the only reason they admitted it (or at least it looked that way) was because the info had already been leaked through their IRC channel.
Compromised Linode, thousands of BitCoins stolen (bitcoinmedia.com)
316 points by tillda 510 days ago
Linode hacked, CCs and passwords leaked (slashdot.org)
732 points by DiabloD3 101 days ago
The story around the Linode hack (straylig.ht)
349 points by foofoobar 79 days ago
Brief summary: according to the hackers involved, they struck a deal with Linode whereby, if Linode made no moves to disclose the attack, the hackers would shred all of the data they had grabbed. Instead, the FBI forced Linode's hand in the matter. Even if that's not true -- and, in this incident, the hackers came out as more believable than Linode IMO -- there still was no mention of the incident on the Linode blog until after the hackers had claimed credit on Linode's IRC channel and the news of that had started making the rounds. This is identical to the previous incident, where Linode said nothing until after a customer started complaining loudly on their user forums.
Then, Linode wasn't forthcoming with details, despite the hack having occurred a couple of days prior. The second update from Linode came only after additional information had been made public by the hackers, and provided no information beyond what had already become public. Linode claimed that customers' credit card information was still secure, but the hackers claimed otherwise, and in the days and weeks following the event several people claiming to be Linode customers reported suspicious activity on cards that could reasonably be traced back to Linode (cards that were Linode-specific or used for few enough other services).
The way that Linode has handled both incidents has left me, and many others, with the impression that they simply will not disclose that they've been compromised unless forced to by someone else -- a customer or the attacker(s) -- and then they'll attempt to be very opaque and not-specific about the incident.
It's a shame, because aside from this, I really like Linode. I wouldn't even be interested in looking at other VPS providers if it weren't for this. But now I'm being negligent if I continue to host customer data & services on Linode. I don't know yet if anyone else handles this sort of thing better, but I do know how Linode handles it and it's not good.
This'll be my only comment on this subject. You (or others that are interested) really should just go over past threads discussing the incident.
They made a mistake on the 12th and corrected it by the 16th (with some forum posts, and someone claiming responsibility, in between those two dates). I'm not seeing the issue in regards to the previous question except 'Hackers say otherwise'.
None of those threads date from before the 12th, which was kind of the point. I generally assume incompetence before malice, while everyone else seems to do the reverse.
Every time someone says that, I've noticed either:
A) They missed the public notification.
B) Their security expectations aren't in line with a VPS environment where everything is controlled through a WAN control mechanism. [e.g. $200k worth of bitcoins stolen... why on earth aren't you using a colo box with your own dedicated hardware that you can be certain is as secure as you can possibly make it? That is something you can do with $2k and at most $200/month.]
a) Linode was extremely cavalier about releasing a public notification. The story was all over Slashdot and their support channel, with ZERO notification from them (until long after the fact), leaving the community to assume the worst, which was in line with what actually happened.
I was a Linode customer during their first security incident (Bitcoins stolen). I found out about it through Reddit. It was communicated to customers weeks after through a post to their status page. At no stage were customers informed via email and nobody knows to this day what exactly happened and what exactly has been done about it.
They played it safe by waiting until they actually knew what happened before releasing any information.
If you think Linode is a good value for the money, continue using them. If you don't, go somewhere else. I don't think anyone here cares whether you're impressed.
The point is valid, though, that given falling component prices, it's reasonable to expect Linode to either increase resources or decrease prices.
If you've had experience with other hosts, you'll know what I mean.
It wasn't the platform's fault that they lost the bitcoins. Linode's admin interface was the attack vector. This is roughly equivalent to getting robbed because your landlord left your apartment's master key under a doormat.
I agree Linode's response was not ideal, and they should learn from that and be far more transparent (see the recent Twilio billing issues for a good example of the way they should have responded). But hosting digital money which can be stolen on a shared VPS is not a good example of a failure solely down to Linode - those bitcoins wouldn't be safe on any VPS long term, and their presence sounds like it caused this targeted hack in the first place.
There are a lot of things you can do to protect your bitcoins. Having them all on Linode servers is the worst thing possible.
I say all this as someone who dropped Linode for DO due to not being able to afford them anymore.
The one thing to remember is that as customers, we are only as loyal as the company is. When they start doing things that affect us in negative ways, we move on to greener pastures.
The way they handled the breach is another thing, though.
Not to diss Linode, I use them myself, but they're not the cheapest. It's quite evident to me that these free upgrades are a defensive tactic.
How much memory or disk space would $20 a month get from other providers 3 years ago?
I'll pay the extra few bucks for reliability and decent uptime.
I had very frequent downtimes, and the support was often unhelpful. I heard from others that it was just the datacenter my VPS was in and that other DCs weren't as bad, but I guess it gave me enough of a bad impression that I went back to Linode anyway.
Linode it is!
Also, my Linodes seem to have much more consistent CPU performance. With DO it's been very hit or miss at the 512MB size. They may be cramming too many VMs onto their hosts.
Same for the hardware. They are the new kids on the block. For now it's the newest and shiniest, but long term, let's see how they keep pace. Linode's been around a while and has done a good job of dragging all its customers along to newer hardware.
I'm not seeing how this is a record superior to Linode. That said, I use DO due to its price point for hobby/testing projects and dropped Linode completely.
Another interesting thing is that right now, I charge a $4/account base fee, then an incremental fee, currently $1 for every 64MiB of RAM / 1.5GB of disk, which means that as you get a larger plan, you get more stuff for your money. As far as I can tell, none of the other players do that (and some competitors do the opposite, giving you more resources per dollar on the smaller plans than on the larger plans) - so I am wondering if I should reduce or eliminate that fee.
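(To make the math concrete with a hypothetical size: a 256MiB / 6GB plan would be the $4 base plus four $1 increments, i.e. $4 + 4 x $1 = $8/month.)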
But I dunno if that makes sense, given purchasing decisions and your cost structure. Are people's purchase decisions keying off transfer quota at all, or are RAM/HDD the real price-point issues? And if they are, would doubling transfer, say, make a difference, or would you have to match Linode (currently offering 10x the transfer at equivalent price points)? So maybe this is a dumb idea.
For a long time, it was that my network was so shitty that doubling traffic would make the thing fall over, but that problem is fixed... the new network is way more robust. So yeah, I should increase transfer expectations.
I mean, in reality, I follow the 'use all you want, we limit you when you get crazy' model, rather than hitting you with a big bill when you go over (I should make that a selectable option), so it's really similar to what DigitalOcean is doing with their 'unlimited' - obviously, if you go crazy on an 'unlimited' system, you get limited. My limits are about setting expectations - but right now they set expectations that would have been reasonable in 2007, which is silly, as my current network is pretty robust and really can handle more.
After a year working at a Rails shop that deploys to Heroku, I decided it's time to learn some sysadmin basics. Here are some things I've done the past couple months that might give other noobs some ideas.
- Make aliases for most Linux commands. Abstract Linux into a simple interface of aliases that you can browse in your .zshrc/.bashrc/config.fish. Linux becomes much more enjoyable when you can type `untar <file>` instead of remembering the magical incantation `sudo tar -xzvf <file>`. Make aliases for everything. nginx-reload. nginx-restart. alias z="vim ~/.zshrc && source ~/.zshrc". I even abstract apt-get commands into ag-install, ag-update, ag-remove, etc.
- Use the Dropbox CLI, a shared folder with dotfiles, and symlinks to keep dotfiles synced between your server(s) and laptop(s) (see the example after this list).
- Install tmux on your VPS so that you can `tmux attach` to a persistent shell; things keep running when you close your remote connection and are still there when you log back in.
- Run weechat (terminal irc client) in tmux so that you never have to log out of irc again.
- Learn how to rsync files up and down from your server (example after this list).
- rsync your Sinatra app (or whatever) onto your server and run it on port 5000, immediately see it live.
- Install nginx and have your VPS serve multiple domain names, each pointing to a Sinatra app running on ports 5000, 5001, and 5002 (a config sketch follows this list).
- Run each website in its own tmux tab. At the beginning it's easier to reason about a foreground process than some daemonized process.
- Install postgres or whatever database you use to your VPS. Get your db-driven app to connect to it. Connect to your remote db using your favorite db gui.
- Take notes of all the commands you use to set up each component so that you can refer to them later, see how you might have messed up.
- Learn enough bash/zsh/fish to automate all of those commands. Polish it enough so that you can curl your script from a Github gist on a fresh VPS and it will set up your entire environment.
- Install `htop` as a nice overview of your server's stats and the processes that are running. Run it in another tmux tab. My tmux right now has a tab for weechat (irc), tabs for my Clojure apps, and an htop tab. On Friday nights I pretend I'm NASA mission control.
- Learn how to use the `ssh` command to reverse tunnel a connection from your VPS to an app you're running on your local machine for a fast development feedback loop (see the ssh examples after this list).
- Learn how to use your VPS to encrypt your connection when you're working at coffee shops.
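For the dotfile syncing, the symlink part is just this (paths are examples from my setup):

    # on each machine, once the Dropbox folder has synced
    # (-f replaces any existing file, so back up first)
    ln -sf ~/Dropbox/dotfiles/.bashrc ~/.bashrc
    ln -sf ~/Dropbox/dotfiles/.vimrc ~/.vimrc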
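For the rsync item, here's roughly what I mean (user, host, and paths are placeholders):

    # push the local app up to the server (-a preserves permissions/times,
    # -v is verbose, -z compresses over the wire)
    rsync -avz ./myapp/ deploy@my-vps:/home/deploy/myapp/

    # pull logs back down to the laptop
    rsync -avz deploy@my-vps:/var/log/nginx/ ./nginx-logs/

Watch the trailing slashes: `./myapp/` copies the directory's contents, while `./myapp` would copy the directory itself.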
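The nginx setup is just one `server` block per domain, each proxying to its app's port. A minimal sketch (domain names and ports are made up):

    # /etc/nginx/sites-available/blog.example.com
    server {
        listen 80;
        server_name blog.example.com;

        location / {
            # forward requests to the Sinatra app listening on port 5000
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

Repeat with ports 5001 and 5002 for the other domains, symlink each file into `sites-enabled/`, then nginx-reload.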
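And the two ssh tricks look something like this (hostnames are placeholders):

    # reverse tunnel: make the app on your laptop's port 3000 reachable
    # as port 8080 on the VPS (sshd may need GatewayPorts enabled to
    # expose it beyond the VPS's localhost)
    ssh -R 8080:localhost:3000 deploy@my-vps

    # coffee-shop encryption: open a SOCKS proxy on local port 1080,
    # then point your browser's proxy settings at localhost:1080
    ssh -D 1080 -N deploy@my-vps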
I'm still a noob, but I leveled up considerably from this quest arc. I highly recommend it.
I still don't have a raging clue about most of Linux, even stuff like what all the folders are supposed to be for: /var/opt /etc/www /usr/local /usr/shared/local ... etc. and why files go the places they go, but it's all an iterative process.
Email me (profile) if you need help with any of the bullet points. I am trying to shape it into a more helpful blog post to share with other noobs.
One problem with this is you become a cripple when you have to use an outside machine or need to write a script for wide distribution.
Aliases become a self-documenting interface. Using my aliases, getting exposed to them every time I add a new one, tweaking them, and simply browsing them from time to time is what helps me remember commands.
It saves me brain cycles sort of like buttons on a GUI.
For me, the alternative to writing an `untar` alias isn't that I remember `tar -xzvf` but that I have to google it every time (or scroll through dozens of possible flags in the man page). I'm not untar'ing things every day.
Now multiply that example across all sorts of commands. I never remember the right flags to `stat`, or `column -nts: /etc/passwd` just to quickly see users.
I won't remember that I like the `--show-upgraded` flag sent to `apt-get upgrade` every time. I don't remember that my nginx conf and websites are in `/etc/nginx/` while their static files are in `/var/www/`. When life gets busy and it's a month since my last VPS login, I'll remember my own `disk-space` abstraction instead of `df -h`.
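To make it concrete, a few of these straight from my .bashrc (the bodies are whatever incantations work on my particular box; yours may differ):

    alias untar='tar -xzvf'                      # extract a .tar.gz
    alias disk-space='df -h'                     # human-readable free space
    alias users='column -nts: /etc/passwd'       # quick look at users
    alias nginx-reload='sudo nginx -s reload'    # reload config without downtime
    alias ag-install='sudo apt-get install'      # the apt-get family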
Finally, I'm not a sysadmin. I don't find myself on random machines nor am I writing scripts for wide distribution. When I spin up a new VPS, I curl a gist that even installs fishshell and then bootstraps the rest of my niceties with fish scripts. If I'm lazy, I'll just wget my .bashrc aliases. If I'm even lazier, I'll just curl it like it's a manpage and browse my abstractions. Not much different than using manpages directly.
But actually over time it turns out that I end up remembering what many of these aliases represent.
I like this a lot. Not only do you gain experience using commands by setting up your aliases, but you now have a reference for the future and are learning the commands by repetition; exposing yourself to the config during tweaks.
Add in the use of apropos to find appropriate commands (without googling) and learn to use the search feature while viewing a man page (/[STRING] same as vim!) and you're golden.
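For anyone following along, that looks like:

    $ apropos partition    # list man pages whose descriptions mention "partition"
    $ man tar              # then type /extract to search, n for next match, N for previous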
I set up a permanent history, and for important actions I add a comment to the command, then just use a script ("$ hs keyword") to find what I'm after.
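Roughly (the real script has a bit more to it, but this is the idea):

    # ~/.bashrc -- never lose history
    HISTSIZE=1000000
    HISTFILESIZE=1000000
    shopt -s histappend             # append instead of overwriting on exit
    PROMPT_COMMAND='history -a'     # flush after every command

    # hs: grep the history file for a keyword (or one of my comments)
    hs() {
        grep -i -- "$1" "$HISTFILE" | tail -n 30
    }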
Mind you, I used to use Slackware, so "tar xzvf package.tgz" became second nature.
After a couple years you'll have quite a collection!
I sometimes have nightmares about spending all day in cmd.exe...
I love aliases as well, but I remember the particular command for tar extraction (tar xzf) via the mnemonic "eXtract Zee Files" said with an overly stereotypical German accent in my mind. Silly, I know, but I have never forgotten it since.
e.g. `tar xf foo.tar.gz`
Of even more interest to you is probably that with modern versions of GNU tar, you can just say tar xf filename, and tar will automatically determine whether it should use gzip, bzip2, or xz based on the file type. So you can save yourself an extra letter there.
I think everyone encounters at least a few in school.
Rules of integration by parts
Order of Operations
Resistor Color Codes
To relate with some of the examples you provided, it would be like remembering the order of operations PEMDAS by remembering
Parentheses, Exponents, Multiplication, Division, Addition, Sally.
But more to the point - that's actually a great mnemonic. Silly me, just memorizing things.
Instead of renting a VPS, you could also purchase a Raspberry Pi for about $35, and get the same learning experience. In addition to that, the visibility of the cute blinking LEDs (in a transparent case) gives an opportunity for actual "RELAXEN UND WATSCHEN DER BLINKENLICHTEN" and the joy of actually owning "EIN KOMPUTERMASCHINE FÜR DER GEFINGERPOKEN UND MITTENGRABEN".
Edit: the all caps is only because they are quoted verbatim from a famous blackletter sign in university computer rooms in mangled mock German.
I agree in principle, but what you should really be writing are shell functions. Aliases are defined once and cast in stone - if there is a variable in an alias body, it will be evaluated once, on alias creation, and never again. Functions have no such problem. They also let you process arguments however you like, while aliases just delegate everything to the invoked command. Aliases are for very simple things; anything more complicated and you should write a function.
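A quick illustration of the argument handling, with a made-up `gl` shortcut (compare the two forms):

    # alias: arguments can only be tacked onto the end
    alias gl='git log --oneline'
    gl -n 5    # works, but only because the flags happen to go last

    # function: arguments can go anywhere, and you can add defaults and logic
    gl() {
        git log --oneline -n "${1:-10}"    # show $1 commits, or 10 by default
    }
    gl 5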
Also, you should set up decent autocompletion. Zsh needs a bit of configuring and fish has it by default - the need for most aliases is eliminated by the autocompletion and history functionality of a shell. There is the standard CTRL+R combo which lets you search your history, and there is an "up-line-or-search" widget in zsh which searches the history for lines starting with what you have on your current line. Set your HISTSIZE to some huge number and never worry about forgetting how to do something or having to repeat (as in re-type) a command.
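In zsh that's only a few lines (a minimal sketch; frameworks like oh-my-zsh set most of this up for you):

    # ~/.zshrc
    HISTSIZE=1000000
    SAVEHIST=1000000
    setopt APPEND_HISTORY SHARE_HISTORY    # share history across sessions
    autoload -Uz compinit && compinit      # enable completion
    bindkey '^[[A' up-line-or-search       # Up arrow searches by prefix
                                           # (the escape code can vary by terminal)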
Use fasd (https://github.com/clvv/fasd). A simple little script that records all paths, files and directories, you visit and sorts them by "frecency" (both frequency and recency). It lets you just `z pr 5` to get to `/usr/home/you/project/something/test5`. Give it a day and you'll never want to go back.
Aliases are nice because they're the most entry-level customization yet they solve the #1 thing I spent my time doing at the very beginning: hunting down magical incantations on google. When I found one that helped me immediately, I'd collect it in my .bashrc like a pokeball. I had a lot of teeth to cut before I started looking for solutions that were solved with functions.
However, autocompletion based off history is a volatile convenience to me, not something that I feel obviates the need for aliases. Depending on ctrl-r to find `tar -xzvf` isn't much different than autocompleting your own `untar` alias. Except that when `tar -xzvf` isn't in your history anymore as a noob (fresh VPS/fresh shell/different computer), then you're back on Google/manpages to find the incantation so that it's in your history once again.
Besides, here's a simple example of the sort of abstractions I'm talking about:
transmission-edit -> sudo vim /etc/transmission-daemon/settings.json
transmission-start -> sudo start transmission-daemon
transmission-stop -> sudo stop transmission-daemon
This is me just thinking aloud.
fasd looks great. Thanks.
I call it an incantation because you're on your couch eating a frozen meal at 7:30pm on a Tuesday night with your laptop in front of you. The TV is on, but it's muted so it sits in the limbo between a social coping device and a distraction for the few hours you have between the commute from work and bedtime.
Yeah, one day you want to learn more about Unix and tar and how electrons flow on a circuit board and maybe what gravity is -- like, what it really is -- but tonight you just want to get nginx set up for the first time on your linode so it can serve your hand-rolled blog and move the needle just enough so you feel like you're making some progress in life.
You've managed to use `wget` to download nginx-1.2.9.tar.gz onto your VPS. And all you want to do next is get the files out of tar.gz. You'd love to right click it and select "Uncompress" from a context menu, but it's in a terminal window.
For now you just google "ubuntu uncompress tar", click the first result, and copy the incantation into .bashrc aliased to `untar` for future reference before pressing forward into the abyss.
Really? That's not true in bash:

    $ alias x='echo foo$A'
    $ A=bar
    $ x
    foobar

$A is expanded when the alias is used, not when it's defined -- the single quotes keep it from being evaluated at creation time.
I have to disagree here: knowing the actual commands and their arguments is far more valuable than the small amount of typing saved. Read the man page every time you use the command if you have to, but at least you'll know why `tar zxf` failed on that .tar.bz2 file.
As other commenters have mentioned, relying on aliases leaves you feeling crippled when working on systems that don't have them. If you're repeating commands often enough that you'd like an alias like nginx-reload for them, consider using your shell's history feature instead.
I also make a point of changing the prompt color on each of my systems, because it's much more obvious than reading the prompt for the $host entry.
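For instance, in bash (pick a different color per host):

    # ~/.bashrc on the production box -- red user@host as a warning
    PS1='\[\e[31m\]\u@\h\[\e[0m\]:\w\$ '

    # on the dev box, green instead (32)
    PS1='\[\e[32m\]\u@\h\[\e[0m\]:\w\$ '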
It sounds more to me like you are saying every developer should gain experience working in a Linux environment, a sentiment I would agree with, but there are easier (and cheaper) ways of using Linux than renting a VPS.
My laptop on the other hand is only connected to the internet transiently.
This is kind of iffy. I would not suggest going this route.
I've only read the 3rd edition, but it looks like this newer edition also includes Linux.
The Dropbox thing is a fun experiment, especially when the dotfiles folder is a git repo that sits in Dropbox. It saves me from the git commit/push/pull loop when my dotfiles are in flux.
I can add a fish shell function on my laptop and when I switch back to my VPS shell, the function is already loaded and available.
The git repo reminds me what changes I've made today and lets me formalize the changes I've made into commits.
Fun for the whole family.
Edit: Err, Cloud66
In the end, even if I go back to Heroku, I'll have gone from Heroku -> VPS -> Heroku. But it's not full-circle or back-pedaling like it first appears. Rather, I've traded Heroku as a blissful crutch for Heroku as an automator of things I now know how to do.
The pain just means you're ready to reach the next abstraction because you've beat the level and paid your dues.
The reason you have so many people replying saying that your shortcuts are a crutch is because you and I belong to a different group of people than they do.
Allow me to expand.
Unix people are conformists. By "unix people" I mean people that defend Unix constantly.
Quick evidence that they are conformists: "where's Plan 9?"
More evidence: unix utilities have the same crappy interface they had when they were created.
Think about this: Unix is based on the idea that programs should talk to each other via text (strings).
Do you know of anything less structured than a string? A string is a bunch of bytes next to each other. And I don't mean that in a "well, look at how the computer represents it" kind of way (where everything is a bunch of bytes next to each other); I mean that a string is literally the representation of bytes next to each other.
When you call a utility on the command line, say tar, and pass it zxcfvg or whatever that crap is, then pass it a filename, there's no structure here. The program 'tar' is receiving a big string containing what you just wrote: "tar zxcfvg filename.tar.gz". In fact, even more moronically, you receive that data already cut up in pieces as an array, so: ["tar", "zxcfvg", "filename.tar.gz"]. Which means you actually were passed information that is worse than a string since you've now lost information (the spaces). Wow, thanks, Unix, for splitting a string on its spaces. Real handy.
Unix believes in this very flawed idea that strings are a good data structure.
Remember Tcl? It was great. Then we came up with slightly better abstractions (Perl) and people ran the fuck away from Tcl as fast as they could. Tcl and Unix are based on the same flawed idea: strings are an OK structure. Tcl and Unix were both wrong. Strings suck.
You know what's a good interface? Messages. Look at Smalltalk.
This is what a good OS would look like (good, not great - plus I'll be exaggerating the messages to make my point clearer): instead of always having to google whether ln -s takes the pointer or the pointee first, it'd be this:
existing-file.ext createSoftSymlinkAt: new-name-at-new-location-maybe.ext
Unix people are conformists. They don't want change. They want to type tar xgvzg. Let them. Don't waste your breath.
Even Plan 9, the "good unix", is shortsighted. The gimmick is that "everything is a file". Shame, missed it again. Files are dumb. Our friend filename.tar.gz still doesn't know how to manage itself. Files are the wrong abstraction - not enough. How can they never have thought of objects? Because they're conformists. Not too much change, please! Just make more things be files. That'll win Unix people's hearts. And even that failed. Do you see now? Even such a tiny change failed to win over the unix people. You will not convince them that abstracting the behavior they fawn over is a good idea. They don't understand the concept. They're still writing scripts. You know what a script is? A recipe. You know who programs in recipes? Procedural programmers. When you let procedural programmers make an OS you end up with a system where scripts talk to each other via strings. Uhm, gross.
Let it be known that I use a Unix everyday (Mac OS X) and I think it's the best OS we have (I mean Unix, not Mac OS X). Nothing gets close. Still, it's FAR away from good. Smalltalk is good. If Smalltalk were an OS then I'd consider Unix "absolute garbage of the nastiest kind".
Keep abstracting away. If you want, later on when you have nothing to do, go ahead and print the man pages for tar. Read them start to finish. Study them deeply. Even marry them and go on a honeymoon trip together. Then you'll look good to a bunch of internet strangers. Or come with me and appreciate the view from high up our tower of abstractions. Unix people look like ants from here.
I never actually went through with the idea, but it's definitely on my things-to-do-when-bored list!
I'm with http://budgetvm.com/ myself because for the same price (as a BuyVM box), you get a ton more RAM and CPU. The problem here is they're running OpenVZ, which, among other things, doesn't support certain iptables (firewall) functions.
Also you sacrifice disk space. But for the application I'm running (Riak cluster), memory is important too.
I haven't directly observed any practical differences in ways that affect me in terms of VPS infrastructure itself, but I've found Linode's support to be significantly better and I'm willing to pay a premium for that on "sites that matter". I don't want to imply that DigitalOcean's support is bad -- it's not, but I feel they're average when compared with Linode responses.
On the other hand, my impressions may be biased since I rarely have to contact support in the first place. I'd be interested in whether or not others have had the same experience in terms of customer support.
Part of the problem is that it's very easy to compare on price, but difficult to compare on service. I think my conclusions may not be quite as valid today, because news gets around so quickly, but I think the concept is a worthwhile one to think about.
Anecdotally, I've been very happy with Linode's service: a few months ago, I had some kind of problem and things weren't booting. Panic! I sent in a support ticket, and went to their IRC channel right away. I started describing the problem, and one of the guys in the channel says "oh, I think I just answered that", and indeed, the support ticket had already been answered with real, helpful information.
My interaction with them has usually been quite positive.
One more important thing, however: network stability in particular seems way better with Linode currently. I keep getting random cron errors resulting from name-resolution issues in at least two DO data centres. To counter my own last point, Linode recently had quite a lengthy network outage in Dallas, so no provider is totally immune.
Bottom line: try to build an infrastructure where you can migrate/recover onto a new droplet/linode/instance as quickly as possible, ideally across different providers.
1. DO has no private network; all traffic between your nodes occurs via your public interface. In fairness, it looks like they're working on this (http://digitalocean.uservoice.com/forums/136585-digital-ocea...).
2. Linode's NodeBalancers are super handy if you need load balancing, and don't want to run your own. There are certainly some things I wish they could do (alerting comes to mind), but they're a heck of a lot better than running my own.
I wish they'd let Australians buy them... They actually forbid you from buying outside your country.
The 16G is effectively half the price.
And I believe people buy OVH servers in other countries all the time for backups; maybe you need an account with them in your own country first (but they are not in AUS, I guess?).
$80/mo for 2TB. Deals floating around as well as "40% Off For Life Coupon: 40PERCENT" on the home page.
Just signed up with them, and they are having problems with credit card billing, so only feedback so far is they seem responsive to tickets...
Personally, for high disk-space-growth needs, I just use a dedicated server where I can get them to throw in additional disks with little to no downtime.
OVH has a 2TB one for $39/month.
Hetzner has auctioned dedicated servers with 160GB space for 21 euro ($27.8) and 640GB for 23 euro ($30.4).
If you have tables with TEXT fields, MySQL up to and including 5.5 will perform certain joins using on-disk temporary tables no matter what you tell it to do, and will ignore indexes you create to prevent that on-disk stupidity. This behaviour is independent of engine.
Being I/O choked on Linode is why I moved to DigitalOcean.
Otherwise, heartily agreed.
If you don't mind my asking, is there some key feature of Wordpress or its plugins that holds you to it? I'm always interested to hear why people stick with it.