Linux distros mitigate the cold boot entropy problem by saving some state from the RNG on shutdown (on Debian, it's saved in /var/lib/urandom/random-seed) and using it to seed the RNG on the next boot. On physical servers this obviously isn't available on the first boot, and on cloud servers, the provider often bakes the same random-seed file into all their images, so everyone gets the same seed on first boot (fortunately this doesn't harm security any more than having no random-seed file at all, but it doesn't help either). What cloud providers should really do is generate (from a good source of randomness) a distinct random-seed file for every server that's created, but I haven't seen any providers do this.
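The mechanism itself is simple; a minimal sketch of what the init scripts do, assuming the Debian path above (the real scripts are more careful about permissions and sizing):

    # at shutdown: stash one pool's worth of bytes from the RNG
    bits=$(cat /proc/sys/kernel/random/poolsize)
    dd if=/dev/urandom of=/var/lib/urandom/random-seed bs=$((bits / 8)) count=1

    # at boot: writing the file back stirs it into the pool (this mixes
    # the bytes in but does not credit any entropy to the pool)
    cat /var/lib/urandom/random-seed > /dev/urandom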
Modern Intel x86 processors now have a hardware RNG built in (the RDRAND instruction), so even if the host boots without any devices, you have a source of entropy.
(Incidentally, KVM's virtio-rng backend defaults to /dev/random, which could create a DoS vector: a guest reading its RNG can drain the host's blocking entropy pool.)
Exposing hardware RNGs to guest VMs would be great; entropy in guests is pretty lousy in general, not just at first boot (though obviously at first boot it's at its very worst).
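For what it's worth, KVM/QEMU can already do this with virtio-rng; a hedged sketch (standard QEMU options, but check that your version supports them):

    # host side: back the guest's RNG with /dev/urandom instead of the
    # /dev/random default, sidestepping the exhaustion issue above
    qemu-system-x86_64 \
        -object rng-random,filename=/dev/urandom,id=rng0 \
        -device virtio-rng-pci,rng=rng0 \
        ...   # rest of the guest definition

    # guest side: the device shows up as /dev/hwrng, and rngd (from
    # rng-tools) feeds it into the guest kernel's entropy pool
    rngd -r /dev/hwrng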
If you want more than that in a cheap Heath Robinson manner (Americans: think Rube Goldberg if you are unaware of Heath; their work came from very similar inspirations), then many SoC solutions have a built-in RNG of sufficient quality for general cryptographic use, and some expose it easily. Enable the relevant kernel module and a Raspberry Pi, for instance, can provide up to ~550,000 bits/sec on demand; if one-off cost is a factor, that beats paying a few hundred dollars or more for a USB device delivering the same sort of rate.
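A hedged sketch of the Raspberry Pi route (module and package names as shipped in Raspbian at the time of writing; other SoCs will differ):

    # expose the SoC's RNG as /dev/hwrng
    sudo modprobe bcm2708-rng
    echo bcm2708-rng | sudo tee -a /etc/modules   # and load it at boot

    # rngd (from rng-tools) feeds /dev/hwrng into the kernel pool
    sudo apt-get install rng-tools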
Based on the HAVEGE research: http://www.irisa.fr/caps/projects/hipsor/
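For anyone who wants to try it: the usual userspace implementation is the haveged daemon. A minimal sketch, assuming Debian-style packaging:

    sudo apt-get install haveged
    cat /proc/sys/kernel/random/entropy_avail   # should sit much higher now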
I wouldn't say "surprised"; it's more that I wonder why, and whether there's maybe a good reason for it - that seems more interesting than the notion of surprise to me, anyway :)
Especially considering pretty much everything I've heard about entropy-generation over the years mentions that getting entropy just after boot is a difficult problem.
I'm assuming there must be some truth to that, so where you see surprise, I see curiosity about why that probably wouldn't work (yes, I'm assuming it won't work - sorry - but I really think they'd otherwise be using it :) ). Does anyone know?
I don't post comments to demonstrate my cleverness (I hope). I post to tell about an interesting experience, make a careful argument, answer a question about which I'm knowledgeable, or solicit feedback. An expression of surprise for me is a solicitation for information. I was hoping someone with more domain knowledge would have some insight that I could integrate into my model. I realize that some people do just post comments to show their cleverness, and there's no way you could have known my intent without knowing me, so that feedback is well taken. I'll try to be clearer in the future about my intent.
I absolutely agree that what I said probably wouldn't work - no need to apologize! P(simple solution not used by experts | simple solution obvious to a non-expert) is low.
Now I doubt we'll get an expert with domain knowledge to jump in after five days, though :)
We've actually talked a lot about trying to write a xen driver to share a hardware entropy device, but it hasn't gone anywhere. (I mean, xen has 'vtpm' - a virtualized Trusted Platform Module. And nobody uses that, so why not a vrandom?)
But I agree this sounds like a good way to handle user authentication.
Essentially, the trust bar for entropy is set so high that no one who understood the product well enough to want it was willing to trust it enough to actually use it.
(It's not just for SSH keys; random values are required for e.g. SSL connection setup, too)
Ah, also: by "HVM operators" I didn't mean AWS, Linode, Digital Ocean etc; I meant the admins of virtual machines running on their platforms, plus anyone running a private cloud or even someone with just a bunch of Xen/KVM boxes in a DC.
Many people, especially beginners, make the mistake of leaving the same SSH host keys in a template, or in a snapshot of a virtual machine that they later use as a template.
There are a few files that you really, really need to wipe out from a wannabe image template:
- /etc/ssh/*key* (for the reasons explained in the parent article)
- /var/lib/random-seed (the seed used to initialise the random number generator; this is its location on CentOS)
- /etc/udev/rules.d/70-persistent-net.rules (so that the VM's new NIC - with a new MAC - can use the same "eth0" name)
People who want to do this more exhaustively can have a look at libguestfs and its program virt-sysprep, which does all of the above and more!
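A hedged sketch of both routes (the image filename is just a placeholder):

    # by hand, on the template before imaging:
    rm -f /etc/ssh/*key*
    rm -f /var/lib/random-seed   # /var/lib/urandom/random-seed on Debian
    rm -f /etc/udev/rules.d/70-persistent-net.rules

    # or with libguestfs, against the image itself:
    virt-sysprep -a template.img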
Apparently they ignored it :)
What's happening on the left side? Is that you or the rep?
edit: actually I didn't realize there was a skype window up when I took the screenshot, thanks for warning me...
Your ticket described two problems. The tech probably didn't understand the significance of the first one, and so just discarded that information; then she answered your second question. When you've got 100 tickets to sort through in your 8-hour day, you simply have to make some compromises on the thoroughness of your responses.
To get her attention, it would have been better to explain a little about what the consequences are, and request that she have a developer follow up. Make it clear that it's a major security failure and could lead to compromised VMs.
Then open a second ticket for your other issue.
It's not a huge deal to me, but if Linode did the same thing, you all would be foaming at the mouth. Just thought I would point this out.
In case anybody is wondering, I'm referring to Volkswagen.
Personally I'm an advocate of full, anonymous, and public disclosure.
Imagine if we applied the same logic to phones and other devices -- I wouldn't be surprised if you personally would be offended at the idea that you have little recourse over your phone being hacked remotely, and that you can't do a damn thing about it because the handset manufacturer locked it down. Thankfully phones are subsidized, ubiquitous, and cheap, so you can take your phone anywhere and get it fixed or replaced.
This is the future folks: locked down devices that you have no control over.
Caveat: we're all plugging our phones into these insecure systems too. Wrap your brain around that for a second to see where I'm going with this.
The immediate workaround may not be possible. In that case, they're just screwed. A company is not entitled to be able to save themselves from the consequences of their past fuckups in all situations. Sometimes, a mistake costs a lot of money or even kills the company. Perhaps this is one.
I find it unlikely that it's impossible to disable the keyless entry system on the cars in question. Surely there is some fuse or wire that can be pulled to shut it off. But it ultimately doesn't matter. Finding a workaround quickly is what they need to do, and if they can't do it, that's not his problem.
Even in the absolute worst case that the vulnerability is somehow built into the very fabric of the car, you can still secure it by removing all valuables from the interior and then clamping a wheel with a boot. Remove the boot once you figure out a fix. Inconvenient to the owner to be sure, but not impossible to deal with.
Harm to the end user is not my priority. Harm to society is, and it's clear to me that the long-term chilling effects on academic research far outweigh any temporary harm from VW issuing a recall or even going bankrupt.
The alternative is to say that entity B should suffer from a restriction on their free speech simply because entity A, due to their own negligence, finds it excessively costly.
There are so many different ways this could be handled other than "threaten to throw the researcher in jail if he doesn't shut up". But they are all more inconvenient and costly to VW. One can understand why, then, VW would go for the "threaten" option, if we think of VW as a sort of non-moral profit-optimizing organism. But I certainly can't understand why anyone would defend it, let alone say that we would do the same thing.
> Harm to the end user is not my priority. Harm to society is
Wholesale removal of personal transport (even 'while we work something out') is "harm to society".
> the long-term chilling effects on academic research
Are you not overstating the significance of a paper? Does this paper hold the solution to free energy? The impending food crisis? Sure, it's not ideal, but let's not blow it out of proportion.
I repeat for the third time: it is highly unlikely that there is no temporary workaround. Clip a wire, pop a fuse, remove a module, or whatever, one of these will get the job done for the moment.
Finally, even if these cars must be disabled in the interim, it's hardly "wholesale", since it's just one brand of many. Alternatives exist.
Security research is important. Does this paper hold the solution to free energy? No, but the precedent set will discourage further research in this area, which could result in leaving the power grid vulnerable to black hats.
You say I'm overstating the significance of this paper. I say you're vastly overstating the significance of this paper, in terms of what would happen to VW, to VW owners, and to society in general, if the information got out.
The chilling effect these kinds of actions have on security research is fairly well established. There are real-world examples of security researchers deciding not to work on a particular project because they fear prosecution. That's a loss to society.
On the other hand, there are no real-world examples of chaos resulting from disclosures of automobile security vulnerabilities, even though car security is, in general, quite lax.
So kindly please, stop with the hyperbole and hysteria.
Regarding 'wholesale', in context the term just means 'non-selective' and didn't mean every brand on the road. This being said, the arrogance of "Alternatives exist" has got to be pointed out: what alternatives? If you immobilise all the VWs, how will those commuters now proceed? Rent another car? Buy another car? Some might be able to catch public transport, but hardly all.
This links back to what I said about ideal vs real world - you think that it's tenable to just take one brand of vehicles off the road, which is clearly nonsense. Even if there were no temporary fix, the real-world response to fixing the issue would be to leave the cars available to the owners. The idea that you'd even contemplate booting as a considered option is just farcical.
You appear to be interested in an adversarial discussion in which you score as many points as possible, rather than a collaboration in which we enjoy ourselves and learn. I'm not interested in that, so I'll leave you to it.
Leaving the responsibility for fixing these security flaws in the hands of the automobile manufacturers is dangerous and foolish; they have neither the means nor the interest to fix them.
This is one of the most obvious cases where FLOSS shines -- everybody and anybody can fix their broken software because they know what's running on their machines and the machines are open and accessible to those that need it most: end-users.
e.g. When I golf, I refuse to take mulligans because it keeps me in a state of mind of, "this is my only chance". Whenever I break this rule, the rest of the day my golfing is worse.
Also, the researcher wanted to disclose this information without VW having fixed it. DigitalOcean got the opportunity to fix their system. Do you know what VW's next step is?
Cowboy images like this are exactly the reason trademarks exist. Commercial providers who don't get certification are in fact violating Ubuntu's trademark by telling you that you are getting Ubuntu, when in fact you are getting a modified image which is possibly compromised (such as in this case).
* SSH Host Key
* SSH Authorized Keys
* SSH PermitRootLogin
* Disabling password auth in favour of keys
* Security updates from the distro
* SELinux (maybe?)
b) A software firewall via e.g. iptables is generally the first thing I turn on after restarting sshd with the new settings (a minimal sketch of both follows below).
c) (Optional) Consider using an architecture where you have N boxes and SSH only listens on a local interface on N-1 boxes, with the Nth box running nothing but your VPN. (This is also a good architecture choice for admin consoles, folks. www.example.com resolves to a public IP, admin.example.com resolves to a private IP, so even if they're technically speaking on the same box/boxes you won't lose the admin console if someone unwisely uses the same password for a WordPress blog somewhere.)
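For (a) and (b), a minimal sketch, assuming Debian/Ubuntu paths and a box where only SSH should be reachable:

    # sshd_config hardening from the checklist above
    sudo sed -i -e 's/^#\?PermitRootLogin .*/PermitRootLogin no/' \
                -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' \
                /etc/ssh/sshd_config
    sudo service ssh restart

    # baseline firewall: rules first, default-deny policy last, so you
    # don't cut off your own session
    sudo iptables -A INPUT -i lo -j ACCEPT
    sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    sudo iptables -P INPUT DROP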
It can be configured to automatically add (and later remove) iptables rules blocking IP addresses based on login failures (found by running regexes over log files).
I've found rules blocking IP addresses with multiple failed ssh or WordPress login attempts for an hour to be very effective. I still get a bit of "brute force poop" in the logfiles, but much less than before. I've seen a suggestion for an additional fail2ban "recursive rule" - dropping any IP address with repeated fail2ban lockouts for much longer: trigger multiple one-hour lockouts for ssh auth failures and you might get dropped for a week or more. I haven't felt the need to implement that one yet.
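For the curious, a hedged sketch of that setup in jail.local (jail names per recent fail2ban packaging; the "recursive rule" ships upstream as the recidive jail, which watches fail2ban's own log):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 600
    # the one-hour lockout described above
    bantime  = 3600

    # re-ban repeat offenders for a week
    [recidive]
    enabled = true
    bantime = 604800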
Any service that has bound to a public-facing port on a machine I run has done so because I wanted it to; if I had a firewall I'd need to add an exception for it. Any service that's only meant to be accessed from the same machine has bound on loopback. Services that are accessed from another "internal" machine are properly authenticated, which means the network doesn't have to be trusted; spoofing packets won't help an attacker at all. Frankly if your defences rely on the idea that all packets from ip xxx.xxx.xxx.xxx are "safe" you're going to get burned.
Where's the value in a software firewall supposed to be?
The most dramatic example I have is that I manage a very heavy database-driven web application from an outside vendor that must be public to the entire world. Even a simple request of the home page results in dozens of queries, and users that are logged in put an additional load on resources. I've addressed performance with other optimizations, but since we have no need to generate additional traffic, I have iptables rules to detect unwanted crawlers or pen-testers and block them (based on probes to unused ports/IP addresses, user-agent strings, etc.). When we originally deployed the application, crashes were routine during peak cycles. Now, the server barely breaks a sweat. The ability to implement simple logic in a robust, time-tested software firewall plays an important role.
Other examples include mitigating SSH brute force attacks, port knocking, forensic logging and honeypots. My honeypots protect me from many undisclosed vulnerabilities, simply because attackers are poking around in places they shouldn't be and are automatically blocked. It's much nicer to review a simple report of blocked attacks than it is to troubleshoot a compromised server.
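To give a concrete flavour of the honeypot idea, a hedged sketch using iptables' stock recent module (the port number is an arbitrary unused one):

    # drop all traffic for an hour from anyone previously tagged
    iptables -A INPUT -m recent --name probes --rcheck --seconds 3600 -j DROP
    # tag anyone touching an unused port, and drop that packet too
    iptables -A INPUT -p tcp --dport 2223 -m recent --name probes --set -j DROP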
Or what if I want to run a mail relay but only allow machines I control to access it? Or block a user who's doing something nefarious?
Sure, most services allow you to control access by ip, but I'd much rather manage it all in one place. And the ports show up as closed if you block at the firewall.
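For the relay case, a minimal sketch (the address range is a placeholder); rejecting with a TCP reset is what makes the port show up as closed rather than filtered:

    # allow SMTP only from machines we control
    iptables -A INPUT -p tcp --dport 25 -s 203.0.113.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 25 -j REJECT --reject-with tcp-reset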
Hmm. In that specific instance it makes sense, because spoofing the IP is the whole point of the attack. I'd argue that's basically unique to DNS though, in which case the advantage of managing it in the same place as your other services goes away.
>Or what if I want to run a mail relay but only allow machines I control to access it.
Then you use a real authentication mechanism (i.e. SMTP AUTH). Otherwise it would seem perfectly possible for a spammer to spoof one of your IPs and use your relay.
In any case, there was discussion on those issues not long ago: https://news.ycombinator.com/item?id=5316093
I assume you mean to disable it? I see that listed in various places, but I don't understand why it matters… I like to kill all passwords on my VMs (so there's nothing to brute force), uninstall sudo and only use ssh keys to authenticate. I would like to know why this is a bad idea.
So if you disable root logins, they have no idea what to use as a username. Oh sure someone who personally knows which machine you maintain might be able to guess jonwood is your username or my username is vlm... but they'll never be able to log in as root and botnets aren't smart enough to try anything else. Aside from root, I wouldn't make your "primary user name" = "hostname" either.
Thinking back on my occasional examinations of logs, I don't know that I've ever been scanned by botnets trying anything other than root as a username. I'm sure it happens, but I can't afford to spend 5 minutes per syslog line of my life either...
The common problem is that sshd was started before creating the final image, so it has keys that are duplicated by provisioning. Always delete the keys before committing the final image, so that sshd will create new keys the first time it runs.
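Concretely, something like this (Debian/Ubuntu commands shown; ssh-keygen -A works anywhere with a reasonably recent OpenSSH):

    # on the template, before committing the image:
    rm -f /etc/ssh/ssh_host_*

    # on first boot of a clone, regenerate fresh keys:
    dpkg-reconfigure openssh-server   # Debian/Ubuntu packaging hook
    ssh-keygen -A                     # or: create any missing host keys directly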
If our IP address changes (e.g. the ISP assigns a new one for the cable modem) then we just update the whitelist (and remove the old address). It's very infrequent; I could probably count the number of times I've done it on one hand.
It might not be the most scalable setup but at our small size with everybody working from home it works great.
The only slight hitch is updating it when traveling, but even that isn't much of a problem. It takes a minute or two from the AWS console and it's good to go.
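The CLI equivalent is just as quick; a hedged sketch (group ID and addresses are placeholders):

    # drop the old address, allow the new one, port 22 only
    aws ec2 revoke-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 22 --cidr 198.51.100.7/32
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 22 --cidr 203.0.113.9/32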
I recently took a look at Digital Ocean ($5 servers give me ideas...) but didn't see a firewall option similar to the security group setup in AWS. If it does exist, then I highly recommend it.
[EDIT] The relevant example is 4th in the list.
I had loaded up an Ubuntu Desktop droplet with the purpose of checking something out through the browser on the node.
The startup page was https://www.americanexpress.com/
Since when is that default?
Didn't think much of it at the time, but now... whoa.
Somewhat related: ChicagoVPS gave me a 'fresh' Gentoo VPS, and the default root password provided was identical to the original one from several months ago. I assume it's one Gentoo image with the same password for all customers?
If you're using AMIs from some other third party you should verify that they do the right thing.
I need to automate deployment/reprovisioning of 30 Digital Ocean servers. As reprovisioned servers frequently reuse the same IP address, I always run into this; in my case I had to disable the check :(
You're actually better off just baking a shared key into your image. So long as it's a key you generated yourself (not like this Digital Ocean scenario, where the key came from the cloud host), only someone who has already rooted one server can successfully MITM your SSH connections.
Whereas if you ignore host key checks entirely, anyone who gains control of one network hop between you and your servers can own you.
What I've done is added a whitelist to my .ssh/config to disable the alerts only for those hosts. The foo.datacenter.bar.com address (which I use often enough, usually when migrating it between datacenters) still alerts.
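Something like this, with the host pattern as a placeholder (and for a one-off stale entry, ssh-keygen -R <host> removes just that line from known_hosts):

    # ~/.ssh/config
    # disposable droplets: skip host key checking entirely
    Host 10.20.*
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
    # everything else (foo.datacenter.bar.com included) still alerts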
And yes, I know I'm living in the 90s what with my datacenters and whatnot. They're kind of like regions... what? Oh... why, you... You kids, get off my lawn!
Spent an afternoon figuring this out.
I believe if your version of OpenSSH is up to date, sshd will read the host key each time a session is opened and does not need to be restarted.
If you are still reviewing salt, I just wrote a post about salt-cloud and DigitalOcean that you should check out -
Create your own fleet of servers with Digital Ocean and salt-cloud: