(The "PGP is too hard for discussing security issues" thing, though, is total nonsense. Can't be doing that.)
I used to respect this vendor and recommend their tools to others. I am having to rethink this a lot. I am no longer happy to recommend Hashicorp products to others.
Packer, Terraform, Vault, all of them.
Using a bastion host in your configurations doesn't help if you invoke the tooling on your laptop, since the connection to the bastion host is done with the SSH package, again with no host-key verification.
> I am bemused by an approach to accepting
> security reports which is to go through the
> motions of having PGP public keys available
> for people to use to report issues but upon
> receiving such a request ask for it to be
> submitted without PGP because digging out
> the keys is too much of a hassle.
1) both of our MTAs do STARTTLS. And gmail is only TLS.
2) 99% of the PGP-encrypted emails we get to firstname.lastname@example.org are bogus security reports. Whereas "cleartext" security reports are only about 5-10% bogus. Getting a PGP-encrypted email to email@example.com has basically become a reliable signal that the report is going to be bogus, so I stopped caring about spending the 5 minutes decrypting the damn thing (logging in to the key server to get the key, remembering how to use gpg). But I recognized him as a knowledgeable person from the Internet, and I knew (1), so I just asked him to send without PGP to save me 5 minutes.
Even if I'd used PGP, I would've just replied cleartext anyway to our firstname.lastname@example.org list, except all the MIME would've been garbled and unreadable.
In summary, the PGP tooling sucks (especially in gmail, but really everywhere) and it's too often used by people who are more interested in using PGP than reporting valid security issues.
I understand where you're coming from but this bit of logic I don't get. Especially if you recognized him, then clearly it's worth spending the five minutes.
The five minutes you personally save, someone else has to pay them. The person who originally sent you the email now has to read the response, process it, send you another non-encrypted email and/or enter into a meta-debate with you. They end up mentioning it on their blog and now we're all having this meta-meta-chat about it on hacker news, wasting several man-hours on it.
Was it worth it?
Something we all do, and which I'm personally very guilty of.
In this case it was exacerbated by "99% of PGP-encrypted security reports being bogus" (to paraphrase).
I can clearly see both sides of this coin though (security!), and I'm not at all discounting what you're saying.
It's both ridiculous and horrendous that PGP remains an operational holy grail after how many decades?
This is strictly worse than the problem you're writing about (lack of SSH host key verification) because at least with SSH you know the link is going to be encrypted (perhaps only between you and an MITM, but I digress...), but with ESMTP you can't assume even that. For example, it is perfectly acceptable for Google to one day decide not to negotiate TLS with ESMTP MTAs it's delivering mail to. You could check the message Received: headers on all the E-mail you receive to see if it was used, but by that time it's too late.
I figured the PGP usability problems were severe enough that I did not call you out by name in the post. This aspect was merely a darkly amusing aside leading into the main point of how the Golang devs handled this so well.
You didn't know the content of the report, so the irony is only clear in retrospect: the whole point of the report was vulnerability to MitM attack, and email MX->MX delivery is highly susceptible to that without some kind of trust anchoring in place, whether DANE or MTA-STS, neither of which is in place for the golang.org domain.
So the fact that TLS is _advertised_ by the gmail servers, which handle golang.org mail, doesn't mean that the advertisement reaches the sending mail-server.
I've since configured my mail-servers to always require verified TLS for outbound mail to the golang.org domain, as a manual override.
* If you can get remote code execution in a Go program, use PGP. Otherwise do not.
> 1) both of our MTAs do STARTTLS. And gmail is only TLS.
Even if the GMail MXs are configured to only accept E-Mails over TLS, the user's sensitive E-Mail may, depending on his settings, traverse multiple servers with less secure settings before a connection is ever initiated with the GMail MXs.
If someone's E-Mail is going to be MITM'd it's far more likely to happen between the Internet café he's sitting in and his ISP's badly configured SMTP server than between his ISP's SMTP server and GMail's TLS-using MTA.
GMail using TLS will do nothing to protect against the message being MITM'd, whereas GPG would, because it's end-to-end encryption, unlike TLS settings for individual MTAs in a possibly long chain of mixed or no encryption before a message reaches you.
IIRC, STARTTLS is not mandatory and is purely opportunistic unless explicitly enforced. So relying on STARTTLS alone is not a good idea.
GPG integration doesn't suck in Mutt.
If PGP (or something like it) is going to catch on, it has to do so for everybody's MUAs, not just nerd ones.
(Yeah, if it were just signed, that'd be fine.)
The window isn't as small as you'd think. The hacker would have all the malware written except for the infiltration point. And a skilled team, who is familiar with the software you are targeting, can go from a whitepaper to something actionable in a couple weeks. Less if they don't sleep much. Even better, a lot of these vulnerability reports come with POCs that can be pretty easily adapted to your needs.
As for who could pull this off, you don't necessarily need nation-state resources. In fact, bulk email slurping probably wouldn't help, since STARTTLS is pretty ubiquitous. Since you're focusing on a single target, your best bet is compromising the SMTP server. If that's someone's personal server, that's well within the abilities of a moderately skilled group. If it's managed email (say gmail) it gets much harder, but maybe they get lucky and figure out your password, either from good guessing/brute force or from an account leak.
Of course, a group could get access to those vulnerability reports by hacking the laptop of someone with the keys, but at least using PGP lowers the attack surface.
Other than that, not really.
We use this:
If you use the DO API to provision servers my feature request is here:
Please upvote it or at the very least copy the cloud-init script to help provision your servers.
Or go with a convergence style system and probe it from multiple locations.
Or just give up and go with TOFU - if you never get an error even on different connections, you probably haven't been mitm'd.
Or just embed the signed host certificate in cloud-init.
This issue hit me while building a tool for internal use at my employer. I was using the glide vendoring manager for this project, and adding another dependency triggered an update of all the other dependencies. At that point my tool broke and forced me to actually think about host key verification.
> Security. A security issue in the specification or implementation may come to light whose resolution requires breaking compatibility. We reserve the right to address such security issues.
In general I am very happy that the big emphasis on stable APIs was taken up by the community, and that we have a lot of stable packages out now (even though they might not be 100% stable like the standard library). Since I also have to work with Node.js, where changing APIs and packages are much more common, I have come to really appreciate that fact about the ecosystem.
But this sort of issue is exactly the sort of real-world review and hardening which justifies having a namespace for stuff to go _before_ it becomes stdlib.
/x/exp is experimental.
edit: For clarification, I'm still on the LAMP stack. Too bad to hear this about Golang, though I'm still looking to learn it; I hear a lot of great things about it.
For web, using TLS (SSL) is a good start. This could be improved further by using HSTS, HPKP, DANE etc. (not sure if A+ already implies them anyway).
For SSH, you need to have an out-of-band way to get the host keys or use something like an SSH CA.
edit: literally band? Like another wavelength/connection?
One way to get the key out of band might be from the AWS console for example. Presumably that connection is protected via HTTPS where the CA infrastructure (theoretically) can protect against MITM attacks.
BTW, it's also possible to setup ssh to use certificates instead of simple keys which might make sense depending upon how many hosts you manage.
At the moment I'm just dealing with a cheap domain-mapped single-core VPS.
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4Ljuxwb9ss74agSmMRlBZdnIwMHprWIZ3Ts3G+hxnMmcQxeMAWoA4YXZwbrpQulFjDhjGqQoAGF+MKWXBpaeU= root@ip-172-31-17-215
tl;dr: replacing HOSTNAME with the hostname by which you access the server, run the following command on your server and append its output to ~/.ssh/known_hosts on new clients:
cat /etc/ssh/ssh_host_*_key.pub | awk '{print "HOSTNAME", $1, $2}'
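The same rewrite as a small Go helper, for anyone scripting this; a sketch that assumes the usual `keytype base64key [comment]` layout of the `.pub` files:

```go
package main

import (
	"fmt"
	"strings"
)

// knownHostsLine turns one "keytype base64key [comment]" line from an
// ssh_host_*_key.pub file into the "hostname keytype base64key" layout
// that ~/.ssh/known_hosts expects.
func knownHostsLine(hostname, pubKeyLine string) (string, error) {
	fields := strings.Fields(pubKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed public key line: %q", pubKeyLine)
	}
	return hostname + " " + fields[0] + " " + fields[1], nil
}

func main() {
	line, err := knownHostsLine("example.com", "ssh-ed25519 AAAAC3Nza... root@host")
	if err != nil {
		panic(err)
	}
	fmt.Println(line) // example.com ssh-ed25519 AAAAC3Nza...
}
```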
I am worried because I don't know. I covered some basics like SSL, XSS (HTML escaping), sessions and redirects, PDO/SQL... no DDoS protection/load balancing... and backups, backups.
I'm not sure if it's possible to "bounce off" an attack from someone else's IP; by "attack" I mean a specific requested URL that is looking for an exploit.
This may also be a good read:
It is crazy to just randomly try different octet combinations to form a new IP and see what you get... what about IPv6, hoho.
The easiest MITM attacks for a civilian to perform involve a wireless router that traffic is routed over, with mildly more complicated but similar setups near a server -- say an untrusted server farm for example, or a malicious employee at a self-hosted location. More complicated attacks require messing with DNS so that people wanting to connect to that host instead connect to you; and really big entities like governments or advertising-hungry service providers could of course try to set something bigger up that affects more people.
The basic necessity is that the MITM computer needs to have an IP address, because the premise of MITM is: "You issue a request to connect securely to server IP 203.0.113.5, but you send your packets through me, and I have access to some IP 198.51.100.7. So now I intercept them rather than forwarding them on, and I connect securely to 203.0.113.5 myself, as well as to you. Now everything you try to send to them gets through, and its response gets back to you, but not before I intercept it and decrypt it and store it for later retrieval."
So the simplest way to do something like SSH is to imagine that I ask for your public key, you give it to me, I encrypt a shared private key with your public key, I send it to you, you decrypt it, and we use this private key for the rest of our communication. (That's not what SSH does, SSH does Diffie-Hellman, but similar reasoning applies.)
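The simplified key-transport scheme above can be sketched with Go's standard library (illustration only; as noted, real SSH performs a Diffie-Hellman exchange instead):

```go
package main

import (
	"bytes"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// keyTransport plays both roles of the simplified handshake: the client
// encrypts a fresh session key to the server's public key, and only the
// holder of the matching private key can recover it.
func keyTransport() (bool, error) {
	// Server: long-lived keypair; the public half is what the client fetches.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return false, err
	}

	// Client: random session key, encrypted to the server's public key.
	sessionKey := make([]byte, 32)
	if _, err := rand.Read(sessionKey); err != nil {
		return false, err
	}
	ciphertext, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, &serverKey.PublicKey, sessionKey, nil)
	if err != nil {
		return false, err
	}

	// Server: recover the session key; both ends now share a secret for
	// symmetric encryption of the rest of the conversation.
	recovered, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, serverKey, ciphertext, nil)
	if err != nil {
		return false, err
	}
	return bytes.Equal(sessionKey, recovered), nil
}

func main() {
	ok, err := keyTransport()
	if err != nil {
		panic(err)
	}
	fmt.Println(ok) // true: both sides hold the same session key
}
```

Note that nothing in this exchange authenticates the server's public key itself, which is exactly the gap a MITM exploits.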
A MITM attack is then: you ask for the public key, I make a public/private key pair of my own, I send you that public key, and I ask the remote server for its public key myself. You send me a shared private key (encrypted to my public key), I send the remote server a different shared private key, and when you send something to the server, I decrypt what you sent me, then re-encrypt it and send it to the server.
Your main means of security in this scenario is actually 100% the same as your main means of security in SSH: you trust that the key pair of the server does not change very often. Therefore when you connect, you store the first public key you ever see; then each time you connect, you double-check it's the same, or else you shout at the user: "HEY! Someone could be overhearing your data! Are you SURE you want to connect still?"
What's at issue in the article is that this crucial step was totally ignored by Hashicorp, it was brought to their attention and they just missed it.
I saw the wireless thing when briefly looking up MITM attacks.
>You issue a request to connect securely to server IP 203.0.113.5, but you send your packets through me, and I have access to some IP 198.51.100.7
Sounds like a VPN no? Haha (not a serious remark)
You say decrypt; I thought you can't decrypt, or it's at least hard to, if you're using SSL?
Yeah, this is beyond me right now. The whole time I'm thinking port 80, regular web, not SSH (port 22, or whatever port is assigned); it's hard to think that's intercepted as well, but why not, it's just a port. Thanks for taking the time to write this, I'll refer to it when learning more about this, to have my bases covered.
I'm always skeptical of connecting to public networks (I try not to), or I use a burner device or something, without typing in passwords or logging in over plain HTTP.
I did say decrypt, and I did really mean it. SSH and TLS are not nebulously "hard to decrypt", otherwise we wouldn't use it because our servers couldn't decrypt the communications from our clients. They are hard to decrypt if you don't know a shared key. In MITM there are two shared keys: the client shares one key with the interceptor, the interceptor shares another key with the server. That's why the interceptor can decrypt.
Furthermore if we're talking the normal open Internet, the interceptor-server negotiation is dead simple because it looks to the server like any client-server negotiation. If you let anyone on the Internet access your pages through TLS, there is no hope to catch MITM on the server-side. It has to be defended client-side.
There is fundamentally no way to stop an interceptor who you have to send messages through from throwing away those messages and sending you the reply, "why yes, I really am https://mail.google.com/, here's my public key, let's negotiate a shared key so that we can talk in secret." Nothing. Because, you don't have an encrypted connection before you've negotiated that shared key. You aren't encrypted before you're encrypted. So for that setup phase, there's no way to stop them from throwing out those packets and sending that reply; we can only hope to detect that they are lying.
Now they are only telling us one thing, "here is my public key", and we want to know if they are lying about that. We have come up with exactly two strategies to deal with this client-interceptor side.
1. Like SSH does, admit defeat: but only for the very first connection. The very first connection, yeah, that could be intercepted. Then I will store your public key forever. Every time afterwards I will either use that public key, or else I will fire up the klaxons warning my user, and they will have to manually approve connecting and storing the new key. And that means you'll have to keep intercepting my traffic--even if I take my laptop to a different location or whatever--or else I'll find you out eventually.
2. Like TLS does, communicate some way of authenticating that public key through some other mechanism than the current internet connection. This usually happens quietly as part of downloading a web browser: that browser comes with a list of public keys of authorities whose digital signatures are trusted by the browser for authenticating public keys. If a site just sends me their public key? Forget it! They have to send me their public key plus a certificate issued from one of these authorities, saying "Yes, that is a valid public key for that domain," signed by a private key whose public key I know and trust. As long as I can verify digital signatures and they cannot be copied, I am good.
These are called "Trust on First Use" and, well, I'm not sure what you'd call what TLS does. Something like an "out-of-band" configuration of a "web of trust" I suppose. So those are the only options we've gotten working at scale, TOFU and OOBWOT.
Obviously doesn't apply to the interwebs but it does apply to some setups.
However, one of the artifacts of a popular language having a lean-and-mean standard library is that custom code proliferates, and the Go community's distaste for frameworks (as opposed to libraries) means that it's not just the business-specific edges of the code that are unique in each implementation (as you'd expect), but also a good amount of the plumbing and domain-specific control code and their immediate callers. In some other languages, where there's more of a culture of using a dependency to intentionally simplify your problem space in exchange for ceding control, this style would be derided as NIH.
The vendor's response here is a function of not only the vendor's own rationale and priorities, but also of the above developer philosophy. This is surprising to me, given that Go is an opinionated language, and yet opinionated third-party code driving your logic is frequently discouraged by its community.
On the other hand, the language maintainers' response was measured, proper, and commendable. They made a breaking change to an experimental API, and improved their product in the process.
Nor is there any reason not to use SSH just because you trust your network, except possibly performance of huge file transfers.
As for port forwarding, if you trust the network, presumably you don't need forwarding? And for file transfer just use zmodem, or run rsync/ftp? (remember: you trust the network..).
I will admit that there's a minor convenience to be able to use one client/api/interface - but I'm not sure it's worth the tradeoff of suddenly not knowing if you should be trusting ssh to be actually secure, rather than just convenient.
Clearly the maintainers of the go package feel the same way (after some gentle prodding).
I'd first like to be up front about exactly which of our software doesn't perform host key verification, since we have a lot of software and this CVE doesn't apply to most of it. Three places were identified as affected: Packer and Terraform with SSH provisioners, which both create a machine resource and can perform SSH connections to set up the machine; and Vault's SSH backend in dynamic key mode, which performs SSH connections from the Vault server to hosts (other modes do not).
Any other usage of our software is unaffected.
We’ll discuss each of these cases in detail, since the details matter to understand our thought process and response.
The SSH secret backend has three modes that can be used for generating SSH credentials: certificates, one-time passwords, and dynamic keys. Only the dynamic key mode ever actually makes connections to other machines, but more importantly, our documentation has always recommended that the dynamic key mode only be used as a last resort because of its various (documented) drawbacks compared to the other modes. With the addition of the ability to generate SSH certificates (which was on our roadmap for a long time and added in 0.7, prior to both the original report and the blog post), we did not explicitly mark the dynamic key mode as deprecated in our documentation, but we probably should do so.
Given that it is not recommended for usage (but maintained for backwards compatibility), we chose to warn users of this additional drawback of the dynamic key method, and documented the lack of host key verification (https://github.com/hashicorp/vault/commit/251da1bcdc27678fea...). As we stated in our response to the reporter, "It isn’t something we want to hide (and we’re not trying to) and we will document this."
Terraform and Packer support the ability to use "provisioners" to bootstrap a machine. In both, the provisioner is run very shortly after the machine is initially created, representing an extremely small window of attack. Neither support connecting to a pre-existing machine via SSH under normal use cases (you can make it happen through some advanced configuration trickery with Terraform, but it's abnormal). Because of this, we didn't register this as a high-priority issue.
However, we admit that this can be improved, and we likely should have been more responsive. I apologize for that. We have added plans to improve this to our roadmap, covered a couple of paragraphs below.
As the blog post states, the reporter suggested parsing console logs to determine the host key. And, as the blog post correctly says, we don't want to do this. There is a combinatorial explosion of complexity in supporting this, we have experience with this (due to Vagrant supporting this type of behavior), and we've found maintenance of this sort of functionality to be difficult to support over time. We came to this conclusion though only because there is a viable alternative: SSH certificate authentication. If a viable alternative didn't exist, we may have been forced to take the more complex route.
SSH certificate authentication was introduced many years ago and is broadly supported. This type of auth also provides authenticity to a first-use connection. We mentioned in our response email that this is something we're open to doing instead. I admit that in our response to the reporter, we explicitly said this "is not a priority" but shortly after decided to schedule this work for the next major TF release. We should've followed up again, but didn't.
And that's where we're at currently! I hope this helps make our response to the report and our future roadmap around this issue more clear.
I think most people launching instances manually or by other means never do this.
Edit: I'd love to contribute it, but my Golang skills are weak; a few months ago I contributed something very small. I really love Terraform.
Is a `null_resource` with a `file` or `remote_exec` provisioner really "advanced configuration trickery"? I don't do this often in production, but when hacking on a module it's nice to be able to re-provision a resource without going through the entire destroy/create process over and over.
1. how can an experimental library (x/) get a CVE?
2. what is "hostkey verification"? Probably the fingerprint check you usually get when you ssh into a machine + the blocking warning you get when the fingerprint of the machine suddenly changes.
3. If this is what "hostkey verification" is, how is it so hard to implement? Create some sort of fingerprint out of the server's public key; prompt the user for input; cache the result.
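The fingerprint/prompt/cache loop described in (3) can be sketched in stdlib Go (the `SHA256:` fingerprint format matches OpenSSH's; the user prompt is simulated by an `accept` flag):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint renders a raw public key in the OpenSSH style:
// "SHA256:" followed by the unpadded base64 of the key's SHA-256 hash.
func fingerprint(rawKey []byte) string {
	sum := sha256.Sum256(rawKey)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

// hostKeyStore is a toy known-hosts cache: host -> fingerprint.
type hostKeyStore map[string]string

// Check implements the three steps: fingerprint the key, "prompt" the
// user on first use (the accept flag), and cache the accepted result.
func (s hostKeyStore) Check(host string, rawKey []byte, accept bool) error {
	fp := fingerprint(rawKey)
	cached, seen := s[host]
	switch {
	case !seen && accept:
		s[host] = fp // trust on first use
		return nil
	case !seen:
		return fmt.Errorf("key %s for %s rejected by user", fp, host)
	case cached == fp:
		return nil // matches the cached key
	default:
		return fmt.Errorf("HOST KEY CHANGED for %s: had %s, got %s", host, cached, fp)
	}
}

func main() {
	store := hostKeyStore{}
	fmt.Println(store.Check("example.com", []byte("key-A"), true)) // first use, accepted
	fmt.Println(store.Check("example.com", []byte("key-A"), true)) // matches cache
	fmt.Println(store.Check("example.com", []byte("key-B"), true)) // possible MITM: error
}
```

A real implementation would persist the cache to ~/.ssh/known_hosts and actually prompt the user, but the control flow is no more complicated than this.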
A lot of people ignore the importance of this part. It is also a bootstrapping issue: At some point you need to get the key into known_hosts. Most people do this by confirming it the first time they connect. That's usually fine for new servers, but what about old servers, e.g. if you're a dev who just joined the company? Most companies don't seem to have a secure way of seeding clients with the initial host keys. I've never seen it.
We solve this with a script that gets them from Puppet (via PuppetDB, its repository of host "facts"). That script has to hard-code the host key of the Puppet host, of course.
Another solution is using the X.509-style certs to sign each host with your own CA, but I haven't looked into that workflow. Edit: It's actually a much nicer workflow. You can sign users, too.
One way to do this is via SSHFP (ssh fingerprint) records in DNS. Unfortunately they do not seem to be particularly widely supported nor widely used.
Looking here: https://golang.org/doc/go1compat
> may be developed under looser compatibility requirements
looks like that might explain why it's a CVE. Also another good answer from Zaki: https://twitter.com/zmanian/status/853342150272008192
I'd hope these don't use experimental libraries. Although as others pointed out to me, this is not an experimental library :) (which was the answer I was looking for)
We thought that was a good idea for a while, but we've recently decided we should figure out what the rules are for the x/* packages. That is https://github.com/golang/go/issues/17244.
The golang.org/x/ libraries are widely used in production, so it's absolutely reasonable to use CVEs.
The word "experimental", in this context, means "not covered by Go's standard library's (rather strict) backwards-compatibility promise" (as can be observed in this case).
"Code in sub-repositories of the main go tree, such as golang.org/x/net, may be developed under looser compatibility requirements."
CVEs are for tracking response to issues, so that people can clearly communicate about what they are reacting to. Everyone using the library needs to update their code (unless they were already setting the callback) and so having a CVE lets them describe exactly what they're reacting to.
This was a terrifying decision, because of ease of use concerns. Having done so and shipped it, TL;DR: It worked awesome, outside of some early kinks in TOFU that we worked out - and now everyone can sleep well knowing there's not a single install that thinks they are running an encrypted setup when they really aren't.
Anyone that came back asking for a flag to disable host key verification seemed happy with our argument for why that's not really much different from just disabling encryption.
See "Trust" here: https://neo4j.com/docs/developer-manual/current/drivers/conf...
If you're interested in doing this as well, we wrote code to do it in Python, JS, Java and C#, it's all Apache licensed:
This isn't as much about the concrete technology as it is about the willingness to implement any host-key checking.
EDIT: Actually, it looks like it's Hashicorp.