Has this been audited in any way? Is there a guarantee that the Go runtime won't, say, keep a duplicate of the buffer you copy() from somewhere in memory that can be paged?
Additionally, a technical description in the README of how this works would be nice for those who aren't familiar with how the memory gets locked.
Neat project. Curious to see how this develops.
Vault performs an mlock on its entire memory space (all current and future pages). So even as Go copies and moves around memory, none of the process memory should ever be paged to disk.
As cheeseprocedure correctly stated, we exclude memory access from our threat model. Even if you disallow the system from paging your memory, there are other ways of accessing a process's memory, and direct access to Vault's address space is not covered by our threat model. That doesn't mean we ignore the problem entirely: we make a reasonable effort to protect against some things even outside our threat model. However, given the nature of Go and the complexity of building a fully locked-down solution, we can't claim to cover it in our model.
Vault has mlocked memory (what this library does) since its first release (0.1).
I feel more comfortable with the first, but can't exactly explain why. memguard's repository is better organised, like a ready-to-use package; I think go.secrets would be the better solution if it were organised as well as memguard.
This is why I used libsodium (I also implemented this same concept, much more maturely for Rust). If you want this approach to work, you have to manage the memory yourself.
In the Rust version, I also use Rust's ownership rules to automatically `mprotect` with `PROT_NONE` when it's not in use, `PROT_READ` when it's being borrowed immutably, and `PROT_WRITE` when it's being borrowed mutably, all with static compilation guarantees. Plus libsodium creates guard pages before and after the allocation (ensuring no underflows or overflows either into or out of the allocated memory space), and also places a canary before the allocated region that panics when the memory is freed if the canary has been modified. It's far, far more than a simple `mlock`.
I have a rewrite half-in-progress that handles stack-allocated secrets with fewer guarantees (`mlock` and zero-on-free) but that's more appropriate for short-lived stack secrets.
The Go runtime might optimize away your memzero, or it could have created other copies of the data that you don't have a handle to.
In the Rust version of my library (and maybe in the Go version; it's been ages since I worked on it), I go out of my way to make it difficult to copy data from runtime-managed memory into a secret buffer. You can do it, and the library makes a best-effort attempt at zeroing the data when you do, but you lose a lot of hard guarantees.
Is there more to this than the fact that the Go language specification doesn't forbid it? Have you seen it happen?
First, the fact that Go's language specification allows this should be enough to stop you right there. Even if the runtime doesn't move memory around today, an update very well could. This library is supposed to be used for cryptographic secrets; "it works by accident for now, probably" is not even close to the kind of situation you should design an API around. At any point, without serious warning, an update to the Go runtime can render these protections useless.
There are situations today where Go will move data on the stack. I'm unsure whether it will move heap allocations, but when the garbage collector adds compaction support this will absolutely be the case.
There are more important differences: go.secrets calls panic() if it fails to lock, while memguard seems to log a warning with Printf; go.secrets protects against buffer overflows and underflows using a canary; etc.
They use SecureZeroMemory on Windows, memset or bzero on Linux, and in the worst case they manually wipe the array, which is what memguard does. So how is sodium_memzero any better here?
I definitely will be managing memory myself. The project is in very early stages at this point, just in v0.1.0 right now.
Right now, this is a primed hand grenade of a project. You should disclaim its insecurity at the top of the README. Be very, very specific that it is not currently functional and discourage anyone from using this until it is functional.
All of this stuff will be fixed soon.
If CGo isn't acceptable, at least use their implementation to guide the design of your Go-native version.
* On many server systems swap is already disabled today (for availability reasons), so you don't really win anything. (Although what you're doing also won't hurt, so it's fine.)
* On many desktop systems, on the other hand, swap is not the only way for memory to end up on disk; hibernation modes are another, and that includes not just OS-implemented hibernation but also firmware-provided modes (e.g. Intel's RST.)
* If you're running in a VM (in the cloud?), the hypervisor pretty much doesn't care what you lock in memory.
* If your process crashes you may end up with a crash dump, depending on the system's configuration. (On Linux you can avoid that using prctl's PR_SET_DUMPABLE option.)
Even if none of that is a problem for you, you still need to fill and use the values you protected. Where do you get the keys from? Is that path protected as well? This might be fine if you can generate them in place, but even then it's pretty hard. The same is true for key usage: How do you make sure that the key doesn't end up in another portion of your memory? It's almost certain that it ends up either on the stack or somewhere else sooner or later...
They also set the ipc_lock capability before starting the process in docker: https://github.com/hashicorp/docker-vault/blob/e8edfef53deb6...
I think my only feedback would be to show sample usages and add a bit more technical information.*
[*] That is, information understandable to people with basic encryption knowledge.