Crypto Anchors: Exfiltration Resistant Infrastructure (diogomonica.com)
142 points by diogomonicapt on Oct 9, 2017 | hide | past | favorite | 57 comments


Another form of "crypto anchor" is Blind Hashing, which uses a large pool of random data to defend the hashes. An attacker would need to exfiltrate over 90% of the data pool before they could run an offline attack on hashes blinded by it. The bigger the data pool, the more data an attacker would have to steal -- and the more hashes/sec the legitimate service can run.

So while iterative/computational hashing is only secure if it is slow and if the password is strong, Blind Hashing prevents offline attacks even against weak passwords and actually runs faster as you increase the cost factor.

In this case it's more like an actual anchor -- technically we call this the Bounded Retrieval Model -- the idea that we size the network bandwidth so it takes 300 days at full line rate to steal the data over the network. So it's a physical limitation, rather than trusting a black box to protect 256 bits like an HSM does.
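As a back-of-the-envelope check, the retrieval time is just pool size over line rate; the 5 Mbit/s cap below is an assumed figure chosen for illustration, not a number from the service:

```python
# Back-of-the-envelope: time to pull a 16TB pool over the wire at full
# line rate. The 5 Mbit/s cap is an assumed figure for illustration.
pool_bytes = 16 * 10 ** 12       # 16 TB data pool
line_rate_bps = 5 * 10 ** 6      # assumed 5 Mbit/s uplink cap
days = pool_bytes * 8 / line_rate_bps / 86400
print(f"{days:.0f} days")        # ~296 days at this rate
```

The time scales linearly with pool size and inversely with the bandwidth cap, which is the whole point of the model.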

If you're interested here's an intro [0], a tech spec [1], and an academic paper [2] by Moses Liskov at MITRE.

Disclaimer: I'm Founder/CTO of BlindHash.com, which is basically Data Pool as a Service -- we provide an API into a geo-replicated 16TB (and growing) data pool.

[0] - https://s3.amazonaws.com/blindhash/BlindHash+Architecture+Gu...

[1] - https://docs.wixstatic.com/ugd/005c1c_5996c661899e4d09a28b9a...

[2] - https://eprint.iacr.org/2017/917.pdf


This looks like a pretty good technique -- and that's coming from someone who has collected 240GB+ of user:password dumps.

I certainly wouldn't get 16TB of disks just for that if it were ever leaked.

Bummer (not for me :p) that you guys went the route of patenting it and keeping it proprietary, only available through an API.

I think it would be adopted in no time if it were open source, and I'd definitely like to see something like this available as a service on clouds like GCP/AWS/Azure/etc for my day job.


Thanks for the kind words and feedback.

The approach has an economy of scale where a shared pool can secure many sites' hashes at very low cost to individual sites, but where the sum-total can fund a very large data pool. I would love to grow this to 1PB and beyond. The idea behind the patent is to give us a chance to try to grow exactly that service.

Fundamentally the technique is quite simple and easy to copy, yet IMO it is better than computational/iterative hashing in every way -- cost, performance, scalability, and security. It seemed to me a perfect example of something worth patenting. If we're ultimately not successful in commercializing it, I would want to relinquish the patent to the public domain.

The most important part -- and what's kept me working at this for years now -- is that it protects even weak passwords after a company is breached. It takes the onus (and a lot of the blame) off the end user, and solves the usability problem with passwords.

By the way, the same technique works equally well for adding BlindHash to the KDF used to decrypt your SSH key, your laptop, or your TrueCrypt volume. We can also add additional checks when running the BlindHash call for a given AppID, to enforce things like:

1. Must first reply to an SMS or enter a TOTP code

2. Request must come from a certain IP range or during certain hours

3. Request only valid after date X (time lock)

So this can be used to shore up password-based encryption as well in some very interesting ways.


This can't be too hard to build:

1. Generate 16TB of random data, backup/replicate many times

2. Think of data as 16 billion 1k pieces

3. Generate 64 random piece addresses using hashA(key) as seed

4. Concatenate the 64 pieces into one 64k chunk, and store hashB(chunk)
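The four steps above might be sketched like this (a toy with a tiny in-memory pool standing in for the 16TB; instantiating `hashA`/`hashB` as SHA-256 is my assumption):

```python
import hashlib
import os
import random

PIECE = 1024            # step 2: treat the pool as 1k pieces
N_PIECES = 2 ** 14      # toy pool (~16 MiB); the real thing would be ~16 billion pieces
pool = os.urandom(PIECE * N_PIECES)   # step 1: random data (back this up many times)

def blind_hash(key: bytes) -> bytes:
    # step 3: generate 64 random piece addresses using hashA(key) as the seed
    seed = hashlib.sha256(key).digest()
    rng = random.Random(seed)
    addrs = [rng.randrange(N_PIECES) for _ in range(64)]
    # step 4: concatenate the 64 pieces into one 64k chunk and hash it
    chunk = b"".join(pool[a * PIECE:(a + 1) * PIECE] for a in addrs)
    return hashlib.sha256(chunk).digest()
```

Without (nearly all of) the pool, an attacker can't evaluate `blind_hash` offline, no matter how weak the key.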


Our algorithm is close to that:

1. We don't partition it into fixed size blocks, but rather index directly into the array

2. The site calculates a salted hash and sends us just the hash. We recommend at least a 32-byte CSPRNG-generated salt

3. We HMAC the hash with a 64-byte site-specific token (AppID) to produce the seed

4. We generate 64 uniformly distributed locations from the seed and perform 64 reads of 64 bytes each to form a 4096 byte buffer which we HMAC with the AppID to produce a second salt.

5. The site uses this second salt to HMAC their original hash, and store that.

This design allows multiple sites to securely share a single data pool, and also means that our service a) does not see usernames or passwords, b) does not know if a login is valid/invalid, c) cannot do anything to make an invalid login look valid to the site.

There are some additional details to handle upgrading hashes as the data pool grows, and also to provide virtual private data pools for each site (so I can give you a copy of your data pool if you ever want to self-host). This is all detailed in [1] above.
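For illustration, the five steps might look like this in Python (a sketch, not the real protocol code; SHA-256/HMAC-SHA256 as the primitives and the toy pool size are my assumptions):

```python
import hashlib
import hmac
import os
import random

# --- service side: holds the data pool and a 64-byte per-site AppID token ---
POOL = os.urandom(1 << 20)     # toy 1 MiB stand-in for the 16TB pool
APP_ID = os.urandom(64)

def service_blind(salted_hash: bytes) -> bytes:
    # step 3: HMAC the site's hash with the AppID to produce the seed
    seed = hmac.new(APP_ID, salted_hash, hashlib.sha256).digest()
    # step 4: 64 uniformly distributed 64-byte reads -> 4096-byte buffer
    rng = random.Random(seed)
    parts = []
    for _ in range(64):
        i = rng.randrange(len(POOL) - 64)
        parts.append(POOL[i:i + 64])
    buf = b"".join(parts)
    return hmac.new(APP_ID, buf, hashlib.sha256).digest()   # the second salt

# --- site side: the service never sees the username or password ---
def site_store(password: str, salt: bytes) -> bytes:
    h = hashlib.sha256(salt + password.encode()).digest()     # steps 1-2
    second_salt = service_blind(h)                            # the API call
    return hmac.new(second_salt, h, hashlib.sha256).digest()  # step 5: stored value
```

Note how the service only ever sees an opaque salted hash, which is what gives the a/b/c properties above.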


That is an excellent idea. But why 16TB of random data? Why not encrypt some high-entropy value (digits of pi, whatever) with a 100-character password and generate the 16TB that way? You then use the 16TB as the pool, but you could regenerate and recover it using a scrap of paper.


You can do either. But if you generate the data pool from a seed that you retain, then you're back to trying to protect a 256-bit value from leaking.

Generating the data pool with constantly cycled and discarded keys (i.e. /dev/urandom) means the only way to have the pool is to go and get every single bit of it.

We went the second route because I like sleeping at night and it just felt like retaining a seed would defeat the whole purpose of bounded retrieval.


Sure, but that's a 256-bit value that does not have to be present at the point of use. So it's a lightweight anchor! It's extremely heavy when someone else tries to move it, and yet when you move it yourself, it easily fits in your wallet on the tiniest of SD cards, or even on a scrap of paper.


How about this? Take the old Blowfish block encryption algorithm and eliminate the key expansion and expand it so that the s-boxes and p-array take up 16TB of data? What you'd wind up with is a block cipher that has a 16TB key. Since Blowfish is clearly "prior art," and is unencumbered by patents, this might make this approach harder to attack using patent law.


I don't think it's a bummer that they patented it. In October of 2037 (assuming they received it today), it will be available for the whole world to use. Until then, it will still be available for the whole world to use, just for a small licensing fee. In the mean time alternatives can also be developed.

This technique could have been invented and promoted starting in 1997 (20 years ago) but only through the protectionism of the patent regime do you have this beautiful write-up and promotion of it by researchers pushing it forward: it's the patent regime working in action.

It works EVEN WITH WEAK PASSWORDS. That is pretty amazing if you ask me.

I am glad they patented it and are promoting it.

"But wait, it's so simple".

Let me give you an example of a $684.23B company that you've heard of that is making a mistake in security that even a small child could detect and correct, but for which there is no proprietary solution in the space pushing them forward.

The company is Google, and their silly security mistake is this: when I give out "jsmith543+weeklytechupdate@gmail.com" (where my true address is jsmith543@gmail.com) because I'm signing up for the Weekly Tech Update newsletter but am afraid they could start spamming me, or sell my address to any number of third parties who would, Gmail tags the incoming mail with "weeklytechupdate". Pretty clever. The only issue is that it is possible to strip the +____, and spammers actually do that. Here are examples of HN people saying they actually do that: https://news.ycombinator.com/item?id=15396446

>I’ve run a fair amount of email campaigns where we strip out the + if gmail is the domain to ensure it doesn’t end up in some weird filter.

The solution is extremely simple. Allow me to specify a key-value pair from the GMail interface that generates a high-entropy key and pairs it to a value I choose. Deliver all mail addressed to that key to my inbox, tagged with the value I chose, until I start marking it as spam. Very easy. Example: I go to gmail, I click "generate rescindable address", I am given affj3fjd and I assign it "weeklytechupdate". I see that affj3fjd@gmail.com gets tagged weeklytechupdate, and if I need to give my email address to that web site in the future I can always look it up in some list. Easy. Gmail doesn't do it, and its spam solution is broken.
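The proposed scheme is only a few lines of code; here's a sketch (the alias format and the use of `token_hex` are my assumptions for illustration, not anything Gmail offers):

```python
import secrets

aliases = {}   # alias -> label; in practice stored alongside the mail account

def rescindable_address(label: str, domain: str = "gmail.com") -> str:
    # A hypothetical "generate rescindable address" action: mint a
    # high-entropy key and pair it with a label the user chooses.
    alias = secrets.token_hex(4)          # 8 random hex chars
    aliases[alias] = label
    return f"{alias}@{domain}"
```

Rescinding an address is then just deleting its entry from the map; nothing about the alias reveals the real account name.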

The only thing is: nobody has come up with something clever enough to patent in this space, and then promote the @#$# out of. If they had, I could give my email addresses out in confidence to whoever I want.

Actually I made a full gmail email address dedicated only for spam. The problem is I can NEVER read the stuff that goes there as I just don't even look. I just looked. The last piece of spam that I got delivered to it occurred 7 days ago. There are just 2 pieces of mail in my inbox.

That means Google's spam filter is very, very, very good. Wait, what? So good that it silently filters spam that I expect to get, that I explicitly give out my email address for? (Okay, I just looked, and there are 2 messages from 4 days ago - nothing more recent - in the "promotions" tab).

No. That's not what it means. It means that some of these sites I give my address out to aren't able to email me at all. They're just not getting through, because GMail's spam filters are too draconian.

When I give out "jsmith543+weeklytechupdate@gmail.com" I expect ALL of the mail sent to there to go through - not to be caught by the spam filter. Instead, presumably what happens is gmail throws away most mail that isn't sent to an individual by an individual.

Sorry to rant on this aside, I just wanted to show, in action, the difference between a patented solution that a company promotes, versus an EASY solution that would WORK, that GMail doesn't do. It actively does something broken. Nobody has come up with and promoted some fancy solution that works, so instead of the simple solution that works, they use nothing -- only a broken, non-working security-through-obscurity scheme that you can see HN'ers actively strip out in order to spam effectively.

And this is Google. So this is a question as clear as day for why I don't mind patented novel algorithms with companies behind them licensing and promoting them. I kind of mind when it's a race to the patent office with new technology, but grandparent poster's technique is one that could have been done in 1997 so I don't really buy that excuse. I like that they're patenting it and promoting it. It's a good way to get companies to use better solutions. Companies just don't do it by themselves, as my Google example shows.


>Until then, it will still be available for the whole world to use, just for a small licensing fee

They don't appear to be selling right-to-use licenses. Most of the text on their site suggests a cloud-based service, which I suspect will be usage-based.

All that to say it is perhaps too soon to judge the end user cost as small. Maybe it will be, maybe not.


But my point in this case is that if they hadn't patented it and been pushing it, we wouldn't even be talking about this. It promotes it OR alternatives.

The impact on consumers is positive even if they only get meager access for 20 years. (For example the patent owner could just be bad at economics and set their price too high, thinking they would get more profit than via wide adoption: they might not set it at the monopolist's profit-maximizing price point.)

Even so, everyone gets it after a while (20 years.)


Simply filter all email sent to the non-plus address to spam, and then only give out random oh_sigh+aslkdfjslkdjf@gmail.com addresses. Now, if a spammer strips it off, they go directly into your spam box. Where stupid regexes don't like the plus, you have the dot allowance for gmail, where foobar@gmail, f.oobar@, f.o.obar@, foob.a.r@, etc. all get routed to the first address. Gmail lets you have up to 30-character user names; a 29-character name has 28 gaps for optional dots, so you can encode 2^28 = ~268M unique addresses. But those sites are very rare.
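Enumerating the dot-equivalent spellings is straightforward; a sketch:

```python
from itertools import product

def dot_variants(local: str):
    """Yield every Gmail-equivalent spelling of a local part, formed by
    optionally inserting a dot in each of the len(local)-1 gaps between
    characters -- 2**(len(local)-1) variants in total."""
    gaps = len(local) - 1
    for bits in product((False, True), repeat=gaps):
        out = [local[0]]
        for ch, dot in zip(local[1:], bits):
            if dot:
                out.append(".")
            out.append(ch)
        yield "".join(out) + "@gmail.com"
```

A 6-character name like "foobar" yields 32 spellings; a 29-character name yields the 2^28 mentioned above.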


1. Out of curiosity, do you actually do this? (The first part you propose.)

2. As a theoretical solution it is a bit weaker than the "simple" solution I think Google should obviously do, because under your proposal different spammers can coordinate, invalidating your privacy. (You didn't tell two different unrelated sites that you're the same person, but actually you are, which they could build into a targeted profile if they coordinate or, for example, are owned by the same parent company.) Granted this is a theoretical concern but it is there.


Not exactly - a little more complex actually.

I have two emails: super_private@gmail.com which is only handed out to people I know in real life...I've had this one since 2004 and I still get zero unwanted emails on that address. Then, I have another address, super_public@gmail.com which mass forwards all mail to my super_private email, which then filters it according to the rules I've set up.

The reason I have the extra layer of indirection is that it wouldn't be very user-friendly to force someone you know to email you with a plus sign and then some junk. This way I can give a 'normal' email address to normal people, and my filtering address to auto-signups and things like that.

2. You're right - I guess I'm not too worried about a profile being built for me, but this definitely would not handle that issue. I also use anonymous remailers like getnada.com if I am signing up for something which I think is particularly embarrassing if it gets out, but that is rare.


thanks. I also have set up forwarding on some gmails. it's a bit of a pain.


You cannot patent the use of a really long salt. That's like patenting the hashing of any string longer than 2000 chars. It's a trivial operation. They may think they have a patent, but I trust it not to hold up. Go build your own 12TB pool of data to use for salting hashes. I trust they'll never find out, or have any grounds to sue you if they do.


Their patent is meaningless. A patent for using 16TB of data as a salt is trivial.

password + salt + password or salt + password + salt are known and trivial patterns in hashing. Unpatentable, and even if a patent were somehow granted, unenforceable.

If your salt is 5 characters, it can certainly be 500000000 characters instead without the patent overlords having any slimy grounds to come after you.


A similar technique can be used in embedded systems to speed up how quickly password-derived keys become unrecoverable from memory after power-off: instead of storing the key directly, store it as a value that must be XORed with a hash of, say, 4k of random bytes that live only in memory. The key is then fully unrecoverable once any 256 bits of the 4k bytes have decayed -- as long as the RNG used to generate the random bytes is suitable, and the executed code (including the OS, if there is one) is verified not to store temporary values that could be recovered.
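A sketch of the XOR-wrapping idea (using SHA-256 as the hash is my assumption; any corruption of the in-RAM pool destroys the mask and hence the key):

```python
import hashlib
import os

def wrap_key(key: bytes) -> tuple[bytes, bytes]:
    """Store the key XORed with a hash of 4k of random bytes held only in
    RAM; once bits of the pool decay after power-off, the key is gone.
    (key must be at most 32 bytes here, the SHA-256 digest length.)"""
    pool = os.urandom(4096)                    # never written to disk
    mask = hashlib.sha256(pool).digest()
    wrapped = bytes(a ^ b for a, b in zip(key, mask))
    return wrapped, pool

def unwrap_key(wrapped: bytes, pool: bytes) -> bytes:
    mask = hashlib.sha256(pool).digest()
    return bytes(a ^ b for a, b in zip(wrapped, mask))
```

Even a single flipped bit in the pool yields a completely different mask, so the wrapped value on its own is useless.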

For password authentication, IMO a much better solution is to generate strong random passwords (21 characters of base64) for users and tell them to write them down and/or use a password manager (I think browser-based storage of generated passwords can be done without the user ever needing to see the password). You can still memorize a small number of those over a few weeks if necessary, and there is no good reason to memorize a bunch of passwords.
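Generating such a password with the standard library, for example:

```python
import base64
import secrets

def strong_password() -> str:
    # 16 random bytes -> 128 bits of entropy, base64-encoded and trimmed
    # to the 21 characters suggested above (~126 bits kept).
    raw = secrets.token_bytes(16)
    return base64.b64encode(raw).decode().rstrip("=")[:21]
```

At ~126 bits of entropy, offline guessing is hopeless regardless of how the hash is stored.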


Even though I adore the concept (I remember the original posts by J.Spilman in 2012 and kept rolling the idea around in my head for a while), doesn't this introduce a new remote SPOF into the authentication process?


Very flattering that you remember :-) It's still me.

One nice thing about the design is that since the data pool isn't actually storing hashes, it doesn't change over time (except when you want to grow it), so it's easy to have multiple data centers that operate completely independently.

Different copies of the data pool, different networks, different DNS, etc. The client library will retry/fail-over between data centers. So while yes, you do have to make a successful API call, it's not a SPOF.

It's very easy to replicate / add redundancy when there's no active sync required between sites. The only inter-site communication we currently have is when new accounts are created, to distribute the AppID, and to aggregate usage stats, which is batched and, when it fails, will just pick up where it left off once the network is back up.


I am ambivalent about using HSMs to protect password databases, because if you're going to do that, you might as well simply introduce a minimalized authentication server (ie: something with an <AUTHENTICATE(user, password) bool> interface). It'll have approximately the same attack surface, it actually helps your architecture in other ways, and it precludes attackers from getting hashes in the first place (at least, the same way an HSM does with the HMAC key).

A Go, Rust, or (minimal, non-framework) Java authentication server speaking HTTPS to solely that AuthN interface and sharing no database with anything else is extremely unlikely to be part of any realistic "kill chain"; it'll be among the last things on your network compromised.

Meanwhile: you get to stick with technology you fully understand and can manage (simple HTTP application servers and a decent password hash) and monitor.

HSMs have a lot of uses elsewhere in secure architectures, but the password storage use case is overblown.


We discuss exactly this architecture in the talk we gave back in 2014. See here for the part where we discuss it: https://youtu.be/lrGbK6fE7bI?t=16m31s

Basically we 100% agree with you that an authentication service should do this job. The HSM is extra credit. Although it does help in cases where the auth service's DB is leaked through some other means (e.g. backups).

I will say that I'd part ways with you on the return value of that service. It shouldn't be a bool. It's better to return a token that downstream services can use to independently verify that the authentication service verified the user. It's better for your infrastructure if you aren't passing around direct user IDs, but rather a cryptographically signed, short-lived token that is only valid for the life of a specific request.


We recently developed/deployed a simple “crypto as a service” API for other apps/services to use for easy encryption, integrity protection, etc. It was originally developed with an HSM, but we eventually decided to redo it without one. There were lots of unanswered questions with the HSM in terms of having the operational experience to know how it would scale across data centers, how well replication would work, how well failovers would work, etc. We had much stronger confidence in a plain old golang service, MySQL, and Vault as a master key issuer. We basically key-wrap/integrity-protect everything in the DB and present a simple gRPC interface. An HSM would have been nice, but a small, simple service isolated from other systems largely gets us what we want, and with the confidence to scale it as we would any other application.


I agree with you, but I'm deliberately comparing apples to apples -- that service versus an HSM -- and keeping the interface minimal, just for the sake of argument. The subtext is my worry that normal developers on HN don't really understand why HSMs are operationally secure: minimal attack surface, not magic hardware.


FWIW I was concerned folks would get caught up on the password storage use case, since so many are familiar with that problem. The crux of crypto-anchoring is to segment crypto operations into dedicated microservices and use those minimal microservices to do per-record encryption, decryption, or signing. HSMs are a natural extension to those microservices if you have the budget.


HSMs have other security properties that you cannot replicate with software. They are tamper-resistant in a way a regular server is never going to be, and they have been engineered to prevent side-channel attacks. The latter is very hard to prevent on a regular server.

I agree that for the majority of use cases an HSM is not necessary, but they do bring security to the table that a simple auth server cannot, at least not without significant engineering effort.


Don't disagree; I think what I actually tried to argue for was doing both: segregate data access to a new, minimal, service that also requires a key in an HSM to operate.


Given a servlet-only Java AuthN server with no interface other than "Authenticate", what is the likely attack that the AuthN server falls to but the HSM doesn't? From what I can tell, both the HSM and the Java servlet app have really just one major weak point, and it's shared: the management interface.


I may be reinventing something that is already done here, but it occurs to me you could also put canary accounts in your data (say one every 100 entries) and use it as a tripwire to alert ops the moment one is passed to the auth service.


I've gone with this approach for a niche social networking site I'm building. The biggest benefit an HSM provides is vendor-wrapped keys, which simplifies key management and lets you lean more on your vendor to keep key material confidential. In my case I didn't feel the added cost and complexity was worth it.


This isn't only about HSMs or dedicated services. To anyone reading this thread, the key question is: how do crypto-anchors help against attacks that allow a `select *` from a database? Answer: per-record encryption mediated by a dedicated microservice.


But there's really nothing "cryptographic" about an isolated authentication service. To drive the point home, and don't do this, but if you (1) used dedicated hardware to run it, (2) IP filtered the box down to just HTTPS, and (3) ran the service using Go, Rust, or Java Servlets, you probably wouldn't even need to use a good password hash.

I'm only talking about the AuthN problem, by the way. I'm not making a general argument against circuit breaker architectures.


Folks need to worry about being able to protect more than just passwords. Engineers should be doing a good job of protecting SSNs, phone numbers, home addresses, etc. Crypto-anchoring can help for the general case of protecting sensitive information, not just passwords. `select *` shouldn't give anything in your infrastructure bulk access to sensitive information. The 'cryptographic' thing here is per-record encryption.
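A minimal sketch of that per-record pattern (purely illustrative: the function names and key derivation are my assumptions, and the HMAC-SHA256 keystream is a stdlib-only stand-in -- in production you'd use a real AEAD like AES-GCM inside the crypto service):

```python
import hashlib
import hmac
import os

MASTER_KEY = os.urandom(32)   # held only by the crypto microservice / HSM

def _keystream(record_key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy CTR-style keystream from HMAC-SHA256 (stand-in for a real cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(record_key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt_record(record_id: str, plaintext: bytes) -> bytes:
    # Per-record key derived from the master key and the record's identity,
    # so `select *` on the datastore yields only independently keyed blobs.
    rk = hmac.new(MASTER_KEY, record_id.encode(), hashlib.sha256).digest()
    nonce = os.urandom(16)
    ks = _keystream(rk, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_record(record_id: str, blob: bytes) -> bytes:
    rk = hmac.new(MASTER_KEY, record_id.encode(), hashlib.sha256).digest()
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(rk, nonce, len(ct))))
```

The design point is that the master key never leaves the crypto service, and bulk database access gives an attacker nothing decryptable.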


I think tokenizing services are a very good idea in general. I just think there are easier and more effective ways to handle the AuthN problem.


But if you have your authentication server, that server becomes a target. And unless you're using an HSM under the hood, you're still exposed to hashes being stolen.

I think the idea of the author is to protect the operation with hardware.


A dedicated AuthN server presenting only a trivial interface built on a minimal-runtime memory-safe language with no shared database is an extremely hard target. Not that either outcome is likely, but a reasonable person can argue that you are more likely to make a mistake implementing HSM-augmented password hashing on a general-purpose app server than you are to screw up a dedicated Java AuthN server.

"Hardware" isn't magic. The magic power of an HSM isn't the hardware; it's the minimalized attack surface of the software.


If you're serious about preventing exfiltration, you'll do what the military does and use data diodes:

https://en.wikipedia.org/wiki/Unidirectional_network

On a slight tangent, people with physical access to a server can extract encryption keys from RAM by plugging into a PCI slot:

https://github.com/ufrisk/pcileech


I think what will (should, perhaps?) ultimately happen -- and this is probably still years off -- is that we will stop using default routes on (most) hosts.

Publicly accessible servers and such will, of course, still have them, but things like, say, internal database servers or the PC belonging to Debbie in Payroll, won't.

Access to things outside of the "local network" (i.e., a company's entire network, not just the directly-connected subnet) will go through an intermediary (e.g., an HTTP(S) proxy) that performs per-connection authorization with a default deny.

It may end up looking a little different from this -- a default deny on all outgoing IP traffic, for example, with only specific traffic permitted -- but I believe that, eventually, this is how we'll keep random hosts from being used to exfiltrate mass amounts of data.

TL;DR: Companies need to start filtering outgoing traffic and not letting any random host on the internal network connect out to any arbitrary host in the world. This will be inconvenient and expensive (to manage), however, so we'll need a few more Equifaxes before it begins to catch on.


I've used data diodes. It isn't clear to me what is the high side in this case.


One of the great things that helps when building a crypto-anchor enabled infrastructure is to have Mutual TLS between all applications/containers. This allows you to authn/authz and only allow connections from specifically allowed apps/containers/microservices.

Mutual TLS can be a bit of work to get set up but leads to huge security wins over time as every RPC within your infrastructure is mediated by an authorization layer. We've helped out a bit with the SPIFFE project which is looking to make mutual TLS easy: https://spiffe.io/


SPIFFE's lucky to have Docker, Google, and others helping drive forward the idea of consumable service authentication frameworks like SPIRE. OSS was just launched a little more than one week ago (https://blog.scytale.io/say-hello-to-spire-7e133fad72ca).


The thing about security is that there is a point where you end up locking yourself out.

Locking your data to your hardware raises the question of what would happen if the hardware failed? Also at first glance this seems to introduce difficulties with scalability across multiple machines. Also it might make it difficult to switch between infrastructure providers.

The cost of this approach should be mentioned as a footnote.

Maybe the better solution is for society to support more small tech companies with smaller user bases that have fewer dissatisfied rogue employees to leak hashed passwords in the first place.

The root of the problem is not technical, it's political.


HSMs all have a way of enrolling multiple units into a shared state so that you can have them all be logically equivalent.

If possible, it's nice to keep an offline copy of your key material too. Maybe locked in a safe, gpg encrypted or something.


Commercial HSMs have ways of exporting the keys they hold onto smartcards. Usually a key is split across a number of smartcards, let's say 3. For increased robustness, each third is written to two smartcards (we now have 6 cards). Each of these smartcards belongs to one person, who is the only one who knows the PIN protecting the third of the key it holds. Each smartcard is then brought to a different bank vault, sealed in a tamper-proof bag.

To restore the key, you need to bring 3 of the 6 people to the so-called key ceremony, and each has to bring their smartcard and PIN.

The same mechanism can be used to provision multiple HSMs with the same key material, but there are other means to do this. As soon as two HSMs share a common secret, also known as a Key Sharing Key, they are able to exchange all key material they possess in a secure manner.
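The splitting step can be sketched as a plain XOR secret-share (an illustrative all-or-nothing 3-way split -- real HSM vendors use their own formats, and the 6-card redundancy comes from writing each share to two cards):

```python
import os

def split_key(key: bytes, n: int = 3) -> list[bytes]:
    """Split a key into n XOR shares; all n are needed to reconstruct,
    and any subset of fewer than n reveals nothing about the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def join_key(shares: list[bytes]) -> bytes:
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key
```

XOR sharing is information-theoretically secure: each share alone is indistinguishable from random bytes.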

Some HSMs don't even store the keys they generate within the bounds of their hardware. Only the master key is stored in hardware; any other key is encrypted with the master key and stored on a shared filesystem.

If this sounds artificial to you, let me assure you that such procedures are in place at various companies who deal with raw credit card data, at least in Europe. The EMV committee, the PCI organization and each issuer of credit card do mandate such procedures.

And they are very strict. We once had to ship HSMs back to the vendor, because at some point they were not supervised by at least two persons. (At least the documentation thereof was missing.)


... because at some point they were not supervised by at least two persons before being taken into operation, that is. (Afterwards, they have to be locked into a rack that requires two different badges to open its doors and must have a CCTV system recording it at all times.)


A smaller userbase means less damage for the data holder (the company), not for the actually damaged party (the person whose password is leaked). It's not the particular attack on a given threat vector that matters -- remove one and you introduce another; it's an inevitable cycle of change. The point is that this is a threat vector, and it needs to be solved no matter how large the sensitive dataset is.

So, yeah, the problem is political in a way that everyone is coming with their own agenda into it, which has little grounding in reality, yet affects decisions of many people substantially.


Folks shouldn't necessarily be scared off by the use of HSMs in this model -- HSMs are an add-on that adds an additional layer of security. That said, there are still significant wins to segmenting the applications that hold keys, particularly if they are on hosts separate from your front-end or application logic hosts. This architecture still forces attackers to only have access to data within your infrastructure, which allows your detection systems to have a chance to catch people before they leave with all the data.


So can you do any of this in a public cloud setup or is this an argument for having infrastructure that you control directly?

(I think AWS might have launched some sort of HSM service, but I haven't looked at the details and not clear if it could provide the right sort of guarantee)


You can definitely do this in public cloud HSMs.

Azure: https://azure.microsoft.com/en-us/pricing/details/key-vault/

AWS: https://aws.amazon.com/cloudhsm/

The only thing you're not able to do in a public cloud is run these in Secure Execution mode -- where you actually get to execute arbitrary code inside the enclave, instead of just doing operations with keys that are protected by the HSMs.


Delegating security-sensitive operation to a dedicated service (e.g. an authentication service) is security 101. It's just an application of the principle of least privilege.

Giving it a fancy name makes it look like it's a new idea.


A promising title, but a disappointingly banal application. When surveillance companies already have "our" data, we've lost the game. Equifax et al should leak, so society will push back against "voluntary" commercial surveillance. Meanwhile, exfiltration from end-user networks by negligent or malicious adversarial embedded software is a growing threat to privacy.


Personally, I think our current trend is very useful and should be pursued to the most extreme level:

1. Assume that attacker will get data X

2. Make what you keep in data X as useless and uninteresting as possible.

3. Hash data X with the most expensive and safest hash possible.

4. If you really can't do steps #2 and #3, warn your customers about what you are keeping and encrypt the heck out of everything.
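For step #3, for example, a memory-hard KDF such as scrypt (which ships in Python's stdlib; the parameters below are illustrative -- tune them upward as your hardware allows):

```python
import hashlib

def expensive_hash(data: bytes, salt: bytes) -> bytes:
    # scrypt is memory-hard; n=2**14, r=8 costs ~16 MiB of RAM per hash,
    # which is what makes large-scale offline cracking expensive.
    return hashlib.scrypt(data, salt=salt, n=2 ** 14, r=8, p=1, dklen=32)
```

The memory cost, not just the CPU cost, is what resists GPU/ASIC attacks.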


Out of curiosity I went to look at the pricing for Amazon’s CloudHSM service for AWS and nearly spit out my drink. $5,000 up front cost per device and $1.88/hr and they suggest running two for high availability. At those prices I don’t think you’ll see this catch on anytime soon.


I bought a "real" HSM off of eBay (used, of course) a couple weeks ago but, unfortunately, it's apparently broken and so I need to return it. The price of these things is huuuuuge and that puts them out of reach to all but large companies. I think that Amazon is trying to solve that problem but, yeah, if I had that much to spend I would just buy a couple outright (they'd pay for themselves in a few months, compared to CloudHSM).


If you have an HSM in the loop for all authentications, why bother with hashing? Just encrypt the password database with the HSM and be done with it.

There are cheaper ways of keeping secrets secret. Using a TPM on the server would be one way. SGX would be another.


How do people cope with the threat of HSM hardware failures? Would be awfully bad to lose the ability to read all of your sensitive fields including backups.



