NOTE: This is written from the perspective of "My god, what does this all mean?"
NOTE #2: You can do away with almost ALL of this complexity if you just force everyone onto SSL connections, but then you have to be OK with the increased latency introduced by connection (re)negotiation; it just depends on what your API needs to do.
NOTE #3: I think the title of this article is misleading... there is no security on an untrusted client using HMAC -- if your client knows your secret AND they are untrusted, then you have problems.
It wasn't clear to me from the article how the JS library running client-side is adding the secret to the digest before sending the request to the server to verify and process.
For those AWS folks out there, that is what your AWS secret is used for -- to sign the entire request. When AWS receives the request, the first thing it does is attempt to re-create the exact same signature using the secret it has on file for "bdillon" (or whatever customer-identifying info was sent).
This requires both the caller and server to know the secret and I am not clear on how filepicker is solving this from this article... very broad strokes, no specific impl details from what I saw.
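To make the shared-secret signing idea above concrete, here is a minimal sketch of AWS-style request signing with HMAC. This is not the real AWS Signature Version 4 (which canonicalizes headers, dates, and scopes much more carefully), and the function names are mine; it only shows the shape of the scheme: both sides know the secret, the caller signs, the server recomputes and compares.

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> str:
    # Canonicalize the parts of the request we want to protect,
    # then HMAC-SHA256 them with the shared secret.
    canonical = "\n".join([method.upper(), path, hashlib.sha256(body).hexdigest()])
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, path: str,
                   body: bytes, signature: str) -> bool:
    # Server side: recompute with the secret on file for the claimed
    # customer and compare in constant time to avoid timing leaks.
    expected = sign_request(secret, method, path, body)
    return hmac.compare_digest(expected, signature)
```

Note this only authenticates the request; anyone who learns the secret (e.g. by reading it out of an untrusted client) can sign anything, which is exactly the problem raised above.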
You also described Public key cryptography as having "2 different keys. One allows for encryption and the other for decryption. In this case, the decryption key is public so everyone can decrypt." This misses the mark a little bit. The public key could be used to encrypt as well, so that only the holder of the private key can read the information. Using the private key to encrypt is generally used in digital signatures so that the recipient can verify that the sender is who they claim to be. This scenario doesn't attempt to keep the data secret, because anyone with access to the public key can decrypt the data.
You distinguish between "knowing, doing, or owning" and "something you know, something you have, or something you are"? I guess there might be a slight difference between "doing" and "something you are", but I'm not sure I see it.
re. Public Key Crypto. Yes. Using the public key to decrypt and the private key to encrypt is how to use Public Key for author verification and you can reverse it for encryption. One validates the author and the other protects the data.
You are right to say that a better way of saying the sentence would be:
"in the case of digital signatures, the public key is used to decrypt thus everyone can decrypt."
Thanks for helping me with this. Will have to be more clear next time I write something.
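The distinction being worked out above can be shown with a toy RSA keypair. The numbers are the tiny textbook values (n = 61 * 53), far too small for any real use; this is purely to illustrate which key does what in the two modes.

```python
# Toy RSA -- tiny textbook primes, for illustration only.
n, e, d = 3233, 17, 2753   # n = 61*53, e public exponent, d private exponent

m = 65  # the "message", as a number smaller than n

# Confidentiality: encrypt with the PUBLIC key, decrypt with the PRIVATE key.
# Anyone can produce c, but only the private-key holder can read it.
c = pow(m, e, n)
assert pow(c, d, n) == m

# Signature: transform with the PRIVATE key, verify with the PUBLIC key.
# Anyone can check s came from the private-key holder -- but anyone can
# also read it, so this proves authorship without hiding the data.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

(Real signature schemes sign a hash of the message with padding rather than the raw message, but the direction of the keys is the point here.)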
I think as a first step the approach makes sense. I'd propose security is broader than secrets -- one frame is Authentication, Authorization and Audit (sometimes called the "Gold Standard", because all three start with "Au", the chemical symbol for gold. Haha, CS meets chemistry humor.).
Anyway... In my mind, Authentication is about secrets, but that can fail. So you add Authorization: when bad people get access, you limit what's compromised. When those two fail, you at least have Audit, to either catch the bad actor when they start making trouble or, at worst, figure out how to stop breaches in future.
And these can mix and match. My two cents.
Not true. All that that proves is that whoever is on the other end knows the secret.
In that sense the title ("Security on an untrusted client") is misleading. The only way anyone's going to secure an untrusted client to a point where the system gains my trust is by using:
- A one-time password¹ to mitigate MITM, key loggers and screen grabbers.
- Identity federation (not OpenId or OAuth) that means I can use credentials other than the credentials I use from a trusted client.
- An application firewall² that redacts HTML on the fly so that the disclosure impact of a single GET request is reduced.
- Ideally (although this is taking things to a whole new level of paranoia) a data diode³ that protects my server.
[Edit] The use case will obviously drive out the requirements, and the shopping list above is for a basic web application that only goes as far as form data. If you're looking to work with files you'd have to write some code that ensures files don't contain nasties. While AV will go some way to doing that, the only way to really get around it is through conversion, i.e. .DOCX is converted to .PDF, .PNG to .JPG, and so forth. Yes, that limits the file types you can deal with, but if you're this far down, your requirements aren't exactly mainstream.
¹ RSA SecurID (http://en.wikipedia.org/wiki/RSA_SecurID) or Chip & PIN challenge/response (https://en.wikipedia.org/wiki/Chip_Authentication_Program).
² E.g. F5 Big-IP Application Security Manager (http://www.f5.com/products/big-ip/big-ip-application-securit...) or Microsoft's Unified Access Gateway (http://www.microsoft.com/en-us/server-cloud/forefront/unifie...).
³ Tenix is a high-end solution used by governments. The old Whale Communications Intelligent Application Gateway used to be a cheap but very effective alternative (http://en.wikipedia.org/wiki/Unidirectional_network).
Being able to encrypt the message proves that the party knows the secret, which if they are the only one who knows it, proves that they are who they say they are.
Indeed, the part about "which if they are the only one who knows it" is important. If you can't trust that, then security starts to break down.
You can sidestep it by instead trusting that they own a device that no one else has, like a one-time password generator. Now, the statement gets a bit longer:
Being able to encrypt the message =>
the party knows the secret =>
they have the OTP =>
(if you ask for multiple ones, you can better trust that) =>
they are the only one who has it =>
they are who they say they are.
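For the curious, the OTP step in that chain is itself just another application of a shared secret. Here is a minimal sketch of HOTP/TOTP (the scheme behind most OTP tokens, per RFC 4226 and RFC 6238) using only the Python standard library; function names are mine:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: pick 4 bytes from the MAC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    # TOTP is just HOTP where the counter is the current 30-second window.
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t, digits)
```

Both sides share the secret baked into the token, but because the code changes every window, capturing one code (keylogger, shoulder-surfing) doesn't let an attacker replay it later -- which is the mitigation being described above.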
I understand that your scenario calls for trust where none is warranted†. That's risk. And your mitigation is to be clear about how you handle a compromise (likelihood).
† As does probably 99.9% of sites on the Internet.
This is not exactly right. Encryption ensures people cannot eavesdrop on a message but it does not ensure you can verify the sender. You need authentication instead, which is what HMAC does in this case.
Pretty much sums up life. It's gotten to the point that zero-day exploits scare me less than users writing passwords down on sticky notes, clicking on random links, or letting random people into the building.
But yes, you are right: we have to ensure that we factor in the human angle when designing the solution.