Temporary credentials are used specifically to reduce latency: you get a token that is valid for a period of time and doesn't need to be checked against the auth service on every call. Since the credentials last for 12 hours, the work to retrieve them should be negligible. And because they don't need to be validated against the auth service on every call, it seems like they would be more resilient to API failure than standard AWS credentials.
You still need to provide an access key ID and secret access key along with the session token returned by the Security Token Service, and properly sign every request with those credentials. With a valid session token but invalid credentials, the request fails with HTTP 400: "__type: com.amazon.coral.service#InvalidSignatureException, message: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details." So Amazon still checks the signature, using the same logic that applies to IAM credentials that don't expire, just without the extra session-token check. If it's an Amazon screw-up that static credentials take more time to evaluate than temporary ones, that's not our fault; it's purely on Amazon. The signing logic is still there for every API call. So (apparently?) we're back to square one: extra requests and wrapper logic for zero benefit, and a worse overall experience. The session credentials aren't stateful; I specifically checked for this behavior. So what you describe doesn't seem to happen in reality.
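To illustrate the point: under Signature Version 4, the session token only travels as an extra X-Amz-Security-Token header, while the signature itself is still derived from the secret access key. Here's a minimal sketch of the SigV4 signing-key derivation using only the standard library (the credential values are fake placeholders, not real keys):

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step of the SigV4 key derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def sigv4_signing_key(secret_key: str, date_stamp: str,
                      region: str, service: str) -> bytes:
    """Derive the per-day SigV4 signing key from the secret access key.

    The session token plays no part here: temporary or not, every
    request is still signed with the secret key, which is why a valid
    session token paired with an invalid secret still fails with
    InvalidSignatureException.
    """
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


# Fake, illustrative credentials. With temporary credentials, the session
# token would be sent alongside as the X-Amz-Security-Token header, but
# the signing work below happens either way.
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiEXAMPLEKEY",
                        "20240101", "us-east-1", "s3")
```

The key derivation is identical whether the access key pair is long-lived or temporary, which is the "same signing logic either way" point above.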
And for the love of God, don't blindly trust the documentation or what the AWS folks say. As an AWS library author myself, I had a lot of fun debugging failed requests because the smarty who wrote the signing-procedure docs forgot to mention some HTTP headers that are mandatory to sign. I had to reverse engineer an official SDK in order to patch my own code: even though it was implemented exactly as the docs say, my signing method failed on every request, despite a valid session token. Trial and error forced by broken docs is always a terrible way to develop things. If you're an AWS employee, please send my regards to the documentation folks.
This is exactly what they said in their blog post:
Long term archival data is different than everyday data. It's created in bulk, generally ignored for weeks or months with only small additions and accesses, and restored in bulk (and then often in a hurried panic!)
This access pattern means that a storage system for backup data ought to be designed differently than a storage system for general data. Designed for this purpose, reliable long term archival storage can be delivered at dramatically lower prices.
Their architecture page seems to confirm this. It seems that their service is explicitly designed to have different performance characteristics from Amazon S3, so maybe they aren't quite a direct competitor to S3, but there are probably a lot of people using S3 for the use cases that Nimbus.IO claims to do better on, simply because S3 was available at the time.
Yes exactly. Nimbus.io is designed for long term archival storage at more affordable prices. We think it's a great time to be competing on price.
We may compete with S3 for low-latency service later on (latency can be made arbitrarily low by spending enough money on caching). Initial calculations suggest we could be almost as low-latency as S3 and still underprice them by a good margin.
Note that the BackBlaze machines are optimized for very cold data since they only need to support backup and restore. We also do custom hardware at SpiderOak, but we support web/mobile access, real time sync, etc. That makes our hardware slightly more expensive because of the generally warmer data. So you're off by a few pennies, but certainly in the right zone.
For Amazon, I suspect their internal S3 cost is actually quite a bit higher than either BackBlaze or SpiderOak since their data is warmer.