2. OAuth 2.0 is published as an RFC from IETF. It may be a bear to read (and yes, it's a framework rather than a protocol!), but the spec is open, easy to find, and carefully edited (https://tools.ietf.org/html/rfc6749). Is TAuth meant as a specification, or a one-off API design? If it's a specification, has there been an attempt to write it down as such?
The argument the author is making is that this level of security is not sufficient for a bank. I think I agree with this statement.
For general use, OAuth 2 provides a sufficient level of security since the platforms that use it are usually only as secure as TLS, too.
Some people use the ability to be authorized to access an account on e.g. Facebook as a stand-in for authentication, but that's a different issue.
Can we not assume someone with access to a Facebook account is authentically the owner of that Facebook account for all of our intents and purposes?
Now there are two people with authorisation to access this Facebook account, so the process no longer uniquely authenticates a single individual.
Of course this is a contrived example and I'm sure there are better examples. But this is why OAuth is authorisation and not authentication and why something like OpenID Connect exists on top of OAuth2.
In the Facebook example, you would generally be much less rigorous about giving someone access to your shared photo albums than to your account settings. Having an OAuth token does not make you signed into Facebook at all; it just says that you have valid rights to do something.
1) MyLittleApp wants OAuth access to BankOfMars
2) MyLittleApp bundles BankOfMars SDK into MyLittleApp
3) MyLittleApp requests oauth access via SDK
4) SDK opens WebView for user to log into BankOfMars
5) MyLittleApp has full control over the DOM presented to the user since the WebView is technically its own.
6) MyLittleApp extracts the user's password from the DOM of the WebView
7) MyLittleApp disappears and... profit?
1. MyLittleApp opens web view to log into Mondo
2. User enters something to identify themselves into the web view (eg. email address, phone number)
3. We dispatch a notification to the user's registered device (ie. the Mondo app where the user is logged in – this may be the same device or a different device)
4. User opens the Mondo app and accepts/rejects the authorisation request
5. User returns to MyLittleApp, OAuth flow completes
In this flow, the user is not exposing their login credentials to the app… at worst, the app could extract their email/phone number. It also introduces another factor into the auth flow: the user's registered device.
The reason webviews are present in the SDK is the possibility that your app is not installed.
You cannot depend on an SMS PIN without your app, because SMS can be spoofed. I don't see any option except manual PIN copy-paste from the app to a well-known website that can be opened in the browser.
I agree in the "native app on same device" situation, we can bypass all this by bouncing straight to our app though.
I think you should talk about the case where a customer does not have the Mondo app installed and another app asks for authorization to access their account. How will the auth flow work?
Of course, that doesn't work without a phone.
> and the only SMS is one from the service to the user
There is zero possibility of a customer actually recognizing a phone number...or even a shortcode.
Assume that whatever SMS arrives on a user's phone is going to be trusted.
However, the one thing people do know how to do is Google their bank's name and go to the corresponding website.
It's like me telling you I'm going to email you the passcode to a gate. If Bob overhears this and knows your email and my email, he still isn't in a position to do anything, even if he can spoof emails. The best he can do is send you the wrong code, so you don't get access until you get the real code from me.
Bob is impersonating you in the first place and asking for the password. How do I know whether it is you or him?
If the spoof app has a "Connect with Twitter" button (and you don't have the Twitter app installed), and a webview is opened, the spoof app can replace Twitter's login page with its own and capture the username and password.
User education only goes so far. This type of attack can also make a web view that traditionally asks for a TOTP one-time code susceptible to leaking a user's password, even if the normal login flow doesn't ask for that password.
[ed: note that it's pretty trivial to, e.g., set up hidden cameras in voting booths if you want to spy on a few people, or perhaps have people film themselves in a voting booth - the point is rather that if most people make an effort to follow the common rules wrt. voting booths, the system is reasonably secure. And it's not trivial to make similar claims about a (presumably) centralized on-line system.]
We'll almost certainly add additional required factors to this process (eg. biometrics), as we see the user logging into the Mondo app on a new device as one of the most critical events from a security perspective.
You must have a flow where this works without your app being installed.
If your flow blocks on the Mondo app being installed, that's fine. It means the attack surface is restricted to your app. That's totally OK.
However, that is a very different positioning than OAuth. I would say OAuth degrades gracefully into your protocol if the endpoint is restricted to another app that must already be installed on the host device.
1) MyLittleApp wants OAuth access to BankOfMars
2) MyLittleApp bundles BankOfMars SDK into MyLittleApp
3) MyLittleApp registers itself with BankOfMars, which then has sole discretion over whether to allow it to access data hosted by BankOfMars.
4) If approved by BankOfMars, MyLittleApp can now request OAuth access to a user's data with the BankOfMars SDK.
5) SDK opens WebView for the user to approve MyLittleApp's request to access the user's data hosted by BankOfMars. The user may reject the application's request.
6) If the user approves the application's request, the user is then prompted for authentication. This can be in the form of a username/password, but may also include two-factor authentication or whatever BankOfMars deems necessary for security.
7) Should BankOfMars or the user choose to do so, either can revoke MyLittleApp's right to access BankOfMars data.
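What the list leaves implicit is the wire-level code-for-token exchange that happens after step 6. Per RFC 6749 section 4.1.3 it's just an authenticated form POST; a rough sketch with curl (endpoint name invented):

  # Exchange the authorization code for an access token
  curl -u "$CLIENT_ID:$CLIENT_SECRET" \
       -d grant_type=authorization_code \
       -d code="$AUTH_CODE" \
       -d redirect_uri="$REDIRECT_URI" \
       https://bankofmars.example/oauth/token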
This is a problem with mobile apps unfortunately, since this type of browser interaction is going to be all over the place. For web apps it works just fine.
Plus it's way more complicated and has way more failure scenarios than simple password auth.
The only saving grace that I saw was that a service no longer has to store users' passwords to other systems, for persistent interaction with their data. I think this is really why people bother using it.
How so, if you're in a browser and you check the URL? It's only flawed here because the app controls the browser itself, but that wasn't the original use case of OAuth.
One way to circumvent that would be to enforce a password change after any OAuth authorization, but that's not very user friendly.
Someone should internally want to review code, especially for a bank.
The proper solution would be either 1) the ability to register 3rd party libraries with Apple and require some kind of integrity check before approval (but even then, the 3rd party app could override library methods at runtime), or 2) code signing the binary blob library separately for every 3rd party developer (but then the problem is enforcement of where developers get the library from -- how do you verify SDK integrity from the bank's server side?)
The fundamental problem is that, as soon as you give 3rd party developers the ability to natively integrate with your service via an SDK in their own app, you are playing a cat and mouse game.
But yes, aside from that potential difference, I understand what you're saying.
Also, it would help greatly to be able to generate multiple auth users on your bank account (e.g. a read-only identity for giving access to MyLittleApp). I have seen this occasionally on banking portals, but it's very rare.
When I pay for something at a webshop via my bank account using a common standard created for that purpose (IDEAL in the Netherlands, other countries have similar systems) I get forwarded to my bank's authentication service to authorize that payment. I can clearly see that the TLS certificate belongs to my bank, and my browser is content that it is valid.
I am not experienced with iOS but I also suspect there are more advanced WebView detection tricks. It also doesn't help that Apple really doesn't like fast app switching.
mmm, thinking about it, might be compatible with the current spec.
For one, the bearer token is only one type of "Access Token" described by the OAuth2 spec. In fact, the OAuth2 spec is very vague on quite a few implementation details (such as how to obtain user info, or how to validate an Access Token), which the author seems to just assume are part of the spec, as he does with bearer tokens. Other parts, like the client/user distinction and the recommendation for separate validation of clients, the author ignores completely, generating his own (ironically, mostly OAuth2-compliant) spec.
> Shared secrets mean no non-repudiation.
Again, not true. Diffie-Hellman provides a great way to come to a shared secret that you can be cryptographically sure (the adversary's advantage is negligible) is shared between you and a single verifiable keyholder.
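The derivation itself is easy to play with; a toy ECDH run using the openssl CLI (P-256, throwaway file names), just to illustrate the key agreement, not any particular bank protocol:

  # Each party generates a keypair and publishes the public half
  openssl ecparam -name prime256v1 -genkey -noout -out alice.key
  openssl ecparam -name prime256v1 -genkey -noout -out bob.key
  openssl ec -in alice.key -pubout -out alice.pub
  openssl ec -in bob.key -pubout -out bob.pub
  # Both derivations print the same shared secret
  openssl pkeyutl -derive -inkey alice.key -peerkey bob.pub | xxd -p -c 64
  openssl pkeyutl -derive -inkey bob.key -peerkey alice.pub | xxd -p -c 64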
> Most importantly using JWT tokens make it basically impossible for you to experiment with an API using cURL.
sigh. If only there was a way to write one orthogonal program that can speak HTTP, and in a single cli command send that program's output to another program that can understand the output. Maybe we could call it a pipe. And use this symbol: |. If only.
> OAuth 2.0 is simply a security car crash from a bank's perspective. They have no way to prove that an API transaction is bona fide, exposing them to unlimited liability.
TL;DR: This article, led by comments like this ("unlimited", really?), strikes me as pure marketing (aimed at a naive audience) for a "spec" that probably would not exist had proper due diligence into alternatives, or perhaps some public discussion, occurred. At the very least, inconsistencies (a few of which I've mentioned above) could have been avoided.
 https://tools.ietf.org/html/rfc6750  https://tools.ietf.org/html/rfc6749  https://tools.ietf.org/html/rfc6749#section-2.3.2
curl -i ...                                  # Perform authentication to obtain JWT
export JWT="eY..."                           # Place JWT in a shell variable
curl -i -H "Authorization: Bearer $JWT" ...  # Call your API
curl -s ... | jq -r .token | xargs -I TOKEN curl -H "Authorization: Bearer TOKEN" ...  # authenticate, then call your API, in one pipeline (assumes the JWT arrives in a JSON "token" field)
> For one, the bearer token is only one type of "Access Token" described by the OAuth2 spec. In fact, the OAuth2 spec is very vague on quite a few implementation details (such as how to obtain user info, or how to validate an Access Token), which the author seems to just assume are part of the spec, as he does with bearer tokens. Other parts, like the client/user distinction and the recommendation for separate validation of clients, the author ignores completely, generating his own (ironically, mostly OAuth2-compliant) spec.
Last time I checked, other access token types were still drafts and bearer tokens were the only stable kind.
> Again, not true. Diffie-Hellman provides a great way to come to a shared secret that you can be cryptographically sure (the adversary's advantage is negligible) is shared between you and a single verifiable keyholder.
I as a bank cannot attribute liability for an erroneous transaction to a developer if we both share the secret with which a signature is computed. Conversely, if I as a bank am compromised and want to cover my arse by moving the blame to a poor external developer, a shared secret lets me do exactly that: I can forge signatures after the fact. This is precisely why I don't want shared secrets.
Even if your point is valid re DH, why push that up to the application level and reinvent the wheel when you can get the same benefits, less intrusively, by using a battle-tested protocol that is circa 20 years old?
> sigh. If only there was a way to write one orthogonal program that can speak HTTP, and in a single cli command send that program's output to another program that can understand the output. Maybe we could call it a pipe. And use this symbol: |. If only.
This is shit developer experience. Why bother with a Rube Goldberg sequence of piped commands when you can just curl?
Finally, despite everything you say, no OAuth 2.0 based protocol can guarantee privacy. People like privacy when it comes to their finances, I find.
Sorry for any typos, I'm on the move. Thanks for your comments :)
If you are a developer tasked with web security related work, I would expect being able to use Bash, curl, and anything needed to string together a couple of HTTP requests on the command line before your first cup of coffee of the day to be the minimum requirement of your skill set.
> Public key cryptography can be used with JWT tokens but they don't solve the problem of how the client will generate key pairs, demonstrate proof of possession of the private key, and enrol the public key with the API.
JWT is not in any way attempting to solve the problem of client identity and authentication. Rather, it addresses the question of federated user identity and how to validate that the identity assertion came from a trusted source (which is where the PKI and assertion signing come in).
Furthermore, the token is signed over, among other assertions, the audience assertion, so that you can cryptographically verify that a token was issued with the authorization of a user by a trusted service (your JWT provider, via whatever authN methods it allows) and to a given client. This should give a substantial enough audit trail to enable reasonable proof that an end-user authorized a client (which itself had to authenticate to the provider) to perform an API transaction, if it can be proven that the signature was validated and that the token issuer was clear about exactly what the user was giving the client authorization to do.
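To make the audience point concrete: the claims are plainly visible in any compact-serialized JWT. A shell sketch that merely inspects them (it does not verify the signature, and assumes $JWT holds the token):

  # The payload is the 2nd dot-separated, base64url-encoded segment
  PAYLOAD=$(printf '%s' "$JWT" | cut -d '.' -f 2 | tr '_-' '/+')
  case $(( ${#PAYLOAD} % 4 )) in 2) PAYLOAD="$PAYLOAD==" ;; 3) PAYLOAD="$PAYLOAD=" ;; esac
  printf '%s' "$PAYLOAD" | base64 -d | jq '{iss, aud, exp}'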
OAuth2, OIDC, and all modern standards I'm aware of also require client validation of some form. From the OAuth2 spec:
Confidential clients are typically issued (or establish) a set of
client credentials used for authenticating with the authorization
server (e.g., password, public/private key pair).
If your goal is to provide a non-repudiable audit trail of user identity and authorization and client identity and agency (authorized by the user to perform X) then OIDC, JWT, and client AuthN via request signing with registered keys should be more than sufficient to avoid liability in the case of rogue clients or shady users. As always, the audit trail is the most important piece, along with sound crypto and standard practices that have been audited by appropriate experts, so that your audit evidence cannot be reasonably called into question.
That lets developers maintain key confidentiality (devs keep their CA private keys) and maintain control over the app installations' access to signed certs (as well as cert lifetime).
Even if it's not a CA, OIDC has some brief words on signing JWT with a registered keypair, which gives a similar, though less robust, ability to keep the private key secret.
No matter what, any of these scenarios still involves figuring out a way to trust the installed app is authorized by the resource owner and the client developer to obtain a signed cert/token (thus shifting real financial liability onto them in OP's scenario). Which probably means requiring the end user to register for your service also, validating the user again rather than the app.
The fundamental fact remains that the human mind is the only truly secret place, which is why passwords aren't going anywhere, and why DRM solutions have to rely on making it illegal to attempt to obtain the decryption key embedded in the device, or on making attempted recovery involve physical destruction of the key.
This is how it works:
1) You create a "normal" client in the Google Developer console (i.e. a web client)
2) You create a native/Android client in the same project. This client is shared across all phones.
3) You add a scope of audience:server:client_id:$NORMAL_CLIENT_ID to auth requests from the mobile client.
4) You get back a token minted for the web client, from the native client!
The reason it is safe is that you can only do the cross-client stuff from a mobile client, which disallows any redirect URLs except for localhost and a couple of other special URIs (see https://developers.google.com/identity/protocols/OAuth2Insta...)
It's OK that the secret is not really secret, because it's not possible to use it to make a phishing site since the redirect URL is localhost.
I guess that doesn't answer your "how does it identify the app developer", but it does tell you how these things are deployed at least, and the important fact that there's just one client (not one for every device).
At the end of the day though, everyone has to sign their apps with certs that are pretty well validated. So, it really cuts down on funny business like you mention.
They suggest using PKCE (challenge-response) https://tools.ietf.org/html/rfc7636 to authenticate clients that can't be trusted with a client secret.
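The PKCE values from RFC 7636 are simple to generate by hand; a sketch with openssl (base64url, padding stripped):

  # code_verifier: 32 random bytes, base64url-encoded (43 chars)
  VERIFIER=$(openssl rand -base64 32 | tr '+/' '-_' | tr -d '=')
  # code_challenge: base64url(SHA256(code_verifier)); sent with the auth request,
  # while the verifier itself is only revealed in the later token exchange
  CHALLENGE=$(printf '%s' "$VERIFIER" | openssl dgst -sha256 -binary | openssl base64 | tr '+/' '-_' | tr -d '=')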
Security is a very hard problem, especially asymmetric crypto security. Rolling your own is generally not advised.
If there is a way to define and use client TLS according to the current spec, that would be best. If not, I agree that it's probably a good idea to create a new spec.
I agree though that the curlness of the spec is orthogonal to the discussion.
- I am not reinventing TLS
- This has nothing to do with OAuth
- It uses WebCrypto for key gen and CSR signing
What makes you say that OAuth 2 cannot guarantee privacy? I think you must have a very different definition of privacy than I'm used to if you can make this claim.
For which developers?
As far as I know, secure on-line voting is still an open research question (and that's just the theoretical bit, never mind building a real, concrete system).
1. The client (or the device holding the authentication token, or the app, etc.) should be able to maintain (on its own storage!) an audit log of all transactions it has authorized. That log should be cryptographically verifiable to be append-only (think blockchain, but without all the Bitcoin connotations), the server should store audit log hashes and verify that they were only ever appended to, and the server should send a non-repudiable confirmation of this back to the client.
Why? If someone compromises the bank or the bank-issued credentials (it seems quite likely that, in at least one implementation, the bank will know the client private keys), the client should be able to give strong evidence that they did not initiate a given transaction by showing (a) their audit log that does not contain that transaction and (b) the server's signature on that audit log.
2. Direct support for non-repudiable signatures on the transactions themselves. Unless I'm misunderstanding what the client certs are doing in this protocol, TAuth seems to give non-repudiation on the session setup but not on the transactions themselves. Did I read it wrong?
3. An obvious place where an HSM fits in.
How does TAuth stack up here?
Also, there's a very strange statement on the website:
> to unimpeachably attribute a request to a given developer. In cryptography this is known as non-repudiation.
Is that actually correct as written or did you mean "to a given user"?
A blockchain related technology is overkill, you just need forward integrity: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111...
But you're still protected against transactions alleged to have occurred before your last real transaction or, equivalently, you're guaranteed to (in theory) notice the fraud the next time you try to do a genuine transaction.
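For illustration, the heart of it is just a hash chain; a toy append in shell (file layout invented, and the paper's scheme additionally evolves a MAC key, which is omitted here):

  # Each line is: sha256(previous_hash + entry), then the entry itself
  PREV=$(tail -n 1 audit.log | cut -d ' ' -f 1)
  ENTRY='2016-01-19T12:00:00Z debit 5.00 GBP acct:1234'
  HASH=$(printf '%s%s' "$PREV" "$ENTRY" | sha256sum | cut -d ' ' -f 1)
  printf '%s %s\n' "$HASH" "$ENTRY" >> audit.log

Rewriting or deleting any earlier line breaks every hash after it, which is exactly what a server-countersigned log head lets you detect.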
No one knows the private key other than the creator, in this case the developer.
> Direct support for non-repudiable signatures on the transactions themselves. Unless I'm misunderstanding what the client certs are doing in this protocol, TAuth seems to give non-repudiation on the session setup but not on the transactions themselves.
There is nothing stopping the API provider from enforcing layer-7 signatures too; it's an application concern. The same private key can be used to compute those signatures, or, since X.509 certs can embed arbitrary public keys, you can choose another key for transaction signatures.
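As a sketch of what such a layer-7 signature could look like with openssl (file names invented; this is not something the TAuth spec prescribes):

  # Client: detached signature over the exact request body
  openssl dgst -sha256 -sign client.key -out body.sig request.json
  # Server: verify against the public key from the enrolled cert
  openssl x509 -in client.crt -pubkey -noout > client.pub
  openssl dgst -sha256 -verify client.pub -signature body.sig request.json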
> Is that actually correct as written or did you mean "to a given user"?
The first version of TAuth is for Server to server apps. In this case it means developer. I will clarify that in the post. Thanks.
So what, exactly, is non-repudiable? If I go to a highly enlightened court wielding a signature, what can I prove to that court? That my app really did connect to your server at the time I allege it did? This seems weak to me.
OAuth 2 is not "bad" in general, you just need to consider the implications of using it. If you have an API that allows clients to move customers' money or take out loans, you should take additional steps to defend against MITM attacks. For example using client side certificates :)
That said, TAuth looks really good and tidy. Of course the developer may still lose the private key, so in the end you'll always need to additionally monitor API requests for suspicious behaviour.
IIRC we didn't go too far down the client cert route because we're behind CloudFlare and we like it that way. Something to revisit in the future.
The secondary complaint seems to be that OAuth 2.0 is a mess. That one I heartily agree with! A few years ago I wound up having to figure out OAuth 2.0 and wrote http://search.cpan.org/~tilly/LWP-Authen-OAuth2-0.07/lib/LWP... as the explanation that I wish I had to start. In the process I figured out why most of the complexity exists, and whose interests the specification serves.
The key point is this: OAuth 2 makes it easy for large service providers to write many APIs that users can securely authorize third party consumers to use on their behalf. Everything good (and bad!) about the specification comes from this fact.
In other words, it serves the need of service providers like Google and Facebook. API consumers use it because we want to access those APIs. And not because it is a good protocol for us. (It most emphatically is a mess for us!)
That's just plain not true. In the OAuth2 authorization_code grant, a "confidential" client is REQUIRED to send a client_id and client_secret to authenticate itself to the server.
> If the client type is confidential or the client was issued client
credentials (or assigned other authentication requirements), the
client MUST authenticate with the authorization server as described
in Section 3.2.1.
Count me as pretty dubious of letting some unknown group try to re-implement bank authentication without fully understanding the specification they're trying to fix.
All secrets go over the wire, which is protected with TLS. Ultimately the security is delegated to TLS. You're simply wrong here.
> Count me as pretty dubious of letting some unknown group try to re-implement bank authentication without fully understanding the specification they're trying to fix.
Your misunderstanding is also indicative that OAuth 2.0 is too complicated.
The statement in the OP that:
> the server does not authenticate the client. This means the server has no way of knowing who is actually sending the request.
is incorrect as written. [In the case of a confidential client] The server does authenticate the client, and it does know who is making the request.
If you're going to claim that TLS-protected authentication somehow counts as "does not authenticate the client" then I guess you'll agree that Gmail "does not authenticate" my IMAP client when it makes a TLS-secured connection and sends my 'app password' over the wire.
Another alternative to this would be to perform an OOB flow, wherein the redirect URI is actually hosted on the authorization server itself and the client can scrape the access token from the Location header.
Your whole premise rests on the threat that a client browser would not properly validate a server certificate... come on... really?
They want cryptographic proof of client identity. That means somehow the client has to prove they are the real user and not an attacker who intercepted the connection somehow (which, again, is completely possible). Client certs are a way to verify with each message that the user themselves, using their private key, validate what's going on, and that the message they validated came from the real server and not a fake intermediary.
This is different from 2fa because 2fa is authentication of identity that only happens once and does not provide cryptographic proof of identity. TOTP will give you something closer, but it's still a "dumb token" that can be intercepted.
Client request 1: "Gimme $5."
Bank reply 1: "Who are you?"
<man-in-the-middle starts listening>
Client request 2: "StrawberryNewtonManicDresser"
Bank reply 2: "Okay, you can now use session ID 1234 to request more money."
MITM request 1: "Gimme $100000."
Bank reply 1: "Who are you?"
MITM request 2: "Session id 1234."
Bank reply 2: "Okay, here's your money."
Client request 1: "Gimme $5."
Bank reply 1: "Who are you?"
<man-in-the-middle starts listening>
Client request 2: <'Gimme my money.' ^ PRIVATE_KEY>
Bank reply 2: <verifies CR2 against stored client cert>
Bank reply 2: "Okay, you can now use session ID 1234, starting at iteration 2, to request more money."
MITM request 1: "Gimme $100000."
Bank reply 1: "Who are you?"
MITM request 2: "Session id 1234, iteration 2."
Bank reply 2: <checks MITMR2 against stored client cert, is not valid because iteration 2 wasn't signed with the client private key>
Bank reply 2: "You're a faker, get lost."
It should be noted that carders, who normally get their bank credentials from malware on a user's device, can already inject commands into active, valid sessions started by the user, so verifying the user's identity is completely pointless in this case.
The victim does not know they are being MITMd and enters the 2FA code.
The bank could encode the permission (amount, beneficiary, read access, etc with an expiration date) given into an OAuth bearer token, and the app can use the token to do exactly the things that the user consented to.
If they tried this at a Canadian bank, every non-technical person would immediately switch to a competitor and they'd lose more money than they'd save via fraud prevention.
The correct URL is https://teller.io and then you won't get an SSL cert warning. Not everyone uses "www". Nowhere on teller.io do you see a link to www. You put garbage in and got garbage out.
You need full integrity verification, with a secure store and whitebox crypto keys to make such a scheme secure.
All of that is available in the banking world and is often deployed by people like Irdeto (who I work for) and Arxan etc.
I'd say the same but they've done just fine publishing anything to the App Store, which uses certs everywhere. And it was even worse the first few years.
"Just fine" is a relative term here. It's still a shit show managing them—AFAIK XCode is the only realistic option, which makes me want to remove my eyes with forks.
If you're banking on strong app protection working, you really need to be notified of its state on the server, which this won't do; you need to use a securely signed message from the verification/protection libraries on the client.
That can be done by storing this key in a cryptographic whitebox and then tying its use to integrity verification.
Problem two bemoans the bearer token in OAuth 2. Yes, it's not as secure as OAuth 1, but it's also far simpler. But you don't have to use bearer tokens; you are free to use MAC tokens instead. Why reinvent the wheel?
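Roughly, per the (long-expired) draft-ietf-oauth-v2-http-mac: the server issues a MAC key alongside the token id, and each request carries an HMAC over a timestamp, nonce, and the normalized request details instead of the raw token. A hedged sketch, details simplified:

  TS=$(date +%s); NONCE=$(openssl rand -hex 8)
  MAC=$(printf '%s\n%s\nGET\n/accounts\napi.bank.example\n443\n' "$TS" "$NONCE" \
        | openssl dgst -sha256 -hmac "$MAC_KEY" -binary | openssl base64)
  curl -H "Authorization: MAC id=\"$TOKEN_ID\", ts=\"$TS\", nonce=\"$NONCE\", mac=\"$MAC\"" \
       https://api.bank.example/accounts

Eavesdropping on a request then only leaks one signature, not a replayable bearer credential.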
So, how exactly does adding a client certificate solve that problem? If server certificate validation is disabled on the client, the MITM can still accept the client certificate and substitute their own.
The difference is that in this case the attacker will gain access to the API but the client will not, unless they are being actively MITMed. If the client tries to access the API outside the MITM, its client cert will be rejected as invalid.
I guess I missed something about how the client certificate is being provisioned. I see the video showing a client certificate being downloaded onto a desktop, but that's obviously not the intended UX for actual end-users...?
Pulling a certificate via the browser is not great, assuming we want a highly controlled chain of custody over the private key bytes and given that these certs will expire and need to be regularly rotated. But it's not much work to build some command line tool to send a CSR off for signing; that seems reasonable for server-to-server authentication.
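Something like this, presumably (names and the signing endpoint are invented):

  # Generate a keypair and CSR locally; the private key never leaves the box
  openssl req -new -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
          -keyout client.key -out client.csr -subj '/CN=mylittleapp'
  # Submit the CSR for signing and save the returned cert (hypothetical endpoint)
  curl -F csr=@client.csr https://api.bank.example/v1/certificates > client.crt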
I wonder if you'll run into issues with various languages' HTTPS libraries not properly supporting client certificates.
It's nice to think this could all just work with the lower layer taking care of everything, but I also wonder with the shitshow that is TLS if you can even be sure the client cert validation code can really be trusted as much as an application-layer check.
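For what it's worth, the baseline with curl is a one-liner (paths assumed); the open question is whether every language's HTTP client exposes the equivalent knobs:

  curl --cert client.crt --key client.key https://api.bank.example/accounts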
It's already implemented in Chrome.
I'm so jealous!
But you're correct that eavesdropping is possible.
Actually, the same logic applies to cookies. You COULD replace cookies (which are bearer tokens) with signing every request to the server, but then you're just avoiding the REAL solution: https!
Actually the biggest security theater I have seen on the web is httponly cookies to "mitigate XSS". As if the main thing an attacker will do once they inject JS is to send your cookies somewhere. They can just execute anything in the context of your session while they still have it! So by being security theater, httponly cookies are worse than useless. The right way is to prevent XSS by escaping everything properly.
I am under the impression that we are now in a phase where security needs to be stepped up, but in the meantime tokens sent via SMS are considered 'good enough'. There are lots of initiatives for the next step, each providing proper two-factor authentication, but a lot of services are waiting it out because the hardware tokens or smartcards you need for each user cost money, and if you adopt one of the current solutions such as TOTP tokens, users would need such a device for each service they use (again, for banks this is already accepted; at least in the Netherlands).
Ideally, a standard such as FIDO U2F gains ground, so users can safely and conveniently reuse a single hardware token for any service supporting that standard. Who knows, perhaps having your 'internet key' on you can become as commonly accepted as having your house keys on you.
Also, relying on SMS means all these services have a single unique number to identify you with across services. I dislike the privacy implications this entails, and prefer to keep (some of) my on-line identities neatly quarantined from the rest. FIDO U2F addresses this problem; even if you use the same hardware key for every service, they cannot be linked.
Unfortunately, most FIDO U2F services allow SMS as a fallback authentication method, including Google and Github. At least Github has some strong warnings about it.
If a service is actually guarding private data by definition (like a bank or an insurance agency), then phasing out SMS in favour of FIDO U2F or another true hardware factor is a much more likely scenario.
TOTP should always be used before SMS auth, and SMS auth should always be used in addition to an offline secret (separate from a password). It's just too easy to abuse the unencrypted, open-network nature of SMS.
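(TOTP itself is tiny: an HMAC over a 30-second counter, per RFC 6238. For instance, with oath-toolkit installed:

  # Current 6-digit code for a base32 secret, assuming synced clocks
  oathtool --totp -b "$BASE32_SECRET"

No network involved, which is exactly why it beats SMS.)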
Let's say your API server followed the standard OAuth 2.0 protocol but also required client-side certificates. Would that be as secure as TAuth?
If so, then the OAuth 2.0 option has the advantage of being well-supported by existing libraries and well-understood from a security perspective. It's less likely that a previously-unknown issue with OAuth 2.0 will crop up and force everyone to scramble for a fix.
And while client certificates prevent an attacker from forging client requests (i.e. tricking the API server), an attacker can still trick the client. An attacker capable of MITM'ing server-cert-only HTTPS can also trick TAuth clients into sending their banking API requests to the attacker's servers. It can respond to those requests with whatever it wants.
To summon the activation energy to adopt (or switch to) a new, less-popular protocol, I'd expect more security benefits.
Why can't a developer do exactly what you did in your second video, which is to save the JWT to a variable, and then use it in the request?
Heck, you could create a quick wrapper "jwt_curl"/"jwt_http" or something that automatically pulled in that variable…
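e.g. a minimal version, assuming $JWT is exported as in the video:

  jwt_curl() { curl -H "Authorization: Bearer $JWT" "$@"; }
  jwt_curl -i https://api.example.com/accounts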
There are two big things about this scheme that leave me confused: how do you know what the correct certificate for the client is? Do you just send it over HTTPS? But then, one of your opening premises is that we don't get TLS verification correct and are open to MitM, so this seems to contradict that. Or are we hoping that "that one request won't be MitM'd", like in HSTS? (which seems fine)
SimpleFIN seems simple and still secure. But maybe I'm missing something?
And the authenticator clearly does not require this global behavior: if you immediately log out from a Google page, you remain “logged in” at the 3rd-party site that you started from. So why doesn’t it log you out globally? Probably to convenience Google, at the expense of security when you auto-identify yourself to who knows how many other web sites before you realize what happened.
Logging into one page with one set of permissions should mean “LOG INTO THIS PAGE”, not “REVEAL MY SECRETS TO THE INTERNET”.
1) Problem: app authors disable TLS (server) cert validation.
2) Solution: give each app author the responsibility of managing and distributing a client side certificate.
Sounds like now you have two problems? In particular, you now have to make sure that every lost/compromised certificate is added to your growing CRL? And you need app developers that demonstrably do not even have the vaguest idea how public key cryptography can be used for authentication to take responsibility for doing this? And there's still no guarantee that they won't disable certificate verification?
Did I miss anything?
Any reference for this? The text of PSD II is here — http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:320... — but it's too long and it isn't clear to me whether it is actually ratified.
From an attackers point of view, this sounds like a very tiny ray of hope. It sounds like a cool feature/vulnerability that will probably be going away soon because it is so easy to fix.
The solution to that is to run your own CA but then it won't be 3rd party anymore. It's sort of the catch-22 with SSL/TLS: Either you use a 3rd party or you get to automate things. There doesn't appear to be any middle ground.
Why is there no middle ground? Because if the 3rd party CA is doing their job they're investigating every single request for a new certificate. That means you can't just get a new client-side certificate on demand, instantaneously whenever the need arises.
The 3rd party CA may have issued a cert to malicious party that issued another cert to their man in the middle.
You can't be sure unless you are your own CA, but then you aren't a 3rd party anymore.
Have you seen Let's Encrypt?
This is kind of a pet peeve. Anyone who ignores or wants to disable server certificate verification has to understand the risk.
I wonder if this is a custom built solution or if Teller.io is using something like HashiCorp's Vault to do the whole SSL cert dance.
Either way, this looks promising.
Not when you consider we've all been subjected to decades of "don't write your own security!!!"
Also, you forgot the question mark on the end of your sentence there. Unless you meant a sarcasm mark or an interrobang and the comment parser stripped it?