
Introducing TAuth: Why OAuth 2.0 is bad for banking APIs and how we're fixing it - AffableSpatula
https://blog.teller.io/2016/04/26/tauth.html
======
JoshMandel
1\. To my mind, the fundamental problem OAuth solves is "letting a user
decide" to share data with an app, without making the user responsible for
jumping back and forth between the app and her API provider (her bank, in this
case). OAuth holds the user's hand through a series of redirects, and the user
doesn't have to copy/paste tokens, or remember where she is in the flow, or
know what comes next. Does TAuth have a similar capability? The blog post
mentions "User Tokens" in passing, but doesn't define or describe them.

2\. OAuth 2.0 is published as an RFC from IETF. It may be a bear to read (and
yes, it's a framework rather than a protocol!), but the spec is open, easy to
find, and carefully edited
([https://tools.ietf.org/html/rfc6749](https://tools.ietf.org/html/rfc6749)).
Is TAuth meant as a specification, or a one-off API design? If it's a
specification, has there been an attempt to write it down as such?
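The "hand-holding through redirects" in point 1 can be sketched in a few lines. This is a minimal illustration only, with hypothetical endpoint and parameter values; real providers publish their own URLs, and a real client would also exchange the code for a token over TLS:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical authorization endpoint, for illustration only.
AUTHORIZE_URL = "https://bank.example/oauth/authorize"

def build_authorize_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Step 1: the app sends the user's browser to the provider."""
    return AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,  # CSRF protection: must round-trip unchanged
    })

def extract_code(redirect_back_url: str, expected_state: str) -> str:
    """Step 2: the provider redirects back to the app, which pulls the
    authorization code out of the URL. The user never copies anything."""
    params = parse_qs(urlparse(redirect_back_url).query)
    if params.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch - possible CSRF")
    return params["code"][0]
```

The user's only job is to log in and click "approve"; the redirects carry all the protocol state.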

~~~
iLoch
I disagree with your first point. The fundamental problem OAuth solves is
secure authentication. OAuth 2.0 provides this at a bare minimum, as it can be
broken in any way SSL/TLS can be broken.

The argument the author is making is that this level of security is not
sufficient for a bank. I think I agree with this statement.

For general use, OAuth 2 provides a sufficient level of security since the
platforms that use it are usually only as secure as TLS, too.

~~~
chrisrhoden
Nope, authorization. Authentication is left as an exercise for the
implementer.

Some people use the ability to be authorized to access an account on e.g.
Facebook as a stand-in for authentication, but that's a different issue.

~~~
Navarr
Is that not a fair thing to do?

Can we not assume someone with access to a Facebook account is authentically
the owner of that Facebook account for all of our intents and purposes?

~~~
mfjordvald
As it is right now, yes. But imagine a scenario where Facebook might implement
a child account where a parent has access to monitor the usage.

Now there are two people with authorisation to access this Facebook account,
so the process no longer uniquely authenticates a single individual.

Of course this is a contrived example and I'm sure there are better examples.
But this is why OAuth is authorisation and not authentication and why
something like OpenID Connect exists on top of OAuth2.
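The authorization/authentication split is visible in the OIDC id_token itself: it carries a `sub` claim identifying the authenticated user and an `aud` claim naming the client, which a plain OAuth access token need not. A minimal sketch (the example token and claim values are invented, and signature verification is deliberately omitted here, though real code must verify it):

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def id_token_claims(id_token: str) -> dict:
    """Decode (NOT verify) the payload of an OIDC id_token.
    Real code must verify the signature and the aud/iss/exp claims."""
    header, payload, _signature = id_token.split(".")
    return json.loads(b64url_decode(payload))

# A made-up example token: `sub` identifies the authenticated user,
# `aud` names the client the assertion was issued to.
claims = {"iss": "https://accounts.example", "sub": "user-42", "aud": "my-app"}
example = ".".join([
    b64url_encode(b'{"alg":"none"}'),
    b64url_encode(json.dumps(claims).encode()),
    "",  # unsigned, for illustration only
])
```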

------
chatmasta
One big problem with OAuth on mobile apps is this scenario. I've seen this in
the wild for non security-critical apps. As far as I can tell, it's not a bug
so much as it is a problem with the OAuth protocol and webview permissions:

1) MyLittleApp wants OAuth access to BankOfMars

2) MyLittleApp bundles BankOfMars SDK into MyLittleApp

3) MyLittleApp requests oauth access via SDK

4) SDK opens WebView for user to log into BankOfMars

5) MyLittleApp has full control over the DOM presented to the user since the
WebView is technically its own.

6) MyLittleApp extracts the user's password from the DOM of the WebView

7) MyLittleApp disappears and... profit?

~~~
obeattie
The flow we (Mondo) are considering for this:

1\. MyLittleApp opens web view to log into Mondo

2\. User enters something to identify themselves into the web view (eg. email
address, phone number)

3\. We dispatch a notification to the user's registered device (ie. the Mondo
app where the user is logged in – this may be the same device or a different
device)

4\. User opens the Mondo app and accepts/rejects the authorisation request

5\. User returns to MyLittleApp, OAuth flow completes

In this flow, the user is not exposing their login credentials to the app… at
worst, the app could extract their email/phone number. It also introduces
another factor into the auth flow: the user's registered device.
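The flow above can be sketched as a small state machine. This is my reading of the described design, not Mondo's actual implementation; all names are invented, and the push notification and token issuance are stubbed out:

```python
import secrets

class AuthRequests:
    """Pending authorizations: the web view only ever sees an identifier;
    approval happens in the already-authenticated native app."""

    def __init__(self):
        self._pending = {}  # request_id -> {"user": ..., "approved": ...}

    def start(self, user_identifier: str) -> str:
        """Step 2: user typed an email/phone into the web view. We record a
        pending request and (in reality) push a notification to the user's
        registered device."""
        request_id = secrets.token_urlsafe(16)
        self._pending[request_id] = {"user": user_identifier, "approved": False}
        return request_id

    def approve(self, request_id: str) -> None:
        """Step 4: the user tapped 'approve' inside the native app."""
        self._pending[request_id]["approved"] = True

    def complete(self, request_id: str) -> str:
        """Step 5: the OAuth flow may only finish after device approval."""
        req = self._pending[request_id]
        if not req["approved"]:
            raise PermissionError("not approved on registered device")
        del self._pending[request_id]
        return "authorization-code-" + secrets.token_urlsafe(8)
```

Even if the web view is fully attacker-controlled, `complete` cannot succeed without an approval that only the native app can perform.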

~~~
sjtgraham
What's to stop the app (or anyone in the context of a rooted device) replacing
the legit Mondo auth page with a shim? Nothing.

~~~
jhuckestein
What does replacing the auth page gain the attacker? At worst the user enters
their email address into a phishing page. The important thing is that there
isn't a password :)

~~~
nemothekid
> _What does replacing the auth page gain the attacker?_

If the spoof app has a "Connect with Twitter* (and you don't have the Twitter
app installed), and then a webview is opened, the spoof app can replace
Twitter's login page with their own, and capture the username and password.

~~~
obeattie
In the proposed implementation above, the _only_ piece of information that a
user enters inside the web view is a username. The user must then use the
native Mondo app on a previously-authenticated device to complete the OAuth
flow. The Mondo app could also require biometric (ie. Touch ID)
authentication.

While a malicious application can inject JavaScript to intercept the username,
this alone is useless to an attacker.

~~~
e12e
Well, the malicious application can inject a password-field, and an
unsuspecting user might not realize that (s)he is giving an app/attacker the
password, and not the correct third party.

User education only goes so far. This type of attack can also make a web view
that traditionally asks for a TOTP one-time password code susceptible to
leaking a user's password, _even if the normal login flow doesn't ask for that
password_.

[ed: note that it's pretty trivial to eg: set up hidden cameras in voting
booths, if you want to spy on a few people, or perhaps have people film
themselves in a voting booth - the point is rather that if most people make an
effort to follow the common rules wrt. voting booths, the system is reasonably
secure. And it's not trivial to make similar claims about a (presumably)
centralized on-line system.]

~~~
obeattie
On Mondo, there simply are no passwords at all. Instead, when the user wishes
to log into our first-party apps, we send a login link to their registered
email.

We'll almost certainly add additional required factors to this process (eg.
biometrics), as we see the user logging into the Mondo app on a new device as
one of the most critical from a security perspective.

~~~
e12e
I hope that's not biometrics in a potentially attacker-controlled web-view (if
such a thing is possible) - biometrics are difficult to revoke...

------
andrewstuart2
In my opinion, having worked extensively with OAuth2 (mostly in the form of
OIDC) and other modern AuthN/Z protocols, the author of this post does not
truly understand OAuth 2, nor have they looked in any appropriate depth into
supplements like OIDC or alternatives.

For one, bearer token [1] is only one type of "Access Token" described by the
OAuth2 spec [2]. In fact, the OAuth2 spec is very vague on quite a few
implementation details (such as how to obtain user info, how to validate an
Access Token), which the author seems to just assume are part of the spec, as
he does with bearer token. Other parts, like the client/user distinction, and
the recommendation for separate validation of clients, the author ignores
completely, generating his own (ironically mostly OAuth2-compliant [3]) spec.

> Shared secrets mean no non-repudiation.

Again, not true. Diffie-Hellman provides a great way to come to a shared
secret that you can be cryptographically sure (the adversary's advantage is
negligible) is shared between you and a single verifiable keyholder.

> Most importantly using JWT tokens make it basically impossible for you to
> experiment with an API using cURL.

 _sigh_. If only there was a way to write one orthogonal program that can
speak HTTP, and in a single cli command send that program's output to another
program that can understand the output. Maybe we could call it a pipe. And use
this symbol: |. If only.
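To make the pipe point concrete: minting a signed token is a few lines of stdlib code, after which plain curl works again. This is a generic HS256 sketch, not anything from the article's API; the secret and claims are invented:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_hs256_jwt(secret: bytes, claims: dict) -> str:
    """Produce a compact HS256 JWT. Usable from the shell as, e.g.:
    curl -H "Authorization: Bearer $(python mint.py)" https://api.example/me"""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

token = mint_hs256_jwt(b"shared-secret",
                       {"sub": "dev-1", "exp": int(time.time()) + 300})
```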

> OAuth 2.0 is simply a security car crash from a bank's perspective. They
> have no way to prove that an API transaction is bona fide, exposing them to
> unlimited liability.

TL;DR: This article, led by comments like this ("unlimited", really?), strikes
me as pure marketing (aimed at a naive audience) for a "spec" that probably
would not exist had proper due diligence into alternatives, or perhaps some
public discussion, occurred. At the very least, inconsistencies (a few of
which I've mentioned above) could have been avoided.

[1] [https://tools.ietf.org/html/rfc6750](https://tools.ietf.org/html/rfc6750)
[2] [https://tools.ietf.org/html/rfc6749](https://tools.ietf.org/html/rfc6749)
[3]
[https://tools.ietf.org/html/rfc6749#section-2.3.2](https://tools.ietf.org/html/rfc6749#section-2.3.2)

~~~
sjtgraham
Author here:

> For one, bearer token [1] is only one type of "Access Token" described by
> the OAuth2 spec [2]. In fact, the OAuth2 spec is very vague on quite a few
> implementation details (such as how to obtain user info, how to validate an
> Access Token), which the author seems to just assume are part of the spec,
> as he does with bearer token. Other parts, like the client/user distinction,
> and the recommendation for separate validation of clients, the author
> ignores completely, generating his own (ironically mostly OAuth2-compliant
> [3]) spec.

Last time I checked other access token types were still drafts and bearer
tokens were the only stable kind.

> Again, not true. Diffie-Hellman provides a great way to come to a shared
> secret that you can be cryptographically sure (the adversary's advantage is
> negligible) is shared between you and a single verifiable keyholder.

I as a bank cannot attribute liability for an erroneous transaction to a
developer if we both share the secret with which a signature is computed. If
I as a bank am compromised and want to cover my arse by moving the blame to a
poor external developer, I can do that with a shared secret by forging
signatures after the fact. This is precisely why I don't want shared secrets.

Even if your point is valid re DH, why push that up to the application level
and reinvent the wheel when you can get the same benefits, less intrusively by
using a battle tested protocol circa 20 years old?

> sigh. If only there was a way to write one orthogonal program that can speak
> HTTP, and in a single cli command send that program's output to another
> program that can understand the output. Maybe we could call it a pipe. And
> use this symbol: |. If only.

This is shit developer experience. Why bother with a Rube Goldberg sequence of
piped commands when you can just curl?

Finally, despite everything you say no OAuth 2.0 based protocol can guarantee
privacy. People like that privacy when it comes to their finances I find.

Sorry for any typos, I'm on the move. Thanks for your comments :)

~~~
andrewstuart2
My DH comment was a bit aside the point I probably should have made. (Also,
apologies for my sarcasm re: bash pipes -- that was unnecessary and probably
unproductive).

> Public key cryptography can be used with JWT tokens but they don't solve the
> problem of how the client will generate key pairs, demonstrate proof of
> possession of the private key, and enrol the public key with the API.

JWT is not in any way attempting to solve the problem of client identity and
authentication. Rather, it addresses the question of federated user identity
and how to validate that the identity assertion came from a trusted source
(which is where the PKI and assertion signing come in).

Furthermore it is signed _with, among other assertions, the audience
assertion_ so that you can cryptographically verify that a token was given
with the authorization of a user by a trusted service (your JWT provider, via
whatever authN methods it allows) and to a given client. This should give a
substantial enough audit trail to enable reasonable proof that an end-user
authorized a client (which itself had to authenticate to the provider) to
perform an API transaction if it can be proven that the signature was
validated and that the token issuer was clear about exactly what the user was
giving the client authorization to do.

OAuth2, OIDC, and all modern standards I'm aware of _also_ require client
validation of some form. From the OAuth2 spec:

    
    
        Confidential clients are typically issued (or establish) a set of
        client credentials used for authenticating with the authorization
        server (e.g., password, public/private key pair).
    

This implementation _is_ unspecified in OAuth2 but could (and in your case
probably should) certainly include digitally signing each API request (much
like twitter and amazon require) with a private key and validating the
signature against the client's registered public keys as well as the
constraints (especially audience, scope, and expiration) given via the token.
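A per-request signing scheme of the kind described, in the HMAC style that Amazon-type request signing uses, can be sketched as follows. This is a generic illustration with invented names; per the comment's recommendation, a non-repudiable variant would swap the HMAC for a private-key signature verified against the client's registered public key:

```python
import hashlib
import hmac

def canonical_request(method: str, path: str, body: bytes) -> bytes:
    # Bind the signature to everything that matters about the request,
    # so none of it can be altered in transit without detection.
    body_hash = hashlib.sha256(body).hexdigest()
    return f"{method}\n{path}\n{body_hash}".encode()

def sign_request(key: bytes, method: str, path: str, body: bytes) -> str:
    return hmac.new(key, canonical_request(method, path, body),
                    hashlib.sha256).hexdigest()

def verify_request(key: bytes, method: str, path: str,
                   body: bytes, signature: str) -> bool:
    expected = sign_request(key, method, path, body)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

The server would additionally check the token's audience, scope, and expiration before acting on a verified request.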

If your goal is to provide a non-repudiable audit trail of user identity and
authorization and client identity and agency (authorized by the user to
perform X) then OIDC, JWT, and client AuthN via request signing with
registered keys should be more than sufficient to avoid liability in the case
of rogue clients or shady users. As always, the audit trail is the most
important piece, along with sound crypto and standard practices that have been
audited by appropriate experts, so that your audit evidence cannot be
reasonably called into question.

~~~
uncleyo
I completely agree with everything you said and know a bit about OIDC too, but
since I'm far from mobile/3rd party apps development, there is one thing I
don't understand: client credentials (confidential client) in case of mobile
app installed from the marketplace. How is it done? You'd need dynamic client
registration, right? I know there's a spec for that and I think I understand
the mechanics. That would let you identify the client but not sure you can
ever identify app developer with it (if needed for audit purposes). Or am I
missing something, maybe?

~~~
bobbyrullo
I had the same questions, and it's very hard to find the answer - took me a
very long time to piece this together but this is how Google does it:

1) You create a "normal" client in Google Developer console (i.e. a web
client)

2) You create a native/Android client _in the same project_. This client is
shared across all phones.

3) You add a scope of audience:server:client_id:$NORMAL_CLIENT_ID to auth
requests from the mobile.

4) You get back a token minted for the web client, from the native client!

This is how it works:

[https://developers.google.com/identity/protocols/CrossClient...](https://developers.google.com/identity/protocols/CrossClientAuth#androidIdTokens)

The reason it is safe is because you can only do the cross-client stuff from a
mobile client, which disallows any redirect URLs except for localhost and a
couple of other special URIs (see
[https://developers.google.com/identity/protocols/OAuth2Insta...](https://developers.google.com/identity/protocols/OAuth2InstalledApp#formingtheurl))

It's OK that the secret is not really secret, because it's not possible to use
it to make a phishing site since the redirect URL is localhost.

I guess that doesn't answer your "how does it identify the app developer"
question, but it does tell you how these things are deployed, at least, and
the important fact that there's just one client (not one for every device).

~~~
uncleyo
I understand that. The problem is that I can "steal" another dev's app
client_id and use it in my app. So it seems impossible to use such a client_id
for auditing/evidence. With a web client I cannot do that, since I don't own
the domain, so I can be proven to be a party in some transaction.

~~~
blazespin
They should allow for push notifications. That'd be more secure

At the end of the day though, everyone has to sign their apps with certs that
are pretty well validated. So, it really cuts down on funny business like you
mention.

------
amluto
Some features that I think a system like this should have:

1\. The client (or the device holding the authentication token, or the app,
etc) should be able to maintain (on its own storage!) an audit log of all
transactions it has authorized, that log should be cryptographically
verifiable to be append-only (think blockchain but without all the Bitcoin
connotations), and the server should store audit log hashes _and verify that
they were only ever appended to_. And the server should send a non-repudiable
confirmation of this back to the client.

Why? If someone compromises the bank or the bank-issued credentials (it seems
quite likely that, in at least one implementation, the bank will know the
client private keys), the client should be able to give strong evidence that
they did _not_ initiate a given transaction by showing (a) their audit log
that does not contain that transaction and (b) the server's signature on that
audit log.

2\. Direct support for non-repudiable signatures on the transactions
themselves. Unless I'm misunderstanding what the client certs are doing in
this protocol, TAuth seems to give non-repudiation on the session setup but
not on the transactions themselves. Did I read it wrong?

3\. An obvious place where an HSM fits in.

How does TAuth stack up here?

Also, there's a very strange statement on the website:

> to unimpeachably attribute a request to a given developer. In cryptography
> this is known as non-repudiation.

Is that actually correct as written or did you mean "to a given user"?

~~~
wslh
> an audit log of all transactions it has authorized, that log should be
> cryptographically verifiable to be append-only (think blockchain but without
> all the Bitcoin connotations)

A blockchain related technology is overkill, you just need forward integrity:
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111...](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111.6973&rep=rep1&type=pdf)

~~~
icebraining
"The URL does not match any resource in our repository."

------
jhuckestein
As much as I love Stevie, teller.io and this demo: Why not both?

OAuth 2 is not "bad" in general, you just need to consider the implications of
using it. If you have an API that allows clients to move customers' money or
take out loans, you should take additional steps to defend against MITM
attacks. For example using client side certificates :)

That said, TAuth looks really good and tidy. Of course the developer may still
lose the private key, so in the end you'll always need to additionally monitor
API requests for suspicious behaviour.

~~~
sjtgraham
Hey Jonas! TAuth is simpler than OAuth 2.0 and doesn't suffer the same
security issues. So… why use OAuth?

~~~
jhuckestein
The devil you know I suppose ;)

IIRC we didn't go too far down the client cert route because we're behind
CloudFlare and we like it that way. Something to revisit in the future.

------
btilly
The main complaint about OAuth 2.0 seems to be that bearer tokens are a bad
idea. Well, you can implement OAuth 2.0 to use any kind of token you want,
with any property you want. People do bearer tokens because it is easy, not
because it is required.

The secondary complaint seems to be that OAuth 2.0 is a mess. That one I
heartily agree with! A few years ago I wound up having to figure out OAuth 2.0
and wrote [http://search.cpan.org/~tilly/LWP-Authen-
OAuth2-0.07/lib/LWP...](http://search.cpan.org/~tilly/LWP-Authen-
OAuth2-0.07/lib/LWP/Authen/OAuth2/Overview.pod) as the explanation that I wish
I had to start. In the process I figured out why most of the complexity
exists, and whose interests the specification serves.

The key point is this: _OAuth 2 makes it easy for large service providers to
write many APIs that users can securely authorize third party consumers to use
on their behalf. Everything good (and bad!) about the specification comes from
this fact._

In other words, it serves the need of service providers like Google and
Facebook. API consumers use it because we want to access those APIs. And not
because it is a good protocol for us. (It most emphatically is a mess for us!)

------
ForHackernews
> One of the biggest problems with OAuth 2.0 is that it delegates all security
> concerns to TLS but only the client authenticates the server (via it's SSL
> certificate), the server does not authenticate the client. This means the
> server has no way of knowing who is actually sending the request.

That's just plain not true. In the OAuth2 authorization_code grant, a
"confidential" client is REQUIRED to send a client_id and client_secret to
authenticate itself to the server.

[https://tools.ietf.org/html/rfc6749#section-4.1.3](https://tools.ietf.org/html/rfc6749#section-4.1.3)

> If the client type is confidential or the client was issued client
> credentials (or assigned other authentication requirements), the client MUST
> authenticate with the authorization server as described in Section 3.2.1.

Now, this doesn't work for "public" clients like a pure-javascript webapp, but
that's a separate question.

Count me as pretty dubious of letting some unknown group try to re-implement
bank authentication without fully understanding the specification they're
trying to fix.

~~~
sjtgraham
> That's not just plain not true. In the OAuth2 authorization_code grant, a
> "confidential" client is REQUIRED to send a client_id and client_secret to
> authenticate itself to the server.

All secrets go over the wire, which is protected with TLS. Ultimately the
security is delegated to TLS. You're simply wrong here.

> Count me as pretty dubious of letting some unknown group try to re-implement
> bank authentication without fully understanding the specification they're
> trying to fix.

Your misunderstanding is also indicative that OAuth 2.0 is too complicated.

~~~
ForHackernews
You're correct that OAuth2 ultimately delegates all security to TLS--if that
concerns you, you're better off using OAuth1a that has its own
signing/verification protocol.

The statement in the OP that:

> the server does not authenticate the client. This means the server has no
> way of knowing who is actually sending the request.

is incorrect as written. [In the case of a confidential client] The server
_does_ authenticate the client, and it _does_ know who is making the request.

If you're going to claim that TLS-protected authentication somehow counts as
"does not authenticate the client" then I guess you'll agree that Gmail "does
not authenticate" my IMAP client when it makes a TLS-secured connection and
sends my 'app password' over the wire.

------
krooj
Their description of the MITM attack is entirely dependent upon how the
authorization server validates redirects in the implicit and authorization
code grant flows. This is tied to how client registration is performed. So, if
you want to ensure that the authorization code or access token is only
delivered to a redirect URI that is trusted, that should be part of the policy
enforced in your infrastructure... More specifically, you can require domain
verification and validation as part of the client registration process, and I
would expect that at a minimum when dealing with delegated access to
financials.

Another alternative to this would be to perform an OOB flow, wherein the
redirect URI is actually hosted on the authorization sever itself and the
client can scrape the access token from the Location header.
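Scraping the token from the Location header in such an OOB-style flow comes down to parsing the URL fragment, since the implicit grant delivers the token after the `#`. A minimal sketch with an invented example URL:

```python
from urllib.parse import urlparse, parse_qs

def token_from_location_header(location: str) -> str:
    """In the implicit grant the access token arrives in the URL *fragment*
    (after '#'), not the query string, so it is never transmitted to the
    server the redirect URI points at; only the client can read it."""
    fragment = urlparse(location).fragment
    return parse_qs(fragment)["access_token"][0]
```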

~~~
sjtgraham
These are two separate things: MITM and open redirect. A MITM attack is not on
the auth code, but on the bearer token.

~~~
krooj
By design, OAuth2 doesn't allow for open redirects: this is just part of how
clients are registered. What I'm getting at is not strongly validating the
registered redirects on a sensitive client, which can lead to leakage of the
access token in the implicit flow and the authorization code grant flow. Once
you perform that intercept, the token may be presented by a malicious third
party until it expires or is revoked.

------
thallium205
This is unnecessary. Many banks can and will enforce 2-factor authentication
with their OAuth flow, which sufficiently validates the client and would
prevent a MITM attack.

Your whole premise rests on the threat that a client browser would not
properly validate a server certificate... come on... really?

~~~
peterwwillis
You do know about phishing, right? There are many ways to get a user (not
client) to accept an invalid cert, and some cases where a client will accept
an invalid cert.

They want _cryptographic proof of client identity_. That means somehow the
client has to prove they are the real user and not an attacker who intercepted
the connection somehow (which, again, is completely possible). Client certs
are a way to verify with each message that the user themselves, using their
private key, validate what's going on, and that the message they validated
came from the real server and not a fake intermediary.

This is different from 2fa because 2fa is authentication of identity that only
happens once and _does not provide cryptographic proof of identity_. TOTP will
give you something closer, but it's still a "dumb token" that can be
intercepted.

tl;dr

2fa:

    
    
      Client request 1: "Gimme $5."
      Bank reply 1:     "Who are you?"
      <man-in-the-middle starts listening>
      Client request 2: "StrawberryNewtonManicDresser"
      Bank reply 2:     "Okay, you can now use session ID 1234 to request more money."
      
      MITM request 1:   "Gimme $100000."
      Bank reply 1:     "Who are you?"
      MITM request 2:   "Session id 1234."
      Bank reply 2:     "Okay, here's your money."
    

client certs:

    
    
      Client request 1: "Gimme $5."
      Bank reply 1:     "Who are you?"
      <man-in-the-middle starts listening>
      Client request 2: <'Gimme my money.' ^ PRIVATE_KEY>
      Bank reply 2:     <verifies CR2 against stored client cert>
      Bank reply 2:     "Okay, you can now use session ID 1234, starting at iteration 2, to request more money."
      
      MITM request 1:   "Gimme $100000."
      Bank reply 1:     "Who are you?"
      MITM request 2:   "Session id 1234, iteration 2."
      Bank reply 2:     <checks MITMR2 against stored client cert, is not valid because iteration 2 wasn't signed with the client private key>
      Bank reply 2:     "You're a faker, get lost."
    

....at least, I think that's how it works, iirc. The messages are re-signed so
a stolen session token doesn't allow replay by an intermediary (the same sort
of protection modern TLS has, but for the server's protection, not the
client's)
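The client-cert dialogue above can be sketched as a per-message signature over the session ID and an iteration counter, made with a key the MITM never sees. (A real deployment would use the client certificate's private key; HMAC stands in here purely to keep the sketch stdlib-only, and all names are invented.)

```python
import hashlib
import hmac

def sign_step(client_key: bytes, session_id: str,
              iteration: int, command: str) -> str:
    msg = f"{session_id}|{iteration}|{command}".encode()
    return hmac.new(client_key, msg, hashlib.sha256).hexdigest()

class BankSession:
    def __init__(self, client_key: bytes, session_id: str):
        self._key = client_key        # the bank's verification key
        self._session_id = session_id
        self._next_iteration = 2      # "starting at iteration 2"

    def handle(self, iteration: int, command: str, signature: str) -> str:
        expected = sign_step(self._key, self._session_id, iteration, command)
        if iteration != self._next_iteration or \
                not hmac.compare_digest(expected, signature):
            return "You're a faker, get lost."
        self._next_iteration += 1
        return "Okay, here's your money."
```

A MITM who steals the session ID still cannot produce a valid signature for the next iteration, so replaying or injecting commands fails.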

It should be noted that carders, who normally get their bank credentials from
malware on a user's device, can already inject commands into _active valid
sessions started by the user_, so verifying the user's identity is completely
pointless in this case.

~~~
pweissbrod
That seems right to me. It's the same approach as SSH key-based auth in a
nutshell

------
JDDunn9
I visited the homepage ([https://www.teller.io/](https://www.teller.io/)) and
got a warning about the SSL cert being invalid. Kind of ironic. :)

~~~
levemi
> I visited the homepage ([https://www.teller.io/](https://www.teller.io/))
> and got a warning about the SSL cert being invalid. Kind of ironic. :)

The correct URL is [https://teller.io](https://teller.io) and then you won't
get an SSL cert warning. Not everyone uses "www". Nowhere on teller.io do you
see a link to www. You put garbage in and got garbage out.

~~~
amgreg
Many if not most people input the www subdomain by rote. Unless Teller does
not care about that category of people, it should probably fix the issue.

~~~
mderazon
Of course they should. Redirect traffic from www.teller.io to teller.io

~~~
anaptdemise
Or, correctly, redirect teller.io to www.teller.io. Previous discussion:
[https://news.ycombinator.com/item?id=11004396](https://news.ycombinator.com/item?id=11004396)

------
bgidley
This is unlikely to work - developers in general can't cope with managing SSL
certificates. They won't know what to do with them or handle them securely.

You need full integrity verification, with a secure store and whitebox crypto
keys to make such a scheme secure.

~~~
gyre007
I gathered the target group are developers. Devs should be capable of dealing
with this if they want higher security.

~~~
bgidley
Even devs can't cope. Most apps leak credentials severely. You need
integrity verification, obfuscation and whitebox crypto to do this sort of
thing securely.
thing securely.

All of that is available in the banking world and is often deployed by people
like Irdeto (who I work for) and Arxan etc.

~~~
gyre007
Is that why irdeto.com does not use SSL on their site? Because you're not
willing to manage SSL certificates?

~~~
jessaustin
Wow it doesn't even redirect 443 it just hangs...

------
pkulak
Problem one exists because, apparently, MITM is a problem with TLS because
it's possible for bogus certificates to get through? Well... I guess. But then
that's a TLS problem. And your entire banking website is served through TLS.
So, if it really is an issue, then solving it just for auth is like putting an
Abus padlock on a screen door.

Problem two bemoans the bearer token in Oauth 2. Yes, it's not as secure as
OAuth 1, but it's also far simpler. But you don't have to use bearer tokens;
you are free to use MAC tokens instead. Why reinvent the wheel?

------
wyattjoh
I think my biggest gripe here is that, as far as I understand this flow, it
essentially expects a certificate that is generated and signed by a third
party (Teller in this case) to be bundled with the application. Isn't it
possible to extract the certificate from the app bundle after the fact? Or am
I missing something here...

------
zaroth
The premise for adding client certificates is a MITM made possible because
careless app developers will disable server certificate validation.

So, how exactly does adding a client certificate solve that problem? If server
certificate validation is disabled on the client, the MITM can still accept
the client certificate and substitute their own.

The difference is that in this case the attacker will gain access to the API
but the client will not, unless they are being actively MITM. If the client
tries to access the API outside the MITM their client cert will be rejected as
invalid.

~~~
sjtgraham
Actually no. The certificate must be signed by Teller (or it's rejected) and
associated with the same application as the auth token.

~~~
zaroth
Right, so what stops an attacker from getting a client certificate signed by
Teller?

I guess I missed something about how the client certificate is being
provisioned. I see the video showing a client certificate being downloaded
onto a desktop, but that's obviously not the intended UX for actual end-
users...?

~~~
zaroth
So I realize now at this stage you are focused on server-to-server only, in
which case there's no issue with trying to deploy individual client certs to
end-user devices.

Pulling a certificate via the browser is not great assuming we want a highly
controlled chain of custody over the private key bytes and that these certs
will expire and need to be regularly rotated. But it's not much work to build
some command line tool to send a CSR off for signing, that seems reasonable
for server-to-server authentication.

I wonder if you'll run into issues with various languages' HTTPS libraries not
properly supporting client certificates.

It's nice to think this could all just work with the lower layer taking care
of everything, but I also wonder with the shitshow that is TLS if you can even
be sure the client cert validation code can really be trusted as much as an
application-layer check.

------
arnarbi
Client certs are still a bit of a pain. There is already an IETF spec in the
works, called Token Binding, on how to bind tokens to key pairs that clients
maintain, and create on demand.

[https://github.com/TokenBinding/Internet-
Drafts](https://github.com/TokenBinding/Internet-Drafts)

[http://www.browserauth.net/home](http://www.browserauth.net/home)

It's already implemented in Chrome.

------
0x0
I thought client certificates were being phased out, didn't Chrome just remove
the <keygen> html tag?

~~~
sjtgraham
A PKCS#10 request is built using PKI.js; all the crypto is done by native
WebCrypto APIs.

------
guelo
> The EU is forcing all European banks to expose account APIs

I'm so jealous!

------
EGreg
Actually oAuth 1.0 is less secure than oAuth 2.0 because it engages in
security theater. It doesn't even require https and as a result any man in the
middle can eavesdrop on the requests. And if the token is leaked, it's game
over.

~~~
cakoose
oAuth 1.0 supports both PLAINTEXT and HMAC-based signature schemes. I assume
the article is assuming HMAC-based signatures (the PLAINTEXT option seems to
be less well-known). With an HMAC-based signature, the token will not be
leaked.

But you're correct that eavesdropping is possible.

~~~
EGreg
Right, the HMAC-based approach is what is recommended over the bearer-token
approach. But you still leak everything else in the request.

Actually, the same logic should be done for cookies. You COULD replace cookies
(which are bearer tokens) with signing every request to the server, but then
you're just avoiding the REAL solution: https!
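The HMAC approach being discussed looks roughly like this — a simplified sketch in Python (real OAuth 1.0a parameter normalization and percent-encoding are stricter; the URL and parameter values are made up):

```python
import hashlib
import hmac

def sign_request(method, url, params, secret):
    """Simplified OAuth 1.0-style signing: the secret never travels with
    the request; the server recomputes the HMAC over the same base string
    and compares. Only the signature is sent."""
    base = "&".join([
        method.upper(),
        url,
        "&".join("%s=%s" % kv for kv in sorted(params.items())),
    ])
    return hmac.new(secret.encode(), base.encode(), hashlib.sha256).hexdigest()

sig = sign_request("GET", "https://api.example.com/accounts",
                   {"oauth_nonce": "abc123", "oauth_timestamp": "1461628800"},
                   "s3cret")
```

This is exactly the trade-off described above: an eavesdropper on plain HTTP can't recover the secret from the signature, but everything else in the request is still readable.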

Actually the biggest security theater I have seen on the web is httponly
cookies to "mitigate XSS". As if the main thing an attacker will do once they
inject JS is to send your cookies somewhere. They can just execute anything in
the context of your session while they still have it! So by being security
theater, httponly cookies are worse than useless. The right way is to prevent
XSS by escaping everything properly.

------
yodasan
So, it seems like the main concern here is that a client will not validate the
SSL certificate, so the SSL layer is now manually added into javascript code
using the WebCrypto API to prevent this? I see not validating SSL certificates
being a potential problem with something like a REST API, but is it common to
disable SSL verification at the browser level where you would need to use
javascript to do this?

------
educar
One of the things about OAuth is that the user needs to check the website URL
where he is giving his credentials. Amusingly, many mobile apps seem to
forget this important bit. They redirect me to a web UI _inside_ the app itself
and expect me to enter my password inside the app. I guess they thought this
was a better user experience than handing over control to the browser :/

------
cakoose
Two things: 1\. Why not just add client-side certificates to an OAuth-based
API? 2\. Client certificates do not prevent an attacker from pretending to be
the server.

Let's say your API server followed the standard OAuth 2.0 protocol but also
required client-side certificates. Would that be as secure as TAuth?

If so, then the OAuth 2.0 option has the advantage of being well-supported by
existing libraries and well-understood from a security perspective. It's less
likely that a previously-unknown issue with OAuth 2.0 will crop up and force
everyone to scramble for a fix.

And while client certificates prevent an attacker from forging client requests
(i.e. tricking the API server), an attacker can still trick the client. An
attacker capable of MITM'ing server-cert-only HTTPS can also trick TAuth
clients into sending their banking API requests to the attacker's servers. It
can respond to those requests with whatever it wants.

To summon the activation energy to adopt (or switch to) a new, less-popular
protocol, I'd expect more security benefits.

------
deathanatos
> _Most importantly using JWT tokens make it basically impossible for you to
> experiment with an API using cURL. A major impediment to developer
> experience._

Why can't a developer do exactly what you did in your second video, which is
to save the JWT to a variable, and then use it in the request?

Heck, you could create a quick wrapper "jwt_curl"/"jwt_http" or something that
automatically pulled in that variable…
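A minimal sketch of such a wrapper in Python (the `JWT` environment variable, the endpoint URL, and the token value are all hypothetical):

```python
import os
import urllib.request

def jwt_request(url, token=None):
    """Build a request carrying the JWT, the way a hypothetical `jwt_curl`
    wrapper would: the token comes from an argument or the JWT env var."""
    token = token or os.environ["JWT"]
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + token)
    return req

req = jwt_request("https://api.example.com/accounts", token="eyJhbGciOi...")
```

With the token exported once as an environment variable, experimenting from the shell is no harder than with an opaque API key.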

There are two big things about this scheme that leave me confused: how do you
know what the correct certificate for the client is? Do you just send it over
HTTPS? But then, one of your opening premises is that we don't get TLS
verification correct and are open to MitM, so this seems to contradict that,
or are we hoping that "that one request won't be MitM'd", like in HSTS? (which
seems fine)

------
iffycan
How does this compare with SimpleFIN:
[https://bridge.simplefin.org/info/developer](https://bridge.simplefin.org/info/developer)

SimpleFIN seems simple and still secure. But maybe I'm missing something?

------
hobarrera
It's still a bit unclear to me how a client generates his certificate and
somehow links it to his bank account. The demo shows a web UI generating it,
but would a mobile user have to visit the website to fetch a certificate?

------
makecheck
By logging into a 3rd-party site using Google+, for instance, you remain
logged-in to Google when you go to _any_ other web site.

And the authenticator clearly does not require this global behavior: if you
immediately log out from a Google page, you remain “logged in” at the 3rd-
party site that you started from. So why doesn’t it log you out globally?
Probably to convenience Google, at the expense of security when you auto-
identify yourself to who knows how many other web sites before you realize
what happened.

Logging into one page with one set of permissions should mean “LOG INTO THIS
PAGE”, not “REVEAL MY SECRETS TO THE INTERNET”.

------
e12e
Let me see if I understand this correctly:

1) Problem: app authors disable TLS (server) cert validation.

2) Solution: give each app author the responsibility of managing and
distributing a client side certificate.

Sounds like now you have two problems? In particular, you now have to make
sure that every lost/compromised certificate is added to your growing CRL? And
you need app developers that demonstrably do not even have the vaguest idea
how public key cryptography can be used for authentication to take
responsibility for doing this? And there's still no guarantee that they won't
disable certificate verification?

Did I miss anything?

------
EGreg
At Qbix we developed a much more secure way than oAuth to _instantly_
personalize a person's session -- and even connect the user to all their
friends -- while maintaining privacy of the user and preventing tracking
across domains by everyone except those they choose to tell "I am X on Y
network" ... it also restores the social graph automatically on every network
and tells you when your friends joined one of your networks.

------
eemph
>The EU is forcing all European banks to expose account APIs with PSD II by
end of 2017.

Any reference for this? The text of PSD II is here — [http://eur-
lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:320...](http://eur-
lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32015L2366) — but it's too long
and it isn't clear to me whether it is actually ratified.

------
seanwilson
What bothers me about OAuth is the way you're on one website and are then
asked with a pop-up to enter your Gmail or Facebook etc. password as a normal
part of the flow. Users aren't savvy enough to check the URL or understand
what's going on here so getting them used to this flow is asking for phishing
by the look of it. Something that forced two factor authentication would be
good.

------
smarx007
It's pretty strange to see a new authentication protocol (they describe it as
an authorization protocol, but they do authentication as well) just as W3C's
WebID-TLS is being finalised. Oh, did I mention it uses client X.509
certificates as well? And how does the author imagine that banks would rely on
his new protocol to ensure non-repudiation?

------
0xdeadbeefbabe
> The most realistic threat is the client developer not properly verifying the
> server certificate, i.e. was it ultimately signed by a trusted certificate
> authority?

From an attackers point of view, this sounds like a very tiny ray of hope. It
sounds like a cool feature/vulnerability that will probably be going away soon
because it is so easy to fix.

~~~
riskable
The problem with the "was it signed by a trusted authority?" concept is that
you generally can't automate the 3rd party since they're not under your
control. Also, they typically charge every time you request a new certificate
(even if client-only).

The solution to that is to run your own CA but then it won't be 3rd party
anymore. It's sort of the catch-22 with SSL/TLS: Either you use a 3rd party
_or_ you get to automate things. There doesn't appear to be any middle ground.

Why is there no middle ground? Because if the 3rd party CA is doing their job
they're investigating _every single request for a new certificate_. That means
you can't just get a new client-side certificate on demand, instantaneously
whenever the need arises.

~~~
0xdeadbeefbabe
"client developer not properly verifying the server certificate" makes it
sound possible, but I think I understand the problem now maybe.

The 3rd party CA may have issued a cert to a malicious party that issued
another cert to their man in the middle.

You can't be sure unless you are your own CA, but then you aren't a 3rd party
anymore.

------
defiancedigital
I didn't see anything about renegotiation. If clients present their
certificates during the first handshake, it will lead to security concerns:
attackers could observe clients' certificates (extract metadata, de-anonymize
clients, ...). If renegotiation is used, it will drastically reduce the "Bonus
DDOS mitigation".

------
baoha
tl;dr: it forces the client to have a certificate that the server can verify.

This is kind of a pet peeve. Anyone who ignores or wants to disable server
certificate verification has to understand the risk.

------
jeremiahlee
How is this better than Hawk and Oz by OAuth's creator, Eran Hammer? TAuth
seems to solve fewer problems, as it cannot be used by public clients.

------
gyre007
It's kinda crazy that it has taken so long for someone to actually take an
initiative and attempt to make the authentication more secure.

I wonder if this is a custom-built solution or if Teller.io is using something
like HashiCorp's Vault to do the whole SSL cert dance.

Either way, this looks promising.

~~~
mgkimsal
> It's kinda crazy that it has taken so long for someone to actually take an
> initiative and attempt to make the authentication more secure.

Not when you consider we've all been subjected to decades of "don't write your
own security!!!"

~~~
sjtgraham
Author here. This is not a new invention. This is standard TLS, and some newer
things like WebCrypto brought together.

------
chrisallick
Has this been tested against a broad user base? It seems rather involved.

------
imaginenore
Relying on SMS for bank security has always seemed crazy to me. It's not
secure. Didn't the Telegram creator just get hacked by a Russian mobile
provider that sent an SMS to itself?

~~~
Freak_NL
A lot of banks do use proper hardware tokens (TOTP and similar) for all
transactions though.

I am under the impression that we are now in a phase where security needs to
be stepped up, but in the meantime tokens sent via SMS are considered 'good
enough'. There are lots of initiatives for the next step, each providing
proper two-factor authentication, but a lot of services are waiting it out
because the hardware tokens or smartcards you need for each user cost money,
and if you adopt one of the current solutions such as TOTP tokens, users would
need such a device for each service they use (again, for banks this is already
accepted; at least in the Netherlands).

Ideally, a standard such as FIDO U2F gains ground, so users can safely and
conveniently reuse a single hardware token for any service supporting that
standard. Who knows, perhaps having your 'internet key' on you can become as
commonly accepted as having your house keys on you.

Also, relying on SMS means all these services have a single unique number to
identify you with across services. I dislike the privacy implications this
entails, and prefer to keep (some of) my on-line identities neatly quarantined
from the rest. FIDO U2F addresses this problem: even if you use the same
hardware key for every service, the services cannot be linked.

~~~
tadfisher
> Ideally, a standard such as FIDO U2F gains ground, so users can safely and
> conveniently reuse a single hardware token for any service supporting that
> standard. Who knows, perhaps having your 'internet key' on you can become as
> commonly accepted as having your house keys on you.

Unfortunately, most FIDO U2F services allow SMS as a fallback authentication
method, including Google and Github. At least Github has some strong warnings
about it.

~~~
Freak_NL
That is to be expected at this time though, and for a service like Github
letting the user choose their level of authentication strength is fine — you
are mostly responsible for what data you store there yourself. In the mean
time it will help the adoption of this standard to at least have the option
available.

If a service is actually guarding private data by definition (like a bank or
an insurance agency) then phasing out SMS in favour of FIDO U2F or another
true hardware factor is a much more likely scenario.

------
poorman
Must be British... "authorisation" ?

~~~
takno
It uses a UK phone number and supports an EU banking initiative, so I would
guess so. Why is that relevant?

~~~
camhart
authorisation is spelled authorization, at least in the US

~~~
maknz
It's spelled authorisation everywhere else.

------
robinduckett
[https://xkcd.com/927/](https://xkcd.com/927/)

~~~
emodendroket
Can someone just blacklist every post that is nothing but a link to this
comic.

~~~
robinduckett
Can someone just blacklist every post that is nothing but a new standard
trying to add to the list of crappy pre-existing standards?

Also, you forgot the question mark on the end of your sentence there. Unless
you meant a sarcasm mark or an interrobang and the comment parser stripped it?

