I am working on the problem (and by that I mean "it's in private beta"). Mobile is sort of a unique opportunity because a typical mobile app doesn't have the same interoperability requirements as the web more broadly.
However, it's hard to sell, which explains why nobody does it. First, it's hard to sell commercially, which you really need if you want it audited by people who know what they're doing. But it's hard even to give it away, because nobody wants to be the test pilot when it comes to security.
Solving this requires a level of cooperation that we may not have as an industry right now. That may mean dark days ahead.
Email in profile, if you want to chat about it.
It's not a controversial opinion at all... it's a well-known fact. Anything that relies on you trusting corporations that are known to comply with their governments' gag orders is broken from the get-go.
Still, TLS is a great tool if you're not an enemy of the NSA. Security is a game of expectations, remember.
Imagine a file storage service accessible via a desktop client, a mobile client, an API, and the web. You can't get around using publicly trusted CAs for the API and the web, but the desktop and mobile clients could trust only your own CA, with no loss of functionality.
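For a client you ship yourself, that means pinning trust to your private CA instead of the system store. A minimal sketch in Python, using only the stdlib `ssl` module (the function name and parameter are mine, for illustration, not from any particular product):

```python
import ssl


def make_pinned_context(ca_pem=None):
    """Build a TLS client context that trusts ONLY our private CA.

    Unlike ssl.create_default_context(), this never loads the
    system's public CA bundle, so a compromised or coerced public
    CA cannot mint a certificate this client will accept.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # PROTOCOL_TLS_CLIENT already enables hostname checking and
    # CERT_REQUIRED; restated here to make the intent explicit.
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_pem is not None:
        # The only root the client will ever chain to.
        ctx.load_verify_locations(cadata=ca_pem)
    return ctx
```

Usage: wrap the socket to your own API endpoint with this context; a handshake presenting any chain not rooted in your CA fails verification, no matter how many public CAs would have vouched for it.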
I think the major stumbling block is that the best practices for running a CA aren't well known or broadly disseminated or easy to follow. If you're developing a system that makes it easy to securely manage a signing key, and otherwise do the right things, that would be sweet. Not sure that there's monetization potential there though.
Seems like a pretty sensible opinion to me. The entire CA system is full of flaws that can be exploited by anyone with enough money, power or influence.
I've done this for sharing information between close friends, and the worst part is how badly browsers react to self-signed certificates. The problem is really getting worse: browsers are essentially telling you whom to trust to vouch for the identity of others, and that trust goes to faceless, essentially state-controlled entities.
The only thing holding them back (even now) is probably the risk of exposure. They don't do this except against very high-value targets, at the risk of burning the CA they control.
Your points are really valid for the former, but CAs, as big businesses and government entities, seem well placed to counter phishing.
(There is an idea in SafeCurves of "rigidity", which is roughly the extent to which we can be sure that the curve parameters weren't chosen to somehow advantage their designers; the curves mentioned in this post are not particularly rigid. So if you believe the NIST curves are all backdoored, then yes, you can't trust these curves either.)
The focus is on keeping correct implementations simple, and on avoiding parameters that could have been manipulated.
However, on balance I do not think the NIST curves secp256r1 and secp384r1 are backdoored, and I've tried hard to look into it. The incredible opacity surrounding their provenance appears to come down to Certicom IPR (at the time), although the designers did work with Jerry Solinas at the NSA, and I don't feel comfortable ruling foul play out completely. The NSA certainly does use them internally, but I'm not sure what conclusions we can really draw from that.