
Err, no it is not safe unless you trust the app you are running to validate the certificate chain. Not so long ago, I found out my bank's app didn't validate the cert and I could happily put a proxy and intercept all calls.



If your app does that then it's not particularly safe over an encrypted wifi either.

The solution is to fix that app, not to rely on a very weak defense that may help you in a fraction of the possible attack scenarios.


Good luck fixing your bank app. For a consumer it’s a lot easier to avoid situations where your communication is easily intercepted than it is to change the code of their banking apps.


In that case the best option might be to use the bank’s mobile website if it exists. In any case, the point is that this is a problem with the application, not the network, and a more trustworthy network doesn’t resolve the issue.


That’s an interesting point. As an app developer, I’d assumed that would be handled automatically by the OS.

What’s the best way to test for certificate validity? (In my case I’m interested in iOS, but the same concern must exist on all platforms).


In my experience, the OS _does_ handle that automatically. If the app isn't verifying it, it's because they went out of their way to disable certificate validation.

Which is alarming.


What are the odds that the corporate network the developers are on does MITM HTTPS interception, and the only way they could get their app to work was to remove certificate validation?


Been there done that. Corporate IT often doesn't want to acknowledge that devs exist in the company because it's so much easier to just lock down the admin and marketing use cases.

It's fucking scary how far they're willing to compromise security, internally and externally, to avoid extra work and maintain control.


It's amazing how well such companies can repulse developers:

- Everyone works on an 8GB Windows machine with a sticky keyboard

- Remote desktop

- "You wanna install your IDE? Yeah, contact IT, gonna take a few days"

- Can't install any CLI tools

- Atlassian suite

- Spend half a day in meetings

- Scrum

Then they complain how hard it is to get good developers...


am I the only developer(ish) who likes JIRA?


Probably. I’ve never seen one that’s usable. Also, Bitbucket is quite terrible compared to GitHub.


It has some friction in spots, but I like it a lot. A little configuration made it fit how we work. It does the job.

However, I hear other companies like the configurability too...and their crappy processes were so easily configured that it became a crappy tool for their poor developers.


Not at all. In my opinion, it sucks, but it's the best tool available for the job.


The pros and cons of Jira all depend on how you use it


I think we found a yeti!


MITM is pretty difficult if your app is validating certificates, which is why a lot of corps will install their in-house CA on your company-issued devices so they can do this. If your software is using certificate pinning, they can't even do that.


Very slim, as you can still verify that the certificate chains up to a trusted root certificate, and it’s trivial (and generally part of the enrollment process) to load the company’s root CA onto your device.

We MITM and certificate validation works correctly.


Except that there should be validation even at the Root CA level, and most corporate MITM CAs don't pass those checks either:

- Is your root self-signed only? (Root CAs haven't been allowed to be self-signed only since roughly 2007 according to the principles of most browser root CA policies for public Roots. All public roots today are cross-signed among each other.)

- Does your root certificate have a valid revocation chain? Can you query up-to-date revocation information on it? (Modern Roots all have to have working revocation information, and Root CAs have been revoked in internet history, you cannot blindly trust your device's Root CA store over time without up to date revocation lists.)

Those are just two warnings I see most often from my dev tools on the MITM infrastructure I'm forced to deal with. I know that this is compromising my security stance as a developer, and I know that turning off/ignoring those trade-offs is a risk I directly pass on to users of anything I build. I've felt it a responsibility of professional ethics to pass on this concern to others in my company. I have debated many times whether, if the right Root CA CVE or self-signed-certificate CVE comes across my dash, I will have to exercise the company's "Stop Work Authority" and refuse to continue development while being MITMed in a way that the company's security/safety infrastructure will not understand how to handle. It remains on my radar because I'm a professional and worrying about such things is my job.

Running a Root CA is a huge responsibility, and still carries a ton of risks for the "real" Root CAs. (Just look at the recent battle between browser security teams and Symantec, for instance, over generating bad certificates.) Running a corporate MITM carries all the same responsibility, with an even worse risk if you get it wrong (your entire company's device footprint becomes a single point of failure). It's such an incredible vulnerability/risk that whatever tiny gain it gives companies in surveillance, over SNI sniffing and endpoint/device-deployed auditing tools, is never worth subjecting so many developers to badly MITMed development environments, especially the developers most at risk (bank software, health software, etc.) of passing on the software equivalent of a bad MITM plague should the worst happen. I cannot imagine the blasé attitude with which Corporate America has MITMed itself can be seen as anything but an incredible folly, if not today then certainly tomorrow (hopefully not after the worst happens).


As far as I understand, this is no longer possible on modern iOS versions at least, except if the app developers explicitly disable that validation.


You can pin your certificate in your app bundle such that your app only allows certificates you specify or ones signed by CA's you specify. That clearly isn't the case here. Normal iOS operation will verify the certificate chains up to a trusted root certificate and it is indeed possible to load your own trusted root CAs on to a device for purposes of MITM. Again, some apps may pin their own certificates, but that clearly wasn't happening in this example.

I deal with this virtually every day.


It's not the same as pinning though. The device trusts that a cert was signed by _any_ CA on your phone, not necessarily the one that really issued the one you expect.

So, if my company installed a CA on my phone that they issued in-house, and MiTM my traffic, they can spoof certs and most software will accept it.

To be really safe, you should pin the certificate to ensure that your code only trusts a specific certificate or specific authority, so, for e.g. it was signed by Let's Encrypt X with fingerprint Y and not, say, Digicert Z with fingerprint A.
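A minimal sketch of that fingerprint comparison in Python (the function name and pinned value are made up for illustration; a real client would take the DER bytes from the completed TLS handshake):

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of a DER-encoded certificate
    against a value baked into the app at build time."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint == pinned_sha256_hex.replace(":", "").lower()

# In a real client, the DER bytes come from the handshake, e.g.:
#   import socket, ssl
#   ctx = ssl.create_default_context()
#   with ctx.wrap_socket(socket.create_connection((host, 443)),
#                        server_hostname=host) as sock:
#       der = sock.getpeercert(binary_form=True)
#       if not cert_matches_pin(der, PINNED_FINGERPRINT):
#           raise ssl.SSLError("certificate pin mismatch")
```

Note that pinning like this is in addition to, not instead of, normal chain validation.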

If you're writing the backend and frontend, you could go one step further and embed your own CA in the app, then follow secure practice for managing the private key and issuing certificates to your infrastructure.


Pinning is a serious step, there's a lot of opportunity for a foot gun. You absolutely need to decide up front what the intended behaviour for your app is when the pin is invalid. Don't say "That will never happen" because it will happen. Maybe the client is happy that their app simply does not work if the pin condition isn't satisfied. A bank might feel that way for example. But if it's a surprise I guarantee they aren't going to be happy and that means you did a bad job.

Building your own PKI is always potentially the safest option, and in practice it will usually be the least safe and most unreliable. The main attraction of your own PKI should not be the safety/ security you likely won't actually achieve in practice but other conveniences. For example your PKI can issue a 20 year cert. Maybe it shouldn't, but it can and that might work better for you than certificates which expire and introduce exciting last minute changes.


Yes, I agree with all of the above.

We specialise in this sort of thing at my work, I'm not suggesting anyone does this without first understanding the risks you mention above as well as the long term commitment required.

But done right, it is the most secure approach.


Not necessarily, there are plenty of applications that use their own trusted root CAs and thus technically the app is validating it. Most popular example being Firefox.

Often libraries have options to specify trusted root CAs as well as options to disable validation per host and/or globally. I've never come across any library that would have any of these options enabled by default.

With that said, apps like banking apps should still not put their trust in the OS or anything else. Certificate pinning is good and should be utilised, especially for sensitive systems such as banks' apps.


Historically it was very common to default disable or entirely omit essential checks. CWE-297 https://cwe.mitre.org/data/definitions/297.html is about this common mistake. The happy path is invariably well tested and doesn't show this, unhappy paths often use garbage self-signed certs which fail non-host based checks and so those behave as expected too. Testing usually misses the host mismatch check.

OpenSSL for years only provided some fairly hairy code if you actually wanted to do dnsName matching, which you absolutely should do. What that means is, lots of software was written (say, 10+ years ago) in which OpenSSL is checking that your peer has a "real" certificate but it doesn't care which one. A certificate for we-are.literally-thieves.example ? Cool, that's issued by a trusted CA and so it's fine. Oh you thought you were connecting to my-real-bank.example? You didn't ask me to check the name on the certificate and I don't bother providing a sensible API to do so anyway.

Here's their actual documentation:

> Versions prior to 1.0.2 did not perform hostname validation. Version 1.0.2 and up contain support for hostname validation, but they still require the user to call a few functions to set it up.

Modern (1.1 onward) releases of OpenSSL provide a sane API which checks names you give it, so if you tell OpenSSL to connect to my-real-bank.example it realises you don't think certificates for other names are OK. But the old ones didn't do that and the ones 10+ years ago expected you to grok PKIX (the Internet's agreed way of coercing the X.509 standard intended for the X.500 series Directory into a way to certify things on the Internet) or else give up.
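For illustration, the two behaviours can be shown with Python's stdlib `ssl` module (a sketch, not OpenSSL's C API): the modern default context checks both the chain and the hostname, while the legacy failure mode checks only the chain.

```python
import ssl

# Modern defaults: chain verification AND hostname matching are on.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# The historical failure mode (CWE-297): the chain is still verified,
# so ANY certificate from a trusted CA is accepted -- including one
# issued for we-are.literally-thieves.example.
legacy = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
legacy.check_hostname = False
legacy.verify_mode = ssl.CERT_REQUIRED
```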


Unless the libraries you are using are FUBAR, you normally have to explicitly tell them to ignore certificate chain errors, e.g. requests.get(..., verify=False)
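For what it's worth, what verify=False boils down to underneath is an SSL context with both checks switched off; Python's stdlib even ships an explicit escape hatch for this (shown here only to illustrate what the flag disables):

```python
import ssl

# The stdlib's "don't verify anything" escape hatch (PEP 476).
insecure = ssl._create_unverified_context()
assert insecure.verify_mode == ssl.CERT_NONE  # chain is not checked
assert insecure.check_hostname is False       # hostname is not checked
```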


With apps there are two levels of validation that you can do, and only one is done by the OS.

The most common, and automatic, is the verification of the chain of trust. On iOS this happens automatically if you use the standard network APIs against an HTTPS URL.

You can take it a step further and avoid MITM attacks where the middle party is able to mint trusted certs by doing something called certificate pinning. This is a manual verification that the certificate used by the server you’re connecting to has certain properties that you know match your API server’s.


It's handled by your HTTP client, which may or may not be part of the OS, or even something in between (speaking of Android here; on iOS it's probably far more likely to be on the OS side, but if the OS allows raw TCP it can't really keep an app from running its own HTTP(S) on top of it).

A frequent problem on Android that might lead some to throw validation under the bus is that certificate validation is much less robust than in browsers, particularly on old devices. A typical scenario: the certificate of one of your backends approaches EOL, ops dutifully obtain a new one, and it checks out nicely in all browsers. But a whole bunch of older Androids, which might make up a quarter of your user base if you are unlucky, have never heard of the root certificates involved, so the app becomes unusable. A similar situation can arise if you have clients that check revocation (good) but don't check for alternative signature chains like a modern browser would (not so good).

The correct way to solve these situations is extending the server configuration with another certificate chain that is valid on the devices in question, no doubt about that. But when the app is not a core use of the backend and the server is not run by the same organization, breaching the "but it works in all browsers, clearly the error must be on your side" defense can be quite hard. Nontechnical leadership will be extremely tempted to do the wrong thing.


On iOS certificates will be validated by CFNetwork provided you haven't disabled ATS.


People who think app store approval is a reliable quality gate, please take note.


Was that Chase? I read a while ago that there was a flaw in the Java API that made it ignore cert warnings by default.

I feel like this needs to be an OS-level requirement. All network comms should be encrypted, and any unencrypted traffic should require an explicit user opt-in.


Passive attacks from open wi-fi allowing everyone in 100m range to read your traffic and active MITM are entirely different classes of attacks, with entirely different barriers to entry.


A problem I see quite regularly is that self-CA support is an afterthought: many don't write their software for self-CAs, so when people have issues, the top answer is "turn off verification".

Between that, and the number of "important" pieces of software that don't certificate pin, it really rubs me the wrong way.

Many people have no idea that any one of the CAs installed in your browser or device can sign a certificate for any domain and most software won't care.


While you’re probably not going to be instantly attacked, I still wouldn’t do online banking on a public network.


Honest question... why not?

Modern banks use HTTPS throughout. The banks I use all have HSTS and use preloading, so no hijacking to a non-HTTPS site. I use a password manager, so if somehow I do get hijacked and sent to a phishing site, and even if that phishing site is using a Let's Encrypt cert to avoid the "Not Secure" banner in a modern browser, my password manager isn't going to recognize the domain, so it would not let me attempt to log in even if I wanted to.


This comments thread literally starts with someone who discovered their bank's app wasn't validating the certificate, though.


In that case it wouldn't be safe to use the app on a private network either.


True, but doing so on a private network would still be a whole lot safer than on an open AP.


Every commercial network is a public network in a sense


In a boolean sense, sure, but the world is more complex than true or false, and some networks are more public than others.


I suspect the average public network is more secure than the average bank network



