Charles Proxy now available on iOS (charlesproxy.com)
252 points by eddyg on Mar 28, 2018 | 108 comments



This is awesome! The first thing I discovered was how much network noise crashlytics.com was causing. Used AdBlock's[0] DNS proxy feature to black-hole the offending domain (they even mention blocking crashlytics.com in their FAQ[1]).

Note that both AdBlock and Charles rely on iOS's VPN feature, and only one can be enabled at a time.

[0] https://itunes.apple.com/us/app/adblock/id691121579?mt=8

[1] https://www.adblockios.com/privacy/


Hm, I thought Apple cracked down [0] on VPN-based ad blockers. Or is AdBlock running a real VPN, i.e. does your traffic go through their servers?

[0] https://www.google.hu/amp/s/www.macrumors.com/2017/07/14/app...


Looks like this app is the one mentioned in that article. They appear to do both Safari content blocking and use the VPN profiles to block DNS requests (as in no traffic hits their servers).


Could it be that Charles itself uses Crashlytics? How can you tell which app it is?


Genuine question here: How is it not absolutely terrifying that an iOS App Store app can man in the middle HTTPS communications made by other apps? Is there some way in which this isn’t poking a hole in exactly the sort of security sandbox that iOS tends to be good at? (And yes there probably is some part of what’s going on that I don’t understand, that’s why I’m asking the question)


Except on iOS you get:

- Prompt to allow app to act like VPN

- Having to enter your passcode after said prompt

It's impossible for apps to MITM silently.


And for the record, the prompt is explicit about the risk: "All network activity on this iPhone may be filtered or monitored when using VPN."


And you get a big "VPN" status icon that's at the top of the screen the entire time it's running.


But do common users really read these warnings?

I know that without it we wouldn't have amazing apps like https://itunes.apple.com/us/app/adblock/id691121579, but I'm really not sure it's worth that risk for the common users Apple mostly targets with iOS.


Total noob question: is it possible for an iOS app to fake/emulate that system prompt to ask for your passcode?


It certainly was possible https://arstechnica.com/information-technology/2017/10/bewar...

Not sure about the current state though.


Can't say much about the security, but I suspect it's working by pretending to be a VPN provider and then proxying the traffic. It's then able to install a CA root to generate any certs it needs to MITM traffic. Cert pinning will prevent this from working, but that's the only thing that will.


Supposed sandboxing against malicious apps is precisely why I run iOS rather than Android. I get that Charles isn’t malicious, but what’s keeping any random free game app from doing the same thing? (Again, intended as a real question not a rhetorical one)


Setting up Charles requires two explicit authorization steps (each requiring passcode/fingerprint/face verification): First, network interception requires adding a VPN config (the dialog warns that "All network activity on this iPhone may be filtered or monitored when using VPN"). Second, SSL MITM requires installing and trusting a root CA certificate (the relevant prompts in iOS are less clear -- they say that the cert will not be trusted until you enable it, but don't explain the implications if you do enable it).


"what’s keeping any random free game app from doing the same" It's not that simple: as the author stated, it was some of the most challenging code he ever wrote.

The Charles desktop app is well respected in the developer community. There is no reason the iOS app should be treated any differently.


1. You have to turn on VPN in the system settings.

2. You have to trust the Charlesproxy root certificate; again, in the system settings.


1a. This requires your passcode, and any time a system is asking for your permission to do something is worth questioning. Even my least technosavvy friends and family have learned that if something is asking for your password and you don't know why, abort.


I’d guess App Store review stops that.


The documentation states you have to follow the instructions to install the certificate manually, which is what I would expect.

You can also click a link to a certificate on a webpage and install it manually on iOS.


How is that different from Android?


How do you intercept traffic from apps that use cert pinning? Is the only way to patch the app binary and reinstall the patched binary using a dev certificate?

How exactly does one go about patching the binary – is there a tutorial somewhere?


>Is the only way to patch the app binary and reinstall the patched binary using a dev certificate?

Yes

>How exactly does one go about patching the binary – is there a tutorial somewhere?

https://www.guardsquare.com/en/blog/iOS-SSL-certificate-pinn...


Won't any app worth its salt use certificate pinning to prevent this MITM? In other words, can I use Charles to sniff FB or WhatsApp traffic? I don't use either service, but I'm interested in analyzing their traffic.


You'll see the attempted request (the fact that there was a request to a named server) but none of the request details, except the encrypted stream.

There are guides you can google for cracking apps and replacing the certs they compare against. IIRC they all require a jailbroken device.


Client certificates will prevent it too.


Do iOS apps not require cert pinning by default for their respective APIs/whitelisted https domains?


Whitelisted domains? What? Never heard of that in iOS.


Apple introduced App Transport Security[1] with iOS 9. The setting is configured in your app's Info.plist[2].

[1] https://developer.apple.com/library/content/releasenotes/Gen...

[2] https://stackoverflow.com/a/48089038/2044952


Yep. App Transport Security mandates that you explicitly whitelist the domains [0] you want to access via plain HTTP. This, however, has nothing to do with certificate pinning, which the OP was mentioning.

[0] Of course you can use the blanket NSAllowsArbitraryLoads to allow plain HTTP everywhere.
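To make the whitelisting concrete, here's a sketch of the relevant Info.plist fragment (the domain "legacy.example.com" is a placeholder, not from any real app): ATS stays enforced everywhere except the listed exception domain.

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <!-- Placeholder domain used for illustration only -->
        <key>legacy.example.com</key>
        <dict>
            <!-- Permit plain-HTTP loads for this one domain -->
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```

(Setting NSAllowsArbitraryLoads to true at the top level is the blanket escape hatch mentioned above.)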


> It's then able to install a CA root

Under what conditions does iOS allow an app to do this?


Apps can do this by presenting a configuration profile to the user. This requires entering the passcode and a few steps - it’s not something they can do silently.


Also, after installing a certificate the user needs to explicitly go into Settings sub menus and toggle trust for that certificate.


It's your phone. Of course software you installed should be allowed to do anything you want.

The fact that Android has recently made it impossible to MITM apps is really making me consider switching. I don't think I will, because in many other ways Android is still more open, but the analysis is no longer as lopsidedly in Android's favour.


I use the desktop product daily so I picked this up. I frequently proxy my phone through my desktop but I figured this would be fun to play with if nothing else.

I turned it on for literally one second and the first thing it captured was traffic from an app I used briefly several years ago and not since. Cool!


That sounds like a great tool for enhancing battery life, if you delete all the "overzealous" apps.

Charles could show a list of recommended apps to delete.


This sounds pretty far outside the realm of what Charles does (it's a web dev tool, not a system scrubber).

But something like "Little Snitch for iOS" would fit the bill.


Literally the same thing happened to me. I am deleting apps aggressively now.


Charles can already be used as an http/https proxy on iPhone via https://www.charlesproxy.com/documentation/faqs/using-charle...

I generally only need to inspect requests when developing at my workstation, so how does this native app provide additional value beyond what the Charles Mac software already provides?

Big fan of Charles over here, I just don't understand the use case for the native app.


Straight from the announcement:

"Running Charles on your iOS device means you no longer need to fiddle with WiFi network proxy settings. It also means that you can capture and measure network traffic that goes over the Mobile / Cellular data network.

Measuring networking performance over Mobile data is especially important for your mobile apps (as that is how a lot of users experience your app), and it can reveal large or slow requests, as well as opportunities to increase perceived performance by parallelising network calls."

AFAIK before this you could only inspect traffic over WiFi connections since you had to set the proxy address via WiFi network settings.


>I just don't understand the use case for the native app.

I can't use Charles to inspect traffic from my device at work because of network restrictions. Now I can.


You can't use the Mac app to inspect any requests that go over the cellular connection.

Using the Mac app also requires being able to connect to your Mac from your iPhone (in order to use it as a proxy), which is not doable on many setups. For example, at work we use a different wifi network for mobile devices than we do for laptops.


> I just don't understand the use case for the native app.

You can watch his presentation [1] (which is linked to in the announcement page that story links to) for a few of the use cases.

[1] https://www.youtube.com/watch?v=RWotEyTeJhc


I've had to do audits of apps that required you to be connected to a specific Wifi AP (usually with a dev server instance attached) to function.


If you are a developer, consider the free alternative https://www.github.com/kasketis/netfox ;)


Came here to say the same thing: if you're interested in seeing the traffic caused by your own app (and also making that info accessible to other stakeholders during dev time), netfox is the way to go. Super easy to integrate, and it usually provides enough info. Also, no tinkering with system settings or third-party apps required.


How widespread is certificate pinning nowadays in iOS apps? Does anybody have any experiences?


I'm working at a European bank on their iOS team. We use cert pinning for all of our apps, but I have never heard of or seen teams using it outside of this project.

I guess it's mostly used if the application is doing something critical like money transactions etc.


Cert pinning seems to be gaining momentum in the US; several high-profile apps are using it now.


Sucks that more and more 3rd party apps are adding pinning to their code so you can't sniff their traffic. This is a great tool for first party debugging though :) Nice work Charles!


Of course, pinning is trivially defeated if you have debug-level access to the app, because you can just intercept any network call.


Could you describe how this would work for a third-party app like Uber, for example?


There was a discussion some time ago about mitmproxy (a tool similar to Charles) and specifically about certificate pinning / iOS apps. See here[0]

It seems like you need a jailbroken device, and that there are tools such as SSL Kill Switch[1]

[0] https://news.ycombinator.com/item?id=15757878 [1] https://github.com/iSECPartners/ios-ssl-kill-switch


I've never done this personally, but I'm pretty sure there is no way to protect against hooking and/or patching functions in Secure Transport (iOS's low-level TLS stack), since all network traffic goes through these APIs. I'm sure there's something similar in Android.


You're not forced to use system facilities for TLS on Android. Back when you needed up to date TLS support for your app on older Android versions you would use e.g. BouncyCastle instead of the system's TLS facilities. Probably the same for iOS.


You can still patch statically linked libraries: either statically in an editor followed by re-signing, or dynamically by changing page protection to rwx and writing a jump to the alternative implementation. The latter requires entitlements, or a jailbreak on iOS.


So just figure out which library they’re using and patch that.


That's right, there are many ways. I just wanted to point out that you could roll your own tls.


Sure, but then you'd have to jailbreak the phone. Most of this stuff is pretty straightforward, but not exactly 'trivial' - especially given the context is an iOS app specifically aimed at making MITM easier.


Certificate pinning is inherently security by obscurity; it's intended as an annoyance for anyone trying to reverse-engineer the service, rather than an insurmountable barrier.


Certificate pinning is also a secure way of protecting against MITM attacks, mis-issued certs, and enterprise proxies.


Yes: that's what it should be used for. It's not a way to keep your HTTP REST API private.


It’s intended to protect against a rogue CA supplying certificates for a service it shouldn’t be supplying certificates for.

For instance if a CA gives a CA certificate to a government running an SSL inspection service.


Basically anything you do client-side falls into that category. If your code runs on my device, there's not much you can do to stop me from fiddling with it.


That gives you the same access as full control over the network. You still can’t make the app believe it has a connection encrypted using a specific certificate.


Can you describe what this means?


Charles works by sitting between a server and a phone. It decrypts traffic from the server, displays this to the user to 'inspect', and then re-encrypts the traffic to be sent to the phone.

It re-encrypts the traffic using its own SSL certificate (technically a CA, but no need to get bogged down in details).

In many modern apps, code has been added such that an app will only accept traffic which has been encrypted by the original server certificate (ie: the Uber iOS app will only accept traffic which has been encrypted by a certificate from www.uber.com). When Charles attempts to sit between this traffic, the app will not allow network connections, and thus no traffic can be inspected by Charles.

This procedure of an app only permitting traffic encrypted by a predefined certificate is known as "certificate pinning" or "pinning" for short. It is becoming more and more common for native apps.
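As a rough sketch of what the pinning check boils down to (the function name and the exact-match strategy are illustrative assumptions, not Uber's or any real app's implementation):

```swift
import Foundation

// Exact-match pinning: accept the server only if its leaf certificate's
// raw DER bytes equal one of the certificates shipped in the app bundle.
// (Many real apps pin a hash of the public key instead, so the leaf
// certificate can be rotated without an app update.)
func certificateIsPinned(serverCertificate: Data,
                         pinnedCertificates: [Data]) -> Bool {
    return pinnedCertificates.contains(serverCertificate)
}
```

In a real iOS app this check would run inside `URLSessionDelegate`'s `urlSession(_:didReceive:completionHandler:)`, where the server's certificate chain is available via `SecTrust`. When Charles re-encrypts traffic with a cert from its own CA, the bytes no longer match and the app refuses the connection.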


The reason for this is primarily to protect against MITM attacks by a hostile network and rogue CA, enterprise proxies, etc.


Hardcoding the exact certificate they expect, to prevent MITM attacks. Of course, not every MITM is malicious, as this item shows.


Then GeoTrust, in a petulant hissyfit, emails ~20,000 of their customers' private keys to DigiTrust, and the carefully laid plans of updating the app with newer SSL certs pinned 12 and 6 months in advance of expiry are - ummmm - revealed to be flawed...

Hilarity ensues.


Well, the certs don't _have_ to be shipped with the app; there are workarounds to refresh the certs without shipping a whole new app binary.


Yep - but guess what wasn't done in this case...

(Yes, this is a still-trying-to-fix-it real world event... Glad it's not my problem, just one I hear about from people I used to work with...)


If you do pinning, you could just as well use a self-signed cert for your API and pin that. If your API is not just used by the app, add a private proxy/load balancer that uses your pinned cert.


Awesome news. Charles has been such a helpful debugging tool over the years. Less so for web stuff in these days of browser dev tools being so advanced, but the ability to inspect traffic system wide is still really useful outside webdev, and sometimes it can be useful to verify something dev tools tell you.

All developers should get this for iOS; it's bound to be useful, and if not, it will at least be interesting to see what your phone is getting up to online!


I'm trying to get it to work but whenever I have the VPN enabled all network traffic fails (HTTP and HTTPS).

Anyone else have this issue? The website isn't giving me much insight :(

edit: I'm on the latest iOS beta. Could that be why? funny that I'm troubleshooting an app which is largely meant for troubleshooting apps...


I found that it didn't work on WiFi networks that block client-to-client connections, if that gives you any pointers.


Damn I didn't even try disabling wifi.

Yeah, it works once I disable it - kind of an important pointer the app could alert users to...

I was originally thinking that maybe the MITM VPN IP clashes with my LAN subnet.


Wish I could get a more consistent way of intercepting websocket traffic from iOS (specifically, wss traffic).


Charles for the desktop can intercept secure websocket traffic.


Yes, but the problem with Charles (well, iOS-related at least) is that iOS websockets don't go through the configured HTTP proxy. They're just treated as raw sockets. Thus, even on desktop Charles, it's a no-go.


I don't know about intercepting iOS apps, but I definitely do this exact thing for web app development targeting iPads using Charles for the desktop.

Be sure that your default proxy port doesn't conflict with the default WSS port.

Edit: for reference I'm on Charles 4.2.1


It works in Safari and Webviews, but definitely not in any native apps. That's what I'm referring to.


I like this a lot, but most of the time I use Charles for more than recording traffic: for example, checking how my app behaves if I throttle certain endpoints, or rewriting responses. Hoping those features make it into a future version!


Maybe it is worth mentioning that iOS can throttle the network itself: it's under "Settings" -> "Developer" -> "Network Link Conditioner". There is also a pref pane in the Additional Tools download for Xcode which lets you do the same on the Mac.


mitmproxy has worked on iOS and Android for years now, and it's OSS and easy to use.


This runs on the device itself.


And the desktop version can intercept the traffic of the machine it runs on. mitmproxy can't do that, AFAIK.


Yes, mitmproxy does as well.


What do you mean? This has been working since way before 1.0.


OK, I need to clarify: I had macOS in mind, and this note in the mitmproxy documentation:

  > Note that the rdr rules in the pf.conf given above
  > only apply to inbound traffic. This means that
  > they will NOT redirect traffic coming from the box
  > running pf itself. We can’t distinguish between an
  > outbound connection from a non-mitmproxy app, and
  > an outbound connection from mitmproxy itself - if
  > you want to intercept your OSX traffic, you should
  > use an external host to run mitmproxy. Nonetheless,
  > pf is flexible to cater for a range of creative
  > possibilities, like intercepting traffic emanating
  > from VMs. See the pf.conf man page for more.
That's for transparent mode only though, maybe that had me confused.


... besides which, mitmproxy (and its partner mitmdump) are scriptable and extensible in ways Charles simply isn't.


It seems to include SSL support as well https://jasdev.me/intercepting-ios-traffic


mitmproxy supports TLS just fine. It had a simple setup for installing a root CA in your device.


IIRC, you have to do additional setup for Android 7+ apps: installing a custom APK with a modified manifest.xml for each app whose TLS traffic you want to intercept.


I get the sense that Charles users are just not familiar with mitmproxy. Here’s a guide I wrote a while ago to get you up to speed: https://gist.github.com/joshenders/2b0dc14c89a8769f64a7


Sure, but (a) this is an on-device app, and (b) some of us have been using Charles on the desktop since before mitmproxy existed.


Awesome tool, picked this up immediately. Amazing what things you can discover with it.


This is much easier than the proxy option we used to need to go through!


It's neat that it works on the device itself; that will come in handy during field testing.

However, I'll continue to use Wireshark for debugging my network code when at the office.


Imagine if the user could compile their own kernels for iOS^W^W [edit] that can control an iPhone. She enables IP forwarding in the kernel configuration. Maybe she can also disable some crucial bits for interacting with the baseband. She only wants wifi to work.

Then she uses this phone with the custom kernel (phone #1) as a gateway for another phone (phone #2). She can easily block ads and other undesired traffic destined for phone #2, using a variety of methods of her choosing (firewalls, dns, proxies, etc.).

She does not use phone #1 for anything other than being a gateway for phone #2. There does not have to be any data of any value to an advertiser generated, sent from, or stored on phone #1 (e.g, logs). It is just a gateway.

Can't do this, but imagine if she could.


At that point of paranoia, wouldn't it be better to just compile the XNU kernel for a different brand of phone?


You're free to compile XNU yourself; it's just the part where you load it onto your iPhone that doesn't work.


...then how does it help that you can compile xnu?


Uh, you can put it on another phone maybe?


Sending all traffic over a VPN to a server of your choosing essentially accomplishes the same thing. It's true that there could be some kind of code in there that overrides this choice, but the implications would be pretty severe.


Looks like she wants Android, specifically, something like LineageOS, or even CopperheadOS for better security.


For example, use an Android phone as "phone #1", the gateway for an iPhone, "phone #2".

Also I used the term "destined" but just to be clear I meant both ingress and egress traffic.


If you want to do this, you can set up a VPN in a cloud somewhere and do the filtering there. Better than maintaining two phones.


"If you want to do this, you can set up a [third party server] in a cloud somewhere..."

s/you/she/g

s/want/&s/

If she were able to use Android as gateway for iPhone on same subnet, "sett[ing] up a [third party server] in a cloud" is precisely what she would be trying to avoid.



