This turns out to be a pretty difficult thing to build well. Every pentester who can code and owns a Mac wants to rewrite Burp; I myself have died on that hill several times. But rewriting Burp is a bit like rewriting Microsoft Word; so much of the value is in the details, and there are so. many. details.
HTTP proxy - Fairly trivial, annoyingly harder if you want to generate certs to MitM SSL.
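The SSL-interception part usually comes down to a local CA whose key signs a leaf certificate per hostname on the fly. A minimal sketch using Python's `cryptography` package; the names, key sizes, and validity periods here are illustrative, not what any particular proxy does:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_ca():
    """Generate a self-signed CA that the user installs as trusted."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example MitM CA")])
    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)                       # self-signed
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
            .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
            .sign(key, hashes.SHA256()))
    return key, cert

def make_leaf(ca_key, ca_cert, hostname):
    """Mint a per-hostname cert, signed by the CA, when a CONNECT arrives."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    cert = (x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
            .sign(ca_key, hashes.SHA256()))
    return key, cert
```

In practice you also cache the leaf per hostname so repeat handshakes don't pay the keygen cost.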
Scaling it out - Not too bad; fuzzing is highly parallelizable. How your parallel tasks work together is more challenging (think rotating cookies), but doable, and you really need to get this right to maximize requests/second.
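The rotating-cookies coordination boils down to: many workers share one session, and when it dies, exactly one of them should re-login while the rest pick up the fresh cookie. A sketch, where `login` and `send` are placeholder callables rather than any real tool's API:

```python
import queue
import threading

class SharedSession:
    """One cookie shared by all fuzz workers. rotate() re-logins at most
    once even if many workers notice the dead session at the same time."""
    def __init__(self, login):
        self._login = login            # callable returning a fresh cookie
        self._lock = threading.Lock()
        self.cookie = login()

    def rotate(self, stale_cookie):
        with self._lock:
            if self.cookie == stale_cookie:   # first worker to notice rotates
                self.cookie = self._login()
            return self.cookie

def fuzz_worker(session, payloads, send, results):
    """Drain the payload queue; on a 401, rotate the shared cookie and retry."""
    while True:
        try:
            p = payloads.get_nowait()
        except queue.Empty:
            return
        cookie = session.cookie
        status = send(p, cookie)
        if status == 401:                     # session died mid-fuzz
            cookie = session.rotate(cookie)
            status = send(p, cookie)
        results.append((p, status))
```

Spawn N threads running `fuzz_worker` over the same `SharedSession` and queue; the lock only serializes the rare re-login, not the hot send path.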
The audits themselves - If you actually want pre-built attacks, basic attack+response regexes (a la Nikto) are very easy, though building the DB is time-consuming. Multi-request tests (looking for stored XSS, etc.) are harder because the audit threads need to share state, or become somewhat sequential.
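The Nikto-style check really is just request + regex over the response; the value is all in the signature DB. A toy version, with three made-up-but-plausible signatures standing in for the thousands a real DB accumulates:

```python
import re

# Hypothetical, tiny signature DB: (request path, regex expected in a
# vulnerable response, finding name). Real DBs have thousands of entries
# and took years to build -- that is the time-consuming part.
SIGNATURES = [
    ("/server-status", re.compile(r"Apache Server Status"), "mod_status exposed"),
    ("/.git/HEAD",     re.compile(r"^ref: refs/"),          "exposed .git repo"),
    ("/phpinfo.php",   re.compile(r"phpinfo\(\)"),          "phpinfo page"),
]

def audit(fetch):
    """fetch(path) -> response body (or None); returns matched finding names."""
    findings = []
    for path, pattern, name in SIGNATURES:
        body = fetch(path)
        if body is not None and pattern.search(body):
            findings.append(name)
    return findings
```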
Keeping state - Harder. What does logged out look like? How do you detect it? (And no, don't make the user supply a regex...) Has your audit screwed up your account so that "logged in" is all jacked up? Was there stored XSS, but another audit clobbered the payload before your sweeper found the reflected value? (Think updating your address on one page and displaying it on another, all while multiple things are auditing that update-address form...)
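One common compromise short of asking the user for a regex is a stack of heuristics tuned against a known logged-out baseline. A sketch; the status codes and body markers here are illustrative defaults, and a real tool would learn them per-app:

```python
def looks_logged_out(status, body, location=None,
                     markers=('name="password"', "Sign in", "/login")):
    """Heuristic logged-out detector: a redirect to a login-looking URL,
    an auth error status, or a login-form marker in the body."""
    if status in (301, 302, 303, 307) and location and "/login" in location:
        return True
    if status in (401, 403):
        return True
    body = body or ""
    return any(m in body for m in markers)
```

None of these is reliable alone (plenty of apps 200 you to a login form), which is why the detector gets calibrated against a baseline response instead of trusting the defaults.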
All that said, I have found the main challenge people overlook when writing web pen-test tools is the UI/UX. A shitty UI/UX ruins a tool because it ruins your flow. How do I specify what to fuzz with what rules? Can I reuse and reapply settings easily? How are results reported? Can I easily mark something as a false positive? And can I do that in a general way, so that all issues matching on this host, or this file, or this URL fragment are ignored? Etc., etc.
Please, if you are going to go down this path, think about how people are going to use the tool, and focus on maximizing their throughput.
So, if you asked me what the hard parts of getting Burp right were when I took a stab at replacing it in Cocoa, and more recently in ko.js and Riak (more successfully):
* There is just a whole lot of UI involved in making a navigable proxy with the breakdowns you want, and, in particular, with editing and annotating raw requests and having those views do the "right thing" as the user edits the request and flips back and forth between views.
* Storage of requests is hard, because it's not intuitive how they need to be stored. The proxy wants long-term storage with indexes; the repeater wants rapid writes; the fuzzer wants to generate a shitload of requests (multiplying one request by 20k isn't rare), and a request from one part of the system can bounce through all the other parts.
* The proxy wants realtime communication/control with the UI, because some of your requests need to be intercepted and modified before delivery; in Burp, that UI shares most of the controls of the Repeater, which is surprisingly complicated.
Network performance is the easiest part of this to get right. Just don't use Java threads, and use a real timer library. But getting the network performance to cooperate with persistent storage is harder, unless you just let storage and memory explode.
We wanted fine-grained control over all of the tools so that we could automate retesting and regression. Like I said, we had a lot of luck hoisting the components of Burp into a series of web applications and backend systems backed by Riak (and Postgres, for search, index, and UI). We replaced Intruder with a command-line tool written in Golang that has a small specification language for the rules, instead of Burp's clunky interface, but there's still a whole lot of UI involved in parsing a request, automatically finding the places to template-ize, and then allowing the user to alter those selections.
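The "automatically finding the places to template-ize" step usually starts with the obvious candidates: query-string and urlencoded-body parameters. A minimal sketch in Python (the tool described above is Golang; this just shows the idea, and it leaves out header and cookie positions to stay short):

```python
from urllib.parse import parse_qsl, urlsplit

def insertion_points(raw_request):
    """Return (position-name, current-value) pairs an Intruder-style tool
    would offer as default fuzz positions in a raw HTTP request."""
    head, _, body = raw_request.partition("\r\n\r\n")
    request_line = head.split("\r\n")[0]
    method, target, _version = request_line.split(" ", 2)
    points = [("query:" + k, v) for k, v in parse_qsl(urlsplit(target).query)]
    if method == "POST":
        points += [("body:" + k, v) for k, v in parse_qsl(body)]
    return points
```

The UI problem is everything after this: showing those selections on the raw request and letting the user add, remove, and reshape them.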
When I took my stab at doing this in Cocoa a few years ago, I made a beeline for NSWebView; having a real WebKit renderer is an enormous win. We do something similar to you with headless WebKit now. This is a place where Burp is genuinely constrained, due to Java.
Unfortunately, replacing Burp is a bitch.
It just happens that this is a conversation that happens semiweekly at Matasano. :)
These are all valid points. Writing security tools is technically very challenging even for the most basic things. Even a spider, regardless of how basic it looks in principle, is really hard once you get down to implementation. "In theory there is no difference between theory and reality, but in reality there is."
This is why I am a big believer in simplifying workflows to make them more manageable. One of these workflows is recording the requests and responses sent to and from the application. This is what Proxy.app does well. This feature can be extended further by other tools which we have already developed.
Proxy.app is not meant to compete with Burp. Burp is a different beast and not necessarily, in our humble opinion, the best approach to penetration testing web applications. We know how difficult it is to write security tools because this is what we specialise in doing. So your comment resonates with us very well.
Although we could have bundled in all our security testing frameworks, we decided to take a different approach for the time being. Later on we may add security testing capabilities, or these features may be provided by some of the other tools we are currently working on.
As an aside, IMO claiming that Charles isn't native is a little disingenuous. Yes, it's built with Java, but I wouldn't dismiss it as non-native (unless you're using native in the purest of ways, meaning that it's "natural" - e.g. Cocoa - to the system). I consider tools like Eclipse and IntelliJ "native" even though their UI may be poor compared to Xcode.
For example, it doesn't integrate with Keychain, for SSL certificates, so you have to do a bunch of jiggery pokery with the openssl command if you want to generate your own CA for use with SSL interception.
For a more trivial but still annoying example, control-A and control-E don't work in Charles's text fields. In normal Mac apps, this goes to the beginning/end of the line, but in Charles it does nothing.
"Native" isn't about "poor" UI per se, but rather about whether or not it has the basic UI features provided by the system. In a native app, you have to go out of your way to disable stuff like ^A/^E because the system gives that to you by default.
I'm not a purist and will happily use non-native apps like Charles if they get the job done, but it's still a legitimate point against it.
The application is in effect a single binary and makes use of various Apple frameworks which are optimised to run flawlessly on Apple hardware. Only one platform is supported, which means there is less chance of bugs and platform-specific oddities. Proxy.app runs as a sandboxed application on OS X, is distributed through the Mac App Store, and makes use of other native components like Keychain, etc.
Under the hood the tool is only making use of Apple frameworks. We believe that in the long term this will provide more stability and performance enhancements. Right now it takes just a moment to launch the application and there is zero friction when switching between different project files. We believe that it just works in the same way OS X applications are supposed to work and this in effect makes Proxy.app more pleasant to use.
As for the comparison with Burp, I have much less experience with Burp, but the pro version has a huge amount of functionality. Things like plotting the distribution of cookies so you can see if they are backed by a good RNG. It doesn't look like this is quite that advanced yet, but those are the sorts of features that only real pen-testers would be using. For most developers who just need to test stuff, this looks excellent, I really look forward to trying it out.
Some tools are much more useful in the browser - and not in the proxy. As for session-management security - you will rarely need this functionality these days unless you do security research. This tool was useful years ago, when everyone was implementing their own session management systems. These days most of these sub-systems are part of the core web frameworks and therefore already tested - we hope. That being said, there are still bugs to be found, but this falls into the realm of security research.
It's also something that is tested on the CREST Certified Web Application Tester exam: http://crest-approved.org/information-security-testers/certi...
Did you guys run into any issues getting the app approved by Apple?
If I can give any tips, whenever I'm questioning if a feature is going to get called out when I submit an app, I always fill that "info for reviewer" section with tons of stuff. Explanations, links to docs & legal to show that I'm not doing anything against the law/rules.
On the other hand, Proxy.app is just a good proxy and already comes with many useful features. It is faster, feels native, and is available at a fraction of the cost of Burp. It is more a general-purpose proxy tool than a general-purpose security testing framework. It can be nicely complemented by some of the other tools from Websecurify.
* Proxy records, categorizes, and makes searchable and navigable all the requests you make through Burp, and includes an internal CA to generate on-the-fly SSL certificates.
* Spider crawls websites and discovers new pages.
* Repeater takes requests from any other part of the tool (or, if you're masochistic, requests you write from scratch), delivers them to the target application, and renders their results. Repeater includes Burp's UI for breaking requests into keys and values for quick editing, and a small battery of fixups to make sure that edited requests are (if you want them to be) valid HTTP requests.
* Intruder takes a template request and a series of rules to transform that request, delivers the permutations rapidly to the target, and records and classifies the responses they generate.
Of these four tools, only the "Spider" is really particular to penetration testing. The other three tools are incredibly valuable for day-to-day debugging and testing. Intruder, in particular, is criminally undersold to web developers.
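Stripped of its UI, the core of an Intruder-style tool is tiny: take a template with marked positions, cross the wordlists, and bucket the responses so outliers jump out. A sketch, using Python `{0}`-style markers as a stand-in for Burp's `§` positions:

```python
import itertools

def cluster_bomb(template, wordlists):
    """Yield every combination of the wordlists substituted into the
    template -- the equivalent of Intruder's 'cluster bomb' attack mode."""
    for combo in itertools.product(*wordlists):
        yield template.format(*combo)

def classify(responses):
    """Bucket (request, status, body) triples by (status, body length),
    the two columns people sort on first in Intruder's results table."""
    buckets = {}
    for req, status, body in responses:
        buckets.setdefault((status, len(body)), []).append(req)
    return buckets
```

A bucket with one member (a different status, or a body a few bytes longer than the rest) is usually where the interesting behavior is hiding.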
There are a bunch of other things in Burp (the Burp Scanner, for instance, is a second-tier web application security scanner that nobody I know relies on; I justify pulling it out of the core four features because it's also a recent arrival to Burp). Some of them are given whole tabs in the UI, but are really simple features --- the "Decoder", for instance, simply does character decoding. In any case, even penetration testers don't use them often.
Indeed, security tools should be part of the developer's workflow, but not necessarily the way they are presented in Burp. We have yet to see a horde of developers with the spare time to fiddle with requests in order to find bugs in their own code. There are not only time constraints; this type of thinking is also not natural to everyone.
Websecurify is working hard to find better ways to enable developers to do at least minimal security testing without getting in their way.
I'm saying that in terms of the day-to-day workload of developers building web apps or APIs, a tool like Burp is as valuable or more valuable than a debugger is to a C programmer.
Don't give up on us. :) It is probably something very simple. Just contact us via our contact form and we will get it sorted. http://www.websecurify.com/desktop/proxy.html
Looking forward to this. I've just bought the app and being able to monitor e.g. what my iPad is doing would be great.
So I guess the answer is yes (although I haven't tried yet).
I know everyone is asking the same kind of question, but how does it compare to Fiddler?
Our use case is kind of simple. We are building APIs on top of websites that don't have APIs, so a lot of "spying" is required. We had to use Fiddler because it was the only one to correctly handle Flash forms and file-upload forms, and being able to globally search for a particular token across every request has been so much of a life saver.
Thanks for the support. Both Fiddler and Proxy.app are proxies. Forms, JSON, and XML are handled without a problem. At the moment there is no support for AMF, if that is what you are referring to. We did not want to add this initially because it is not a standard and it requires creating a parser and serialiser from scratch - it is on our todo list though.
Burp was a pain to set up (OS X makes installing and using Java from the command line a ridiculously complicated process), but it looks 1000x more useful than Proxy.app is.
On a marketing note, you might want to think about who your market is and what job they use a proxy for. It doesn't seem to offer anything for a security researcher but maybe there's enough there for a web developer.
That being said, dumping more tools inside is not an issue. We already have the frameworks to support it, but we are not sure this would be a good thing to do from a usability point of view.
But still can't get it to proxy HTTPS. The MitM certificate provided seems to be invalid: https://dl.dropboxusercontent.com/s/0z8khpq5p4guf9f/2014-05-... && https://dl.dropboxusercontent.com/s/6efwllajcnbpjvs/2014-05-...
I really want this tool to work, since I agree there's a big need, but it has been frustrating so far...
1) The certificate generated by Proxy is 512 bits, which OS X doesn't support; and
2) I'm not sure that Proxy will work with sites that use HSTS -- you might be able to clear the HSTS database in Chrome, but for sites like Google, I think the certificate information is hard-coded.
On point two - unfortunately this is how Chrome works. However, you can control HSTS via chrome://net-internals/#hsts. Unfortunately, this will not work for preloaded entries.
We will be issuing guides soon.
When you have a lot of tabs open, there is always pollution that gets added to what actually matters.
That is a feature we have been thinking about since day one, but unfortunately to do this we either have to run multiple proxy servers and let you configure them individually, or do process monitoring to understand who is connecting to the server. Both are on our todo list, but we think the second one has no chance of getting approved by Apple due to the sandboxing limitations. We are currently researching this.
Do you know if there is a way to set up the proxy manually? That way we could set up an instance of Firefox making the requests that matter, but also monitor our Ruby scripts that replicate those requests. It would make comparison so much easier for our use case.
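If the app exposes a plain listening host/port, any HTTP client can be pointed at it explicitly, and most libraries also honor the HTTP_PROXY/HTTPS_PROXY environment variables. A Python sketch (Ruby's Net::HTTP takes proxy host/port arguments similarly); 127.0.0.1:8080 is an assumed listen address, not anything Proxy.app documents:

```python
import urllib.request

def opener_via_proxy(proxy_host="127.0.0.1", proxy_port=8080):
    """Build a urllib opener that routes both HTTP and HTTPS through a
    local intercepting proxy. For HTTPS you would additionally need the
    proxy's CA certificate trusted by the client."""
    handler = urllib.request.ProxyHandler({
        "http":  f"http://{proxy_host}:{proxy_port}",
        "https": f"http://{proxy_host}:{proxy_port}",
    })
    return urllib.request.build_opener(handler)
```

With that, `opener_via_proxy().open("http://example.test/")` sends the script's traffic through the same proxy the browser uses, so both show up side by side for comparison.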