I've had pentests that found nothing before, but left me with logs full of attempts to compromise the app, including some techniques I'd never even heard of. I didn't consider those to be failures either.
Also, while it's good that they busted him based on the PowerShell use, I'm surprised they didn't install the listener on an IT system, since he had a short window in their office. If the author had disguised himself as a cleaner, they most likely could have talked their way out if they were caught.
Years ago, an office I was working in was 'broken into' by a former locksmith who had since taken a job with our main competitor. I knew something was fishy based on the specific towers that were stolen (yes, they stole complete hardware...) -- one from payroll, another from sales, etc. The systems were spread out across the office, so it wasn't a simple snatch-and-grab.
At first glance it looked like the server-room door had been picked or kicked in, but the damage was completely superficial, if not lazy. I even Hollywood-kicked the door myself to see if it would open, and nothing.
After checking some security footage, someone recognized the locksmith -- and it turned out he had quit a few weeks earlier and had been working for the competitor.
But I agree, it seems strange that he didn't succeed.
When corporations hire pen-testers, the setup is that their CIO/CSO has full knowledge of what is going down and when, so that they're prepared to intervene in the unlikely scenario of the pen-tester getting caught.
Knowing the mentality that is common amongst C-level suits, however, it would not be surprising if the suit tipped off his IT guy in advance. After all, wouldn't catching the well-paid pen-tester look better to the other suits than being utterly pwned and paying for the privilege?
The CIO might have successfully "Kobayashi Maru'd" the whole exercise. Not saying that happened here, but if large corporations are capable of horrific security breaches, I think they're also capable of trying to game the system to avoid things going sideways.
I specialize in the remote version of the kind of test the author describes. I'd be happy to do it in-person too, but usually the clients were less interested in that.
> spent some time hunting the Domain Controller
The DC address(es) are trivial to find.
You have a GPO that turns on command-line and PowerShell logging. OK, so if you type those commands, you trip the script-logging engines.
And then you search how to find the DC: https://serverfault.com/questions/78089/find-name-of-active-...
It's all PS or cmd. Trapped.
You might find the DC listed in the registry if you look for the key "DCname". However, that's another thing you'd use app whitelisting for, catching every invocation of regedit. A lot of unattended drive-by attacks attempt to do bad things there, so you catch/trap them.
Again, dead end.
So, I'd love to see a good walkthrough of how to find the DC if these avenues don't work.
[edit: I see the SO link mentions SRV. Still]
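For reference, the kind of logging the parent describes usually boils down to a couple of registry-backed group policies. A minimal sketch of the script-block half (the key path, value name, and event log are the standard Windows ones; setting it by hand like this is just for illustration -- in practice it arrives via GPO):

```powershell
# Registry values behind the "Turn on PowerShell Script Block Logging"
# policy (Windows PowerShell 5+). In a real shop these are pushed by GPO,
# not set by hand.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'EnableScriptBlockLogging' -Value 1

# Logged script blocks then show up as event ID 4104 in the
# Microsoft-Windows-PowerShell/Operational log -- which is what a SIEM watches.
```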
Explorer address bar (or start>run) > %logonserver%
When you get to larger orgs, the DC doesn't always have to be the logon server. In fact, you're liable to see a slew of Kerberos logon redirectors talking to a few LDAP back ends running as primary and secondary AD.
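For the curious, the usual low-effort lookups look something like this -- with the caveat above that in a big org the machine that answers may be a logon front end rather than the DC you actually want, and each of these is exactly the kind of thing the logging discussed earlier would catch:

```powershell
# Ordinary, documented ways a domain-joined workstation can name a DC.
$env:LOGONSERVER                        # the DC that handled this logon
nltest "/dsgetdc:$env:USERDNSDOMAIN"    # ask the DC locator service directly
# The SRV records the SO link mentions:
Resolve-DnsName -Type SRV "_ldap._tcp.dc._msdcs.$env:USERDNSDOMAIN"
```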
When we gain access to a customer's internal network, we do try lateral movement on the network and give recommendations for how to isolate different parts.
Or don't jam the door open in the first place.
Security -- no, scratch that -- the human psyche works in mysterious ways :)
This company seems to run a tight ship though!
> no one will use it if it [...] is unnecessarily difficult to work with
For example, imagine an office with a security gate, but it takes a few days for new employees to be issued with a working gate pass.
One way to reduce that inconvenience would be to trade off security, by having staff members buzz strangers in if they know the magic words "I just started, my pass hasn't arrived yet".
An alternative way would be to trade off cost, by having while-you-wait gate pass printing, and every gate having a guard who can check an online employee directory before buzzing someone in without a pass.
I feel a lot of people say "Security is a trade-off with convenience" when they actually mean "Security is a trade-off with convenience and spending, but we've already taken spending off the table"
No reason such a system can't be used for just the first week or so, even if you don't want to have the facility to print permanent cards in every office.
A guard can still be useful to let in people who've forgotten their pass or left it at their desk, or whose pass has broken, or who have come from a different office, or who've got their hands full carrying things; and to deal with people without passes like contractors, interviewees, and other visitors; and to tell off people who tailgate now there aren't any good reasons to do so.
There's a trade-off between small battery size and battery capacity.
You don't really need to qualify that with "if you've taken spending off the table".
However, as a developer and business owner myself (as opposed to a security person), I genuinely thought more people would be concerned about the security of their website. I had one person explain to me that their XSS vuln was intentional, because patching it would break some JS plugin....
You see, not knowing is one thing, but knowing and being complacent is quite another. But we see that in more areas of life, I guess :)
Changing the "password" field on an HTTP site to a "textarea" with a CSS rule hiding the input is another great one -- to get rid of the browser warning. As it was explained to me: because TLS is too much hassle.
Well, that and expectations. Most small business owners see security as part of the developer's job, but don't realise it's something that needs non-stop attention, as websites and technologies in general are subject to erosion. What's safe today might not be tomorrow. The developer isn't at fault for that, and yet the website owner refuses to foot the patch bill, or pay for continuous monitoring.
I could write all day about this and we’ve only been in the trenches for a few weeks now :p
No, it’s not. You don’t have to be perfect, but “critical vulnerabilities and half the owners don’t care” sounds a long way from approaching perfect. This is why we have botnets made of webcams and blogs, and constant leaks of people’s private information and passwords.
When someone pays for your product, gives you sensitive information, or interacts with you in any way, they have a set of expectations. When purchasing medicine at a pharmacy, they expect to receive something that has a chance of making them better. When they deposit money into a bank, they expect that the bank will try to have the money there when they want it later. When they drop their car off at a mechanic, they expect the mechanic will try not to lose the car. Anything less is called a scam.
So when someone purchases a product, they expect that appropriate measures have been taken to ensure that they will not be hurt by that product. The average consumer doesn't know about proper electrical grounding, the interactions of their prescription medications, or XSS. They shouldn't have to, they're paying to not have to.
"security people" overvalue security in relation to _your_ bottom line. In truth, they aren't just there for you, they're also there to advocate for the people who will actually be hurt when your garbage security gets circumvented.
In your example you expect the mechanic to lock your car and keep the keys safe, but you don't expect it to be put in a walled-in area with armed guards and full-time surveillance.
The latter is what IT security can quickly lead to.
> Meanwhile 4 out of 5 webshops we check have critical vulnerabilities and half the owners don’t care because “it just costs money to fix it and things have been fine so we’d rather spend more on Adwords”
Waiting until you get owned to actually patch up critical vulnerabilities in order to save money is negligence.
It's an availability heuristic.
Amazing. I guess... better late than never?
I think one step forward would also be to approach security the way epidemiologists track down the causes of disease: they take patient data and trace back the factors that caused it, just with security vulnerabilities and breaches instead of patients. Build up a corpus of causal diagrams, and then develop software to analyze risk factors that we can systematically test for.
In short, people won't change before shit has hit the fan, and a pentest is the closest you can get to a controlled shit-hits-the-fan situation without it being a meaningless drill :) How, and what, is uncovered is beside the point and merely secondary when viewed from that perspective.
Is there much (any?) public evidence that this tends to happen after a successful pentest, though?
One of my friends used to be a pentester. He said (I paraphrase) "we go in, break stuff, write a report, go home".
What's the betting that in a company with poor security culture that the pentester's report might just end up locked in a safe?
The pentest came back with some recommendations. Mostly to do with the use of HTTP headers. Absolutely we fixed them, and made damn sure that the next time we had a site to be pentested those unforced errors were not repeated.
So on a small scale, yes. Pentesting improved the way we developed websites. I don't know about how it affected the "culture". Visa has a really strong security culture already.
So if the security culture is strong, the pentesters' reports are read and implemented; if the security culture is weak to completely non-existent, they'll likely be ignored?
> if the security culture is weak to completely non-existent, they'll likely not even be budgeted or done.
Security isn't fun; at best it's a relief (when nothing is found). When a pentest was successful -- as in, the tester got in -- you can be sure it's kept under wraps.
So no, I don't think there are many, if any, public records. The fact that there is shame and status involved in not being completely airtight is a big driver of the persistent insecurity of the world at large.
Anonymized records would go a long way in achieving a shift to safety and awareness but as you can read here they are easily construed as stories of fiction.
Everyone likes talking about that growth hack that drove a 1000% revenue increase. No one wants to talk about the database hack that spilled thousands of client records out in the open.
At the risk of repeating myself, I'm chalking this up to basic human behaviour in all fields of life, and the lack of taking responsibility by 80% of all people.
Security is very lopsided in that you just need 1 person to be careless for the attacker to get in, while the defender needs to be 100% secure across all vectors.
I could discuss this all day, and you know the importance of the topic, I know it, but the fact of the matter is that most non-tech people think of security as an annoyance. The solution? No idea yet, other than finding the right chord to strike and “fix” this psychological problem. We’ve made significant strides the last few months but getting companies more security conscious has been a tougher nut to crack than I first anticipated.
Feel free to email me at firstname.lastname@example.org if you want to exchange thoughts on the topic. I’d love to take a deeper dive into the matter with anyone that’s passionate about solving the security problem in any way shape or form :)
Infosec in practice (not imaginary scenarios) is also about good hygiene by the regular plebs, and investing in proper QA by hiring some people who are naturally paranoid and have enough clout to push back on lazy/bad ideas.
That, plus regular fixed check-ups, where deeper dives are done.
Like you said, and as the article points out, it seems to be as much a cultural day-to-day thing as it is about technical searches for vulnerabilities. Or worse, installing noisy monitoring systems with a bunch of false positives and pointless investigative rabbit holes.
Strictly helping quantify technical and personnel risk? Grey all the way.
Getting the C-class to understand what a worst case scenario could be from a completely external threat. Black drops the most jaws.
Ultimately the latter is a cultural hack, the absolute hardest hack of all in my opinion. A way to get the herd moving in a different direction.
So it depends on what you are "hacking."
The more apt comparison would be paying the costs for a seasoned CPA but with 50/50 probability of the numbers only being run by her assistant.
It really does take skill to accomplish some things, and breaching defenses (and protecting against breaches) is one of them. Be it a pen tester versus the blue team, or a foreign spy versus the intelligence agencies thwarting them, when it's one side against another it really does come down to the skills, persistence, and resources of the people on both ends, and a bit of luck. There are just so many variables, and as long as there are people involved, there's a vulnerability somewhere that can be exploited -- one that needs someone to wake up that day and use their mind and body to defend it.
For web apps, there are ready-made packs of exploits looking for common vulnerabilities. Not sure about desktop software, though.
I know nothing about Windows, but I'd have thought checking password policies would be far less likely to trigger an alert than plugging your own device into the network.
Anyway, my favourite bit was that they didn't stop the people in Accounts running Powershell, they just raised an alert. I much prefer that approach to blocking people most likely just doing their job.
These logging things do get in the way of devs. They run PowerShell session after session... It's not uncommon to see megabytes of logs per dev per day. So if you want to run crap and get away with it, hack a dev machine and bury your commands in there.
I didn't realize this story was fiction until I got to this sentence.
Update: The author confirmed on Twitter that other than the dramatization, this story is in fact true.
> And, aside from the beating up and tying down, it was true!
I can admit when I'm wrong. I stand corrected.
Does the blue team tech guy tackle the red team guy and restrain him? Give chase and hope the guy exits through a door with a security guard? Let him leave and hope the cops will care?
> The DFIR lead leaned down next to my ear and whispered, "No one in Accounts Payable ever runs Powershell..."
> Alright... That last part had a bit of dramatization added to it.
I interpreted the "last part" to mean the part where the DFIR lead whispers in his ear.
I'm still not convinced this is real. It smells like fiction.
Storytelling as a knowledge share is fundamental to human culture. Drama and story refinement are required to make the knowledge easy to remember and spread.
As for you being convinced, my personal experience is that this story is entirely believable. Many of us in security have stories we cannot share that would make this one look like a Saturday morning cartoon.
Certainly not. But at the very least, dramatization decreases the credibility of a story.
Credibility is found in the citations, which here amount to only the storyteller. As that is only one data point, I totally understand doubting its credibility, because one needs more citations and voices for proof.
Further, I never stated I found the story credible. I was operating from a believability standpoint, drawing on one's experience to weigh whether the story could possibly be believed. You shared that you found it hard to believe based on its dramatization, whereas I shared that I found it completely plausible based on my experience.
And that is my main argument: in this equation, drama shouldn't be used as a weight, positive or negative.
Edit: For the folks downvoting this: please don't conflate dramatization with persuasion, propaganda, or fake news. Dramatization is a tool used in those techniques.
Right? What shop has security that good? Even CRI would be easier to hack, and they're not even on the net.
There are a lot of specifics for something that’s fictional.
The first thing I do with credentials is find out what they open. Seems like if you have the resources to follow up on access attempts, you have the resources to set ACLs correctly so you’re not scared of them.
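On the defender's side, that kind of audit can be as simple as dumping who actually has rights on a sensitive share, so a leaked credential opens as little as possible. A minimal sketch -- the share path here is made up for illustration:

```powershell
# List which identities can touch a sensitive share and what rights
# they hold. '\\fileserver\Finance' is a hypothetical example path.
(Get-Acl -Path '\\fileserver\Finance').Access |
    Select-Object IdentityReference, FileSystemRights, AccessControlType |
    Format-Table -AutoSize
```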
The security of banks is certainly not in their IT departments.
I guess it's a place where "getting work done" doesn't involve downloading 1000 deps with npm and what have you from random sources on the internet.
It costs more to run IT well, but there are good payoffs, like this.
Just log use of PowerShell, then have your good old SIEM system alert when it's used by anyone outside the IT department.
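As a rough sketch of that rule -- assuming script-block logging is already on, and a hypothetical 'IT-Staff' AD group; a real SIEM would express this as a saved search rather than a script:

```powershell
# Flag PowerShell script-block events (ID 4104) run by anyone outside IT.
# Requires the RSAT ActiveDirectory module for Get-ADGroupMember.
Import-Module ActiveDirectory
$itUsers = (Get-ADGroupMember -Identity 'IT-Staff' -Recursive).SamAccountName

Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'; Id = 4104
} -MaxEvents 1000 | Where-Object UserId | ForEach-Object {
    # Resolve the event's SID to DOMAIN\user and compare the account name.
    $user = $_.UserId.Translate([System.Security.Principal.NTAccount]).Value
    if ($itUsers -notcontains $user.Split('\')[-1]) {
        Write-Warning "PowerShell use by non-IT account: $user"
    }
}
```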
When we agreed, they supplied an external penetration tester with the Drupal superuser password and said, "See? Unsafe!" when the pentester was able to log in and put the site into maintenance mode.
I suppose in the strictest sense they were correct: trusting IT with credentials did turn out to be a security issue.
Finally, placing and retrieving physical devices, or physically messing with machines, is risky and luck-based. It would be one of the last things you tried, as it was in the original article.
Why on earth would anyone think Twitter is an appropriate blogging platform?
> Want to see some magic? See a thread. Mention @threader_app with the word "compile". Get a reply from our bot with the link to the thread
What if this were a global corp -- would someone be monitoring this 24/7 across timezones?
What kind of damage could he have done if he had, say, 1 hour of unfettered access?
Sounds realistic, from how most Windows shops are run.
Would it help to stop using AD to manage the IT infra, or have tiny domains (say, max 10 computers) without centralised control, and no company-internal workstation networks? Maybe throw in a rule that devices are recycled (to be wiped) frequently, say every 6 months.
That would be doable if setting up devtools at most places didn't take an entire day.
I’ve seen it where it was EXPECTED that it would normally take THREE WEEKS or so for a developer to get their setup right, to be able to develop the application they were hired for.
It was a Java web app. Not some weird special embedded hardware or something.
The app was just that (pointlessly) complicated to set up. Even in production it took quite a while, despite being automated by an installer and people having lots of experience installing it.
Edit: sorry about the duplicated text. I’ve fixed it.
I'm not a Java hater; I am a Java developer. I actually like the language. But the way this particular app was designed and configured, along with the external systems that had to be reconfigured for it to run (which could have been eliminated), meant that setting up the system for development just took a ton of trial and error.
This would have been easy on a Linux desktop: installing Java, Maven, an IDE, and changing some settings files for the corporate proxy and artifact repository etc is very basic for Ansible, Puppet or Chef. It's probably achievable on Windows or OS X, although I'm not sufficiently familiar with them.
Heavy customization of the IDE could be trickier; the configuration system doesn't necessarily stay very stable between releases. Maybe something that's part of a larger, well-designed system, like KDevelop, would be easiest.
Because it's hard to mandate a flag day where everything must be automated, a transitional strategy might be: document each dev setup so well stepwise that you can outsource it to the IT dept or other internal support org. Maybe the dev team can provide a screencast of it, for example. The support org would then have an incentive to automate it to replace manual work, along with a measurable payoff for it.
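For what it's worth, the Windows version is plausible too. A minimal sketch assuming Chocolatey is available -- the package names are the public community ones (a corporate feed may differ), and 'corp-settings.xml' is a made-up corporate Maven config:

```powershell
# Install a Java toolchain roughly the way Ansible would on Linux.
choco install -y temurin maven intellijidea-community

# Drop the corporate proxy / artifact-repository config into Maven's
# standard per-user location.
New-Item -ItemType Directory -Force "$env:USERPROFILE\.m2" | Out-Null
Copy-Item .\corp-settings.xml "$env:USERPROFILE\.m2\settings.xml"
```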
Software people always have this tendency to add more abstraction and machinery when they encounter complex abstractions and machinery. You only exacerbate the problem by doing this!
Less central management == more costs, right? You need more people to administer disparate systems.
Most corporations view IT operations as an overhead. What do you do with overheads.... you minimize them :)
That said, from a security standpoint, all those little disparate systems aren't really any easier to secure; in fact they're likely harder. It's just that the consequences of compromise may be lower.
Hang on: if he can boot his laptop, does it not follow that he has the necessary information to decrypt the drive?
Also (possibly wrongly), I trust my ability to remember a good passphrase more than I trust the TPM to not have bugs.
Most of us here can walk into most companies and engineer an end-to-end encrypted, least-access, zero-trust, MFA-authenticated network using strictly FOSS tools and methodologies. Question: who will let you?
No joke, OP wasn't exaggerating about how easy most of his pentests are. Most companies throw money at it, do risk analysis, and say "hmm, this is enough, a compromise is tolerable".
IMO, when it rains, it pours. Risk analysis only tells you what the risk is based on known data. Unknown unknowns will be your doom. Best to build things right even without an incentive.
They social-engineered their way into an unlikely scenario bound to raise suspicion.
The tests and methods were pretty successful otherwise.
Still, I cannot fathom why companies insist on the rickety tinkertoy that is Microsoft Windows.
That sounds more unrealistic than properly protected Windows systems, tbh. A new marketing employee hauling lots of laptops, and no one noticed? Like, people who work nearby might've noticed that?
Though I don't know how he managed to carry 30 of them in one go, or what he did with them after he left. I guess the number is a bit dramatized.