This stuff is a lot more common than we like to acknowledge.
Unrelated tale of interest: I did my first web pentest in 2005 (I've been doing security work since '94-95, but I was a developer for several years before 2005 and missed the start of web pentesting). And, I shit you not, the very first input I tested --- a login form --- had a ' OR ''=' SQL injection on a plaintext password lookup. It warped my expectations for what to expect on web pentests for years.
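For anyone who hasn't seen the classic ' OR ''=' trick up close, here's a minimal sketch (table and function names are made up for illustration) of why naive string-concatenated SQL falls over, and how a parameterized query fixes it:

```python
import sqlite3

# Toy users table for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Builds SQL by string concatenation -- the classic mistake
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: user input stays data, never becomes SQL
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# The injected password ' OR ''=' closes the string literal and adds a
# tautology, so the WHERE clause becomes
#   name = 'alice' AND password = '' OR ''=''
# which (since AND binds tighter than OR) matches every row:
print(login_vulnerable("alice", "' OR ''='"))  # True -- login bypassed
print(login_safe("alice", "' OR ''='"))        # False -- rejected
```

The only real fix is the second form: let the database driver bind the values, so an attacker's quotes can never escape the string.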
You'll generally get the PUBLIC VLAN, which can only see most of the internet via a transparent Squid proxy. I also have a THINGS VLAN for, well, general IoT stuff, and a SEWER VLAN for things that scare me. Those last two have SSL bump enabled.
I'm sure everyone has at least 10 802.1Q VLANs defined on their home network.
I had a Belkin AP for a while, which was okay, but couldn't do anything fancy, like DD-WRT, and the microwave would knock it offline — even after the microwave was done. (It had to be power cycled.)
Today I have the big black Comcast all-in-one AP/router/modem, and it resets the WiFi password (and only the password, not the SSID!) any time it soft or hard resets.
> a SEWER VLAN for things that scare me
This is an awesome name.
> SSL bump
This is a MitM of SSL traffic, or something else? (And if the former, requires installation of a CA cert on devices on that VLAN?) (I found the Squid page for it, but wasn't really clear on it even with the docs.)
SEWER: I came up with the name at work after reading a Register article (probably scraped from HN) about yet another crappy IoT problem. I immediately renamed one of the VLANs there 8)
SSL bump is a Squid thing that does MitM for SSL, as you found out, and yes, you need to get the CA trusted to make it really transparent. Some devices don't bother checking the CA ...
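For the curious, a minimal ssl_bump setup in squid.conf looks roughly like this (Squid 4+ syntax; the paths and CA file name are illustrative, not from the parent's setup):

```
# Intercepting HTTPS port, signing with our own CA
# (clients on the VLAN must trust ca.pem, or they'll see cert warnings)
http_port 3129 intercept ssl-bump \
  tls-cert=/etc/squid/ca.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

# Helper that mints per-host certificates on the fly
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# Peek at the TLS ClientHello first, then bump (MitM) everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

(Older 3.5-era Squid uses `cert=` instead of `tls-cert=`.) The "peek" step is what lets Squid see the SNI before deciding to bump, so you can splice/exempt hosts you don't want to intercept.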
Please tell me that's sarcastic. :)
(that said, I've got 6-7 at least)
I don't know about everyone but I can safely say if they don't then they're certainly no friend of mine.
(This could be because other people don't like what happens when my laptop connects to their network and so won't become my friend.)
In my web development work, I inherit a lot of websites, services and sprawling architectures developed by small consultancies. Virtually every single time, the passwords are basic and often the same across all clients. I understand that it lowers their support burden, but this is a disaster waiting to happen.
If there's a lawyer's, dentist's or other small business website out there developed from scratch (not Wordpress, etc), it seems you're almost guaranteed a basic, guessable login.
And if so, what response did they get?
There are just too many questions to ask, and I'd love for every single one of them to be answered with honesty, but my hope at this point is dwindling.
Everything in the news about Equifax invites all the worst possible words one can use to describe a company.
Boss: Find all the holes.
Engineers: (begin to iterate through a list instantly over the mic)
Boss: Did we capture all of that?
Boss: Okay, everyone please make a list and we will prioritize them.
The next thing you know, other tickets like "my business application just crashed" came up and everyone started working on them. Sure enough, the manager and project manager collected the list (from 1 or 2 people out of the 10-15), and asked the tech lead to prioritize. The prioritization is now available, but either people argue over and over about how to change it, or people just look at the most expensive shit before doing the least expensive.
Security is expensive. But fraud/breaches at a CC company hit the wallet directly and hence it is relatively cheaper to invest into securing their infrastructure. With EFX though, there is no direct loss of revenue; it is the CC companies that are hit. Until now there was no directly measurable effect of their security practices and so it didn't incentivize any investment. And lastly these are old organizations with old systems and a lot of momentum, and again without a correcting force.
If someone was starting up a new credit reporting agency today can you imagine the security/compliance/auditing gauntlet they would have to run through to even open the doors? Very interesting events indeed.
> If someone was starting up a new credit reporting agency today can you imagine the security/compliance/auditing gauntlet they would have to run through to even open the doors? Very interesting events indeed.
My experience with compliance is not in the fin/banking industry, and perhaps doesn't apply to anybody at all. When I had to deal with SOX compliance, I just had to make sure audit logs were in place and exported somewhere safe and auditable, along with clear documentation about where things were, the roles and privileges of different user groups, how accounts are created/terminated/updated, whether we have backups or not, etc.
If you say developers must have access to this production S3 bucket, totally cool, as long as the manager responsible for the system is aware (having it written down somewhere is even better). The auditors don't care about the actual implementation. If your internal site's superuser login is admin/admin, they don't care. If you allow public access to a secret portal, they don't care. Your boss signs off on the risk, and the auditor is happy to move on to the next item. Auditors don't care how many times a day you back up, or which copy is retained for 7 years as per SOX; as long as you did everything SOX requires, you are good.
YOU DO NOT tell auditors how your system actually works, because that's digging a grave for yourself. You sell your system to the auditor like you're speaking to a customer, with as little information about the backend as possible. Call it minimizing the impact zone. If your system runs on five different DBs, has ten microservices, a couple of monitoring and alerting tools, and a dozen other things, well, please do not tell them all of the above. Choose what you can present and what you can defend. Limit what you show.
The auditor just wanted to see if there were logs and whether management had any clue what was going on. Don't volunteer secrets they wouldn't think to question (e.g. do not tell them there is a publicly accessible secret portal). Communicating with an auditor is a skill that takes real care, not something to be taken lightly. If you encounter a very technical auditor, yes, you'll face a tougher interview, but they are not there to judge your incompetence; they'll just keep asking questions until you spill a secret, and then, HAHA, they have something to write up.
For an institution like Equifax, there are too many holes to cover at once, so they will limit exposure as much as possible. I'd say that being a credit agency also gives them leverage, although that's just my conspiracy theory: all four agencies work with each other to make sure no one's credit is affected by a compliance report... No one wants to piss off a credit agency.
If this can manage to stay in the news cycle for several more days, something may actually be done. Otherwise, (and this seems far more likely) the world will move on, Equifax will rebrand, and the cycle will continue...
Problem is, though, it only takes one incompetent person - or even one person making a mistake one time - to open the door for a massive breach. Requiring perfection of humans in order to maintain security... that's not a workable approach.
Yes, this was inexcusable. But also, our current approach to security is fatally flawed.
In the absence of effective governance and process, sure. But half the point of them is to ensure the single-actor miscarriages get caught and handled.
The real problem today is that the maturity of an organization's governance/process is not directly linked to the sensitivity of the data being protected. Instead it's linked to the size and age of the corporation, and the amount of resources (people, money) available to expend on them.
For systems of this kind of scale, for organizations of this kind of size, handling data of this level of sensitivity, you'd expect a huge bank of governance and process designed specifically to guard against single-actor breaches. Things like active automated monitoring for change to the network and systems, full change control/approvals process, system certification, risk analysis, penetration testing.
These things are hard to implement, take time, and are a significant investment that lacks easily measurable benefits. It doesn't help that this stuff is seen as 'not sexy', either.
Back in the '90s, hackers used to get criminal charges for getting into secure systems. Now tech companies are a little more intelligent about it and pay bounties to hackers. It should go all the way in the other direction, though: the responsibility for getting hacked should fall on the company that got hacked, for their shit security.
This has been going on for a few years at least from my perspective. Shaming doesn't work well enough IMO, but maybe that's because I haven't seen or heard from companies who got shamed and then changed. Anthem and Target hacks were both high in the news, but both settled all of their lawsuits.
Every time there's a big-name company in the news, all the various security firms seem to go to town seeing who can break into the rest of their systems first. Regardless of intention, it still seems potentially criminal.
I know I've considered it.
Veraz is a piece of shit also (I'm Argentinian), much like the US counterparts.
Most of the Equifax subsidiaries used to be different companies, and when I worked there they all had different technology stacks.
It's even worse than in the U.S. in some cases - in Uruguay, the ONLY credit bureau was acquired by Equifax, so if it goes down, so does the financial industry.
On the other side, SSN equivalents (DNI - National Identity Document in Argentina and CI - Identity Card in Uruguay) are NOT treated as a secret in the same way as they are in the U.S. - almost every company has access to them, and you can even request access to an API to look them up (as noted by other commenters - https://news.ycombinator.com/item?id=15234806 ).
Uruguayan cards now have chips and biometric facilities too.
Still, more gross negligence on Equifax's part. The Uruguayan operation was run pretty tightly, I was really surprised to learn that in the U.S. and Argentina they were not.
Additionally, the Twitter account claiming responsibility for the webshell/compromise (@real_1x0123, on Friday, Sep 08) just protected their tweets. This will turn out to be much more interesting than it has been thus far.
Here's the login for their finance blog:
admin/password and admin/admin don't work on that one. It lets you keep trying passwords for a pretty long time, so it might be possible to brute force.
I haven't been able to get through any of the others with admin/admin either. Maybe someone is on the job.
I can't help thinking of how on-the-ball companies like Yahoo, GitHub and DreamHost are about security breaches for much lower-stakes information. This whole story is so pathetic. It makes me think the company is being run by people like my dad, who is barely capable of using a computer for YouTube and the news, and is constantly fearful and paranoid of "cyber" threats, but won't take the most basic steps to educate himself or take precautions.