Then again, I didn't expect much; their MSSQL in prod had SA/SA credentials active.
A few brave companies have tried to put their FTP systems behind VPNs, but the momentum is hard to overcome. What is more popular is firewall rules that only allow large blocks of IPs owned by other vendors they deal with. It is good in theory, until you see how large/diverse some of these blocks are (e.g. all of AWS's Eastern data center).
It was a very loud wake-up call seeing what inter-business stuff looked like. Security-wise, it is the wild west, or a flashback to the 1990s.
On HIPAA PHI.
(I know HIPAA doesn't actually mandate 2FA, but it's recommended by many best practices and guides.)
Apparently some tech folks don't like the inconvenience of 2FA.
Many of the recent attacks I've seen simply bypass it altogether in favor of phishing or other traditional techniques.
Clients (Ansible?) simply don't work with it, or don't do it well, which leads to hacks that undermine your 2FA deployment anyway: rogue admins opening reverse tunnels to allow file transfers, webshells, etc.
EDIT: Oh, and the network controllers that ran them were uniformly updated and managed with fully open "admin" username no-password telnet and ftp services. IoT insecurity began a looooong time before the term IoT even existed.
How could audit, both internal and external, not find this? 2003 to today is 16 years. Audit is a last line of defence and certainly not to be relied upon as a buddy to catch your errors. But... how? This is a major financial institution in the most developed country in the world (the clue's in the name). It should subscribe to the highest integrity and tightest scrutiny. This seems an opportunity for both internal and external auditors to tighten their game.
Outside of audit, surely an employee might have noticed? Was there no formal method to speak up without fear of recrimination? According to Wikipedia there are eighteen thousand employees. Did no one ever notice?
This seems an organisational failing, not a technical one.
These things are highly related to what’s going down in a thread from yesterday (about “shitty projects”).
I’m sure these guys spend many millions each year on security products, but either the people in the know on the tech side are ignored, or they have no competencies left.
In the thread I mention above I actually posted about my general experience at a major insurance player.
A concrete example:
We were making changes to custom software, and as there were concerns about bandwidth requirements and latency, I took it upon myself to figure out what a specific process looked like from the business perspective.
In short, in the middle of the workflow, customers' journals were written to CD and mailed to physicians. Encryption? Eh, no... Any process in place to ensure safekeeping and return/destruction? Uh, forget about it...
This was in the time when a lot of these “lost usb devices” and hacked systems seemed to pop up daily.
I obviously raised this with the security team, the security officer and the business unit.
No one wanted to touch this finely tuned business process.
It felt like I was working at Fawlty Towers.
Again, the line companies have drawn between business and tech, “‘cause tech is not core bidniz”, will haunt a lot of big players for years to come.
That's a manually initiated transaction done internally and should be a red flag to anyone. Data outside of the organisation is data with no control. You could keep escalating this. That's an example of no 'speaking up' channel. If a channel to escalate is missing or poorly implemented, frauds will happen by internal or external agents. The process doesn't sound finely tuned at all.
What I’m saying is that in spite of having, in a sense, all the resources at their disposal, this process was chosen by the business, for the business.
An encrypted on-line service could, and should, have been implemented. But being far from tech & dev, the business chose a process matching their competencies.
Messing with this several years in, and trying to digitize a process obviously in need of it, is met with much resistance.
Another gem of a process:
Many (like hundreds of) employees needed personal printers. But why?!
- Printing claims from the “modern” client/server system
- Pinning an also-printed bar-code to the pages from step one
- Scanning these into software that reads the bar-code and adds them to queues for mainframe processing
D/A -> A/D? Huh?!
Holy cow! I almost fell off my chair...
And the inherent security risks in play here, not to mention acres of forest consumed over the years. My mind is boggling...
Am I actually living my working life inside a Dilbert strip?! It’s not even funny, because it’s true.
What I’m saying is that many large corps are anything but in tune with tech.
It’s gonna cost ’em in the long run.
I recently opened a new bank account in the UK and chose a 'challenger' bank. The process was secure and very smooth, and the customer support very nice. They have no branches. This is regulationtech, not so much fintech, and challengers are coming from all sides, including in insurance. I wish these challengers well; having been on the inside of incumbents, I'm just left scratching my head asking "Why?".
I’ve been out of that game for a few years and have no ambitions to make a career for myself at such a place.
In a very big, top-down org, ponder the following:
Granting, in a specific scenario, that I’m right: this “whatever” is a disaster waiting to happen, or possibly an already flaming disaster. Heads have to roll.
Someone always has to take the blame, as this will most likely affect someone's budget or set goals.
It might have profound effects on the current “1.” or “.One” consolidation & synergy tech project that management is giving its misdirected focus at the moment.
The “1Whatever” projects usually have bizarre amounts of $$$ attached, and end up holy cows.
If you’ve worked at big enough companies, you know about the “whateverOne” projects I’m referring to!
It is important to escalate what doesn't seem right. Sometimes that means email after email after email (written record) and that if it still doesn't smell right to keep pushing. Ops was a strange place, but 500 emails per day is no longer a challenge.
I commented that this is an organisational failing rather than a technical one, as a debate about UUIDs seems to be missing the point: people could have been aware something was not right but did nothing, weren't allowed to do anything, or had their concerns drowned in the organisation.
These projects often bear a description such as “ProgramOne”, “Platform1” or 1SomethingAwesome, and are of a “bite off more than you can chew” character.
At least at three class-leading companies I’ve worked at, all with 90,000+ employees.
It’s just my disillusionment shining through! :)
I believe we agree: the future holds a merger of tech with business, and what I’ve stated above are org failures. That’s true.
IT must not block business, it should expose opportunities and be inherently secure by convention.
Completely agree. And regulators are pushing for this. In retail and SME banking and financial services the future is here, in Europe, but it has yet to gain traction and public trust; that's coming quickly. That will be 5 years; change takes time. A long journey for VC money, but not too long, and they will see returns.
Asia will be slow. NA perhaps slower still. AU might pick up the ball, but NZ will be faster if they choose. SG will lag HK because of technical debt in SG. I'm Asia-based, so not much idea on South America. South Asia is hand-tied by regulation, mainly currency restrictions. The regulators above will be providing markets with the best retail and SME financial products. A hot place to be; Europe probably hotter because of PSD2.
There’s probably 90% lower hanging fruit than blockchain, if you know what I mean.
Throwing inventive projects at failing orgs will most likely fall flat.
I understand many of the challenges though, and no true recipe for change exists.
The closest I can think of is “letting go”.
I mean, you have people hired for a reason!
If you don’t trust them to do their thing, who’s the one who hired them?
Needless to say, I'm making moves to get the hell out of there.
The market forces those orgs to start offering services online. They run those (relaxed-security) services inside their intranet, so they start poking holes in their firewall. The next decade is not going to be pretty in that regard.
Interesting skill set: COBOL, DB2, JS, Angular.
Anyone conceivably responsible for ignoring the developer's complaint should be on trial right now.
Didn't know this was happening in your organization? Fuxk you, go to prison.
That doesn’t necessarily mean that this hole has existed that long.
1. Using sequentially incremented integer sequences as object IDs, and
2. Failing to protect sensitive data using some kind of authentication and authorization check.
This is becoming a trend with data breaches. Several of Krebs' other reports on behalf of security researchers were originally identified by (trivially) walking across object IDs on public URLs.
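To make concrete how low that bar is, "walking" object IDs is nothing more than a loop. A minimal sketch against a purely hypothetical, made-up unauthenticated endpoint (URL and parameter are illustrative, not First American's):

    # Minimal sketch of ID enumeration; the endpoint is hypothetical.
    import urllib.error
    import urllib.request

    BASE = "https://example.com/records?id="  # made-up unauthenticated endpoint

    for record_id in range(1, 101):  # every record is one integer away from the next
        try:
            with urllib.request.urlopen(BASE + str(record_id)) as resp:
                print(record_id, resp.status, len(resp.read()))
        except urllib.error.HTTPError as err:
            print(record_id, err.code)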
My cynical take is that Krebs couldn't go public before this afternoon because First American wanted it to hit the news at an opportune time, then get ahead of it with their own messaging. Krebs got in touch with First American on Monday May 19th. The story is only just breaking now on a Friday afternoon at 5 pm; markets are conveniently closed for the weekend.
I expect them to issue a hollow PR statement about valuing security despite being unable to act on security reports until an investigative journalist threatens to go public.
It was an absolute nightmare. Maintenance was a nightmare: you're constantly having to generate or replicate these things, which adds an extra layer of complexity to everything, almost always unnecessarily.
It's also extremely bad for db performance: it causes massive page fragmentation, indexes become useless almost straight after rebuilding them, etc.
For almost everything, sequential int IDs are fine. It's the things you expose to users that you need to be careful with; don't use the primary key to access those. Add another unique key to them, but keep the int id in there for the db to use and for your own use.
My lesson was to go back to always using int ids, and on a few objects have a separate unique key column to expose to users for sensitive stuff.
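Something like this, sketched with sqlite3 for brevity (table and column names are illustrative): the database keeps its cheap sequential key internally, and anything user-facing resolves through a separate random, uniquely indexed key.

    import secrets
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE invoices (
            id           INTEGER PRIMARY KEY,   -- internal, sequential, join-friendly
            public_id    TEXT NOT NULL UNIQUE,  -- the only key exposed in URLs
            amount_cents INTEGER NOT NULL
        )""")

    token = secrets.token_urlsafe(16)
    con.execute("INSERT INTO invoices (public_id, amount_cents) VALUES (?, ?)",
                (token, 12900))

    # User-facing lookups go through the random key, never the int id.
    row = con.execute("SELECT id, amount_cents FROM invoices WHERE public_id = ?",
                      (token,)).fetchone()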
Most applications don't use UUIDs and many of them are fine and I definitely wouldn't ding an app for using monotonic IDs, but I'm increasingly thinking that it's worth praising UUIDs more.
To illustrate this for you, let me turn it around a bit. Is it security by obscurity if the only thing stopping someone from logging into your account is knowing your password?
Security by obscurity is when you (for example) roll your own cryptosystem and rely (in whole or part) on the secrecy of your new-fangled algorithm to save you. That is unsafe. But if you're saying high-entropy strings shouldn't be the only barrier to authentication, you're throwing out half a century of complexity theoretic cryptography.
I can see your point. If UUIDs are handled in such a way that they are discoverable by anyone, they are not enough to make the references secure.
I think the point tptacek and others are making is that this is an instance of the defence-in-depth principle, though. In scenarios where UUIDs are not simply discoverable, using UUIDs is inherently more secure than using a monotonic ID, simply because the monotonic ID can be easily guessed. Yet they are still not enough in isolation, and you should additionally be using proper access control (due to eventual leakage of particular UUIDs in emails and such).
If I can see in this HTML page that your reply is /reply?id=12345, then it doesn't matter if Hacker News uses integers or UUIDs, if there's a bug in /edit?id=12345 that just lets me edit it without the appropriate security. If we say that UUIDs always make everything inherently more secure, we're doing everyone a disservice.
Now, the original discussion was about (1) discovering for read, and not about (2) escalating a read to a write. But if anyone reading this mistakenly takes from it that UUIDs are the way to solve these problems then they will go on optimizing for (1) at the expense of (2).
That's been bouncing around at least since I noticed it on /., which was a couple of decades ago.
For an elegant solution to this problem, check out Twitter's Snowflake.
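Roughly: a Snowflake ID packs a millisecond timestamp, a machine id, and a per-millisecond sequence into 64 bits, so IDs stay time-ordered (and index-friendly) without a central sequence. A toy sketch of the published layout; treat the details as illustrative, not Twitter's exact implementation:

    import time

    TWITTER_EPOCH_MS = 1288834974657  # Twitter's custom epoch
    MACHINE_ID = 1                    # assumed: uniquely assigned per node (10 bits)
    sequence = 0                      # would increment per ID within one millisecond (12 bits)

    def snowflake():
        ms = int(time.time() * 1000) - TWITTER_EPOCH_MS  # ~41-bit millisecond timestamp
        return (ms << 22) | (MACHINE_ID << 12) | sequence

    print(snowflake())  # sorts by creation time, unlike a random UUID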
> Do not assume that UUIDs are hard to guess; they should not be used as security capabilities (identifiers whose mere possession grants access), for example.
HN discussion: https://news.ycombinator.com/item?id=10631806
In this kind of system you can also generate deterministic UUIDs, which are useful for idempotency (e.g. the same event can be recognised as a duplicate).
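With Python's uuid module that's uuid5, which hashes a namespace plus a name; the namespace URL below is made up:

    import uuid

    # Deterministic: the same namespace + name always yields the same UUID.
    NS_EVENTS = uuid.uuid5(uuid.NAMESPACE_URL, "https://example.com/events")

    first  = uuid.uuid5(NS_EVENTS, "order-1234-created")
    replay = uuid.uuid5(NS_EVENTS, "order-1234-created")
    assert first == replay  # a replayed event is trivially recognised as a duplicate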
That being said, I'm a little surprised to hear about the complexity. Are you able to share which DB/stack you were using? This functionality should be natively supported at two distinct abstractions: your programming language and your database.
But it's not just the support that's the problem. You're testing, you need to switch category, and you can't just change a 1 to a 2. You have to go find what random uuid the category was assigned. You can't just go into the DB and add a new line; you have to open a UUID generator. You can't just quickly add a foreign key relationship; you have to look up the UUID. And a ton of other little annoyances.
Actually, categories are an excellent example of something that shouldn't be a UUID; they're supposed to be discoverable.
I think my present project has UUIDs on the user, company, invoice and payments tables, but still ints as the primary key. Everything else isn't worth it. There's a merchant table, but again, they're all supposed to be discoverable (and aren't editable by the merchant themselves).
I also generally implement controller level security that checks access to the root object being returned by default, so I can't really make a mistake exposing an unauthorised object. There's an occasional controller where I've made a conscious decision not to implement that level, generally actions that allow both authenticated and unauthenticated users (e.g. viewing merchants or categories).
The keys will be easier to guess again, but if all you have to do is guess a primary key to get access to the underlying data, something else isn't right anyway.
It's not about using hard-to-guess UUIDs, but restricting access to the underlying data.
Then of course there is the issue that email is for the most part unencrypted (or encrypted without validating certificates).
And a side note: I wouldn't trust that the PRNG for your UUIDs is cryptographically secure. That's not a part of the spec.
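If the identifier is doing security work, it's safer to pull it from a CSPRNG explicitly rather than trust the UUID library. (CPython's uuid4 happens to use os.urandom, but the spec doesn't require that of implementations generally.)

    import secrets

    # Unlike uuid.uuid4(), the secrets module is documented to be
    # suitable for security tokens regardless of implementation.
    token = secrets.token_urlsafe(32)  # 256 bits from the OS CSPRNG
    print(token)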
I thought they were a bit of a hack to raise the bar a touch, in which case the crypto security properties of that function aren't interesting; the ergonomics are.
That provides IDs that are both opaque and, if you want, user-friendly.
(disclaimer: I wrote it.)
And yes, Google also posted a sitemaps file (or rather, 50,000 sitemap files) with all profile IDs. But that was last marked updated in March 2017, for some reason. Being able to validate that would have been nice.
But as a mitigation against blind bulk scrapes, a useful tool. I'd consider that one of G+'s good design elements.
The typical web app has the concept of a validated user session per request. How hard is it really to
Select ... From Documents where documentid = ? and userid = ?
Every web framework that I am aware of lets you add one piece of middleware that validates a user session and won't even route the request if the user isn't validated.
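As a sketch in Flask (any framework with a before-request hook works the same way; lookup_session and db here are assumed helpers, not real APIs):

    from flask import Flask, abort, g, request

    app = Flask(__name__)

    @app.before_request
    def require_session():
        # lookup_session is an assumed helper mapping a cookie to a user id.
        user_id = lookup_session(request.cookies.get("session"))
        if user_id is None:
            abort(401)  # unauthenticated requests never reach a route
        g.user_id = user_id

    @app.route("/documents/<int:document_id>")
    def get_document(document_id):
        # The query is scoped to the session's user, so a guessed ID returns nothing.
        row = db.execute(
            "SELECT * FROM documents WHERE documentid = ? AND userid = ?",
            (document_id, g.user_id),
        ).fetchone()
        if row is None:
            abort(404)
        return dict(row)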
The other commenter's point about leaking information is also correct. In the finance industry one of the basic tricks to obtaining alternative data is to scrape it from private APIs which expose sequential IDs corresponding to a source of revenue. For example, a publicly traded car company might have its revenue extrapolated from an open API which sequentially increments an ID every time a vehicle is sold. Research groups will reverse engineer mobile apps from companies with only one or two dimensions of revenue, find the private API endpoints (reversing request signing as needed), and then look for object IDs which can be thrown into a timeseries on a quarterly basis.
Generally speaking the risk and compliance department of a hedge fund disallows this kind of data if it's gathered from an actual security vulnerability (e.g. leaks PII). It needs to be "only" a neutral information side channel without sensitive data, so that doesn't really apply in this specific scenario. But it does apply for people considering using integer IDs for user-facing APIs.
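The arithmetic behind the timeseries is deliberately trivial, which is the point. A toy sketch with made-up numbers:

    # Sample a sequential public ID at two points in time, then extrapolate.
    def quarterly_rate(id_then, id_now, days_between):
        per_day = (id_now - id_then) / days_between
        return per_day * 91  # roughly one quarter

    # Made-up observations: the ID grew by 27,300 over 30 days.
    print(quarterly_rate(1_000_000, 1_027_300, 30))  # -> 82810.0 units/quarter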
I couldn't say it better myself when I'm speaking to management that makes these kinds of decisions. Now I can quote throwawaymath verbatim to drive the detailed point home.
The idea pre-dates web APIs by many decades :-)
- It exposes the count you have of a particular item
- It exposes your growth rate of those items
- If a developer accidentally breaks your authentication (or somebody hacks it), it becomes trivially easy to download all your items very quickly
And it isn't like using a randomized token is hard. In the most common implementation, it is just one additional column that gets filled with a random string and an index on the column.
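Retrofitting that onto an existing table is a couple of statements plus a backfill. A sketch in sqlite3 syntax (table and column names illustrative; the database is assumed to exist):

    import secrets
    import sqlite3

    con = sqlite3.connect("app.db")  # assumed existing database with an items table
    con.execute("ALTER TABLE items ADD COLUMN public_token TEXT")
    con.execute("CREATE UNIQUE INDEX idx_items_public_token ON items (public_token)")

    # Backfill every existing row with its own random token.
    for (item_id,) in con.execute("SELECT id FROM items").fetchall():
        con.execute("UPDATE items SET public_token = ? WHERE id = ?",
                    (secrets.token_urlsafe(16), item_id))
    con.commit()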
If they could somehow change your code, all hope is already lost.
But I do agree that it does allow someone to determine rate of growth, which would be valuable more from a business intelligence side than as a privacy violation.
The larger issue is that a developer forgets to add the “and userid = ?”
I guess the workaround for that is to have a database that ties user authentication to records in the table/object store directly, like DynamoDB or S3.
So the developer may think it is safe to say: select value from stock_positions left join accounts on accounts.id = stock_positions.account_id left join user_accounts on user_accounts.account_id = accounts.id left join users on users.id = user_accounts.user_id where users.id = session.user_id.
Safe, right? We checked the user id. But then, clicking on the position to drill in on the position data, they just select * from stock_positions where stock_positions.id = params.stock_id... there's no "and stock_positions.user_id" on that table, and the developer might be too lazy to spin up the entire join again, especially if you don't need account data for this view. Whoops, suddenly a vulnerable page query.
I imagine there are other ways to screw up. Like insecure cookies, and just checking cookie.userid, ah yes, you're the right user. Whoops, didn't realize cookies could be spoofed.
But you don’t do cookie.userid.
You send the username and password to an authentication service which generates a token with a checksum. The token along with the username and permission is cached in something like Redis.
On each request, middleware gets the user information back using the token.
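Sketched with redis-py; the checksum scheme and key layout here are assumptions for illustration, not a reference design:

    import hashlib
    import hmac
    import json
    import secrets

    import redis

    r = redis.Redis()
    SIGNING_KEY = b"rotate-me"  # assumed: loaded from a secret store in practice

    def issue_token(username, permissions):
        raw = secrets.token_urlsafe(32)
        sig = hmac.new(SIGNING_KEY, raw.encode(), hashlib.sha256).hexdigest()[:16]
        token = raw + "." + sig  # checksum makes forgeries cheap to reject
        r.setex("sess:" + token, 3600,
                json.dumps({"user": username, "perms": permissions}))
        return token

    def user_for(token):  # what the per-request middleware would call
        raw, _, sig = token.rpartition(".")
        expected = hmac.new(SIGNING_KEY, raw.encode(), hashlib.sha256).hexdigest()[:16]
        if not hmac.compare_digest(sig, expected):
            return None  # forged token: rejected without touching Redis
        cached = r.get("sess:" + token)
        return json.loads(cached) if cached else None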
> Yet another security vulnerability caused by...
I mean, yes, but these are also some of the easiest vulnerabilities to miss even with out-of-the-box static analysis (code scanning and data analysis), automated dynamic analysis (pentests [edit to clarify for tptacek: automated pentests]), and a basic code review process. They're usually identified in live environments during manual penetration tests or, in more security-mature environments, with custom static analysis checks and custom linting rules.
As for best-case prevention: accomplished generally architecturally, e.g. language/framework decisions that enforce secure coding practices by design, or implementing certain patterns in development which whisks away some of the more risky coding decisions from engineers who may not be qualified to be making them, such as mandating authn/z and limiting exceptions only to roles and change processes qualified to make them. Checks including linting for specific privacy defects (direct object referencing using sensitive data or iterative identifiers as opposed to hashes/guids/etc) can help with catching them during development, and as you might've guessed, such checks tend to be custom for a given environment rather than out of the box.
I distinctly recall a card issuer whose name starts with a C in the United States having an http endpoint which allowed for enumerating account details by iterating full PANs (16 digit card numbers)... around a decade ago. Here we are today, and you're seeing the same bugs continue to arise.
Mitigation options in organizations with immature security practices typically rule out remediation, simply because the defects' existence might not be known. Practices traditionally reserved for defense-in-depth may need to be relied upon instead (think monitoring web requests for anomalous behaviors and blocking traffic when detected), rather than trusting that one can fix all the defects. Even then you'll still lose a few records... but that might be the only solution available to you as a CTO, CIO, or CISO, simply because of resource constraints and bureaucracy in an entrenched org, e.g. in the financial or insurance space.
tl;dr: these defects are among the harder ones to catch for legacy applications especially in environments with weaker security postures, and they're as old as time. What I'm saying is that as much as we can call companies out for making these mistakes in hindsight, their existence in larger legacy systems is to some extent inevitable and must be managed in other ways.
This is absolutely not the kind of vulnerability that pentests tend to miss; rather, they're the first thing pentesters check for. You can miss bugs like this when they're in obscure backend features and your client or team didn't document the project adequately --- though you still shouldn't, and that's part of the point of getting an assessment, to find stuff like that --- but you generally don't miss them in an assessment where the bug is literally "edit a number in a URL".
Web scanning tools will miss findings like this. But, regarding web scanners: see static source code security analyzers.
As for code review: a competently constructed application shouldn't be relying on developers to catch every possible instance where numeric ids are used individually. In modern web frameworks, it should be obvious when you're looking an ID up without doing an authorization check; for instance, in a Rails or Django app, you can simply regex for lookups coming off the ORM class rather than the appropriate association instance.
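A crude version of that check as a repo lint (Django-flavored; the pattern is illustrative and would need tuning per codebase):

    import pathlib
    import re
    import sys

    # Flags lookups that go straight through a Model manager by pk/id,
    # rather than through a scoped association like request.user.documents.
    SUSPECT = re.compile(r"\b[A-Z]\w*\.objects\.(get|filter)\(\s*(pk|id)\s*=")

    for path in pathlib.Path(sys.argv[1]).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if SUSPECT.search(line):
                print(f"{path}:{lineno}: unscoped lookup? {line.strip()}")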
In sum: I dispute much of this analysis.
People do miss things, even when they're things they shouldn't miss. Put 3 different test teams on the same application and you will get 3 overlapping but distinctive sets of vulnerabilities back. But this is not an instance of the kind of vulnerability that is hard to catch.
You're right; they don't. Which is why I called out automated dynamic analysis. I.e. the web scanning tools which you subsequently mentioned:
> Web scanning tools will miss findings like this.
> As for code review: a competently constructed application shouldn't be relying on developers to catch every possible instance where numeric ids are used individually. In modern web frameworks, it should be obvious when you're looking an ID up without doing an authorization check; for instance, in a Rails or Django app, you can simply regex for lookups coming off the ORM class rather than the appropriate association instance.
Right, which I also stated:
> As for best-case prevention: accomplished generally architecturally, e.g. language/framework decisions that enforce secure coding practices by design, or implementing certain patterns in development which whisks away some of the more risky coding decisions from engineers who may not be qualified to be making them, such as mandating authn/z and limiting exceptions only to roles and change processes qualified to make them. Checks including linting for specific privacy defects (direct object referencing using sensitive data or iterative identifiers as opposed to hashes/guids/etc) can help with catching them during development, and as you might've guessed, such checks tend to be custom for a given environment rather than out of the box.
A sibling comment makes the obvious point that no pre-auth endpoint should be touching this kind of data to begin with, which is another layer of "stuff you can just regex for".
That's fine, but I'd appreciate it if you just read the entire analysis next time. It shows that you respect the time people invest into constructing and presenting guidance, even if you don't necessarily respect the guidance itself.
Editing mine to match your edit... as if to make my point about reading the analysis in its entirety:
> A sibling comment makes the obvious point that no pre-auth endpoint should be touching this kind of data to begin with, which is another layer of "stuff you can just regex for".
Correct, something which I'd also stated:
> Checks including linting for specific privacy defects (direct object referencing using sensitive data or iterative identifiers as opposed to hashes/guids/etc) can help with catching them during development, and as you might've guessed, such checks tend to be custom for a given environment rather than out of the box.
> I do want to take every opportunity I can get to disabuse people about the effectiveness of scanners.
This entire exchange is frustrating because it's exactly what I said in my root comment:
> these are also some of the easiest vulnerabilities to miss even with out-of-the-box static analysis (code scanning and data analysis), automated dynamic analysis (pentests [edit to clarify for tptacek: automated pentests]), and a basic code review process.
I'm going to step away from my keyboard a bit; please forgive me.
I really appreciate this as this at least concludes that a miscommunication took place, thank you. I'll accept that there's likely a bit too much flourish to what I write for the sake of targeting nuanced clarity.
> If you think scanners suck too, we might just not have anything worth arguing about.
Largely yes, but I do think they have their place. I view them more as platforms to build upon or add to (e.g. custom data rules or enforcing the use of specific best practices) than generalized security salves, but as you'd pointed out, many of those objectives can also be achieved through much simpler means, e.g. just grep the code for things as a commit test.
That vuln has been an explicit part of the OWASP Top 10 since 2007...
Unlike other common web app vulns (e.g. XSS, SQLi), IDOR usually can't be fixed by a development framework (e.g. ASP.NET or Rails); it needs app-specific coding for proper authentication/authorization checks.
Good thing he didn't post this bug online after getting no response. I remember reading about someone who did that with an AT&T website a while back: he was sent to jail for simply incrementing an id number in the URL and talking about it on Twitter.
What are the odds they have access logs going back to 2003?
If debugging is turned off, it's entirely possible that they have been appending lines to the same log file for the last 20 years and haven't run out of disk space (which is what would cause them to notice). Say 200 bytes in the log per request, and even averaging 10,000 requests per day (probably more than they get), in 20 years that's 200 × 10,000 × 365 × 20 ≈ 14.6 GB, or only about 13 GiB.
It's also entirely possible they turned logging off or redirected to /dev/null in order to "be more efficient".
I did notice when I was reviewing my docs that they emailed links to unauthenticated copies of docs, but they were mostly public records so I didn't think twice about it.
So they have my Name, address, email, SSN, copy of ID, copy of check from my bank with account/routing on it and much more, all in the open apparently.
I just went through an SSO implementation with a small team for a large user base. It was a bigger project than we had anticipated, but nonetheless manageable. I can't fathom that a financial institution of that scale could be that lax with basic security. Wouldn't their systems be subject to some regulation and require some kind of audit on a regular basis? Is this a failure of auditing systems, as well as internal security or even basic IT?
Listen, until the C-level funds these programs properly and security is taken seriously by all, issues like this will forever be in the news.
I would be willing to bet their security team, like most, has a long list of security gaps they can't get fixed because of resource issues. Just hope they documented it, or it could fall on them.
Most coding classes just teach how to make things work in Mister Rogers' world. Secure coding is an elective! Most run the DevOps model instead of SecDevOps and only involve security after it is ready to go into production, no matter what flaws security finds.
Why are black box pentests still taking place? Because companies are required to have a pentest but really do not want testers to find things. Their goal is not to improve security but rather to check that box... we had a pentest.
C-level, this keep-the-lights-on budget you give Security/IT is costing you more than properly funding us would! Oh yeah, you put that $ into cyber insurance instead! Lol, let's see how well that works.
Not the fact that John Doe can get to John Doe2's stuff without authenticating? WTF
Sequential or not, if there's no auth I can run a scanner and get it all, so what the hell does that have to do with the price of tea in China?
Also, this GitHub repo maintained by OWASP seems pretty exhaustive. The cheatsheets directory has a lot of different vulnerability classes.
This "Insecure Direct Object Reference" was recently combined into the "Broken Access Control" category with a few others.
Who is coming up with these statements?
If you kept royally screwing something up for years that you claimed to be your "highest priority", then what can one expect from your normal lines of business?
is such a meme.
Things will continue this way until there are serious repercussions for entities carelessly handling data.
> We are currently evaluating what effect, if any, this had on the security of customer information.
It's downright dishonest to even say "if any": they were presented with concrete examples of leaking customer information; they don't get to wonder whether it had an effect on their security anymore.
The chickens will continue to come home to roost until people treat digital security as seriously as physical security.
Still not enough.
Do people take physical security seriously? It doesn't seem like it.
Anyway, when I was an undergrad in the 1990s and took a computer security class our professor (Gene Spafford) talked about security being primarily an economic question. And that is generally how security, both physical and digital, has been treated since forever. And how it will always be.
The economic and physical damage caused by poor digital security is a rounding error compared to everything that happens in the real world.
As long as you understand that the following link is at least partly tongue-in-cheek, you may find this to be an entertaining read:
Cybersecurity is not very important
What investors should be concerned about is the reputational risk and loss of business to competitors that are able to demonstrate more transparent and secure practices.
Maybe laws for monopolies, but not for competitive markets where consumers have the choice to shop around.
For a competing business these dumps are a powerful marketing tool. It’s a direct client list. They just have to be able to show that their security is better.
Laws would make things so much worse for everyone. The key is to keep hacking away at all systems. Break things apart and build them back together. And win customers by showing that you can!
Should security solution vendors be held to account for failing to live up to the bold claims they make?
For example, if I install an application whitelisting system, but whitelist too much, pay no attention to logs and alerts, or never patch it, then that's not really the vendor's fault.
That's my line :):
"It forces you to think about data as a liability, rather than an asset and that particular mindset is a good one to have when you are dealing with end user data."
It stood the test of time rather well. Now we see a US push for a similar law, and articles such as this one will hopefully cause that to arrive sooner rather than later.
Absolutely agree, and to further it I think this data liability goes beyond PII. Any data which could be used nefariously if publicly available is a potential liability if leaked - NDA'd documents, product roadmaps, source code of closed source software, private keys, pre-results earnings, the list is enormous.
With the shift in the economy from physical goods to IP, I don't see why laws for physical goods storage, warehousing and safekeeping (e.g. safety deposit boxes) won't be updated to include the digital equivalents in the not-too-distant future. And at that point I wouldn't want to be a Dropbox, EC2 or DigitalOcean unless I was very, very sure of my security systems, never mind being a Facebook or Google.
Unless/until this breach results in a large financial hit to the company (possibly via a class action suit) I doubt it'll have any impact and I'm not even sure a class action suit could show damages without evidence of misuse.
I suspect this is going to hit First American pretty hard.
Whereas the Equifax situation was an intentional breach by attackers, and it can be assumed that the breach was used to capture information for later sale.
I suspect that First American knew about this earlier this week and intentionally did a garbage dump on a Friday evening on Memorial Day Weekend. Maybe trade down a few tenths of a percent on Tuesday and their CISO will probably get axed. Nothing to see here, move along.