(The above is sarcasm).
Less than a month old? A single screenful, mostly Australian.
Kind of makes me want to take Geology there, sounds like a fun place.
Imagine this scenario: you maintain a network of web servers, database servers, file servers, etc. Together they generate a large website used by tens of millions of users every month. One day, during a cursory look at one of your servers, you notice something strange. Someone is logged in to your server. And they have a Russian IP address.
What do you do? Obviously, the first step is to log in to your edge routers and null route all of Russia. GTFO. Next: you've got an idle session on one server. What were they doing?
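For concreteness, "null routing" at the edge usually just means pointing the offending blocks at a blackhole route. A minimal sketch on a Linux router (the CIDR below is a documentation range standing in for a real allocation list, which would be much longer):

```shell
# Drop all traffic routed toward a hypothetical offending block.
# 198.51.100.0/24 is a documentation range, not a real Russian allocation.
ip route add blackhole 198.51.100.0/24

# Verify the route took effect.
ip route show | grep blackhole
```

Note that a blackhole route only kills traffic *toward* the block; to drop inbound packets as well you'd pair it with a firewall rule like `iptables -I INPUT -s 198.51.100.0/24 -j DROP`.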
How can you reconstruct what they were doing? bash history? maybe. Network forensics? Your network probably isn't recording every historical connection between servers—99.9999% of the time useless—but critical in this case. File system access? Your file system probably isn't logging every historical access—useless 99.99999% of the time—but would be really freaking useful in this case.
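For what it's worth, Linux *can* record that file-access history if you ask it to, via auditd. A minimal sketch (the watched path and key name are made up):

```shell
# Watch a hypothetical web root for reads, writes, and attribute changes,
# tagging matching events with a key we can search on later.
auditctl -w /srv/www -p rwa -k webroot-access

# Later, during an investigation, pull everything tagged with that key.
ausearch -k webroot-access
```

The catch is exactly the one above: it's noise 99.99999% of the time and costs real disk and I/O, so almost nobody leaves it on.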
So, you investigate their history, double check some database logs, check netstat, check lsof, and in the end, you really have no idea what they were doing at all. Our systems don't leave enough bread crumbs around to reconstruct even internal hostile activity, much less semi-intelligently prevent Google from indexing confidential information when it's accidentally left exposed.
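That investigation boils down to a handful of commands, none of which reach back further than the current boot or the attacker's courtesy in not wiping them. Roughly (the PID and username are placeholders):

```shell
# Who is logged in right now, and from where?
who -u

# Recent login history (reads /var/log/wtmp; trivially rotated away or wiped).
last -a

# What is the suspect shell actually doing? (12345 stands in for the real PID.)
lsof -p 12345      # open files and sockets for that process
netstat -tnp       # active TCP connections, with owning process

# Their shell history -- assuming they didn't unset HISTFILE or delete it.
cat /home/suspect/.bash_history
```

Every one of these is a snapshot or an easily-tampered log, which is the whole problem.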
I can't remember the exact quote, but: if you're reacting to a breach, it's already too late.
Plus, any notifications depend on actually instrumenting any monitoring or triggers or processing to even notice your "sensitive" content has been accessed out of context.
(and this is just web stuff. imagine how impossible it is to track who forwards your confidential emails or other internal documents around without your permission.)
Anyone who needs records of what has been accessed; in practice, larger companies and organisations.
> Plus, any notifications depend on actually instrumenting any monitoring or triggers or processing to even notice your "sensitive" content has been accessed out of context.
Yup. Hence a cron job automatically emailing its result (crude (or simple?) but it would work).
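A sketch of that crude-but-working cron approach (the log path, patterns, and address are all hypothetical):

```shell
# /etc/cron.d/sensitive-access-report
# Once a day, mail any hits against paths that should never be fetched
# from outside. Crude, but it closes the "nobody noticed" gap.
0 6 * * * root grep -E '/confidential/|/internal/' /var/log/nginx/access.log | mail -s 'sensitive-path access report' admin@example.com
```

One wrinkle: `grep` exits nonzero on no match, so depending on your cron setup you may want to suppress the "failed job" noise on quiet days.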
> (and this is just web stuff. imagine how impossible it is to track who forwards your confidential emails or other internal documents around without your permission.)
I don't have to imagine that. This is why DRM exists; document/knowledge management systems should have the ability to allow access to information but not further dissemination. There's still the user education aspect though (and users don't like change...).
Oh, and the insistence on wanting to use external services like Dropbox... gah. "But, but, everyone else uses it!"
But we live in a new world. A world of BYOD and now, in 2015, Bring-Your-Own-SaaS. Employees put content up on company platforms, on third party platforms, on high heel platforms.
The problem of solving data privacy at a _competent_ level across every organization is intractable with so many "just do whatever you want" vibes in the air.
Now, that obviously doesn't happen everywhere, but it happens everywhere until it doesn't. Biggest offenders are usually non-technical offices: sales using 8 hosted platforms for metrics, email, surveys, project management, job hiring, etc. All impossible to actually control at any sane level outside of 340 UI clicks of the mouse across webby webby land.
tl;dr give up and go live in a cave for the next 30 years until all this gets sorted
When you decide to buy something for $x instead of paying someone who knows what they are doing to implement it to proper standards for $5x, it shows in things like this.
Sure, it's weak, but at least it won't be accessible through Google.
Seems like reason enough to avoid ext: and/or be ignorant of it.
(Thanks, now I do.)