We do this kind of work for startups, and we have a small roster of companies where we're on the hook for this kind of corpsec.
But then I think about what it'd be like to actually deploy and operationalize this stuff, and it gives me pause.
I have two big questions --- real questions, like, I have no pretense of having an answer and am speaking from ignorance --- about how this stuff works in practice.
The first is privacy. They're deploying this across a fleet of company laptops. People do all sorts of stuff on their corp laptops. I don't know enough about Dropbox's company culture to know whether people ever use their machines for personal stuff, but I do know that occasional personal use is an SFBA startup norm. They're collecting essentially a continuous bash history from every user in their fleet. How comfortable are engineers with that? I don't have a strong opinion, but it gives me enough pause that I'd be a little afraid to ask a client to do it.
The second is: what do you do with all that information? I see how this addresses malware response, and I see how this would be useful as a forensic archive for general IR. But as a day-to-day tool, connection-by-connection, file-by-file granular data from a mostly technical team seems incredibly noisy. Most engineers look like attackers: they make out-of-process connections to random backend resources, copy things around, and run new code.
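To make the noise problem concrete, the usual (imperfect) answer is baselining: keep an allowlist of known-good (process, destination) pairs and only surface the rest. This is a toy sketch of the idea, not any vendor's actual API; the event shape and the `BASELINE` contents are made up for illustration.

```python
# Toy illustration of baselining endpoint telemetry: flag only the
# (process, destination) pairs we have not seen engineers do routinely.
# The BASELINE contents are invented for this example.
BASELINE = {
    ("python3", "internal-api.example.com"),
    ("git", "github.com"),
}

def triage(events):
    """Return only the events that fall outside the known-good baseline."""
    return [e for e in events if e not in BASELINE]

events = [
    ("git", "github.com"),         # routine engineer traffic, suppressed
    ("curl", "paste.example.net"), # novel pair -> surfaced for review
]
anomalies = triage(events)
```

The catch, as described above, is that on a mostly technical team the baseline is enormous and constantly shifting, so the anomaly list stays noisy.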
I acknowledge right away that my vantage point is small startups, not organizations running at the scale Dropbox is.
I worked on the corporate infrastructure team at Dropbox. At the time, my team was responsible for installing and managing things like osquery on laptops, with the data going to the security team's tools, so I have thought about this. Speaking for myself: the Dropbox culture takes trustworthiness very seriously. "Be worthy of trust" is #1 on the list of company core values, and it is deeply embedded in the culture. That trust covers much of how the company operates, including privacy. Spying on employees is so against company culture that, even though I knew exactly what data was being sent, I trusted the security team not to use it for anything other than protecting me. The detection and response team is made up of the most security- and privacy-conscious people I've ever met.
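For readers who haven't seen osquery: its collection is driven by a JSON config of scheduled SQL queries against virtual tables, and the result diffs get shipped to the security team's pipeline. A minimal sketch follows; the table names (`shell_history`, `process_open_sockets`) are real osquery tables, but the particular queries and intervals are my own illustration, not Dropbox's actual config.

```python
import json

# Sketch of an osquery-style scheduled-query config. Each entry runs on
# an interval (in seconds) and ships result diffs to the log pipeline.
# The queries and intervals here are illustrative, not a real deployment.
config = {
    "schedule": {
        "shell_history": {
            "query": "SELECT uid, time, command FROM shell_history;",
            "interval": 3600,
        },
        "open_sockets": {
            "query": "SELECT pid, remote_address, remote_port "
                     "FROM process_open_sockets;",
            "interval": 600,
        },
    }
}

print(json.dumps(config, indent=2))
```

The first query is essentially the "continuous bash history" mentioned upthread, which is why knowing exactly what is collected matters so much for trust.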
I think the mindset of respecting coworker privacy is important for anyone in corporate infrastructure, including network and system administrators.
But I've worked for other big corporations too, without that trust. I think employees need to know that when they use corporate computers and communications systems, they aren't private. Don't do stuff you shouldn't be doing at work.
> what do you do with all that information? I see how this addresses malware response, and I see how this would be useful as a forensic archive for general IR. But as a day-to-day tool, connection-by-connection, file-by-file granular data
When I worked there, the security team reached out to me to ask about anomalous behavior, so there is active detection going on. That was a couple of years ago; I'm sure they have improved since then too. Keep in mind, they have a full-time detection and response team, in addition to all the other security teams.
I don't believe there was any intention of making Dropbox hard to uninstall. I think the intention was to make the process seamless for users, and there was a bug that caused a setting to get reapplied after a user changed it.
I was talking to someone just two days ago, who was tearing their hair out because their application's "Open" panel took something like ten seconds to open. Why? Because the Dropbox extension decided it didn't want to play along nicely. It's foolish to think that your software will not have bugs, and outright foolhardy to do this for so little benefit.
The kernel extension is another example. AFAIK it was added for Smart Sync (née Infinite). Infinite is amazing, truly amazing, and the only way to implement it is via a kernel extension. Microsoft tried to implement the same idea at the GUI level with OneDrive's "smart files" (aka placeholders), but removed the feature because the files didn't work in too many places, like via syscalls and from the command line.
Dropbox Infinite is kind of like the source control systems Google and Microsoft created to handle their huge repos. Microsoft's GVFS uses a file system filter driver (kernel extension) called GvFlt (or ProjFS). From what I understand, Google's Piper uses FUSE, which would be a third-party kernel extension on macOS.
My point is, these are technical achievements that provide seamless and intuitive user experiences; they aren't a betrayal of trust.
Given that you've put a lot of effort in this thread into weaving trust into your description of Dropbox's operations, I'm surprised you wouldn't trust an end user when they tell you that this is, in fact, a betrayal of their trust in Dropbox.
> Dropbox Infinite is kind of like the source control systems Google and Microsoft created to handle their huge repos. Microsoft's GVFS uses a file system filter driver (kernel extension) called GvFlt (or ProjFS). From what I understand, Google's Piper uses FUSE, which would be a third-party kernel extension on macOS.
I find that your examples really show a lack of understanding of why I'm frustrated by Dropbox's behavior. The products you linked are source control systems used by software engineers. Dropbox is aimed at nontechnical users. One of those groups understands what a kernel extension is, and the other one can't point their finger at Dropbox when their computer crashes. How is this not betrayal of trust? When a user installs an app they don't expect it to literally put its fingers all over the operating system. Not only does Dropbox look like any other application on the surface, it actively works to perpetuate this myth by spoofing operating system dialogs and prompts. If a doctor did a procedure on a patient that they didn't fully explain or even mention that they were doing they'd be sued for malpractice.
Also, I don't really like doing this, but you seem to be overly positive about Dropbox's work in this field. Do you work there, or have you ever worked there? Did you have a vested interest in this project?
They use Munki for software updates, which, just like you describe, gives the user a grace period where they can reboot at their leisure. Ignore it for long enough and it'll crank up the nagging, and eventually force a reboot.
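For anyone unfamiliar with Munki, the "nag, then eventually force" behavior comes from the `force_install_after_date` key in an item's pkginfo plist: past that date, Munki installs (and reboots, if `RestartAction` requires it) without waiting for the user. A sketch of generating such a fragment; the item name, version, and date are made up.

```python
import datetime
import io
import plistlib

# Sketch of a Munki pkginfo fragment. force_install_after_date is the
# real Munki key behind the forced-deadline behavior; the item name,
# version, and date below are invented for illustration.
pkginfo = {
    "name": "SecurityAgent",
    "version": "1.2.3",
    "RestartAction": "RequireRestart",
    "force_install_after_date": datetime.datetime(2018, 5, 1, 10, 0, 0),
}

buf = io.BytesIO()
plistlib.dump(pkginfo, buf)       # serialize to XML plist
plist_xml = buf.getvalue().decode()
```

Until the deadline, the user just sees escalating reminders and can defer the reboot to a convenient time.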
It talks about killing GUI processes. Is it common to run a display server on macOS servers?
It is fairly evident they are talking about Carbon Black.
Carbon Black's kernel module panics on 10.13, and when you use it for development workloads it eats up half of your CPU in kernel space, making the entire machine unresponsive.
All of the large SFBA tech companies have something like Carbon Black, and they proxy your DNS requests through OpenDNS. Every network interaction is logged, and so is whatever you do on your computer, personal or not. Whatever controls they have over security engineers looking at that info are probably not enough. A creepy stalker sysadmin is a tech industry scandal waiting to happen.
I've also heard rumors that at some companies they even give managers reports of what employees are doing on their computers, and the managers are told not to tell their reports. I also think Google records every keystroke. Those are rumors, so take them with a big grain of salt.
I don't like it, but if I want SFBA pay, I've come to accept there is no privacy on work computers. The only things I do on my work computer are HN & Twitter, things that are effectively public. Everything else goes through my personal smartphone. I have my own computers, and in the end it mostly means I have to carry 5 extra pounds in my backpack sometimes.
It has always puzzled me why anyone began to assume they had this privacy. I have been around long enough to see the token ring to Ethernet transition, and worked (formerly) as a network engineer trying to jam web-1.5 business through a fractional T1. Never mind your privacy: all that network throughput and CPU is paid for and hard-fought-for, and it costs a lot in CAPEX and OPEX. Getting to do a little online banking or email on work gear has always seemed like a short-term scam to me.
Can you put a bitcoin rig under your desk and use your employer's electricity to make money? Yeah, if it looks like a footrest, until the culture and policy catch up. This already happened with postage and long distance phone calls.
You never had a right to privacy on work computers. We all pretended it was a little perk for a few years, but the risk of letting you do personal stuff is no longer worth it for companies of a certain size.
While I agree with you, many countries with good labor laws do limit what employers are actually allowed to see of their employees' data.
Meaning if the company lets someone go based on data they captured, and it was the illegal kind, they had better be prepared to sign some checks.
For example, in Germany a company is not allowed to use how long someone is online to create any kind of productivity measure.
The problem is that many prefer not to bother with labor law, not even a "labor law for dummies" kind of book.
Now imagine if most employees didn't quite understand what the 'door security gates' are really doing, and you start to understand where the problems start.
That's the crux of the issue for me.
Also there is another perspective about labor laws, and if a society should allow employers to do such things in the first place.
Actually you do. That scenario is legally distinct from your computer use.
Like Austria :) https://www.taylorwessing.com/globaldatahub/article_austria_...
That said, I agree with you and do think being cautious about what one does on one's work machine is very important.
It was made pretty clear during orientation that monitoring software was installed on your computer so the company could maintain its various compliance requirements (HIPAA, PCI, etc.). I think they were also clear that humans were not actively reviewing the logs (i.e., a computer would flag activity based on heuristics), which definitely made me feel better. I was friends with people on the internal security team, and it was very obvious that they took employee privacy very seriously, at least from a philosophical perspective.
Obviously, people used their work computers for personal things. People installed Steam, Skype, whatever. I never heard of anyone getting in trouble for using that kind of software.
Security actually contacted me about my usage once. I was doing some Wikipedia browsing about password cracking software and noticed that one program in particular (Cain and Abel) was exclusively for Windows. That struck me as weird, because I expected most security software to be run on *nix systems. Anyway, I went to the Cain and Abel homepage, browsed around a bit, and then went on my merry way.
The next day, I got a somewhat alarming email from the security team alerting me that my computer had been compromised and removed from the network. I reached out over Slack to the guy who sent the email and went to his desk for a quick meeting. I asked him what the deal was, and he checked my MacBook's S/N against the list of flagged machines. He asked me if I had gone to "www.oxid.it", and at that point I freaked out a bit because I didn't recognize the domain. We searched for the domain, and it was at that point I realized it was the same software I had been looking up the previous day, and I told him the site visit had been intentional. He explained that visiting the site is behavior consistent with a compromised machine, which would explain why I had gotten flagged.
I pointed out that C&A is Windows only software and I was running OSX, and he gave me the (somewhat unsatisfying) answer that you can't be too careful. After that, he told me to be careful and sent me on my way.
Overall, I had a pretty positive perception of the internal security team. They had a strong reputation and seemed to execute competently without being obstructionist.
I hope this answer gives you a bit of insight. I'd be interested to see how other shops do it.
Also I am interested to know if this data would fall under GDPR for any employees in the EU, and if so, how that data could be scrubbed if the employee left the company. In the US it's assumed that any information about actions on corporate hardware is owned by the company, but maybe that's not true outside US borders.
As for what to do with all that information, it's definitely a great question. Right now I think the answer is person-driven (get eyeballs on the intel), but obviously that's not going to last. Everyone in the space (AFAIK) is trying to take what analysts do, and productize it somehow.
I'm sure you've got insight into how that's happening on your end, it's a pretty straightforward leap when faced with, "Do I scale my service?" questions.
The privacy concerns with commercial endpoint stuff are comparable (though not the same; with some endpoint tools, it would be difficult for a manager or a random IT person to snoop on what you're doing). The concern about operationalizing is mitigated somewhat, but only if all you're doing is malware/public-threat-feed stuff.
Hmm I wonder if this is true or not, or if it’s a startup thing. As someone who has worked for BigCorp Inc. companies, I know I carefully maintain a strict separation between my work and personal devices, for privacy reasons and simply to follow company policies WRT confidentiality. Is this “crossing the streams” between work and personal stuff that common in the startup world? I’d think at the very least even a small startup would want to keep their secret IP off personal devices and limit risky horseplay on company equipment.
At my previous company we had Linux on laptops and our network was highly monitored. People used their work machines like their own. The company had several network checks in place, so if you forgot to turn off the VPN, you'd get a notification saying that you probably shouldn't be doing what you were doing on the corporate network.
1. Code quality (Patrick Wardle is not an engineer and it shows in his code). This is not terribly important for an end-user utility, but becomes very important when you're deploying kernel modules across your entire organization that need to be absolutely rock solid and not introduce additional threat surface.

2. Distributed nature (they're utilities meant to be executed by end-users on their own machines, not distributed agents syncing up with cloud servers).

...and so on.
Ask HN: Is no anti-virus software still best practice for mac?
Also a discussion of how this type of monitoring worked out in practice in the Google/Uber/Lewandowski case last year:
> the level of detail that Google has over the logs and actions of their laptop
We also offer a commercial service to make custom modifications, bugfixes, and feature enhancements to osquery. It's little known at this point, but we do the same for Google Santa too!
Would be nice for some insight into Dropbox’s solution here...
For mobile endpoints (online but outside the corporate network), there are two options I've heard of being used:
1. Have each endpoint have a unique TLS certificate, and have the log shipper do mutual TLS to the logging server which has a public IP.
2. Have a backhaul VPN that is always connected, automatically, to the monitoring network, and send the logs over that. That VPN is different from the user VPN that gives access to the corporate network.
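Option 1 is straightforward with a stock TLS stack. A hedged sketch of the client side in Python: the cert/key/CA paths are placeholders for whatever your provisioning hands each endpoint, and the function name is my own.

```python
import ssl

def make_shipper_context(cert_path=None, key_path=None, ca_path=None):
    """Client context for mutual TLS: verify the log server against
    ca_path, and present this endpoint's unique certificate so the
    server can authenticate us. All paths are placeholders."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_path)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if cert_path:  # the per-endpoint client certificate from option 1
        ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)
    return ctx

# With a context in hand, the log shipper just wraps an ordinary socket:
#   with socket.create_connection((host, port)) as raw, \
#        ctx.wrap_socket(raw, server_hostname=host) as tls:
#       tls.sendall(log_line)
```

Because the logging server sits on a public IP and only accepts connections bearing a valid per-endpoint certificate, logs keep flowing from any network without exposing the corporate network itself.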
The real difficulty is not finding useful open source tools, it is integrating them into existing monitoring solutions used within an organisation to get a single view of activity on a system and on a collection of systems (i.e. how do you make the tool scale).