>system administrators did not consistently update the inventory system when they added devices to the network. Specifically, we found that 8 of 11 system administrators responsible for managing the 13 systems in our sample maintain a separate inventory spreadsheet of their systems from which they periodically update the information manually in the ITSDB. One system administrator told us he does not regularly enter new devices into the ITSDB as required because the database’s updating function sometimes does not work and he later forgets to enter the asset information.
Other good notes
Lack of training:
> NIST requires that organizations provide security-related technical training specifically tailored for their assigned duties... As of April 2019, JPL did not have a role-based training program, provide additional IT security training for system administrators, nor fund their IT security certifications.
Refusing to let Department of Homeland Security (DHS) complete a thorough post-intrusion assessment:
>However, according to NASA SOC personnel, JPL was concerned with inadvertent access to its corporate network and feared disruption of mission operations. In addition, JPL was unfamiliar with DHS’s standard engagement procedures. Collectively, resolution of these issues resulted in DHS being unable to perform scans of the entire network until 4 months after the incident was detected.
The printout I was handed on my first day had not been updated in several years. It basically contained a tracking ID, which building/room the device was supposedly located in, and who it was assigned to.
I spent every day walking building to building, room to room, interviewing employees, trying to track down devices. I never finished updating the list simply because I was never able to track down over half the devices. Outside of a few secure areas I did not have access to, I pretty much turned the campus upside down looking for devices. I can only imagine where all those devices ended up.
"When people can't work with you, they will look for ways to work around you". - former IT boss of mine.
Needless to say, there were dozens of rogue APs on the network, which were a biatch to find (it was a manual job, actually walking around with a laptop trying to find them). With the global rollout we made sure "rogue AP detection" was implemented as an additional feature, which came with its own challenges (sometimes not knowing something is easier to deal with...).
I'm still bitter about their ITCS 300 policies that dictated I couldn't have access to the LOM of the blades to enable the beacon light for identification. Nothing like walking through multiple 8,000 sq ft server rooms looking for one server among thousands.
In this case, you wouldn't allow a device to access any information on the network without a proper certificate. The public systems need to learn from private companies in this regard.
Combine that with security keys, and you're in a decent spot.
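802.1X at the switch port is the usual way to enforce this for network access; as a simpler illustration of the same idea at the service layer, here's a device-side sketch where a client that can't present a certificate issued by the org's CA never gets past the TLS handshake. The hostnames and file paths are made up for illustration:

    # Device-side sketch: the client only gets in if it can present a cert
    # issued by the org's internal CA. Hostname and file paths are placeholders.
    import ssl
    import urllib.request

    ctx = ssl.create_default_context(cafile="internal-ca.pem")
    ctx.load_cert_chain(certfile="device.pem", keyfile="device.key")

    opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
    with opener.open("https://inventory.example.internal/api/assets") as resp:
        print(resp.status, resp.read()[:200])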
Lack of training doesn't matter at all. This is just a mechanism to blame people; it doesn't ensure security at all.
And of course JPL doesn't have a mechanism to allow DHS to scan its entire network. Nobody has a big red button that says "please provide a back door in every single security policy we have for one party to do whatever they want".
This whole report is bullshit designed to place blame. Why is anyone taking this seriously?
This works very well to identify devices that actually answer, and it will turn up devices that are not supposed to be there in the first place, but someone planning mischief is not going to place a device that is so easily identified.
Something like the log reviews is a classical thing. Training a sysadmin to know all the new hot attacks and the patterns they leave in a log is hard, because that world moves fast. It'd be much more effective to task the admin with a well-defined, easily monitored task: <Ship logs to splunk. Make sure logs are always shipped to splunk>. Might need some definition about format and which logs, but all logs go to splunk. And then it's the security guys' job to look for malicious patterns in those logs, probably automatically. Ideally with something simple, like elastic-alert, logstash, you name it, from my own stack.
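To make that concrete, here's a rough sketch of the kind of automated check the security side could run over centrally shipped logs. The log path, pattern, and threshold are made-up assumptions, not anything from the report:

    # Minimal sketch: flag bursts of failed SSH logins in centrally shipped logs.
    # The log path, regex, and threshold are assumptions for illustration only.
    import re
    from collections import Counter

    FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
    THRESHOLD = 20  # alert if one source IP fails this many times

    def scan(logfile="/var/log/shipped/auth.log"):
        failures = Counter()
        with open(logfile, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = FAILED_LOGIN.search(line)
                if m:
                    failures[m.group(2)] += 1
        return [(ip, n) for ip, n in failures.items() if n >= THRESHOLD]

    if __name__ == "__main__":
        for ip, count in scan():
            print(f"ALERT: {count} failed SSH logins from {ip}")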
Similarly, why do people have to manually enter systems into the host database? It depends on how far you want to automate that, but firewall all systems so they can reach only the central registry, and widen the firewall after an authorized registration of the system. That way, the admins just have to rack systems with a USB stick carrying some credentials, and it goes or it doesn't.
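A minimal sketch of what that registration step could look like; the endpoint, token location, and payload fields are all hypothetical, just to show the shape of the flow:

    # Sketch of an automated host-registration client run at first boot.
    # Endpoint, token location, and payload fields are hypothetical.
    import json
    import socket
    import urllib.error
    import urllib.request

    REGISTRY_URL = "https://registry.example.internal/api/hosts"  # assumed endpoint
    TOKEN_PATH = "/mnt/usb/enrollment-token"                      # credentials from the USB stick

    def register():
        with open(TOKEN_PATH) as fh:
            token = fh.read().strip()
        payload = json.dumps({
            "hostname": socket.gethostname(),
            "ip": socket.gethostbyname(socket.gethostname()),
        }).encode()
        req = urllib.request.Request(
            REGISTRY_URL,
            data=payload,
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"},
        )
        try:
            # On success the registry records the asset and widens the firewall;
            # on failure the host stays boxed in to the registry only.
            with urllib.request.urlopen(req) as resp:
                return 200 <= resp.status < 300
        except urllib.error.HTTPError:
            return False

    if __name__ == "__main__":
        print("registered" if register() else "registration refused")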
If basic things are so hard people don't do them, something is structurally wrong.
Someone first has to build this system, and after accounting for all of the red tape and approvals and training and new audits required and tallying up the total man-hours required to implement, your solution that is supposed to be "less hard" might actually be much harder than the previous system.
It's pretty easy to come up with a multitude of ideas to fix issues like this, but it's another thing entirely to actually implement them, especially in a big government org like NASA. Obviously their current/previous system isn't working and they need to fix it, but I think you would be surprised at how difficult it is to do something even as simple as the system you've conceptualized.
Just to give a small anecdote: I've built asset management systems, including one at a major F500 company that used USB sticks for something similar to what you're describing. Just getting the approval to purchase the USB sticks and establish a process for properly handling them once credentials were put on them was something that, by itself, took months.
We've been acquired by a bigger shop with a lot less technology focus, and exactly what you're describing is already happening. Things that should take 2 months of waiting for customers already take 1 month of planning plus 2 months of scheduling the person who might be able to schedule the 2-month task within the next 6 months, or more probably never. It's a soft spot for me atm, because if that's the new norm, it'll be time to leave a lot of work behind.
Your suggestions seem to focus a lot on box-ticking. Logs are shipped to someone else. Check! Clearance must be sought to install new machines. Check! None of these practices are strange in themselves, but they also need to work. It doesn't really matter if you ship your logs to someone else if that someone lacks the resources, competence, or general interest to read them. And, frankly, if your sysadmins can't be trusted to monitor logs, why would you think someone else can be? Larger organizations have security specialists, but that is on top of the ops specialists, not instead of them.
Please don't think your suggestions are bad. They aren't; they're mostly good. But they're also exactly how these situations arise. When installation becomes bottlenecked, people start taking shortcuts. Someone focuses a little too much on ticking boxes, while the problems just get pushed around and go unsolved.
The article describes a good real world example. These people had most of the processes in place. All the inventory databases and log handling and access control were in place. It's just that they were crap implementations and nobody found themselves in a position to fix it.
"This may take up to a few minutes to process"
They make you wait at this long ass loading screen while they "process" your request not to have cookies.
Here's the outline for people who don't want to wait minutes to read an article. https://outline.com/TZSBv4
A brutal violation of course but I absolutely expect that to be the case.
I’m not holding my breath.
Sites that don't respect their users extremely seldom have anything of quality to offer anyway.
Are you still being "tracked" if all copies of the data are destroyed?
Not that they are aiming for that. God knows what they are attempting.
Modern shady advertising practices and dark UI patterns which punish users who care about privacy are publicly known and well-documented.
At some point the burden of proof lands on the denier and not the privacy advocate. Hanlon's razor is not evidence of anything.
This system could be powered by kind volunteers who manually click away the ads and publish their renderings either as images or HTML-1.0 with all shit-ware removed. They could also be paid through micropayments to workers in developing nations. If everyone "tipped" a buck or two per month into a pool that paid work-at-home scavengers to curate content for us, we would be supporting a real business model while not being harassed and tracked.
Alternatively, you could have AI-powered crawlers that are trained to close videos, identify article text and relevant pictures, and ignore ad banners.
For something more complex like what you described, check out Brave. It blocks ads from tracking ad networks and either replaces them with non-tracking ads or lets you pay the site author directly.
I wonder why these dark patterns are still acceptable on the web. I thought opting-out was supposed to be as easy as opting-in according to the GDPR? The vast majority of sites I see make opting-out a very difficult process, usually hidden behind a tiny grey span of text, while the opt-in is a giant green, frictionless button with immediate effect.
Usually there is a misleading title like "We value your privacy", and a giant green accept button making you think you're agreeing with that statement. Then there's a tiny "Other options" in grey somewhere at the bottom which makes you go through sixteen confusing modal dialogs.
Yes, that’s the idea, but who’s going to enforce it?
The dark patterns trick most people into doing what they want. A small number of technically-savvy users may complain, but we have little leverage.
In theory, people are supposed to be suing the companies that do this, right?
Opting in or out must not be a condition for accessing content, so a popup that covers the page is problematic. Opting out should also be as simple as opting in, not a maze of options with progress spinners.
File a complaint folks.
Working at a large engineering organization, I have given up and now do all engineering work on a standalone computer, with dongle-licensed software. I feel bad about the piles of CD-Rs I burn through to transfer files, but it's the only solution to getting work done.
Assume that the physical networks are compromised, and have all privileged resources only accept connections over VPN. Is it perfect? No, but it makes further compromise harder. The assumption of no trust also means acknowledging that you need to gate incoming connections.
I can assure you that many, many security people (I would say all security people, but I have no doubt that there's some laggards working under the radar somewhere) already think like this. This is all part of a multi-layer security strategy, and having encrypted communication on top of a secured physical network is pretty standard and is what a lot of orgs strive for. Unfortunately, it really isn't feasible and it's not because of any decisions by the security people, because...
> have all privileged resources only accept connections over VPN
This sounds great until you remember that half of your organization runs on legacy software that doesn't play nice with forcing VPNs and your technical architect has informed you that they aren't planned to upgrade to newer software until 2030. This is especially so in government orgs (like NASA).
> isolate those machines
that's exactly what is done, but you just said that security people need to stop thinking in those terms, so...?
And on isolating the machines: I mean separate networks, or VPN-only access to privileged resources.
And it goes without saying: traffic from unknown systems on a privileged physical network gets dropped at the router.
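As a toy illustration, the check can be as dumb as diffing what the router actually sees against the asset registry and emitting drop rules for the rest. Both input file formats here are assumptions:

    # Toy sketch: anything on the privileged segment that isn't in the registry
    # gets flagged for a drop rule. Both input files are assumed formats.
    def load_lines(path):
        with open(path) as fh:
            return {line.strip() for line in fh if line.strip()}

    registered = load_lines("registry_ips.txt")   # exported from the asset registry
    observed = load_lines("observed_ips.txt")     # e.g. from the router's ARP/ND table

    for ip in sorted(observed - registered):
        # Feed these into whatever the router actually speaks (ACL, nftables, ...).
        print(f"drop src {ip}")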
Congratulations to your company! That is quite a great policy. Unfortunately, that would be literally impossible in many of the rest of the world's companies.
We aren't talking about dropping access of a desktop machine that's a few days behind on its Windows updates. We're talking about massive, enterprise-spanning systems like mainframes, ERMs, industrial control systems, data pipelines, etc that interface with hundreds of other applications across your company and are 1000% mission critical. Dropping access could quite literally bring the entire company (and all of its revenue) to a grinding halt. And because of their size and importance, they take years and tens of millions of dollars (not an exaggeration; I've been on teams tasked with upgrades like this) to upgrade even when planned half a decade in advance.
Companies are complex, and security is not one-size-fits-all. You do what you can and hope for the best, but at some point there's only so much you can do without burning the entire company to the ground and starting all over.
A Microsoft certificate training I took recently literally put emphasis on randomizing source port numbers as a way to mitigate attacks.... let that sink in.
Many things, here's three to start:
* A measure of privacy - instead of every rando with the ability to sniff packets (activity you have no way to ever know about, available to many parties along the path), only the DNS server (which you choose, presumably trust, and can change) knows what names you resolve.
* Stronger foundation for TLS - LetsEncrypt and other public certificate authorities depend on DNS to issue certificates. If an attacker controls DNS, they could easily generate certificates for any site they wanted to attack.
* There have been many shady incidents with certificate authorities. I just feel that beefing up some of the other layers in the stack is a good idea.
> faking DNS responses just results in a connection that is closed immediately
On the web it's often not closed immediately, the users often get a certificate warning that they may be conditioned to click through. Of course HSTS helps with that, but still... why the hostility to securing the name resolution layer?
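For what it's worth, resolving over an encrypted channel isn't exotic anymore. A minimal DNS-over-HTTPS lookup against Google's public JSON resolver looks like this (one resolver choice among several, not an endorsement):

    # Minimal DNS-over-HTTPS lookup via Google's JSON API (dns.google).
    # An on-path observer sees a TLS connection to dns.google, not the queried name.
    import json
    import urllib.parse
    import urllib.request

    def resolve(name, rrtype="A"):
        qs = urllib.parse.urlencode({"name": name, "type": rrtype})
        with urllib.request.urlopen(f"https://dns.google/resolve?{qs}") as resp:
            answer = json.load(resp)
        return [rr["data"] for rr in answer.get("Answer", [])]

    if __name__ == "__main__":
        print(resolve("jpl.nasa.gov"))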
I know at least Google and Apple gate almost all internal resources on VPN connection and per machine authentication. This is over 10s of thousands of machines and users.
It also makes attempts to use "unauthorized" devices with your privileged resources harder/impossible.
But more importantly: if you are allowing out of date machines on your network you are by design choosing to allow pretty much every attack that is happening at scale these days. If you are allowing out of date machines to access privileged resources you're rapidly heading to game over from a security PoV.
The Giants are in a class of their own. Lessons are often worth learning, but that doesn't mean most organizations can do what Google can do.
>(The model benefited from the fact that all of Google’s internal applications are already on the Web).
All was going well until I put my own backdoors in to speed up my remote logins. I accidentally denied everyone but me access to a MOD computer. I had to admit to that one! Luckily my boss handled it and was practically pleased with his student hire. But yes, I can actually claim to have hacked military computers. I doubt my boss has forgotten that day, the day the men from the ministry arrived.
Happy times, VAX computers were cool and hacking them with genuine VT DEC terminals on those fairly open networks was living the lifestyle.
A great moment when an analogue red box device could be made digital by simply changing out a crystal in a pocket digital speed dialer by RadioShack (RIP).
Nowadays you can find the recipes galore, and have no need to bang keys on your own to discover hacks and tricks. It's probably why I have given it up. As they say, "those were the days, my friend."
There are probably a few dozen organizations out there that are properly implementing strong information security practices, and my hat goes off to them. But they are the few, and I have never worked for one.
Despite best laid plans and policies, every place I have worked has always had some improperly secured services somewhere on their network. And every place that I've worked has had segmented networks that people end up relying on. And the people working for these organizations are often aware of the improperly secured resources, but they're only in the DMZ, and there are many other things to worry about, so it lives on.
Especially now that we live in an IPv6 world, why not just run everything publicly? Push security all the way down to the applications themselves, and rely on the software development lifecycle process to catch security issues.
Every service has to be secure. And they can get an awful lot of help in this from things like a service mesh architecture, where you're getting mutual TLS from something like Envoy, and the applications won't accept a network connection unless they're specifically authorized.
We need to stop relying on firewalls and network segmentation entirely, and just run everything on the public Internet, and make sure every service is secured.
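To make the "won't accept a connection unless specifically authorized" part concrete, here's a bare-bones mutual-TLS listener. The certificate paths are placeholders, and in a mesh setup the sidecar (e.g. Envoy) would normally handle this instead of the application:

    # Bare-bones mutual-TLS listener: clients without a cert signed by our
    # internal CA are rejected during the handshake. Paths are placeholders.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="internal-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # this is what makes it mutual TLS

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            while True:
                try:
                    conn, addr = tls_listener.accept()
                except ssl.SSLError:
                    continue  # no cert from our CA; handshake refused
                with conn:
                    print("authorized connection from", addr, conn.getpeercert()["subject"])
                    conn.sendall(b"hello, authenticated peer\n")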
I will say, when a zero day comes out in whatever proxy you're using to secure your services, you are in for a world of hurt. But there are zero days in firewalls too.
I don't want to go into much detail about our internal network architecture, but suffice to say it's extremely difficult to run any kind of service whatsoever even internally. It has literally taken me years to get approval to expose a fairly simple REST API to the public internet, and I'm not even there yet.
Yeah, wouldn't it be nice if software just didn't have any bugs?
You got the causation backwards: Software isn't garbage because we can rely on the band-aids. The band-aids were invented because all software is garbage.
JPL does much more interesting stuff than just NASA work, like engines for the military and also secret software programs for the NSA (we know that from Larry Wall, who was a sysadmin there). And they are just administered by Caltech staff. Wow.
Random hackers are only interested in confirmation of aliens, but NSA or DOD stuff is very, very interesting to the Chinese, who hacked these systems most recently.
Also works on mobile (Android).
Sadly, it doesn’t seem like things have changed.
That is not security 101, that's CYA bullshit that corporations institute once they've been caught with their pants down. "Training" is worth jack. You have to actually implement security practices for them to be worthwhile. Sysadmins are not always the brightest bulbs in the box, but they definitely shouldn't be expected to be doing a security team's job of regularly auditing security policy to make sure it's being enforced.