NASA Has Been Hacked (forbes.com)
290 points by kevinguay 3 months ago | 106 comments



I highly recommend reading the actual audit[1]. There's a lot of good detail in there, similar to the Senate report on the Equifax breach from a few days ago. There were several problems; the inventory tracking issue was particularly enlightening:

>system administrators did not consistently update the inventory system when they added devices to the network. Specifically, we found that 8 of 11 system administrators responsible for managing the 13 systems in our sample maintain a separate inventory spreadsheet of their systems from which they periodically update the information manually in the ITSDB. One system administrator told us he does not regularly enter new devices into the ITSDB as required because the database’s updating function sometimes does not work and he later forgets to enter the asset information.

Other good notes

Lack of training:

> NIST requires that organizations provide security-related technical training specifically tailored for their assigned duties... As of April 2019, JPL did not have a role-based training program, provide additional IT security training for system administrators, nor fund their IT security certifications.

Refusing to let Department of Homeland Security (DHS) complete a thorough post-intrusion assessment:

>However, according to NASA SOC personnel, JPL was concerned with inadvertent access to its corporate network and feared disruption of mission operations. In addition, JPL was unfamiliar with DHS’s standard engagement procedures. Collectively, resolution of these issues resulted in DHS being unable to perform scans of the entire network until 4 months after the incident was detected.

[1]: https://oig.nasa.gov/docs/IG-19-022.pdf


Back in the early 90s I had a summer internship for a contractor at Goddard Space Flight Center. My job for the entire summer was to track down and inventory a list of 1000s of devices across the entire campus. At the time they were building a tracking database for all the devices on the campus.

The printout I was handed on my first day had not been updated in several years. It basically contained a tracking ID, what building/room the device was supposedly located in, and who it was assigned to.

I spent every day walking building to building, room to room, interviewing employees, trying to track down devices. I never finished updating the list simply because I was never able to track down over half the devices. Outside of a few secure areas I did not have access to, I pretty much turned the campus upside down looking for devices. I can only imagine where all those devices ended up.


I interned at Goddard in 2006 and my PI had a rogue wireless access point for his interns to use. Apparently it was a long and convoluted process to get network access for personal computers, so he didn’t even bother trying. I remember some of my fellow interns complaining about having to work offline for the first month of their 10 week internship.


> it was a long and convoluted process to get network access for personal computers, so he didn’t even bother trying

"When people can't work with you, they will look for ways to work around you". - former IT boss of mine.

Every time.


Cool thing is: with that argument I was able to convince management to roll out WiFi worldwide at a large corp where the CISO hated WiFi and had halted all projects that sought to implement it.

Needless to say, there were dozens of rogue APs on the network, which were a pain to find (it was a manual job, actually walking around with a laptop trying to locate them). With the global rollout we made sure "rogue AP detection" was implemented as an additional feature, which came with its own challenges (sometimes not knowing something is easier to deal with...).


Another fellow Goddard intern checking in! But mine was back before anyone worried about “working offline” vs. “working online”. We just wrote our code—without needing to browse HN and StackOverflow every 10 minutes :-) We still had plenty of other non-Internet related red tape, bureaucracy, and other forms of Work Prevention to overcome and avoid though.


I'm having flashbacks to when I had a similar job at IBM back in the day and we "lost" a z990 system. There was considerably more understanding when I couldn't find a blade server the size of a hardback book than when I couldn't find a machine the size of a car. Thankfully it showed up in Beaverton about 3 months later.

I'm still bitter about their ITCS 300 policies, which dictated that I couldn't have access to the LOM of the blades to enable the beacon light for identification. Nothing like walking through multiple 8,000 sq ft server rooms looking for one server among thousands.


This is expected. Any manual process will have errors, and lots of them. If you want a system to be robust, you have to engineer it so that it stops working if one of the prerequisites isn't satisfied. It's costly, but there's no way around it AFAICT.

In this case, you wouldn't allow a device to access any information on the network without a proper certificate. Public-sector systems need to learn from private companies in this regard.


+100 here. Issue devices with hardware certs, and allow only cert-bearing devices on the network, period. Now you have precise and revocable control over what's on the network. Well, as long as you ensure SSH doesn't mess it up. ;)

Combine that with security keys, and you're in a decent spot.
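
To make that concrete, here's a minimal sketch of "no cert, no connection" using Python's stdlib ssl module. The file names and the internal CA are assumptions standing in for whatever device-issuing PKI you actually run:

    import socket
    import ssl

    # Only devices holding a cert signed by our internal CA get a connection.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.load_verify_locations(cafile="internal-ca.pem")  # device-issuing CA
    context.verify_mode = ssl.CERT_REQUIRED  # no client cert, no handshake

    with socket.create_server(("0.0.0.0", 8443)) as sock:
        with context.wrap_socket(sock, server_side=True) as tls_sock:
            conn, addr = tls_sock.accept()  # cert-less devices fail right here
            print(addr, conn.getpeercert().get("subject"))
            conn.close()

In practice this is what 802.1X with EAP-TLS gives you at the switch port, but the same "present a cert or get nothing" principle applies at the service level too.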


Nobody ever manually maintains inventory correctly. That's why automated systems are supposed to scan networks and inform the inventory of what is actually there, versus what is "supposed to be there".

Lack of training doesn't matter at all. That's just a mechanism to blame people; it doesn't ensure security.

And of course JPL doesn't have a mechanism to allow DHS to scan its entire network. Nobody has a big red button that says "please provide a back door in every single security policy we have for one party to do whatever they want".

This whole report is bullshit designed to place blame. Why is anyone taking this seriously?


> inform the inventory of what is actually there

This works very well for identifying stuff that actually answers, and it will surface devices that aren't supposed to be there in the first place, but someone planning mischief is not going to place a device that is so easily identified.


Most corporate IT teams I have encountered use some sort of proprietary system for tracking inventory and other management activities. I wish there were robust open-source software stacks available for managing corporate IT. Also, I find it weird to let third-party closed-source network appliances (security scanners, etc.) simply be plugged into your network while you wait for them to produce reports.


There are three perspectives on inventory: What you want to have, what you think you have, and what you actually have. Reconcile :)
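
As a toy sketch of that reconciliation (the asset IDs are made up; in reality the three sets would come from procurement records, the ITSDB export, and an automated network scan):

    # Three views of the same inventory, as sets of asset IDs.
    want = {"asset-001", "asset-002", "asset-003"}       # what you want to have
    think = {"asset-001", "asset-002", "asset-999"}      # what you think you have
    actual = {"asset-002", "asset-003", "rogue-rpi-01"}  # what a scan actually found

    print("Missing (expected but never seen):", want - actual)
    print("Ghosts (in the database, not on the wire):", think - actual)
    print("Undocumented (on the network, not in the database):", actual - think)

The hard part isn't the set arithmetic, of course; it's getting the "actually have" view refreshed automatically instead of from someone's spreadsheet.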


Thank you Samuel Adams


Reading the audit, this kind of confirms my base question when building infrastructure: if people don't do the right thing the business needs, why is it so hard to do? Can't we reduce the pain of doing the right thing so that the lazy/wrong thing becomes the harder option? People not doing things tends to be an indication of boundaries and responsibilities being drawn in bad ways.

Something like the log reviews is a classic example. Training a sysadmin to know all the hot new attacks and the patterns they leave in a log is hard, because that world moves fast. It'd be much more effective to give the admin a well-defined, easily monitored task: ship logs to Splunk, and make sure logs are always shipped to Splunk. You might need some definition of the format and which logs, but all logs go to Splunk. Then it's the security team's job to look for malicious patterns in those logs, probably automatically; ideally with something simple like ElastAlert or Logstash, to name tools from my own stack.
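
Something like this toy sketch of the security-team side, where the only thing the admin owes you is the log stream itself (the log path and the "bad" patterns are placeholders I made up, not anything from the audit):

    import re

    # The admin ships everything; this side only pattern-matches what arrives.
    SUSPICIOUS = [
        re.compile(r"Failed password for .* from (\S+)"),  # brute-force attempts
        re.compile(r"new device registered: (?!known-)"),  # unexpected device names
    ]

    def scan(path="/var/log/shipped/all.log"):
        with open(path) as fh:
            for lineno, line in enumerate(fh, 1):
                if any(p.search(line) for p in SUSPICIOUS):
                    yield lineno, line.rstrip()

    for lineno, line in scan():
        print(f"ALERT line {lineno}: {line}")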

Similarly, why do people have to manually enter systems into the host database? It depends on how far you want to automate it, but you could firewall new systems so they can reach only the central registry, and widen the firewall after an authorized registration of the system. That way the admins just have to rack systems with a USB stick holding some credentials, and it either works or it doesn't.
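
Roughly like this hypothetical enrollment flow; the registry URL, the token path on the USB stick, and the nftables set name are all illustrative assumptions, not anyone's real setup:

    import json
    import subprocess
    import urllib.request

    REGISTRY_URL = "https://registry.internal.example/register"  # only host a new box can reach

    def register(hostname, mac, token_path="/mnt/usb/enroll-token"):
        # The new box authenticates with the credential from the USB stick.
        with open(token_path) as fh:
            token = fh.read().strip()
        payload = json.dumps({"hostname": hostname, "mac": mac}).encode()
        req = urllib.request.Request(
            REGISTRY_URL, data=payload,
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # registry replies with the assigned asset ID

    def widen_firewall(ip):
        # Registry-side step: let the now-registered host past the default deny.
        subprocess.run(["nft", "add", "element", "inet", "filter",
                        "registered_hosts", "{", ip, "}"], check=True)

Either the registration is accepted and the firewall opens up, or the box stays behind the default deny and somebody gets a ticket.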

If basic things are so hard people don't do them, something is structurally wrong.


> but you could firewall new systems so they can reach only the central registry, and widen the firewall after an authorized registration of the system. That way the admins just have to rack systems with a USB stick holding some credentials, and it either works or it doesn't.

Someone first has to build this system, and after accounting for all of the red tape and approvals and training and new audits required and tallying up the total man-hours required to implement, your solution that is supposed to be "less hard" might actually be much harder than the previous system.

It's pretty easy to come up with a multitude of ideas to fix issues like this, but it's another thing entirely to actually implement them, especially in a big government org like NASA. Obviously their current/previous system isn't working and they need to fix it, but I think you would be surprised at how difficult it is to do something even as simple as the system you've conceptualized.

Just to give a small anecdote: I've built asset management systems, including one at a major F500 company that used USB sticks for something similar to what you're describing. Just getting the approval to purchase the USB sticks and establish a process for properly handling them once credentials were put on them was something that, by itself, took months.


Yeah, size is one factor; technology focus is another.

We've been acquired by a bigger shop with a lot less technology focus, and exactly what you're describing is already happening. Things that should take 2 months (mostly waiting on customers) now take 1 month of planning plus 2 months of scheduling the person who might be able to schedule the 2-month task within the next 6 months, or more probably never. It's a sore spot for me at the moment, because if that's the new norm, it'll be time to leave a lot of work behind.


Why is it hard? Because it's actual work, and it's going to be the first thing that's overlooked, because the business side doesn't really want it until it's too late. That's not unique to security, of course; resiliency and availability have always battled the same problems.

Your suggestions seem to focus a lot on box-ticking. Logs are shipped to someone else. Check! Clearance must be sought to install new machines. Check! None of these practices are strange in themselves, but they also need to work. It doesn't really matter if you ship your logs to someone else if that someone lacks the resources, competence, or general interest to read them. And, frankly, if your sysadmins can't be trusted to monitor logs, why would you trust someone else to? Larger organizations have security specialists, but that is on top of the ops specialists, not instead of them.

Please don't think your suggestions are bad. They aren't; they're mostly good. But they're also exactly how these situations arise. When installation becomes bottlenecked, people start taking shortcuts. Someone focuses a little too much on ticking boxes, while the real problems just get pushed around and go unsolved.

The article describes a good real-world example. These people had most of the processes in place. All the inventory databases and log handling and access control were there. It's just that they were crap implementations, and nobody found themselves in a position to fix them.


One problem is workforce allocation. The argument above is based on the assumption of segregated duties. Some antiquated organizations may have the "sysadmin" who is also the "security guy", as well as the backup "software developer", "software QA", etc., because the organization's priorities are elsewhere.


Wow. Try to opt out of their data tracking, an option they're required to add.

"This may take up to a few minutes to process"

They make you wait at this long ass loading screen while they "process" your request not to have cookies.

Here's the outline for people who don't want to wait minutes to read an article. https://outline.com/TZSBv4


I discussed this recently here on HN [0]: the fake spinner is a dark-UI pattern to 'punish' you for opting out. If you just accept, the popup disappears immediately.

[0] https://news.ycombinator.com/item?id=20131381


I don't think that's actually true. Rather, it's an architectural thing — because all these ad systems were designed without consent in mind, accepting is a no-op, whereas refusing consent requires an outbound request to set some sort of "do not track" flag somewhere (presumably as a cookie).


If accepting is a no-op, then you are being tracked even before you make your decision, since the page has already been loaded.

A brutal violation, of course, but I absolutely expect that to be the case.


I leave the majority of pages that have these popups now. My (likely vain) hope is that the tracking data will show higher bounce rates, and eventually the publishers will explore better financing models.

I’m not holding my breath.


Same here, I've also realized that I pretty much never feel I've missed anything.

Sites that don't respect their users extremely seldom have anything of quality to offer anyway.


Perhaps the purpose of the spinner is to delete the data they've already collected?

Are you still being "tracked" if all copies of the data are destroyed?


That depends on who is doing the actual tracking/correlation and, assuming it's a third party, how quickly the site hands that data off.


Maybe, but hardly GDPR compliant.

Not that they are aiming for that. God knows what they are attempting.


Why do you think it's not true?


Starting from Hanlon's razor, you assume incompetence is likelier than malice. Saving the "do not track" preference as a cookie is the most obvious way to distinguish new visitors with no cookie from users who have opted out, but this means issuing a request to each and every ad network to store a cookie with them. And a quick look at Chrome's network tab reveals that they are indeed making a bazillion requests, which is consistent with that explanation.


I apply Hanlon's razor to individuals, not collective entities such as a company or agency. The recent behavior of such entities, I think, warrants an exception to the rule.


It’s a heuristic that gives you a good starting point, not some sort of law. As it stands, it’s a starting point that’s easy to back with data suggesting that is indeed what's happening here. If you can point me towards evidence that malice is the case, I’ll willingly change my mind.


What kind of evidence are you looking for?

Modern shady advertising practices and dark UI patterns which punish users who care about privacy are publicly known and well-documented.

At some point the burden of proof lands on the denier and not the privacy advocate. Hanlon's razor is not evidence of anything.


I didn't say you were right or wrong for applying Hanlon's razor here. I merely stated that I myself do not use that particular line of thinking when it comes to those entities. I also did not say that this was a case of malice. There is no burden of proof levied on me here, nor was I particularly worried about trying to change your mind.


I was going to screenshot that page: a small autoplay video in the bottom-left corner, a top-bar popup to get the "latest updates from Forbes", an email signup for the Forbes Daily Dozen, and in the background, blurred out, the article.


I envision a distributed system that simply renders pages, clicks "agree" or whatever, and uploads just the content to "archive" servers automatically. After a critical mass is reached, instead of going to the URL you plug it into the archive server to see if it already has a copy and render that version. From there, you could go even further and download 100 MB or so of pre-sanitized internet daily from the pages you are most likely to visit. Not only would you elide ads, but your page render times would drop to milliseconds and you'd save money on mobile bandwidth.

This system could be powered by kind volunteers who manually click away the ads and publish their renderings either as images or as HTML 1.0 with all the shit-ware removed. They could also be paid through micropayments to workers in developing nations. If everyone "tipped" a buck or two per month into a pool that paid work-at-home scavengers to curate content for us, we would be supporting a real business model while not being harassed and tracked.

Alternatively, you could have AI-powered crawlers that are trained to close videos, identify article text and relevant pictures, and ignore ad banners.


If you have Firefox, Edge, or Safari, you can use reader mode to strip the page down to just the article text and read it.

For something more complex like what you described, check out Brave. It blocks ads and trackers from ad networks and either replaces the ads with non-tracking ones or lets you pay the site author directly.


The one thing that approach would guarantee is that archive servers would end up getting the business end of the stick.


If you disable javascript you can read it easily (and much faster, and without ads).


This is the best way to browse most of the so-called news sites.


It gets stuck for a minute or two on 100%, then says that some trackers cannot use https and makes you click another link to finish the process.

I wonder why these dark patterns are still acceptable on the web. I thought opting-out was supposed to be as easy as opting-in according to the GDPR? The vast majority of sites I see make opting-out a very difficult process, usually hidden behind a tiny grey span of text, while the opt-in is a giant green, frictionless button with immediate effect.

Usually there is a misleading title like "We value your privacy" and a giant green accept button, making you think you're agreeing with that statement. Then there's a tiny "Other options" in grey somewhere at the bottom, which makes you go through sixteen confusing modal dialogs.


> I wonder why these dark patterns are still acceptable on the web. I thought opting-out was supposed to be as easy as opting-in according to the GDPR?

Yes, that’s the idea, but who’s going to enforce it?

The dark patterns trick most people into doing what they want. A small number of technically-savvy users may complain, but we have little leverage.


> Yes, that’s the idea, but who’s going to enforce it?

In theory, people are supposed to be suing the companies that do this, right?


Whenever I get a spinner after clicking "Decline", I just reload the page. Often, it works. Presumably, it sets the cookie on the page ("user accepted/rejected the cookies") before setting the cookies on partner pages...


I used to do that too, thinking there was some kind of bug. And then one day I got distracted by my toddler while rejecting and it turned out that yes, a few minutes later, the thing disappeared. So now I'm tempted to assume that if you don't wait then nothing guarantees that the partners got the message that you're not accepting their tracking. The only dark pattern in there, presumably, is that they're notifying partners serially rather than in parallel.


I was actually surprised that it went all the way through to 100% and did something. Usually those things break down at some point.


Disable Javascript on the page, and you can read it without a problem.


PIA VPN "mace" seems to be saving me from all the torture.


That's grounds for a GDPR complaint against TRUSTe and Forbes.

Opting in or out must not be a condition for accessing content, so a popup that covers the page is problematic. Opting out should also be as simple as opting in, not a maze of options with progress spinners.

File a complaint folks.


Unless they geo-block GDPR countries and call it a day


They do not block EU countries at the moment. It's up to them to react badly, but we must still call out their behavior.


I'd be ok with this in most cases. Other companies will serve those countries, if it's still profitable to do so. If not - oh well.


Hopefully this doesn't cause fear-mongering around Raspberry Pi devices. It's not a stretch to imagine a bureaucrat reading articles like this, seeing "a Raspberry Pi was plugged in", and forming a negative opinion of the device and the people who use it.


Unfortunately, there already is. When I interviewed for a job in Antarctica, we discussed ways of saving on bandwidth usage. I suggested using a Pi-hole to strip out ads and save precious KB, and was told that the Raspberry Pi was frowned upon due to previous issues and it would likely never happen. :(


Then just use a server that does the same thing. If the issue is the buzzword then work around the buzzword.


Yes. Small ARM-powered server. Preferably not expensive, around $35.


Oh, of course, software is software. I just mean that when a Raspberry Pi was mentioned, it carried a distinct stigma.


Good news then: you don’t need an actual Raspberry Pi or to run “pihole” software in order to filter ads via DNS. Just a beige Linux box running dnsmasq is enough!


I mean, it's just a DNS server right? There are probably watches that could run it.


Right, the software does the job, not the hardware. I was just trying to point out that the Raspberry Pi was blanketly verboten.


I usually roll my eyes at meta comments on HN about ads or tracking on web pages getting in the way, but good lord. This page first slams you with a nearly full-page ad with no way to dismiss it, and then, after you read a few paragraphs, hits you again with a modal sign-up dialog.


The magic combination of adblockers has spared me from this fate, but not from an annoying video about the top 5 richest rappers, for some reason.


Unfortunately this will just make it more difficult to get real work done, as security is tightened further. Maybe they just ought to physically isolate their networks.

Working at a large engineering organization, I have given up and now do all engineering work on a stand-alone computer with dongle-licensed software. I feel bad about the piles of CD-Rs I burn through to transfer files, but it’s the only way I can get work done.


IT security people need to stop thinking in terms of disallowing “unauthorized” devices on the physical network (wired and WiFi) and start designing for human nature.

Assume that the physical networks are compromised, and have all privileged resources only accept connections over VPN. Is it perfect? No, but it makes further compromise harder. The assumption of no trust also means acknowledging that you need to gate incoming connections.


> IT security people need to stop thinking in terms of disallowing “unauthorized” devices on the physical network (wired and WiFi) and start designing for human nature.

I can assure you that many, many security people (I would say all security people, but I have no doubt that there are some laggards working under the radar somewhere) already think like this. This is all part of a multi-layer security strategy, and having encrypted communication on top of a secured physical network is pretty standard and is what a lot of orgs strive for. Unfortunately, it really isn't feasible, and it's not because of any decisions by the security people, because...

> have all privileged resources only accept connections over VPN

This sounds great until you remember that half of your organization runs on legacy software that doesn't play nice with forcing VPNs, and your technical architect has informed you that there are no plans to upgrade it to newer software until 2030. This is especially so in government orgs (like NASA).


Then you need to either mandate updating, or isolate those machines.


> mandate updating

hahahahahahahahaahaha

> isolate those machines

that's exactly what is done, but you just said that security people need to stop thinking in those terms, so...?


I work at a company that drops network access to privileged resources for any machine that is more than X days behind on installing an update.

And by isolating the machines, I mean separate networks, or VPN-only access to privileged resources.

And it goes without saying: traffic from unknown systems on a privileged physical network gets dropped at the router.
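
The enforcement side can be embarrassingly simple. A sketch, with the inventory dict and the revoke call as hypothetical stand-ins for whatever MDM/NAC actually holds that data:

    import datetime

    MAX_LAG = datetime.timedelta(days=14)  # "X days" in the policy above

    # hostname -> date of last successful update (would come from your MDM)
    last_updated = {
        "ws-0042": datetime.date(2019, 6, 1),
        "ws-0043": datetime.date(2019, 3, 2),
    }

    def enforce(today=None):
        today = today or datetime.date.today()
        for host, last in last_updated.items():
            if today - last > MAX_LAG:
                print(f"revoking privileged access for {host} (last update {last})")
                # revoke_access(host)  # hypothetical call into the NAC/firewall

    enforce()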


>I work at a company that drops network access to privileged resources for any machine that is more than X days behind on installing an update.

Congratulations to your company! That is quite a great policy. Unfortunately, that would be literally impossible in many of the rest of the world's companies.

We aren't talking about dropping access for a desktop machine that's a few days behind on its Windows updates. We're talking about massive, enterprise-spanning systems like mainframes, ERPs, industrial control systems, data pipelines, etc. that interface with hundreds of other applications across your company and are 1000% mission-critical. Dropping access could quite literally bring the entire company (and all of its revenue) to a grinding halt. And because of their size and importance, they take years and tens of millions of dollars (not an exaggeration; I've been on teams tasked with upgrades like this) to upgrade even when planned half a decade in advance.

Companies are complex, and security is not one-size-fits-all. You do what you can and hope for the best, but at some point there's only so much you can do without burning the entire company to the ground and starting all over.


Meanwhile DNS, which is a precursor to almost every connection ever, is rarely encrypted or authenticated in practice. Standards like DNSSEC and DNS over TLS exist but seem to have lots of vocal opposition without any serious proposals for improvement.

A Microsoft certification training I took recently literally put the emphasis on randomizing source port numbers as a way to mitigate attacks... let that sink in.


Perhaps TLS 1.3 will help? I've read Cloudflare is doing major work on DNS solutions in conjunction with Mozilla: encrypted SNI, configuration for DNSSEC, and so on.

https://blog.cloudflare.com/encrypted-sni

https://blog.cloudflare.com/encrypt-that-sni-firefox-edition
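
For what it's worth, the client side of encrypted DNS is already easy to play with. A small sketch against Cloudflare's public DNS-over-HTTPS resolver (the endpoint and JSON shape follow their documented dns-json format, quoted from memory):

    import json
    import urllib.request

    def doh_lookup(name, rtype="A"):
        # The DNS query rides inside TLS instead of plaintext UDP port 53.
        url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rtype}"
        req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)
        return [answer["data"] for answer in reply.get("Answer", [])]

    print(doh_lookup("example.com"))

It doesn't hide which IP you then connect to, but it does keep the lookups themselves away from on-path snooping.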


What does encrypted/authenticated DNS gain you? If the application protocol is encrypted and authenticated, like HTTPS, then faking DNS responses just results in a connection that is closed immediately because authentication fails. Encrypting is also useless unless you use a proxy/VPN, because otherwise the connection target leaks via the IP header anyway when you open the connection.


> What does encrypted/authenticated DNS gain you?

Many things, here's three to start:

* A measure of privacy - instead of every rando with the ability to sniff packets (something you have no way of ever knowing about, and available to many parties along the path), only the DNS server (which you choose, presumably trust, and can change) knows what names you resolve.

* Stronger foundation for TLS - LetsEncrypt and other public certificate authorities depend on DNS to issue certificates. If an attacker controls DNS, they could easily generate certificates for any site they wanted to attack.

* There have been many shady incidents with certificate authorities. I just feel that beefing up some of the other layers in the stack is a good idea.

> faking DNS responses just results in a connection that is closed immediately

On the web it's often not closed immediately; users often get a certificate warning that they may be conditioned to click through. Of course HSTS helps with that, but still... why the hostility to securing the name resolution layer?


Making every device connect over VPN is infeasible. There are, however, various models for governing port-level access: 802.1X, ISE, and yes, on some networks forcing VPN is doable.


Correct, but not every device needs access to privileged resources.

I know at least Google and Apple gate almost all internal resources on a VPN connection and per-machine authentication. This is across tens of thousands of machines and users.

It also makes attempts to use "unauthorized" devices with your privileged resources harder/impossible.

But more importantly: if you are allowing out-of-date machines on your network, you are by design choosing to allow pretty much every attack that is happening at scale these days. If you are allowing out-of-date machines to access privileged resources, you're rapidly heading toward game over from a security PoV.


BeyondCorp would be an alternative to consider:

https://thenewstack.io/beyondcorp-google-ditched-virtual-pri...


There should be a law about citing what Google does vs. what the rest of the world does.

The Giants are in a class of their own. Lessons are often worth learning, but that doesn't mean most organizations can do what Google can do.

>(The model benefited from the fact that all of Google’s internal applications are already on the Web).

Well then.


How is the US federal government considered exempt from this logic? Their budget dwarfs more or less everything else in the world.


It could be argued that NASA was the original Google.


Link to the actual audit here: https://oig.nasa.gov/docs/IG-19-022.pdf


I remember back in the late 80s telnetting out of the NYU Bobst library on their VAX 11(?) system to some pretty interesting systems. The Johnson Space Center in Houston (running VAX 11/785s) was one I particularly remember. Of course, back then things were not battened down as much as they are now; the spirit was an open network. A sysadmin would interrupt your session with questions like "Who is this? You are unauthorized to access this system," etc.


I used to love roving around VAX networks. In the UK the ones for science (and defence research) were all set up the same. They didn't design the login scripts for pests like me, so I was able to go around the different boxes looking for interesting datasets. I only wanted super-high-resolution satellite imagery, convinced there was some sub-metre resolution stuff out there.

All was going well until I put my own backdoors in to speed up my remote logins. I accidentally denied everyone but me access to a MOD computer. I had to admit to that one! Luckily my boss handled it and was practically pleased with his student hire. But yes, I can actually claim to have hacked military computers. I doubt my boss has forgotten that day, the day when the men from the ministry arrived.

Happy times. VAX computers were cool, and hacking them with genuine DEC VT terminals on those fairly open networks was living the lifestyle.


Yes, the early days were more innocent, and really fringe. I didn't connect with other hackers until years later, and then attended a monthly 2600 Wednesday meeting or two (Citicorp building?) in the late 80s/early 90s.

A great moment was when an analogue red box device could be made digital by simply changing out a crystal in a pocket digital speed dialer from RadioShack (RIP).

Nowadays you can find recipes galore and have no need to bang on keys yourself to discover hacks and tricks. It's probably why I have given it up. As they say, "those were the days, my friend."


I have two DEC VT510 terminals in my lab, serving mission critical functions, right now.


Wow, that's wild. Never upgraded that system, I guess. I hope mission-critical is not something that could affect things outside the scope of your workplace. Although security through obscurity, or age, might actually work here ;)


Sadly, that's an era long gone. I'm still a friend of open WiFi like the Freifunk folks, and of social computing: just ask, and share some compute resources in my cellar. It's great for giving kids access to programming and server resources. However, doing that right requires a lot of legal consideration at this point, even if the users behave. And then you get the guys abusing something like this.


I've found that it is much harder to trust people on the internet now.


Here's food for thought: while proper firewalls and network segmentation are a well-established best practice, I'm not sure this is a winning battle.

There are probably a few dozen organizations out there that are properly implementing strong information security practices, and my hat goes off to them. But they are the few, and I have never worked for one.

Despite best-laid plans and policies, every place I have worked has had some improperly secured service somewhere on its network. And every place I've worked has had segmented networks that people end up relying on. The people working for these organizations are often aware of the improperly secured resources, but they're "only in the DMZ", and there are many other things to worry about, so it lives on.

Especially now that we live in an IPv6 world, why not just run everything publicly? Push security all the way down to the applications themselves, and rely on the software development lifecycle process to catch security issues.

Every service has to be secure. And they can get an awful lot of help in this from things like a service mesh architecture, where you're getting mutual TLS from something like Envoy, and the applications won't accept a network connection unless they're specifically authorized.
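
Even with the mesh doing the TLS handshake, the application can still make the final call about who it will talk to. A toy version of that last check, with the allowlist as an illustrative stand-in for a real service-identity scheme (SPIFFE IDs, cert SANs, and so on):

    import ssl

    AUTHORIZED_PEERS = {"telemetry-ingest", "ops-dashboard"}  # made-up service names

    def peer_is_authorized(tls_conn: ssl.SSLSocket) -> bool:
        # After mutual TLS, check *who* connected, not just that they had a cert.
        cert = tls_conn.getpeercert()
        if not cert:
            return False
        common_names = [value
                        for rdn in cert.get("subject", ())
                        for key, value in rdn
                        if key == "commonName"]
        return any(cn in AUTHORIZED_PEERS for cn in common_names)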

We need to stop relying on firewalls and network segmentation entirely, and just run everything on the public Internet, and make sure every service is secured.

I will say, when a zero-day comes out in whatever proxy you're using to secure your services, you are in for a world of hurt. But there are zero-days in firewalls too.


I work at a non-JPL NASA center. My workstation and internal server resources are already locked down to a barely tolerable extreme. I can't imagine what kind of restrictions would be added if we went forward with something resembling the above proposal.

I don't want to go into much detail about our internal network architecture, but suffice to say it's extremely difficult to run any kind of service whatsoever even internally. It has literally taken me years to get approval to expose a fairly simple REST API to the public internet, and I'm not even there yet.


> Especially now that we live in an IPv6 world, why not just run everything publicly? Push security all the way down to the applications themselves, and rely on the software development lifecycle process to catch security issues.

Yeah, wouldn't it be nice if software just didn't have any bugs?

You've got the causation backwards: software isn't garbage because we can rely on the band-aids. The band-aids were invented because all software is garbage.


JPL has been hacked, not NASA as a whole.

JPL does much more interesting stuff than just NASA work, like engines for the military and secret software programs for the NSA (we know that from Larry Wall, who was a sysadmin there). And it is administered by Caltech staff. Wow.

Random hackers are only interested in confirmation of aliens, but NSA or DOD stuff is very, very interesting to the Chinese, who hacked these systems last.


Wow, what a horrible website: the entire article was nearly completely covered with popover ads.



Don't forget, CBP likely already compromised their security before.

https://www.theatlantic.com/technology/archive/2017/02/a-nas...


For further context, here’s another report on NASA’s security in 2012.

https://oig.nasa.gov/congressional/FINAL_written_statement_f...

Sadly, it doesn’t seem like things have changed.


Note that the report in the OP is a report on JPL, not NASA. JPL is a federally funded research institution that does most of its work for NASA, but it is run by Caltech, not NASA directly. Having seen the process from the inside, I can attest that NASA's security posture has changed enormously over the last 5 years. If anything, we've swung the pendulum so far in the direction of security that measures are being put in place that interfere with our work for little to no real security benefit.


And this is why you need to practice defense in depth. DO NOT assume your system is simply hardened and cannot be penetrated. You have to assume the opposite -- assume you will get fcked hard, and apply separation among systems such that a wound is just a wound, not a fatal blow.


I was hoping there would be more info about how exactly the RPi was compromised, and the steps that were taken from there.


Raspberry Pi is all over HN today


CAUDIT is a potential mitigation tool that could be extended for data breaches. Ref: https://github.com/pmcao/caudit


Finally! Now send me the mega-download link for Bigfoot and E.T. photos.


Note: it is NASA, not Nasa. I've only ever seen the BBC call it Nasa because of their typographic rules.


> it is NASA, not Nasa

It's LeviOHsa, not LeviosAH


[flagged]


cronix for president!


YTCracker did it first!


> All in all it reads like a security basics 101 list that has been ignored. System administrators lacked security certifications, no role-based security training was in place and JPL, unlike the main NASA security operations center (SOC), didn't even have a round-the-clock incident reporting capability.

That is not security 101, that's CYA bullshit that corporations institute once they've been caught with their pants down. "Training" is worth jack. You have to actually implement security practices for them to be worthwhile. Sysadmins are not always the brightest bulbs in the box, but they definitely shouldn't be expected to be doing a security team's job of regularly auditing security policy to make sure it's being enforced.



