The Insecurity of Secret IT Systems (schneier.com)
132 points by listronica on Feb 14, 2014 | hide | past | web | favorite | 24 comments

I know that NSA/Snowden continues to be at the top of the news, but it's still worth pointing out again that the NSA's internal system is probably one of the most secret of internal IT systems, and through Snowden's work we've found out: 1) NSA employees are easily phished, and 2) they probably don't have the same level of deterministic DevOps deployments that modern tech companies depend on, given that it was Snowden's job to install an "anti-leak" system and apparently no one double-checked to make sure he had installed it. Hell, who knows if that secret anti-leak system would even do anything besides add more cruft to their internal operations? http://arstechnica.com/tech-policy/2013/10/snowdens-nsa-post...

Snowden denies phishing from other NSA employees. And that article you link does not say what you imply it does. It does not say it was Snowden's job to install an anti-leak system. The article just says that one was supposed to be installed, and it wasn't for bandwidth reasons.

I'd not realised that it was the person whose job it was to install the anti-leak system who leaked everything. I guess that's an obvious outcome, in retrospect.

This gives new meaning to "copying the production database to the test environment"... ;)

Regarding voting systems, all we ever needed was open source software.

Voters were incorrectly recording their paper ballots. A PC with a punch card machine attached and running open source software could have correctly punched these cards.

And we also could have had another system, right there in the polling place, that read the cards back so the voter could confirm their ballot was correctly encoded. Or a phone app that could read a photo of it.

We could have gotten that software for free. It could have run on ancient PCs. It could have solved the actual problem that we had.
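The punch-and-confirm scheme described above is essentially a round-trip check: encode the voter's choices as punch positions, then have an independent reader decode the punched card and compare. A minimal sketch (the ballot layout and function names here are made up for illustration):

```python
# Hypothetical ballot layout: each contest maps candidate names to punch
# positions on the card. In practice this would come from the published
# ballot specification for the precinct.
BALLOT_LAYOUT = {
    "president": {"Alice": 3, "Bob": 7},
    "senator": {"Carol": 12, "Dave": 15},
}

def encode_selections(selections):
    """Turn a voter's choices into the set of positions to punch."""
    return {BALLOT_LAYOUT[contest][candidate]
            for contest, candidate in selections.items()}

def decode_punches(punched):
    """Independent reader: recover the choices from punched positions."""
    decoded = {}
    for contest, candidates in BALLOT_LAYOUT.items():
        for candidate, pos in candidates.items():
            if pos in punched:
                decoded[contest] = candidate
    return decoded

selections = {"president": "Alice", "senator": "Dave"}
punched = encode_selections(selections)       # what the punch machine does
assert decode_punches(punched) == selections  # what the checking machine does
```

The point of the second machine (or app) is exactly that `decode_punches` can be written by anyone from the public ballot layout, independently of whoever built the punch machine.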

But the lobbyists got there first, influencing politicians into buying unneeded and overpriced solutions, just like they do in every other area of government.

As much as I agree that the software should be open source, that's not nearly enough. There are large numbers of rootkits and vulnerabilities for Linux, for instance. There would be a huge incentive to modify the software of these machines in some undetectable way in order to influence elections.

There's been a lot of scholarly literature written on how to do secure electronic voting, and I understand the consensus is that some sort of voter verifiable paper audit trail is the only way to match or exceed the security of the paper ballot system.

If you can look at the paper ballot it produces and confirm your choices are punched, there's no problem.

And people were already looking at their ballots so they could punch the right holes.

Even if only 5% checked their physical ballot to make sure it's right, that's enough to detect something fishy.
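A quick calculation shows why even a 5% check rate is a strong deterrent: if a fraction c of ballots get checked by their voters and m ballots have been tampered with, the chance that at least one tampered ballot is caught is 1 - (1 - c)^m (assuming checks are independent and a checking voter always notices the error):

```python
def detection_probability(check_rate, flipped_ballots):
    """Chance that at least one altered ballot is spotted by a checking
    voter, assuming independent checks that always catch an error."""
    return 1 - (1 - check_rate) ** flipped_ballots

# With only 5% of voters checking, flipping even 100 ballots is very risky:
print(round(detection_probability(0.05, 100), 3))  # 0.994
```

So small-scale fraud is barely worth the risk, and large-scale fraud is almost certain to be noticed by someone.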

Just to be clear, the actual physical ballots would still be counted the way they were before 2000. The only thing that needed to be improved was the punching of the holes.

So have a machine punch the card and have the human visually confirm it. Or have the phone app, or any of a myriad of other machines, confirm it. They can't all be controlled by one rootkit.

With electronic voting, the wrong question is often asked:

"How can we build a fraud proof system"

While it is a nice ideal, it is not realistic...

The real question should be

"How can we build a system that has less fraud than todays system"

That is far more achievable. Today no one really knows how rampant voter fraud is, if it exists at all, because in the outmoded, non-accountable system trust is placed in corruptible humans at the polling places. I personally believe that fraud is rampant.

stretchwithme addresses that in the third (and subsequent) sentences of his comment. I know that's a lot to read, but still. They were in fact describing a voter-verified paper trail system.

He appeared to specifically exclude that possibility by requiring a machine to read it.

This entire discussion is predicated on a mistaken assumption: that we have significant levels of fraud – often asserted but never convincingly supported – or that electronic systems reduce those odds. We'd be much better off sticking with a simple optical-scan system, which can be reviewed and scored by hand, and providing a computer-assisted system to help those who have difficulty filling in that ballot. As a plus, this system is really cheap and easy to scale, rather than requiring a bunch of expensive computers and support staff for an infrequent event.

acdha, I proposed machines and apps voters could use to confirm their ballot was punched correctly, to make sure the first machine isn't compromised.

These apps could be made by anybody with access to the details of the particular ballot. In other words, those in control of voting can't control a myriad of independent app developers.

People could also do what they did before: put their ballot into the old voting device and look at the holes themselves.

The actual counting of the ballots could be done however it was done before.

I think open source is not the complete solution, though it is a great one.

As with any software, there are always vulnerabilities just waiting for someone to exploit. Before we say "oh, government contractors suck", look at any private product out there. GitHub is not open source, and yet people constantly find vulnerabilities.

What we need are proper security audits, done by the right professionals and done continuously. The problem is that security professionals are very, very expensive! That may be the incentive to open source, since then the crowd can be your testers. But how does this apply to private products? Not every company can open source its work. What can we do? Are the security audit tools out there enough? Can we do better?

Should military software be open source, despite national defense concerns? Well, the voting system is a national priority too, and if it's hacked that could mean a big national crisis. So why shouldn't F-16 software be open-sourced? It would make the system cheaper and, in theory, more reliable. No one wants to do that. So if you believe in keeping military systems private, how should the government keep those systems secure and reliable? What tools does it get?

Or take trading: firms can't open source their trading systems. How can they make those systems more secure and reliable? What tools can they use while keeping maintenance costs down? (Maybe a bad example, since trading systems manage TONS of money.)

Electronic voting systems cannot ever be verifiable by an end-user at the time of voting and so shouldn't be used for important votes.

A person filling out a paper ballot can easily verify the whole system (assuming they can witness the count), whereas an electronic system is almost totally intangible and leaves the voter to rely on unknown persons to verify the system.

Having an open-source code base does not improve electronic voting if the end user (my 80 year old granny) cannot verify that at every step her vote counts.

How can she know that the current, approved code was compiled using the approved compiler, installed by approved persons using proper methods, by a malware free pc, on approved voting hardware, which was handled and stored in an approved manner, and so on.
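Even the first link in that chain is nontrivial. Checking that an installed binary matches the approved build is, at minimum, a hash comparison (a sketch; the published digest and path are hypothetical):

```python
import hashlib

def file_sha256(path):
    """Hash a file in chunks so large binaries needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical published digest for the approved voting binary.
APPROVED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def is_approved_build(path):
    return file_sha256(path) == APPROVED
```

And that still says nothing about the compiler, the installer, the hardware, or the running process, which is the point.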

With a paper ballot all you have to do is check the paper is clean, mark your choice and place it in a box with the others.

Paper ballots are robust to recounts, 3rd party verification and voters are familiar with the system so little instruction is needed, preventing errors by novice users.

In the end voter confidence is what counts, not a drive for technology for the sake of technology.

NB. Hanging-chad-style problems and choosing the wrong candidate are UI/user-error problems, and both can occur in both paper and e-voting.

Your 80-year-old grandmother probably already doesn't verify that her vote counts at every step. Once you have placed the ballot in a box, you have to be there in the evening to actually count the ballots. And then you have to check that the sums are computed correctly at every step until the final result is obtained. The easiest way to cheat (efficiently) at an election is certainly not to fake individual votes.

There has been a lot of FUD on e-voting, mostly as a reaction to the use of very bad, proprietary, unverified technology for that purpose. There are good e-voting systems coming up, like the one used in Estonia (https://github.com/vvk-ehk/evalimine) or Helios (https://vote.heliosvoting.org/, presentation: http://schedule2012.rmll.info/IMG/pdf/slides.pdf).

Those probably shouldn't be used for the most important elections, but they are already way more secure than, say, vote-by-mail.
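Systems like Helios rest on additively homomorphic encryption: encrypted ballots can be multiplied together to yield an encryption of the vote total, so no individual ballot is ever decrypted. A toy sketch of that core idea using exponential ElGamal (tiny parameters for illustration only, and omitting the zero-knowledge proofs and threshold decryption a real system like Helios adds):

```python
import random

# Toy group: p = 2q + 1 is a safe prime; g generates the order-q subgroup.
p, q = 467, 233
g = 4                        # 4 = 2^2 is a quadratic residue mod 467
x = random.randrange(1, q)   # election secret key
h = pow(g, x, p)             # election public key

def encrypt_vote(v):
    """Encrypt a 0/1 vote as (g^r, g^v * h^r): exponential ElGamal."""
    r = random.randrange(1, q)
    return (pow(g, r, p), (pow(g, v, p) * pow(h, r, p)) % p)

def tally(ciphertexts):
    """Multiply ciphertexts component-wise: the product encrypts the sum."""
    a, b = 1, 1
    for c1, c2 in ciphertexts:
        a, b = (a * c1) % p, (b * c2) % p
    # Decrypt to g^sum, then brute-force the small discrete log.
    gm = (b * pow(a, -x, p)) % p
    for m in range(len(ciphertexts) + 1):
        if pow(g, m, p) == gm:
            return m

votes = [1, 0, 1, 1, 0]
print(tally([encrypt_vote(v) for v in votes]))  # 3
```

The tallier learns only the sum; which ballot encrypted which vote stays hidden behind the randomness r.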

His point applies equally to general software quality. Even in the workplace, I always see the bad programmers try to sling shitty code with private repos or direct pushes with no peer review. The good ones always operate in the open and appreciate peer reviews.

How is an airport X-ray scanner maker supposed to participate in that iterative process for improving security if they aren't in a mass market? No security researchers took interest for a long time, until Rios purchased a scanner. "It runs an outdated Windows 98 operating system" just shows how little anyone cares, even if Rios would like it to show how awesome he is as a researcher or how awful Windows 98 is as an OS.

Also, unrelated: how to quickly factor a large RSA-1024 modulus is a secret too.

> How is an airport X-ray scanner maker supposed to participate in that iterative process for improving security if they aren't in a mass market?

Invite pentesters. Hold competitions for people to try and break it in an isolated part / mockup of an airport. Donate one to your local hackerspace and ask them to have fun with it.

Possibilities are endless; the only things needed are an understanding of the points in Schneier's essay and a little courage to do the right thing.

Are you sure?

"Secret" means many things, but not in the sense of "how to quickly factor a large RSA-1024 modulus is a secret too."

The number of labor hours needed to reach some knowledge is not the same thing as a secret.
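That distinction is easy to make concrete: factoring a toy RSA modulus requires no hidden knowledge at all, only computation. The classic textbook modulus n = 3233 falls to plain trial division:

```python
def trial_factor(n):
    """Find the smallest prime factor of a composite n by brute force.
    This is pure labor: no secret knowledge involved, just CPU time."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(trial_factor(3233))  # (53, 61) -- the textbook RSA example p*q
```

For a real 1024-bit modulus the same search is simply infeasible; the "secret" is a quantity of work, not a fact someone could leak.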

The fact is that sometimes security through obscurity works. Take Skype for example, it was well known that the US government had for a long time wanted access and, depending on who you ask, failed. After being bought by MS and reconfigured, it could be argued that there are now fewer access problems.

Where obscurity fails is where the product has been poorly designed in the first place - perhaps due to lack of time or manufacture costs - or there is a failure to update when the scenario or environment for which it was built changes.

Obscurity is really a term about confidence and PR, of a system (e.g. ISO standards compliant?) or a company (RSA, anyone?). How does the company convince you that it is using best practices without compromising its competitive advantage?

The grumbles about running Windows 98 are pointless if the system meets the requirements.

"Take Skype for example, it was well known that the US government had for a long time wanted access and, depending on who you ask, failed. After being bought by MS and reconfigured, it could be argued that there are now fewer access problems."

That is not a security through obscurity success story. The Skype design was changed by Microsoft in a way that made government access easier.

"How does the company convince you that it is using best practices without compromising its competitive advantage?"

Your competitive advantage is not my problem. I need a secure voting machine, a secure ATM, a secure medical database, etc. If you cannot deliver a secure system to me in a way that allows me to verify its security, then you never had a competitive advantage in the first place.

"Smart security engineers open their systems to public scrutiny, because that’s how they improve. The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results."

Or restated:

All bad engineers try to hide their work from the public, therefore all good engineers try to show their work to the public. I'm sure there is a logical fallacy in there somewhere.

Windows 98, Really?

wow. I'm just, wow.

Like I can understand an ancient version of Solaris, or Windows NT4 - but Windows 98?

just wow.

Security in any application has to start from the beginning and be nurtured by everyone developing the application. Security cannot be a bolt-on, after-the-fact patchwork. When at least these two principles are not applied, applications will fail miserably. Security by obscurity is only make-believe.

Is there a video of this talk anywhere?
