“We take security seriously” (troyhunt.com)
83 points by wglb 628 days ago | 98 comments



Security is a business cost like anything else and there are tradeoffs. All of these companies have decided that the risk of losing this data was worth the cost savings of not protecting against the attack. So far they all seem to be right (not even Sony went out of business after their massive breach).

So it's perfectly reasonable for them to say they take security seriously and then have these breaches. They take it exactly as seriously as needed to keep their business alive.

Having been on that eBay security team mentioned, I can tell you that we did in fact take security seriously. And sometimes what we recommended was done, and sometimes it was decided the cost wasn't worth the risk. But we were always serious about it.


Most of the costs of bad security at a business are not borne by the business but by its customers. Businesses often do a bad job of weighing externalized costs, and security is no exception.


The business is responsible for the costs against itself, which include any fines, any damage to their revenue due to customers leaving, and any resulting litigation, to name a few. The business isn't responsible for costs borne by customers which don't translate to those customers paying less money.

Given that the listed businesses are still operating and in most cases continuing to turn similar profits to what they were doing before, they appear to have done an excellent job weighing the costs in play.

You're welcome to think that the cost to the business should be higher, either due to the public caring more (most paying customers at the moment don't even consider whether the businesses they support care about security) or due to increased fines and penalties for security failings. It may very well be that we need to adjust those things via raising awareness or changing legislation. But based on the current state of the world, these businesses all weighed things in a way that kept them doing what they do.


You guys use the word "seriously" here in an entirely different sense than these statements intend.

These aren't statements made to the board, they're PR statements made to outsiders after huge breaches of customer/employee personal info. No one is reassured in these times by a company being "serious" about security in the sense that they have supposedly calculated its expected impact on their profitability.

> Given that the listed businesses are still operating and in most cases continuing to turn similar profits to what they were doing before, they appear to have done an excellent job weighing the costs in play.

You can't know whether (even in an internal bookkeeping sense) the "tradeoff" was net positive. The incidents still cost the companies money.

BP still exists and is highly profitable despite the spill and the ongoing costs incurred. That doesn't mean they gave it serious thought and concluded that inspecting and maintaining the rig would cost more than the $50B+ spent cleaning it up.


>You guys use the word "seriously" here in an entirely different sense than these statements intend.

I don't think security can be taken seriously unless compliance, specifically, is taken seriously.

I claim something that few security professionals would say in their career: compliance is the magic bullet to taking privacy and security seriously. But to understand why, we have to think about the most reasonable alternative to compliance as an approach to security.

Do you know what “full content packet capture” means? It’s when you’re able to grab every piece of data transmitted over a network using a tool called tcpdump. You can use another tool to reassemble those collected data packets into complete applications, movie or music files, even video chats and phone calls.
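
To make that concrete, here's a minimal sketch of the capture step in Python using the scapy library (my choice for illustration, not the exact toolchain I was taught; the interface name and packet count are placeholders, and sniffing needs root):

    # a sketch of full content packet capture, assuming scapy is installed
    from scapy.all import sniff, wrpcap

    # grab every packet on the wire, payloads included, not just headers
    packets = sniff(iface="eth0", count=1000)

    # save the raw capture for later reassembly with other tools
    wrpcap("capture.pcap", packets)

    # group packets into flows -- the first step toward rebuilding files,
    # mail, or streams from the raw capture
    for session, flow in packets.sessions().items():
        print(session, len(flow), "packets")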

As a joke, an instructor once told me that when they went onsite to do security investigations, they would do a full content dump; if someone was downloading videos at the time, my instructor would say, “Thanks for the movie, guys!”

This is what the Utah Data Center is doing. All packets are collected using a tool like tcpdump, and then the Center reassembles them and categorizes the content into data cubes (e.g., movies, video calls, emails) that are easily findable with open source search engines such as Lucene and Solr. Is this taking security seriously? You bet!

Enhanced Intrusion Detection Systems are how security is being taken seriously, and at a cost to privacy. If a hosting provider like Akamai — which carries 15% of the world’s web traffic — is able to reassemble your packets, and even archive them, is that what you want security to be?

Probably not, but when a security professional says "let's take security seriously", to them it means: watch all the things. I remember in 2006, after my IDS training, I was all for collecting every single packet (no matter what), because I was taught, as a security professional, that this was the only way I could do my job: watch everything, since we need to know everything about everything. If you watch everything, that’s inherently security, and the world is protected. That was my modus operandi for many years.

This is why taking privacy and security seriously really means taking compliance seriously. Compliance lassos in "watch everything" while also providing validation and proof of security. If we’re doing good hygiene (e.g., key rotation, log review, change control) on our systems, there’s no reason to collect and watch everything.

The recipe to collect everything already exists but that means only security is being taken seriously. I hope we start to take compliance seriously and bring that into the privacy/security equation.


You seem to be conflating "security" with "insecure communications"


Share with me what you think are the most reasonable approaches to security and privacy.

Option 1. Collect all the things.

Option 2. Just do good hygiene.

Aside from these two options, what else is available to provide the most reasonable approach to protecting our customers' sensitive data?

EDIT: I'm the head of a compliance agency


You should probably point out that you're the head of a compliance-focused company.

Your previous 2 comments seem to be drawing a very strange dichotomy where the only options for "security" are "capture all traffic" and "compliance". I'm not even sure where to begin in responding to that, because it's so far beyond any facts you provided in either update.

Neither compliance nor traffic capture are "security". Capturing and analyzing traffic can be a facet of a security stance, and structured compliance frameworks can provide structure and goalposts for measuring your security stance, but there's a near-infinite range of other factors at play here.


I'm not sure I understand what you're saying. Capture all the things is exactly what security professionals are asking for as the most reasonable approach to securing sensitive data.

This video is proof: https://www.youtube.com/watch?v=R63CRBNLE2o

Other security researchers have gone so far as suggesting Penetration Testing and Risk Assessment are the most reasonable approaches to providing security for sensitive data.


Maybe if they lose something specific like message content, order history, etc. But businesses are not morally culpable for banks' and governments' inexcusable choice of shared secret numbers to prove ownership of identities, bank balances, credit lines, etc., when public key crypto has been available for 20+ years. It should be the entities who deliberately perpetuate broken security architecture, like CC numbers, SSNs, and paper ID documents, who bear the cost of their inevitable copying.

Businesses have to architect for the world we have, yes, but still.


I wouldn't say they do a bad job, more that they (perhaps correctly) don't care.

If something doesn't hit their bottom line, should a commercial company, which is predicated on providing shareholder value, care about it?

Personally, as distasteful as it is, I think the only way to make companies take proper account of externalities is regulation / law.

If you take the example of health and safety, or building regulations, there is a reason why those are (in many countries) enshrined in law and not done by companies spontaneously.


I manage security and I like to phrase it as "I take security so seriously, I analyze the risk profile just like any financial analyst would and match exposure to client tolerance."

It just turns out that most people are actually perfectly okay with periodic security breaches as long as the hassle from it is substantially below the amortized savings from less hassle in their day-to-day usage.

So all of your data is protected according to that risk profile, adjusted to feedback as we get data on customer sensitivities.

You know, like a professional manages assets.


To add a point of agreement from another security engineer - this is the best top-level comment on this story.

People love to hate companies for their security practices when a breach happens. It's much harder to form a dispassionate, nuanced view from a place of critical thinking.


Companies aren't people; they aren't entitled to basic human dignity and fellowship. When they fuck up and hurt people, and it's clear that their actions are "all in the game", it's time to start hating that game, perhaps enough to get the rules changed.


I remember going to a security conference where one of the speakers said that security flaws and 'fixit' recommendations come so thick and fast, and are so difficult to implement, that even security researchers, who know all the issues and tradeoffs, often don't follow best practices.

Security is bloody difficult, even for lauded members of the community. Schneier apologises in his blog when he says he still uses Windows, for example.


Wtf does schneier use Windows for, outside the lab?


Well, I'm taking that from his 'air-gapped machine' article linked a couple of months ago; he apologised for using Windows for it, due to familiarity. I don't really find Schneier interesting, so I only go by what people link to here.


While I certainly agree that security is a matter of tradeoffs, I have one nit to pick.

"All of these companies have decided that the risk of losing this data was worth the cost savings of not protecting against the attack. So far they all seem to be right (not even Sony went out of business after their massive breach)."

"Went out of business" is not the right criteria for saying "they were right". The question is whether (the EV of) the cost of a breach exceeded (the EV of) the cost of protecting against a breech. It's probable that the delta between those two was not enough to put Sony out of business - they're a massive company with high margins.


That's a fair nit to pick. We won't really know the EVs here, but the point was more about EV tradeoff anyway.


It's possible to take an idea seriously, yet seriously screw up the execution. The criticism presented is criticism of the hollow words that get tossed out there by someone in P.R. after the actual execution failed. It's like a car company announcing a major recall due to defective parts, starting their press release with "At [company], we take quality very seriously!" The proof is in the pudding.


So, instead of saying "we take security seriously", they should add a clause to their EULA stating that in the event of a breach, clients will be compensated such and such.

If they don't have such a clause, then I don't think they take the security of their clients' data very seriously.


Unless they are losing customers because they don't have this clause, why would they do it?


I'm simply saying that they should not say that they take security seriously.

They should instead say that they take their business seriously, and don't care about the security of their clients' data as long as they don't walk away.


I don't understand the criticism; what are they supposed to say instead? When an attacked company says "we take security seriously", it's probably a statement made by someone in the company who really does care about it and is probably pretty upset about the whole ordeal and wants to fix it. This whole attitude about corporations always being these evil lifeless monoliths who don't care about anything and are just saying whatever they need to say stands in contrast with every place I've ever worked. Some of these companies are staffed by people who do care and want to do the right thing, and I don't understand what the OP thinks they should say instead.


Companies in general do not really care. I've been on the inside, the infrastructure guy who knows what's actually being done to implement proper security and who's responsible for a lot of it. Companies care much more about the next set of features, the next release, the next big deal that will change everything, the sales goals for this quarter. They may even care about "usability", the latest site re-design, "user stories".

Security is always the very last thing they care about, until there's a huge very costly breach. Then they care for 2 months, and I get to actually work on the security stuff, and get other developers to cooperate, and clean up the known messes left all over in the typical mad dash of feature addition and replacement. Then it's all forgotten about again.

They should say "we suck, we focused 100% on features and market share, we know now what's important", and they should get security right. It does kinda suck that the market often rewards companies that prioritize all else above security, and I wish such companies all the damage a breach can cause. Otherwise, there's no reason to not suck at security.

They should just be honest: "This is what happens when you make a product people love. It's insecure and data is lost and service is interrupted. But you all love it so thanks :)". People should not be under the illusion that their favored products and services are secure. They should know they love insecure shit.


"Caring" doesn't matter though, only execution does. Everyone cares about obvious things that people should care about, it doesn't matter though if you fail at what you are supposed to do. Only execution matters in business.


But the fact is, no matter how much money and effort you put into it, someone determined enough will find a way through.

What most people don't realise when it comes to computer security is, the foundation on which our modern systems are built never anticipated this much growth.

I think I am happy with companies that care enough to come forward and admit their mistakes. IT security is hard, very very hard.


Still it's a well taken point - if you ask most any company if they want to drop the latest project and work on security instead they'd tell you in polite business terms to fuck off. I've had executives try to argue with me - "But nobody knows the URL!" to justify not allocating even the smallest of resources to fix security problems.

You really need a good security guy who can be the bad guy and stop projects in their tracks when it's clear there are security issues. Because asking the same people who are accountable for shipping to stop the presses to fix even the obvious shit you already know about is a challenge - much less investing resources in 'shoring up' against attacks you don't anticipate.


It's not like someone is saying that the companies have board meetings where they decide to be evil or do stupid things. But they DO decide on budgets that in the end means that there isn't enough time and/or resources to do more than the bare minimum.

If there are people that truly do care, they should stand up in the early process and make sure enough budget is allocated.


Maybe "we are sorry this happened and are working to improve" and leave it that that?


The criticism is of the emptiness of the PR-written responses. A few people at a company being "pretty upset" about a security breach doesn't mean the company actually takes security seriously. Responses like this are fairly obviously canned and insincere, and are designed to deflect criticism of companies' actual security practices.


Do you think Sony took security seriously? If not, is "we take security seriously" something they should say?


This article makes fun of people who get breached. This is less than helpful.

> “We take security seriously”, otherwise known as “We didn’t take it seriously enough”

This implies that if only the companies that got breached had taken security more seriously, they wouldn't have gotten breached. In a world where databases are valuable (the AFF example cited in the post, for example), software is virtually impossible to get to zero defects, and where zero-day vulnerabilities are traded on the open market, some big fish are going to get popped.

The idea that getting breached means you're incompetent is toxic and needs to stop; it just means that you're a sufficiently high-value target. It's very possible (and quite likely) that many of those breached expended extensive efforts in defense. The idea that they would've been fine modulo more security expenditure is not just a baseless assumption, it is in many cases patently false (granted, there's plenty where it's true). As a security professional working in customer-facing security, I'm helping exactly the people who are getting breached, so if anything I have a monetary motive to say that they aren't spending enough and should give me more money ;-)

The article also ignores that knowing that you got popped probably already means that you're in one of the higher percentiles of security posture... That's sad, but, again, blaming the victims here helps no-one.

(By the way, if you too would like to help people who get breached instead of making fun of them, we're hiring. Contact info in HN profile.)


As far as I know, the most successful attacks on companies were based on very simple techniques like phishing or SQL injection. When we talk about corporate data breaches, they mostly have nothing to do with highly sophisticated attacks using 0-days, I'd guess. I'm not sure - would you agree?

Of course companies tell you that they were victims of highly sophisticated APT attacks; otherwise they would have to admit they had been compromised by simple script-kiddie attacks.

Of course blaming victims doesn't help anyone. I want to emphasize here that not only the companies are victims, but also the users and customers, who mostly never receive compensation after security failures.


The Adult Friend Finder attack in particular is awful - the leaked data is incredibly sensitive and could be used to blackmail or manipulate the victims.


I'd feel awful too if my name was 'nailer' ;-)


I'm not in there. But thank you for the threat.


[flagged]


Consider the case where, for example, you really enjoy humiliation as part of your sex life, or you've got a scat fetish, or you engage in consensual edgeplay, or whatever.

It's legal, it's consensual, it's something you and your partners enjoy--and yet, you really would prefer that it not be Googleable during a board meeting.

Considering how puritanically and shamefully we handle talking about sexuality in the workplace now (despite claiming to support tolerance and diversity more than ever), it hardly seems surprising folks may still want privacy.


Not everyone looking for sex online is looking for an affair.


Victim Blaming, classy.


No, I'm responding to the blackmail concern, and when you do something that if made public would so severely compromise you as to make you vulnerable to blackmail, can you really claim to be blameless?


What if you were gay and when people found out it ruined your life?


Why would it ruin my life? If it would, why am I publishing it to the world on the internet?


How would you feel if everyone you knew, your parents, your friends, your neighbors, your coworkers and bosses, and people you want to like you were told without your consent that you used Adult Friend Finder?

Or that you've paid for sex in Reno? Or that you've done heroin in portugal? All of these are not illegal in their respective areas and they're all socially frowned upon.

Are you not capable of empathizing with other people who don't do exactly what you do? Or are you just being pigheaded?


Anytime I see "We take security seriously" I get double skeptical about their security stance. This is something that's best demonstrated by simply doing it and the results speak for themselves. There's an obvious contrast between organizations with well-managed INFOSEC and those that pretend. Especially when a vulnerability is reported or a breach occurs. Extra-obvious then.

Businesses, just practice INFOSEC instead of preaching it. Better that way.


Though the words are nominally the same, the FBI quote has a distinctly different meaning than all the others—I don't think it's quite fair to lump it in with the rest. The FBI saying "we take threats seriously" means they are willing to throw money at investigations and prosecutions. The other companies don't have that particular power.


It's not exactly fair because security is a process. Even following all best practices, new attacks happen all the time. Sometimes the security process includes post-attack investigation and mitigation.

It's quite easy to point a finger when you have a few servers that you spend all of your time securing and monitoring. It's something else when you have departments of people connected to your network and running services, breaking policies, taking laptops out of the office, etc.

Nobody wants to see their data being breached. I applaud companies for publicly sharing their investigation and response.


Taking security seriously, and knowing what you are doing are often mutually exclusive in business.


Hackers take your security (and your money) seriously too.


Is the issue that there was a lack of technical investment in security or a shortcoming in their ability to communicate with their customers?

I'd argue many of the companies mentioned have invested heavily in security. Whether that investment will prevent a compromise by a determined adversary is likely unrelated to its size.

Unfortunately, I suspect many of the mentioned companies had not equally invested in how to properly communicate a compromise with their customers.


This is human nature: we usually take actions after the fact.


People in the know: how do these breaches keep happening?


It is dazzlingly expensive to run a 500+ seat IT organization in a manner that is meaningfully hardened against attackers. Meanwhile, IT budgets are calibrated against the last 20 years of typical IT expenses. That's despite the fact that the last 10 years have drastically increased the risk of IT security attacks, because:

* the technology adoption curve of attack techniques has made powerful attacks available to the "early majority" cohort of criminals

* computer science has demonstrated flaws in foundational technology that weren't widely known 15 years ago

* as online attacks have mainstreamed, an industry value chain has developed to monetize weaknesses

So it's business-as-usual in IT, despite the fact that when it comes to security, "usual" has been redefined.

Basically no medium/medium-large company in the world is willing to spend the amount of money (and make the usability sacrifices) it would take to reasonably ensure resiliency against attacks. The most secure firms spend anomalous amounts of money simply to elevate themselves to a point where they (a) aren't the easiest targets and (b) have bought enough time to detect and respond to incidents as they occur.


It also wouldn't hurt to move away from the "network security" model to a "zero trust network" model, where each computer is secured against everyone else in the internal network, just like it would be against the Internet, and the users have very limited privileges to only allow them to do the job they are required to do.

http://blogs.wsj.com/cio/2014/04/24/google-cio-enterprises-s...

https://static.googleusercontent.com/media/research.google.c...


That's the logical extreme of the "network segmentation" bullet I listed above. The problem with endpoint security and internal trust is that the endpoints aren't the most valuable goal on the internal network; we attack them because they're pivots to internal applications, which are the most valuable goal. So making it harder to compromise an individual desktop from within an internal network doesn't really do much. Making it harder for an arbitrary desktop to reach the document management server, though, does make a difference.


If one had an infinite monetary and user acceptability/retraining budget... what would a modern, secure 500+ seat IT infrastructure look like? Even in the broadest of terms.


* No unfiltered attachments in email, for definitions of "filter" that include "stripping out content and re-rendering in simpler formats", like "PDF->RTF" or something equally terrible.

* Fine-grained segmentation of all networks on need-to-know basis, informed by the org chart and detailed role descriptions for virtually all employees.

* No employee access to arbitrary Internet sites

* For employees that require Internet access, air gaps between computers that can hit Google and computers that can access company email

* Formal audits for minor software releases

* Expensive, heavily tested secure coding training for all developers

* Adoption of secure coding/design standards ("this is the XYZcorp way to make an SQL query" and "this is the XYZcorp way to render HTML"). Strict bans on deprecated interfaces.

* Employee access to sensitive internal applications (like document and image management) gated through Citrix-like environments, so you have to remote terminal in to get to the browser that actually talks to the application.

* Extremely minimal access provided to VPN users.

* Total 802.1X-style lockdown of network ports and fascist policies against bringing your own devices.

And so on.

There is zero chance this is ever going to describe any huge company.


If I worked as a developer inside such an organisation, I'd quit. A sad side-effect of rigorous IT security is the lack of productivity and day-to-day frustrations experienced by the staff.

I've experienced this myself inside a major central bank. Most development staff were a) demoralised and b) frequently adopting insecure work-arounds.


So would I. But another way to look at this is, instead of "I'd quit", saying "they'd have to pay me 3x more to work in a place like that". At which point you can again start looking at it as an economic problem.


You're right. I work at a place that does all but two of those, and I wouldn't if they didn't pay me as much as they do!


If you work at a company with more than 500 employees that does all but two of those, I'd like to hear more about that. As I wrote those bullets, I filtered them through my experience of consulting for F500 companies, and when I found myself writing something I'd seen reliably deployed across entire organizations, I edited the bullet until that was no longer the case.

About the closest I've come to seeing any of these bullets deployed reliably is Microsoft's developer training and deprecation of the standard C library, and that initiative was so out-of-the-ordinary that it was newsworthy, and widely reported.

But to actually earn that bullet, Microsoft would need to deploy those same measures across all its contractors, and, more importantly, deploy them on internal IT and line of business systems, not just the Windows and Office codebases.


I imagine some government facilities could hit that mark—based loosely on descriptions of friends who have worked in such environments.


Bingo.


See the sibling commenter, but think government.

Plus, the posts I read earlier didn't mention 'Fortune 500' or 'reliably' - and I'm glad you didn't use words like 'productive' and 'efficient'!


You mean you don't want to use two entirely separate computers just so you can access email?

Paranoia in excess.


Hey, we used to use a Mac for Internet and email and a Windows machine for "work", both on the same desk.


> Employee access to sensitive internal applications (like document and image management) gated through Citrix-like environments, so you have to remote terminal in to get to the browser that actually talks to the application.

That may be a realistic proposal, but I'm skeptical that it's the best answer, maybe because 100% of my Citrix experiences have been awful. Wouldn't "ensure internal apps adhere to all other points, rewriting them if necessary" be a better answer to the "given infinite money" question?


Sure, I think you could add a bullet for "applications not gated through Citrix-like environments are written to be hardened against arbitrary client software, as opposed to the single company-spec version of Firefox that users are allowed to use in a Citrix session".

You'd still have Citrix lockdown environments for any application that you can't rewrite. For instance, an F500 might license the web app it uses for image management for things like scanned medical records.

I should add: I don't think any of these are realistic proposals.


I feel that the biggest thing that most companies could do to safeguard a user's privacy is to just respect it: the majority of websites that ask for an email address to sign up don't truly require it for proper functioning.

Either they need to sell my address to spammers to finance their business or they delusionally believe I want to read their spam.


> No employee access to arbitrary Internet sites

zomg! muh freedoms!


You would also need to ask how much the company is willing to trade off productivity for security. With enough budget, you could have security like that of any three-letter-agency, or even more draconian, but good luck getting work done, or retaining employees.

That's not to say that user data should go unprotected; the majority of people in the company should have zero access to that, such that even if their systems were compromised user data would remain safe.


Check with a Swiss bank. They typically spend 10% of their IT costs on security.

Swiss banks have a tradition of security overkill. Branches in nice areas have bulletproofing rarely seen in the US outside of very bad neighborhoods. It would be un-Swiss not to be prepared.


Have some experience there. Not meaningfully different from finance IT as a whole. Finance orgs generally fall under the bucket I described in my first comment here, of "spending enough not to be the easiest targets", so there's that!


> infinite monetary and user acceptability/retraining budget

In that case every employee would have one or more security people standing behind them the whole day. No email would be opened without a phone call to the sender to confirm that it's not spear phishing.

But more realistically, loads and loads of 24/7 monitoring and internal auditing and testing. Attackers have to abuse something to get in, and if the company is internally doing the same thing with a large enough team, 99% is probably secure. Hackers will probably look for easier targets then. Still, this is quite unrealistic to happen in present times.


I'd agree with everything you've written, but I'd also add that the direction technology itself is taking (very fast adoption of new technologies, applications being composites of large quantities of 3rd party code with unknown provenance) makes it even less likely that an organisation would be able to resist modern attack methods from a focused, well-funded attacker.


That's a large part of what makes me no longer enthusiastic about working in the space at all.

Which, as it happens, I've rather pointedly not been.


> computer science has demonstrated flaws in foundational technology that weren't widely known 15 years ago

Can you list a few examples of such flaws?


Two big classes of flaws that seem new:

* Exploitable memory corruption (really only began to be recognized in late 1995, and only became mainstream with modern heap exploits, perhaps 10 years later). Really, this trend meaningfully picks up speed with the dawn of the clientside exploit era, in which we no longer obsess about Sendmail vulnerabilities and start obsessing about browser vulnerabilities. In essence: the revelation that memory-unsafe languages are insecure in the presence of virtually any defect, not merely unsafe buffer copies on the stack.

* Side channels, not so much for crypto (crypto attacks are fun but rare, and certainly not the low-hanging fruit used to compromise most networks) but for weaponization of other flaws. Side channels are the other side of the "covert channel" coin, which coin basically describes "surreptitious unintended exfiltration of data from software". So here you're also talking about things like blind SQLI.

In-band signaling and the insecurity of using simple strings to encode program behavior would be a third major class of foundational flaws, but it isn't new; it was well-known in the late 1980s. But industry certainly hasn't adapted to eliminate it; witness, for instance, very widespread Java application frameworks that embed executable server-side scripting code in UI inputs!
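
To make that third class concrete, here's a minimal sketch (using Python's sqlite3 module and a made-up users table) contrasting the string-splicing pattern with a parameterized query:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"

    # in-band signaling: user data is spliced into the control channel
    # (the SQL text), so crafted input changes the meaning of the query
    unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(unsafe).fetchall())   # matches every row

    # keeping data out of band: the placeholder treats the input as data only
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # matches nothing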


Heartbleed and Shellshock come to mind. There have been others, and the "revelations" of NSA et al actively poking or preventing the closing of holes in all kinds of software and hardware.


Aren't these "simply" software implementation bugs? Unless I misunderstood what the GP meant by "computer science".


Ah no I think I misunderstood. tptacek replied with examples though.


ROP attacks, or more specifically, that you can build an arbitrarily powerful program given only a buffer overflow to an unexecutable stack and libc.


Excellent and very succinct analysis.


To prevent it, you have to protect against 100% of all attacks (including new emerging attacks) 100% of the time. The bad guys just have to get lucky once.


The best course of action is to not store the data at all. Use "Sign in with Facebook" et al. and never even receive the user's password, for example. I'm so tired of needing to have a password manager when we have good authentication platforms today.


Too bad we don't have a viable decentralized option, I don't at all relish the idea of Facebook Inc. owning my online identity.


Diaspora. Friendica. Pump.io. If you're more technically inclined, you can self-host one of many:

https://indiewebcamp.com/projects

There isn't just one alternative, and I don't think there really should be - the fact that most of them are interoperable means that not everyone has to be using the same software. We need to standardize on a protocol, not so much an implementation.

(But yes, there does need to be a nice-looking implementation if we want the general public to use it.)


Merely having wide adoption of OpenID or Mozilla Persona would be a worthy start.

I just don't have much faith in these idealistic open source projects these days; when I google "diaspora oauth" and the first thing I find is abandonware (https://github.com/diaspora/diaspora-client), it pretty much confirms my cynicism. "Nice-looking" isn't the only thing we need from identity management.


Ah, okay, you were referring to identity in the technical sense rather than the social sense. I agree that increased adoption of OpenID or Persona would be great. Persona holds the most promise in my opinion, since it's similar enough to existing sign-in systems as to not confuse the average user... but it does require providers to actually use it.

IndieAuth is interesting too: https://indieauth.com/


Prioritizing other activities over security. Probably in every one of these cases someone on the inside warned that they weren't doing enough about security, and nobody took action because there were no additional people available to work on it.

Saying that security is hard is a cop out. It's the cost of doing business. Most places that are hacked should have been doing more about security.


Assuming top-notch everything by industry standards, the systems will still likely get breached by attackers with 0-days in implementations. Brian Snow, an NSA Technical Director, explained the reason very well in his essay We Need Assurance [1]:

"The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!

So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements!"

I'll add that the fundamental constructs and operations of a computer are designed a dumb way that will follow the most self-defeating orders. Smarter architectures, like Burroughs B5500 and System/38, existed in the past that had CPU's that enforced fine-grained separation (helps secrecy), protected pointers/stacks/arrays from abuse (stops much code injection), and could spot many problems like interface abuse (80+% of issues) at runtime. The dumb systems were cheaper, faster, & backward-compatible with garbage tools in widespread use. So the market went with them. To this day, most "security professionals" have no clue that there existed hardware & software systems so resistant to compromise that NSA's red teams gave up on even watered-down versions of them. And then bought them for protecting most sensitive stuff while pushing weaker stuff on us for their other mission. ;)

Fortunately, there is a tiny, niche part of the security community working on such "high assurance" solutions (eg crash-safe.org) or at least better architectures w/ some higher-assurance components (eg genode.org). Our niche is not popular, has few customers, and will never go mainstream due to the tough tradeoffs it forces. Yet real security is gaining a bit more traction in academia and some software firms. One day such platforms might get more affordable and widely available, so that you can use one without losing sleep because someone opened an email. :)

[1] http://www.acsac.org/2005/papers/Snow.pdf


Security is very hard, even for the best at their best. And the people who have the information that is most sensitive are not the best, and aren't giving their best.

I'm not in the know, but it seems pretty basic. Lemme know if I'm wrong.


Insufficient liability, one would guess. Cars, airplanes, industrial plant, all these have software in them but the liability is wholly different, should an accident happen due to a software bug.


Careful, though. If I put up a new Tetris clone on GitHub, and you decide to use it to control your power plant...

Liability should first rest with the entities deploying the software, since those are the only ones with a picture of what can go wrong in the context where it's being deployed. Those entities should then be demanding warranty/indemnity as appropriate from their suppliers (or possibly third parties, for F/OSS).


I don't think that's a relevant comparison. You can be sure that when cars, airplanes and industrial plant have access through the internet, they will be pwned as well and there will be accidents.


This argument was implied. If they will own your data when you put your HR database on the internet, maybe you shouldn't put your HR database on the internet to begin with. I can't be the only one who feels that computer security is a field that is insufficiently mature, yet people put confidential data on the internet all the time even though the technology isn't ripe yet to do so safely.

Compare this attitude with the extremely conservative approach in rocketry. The technology is positively from the stone age, but everyone agrees that it is well understood. You can't take any other approach when space missions are multi-decade projects and the price tag has nine figures.


First of all, there is no perfect security. Moreover, very few people know how to build a complete soup-to-nuts internet-facing system that has "good-enough" security at every level. The fact that there are few resources available on how to do this is an indictment of our industry.

Even if your digital security model is impeccable, I'd wager a large share of these attacks involve a social engineering/disgruntled employee vector as well, which is vastly harder to protect against.


> Moreover, very few people know how to build a complete soup-to-nuts internet-facing system that has "good-enough" security at every level.

And even the people who do know, often don't get the time to do it correctly.



