Nobody Cares About Security (adatosystems.com)
124 points by mooreds 9 days ago | 86 comments





I've been saying this since at least 2009, when the company I worked for was sending credit card info from card readers across the network in plain text and dragged their feet on fixing it even though they knew we were violating some serious SOX policies.

At another company in 2015, I discovered we were sending user credentials for a large hospital in plain text across the network and needed to fix it ASAP. When I brought this up to management, they shrugged and said "it's been this way for years and we have other priorities". After a large New York hospital was breached, I suddenly had management show up at my desk in a panic asking how we could fix the issue immediately. I came up with a nice write-up for both our company and the hospital IT groups on how to secure our infrastructure with certs and whatnot, and they still half-assed it with self-signed certificates.

In 2018 I worked for a fin-tech company in charge of some A-list celebrity 401(k) portfolios. We had a default admin password for our production database. ZERO encryption on the data, including SSNs, birth dates, phone numbers, addresses, beneficiary info, etc.

Last year one of the leading payroll providers in the country (USA) I used to work for started aggressively outsourcing the software and allegedly had a really nasty security breach due to a backdoor inserted by one of the outsourced team members. It's alleged that leadership threatened employees if they disclosed the incident to anyone outside a key group of members.

Nobody gives a damn about security until their identity is stolen and they have to spend hours/days/weeks/months putting their lives back together from the fallout.


Until they are fined 10% of yearly revenue. That's why you need a strong government.

An alternate, market-based solution would be insurance companies who impose requirements for insurance. That increases the chances of finding an economic balance between security and productivity. A government regulation applies to everyone, even if it no longer makes sense; an insurance company whose requirements are outdated will be outcompeted by others, while an insurance company whose requirements are insufficient will go out of business.

> An alternate, market-based solution would be insurance companies who impose requirements for insurance.

This is exactly why the CrowdStrike disaster happened: https://news.ycombinator.com/item?id=41011065


The market for cybersecurity insurance is collapsing:

https://www.theinformation.com/articles/companies-are-ditchi...

but the OP's arguments also apply to insurance, yet businesses buy insurance every day. The difference is that the fines and liability for data breaches are so paltry that not investing in security is the rational thing to do. This can only change through legislative action. I wouldn't hold my breath.


It's interesting that you mention this. It's part of the follow up article coming soon.

Can we not also have really extreme punishments for government officials that break the law?

Checks & balances do not work in the absence of empowered incorruptible entities because of collusion. At best, they just slow things down.

Agree in spirit. Better federal laws to hold companies accountable.

Financial companies (and companies providing financial services) are protected by auditing and the law around finances, not infrastructural security. This is a major best-practice breaking point between two ecosystems of software service.

In the banking sector, you can still, to this day, try to get away with stealing money by forging a paper check and hoping the bank honors it. And they may let you walk away with the cash! What protects the system is that when the fraud is detected via auditing and resolution, actual law enforcement will come find you.

The early web never had such legal protections ("Wikipedia's database was hacked? What even is a wikipedia? They got their facts disrupted? What does that mean?"), so they were forced to grow infrastructural protections or be consumed by attackers. But there's a case to be made that the costs inherent with hardening infrastructure like that are unnecessary to bear when the law will actually show up to stop the criminals messing with your infrastructure. It's counter-intuitive for those of us raised on the "You can only rely on yourself; everyone else is a potential attacker" side of the fence, but it's a way to be.


I'd argue that at least some of the problem is that we are forced to record fundamentally insecure data.

If we could replace things like SSNs, passport numbers, credit card numbers, etc. with org-specific tokens/certificates, they'd be largely useless to anyone else.
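For illustration only, a minimal sketch (my assumption: HMAC-SHA256 keyed with a per-organization secret; all names hypothetical) of how a raw identifier like an SSN could be replaced by an org-specific token that is useless to anyone else:

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // orgToken derives a stable token for an identifier that is specific to one
    // organization's secret key, so a leak at one org reveals nothing usable elsewhere.
    func orgToken(orgKey []byte, identifier string) string {
        mac := hmac.New(sha256.New, orgKey)
        mac.Write([]byte(identifier))
        return hex.EncodeToString(mac.Sum(nil))
    }

    func main() {
        ssn := "123-45-6789"
        fmt.Println(orgToken([]byte("org-A-secret"), ssn)) // token seen by org A
        fmt.Println(orgToken([]byte("org-B-secret"), ssn)) // a different token at org B
    }

The same SSN produces unrelated tokens at each organization, which is the point of replacing the raw identifier.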

Sadly, no one cares enough about security to fix the problem though, not even governments.


We are replacing such things, although the USA is a decade or so behind the rest of the world due to various legitimate sociopolitical and historical reasons.

In most places worldwide identifiers equivalent to SSNs and passport numbers aren't really treated as financial secrets; they may not be totally public due to certain privacy aspects, but they generally don't result in financial identity theft, that's a fixable problem of certain regions (like USA and a few others). Similarly, moving to proper credit card authentication (chip&pin or wireless chip when card is present, 3dsecure when not, etc) has made many credit card numbers mostly useless for thieves unless accompanied by a more serious compromise.

But all these things above have been implemented only because (and where, and when) the actual companies became financially liable for the consequences - as long as the losses/fraud/etc hit only the users/consumers, there is no motivation to fix anything. Shift the liability to the company which accepts that fundamentally insecure data as good enough, and they'll figure out some way to implement a secure process.


You make an interesting point about the lack of incentive to protect others' private data - it may only hurt the subject of the data and leave a negligent company unscathed. But how might we shift the liability from those companies without encouraging regulatory agencies to maximize data theft?

I am a bit confused why shifting liability would be linked to maximizing data theft, and why would that data theft be done by some regulatory agencies - can you elaborate?

The liability shift that I had in mind is mostly about immunity from liability for the impersonated person. If some criminal defrauds a company by claiming to be Bob, then shifting the liability for that risk (compared to currently common cases in the USA) to the company that had lax processes and was defrauded would mean various consumer protection mechanisms: protecting things like Bob's credit score, preventing that company from trying to collect the money from Bob, preventing them from reporting that Bob owes them money (as he doesn't), and requiring the company to correct any adverse credit reports it has already made. In other words, various means to ensure that the fraud stays between the fraudster and the defrauded company and doesn't affect the person whose identity was falsely used, and removing the implication that they are somehow responsible if that information (which they aren't legally required to keep secret) is used by someone else.


> I came up with a nice write-up for both our company and the hospital IT groups on how to secure our infrastructure with certs and whatnot and they still half-assed it with self-signed certificates.

What's the problem with self-signed certificates? Did they not know each other?


IME when people start using self-signed certificates they trust anything that is presented, with no pinning. That means that as long as you MITM it with something with its own self-signed cert, it will work just fine.

This is why "self-signed" is a misleading term, as it means both literally self-signed, as in, "we have added root of trust that we control and our devices trust only certificates signed by ourselves, as cryptographically verified", and also "our devices trust any certificate signed by anyone and ignore errors", and doesn't make a distinction between these two very different cases.

Especially for internal server-to-server connections there shouldn't be any security weaknesses in a fully self-signed architecture where the same scripts that deploy the certificates will also deploy the configuration on other servers specifying that this is the only thing that should be trusted.
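As a rough sketch of that fully self-signed setup (assuming Go, with a hypothetical internal-ca.pem distributed by the same deployment scripts that issue the server certs), the client end would trust exactly one root and nothing else:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
    )

    func main() {
        // internal-ca.pem is a hypothetical path; it holds the one root we control.
        pem, err := os.ReadFile("internal-ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pem) {
            log.Fatal("could not parse internal root certificate")
        }
        // Trust only our own root: anything else, including an attacker's
        // self-signed certificate, fails the handshake.
        cfg := &tls.Config{RootCAs: pool}
        conn, err := tls.Dial("tcp", "internal-service.example:8443", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }

This is the opposite of "accept any certificate and ignore errors": the thing pinned here is the root itself, deployed alongside the configuration.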


I have a feeling that we've developed an economy that's too focused on short-term goals, and this prevents us from achieving greater success. Short-term goals matter, but not at the cost of long-term ones.

I think the following are the main problems and why this is hard to tackle. Addressing them can help prevent issues, but I'd like to hear other suggestions:

1) You can't measure maintenance or security benefits the same way you can measure the costs of failures. You can measure the cost it takes to implement, but you can't measure the counterfactual cost of not having done it; it's never actualized. But I'm confident maintenance is always cheaper than repairs. Don't fix things that aren't broken, but do fix things before they break. We need to learn the difference.

2) The world is complex and many costs are outsourced and distributed. I like to think of this as the inverse of software's success. Software is great because once made you can copy it trivially and distribute it. But the downside is that mistakes propagate too! So something that may only cause a second of delay is seen as minuscule, but it's not when you consider a hundred million users hitting it every day. Enshittification is about these little things adding up and accumulating.

3) We avoid slack like the plague. I don't mean slacking off, but slack in the system. You don't build a ship with only enough lifeboats for exactly the number of passengers. You need more because you can't assume everyone perfectly makes it to the right lifeboat, especially in an emergency. Covid should have been a real wakeup call, and the global economy shouldn't shut down any time a single ship gets stuck.

4) Doing good, quality work has the stability of an inverted pendulum. Most evil and shoddy work isn't caused by malice; it's that it's harder to do good, and with longer and more complex tasks it's easier for something to go wrong along the way, which then snowballs.

So a big belief I have is that we need to slow down if we want to speed up. Move fast and break things is great when problem solving, but you also need to go back and clean up all the mess you've left behind. We've become so accustomed to technical debt we aren't even recognizing how much we have.

We can't be just focused on the next quarter. We're in a much more complex world. Even early humans had to plan for winter. I know what I'm asking for is difficult but nothing worth doing is usually easy. All of us have done hard things and continue to do hard things. But importantly, try not to distribute and amplify mistakes. Squash them when they're small. Don't ask for permission, just fix things.


Remember: nobody likes the safety inspector, everybody loves the fireman!

We need county/city dashboards of safety inspectors vs. fire and other hazard timelines.

And monthly insurance bills that explicitly reflect discounts/charges paired to inspection histories & inspector statistics.

Suddenly efficient but strident inspectors will find themselves gods of seasonal demand.


There are many common software tasks that are just hard to do securely, and there is an incentive to keep it that way. Security is a huge industry mostly filled with people who check boxes and memorize obscure trivia.

Consider TLS, the "industry standard" for connecting two processes securely over the network. There is a huge amount of complexity just to accomplish something that should be secure by default. Certificates, algorithms, cipher suites, domain names (for some reason). Cue the people who have traded some of their valuable time to memorize this trivia showing up to defend TLS and how simple it is. They remark that anyone who hasn't memorized this trivia "isn't a security expert".

Contrast that to something like Wireguard. There are keys, one is private, and I swap the public ones. Simple as that. If the only way to connect two processes was to transfer a string `<public-key>@<ip address>` between them, think about how many problems would be avoided, and how many fewer security "experts" the industry would need.


Reminds me of homomorphically-encrypted cloud-computing efforts. Apparently, there is a fully functional, RISC-like virtual CPU (built out of Boolean gates) that C can compile to and that one can use for processing your secret data on untrusted machines [0]... and a very, very beefy Amazon instance can emulate it running at ~1 Hz clock. No, not 1 kHz. 1 Hz.

At which point I just had to stop and re-evaluate the problem again. If I have such a small amount of data that I am fine with processing it at single-digit IPS (micro-MIPS? geez...), then surely I could just process it locally, and much faster (and cheaper to boot)? And if I have an amount of data large enough to warrant using cloud resources, and important enough to justify paying astronomical sums... surely I could just spend that kind of money to simply buy lots of hardware and, again, run it locally, and much faster (and cheaper to boot)?

[0] https://arxiv.org/abs/2010.09410


Sure, now you just need a way to validate the public key and IP address genuinely belongs to the claimed identity. Could use a certificate?

The IP address is just a hint in that example because we don't yet have robust identity based networking. It's actually meaningless, either I successfully authenticate with the public key on the other end, or I don't. I don't care about getting the wrong IP address, worst thing that could happen is that I bother the wrong process and it can't establish a connection with me.

You must be a TLS expert because saying we need to check if a public key "belongs to the claimed identity" is creating a problem where none exists. The public key is an identity, that's who I want to connect to. In the Wireguard example, it really is that easy.


TLS (as in HTTPS for websites) solves a different trust issue than WireGuard. The case you present is two trusted parties (or one party) setting up a VPN between two hosts. It assumes there is some pre-existing secure channel to exchange the public keys. If you were to simply exchange those over telnet, you're open to MITM attacks.

With HTTPS this key-exchange MITM aspect is acknowledged, and that is where most of the complexity comes in. Since a client typically connects to a website without prior keys, we need a trusted third party like a CA or trust chain to verify we're connecting to the domain we intended to connect to.

Not saying TLS isn't bloated, but to be fair, it solves a more complicated problem.


I don't think these use cases are as separable as the TLS "experts" would claim. If something is giving me the IP address, why can't it also give me the public key? We just don't have a convention of passing those things around together, security came after the fact. We didn't design for security.

In a lot of cases the IP comes from a configuration file. That's assumed to be secure. In some cases it comes from an authority. If I'm trusting the authority for the IP address, why can't I trust them for the identity as well? I have to trust a different authority for that? That is twice as complicated as necessary.

DNS should really be about which identity owns a name. We are starting to get there with DNSSEC; once that's sorted out we could easily return `<public key>@<ip:port>` entries in authenticated DNS records. The root server (transitively) claimed that this public key represents this domain name, and it was last seen accepting connections at this IP and port.


Purely from a technical perspective, whatever is giving you an IP address from a name lookup could indeed give you a certificate. The IETF has for 30 years been hypnotized by this fact, in much the same way that Joel Spolsky used to write about how every software project is some specific subclass of "spreadsheet".

The problems with making name lookups yield certificates are all real-world, pragmatic, people-based issues:

* The deployed base of DNS software, which includes middleboxes of all shapes and sizes that pay attention to DNS, is hostile to things that look like DANE lookups, so DANE lookups (really, all DNSSEC lookups) have a high failure rate, so high that anything using DNSSEC needs to have an alternate path to building a secure channel without DNSSEC.

* The entities that ultimately decide what DNS lookups are going to return (note well: those entities are rarely ever the people who "own" the zones themselves) are themselves firms with roughly the same shape as certificate authorities, but with none of the accountability mechanisms; browser vendors had to force legacy CAs to adopt Certificate Transparency and no such leverage exists for the DNS.

* For bonus fun, add in that ultimately most of the (~all of the popular) TLDs are de jure government controlled --- and governments have a lot of practice exploiting their control over DNS for policy ends.

DNSSEC is a dead letter. Deployment in .COM actually WENT DOWN within the last 24 months, and it was trifling to begin with. The most common experience large tech companies have had with deploying it is "falling off the face of the Internet for several hours due to misconfiguration". Stick a fork in it. It was a reasonable idea that, like many reasonable ideas, has turned out to be completely impractical in the real world.


WireGuard is deployed between mutually trusting, pre-introduced endpoints. TLS has to work for huge numbers of anonymous untrusting clients, often with transaction time budgets denominated in the tens of milliseconds. This is also where perennial exhortations to adopt SSH-style "ToFU" models on the public web run aground.

The public key can be linked to an identity only if the client already owns that mapping (identity <-> public key).

For the vast majority of use cases, this cannot be done, hence certificates (with CA)


I think man in the middle attacks are the concern. But still better than plaintext.

>filled with people who check boxes

That's why my switch from software development to application security only lasted four years. It drove me up the wall to find so many vulns and have them ignored because they weren't necessary to fix for the various compliance checklists to be completed. No one cared if systems were actually secured, they just cared that they got their compliance certified so if there ever were a breach, they'd have coverage for their liability. It's also an area where most people in it have little to no actual programming experience so when tools mark potential vulns, programmers can get away with claiming it's a false positive and everyone moves on, even if it's a plain as day case of clear text PII going out over the internet or some ancient injection vuln that no one wants to get their hands dirty fixing.


The weirdest part about TLS is how a non-standardized format (PEM) has become the de facto standard and is the leading cause of complexity in setting up TLS. Certbot doesn't support PKCS #12, which is the actual standard format. So you have to put an extra step in between certbot and your application.

Security should not be box checking. Security is seeing your cyber domain as territory you are militarily holding from the enemy. Security is understanding you're in a game of cyber spy vs spy with a globe of adversaries working to 'get' you. You can't checkbox this, it takes fluid, dynamic, multi-step thinking. In security, you are at constant war. It's not like being a Rent-A-Cop™ at the mall or bank, who is mainly a box checking scarecrow. Security is srs bidness.

What do you think secure by default means?

By secure I mean at least confidential, authenticated, and repudiable. So no one can read the data sent between A and B, B knows that it is communicating with A, and B cannot prove to anyone else that A sent particular data.

There are more formal/rigorously defined terms for this and variations. You can google IND-CCA to start digging in.

By default I mean that the least complicated, easiest to use, front-and-center API establishes this sort of connection with the 2nd party. And sending insecure data becomes akin to dealing with raw ethernet frames, possible, sometimes necessary, but not even allowed on most operating systems without elevated privileges. Concretely that might look like replacing `dial(host, port) -> Conn` functions with `dial(publicKey, host, port) -> SecureConn` functions.
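One rough way to approximate that with today's TLS plumbing (a sketch in Go using SPKI pinning; this is my own illustration, not something the thread specifies, and the names are hypothetical) is to have the dial function take the expected public-key hash and reject any peer that doesn't match:

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "crypto/x509"
        "encoding/base64"
        "fmt"
    )

    // dialPinned behaves roughly like dial(publicKey, host, port) -> SecureConn:
    // the peer is accepted only if the SHA-256 of its SubjectPublicKeyInfo
    // matches pinnedSPKI (base64). Chain and CA validation are replaced by the pin.
    func dialPinned(pinnedSPKI, addr string) (*tls.Conn, error) {
        cfg := &tls.Config{
            InsecureSkipVerify: true, // hostname/CA checks replaced by the pin below
            VerifyPeerCertificate: func(raw [][]byte, _ [][]*x509.Certificate) error {
                cert, err := x509.ParseCertificate(raw[0])
                if err != nil {
                    return err
                }
                sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
                if base64.StdEncoding.EncodeToString(sum[:]) != pinnedSPKI {
                    return fmt.Errorf("peer public key does not match pin")
                }
                return nil
            },
        }
        return tls.Dial("tcp", addr, cfg)
    }

    func main() {
        // The pin and address would arrive together, out of band, much like the
        // <public-key>@<ip address> string described above (values hypothetical).
        conn, err := dialPinned("base64-spki-hash-goes-here", "example.internal:8443")
        if err != nil {
            fmt.Println("refused:", err)
            return
        }
        defer conn.Close()
    }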


I'm confused as to why you think you can replace all the things in TLS that provide security with some sort of magical SecureConn function. TLS is the secure connection, so much so that your code example is exactly what happens in Go when you use the TLS library: crypto/tls provides a Dial() and a net.Conn implementation.

Your argument seems to be "TLS is bad because it is complicated. We should replace it with something logically equivalent, but better in some way that I have not defined." This is a fundamentally unserious argument, unless you can say what is wrong and what the requirements for a new solution are that are not provided by TLS currently.


Go write up the code to connect two parties. Just A and B. A knows about B and wants to connect to B specifically, B will talk to anyone, but wants to know exactly who. It's far more than just Serve and Dial, and often you have to involve a 3rd party, or know a lot of details about self-signed certificates and the particulars of authenticating them.

If it was just Serve and Dial with the guarantees I mentioned, we would be in agreement that it was easy and a suitable default.

> "TLS is bad because it is complicated. We should replace it with something logically equivalent, but better in some way that I have not defined."

I thought I was fairly clear in saying: less complicated == better. That is the way in which it is better, which I am now defining explicitly for you. If that's controversial, then that probably explains most of your disagreement. The complexity has to be so low relative to other solutions that it is more likely to be used than not.


I agree that simpler is better than more complex, but you're not saying what is wrong with the current approach. I gather you're upset about certificates (who isn't annoyed with X509?), but ultimately all you're saying is "I wish Serve and Dial were more secure, but without using the mechanism that specifically exists to make them secure." This is just one big No True Scotsman.

Software security is the absence of vulnerabilities, which is a special case of the absence of bugs. People are not interested in security because they are not interested in quality. Even those environments that are supposed to be high security, are in fact buggy, slow and very frustrating to use - revealing that they are almost certainly riddled with vulnerabilities as well. It's implausible that a system could be secure if it's not also the highest quality you've ever seen.

"Software security is the absence of vulnerabilities"

I must disagree. A vulnerability that a threat actor has no way of exploiting in the real world is not a security issue. On the flip side, software that magically lacks any vulnerability in its code can still have design issues, like bad UX or being easy to misconfigure (is it Elastic's fault when people expose their Elasticsearch DB to the internet, for example?).

In the software dev world, there is this view that security is absolute. In reality, it is very much relative to the data and the real-world threat. Since software devs aren't expected to know details about current threats, they're expected to think in terms of absolutes and hypothetical scenarios. Which is great for writing software, but when evaluating or discussing security (software or otherwise), it's not about how many vulnerabilities there are; it is about data vs. threat actors and how that risk impacts you.


You are talking about security through correctness, which is indeed not achievable. However it's not the only approach to security [0]. Security by isolation really works according to the statistics [1].

[0] https://blog.invisiblethings.org/2008/09/02/three-approaches...

[1] https://www.qubes-os.org/security/xsa/


I don't think this is true. The opposite, really. I think that we continue to present security as a "shift left" ("SHIT left") strategy, dumping the responsibility on devs without any framework for why they should care.

But if we built a culture and practice that low-security code is low-quality code, and made security issues a software defect like any other, it would get handled. Plenty of developers (and leads, and PMs) are fine with shipping low-security code, but would fight to the death if accused of shipping low-quality code.


> The problem with security is that it’s impossible to measure your ROI

Sometimes I wonder what we lost by only working on things with measurable impact


There's an actual name for this fallacy: https://en.wikipedia.org/wiki/McNamara_fallacy

> The McNamara fallacy (also known as the quantitative fallacy),[1] named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others.

Which had a profound effect on the Vietnam War

I agree, by the way, we shouldn't focus only on measurable metrics. Humans, and businesses should also value "norms and values". Although I have no idea how to evangelise this


The ROI of security is too noisy at a single company where you (hopefully) get less than one incident per year. But across the industry there is enough data to estimate ROI of various procedures.

You can estimate: https://www.wiley.com/en-us/How+to+Measure+Anything+in+Cyber...

It's just that it is not the norm, so expectations are low. Some companies consistently do better than others, so clearly it's not all down to luck.


The author raises two major arguments for why the current low level of care is entirely appropriate -

"the prevailing attitude among business leaders is:

Damage to the company’s reputation SOUNDS bad, but (so the thinking goes) it’s really too amorphous to quantify. Plus, many companies in recent memory were the victims of massive cyber attacks, took a hit to their reputation or stock price, but saw it rebound a week later with no other ill effects. (again, that’s the belief. More on this later)

The fines currently in place appear to be lower than the expected cost to improve the company’s security posture."

But where is the counterargument against that? There is no "more on this" in the article, and if those two things are true, then it would be wrong for companies to start caring more, as it's cheaper/more effective to suffer the not-that-bad-really consequences than bear the substantial effort and expense of trying to prevent them.


"more on this" got pushed to the follow-up piece, which is coming soon. Sorry to keep you in suspence. I had to balance people's time to read with the length of the information I was sharing.

Strong agree. I'll tell you the other reason not cited: it slows down organizations. Doing things right to avoid the (seemingly) small chance at being massively wrong is the inverse of the bet that doing many different things quickly has a small chance at a massive payout.

Let's say I'm an executive and I think there's a 1% chance of a breach that costs me 100x and a 1% chance of a 100x payout on every project.

I have 2 projects that each make $X. Let's say $X is $1000. 1 project will go from $X to $X/100 based on breach, so it's now worth $10. 1 project will go from $X to $X*100. It's now worth $100,000.

I went from making $2,000 to $100,010.

This goes back to the argument about fines. They aren't NEARLY severe enough. If I'm an executive at a big company, I may enforce greater security on the "cash cow" projects (e.g. ad revenue and GSuite at Google [but not the Pixel or GCloud], AWS and Retail at Amazon [but not Alexa, Kindle, etc]) but the rest? I need to get ANOTHER cash cow. If my service that's only netting me $1M/year goes to $0, and I needed a service that would make $1B, I literally do not care.

If adding in-depth security to the $1M/year project makes delivery 2x slower, I've now spent 2x on something that probably wasn't even worth it. This is a game of stats; businesses and features as cattle not pets. I'd rather have 2 projects and another dice roll than 1 project that's just "meh".

That's not how I operate, but if you're playing this game as an executive, that's the most logical outcome.


The recent CrowdStrike issue offered great insight into common public perception. Failures of that nature, let alone a real attack, are perceived in the same event class as 'natural disasters' by those who don't understand the problem.

There's also a severe over-reliance on completing a checklist, rather than having an answer for a given class of issue. Asking the correct question is important for receiving a good answer. 'Restores are what people care about.'

Critical vendor failure and ability to operate independently in an isolated recovery mode might be new features added to recently updated checklists.


An angle that sometimes helps is reframing security as (business) validation and introducing proper type modeling.

The best thing anybody can do for this is making APIs that make unsafe things `unrepresentable`.

A classic is password length. Instead of `login(user:str, pwd:str)`, do `login(user:NotEmptyStr, pwd:ValidPwd)`.

This is stuff that must be done in the lower layers, to take advantage of how lazy people are. Do it for the most popular libraries and frameworks and we are talking about real impact.
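A minimal sketch of that idea (in Go, with hypothetical types and an illustrative minimum-length rule; not any particular framework's API):

    package main

    import (
        "errors"
        "fmt"
    )

    // NonEmptyStr can only be built through its validating constructor.
    type NonEmptyStr struct{ value string }

    func NewNonEmptyStr(s string) (NonEmptyStr, error) {
        if s == "" {
            return NonEmptyStr{}, errors.New("empty string")
        }
        return NonEmptyStr{value: s}, nil
    }

    // ValidPwd enforces a password policy at construction time
    // (12 characters here, purely for illustration).
    type ValidPwd struct{ value string }

    func NewValidPwd(s string) (ValidPwd, error) {
        if len(s) < 12 {
            return ValidPwd{}, errors.New("password too short")
        }
        return ValidPwd{value: s}, nil
    }

    // login can no longer be handed raw, unvalidated strings.
    func login(user NonEmptyStr, pwd ValidPwd) {
        fmt.Println("logging in", user.value)
    }

    func main() {
        user, err1 := NewNonEmptyStr("alice")
        pwd, err2 := NewValidPwd("correct horse battery")
        if err1 != nil || err2 != nil {
            fmt.Println("invalid input:", err1, err2)
            return
        }
        login(user, pwd)
    }

Calling login with an unchecked string simply doesn't type-check, which is the "unrepresentable" property described above.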


Is the idea that the attacker is brute forcing the login method and they are less likely to be successful if the input must be passed in an unexpected format?

Types where the constructor takes care of the rules and aborts if you pass it something stupid (like an empty string for a user name).

This is a bit of a stretch, but I'm willing to entertain the idea if you could toss me a link or two to more in-depth write ups on the topic

The author ain't wrong - security has a massive usability issue, and a lot of legacy security vendors don't seem to care about understanding the UX or workflows of various different personas.

The newer generation of companies and startups are better, but it's still a work in progress.


> security has a massive usability issue

Security cares about nobody.


If you choose "security vendor", you have been scammed already. This is the whole issue: you have to work (cleverly), not just pay for it.

Many aren't better, just fancier.

"Nobody (i.e., business leaders)"

Glad that was clarified, I was afraid I was "Nobody".

What the author is trying to articulate makes me wonder if he has considered what security is. The core properties we mean when we say "security" are the CIA triad (confidentiality, integrity, and availability). You can't tell me a "business leader" doesn't care about any of those. You have business-related information that is vital for your business continuity and profitability. Confidentiality, integrity, and availability are what we generally mean (not always, though) when we say "security".

I'd argue that business leaders do care about security a lot. I think what the author means is "nobody cares about security for the sake of saying you're secure", but even then, business leaders do care about theatrical security, because it helps them sell their products and services. "we have a state of the art, military grade, encrypted cybers, unlike the competition". There are even popular and profitable security vendors whose main service is rating the security posture of companies, so that when you do B2B you avoid poorly rated businesses that won't protect the data you will share with them.

Security for individuals is a different topic than businesses, it's almost a different ballgame altogether.

Take APTs as an example: should some mom-and-pop small/medium-size business care about them? Certainly not. They should care about ransomware, though, because chances are they can't afford the downtime and ransom payment. Should a defense contractor business care about APTs? Yeah, like all of them and then some.

Context and nuance are important.


Those are great points. And what you're saying is why I used the "nobody cares about backups" analogy.

It's NOT that nobody cares about the results of security. It's that those results ("not losing our sales database") are often not presented clearly or coherently enough for the decision makers to recognize the value of the activity ("doing regular backups, paying for offsite storage, etc.").


No, I think I get you. My point was, unlike backups, security is formally defined as those results. It isn't just the decision makers but the technical professionals that don't get what security is. If you design a database, you probably care about the type of security (which is just secure coding/design) you said nobody cares about, but if you admin a database, then security is all about protecting the data in ways that matter to the business. I.e., even if it contains meaningless data, an exposed DB on the internet can impact reputation and potential revenue. Or if it's a DoS attack, the availability of the service provided will be impacted (a security property).

To sum it up, what business people mean by "secure" in terms of computer information is: "The data we need for business has confidentiality, I can rely on its integrity, and it will be available when we need it for business reasons." They may also care about things that aren't necessarily quantifiable and/or short-term profits: appearances, morale, the ability to recruit new hires, and coming up with new solutions/products better than the competition can, because the systems they use are reliable and secure, with fewer hoops to jump through due to "security theatrics".


Quality companies building quality products do care, successfully.

For those who are following along, here's what I think will help: "Starting to Care About Security" https://www.adatosystems.com/2024/09/11/starting-to-care-abo...

Not exactly the author's point, but quite a lot of end consumers care about security. This has helped the likes of Apple, who have a good reputation, do well (2004 market caps: Apple $8bn, MSFT $298bn; Apple is now bigger than MSFT). And there are other examples that are similar: I switched my mum from Yahoo Mail to Gmail when Yahoo kept getting hacked. Google is doing well, Yahoo basically went bust, and so on.

Computer security is a solved problem.[1] The problem arose during the Vietnam conflict, and was solved in the 1970s.

I suspect that the current widespread ignorance of this fact is the result of a covert operation by one or more TLAs.

[1] https://en.wikipedia.org/wiki/Multilevel_security


It's about the same level of solved as moon landings and supersonic flight...

If we could now find out how to do security in an economical, non-annoying way, that would be great


That's definitely not "solved" for any but the highest level definition, simply due to the fact that it's an absolute nightmare to maintain this kind of approach in large organizations due to people coming, leaving, changing their security levels.

Basically you have to have a whole team of people whose only job is to maintain the correct/accurate security levels for everyone else, grant and revoke them in a timely manner.

And a whole other team to maintain this for every single software service that the company runs and uses, internally or externally.

So the solution is either a huge performance or cost penalty, or both.

> I suspect that the current widespread ignorance of this fact is the result of a covert operation by one or more TLAs.

No. It's a simple fact that it's difficult and costly to maintain and most orgs don't have the talent to do so in-house or the money to hire external experts.


We deploy systems of a similar nature to protect our electrical grid, all the way down to individual outlets.

Imagine if some agency had decided that knowledge of fuses and circuit breakers should be restricted in the national interest. This would make electro-security a trillion-dollar industry.

Just as cyber security is now.

--

We let random people plug things into the electrical grid. We don't have to run an "electrical scan" for each item plugged in. We don't have central authentication of people and devices being plugged in. This is because the default is NOT to get all of the power in the US electrical grid in every outlet. It's a default deny policy. We all understand how it works.

When experts are required, they come in, make the required changes, and leave. On-site electricians aren't a common sight, except in weird places like McCormick Place in Chicago, where they are required to plug ANYTHING in.

If a user wants to plug in an Electric Range, Electric Dryer, Welder, etc. they have to find an appropriate outlet. The capabilities of each outlet are defined in advance. There are widely respected standards.

On the other hand, we are forced to run virus scanners, and try to enumerate every piece of code before we use it to compute. If we run code, it can use all of the resources of the host it's running on, in almost all cases. The policy is default allow. We blame users, applications, and software vendors for mishaps. There are no standards. Everything is layers of band-aids on top of Operating Systems that are insecure by design.

This is madness.


Running attacks on the electrical grid is dangerous and expensive. This is nothing like how cyberattacks work.

It's also hard to sell security; it's something that is hard to quantify. Features that can be reduced to a checkbox matrix are selling points, but you can't really do that with security.

Therefore products that increase security and products that decrease security do not look any different to those buying them.


I have over 6,000 hours of on-site data breach consulting experience.

Companies do not give a sh*t...

When we have GDPR style laws in place in the U.S., companies will start to care..

4% of a company's annual worldwide turnover is a tremendous deterrent..


This applies to all kinds of user interaction.

It's a lesson that I've learned, the hard way.

I've seen things, man...


Surprisingly few companies (or people) care about paying for good security.

The problem with paying for good security is that it's very difficult for non-security experts to evaluate the genuinely effective ways to do that.

Is buying antivirus "paying for good security"? Hiring the first security firm that showed up in a Google search?

If you advertise for a security person to join your company, how do you effectively interview candidates?


No F500 tier executive is doing that.

They paid Accenture and Gartner to tell them what to do.

Ditto for having them set up a security organization -- get Accenture to sit a temporary CISO, hire some people, and then fuck off. Hopefully the replacements work!

Mom and Pop shops might use Google, but in 2024 they're usually using whatever the local, oversubscribed MSP is selling.


and the problem there (as I see it) is that they don't care about security, they care about passing their audit.

"Passing our audit" has been presented with measurable consequences (cannot sell to customers) and finite, well-defined actions (this is what the audit list looks like).

What I'd like (the goal of the follow up article, coming soon) is to present the value of security in a way that makes the justification of the effort viable and palatable.


Would argue the opposite. Many people pay cloud providers because of the built in security and auditing. See AWS gov cloud for an entire sector.

People do care about security. They will strengthen their roofs as hurricanes blow up worse. They buy big and tough cars to better survive auto-accidents. They will accompany their kids home from school and install burglar alarms. Plenty of Americans are even happy carrying a firearm around just in case...

What people do not give a shit about is digital security. Because nothing about computers or the Internet "is real". And it's getting less real by the day. That's the fascinating psychological talking point.


This is just a specific case of the general problem of long-term, cultivated, or difficult-to-measure goods. Who gets more recognition or reward, the guy who hardened his software over time to prevent the bug, or the guy who swoops in to fix the bug? The guy who tested his code to prevent bugs, or the 10x rOcKsTaR who shat out a mess of an app that appears to do what it should, but leaves everyone else cleaning up the disaster later?

Our culture in particular excels at implementing this bias.


Security is hard, and determining what is worth paying for when it comes to security is arguably even harder - there seems to be a higher-than-typical number of snake oil salesmen and grifters in the industry.

Nobody cares about anything. There are a ton of people who shouldn't even belong in front of a computer at all let alone managing software projects.

Nobody cares about eating healthy either, only about being lean with a six-pack. That said, many still try to eat healthy. I don't understand the point of this article.

Disagree. Security isn’t just about recovery. Say you get breached. Many threat actors are well aware of global privacy laws and exfiltrate data and threaten to release it if not paid the ransom. Some go a step further to notify privacy regulators of the breach to further leverage ransom payment.

Recovery from an encryption event is great and all, but it doesn’t solve the problem of your new regulatory fine and legal problems.


Isn't that basically the author's point?

> This brings me back to my original point: Nobody (i.e., business leaders) cares about security. What they care about is avoiding lost revenue due to application downtime, extortion, and lawsuits.

Followed by arguing that fines and reputation loss, under the current status quo, aren't seen by business leaders as being extraordinarily disastrous.


I guess the free market will determine what security is worth. Extortion demands will rise until they stop being paid. Then we will know. Unfortunately the real victims (us) will not be part of the negotiations.

> What they care about is avoiding lost revenue due to application downtime, extortion, and lawsuits.

This is starting to align with security needs now too (eg. ransomware, data breaches, etc).




