I have trouble understanding this mindset. It's like, if you were walking away from your car in a parking lot, and someone said "Hey! You've left your car unlocked!", and you yelled at them angrily "Stop looking at my car!!!". It makes no sense at all, and yet it's practically the universal response from people who don't know what they're doing.
People occasionally suggest that software engineers should be professionally licensed. I have a different proposal: I think that people who want to manage a business involving software development should have to get trained and licensed.
ETA: while my proposal is somewhat facetious when applied to all software development, perhaps it's not completely inconceivable that we could require businesses collecting any personal information from users to be licensed and audited. We already have PCI-DSS compliance rules for businesses using credit cards; this would be analogous, though it would have to be enforced by the government, as credit cards wouldn't necessarily be involved.
1) The people who built the site initially are gone. Even the people who hired them to do it are likely gone now.
2) The person paid to "maintain" the site is just a technology "manager" who doesn't really know all that much about how it works.
3) There is nobody at the company who can tell if the "audits" they are paying for are snake-oil or not, but they're expensive so they must be good.
4) Even if the threat is real, they don't even know where to start in assessing it so they just fall back on their expensive "audit".
This leaves them unable to tell well-intentioned do-gooders from Nigerian princes, so their initial response of just blocking them might not be that far-fetched. It does start to look increasingly bad when it becomes clear that there is a real problem at hand. The quality of management can be measured by how quickly they identify the blind spot.
This is the most plausible explanation, in my view. I've seen companies where the software was considered "done" and they literally had no software engineers around anymore who could even modify it, let alone the original people who built it. Just a bunch of managers and salespeople milking it. If anything were to go wrong they'd have to bring in an expensive contractor or just shut down that part of the business (if the numbers made sense to do so).
5) That twitter account he's arguing with is run by a social media agency that doesn't have any actual connection to people running Kids Pass, let alone direct contact with the developers.
You forgot another possibility which is that the organization might have a bad office culture and the person is just instinctively protecting their turf.
> It's like, if you were walking away from your car in a parking lot, and someone said "Hey! You've left your car unlocked!", and you yelled at them angrily "Stop looking at my car!!!".
It's probably more like some rando yelling out from where he's loitering by the cart return, "hey, your binzinger's habroodled! Your car might cause an accident when it snerts!". What does he want? Is it a scam? Are those really parts of a car? He doesn't even know you, what's his angle? -- What do you do? Look down and keep walking, that's what. It almost seems reasonable.
The key phrases are recognisable though. It's not "binzinger" and "habroodled" and "snerts" - it's "vulnerable", and "security issue", and "data protection", and "safety". These aren't alien or strange terms, they're simply words you don't want to hear about your product.
When a manufacturer issues a safety recall, you don't need to understand things like the necessary gap between mains voltage and 12V in a transformer, or the biological reaction to high levels of insecticide in an egg; you simply need to recognise a safety warning from an industry professional.
The issue in these cases isn't that the people in charge don't recognise technical terms, it's that they wilfully ignore the voices of caution, warning them about safety issues. In many industries, that lands them in court.
I think you might be vastly overestimating the technical competence of a lot of people in non-tech management. In my experience, a lot of people really do believe those nonsense phrases we hear in response to a disclosure - "our system has been audited, so we know it's secure", "we use military-grade password encryption" etc. To the average user, this stuff is basically voodoo.
Imagine that you got a letter from a stranger, telling you that the locks on your upstairs windows didn't work properly. How confident are you that you'd take it as a helpful suggestion, rather than being completely creeped out by this menacing weirdo?
I think that negative responses to disclosure are generally grounded in a mixture of fear, mistrust, misunderstanding and arse-covering. Someone who discovers a vulnerability is seen as inherently untrustworthy, because why else would they be snooping about and trying out the locks? We think of computer systems as inherently insecure until proven otherwise, but they see their systems as fundamentally secure until someone comes along and breaks it. If you're fearful of technology, it's easy to hear "excuse me, I think your system is vulnerable" as "nice system you have here, it'd be a shame if someone broke into it". Denials and cover-ups are often the default corporate response, because being the bearer of bad news can be a career-limiting move in many organisations.
> We think of computer systems as inherently insecure until proven otherwise, but they see their system as fundamentally secure until someone comes along and breaks it.
Hmm, I think you're onto something. Maybe they just don't get that the vulnerabilities are already there, waiting to be exploited -- they think that the person they've heard from actually broke something that will now make it possible for others to get in. I guess if you don't know what's going on, that's as reasonable a theory as any.
> We think of computer systems as inherently insecure until proven otherwise, but they see their systems as fundamentally secure until someone comes along and breaks it.
Another way of saying essentially the same thing: both parties believe that "extraordinary claims require extraordinary evidence." However, for us, the extraordinary claim is that software is secure; whereas, for them, the extraordinary claim is that the software for which they have paid so much money can somehow be insecure.
> Imagine that you got a letter from a stranger, telling you that the locks on your upstairs windows didn't work properly. How confident are you that you'd take it as a helpful suggestion, rather than being completely creeped out by this menacing weirdo?
If I were really creeped out, I'd be even more likely to spend effort making sure the locks on my windows are secure, as now I know there's a menacing weirdo looking at them, so I'd want to be extra sure he couldn't get in.
> I know there's a menacing weirdo looking at them, so I'd want to be extra sure he couldn't get in.
Now, imagine that the menacing weirdo had included a return address on his letter. Would you report him to the police? If he got locked up, then that would be one way to make extra sure he couldn't get in.
And then the problem is solved, so you don't even need to fix the locks...
This is where the analogy breaks down; if my metaphorical windows were on the open internet, I wouldn't feel at all secure after locking up one single weirdo.
If your site collects and stores private information, someone working for you needs to know enough about security to sort the crazies from the real security researchers.
It's more like running valet parking and leaving other people's cars unlocked. Yeah, you should update your process so that your drivers lock the cars, but oh man, that's kind of hard. What if we just tell people their cars are locked up nice and safe and ignore anyone who says otherwise? That's much easier.
I think this response is pretty awful, but I do understand it. This website was probably either made by contractors who are long gone or an internal team who are too incompetent to fix it. Getting either of those parties to address the problem in a timely manner is a huge hassle (that could potentially cost lots of money). Ignoring the problem is easy and free. There's also likely the fear of "oh god, what have we done, and what kind of liability did this open us up to?" that is hard to stomach. It's incredibly stupid, but people usually are when they're both panicked and caught doing something bad.
> This website was probably either made by contractors who are long gone or an internal team who are too incompetent to fix it.
If I paid a construction contractor to build my office and someone notified me that parts were unsafe or violated the building code, I would either hire the original contractors or new ones to fix it, because otherwise I would be legally liable if I still used it.
If my own employees built it, we would be having an interesting discussion about how it happened, and whether I could trust them to fix it or would need to hire a contractor (or at least fire that manager).
Whenever something new is built that people use, care needs to be taken with safety. The sooner average people realize that this applies to digital constructs just as much as to physical ones, the better.
Perhaps the contractors who built the site followed the letter of the contract, and the security requirements were insufficiently specified. Or maybe, in the case of an internal development team, they followed the spec which itself was ambiguous.
People forget that not all developers out there are Silicon Valley Rockstar Unicorn developers, who are thinking about the product's needs and the users and the edge cases. Lots of this kind of work is done at body shops where, if the customer specified the name input field should allow 8 characters, they'll make it allow at most 8 characters even though they know that people have names longer than that. If it comes back as a change request, $$$ cha-ching!
I actually wasn't making an argument about contractors at all, but about operators. If you operate something that you've been informed is unsafe, you fix it or stop using it. Either it gets fixed, or you are purposefully endangering the people who use it. How it gets fixed and who pays are separate issues.
> perhaps it's not completely inconceivable that we could require businesses collecting any personal information from users to be licensed and audited
In Sweden it used to be like this. Starting with a law in 1973 that grew out of fears of big corporation mainframe databases, everyone with a registry of personal information had to register with the government, pay a license fee and comply with a strict data privacy law. Then the 80's and 90's came and the law was slowly weakened and eventually replaced with an implementation of the EU data protection directive which is more self-regulatory.
Good find! So maybe this organization needs to go beyond mere registration and start educating people, at the very least -- actual audits would be better, but of course much more expensive.
Given that civil engineers need to be licensed to sign off on projects where real risk exists, I feel the same should apply to software too, especially as the proliferation of autonomous vehicles and other robots means there is now a real risk of bad code resulting in people dying. I think the problem should be addressed preemptively. I hope we can have a security and safety-critical software engineering certification program that avoids aspects of national protectionism (e.g. it's very difficult for foreign engineers to get certified, even when their degree course was accredited by the same orgs) and elitism (e.g. arbitrarily restricting it to Masters-level degree holders). We need an open, international standard for this kind of certification.
To clarify, I know direct federal regulation of the industry would be inappropriate - it's too fast-moving and, like any guild (which is what it is, essentially), subject to being politicised. I'm proposing something far simpler, such as: "if someone dies from software or SSNs get leaked, and the system was not signed off by a state-licensed SE, then the company responsible is subject to extra damages for negligence" - and state licensing should be under the purview of a non-profit board (with a bias towards governance from academia instead of industry). I think that would work.
I ignore 100% of emails telling me my computer is infected, my PayPal account is compromised, or my credit card has been stolen. I even flag the most convincing ones as scams!
If I were on Twitter, I'm sure I would do the same.
Maybe I don't understand the situation correctly but the tweet was: "Hey @KidsPass - when do you plan on doing anything about the massive security issues with your website?" Followed by a link explaining <wavy-hands> how important it is to be secure. </wavy-hands>
I can understand why someone could block that as spam, especially someone who hosts such a blatant security flaw.
> ...it's practically the universal response from people who don't know what they're doing.
It would be a start to simply mandate that businesses collecting any personal information from users publish a policy on how users can report breaches. Fine them much more severely if they have a real breach but did not publish such a policy, or if they ignored a previous report of that breach submitted to them under that policy.
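For what it's worth, a lightweight convention for exactly this already exists: a plain-text security.txt file served from a well-known path, telling researchers where a report should go. A minimal sketch of what a mandated policy pointer could look like (the domain, address, and date here are made up):

    # Served at https://example.com/.well-known/security.txt
    Contact: mailto:security@example.com
    Policy: https://example.com/vulnerability-disclosure
    Expires: 2026-12-31T23:59:59Z

Mandating something this small wouldn't fix any vulnerabilities by itself, but it would remove the "we didn't know how to reach them" excuse on both sides.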
It's the Dictator's Response. At the first sign of trouble, silence the troublemakers, because if there's no smoke, then there's no fire.
Responding to a security threat like this requires funding. Funding requires spending political capital, and funding firefighting requires spending political capital on something which isn't a "feature" or an "achievement", so it's a negative for the manager in charge. It's best if the problem "just went away"...
To correct your analogy they are probably thinking more along the lines of:
"I just custom built a car but bought a knock-off car door that has a fake keyhole and no real locks"
"Hey! You've left your car unlocked! I got in with any key!"
"Why were you trying to get into my car?"
Of course the analogy breaks down even further when you consider that the "car" in this case contains stuff (sensitive user data) that doesn't belong to the car owner. Analogies are hard.
They are, and they're never perfect. But there's a simple question people should ask themselves when someone tells them their site is insecure: If this person intended to misuse this information, why are they telling me about it? And I think the analogy I offered also suggests this question.
While the reaction of the company is moronic, there is still an idiot developer somewhere who is taking inputs from the client without checking appropriate access. This is web development 101. But in a world where anyone who can write two lines of PHP can call himself a web developer, it's pretty much the norm.
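To make that concrete, the missing check is usually only a couple of lines on the server side. A minimal sketch in Python/Flask (the ORDERS dict and the route are made-up stand-ins for whatever the real app actually stores):

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "dev-only-secret"  # required for session support

    # Toy stand-in for a real database of records keyed by id.
    ORDERS = {1: {"owner_id": 42, "details": "order details go here"}}

    @app.route("/orders/<int:order_id>")
    def show_order(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            abort(404)
        # The "web development 101" step: the id came from the client,
        # so verify the logged-in user actually owns the record.
        if order["owner_id"] != session.get("user_id"):
            abort(403)
        return order["details"]

Skip that ownership check and anyone can walk through other users' records just by changing the id in the URL, which is exactly the class of bug being discussed here.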
What if someone you didn't know said "look how easy it was for me to break into your house!" I understand the reaction. People just need to be better informed on how to deal with computer security.