People occasionally suggest that software engineers should be professionally licensed. I have a different proposal: I think that people who want to manage a business involving software development should have to get trained and licensed.
ETA: while my proposal is somewhat facetious when applied to all software development, perhaps it's not completely inconceivable that we could require businesses collecting any personal information from users to be licensed and audited. We already have PCI-DSS compliance rules for businesses using credit cards; this would be analogous, though it would have to be enforced by the government, since credit cards wouldn't necessarily be involved.
1) The people who built the site initially are gone. Even the people who hired them to do it are likely gone now.
2) The person paid to "maintain" the site is just a technology "manager" who doesn't really know all that much about how it works.
3) There is nobody at the company who can tell if the "audits" they are paying for are snake-oil or not, but they're expensive so they must be good.
4) Even if the threat is real, they don't even know where to start in assessing it so they just fall back on their expensive "audit".
This leaves them unable to tell well-intentioned do-gooders from Nigerian princes, so their initial response of just blocking them might not be that far-fetched. It does start to look increasingly bad when it becomes clear that there is a real problem at hand. The quality of management can be measured by how quickly they identify the blind spot.
It's probably more like some rando yelling out from where he's loitering by the cart return: "hey, your binzinger's habroodled! Your car might cause an accident when it snerts!" What does he want? Is it a scam? Are those really parts of a car? He doesn't even know you, what's his angle? -- What do you do? Look down and keep walking, that's what. It almost seems reasonable.
When a manufacturer issues a safety recall, you don't need to understand things like the necessary gap between mains voltage and 12V in a transformer, or the biological effects of high levels of insecticide in an egg; you simply need to recognise a safety warning from an industry professional.
The issue in these cases isn't that the people in charge don't recognise technical terms, it's that they wilfully ignore the voices of caution, warning them about safety issues. In many industries, that lands them in court.
Imagine that you got a letter from a stranger, telling you that the locks on your upstairs windows didn't work properly. How confident are you that you'd take it as a helpful suggestion, rather than being completely creeped out by this menacing weirdo?
I think that negative responses to disclosure are generally grounded in a mixture of fear, mistrust, misunderstanding and arse-covering. Someone who discovers a vulnerability is seen as inherently untrustworthy, because why else would they be snooping about and trying out the locks? We think of computer systems as inherently insecure until proven otherwise, but they see their systems as fundamentally secure until someone comes along and breaks them. If you're fearful of technology, it's easy to hear "excuse me, I think your system is vulnerable" as "nice system you have here, it'd be a shame if someone broke into it". Denials and cover-ups are often the default corporate response, because being the bearer of bad news can be a career-limiting move in many organisations.
Hmm, I think you're onto something. Maybe they just don't get that the vulnerabilities are already there, waiting to be exploited -- they think that the person they've heard from actually broke something that will now make it possible for others to get in. I guess if you don't know what's going on, that's as reasonable a theory as any.
Another way of saying essentially the same thing: both parties believe that "extraordinary claims require extraordinary evidence." However, for us, the extraordinary claim is that software is secure; whereas, for them, the extraordinary claim is that the software for which they have paid so much money can somehow be insecure.
If I were really creeped out, I'd be even more likely to spend effort making sure the locks on my windows are secure, as now I know there's a menacing weirdo looking at them, so I'd want to be extra sure he couldn't get in.
Now, imagine that the menacing weirdo had included a return address on his letter. Would you report him to the police? If he got locked up, then that would be one way to make extra sure he couldn't get in.
And then the problem is solved, so you don't even need to fix the locks...
Because there are multiple weirdos/criminals in the world, and protecting my stuff is worth it to me.
so just send the money.
I think this response is pretty awful, but I do understand it. This website was probably either made by contractors who are long gone or by an internal team who are too incompetent to fix it. Getting either of those parties to address the problem in a timely manner is a huge hassle (that could potentially cost lots of money). Ignoring the problem is easy and free. There's also likely the fear of "oh god, what have we done, and what kind of liability did this open us up to?" that is hard to stomach. It's incredibly stupid, but people usually are when they're panicked and have just been caught doing something bad.
If I paid a construction contractor to build my office and someone notified me that parts were unsafe or violated the building code, I would either hire the original contractors or new ones to fix it, because otherwise I would be legally liable if I still used it.
If my own employees built it, we would be having an interesting discussion about how it happened, and whether I could trust them to fix it or would need to hire a contractor (or at least fire that manager).
Whenever something new is built that people use, care needs to be taken with safety. The sooner average people realize that this affects digital constructs the same as physical ones, the better.
People forget that not all developers out there are Silicon Valley Rockstar Unicorn developers, who are thinking about the product's needs and the users and the edge cases. Lots of this kind of work is done at body shops where, if the customer specified the name input field should allow 8 characters, they'll make it allow at most 8 characters even though they know that people have names longer than that. If it comes back as a change request, $$$ cha-ching!
In Sweden it used to be like this. Starting with a law in 1973 that grew out of fears of big corporations' mainframe databases, everyone with a registry of personal information had to register with the government, pay a license fee and comply with a strict data privacy law. Then the '80s and '90s came, the law was slowly weakened, and it was eventually replaced with an implementation of the EU data protection directive, which is more self-regulatory.
Kids Pass are registered: https://ico.org.uk/ESDWebPages/Entry/ZA145885
Until we are able to abolish the CFAA, I have zero faith in our ability to legislate or regulate.
tl;dr: abolish the CFAA before any new restrictions on software development
To clarify, I know that direct federal regulation of the industry would be inappropriate - it's too fast-moving and, like any guild (which is what it is, essentially), it would be subject to being politicised. I'm proposing something far simpler, such as: "if someone dies from software or SSNs get leaked, and the system was not signed off by a state-licensed SE, then the company responsible is subject to extra damages for negligence" - and state licensing should be under the purview of a non-profit board (with a bias towards governance from academia instead of industry). I think that would work.
If I were on Twitter, I'm sure I would do the same.
I can understand why someone could block that as spam, especially someone who hosts such a blatant security flaw.
It would be a start to simply mandate that businesses collecting any personal information from users publish a policy on how users can report breaches. Fine them much more severely if they have a real breach but did not publish such a policy, or if they ignored a previous report of that breach submitted to them under that policy.
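One lightweight convention along these lines already exists: the security.txt file (since standardised as RFC 9116), served at /.well-known/security.txt, which tells researchers exactly where to report. A minimal example (the contact address and URLs here are placeholders):

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:00:00.000Z
    Policy: https://example.com/security-policy
    Preferred-Languages: en

A mandate could be as simple as requiring such a file, plus penalties for ignoring reports sent through it.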
It's the Dictator's Response. At the first sign of trouble, silence the troublemakers, because if there's no smoke, then there's no fire.
Responding to a security threat like this requires funding. Funding requires spending political capital, and funding firefighting requires spending political capital on something which isn't a "feature" or an "achievement", so it's a negative for the manager in charge. It's best if the problem "just went away"...
"I just custom built a car but bought a knock-off car door that has a fake keyhole and no real locks"
"Hey! You've left your car unlocked! I got in with any key!"
"Why were you trying to get into my car?"
Of course the analogy breaks down even further when you consider that the "car" in this case contains stuff (sensitive user data) that doesn't belong to the car owner. Analogies are hard.
They are, and they're never perfect. But there's a simple question people should ask themselves when someone tells them their site is insecure: If this person intended to misuse this information, why are they telling me about it? And I think the analogy I offered also suggests this question.
Mr. Feynman famously found you could lift the combo off a safe [with the a-bomb's secrets] when it was empty. When he alerted the Colonel not to leave his safe open, the response was to:
send a note around to everyone in the plant which said, “During his last visit, was Mr. Feynman at any time in your office, near your office, or walking through your office?” Some people answered yes; others said no. The ones who said yes got another note: “Please change the combination of your safe.”
That was his solution. _I_ was the danger!
1. A Kids Pass spokeswoman said that it was their off-hours crew that blocked Alex and Troy. They were unblocked 10 hours later.
2. They will institute a vulnerability policy as a result of this.
Usually happens when a politician or business leader sends a racist/pornographic message.
That's a funny way to spell "employee".
It's probably also hard to know what a good security audit looks like, unless you grasp basic security in the first place.
I have no idea what the solution is.
And it's kind of like legalizing burglary, so that people will be forced to live in fortified compounds.
I like this plan. Can't wait for my father, who can barely figure out how to attach a picture to an email in AOL's webmail, to start poking for XSS and CSRF vulnerabilities on the sites his spam mail links to, and changing his username to "1;DROP TABLE users" everywhere.
We already have a process in place to deal with flaws in products produced by unlicensed entrepreneurs. We just need to extend it to apply to software products and services.
If you replaced security with accounting, the above would still make sense! Why do companies pay through the nose to get an accounting audit done right, but are much less willing to do so for a security audit?
I say the solution is to put the (legal) responsibility on the company. Once there's a financial incentive, it becomes a priority.
Is there no open-source standard for authentication and user-data management? Do companies really need to roll their own each time?
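There are vetted building blocks for at least parts of it. As a trivial illustration, password storage shouldn't be hand-rolled; a minimal sketch, assuming Python and the widely used bcrypt package:

    import bcrypt

    def hash_password(password: str) -> bytes:
        # bcrypt generates a random salt and embeds it in the returned hash
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def verify_password(password: str, stored_hash: bytes) -> bool:
        # checkpw re-derives the hash using the embedded salt and compares
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

The point isn't this particular library; it's that the hard parts (salting, work factors, comparison) are already solved and audited, so there's no excuse for rolling your own.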
"What? I have to check my email for a password reset link? That's confusing?! Our users will not be able to use that! My temporary password is xpJ38@#K1o1n$5@wlo%!pq? This is horrible? My cousin does UX and says this lowers our SEO! Just have it email their password to them! It'll confuse people!"
An amalgamation of various reactions I've heard over the years when implementing a standard process for password creation/resets.
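For reference, the "standard process" is short enough to sketch. A minimal version in Python (the dict stands in for a database table; all names here are hypothetical):

    import secrets
    import time

    # token -> (user_id, expiry); in a real system this lives in a database
    reset_tokens = {}

    def create_reset_token(user_id, ttl_seconds=3600):
        token = secrets.token_urlsafe(32)  # unguessable; emailed as a link
        reset_tokens[token] = (user_id, time.time() + ttl_seconds)
        return token

    def redeem_reset_token(token):
        entry = reset_tokens.pop(token, None)  # single-use: consumed on redemption
        if entry is None:
            return None
        user_id, expiry = entry
        return user_id if time.time() < expiry else None

No temporary passwords, nothing emailed that stays valid for long, and the user's actual password never leaves the server.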
Jakob's Law of the Internet User Experience (2000):
Users spend most of their time on other sites.
[Granted, this may not apply to Google/Facebook, it's from an earlier, more civilized, I mean decentralized age.]
Also, the standard would use 7-word-long passphrases, which are much more readable, memorable and secure than the abomination above.
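Generating one is a few lines. A minimal diceware-style sketch in Python (the wordlist filename is a placeholder; the EFF large wordlist has 7776 entries, so seven words gives roughly 90 bits of entropy):

    import secrets

    # Assumes a newline-delimited wordlist such as the EFF large wordlist,
    # where each line ends with the word itself
    with open("wordlist.txt") as f:
        words = [line.split()[-1] for line in f if line.strip()]

    passphrase = " ".join(secrets.choice(words) for _ in range(7))
    print(passphrase)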
No, all companies that use technology are technology companies, they just fail to realize this and do not act accordingly.
A friend of mine who works at one of the really expensive consulting companies witnessed someone lashing out on Twitter about how bad such-and-such people were.
So he answered along the lines of: "I grew up in such-and-such a home, my experience is totally different, and I'll be happy to buy you lunch."
Blocking is a power thing for some people. IIRC it used to be a thing on old Usenet, and of course it existed before that in other forms.
E.g., when you don't have the resources to pay for bug bounties, etc.
How can we teach these entrepreneurs to act? Perhaps by creating an accessible, gently worded guide on how to act; an FAQ from a reputable organization that you can link to every time you disclose a vulnerability. IEEE, EFF: I'm looking at both of you.
Is this a joke?
This is awfully close to "if you see something bad happening, just ignore it" which I find rather ridiculous. I think the morally correct thing to do is inform them privately and, if the owners of the site don't respond (or block you, like in this case), go public so that laypersons know their information is not secure.
A two-second Google search for "kidspass uk" shows a sitelink for their "Contact Us" page, listing both a phone number and email address: https://www.kidspass.co.uk/contact-us
So unless that page was added post-incident, then IMO neither Alex nor Troy did responsible disclosure.
There's nothing to stop you downloading all the PDFs you'll ever need and then ending your free trial. Other than the fact that that's technically fraud.
That's the only way to ensure that the security is taken seriously.
Huh. I didn't think there was anyone even more "kill them all and let God sort them out" than I am. I think the janitors or the accountants are clearly not at fault in this case, and including them would be wrong.
The Information Commissioner is the regulator for this kind of thing.
They do take action on this kind of thing.
They’ve also been quite notably toothless; for example, TalkTalk was fined a “record” £400k for their 2015 disaster: https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...
ICO is one path to make that happen.