I have trouble understanding this mindset. It's like, if you were walking away from your car in a parking lot, and someone said "Hey! You've left your car unlocked!", and you yelled at them angrily "Stop looking at my car!!!". It makes no sense at all, and yet it's practically the universal response from people who don't know what they're doing.
People occasionally suggest that software engineers should be professionally licensed. I have a different proposal: I think that people who want to manage a business involving software development should have to get trained and licensed.
ETA: while my proposal is somewhat facetious when considered about all software development, perhaps it's not completely inconceivable that we could require businesses collecting any personal information from users to be licensed and audited. We already have PCI-DSS compliance rules for businesses using credit cards; this would be analogous, though it would have to be enforced by the government, as credit cards wouldn't necessarily be involved.
1) The people who built the site initially are gone. Even the people who hired them to do it are likely gone now.
2) The person paid to "maintain" the site is just a technology "manager" who doesn't really know all that much about how it works.
3) There is nobody at the company who can tell if the "audits" they are paying for are snake-oil or not, but they're expensive so they must be good.
4) Even if the threat is real, they don't even know where to start in assessing it so they just fall back on their expensive "audit".
This leaves them unable to tell well-intentioned do-gooders from Nigerian princes, so their initial response of just blocking them might not be that far-fetched. It does start to look increasingly bad when it becomes clear that there is a real problem at hand. The quality of management can be measured by how quickly they identify the blind spot.
This is the most plausible explanation, in my view. I've seen companies where the software was considered "done" and they literally had no software engineers around anymore who could even modify it, let alone the original people who built it. Just a bunch of managers and salespeople milking it. If anything were to go wrong they'd have to bring in an expensive contractor or just shut down that part of the business (if the numbers made sense to do so).
5) That twitter account he's arguing with is run by a social media agency that doesn't have any actual connection to people running Kids Pass, let alone direct contact with the developers.
You forgot another possibility: the organization might have a bad office culture and the person is just instinctively protecting their turf.
> It's like, if you were walking away from your car in a parking lot, and someone said "Hey! You've left your car unlocked!", and you yelled at them angrily "Stop looking at my car!!!".
It's probably more like some rando yelling out from where he's loitering by the cart return, "hey, your binzinger's habroodled! Your car might cause an accident when it snerts!". What does he want? Is it a scam? Are those really parts of a car? He doesn't even know you, so what's his angle? -- What do you do? Look down and keep walking, that's what. It almost seems reasonable.
The key phrases are recognisable though. It's not "binzinger" and "habroodled" and "snerts" - it's "vulnerable", and "security issue", and "data protection", and "safety". These aren't alien or strange terms, they're simply words you don't want to hear about your product.
When a manufacturer issues a safety recall, you don't need to understand things like the necessary gap between mains voltage and 12V in a transformer, or the biological effects of high levels of insecticide in an egg; you simply need to recognise a safety warning from an industry professional.
The issue in these cases isn't that the people in charge don't recognise technical terms, it's that they wilfully ignore the voices of caution, warning them about safety issues. In many industries, that lands them in court.
I think you might be vastly overestimating the technical competence of a lot of people in non-tech management. In my experience, a lot of people really do believe those nonsense phrases we hear in response to a disclosure - "our system has been audited, so we know it's secure", "we use military-grade password encryption" etc. To the average user, this stuff is basically voodoo.
Imagine that you got a letter from a stranger, telling you that the locks on your upstairs windows didn't work properly. How confident are you that you'd take it as a helpful suggestion, rather than being completely creeped out by this menacing weirdo?
I think that negative responses to disclosure are generally grounded in a mixture of fear, mistrust, misunderstanding and arse-covering. Someone who discovers a vulnerability is seen as inherently untrustworthy, because why else would they be snooping about and trying out the locks? We think of computer systems as inherently insecure until proven otherwise, but they see their systems as fundamentally secure until someone comes along and breaks it. If you're fearful of technology, it's easy to hear "excuse me, I think your system is vulnerable" as "nice system you have here, it'd be a shame if someone broke into it". Denials and cover-ups are often the default corporate response, because being the bearer of bad news can be a career-limiting move in many organisations.
> We think of computer systems as inherently insecure until proven otherwise, but they see their system as fundamentally secure until someone comes along and breaks it.
Hmm, I think you're onto something. Maybe they just don't get that the vulnerabilities are already there, waiting to be exploited -- they think that the person they've heard from actually broke something that will now make it possible for others to get in. I guess if you don't know what's going on, that's as reasonable a theory as any.
> We think of computer systems as inherently insecure until proven otherwise, but they see their systems as fundamentally secure until someone comes along and breaks it.
Another way of saying essentially the same thing: both parties believe that "extraordinary claims require extraordinary evidence." However, for us, the extraordinary claim is that software is secure; whereas, for them, the extraordinary claim is that the software for which they have paid so much money can somehow be insecure.
> Imagine that you got a letter from a stranger, telling you that the locks on your upstairs windows didn't work properly. How confident are you that you'd take it as a helpful suggestion, rather than being completely creeped out by this menacing weirdo?
If I were really creeped out, I'd be even more likely to spend effort making sure the locks on my windows are secure, as now I know there's a menacing weirdo looking at them, so I'd want to be extra sure he couldn't get in.
> I know there's a menacing weirdo looking at them, so I'd want to be extra sure he couldn't get in.
Now, imagine that the menacing weirdo had included a return address on his letter. Would you report him to the police? If he got locked up, then that would be one way to make extra sure he couldn't get in.
And then the problem is solved, so you don't even need to fix the locks...
This is where the analogy breaks down; if my metaphorical windows were on the open internet, I wouldn't feel at all secure after locking up one single weirdo.
If your site collects and stores private information, someone working for you needs to know enough about security to sort the crazies from the real security researchers.
It's more like running valet parking and leaving other people's cars unlocked. Yeah, you should update your process so that your drivers lock the cars, but oh man, that's kind of hard. What if we just tell people their cars are locked up nice and safe and ignore anyone who says otherwise? That's much easier.
I think this response is pretty awful, but I do understand it. This website was probably either made by contractors who are long gone or an internal team who are too incompetent to fix it. Getting either of those parties to address the problem in a timely manner is a huge hassle (that could potentially cost lots of money). Ignoring the problem is easy and free. There's also likely the fear of "oh god, what have we done, and what kind of liability did this open us up to?" that is hard to stomach. It's incredibly stupid, but people usually are when they're panicked and have been caught doing something bad.
> This website was probably either made by contractors who are long gone or an internal team who are too incompetent to fix it.
If I paid a construction contractor to build my office and someone notified me that parts were unsafe or violated the building code, I would either hire the original contractors or new ones to fix it, because otherwise I would be legally liable if I still used it.
If my own employees built it, we would be having an interesting discussion about how it happened, and whether I could trust them to fix it or would need to hire a contractor (or at least fire that manager).
Whenever something new is built that people use, care needs to be taken with safety. The sooner average people realize that this applies to digital constructs just as much as to physical ones, the better.
Perhaps the contractors who built the site followed the letter of the contract, and the security requirements were insufficiently specified. Or maybe, in the case of an internal development team, they followed the spec which itself was ambiguous.
People forget that not all developers out there are Silicon Valley Rockstar Unicorn developers, who are thinking about the product's needs and the users and the edge cases. Lots of this kind of work is done at body shops where, if the customer specified the name input field should allow 8 characters, they'll make it allow at most 8 characters even though they know that people have names longer than that. If it comes back as a change request, $$$ cha-ching!
I actually wasn't making an argument about contractors at all, but operators. If you operate something that you've been informed is unsafe, you fix it or stop using it. It gets fixed or you are purposefully endangering people that use it. How it gets fixed and who pays are separate issues.
> perhaps it's not completely inconceivable that we could require businesses collecting any personal information from users to be licensed and audited
In Sweden it used to be like this. Starting with a law in 1973 that grew out of fears of big corporation mainframe databases, everyone with a registry of personal information had to register with the government, pay a license fee and comply with a strict data privacy law. Then the 80's and 90's came and the law was slowly weakened and eventually replaced with an implementation of the EU data protection directive which is more self-regulatory.
Good find! So maybe this organization needs to go beyond mere registration and start educating people, at the very least -- actual audits would be better, but of course much more expensive.
Given civil engineers need to be licensed to sign off on projects where real risk exists, I feel the same should apply to software too, especially as the proliferation of autonomous vehicles and other robots means that there is now a real risk of bad code resulting in people dying. I think the problem should be addressed preemptively. I hope we can have a security and safety-critical software engineering certification program that avoids aspects of national protectionism (e.g. it's very difficult for foreign engineers to get certified, even when their degree course was accredited by the same orgs) and elitism (e.g. arbitrarily restricting it to Masters-level degree holders). We need an open, international standard for this kind of certification.
To clarify, I know direct federal regulation of the industry would be inappropriate - it's too fast-moving and, like any guild (which is what this would essentially be), it's subject to being politicised. I'm proposing something far simpler, such as: "if someone dies from software or SSNs get leaked, and the system was not signed off by a state-licensed SE, then the company responsible is subject to extra damages for negligence" - and state licensing should be under the purview of a non-profit board (with a bias towards governance from academia instead of industry). I think that would work.
I ignore 100% of emails telling me my computer is infected, my PayPal account is compromised, my credit card stolen. I even flag the most convincing ones as scams!
If I were on Twitter, I'm sure I would do the same.
Maybe I don't understand the situation correctly but the tweet was: "Hey @KidsPass - when do you plan on doing anything about the massive security issues with your website?" Followed by a link explaining <wavy-hands> how important it is to be secure. </wavy-hands>
I can understand why someone could block that as spam, especially someone who hosts such a blatant security flaw.
> ...it's practically the universal response from people who don't know what they're doing.
It would be a start to simply mandate that businesses collecting any personal information from users publish a policy on how users can report breaches. Fine them much more severely if they have a real breach but did not publish such a policy, or if they ignored a previous report of that breach submitted to them under that policy.
It's the Dictator's Response. At the first sign of trouble, silence the troublemakers, because if there's no smoke, then there's no fire.
Responding to a security threat like this requires funding. Funding requires spending political capital, and funding firefighting requires spending political capital on something which isn't a "feature" or an "achievement", so it's a negative for the manager in charge. It's best if the problem "just went away"...
To correct your analogy they are probably thinking more along the lines of:
"I just custom built a car but bought a knock-off car door that has a fake keyhole and no real locks"
"Hey! You've left your car unlocked! I got in with any key!"
"Why were you trying to get into my car?"
Of course the analogy breaks down even further when you consider that the "car" in this case contains stuff (sensitive user data) that doesn't belong to the car owner. Analogies are hard.
They are, and they're never perfect. But there's a simple question people should ask themselves when someone tells them their site is insecure: If this person intended to misuse this information, why are they telling me about it? And I think the analogy I offered also suggests this question.
While the reaction of the company is moronic, there is still an idiot developer somewhere who is taking inputs from the client without checking appropriate access. This is web development 101. But in a world where anyone who can write two lines of PHP can call himself a web developer, it's pretty much the norm.
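For anyone unsure what "checking appropriate access" looks like in practice, here's a rough sketch of the missing step (a made-up Flask-style example of mine, not Kids Pass's actual code; the route, field names and data are placeholders): look up the record the client asked for, then verify that the logged-in user actually owns it before returning anything.

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "demo-only"  # placeholder secret for the sketch

    # Hypothetical in-memory "database" of member records keyed by ID.
    MEMBERS = {
        1: {"owner_id": 42, "name": "Alice", "email": "alice@example.com"},
        2: {"owner_id": 99, "name": "Bob", "email": "bob@example.com"},
    }

    @app.route("/members/<int:member_id>")
    def show_member(member_id):
        record = MEMBERS.get(member_id)
        if record is None:
            abort(404)
        # The "web development 101" step: don't serve whatever ID the client
        # asked for -- check that the logged-in user owns this record.
        if record["owner_id"] != session.get("user_id"):
            abort(403)
        return record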
What if someone you didn't know said "look how easy it was for me to break into your house!" I understand the reaction. People just need to be better informed on how to deal with computer security.
Mr. Feynman famously found you could lift the combo off a safe [with the a-bomb's secrets] while it was open. When he alerted the Colonel not to leave his safe open, the response was to:
send a note around to everyone in the plant which said, “During his last visit, was Mr. Feynman at any time in your office, near your office, or walking through your office?” Some people answered yes; others said no. The ones who said yes got another note: “Please change the combination of your safe.”
That was his solution. _I_ was the danger!
Ah yes, the unverifiable temp-worker / contractor / person-who-no-longer-works-here scapegoat that shields the company from any responsibility. It's always amazing how they would have you believe that the bad actor is never a real employee.
"We admittedly hire substandard employees in order to call our SOC '24/7' and they have embarrassed us. We're sending them to SANS training to remedy the situation."
Putting text on a page isn't hard stuff. The hard stuff is teaching computer security to an organization that mistakes responsible disclosure for a hack attempt, and thinks a Twitter block will protect them.
It's probably also hard to know what a good security audit looks like, unless you grasp basic security in the first place.
I have an idea for a solution. Legalize hacking, let everyone hack each other and spy on each other and what not, make every company out there explicitly aware that the internet is a jungle. This will create a demand for security and force software and hardware manufacturers to actually care about security and so on.
The internet is already a hostile jungle (look at the logs of any server you control). Legalizing hacking won't change that. It will just add a little more immunity to bad actors.
And it's kind of like legalizing burglary so that people will be forced to live in fortified compounds.
The NRA approach to safety and securing yourself and belongings. If everyone has a gun, everyone will be safe. Except for those who once owned guns and after a very scary mishap believe that it's safer to not own guns, and those that have already criminally mishandled guns and used them in commission of serious and violent crimes, and those who after weeks at the shooting range still can't hit the broadside of a barn. Except those people, everyone will be safe. Oh, and children. Screw the children, because they just don't have the mental capacity to understand the consequences that may occur after one shoots another.
I like this plan. Can't wait for my father, who can barely figure out how to attach a picture to an email in AOL's webmail to start poking for XSS and CSRF vulnerabilities on the sites his spam mail links to, and changing his username to "1;DROP TABLE users" everywhere.
Cyber weapons are already legal to download and possess; it's a very different story from guns. Imagine if your competition were legally allowed to steal your trade secrets, clients, and employees, but only through hacking. Would you not care enough to invest in security?
Perhaps rather than legalizing hacking, we should inject a common middleman into the vulnerability reporting process. In the US, it could be the Consumer Product Safety Commission. It would ensure that those accepting reports are knowledgeable and treat the report as important and also protect companies from having to deal with random white hats. Serious enough vulnerabilities could result in fines that are partially paid to the reporter as a bounty. It would also protect white hats from threats or retribution since they could remain anonymous if they choose.
We already have a process in place to deal with flaws in products produced by unlicensed entrepreneurs. We just need to extend it to apply to software products and services.
No, it won't, because people hack right now too. What it would do, however, is force people who have ethical considerations to step back in favour of people who don't have them. So, prepare for a lot of insider attacks and back doors. Not that those don't exist now; they do. But your proposal would give bad actors even more of an advantage than they have now, so they would get their way more often.
> It's probably also hard to know what a good security audit looks like, unless you grasp basic security in the first place.
If you replaced security with accounting, the above would still make sense! Why do companies pay through the nose to get an accounting audit done right, but are much less willing to do so for a security audit?
I say the solution is to put the (legal) responsibility on the company. Once there's a financial incentive, it becomes a priority.
I'm currently working for a real estate startup and I have to seriously twist their arm to convince them we should authenticate private API requests at all, let alone run vulnerability testing.
After so many decades in this industry nothing surprises me at all. Security is usually an afterthought that barely warrants spending more than a token amount. I once did a contract at a public university and found that the app every department used to verify with the state that money was appropriately spent used incrementing IDs in the URL and used GET to handle the delete button. I wound up fixing it for them on the way out (after weeks of being told it wasn't a concern). A simple command-line script could have deleted the entire database, leaving the university with no budget for the upcoming year. Another place I worked kept production passwords in the code repository; when I complained they told me they passed their audits every year so it didn't matter. A HIPAA company in the US, no less.
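To make the GET-delete point concrete, the usual fix looks roughly like this (a made-up Flask sketch, not the university's actual app; the names and routes are invented): destructive actions only through POST, and only for an authenticated user, so link prefetchers, crawlers and a one-line loop over incrementing IDs can't wipe the table.

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "demo-only"  # placeholder secret for the sketch

    BUDGET_LINES = {1: "Dept A travel", 2: "Dept B lab supplies"}  # stand-in data

    # Dangerous original shape: GET /delete?id=123 removes row 123, and the IDs
    # simply count up. Safer shape: deletes only via POST, only when logged in
    # (a real app would also require a CSRF token).
    @app.route("/budget-lines/<int:line_id>/delete", methods=["POST"])
    def delete_budget_line(line_id):
        if "user_id" not in session:
            abort(401)
        if line_id not in BUDGET_LINES:
            abort(404)
        del BUDGET_LINES[line_id]
        return "", 204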
It doesn't matter if standards exist. The "right way" to do stuff is "confusing" and "hard to use" for people who are not technology people.
"What? I have to check my email for a password reset link? That's confusing?! Our users will not be able to use that! My temporary password is xpJ38@#K1o1n$5@wlo%!pq? This is horrible? My cousin does UX and says this lowers our SEO! Just have it email their password to them! It'll confuse people!"
Amalgamation of various reactions I've heard over the years when implementing standard process for password creation/resets.
You are correct in this universe. But in the universe the grand-parent imagines, none of this is confusing, because EACH AND EVERY WEBSITE on the planet does this, so users (and even managers) have seen this many times already.
Jakob's Law of the Internet User Experience (2000):
Users spend most of their time on other sites[0].
[Granted, this may not apply to Google/Facebook, it's from an earlier, more civilized, I mean decentralized age.]
Also, the standard would use 7-word-long passphrases [1], which are much more readable, memorable and secure than the abomination above.
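For the curious, a generator for that kind of passphrase is a few lines of standard-library Python. The wordlist path is my assumption (e.g. the EFF large wordlist of ~7776 words); any big wordlist will do.

    import secrets

    def make_passphrase(wordlist_path: str, n_words: int = 7) -> str:
        # One candidate word per line; the EFF lists prefix each word with dice
        # digits, so take the last whitespace-separated token on the line.
        with open(wordlist_path) as f:
            words = [line.split()[-1] for line in f if line.strip()]
        return " ".join(secrets.choice(words) for _ in range(n_words))

    # With a 7776-word list, 7 words give about 7 * log2(7776) ~ 90 bits of
    # entropy, and they're far easier to remember than the jumble quoted above.
    # print(make_passphrase("eff_large_wordlist.txt"))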
That doesn't make sense. That's like calling my neighborhood pizzeria a car company because they use cars to deliver pizzas. They are a phone company too, because that's how people place orders.
A friend of mine who works with one of the really expensive consulting companies witnessed someone lashing out on Twitter about how bad such-and-such people were.
So he answered along the lines of: I grew up in such and such home, my experience is totally different and I'll be happy to buy you lunch.
Answer: blocked.
Blocking is a power thing for some people. IIRC it used to be a thing in the old Usenet and of course it existed before that in other forms.
I do that (blocking people) as an energy-saving device. Reading messages from people with a completely different worldview than mine (e.g., communists) is extremely annoying. It raises my blood pressure. So, I block them and become happier as a result.
All I can imagine happening there is panic. Defensive behavior such as this indicates either that they don't really know how to fix this quickly, or that they just don't care.
Panic is the worst response to have in this situation. You have someone who is reporting this in good faith, who probably has an idea as to why the vulnerability exists. Just come out and be like "I don't know what I'm doing" to the individual and see if they can help, and look to hire someone that does know what they're doing.
This seems like an area where a trusted organization (perhaps the EFF?) could do a lot of good by creating a "for dummies" webpage where the vulnerability disclosure process is explained in layman's terms (i.e. with suitable car analogies...) from a website owner's perspective. Those who discover a vulnerability in a company's IT infrastructure can then submit a link to this page with their reports.
A page that asks you to give them your real name and promises not to take legal action against you only if you follow very specific rules that could be violated by accident? I wouldn't report to them like that; they've left enough rope to hang you with.
Sometimes we forget the entrepreneurs behind these services can be technologically illiterate. When they realize they have a problem they don't understand, they get scared, and can easily get confrontational and try to dodge any liability (e.g. by getting the police involved).
How can we teach these entrepreneurs to act? Perhaps by creating an accessible and gently worded guide on how to act; an FAQ from a reputable organization that you can link to every time you disclose a vulnerability? IEEE, EFF I'm looking at both of you.
Writing "They have a serious vuln" on twitter is not responsible. Try to hack those who have bounties, please leave the others alone, or at least contact them privately when you find a vuln. Give them a chance to fix it, and if you want to be helpful also tell them what the issue is.
>Try to hack those who have bounties, please leave the others alone
Is this a joke?
This is awfully close to "if you see something bad happening, just ignore it" which I find rather ridiculous. I think the morally correct thing to do is inform them privately and, if the owners of the site don't respond (or block you, like in this case), go public so that laypersons know their information is not secure.
IANAL. If you have reported the vuln, they are required by law to report a "breach" to the authorities, and if they do not do it willingly they might be forced to tell all their customers about the breach. There's really no good in telling your 70k followers that said site has vulns, at least not until they have been fixed. If months have passed, the vulns are still not fixed, and you have not heard back from them, you could try contacting the authorities yourself. Self-publishing your own "hacks" is not a good idea though; you should get a newspaper or other "researcher" to do it for you. You can brag about it later when the vulns have been fixed and you have permission. Only try to hack those who have asked for it, and make sure you join their bounty program for a reward.
Agreed. Especially considering how easy it would have been to contact them privately and directly.
A two-second Google search for kidspass uk shows a sitelink for their "Contact Us" page, listing both a phone number and an email address: https://www.kidspass.co.uk/contact-us
So unless that page was added post-incident, then IMO both Alex and Troy did not do responsible disclosure.
Hopefully they were given more than a few weeks to fix it. But from reading the article it seems they were impatient and didn't even wait for the weekend to be over before publicly announcing that the site had a vuln. And he didn't give instructions on how to reproduce the "bug" in the DM (Twitter Direct Message).
Considering his follower base there might have been a number of people interested to know what it was, and capable of finding out by themselves. And from the article it seems the tweet did set off a "hacker feast" against the site.
Having used Kids Pass, I can say it's a pointless product anyway. Snake oil. All the "deals" are just links to PDFs, many of which have no barcodes, and the ones that do are just generic barcodes that don't relate to a particular Kids Pass account. Also, more often than not, when you redeem the vouchers they aren't scanned.
There's nothing to stop you downloading all the PDFs you'll ever need and then ending your free trial. Other than the fact that that's technically fraud.
The solution to this is simple. Disclose everything. Have these companies destroyed. Have everyone who works for them fired and become unhirable. Have their houses foreclosed on because they cannot afford to pay the mortgages or rent.
That's the only way to ensure that security is taken seriously.
> Have everyone who works for them fired and become unhirable.
Huh. I didn't think there was anyone even more "kill them all and let God sort them out" than I am. I think the janitors or the accountants are clearly not at fault in this case, and including them would be wrong.
The ICO was the direct result of the EU Data Protection Directives of the 80s, and similar bodies are replicated throughout the Union, so I’m not sure they’re the best example of the “oppressive regime” in the UK.
There are new Data Protection Regulations (GDPR) coming in soon on 25 May 2018, which will allow EU bodies to fine organisations up to 4% of their global revenue.
That’s true but if there’s very little penalty (continuing the example, TalkTalk made an after-tax profit of £72M [1] so the fine was 0.6% of that) for not fixing it, there isn’t much reason for an already-negligent business to care what the ICO say.