Most importantly: there are no formal definitions for what these disclosure terms mean, the distinctions between them are not unimportant, and the most important distinctions have nothing at all to do with "disclosure".
To me, as a working professional in this field, "disclosure: full" informs me that if I report a vulnerability in your service, you'll publish some sort of public advisory, or agree to me doing so, on my own timeline, regardless of when you're fully patched and regardless of whether you've confirmed or triaged the bug.
Is that exactly your understanding of "disclosure: full"? If not, there's your first minor problem.
But there are more important problems. Actually, I don't need any permission whatsoever to publish a vulnerability I find in your site, nor is there any real norm in the infosec community that I shouldn't ("responsible disclosure" is a meme propagated by vendors who want to control independent researchers). You can state your preference that I coordinate with you ("coordinated disclosure" is the researcher-friendly way to write "responsible disclosure"), but you can't dictate that to me --- and the suggestion that you could is presumptuous and rude.
But that's still not the big problem. The BIG problem is that disclosure policy has nothing to do with whether you can test.
There is a widespread belief on the US Internet that light web application testing of SaaS products is allowed. It is very much not. Without explicit permission to test a website I put on the Internet, your doing so violates (and probably should violate!) CFAA, and in some states also state statutes. That goes not just for disruptive testing, like seeing if you can break a SQL query, but also for stuff that most testers believe is harmless, like light XSS testing. I can't tell you how many times an XSS test vector "broke" a site for a client, for instance when it got cached as a "recent search" and displayed, in a DOM-breaking way, to every user on the home page for the site. Bring down a site like that without permission, and even if it says "disclosure: full", you might be civilly liable for the downtime.
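To make that concrete (a generic illustration, not any particular client's site): a probe as mild as

    "><script>alert('xss-test')</script>

looks harmless when you submit it, but if the application persists it, say into a cached "recent searches" widget that gets rendered unescaped on the home page, the stray "> closes a tag mid-markup and mangles the page for every visitor until someone flushes the cache.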
I think the "disclosure" field here is thus pretty ill-advised. It advertises a policy that isn't super useful for researchers to know, and doesn't define the most important policy, which is "am I allowed to test this site and what are the rules of engagement for doing so".
What does that leave us with? A PGP key and the name of your security email address. The PGP key is useful, but the contact address should just always be "security@" anyways.
There's really no need for a standard format for this stuff. Just create a "security.html" or whatever that reminds people to send mail to security@, publishes your PGP (note to Symantec: public key) key, and gives people permission to test the site.
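Something like this would cover it (a hypothetical sketch; the addresses, key URL, and rules are all made up):

    Reporting: mail security@example.com
    PGP public key: https://example.com/security-pgp-key.asc
    Testing: you may test example.com against accounts you own;
      no denial of service, no access to other users' data.
    We won't pursue legal action for good-faith research within these rules.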
One of the authors DM'd me to say they're removing "Disclosure" from the next version of the draft, which is I think the right move.
Is there really no norm to at least try and contact the affected service before publishing an exploit? Seems like basic courtesy, though I understand some people are not super receptive.
Might have misread what "norm" meant here.
As for the rest of your comment... are you really a "working professional in this field"? By that I'm asking whether you are self-employed and playing fast and loose by your own rules, or whether you have an established career within the industry, with professional peers who stand behind your methods. I have to believe that honest professionals with a shred of reputability in this field would not be advocating that playing nice, so to speak, is some altruistic gift on the part of the researcher. "Security researchers", quoted loosely to include "rogue" grey and black hats who think they have free rein to hack in any way they see fit, have gone to jail for what you appear to be claiming is a risk-free "right". It's not black and white, and the courts seem to favour whichever side suits their fancy in each individual case.
It really bothers me to see anyone suggest that any public port on a machine is 100% free rein to abuse. If public-facing access is all it takes to be fair game, then I must be morally and legally in the right to pick the door locks of private citizens and businesses, or peel the face off any ATM in an effort to "gauge its security". The loophole excuse is that the Computer Fraud and Abuse Act, in theory, only covers computers owned by the government and financial institutions. When all moral obligations are ignored, and skirting around lacklustre laws is the only defence, the intentions of some "researchers" quickly become questionable.
Your stance on the subject is probably fairly common in terms of what people want to be labelled as fair game, while in the real world researchers have to dial back a bit and play nice in order to maintain an ounce of respect in the field.
His comment on responsible disclosure is about the other part of the industry: the one where you install a vendor's software on your own machine, or where you analyze client-side code without touching their server. As a researcher you have zero responsibility to play nice with the distributors of that content. That's like saying movie reviewers can only print reviews of already-released movies, on a schedule and in a way approved by the studios (certainly studios - and software companies - are allowed to make deals - contracts - with reviewers for embargoed, or even private, reviews of content before it's released).
> I'm asking [...] if you [...] have an established career within the industry for which your professional peers stand beside your methods?
I'm not sure if you're already aware of tptacek's reputation in the field and you're hinting at something different, or if you're asking these questions in a more direct sense. If the latter, I'd recommend checking out his profile and quickly Googling.
> in the real world researchers have to dial back a bit and play nice in order to maintain an ounce of respect in the field.
Despite what I mentioned above, it may be that this is actually true; that Thomas' reputation gives him a certain level of immunity whereas most "normal" researchers would have to stick to a stricter level of etiquette.
All said though, I think SolarNet may also be correct that you seem to have misinterpreted at least some of Thomas' post.
Could you explain? I never really understood him as an individual & he's been... well, harsh at times to me & others.
I think you are in the minority on this. If the hacker community goes too far in this direction, don't be surprised if there are calls to rein it in with legislation.
There are few things more frustrating than dealing with a vendor that doesn't understand this.
On the other hand, if I find something while being paid to look then I must tell only the client and hope that they are responsible enough to disclose it instead of silently patching.
I don't think these are minority views.
I think they do understand this very well; they just want to brainwash people into thinking it's a civic duty to do what they say. It's not. Ethical security researchers' responsibility is to the users, not to the vendors.
IANAL: And the prosecution has to prove both that you intended to (or actually did) market/sell it and that it was for illegal use - selling information about an exploit in e.g. Chrome is quite different if you try to sell it to the Chrome developers.
Now, people that try to legislate community norms - outside of obvious cases like prohibiting murder - usually make things worse. But ignoring community norms is also not a smart thing to do.
Only if your access to the vendor's systems wasn't covered by an EULA that you agreed to beforehand. I don't see a company being stupid enough not to put broad legal terms in the EULA to prohibit this sort of penetration testing.
But if you weren't given permission first (which may involve said EULA), then you're accessing without permission - which is illegal.
They have the option - and some choose not to sue, for PR reasons and/or because they're unlikely to recoup anything from a researcher anyway, so they'd rather not spend the money.
It took over a week of me searching the website for a security page, trying to contact support on Twitter, emailing security addresses that turned out to be undeliverable, looking through my network for a contact, etc. Someone with a better network than me finally put me in touch with the right person in the security org, and the first response I received was, essentially, "Are you trying to sell us something? This seems manipulative. Why didn't you email our [undeliverable] address?"
As far as I'm aware, the vulnerability still hasn't been resolved. Here are the problems I see:
1. I don't know if this is a good implementation of this idea in the first place - how does this handle liability? I agree with what tptacek mentioned in this thread already: this seems underspecified, and companies looking to adopt this will want to have specific assurances with regards to liability and what's allowed. What exactly does "Disclosure Type: Full" mean?
2. If a company is not already in the trendy tech group that likes to host security pages and use bug bounties, this is probably not even going to be on their radar. What is different about this that will appeal to them? How do you get a large, faceless organization which regards its security organization as more of a risk/compliance/continuity division than a technical software security division to adopt this standard?
To be clear, as someone who has gone through this song and dance several times, I absolutely would like to see improvements. But I don't know that this is the best way to do it. A more realistic standard might be a central tracker that maintains a list of key security contacts and their email addresses at various organizations. There are already lists like this floating around on GitHub, but they tend to be extremely out of date. Other than that it would be helpful to try and standardize a /security.html page with details about who and where to contact (instead of, say, a page that assumes every customer looking for a security contact mistakenly believes their accounts have been "hacked").
After they finally acknowledged the problem they started treating me like I was a criminal who attacked them. I strictly follow responsible disclosure, but it's crap like that that makes me want to reconsider sometimes.
- end-to-end encrypted with a brand new, limited-time GPG key
- use a disposable email service
- make up an alias
- send from public WiFi at a coffeeshop some distance away that doesn’t have corporate CCTV
- don't bother with Tor or a VPN, because they advertise "suspicious behavior" across network hops you don't control
If you volunteer information about your identity, it can be easily misused to attack you in a myriad of legal, professional, social media and other dirty-tricks ways.
If I may give a hint: Sometimes a good way to handle such issues is going through the media.
(I've handled such things in the past, you can mail me if you want, but I don't want this to come across as self-advertisement. I guess there are plenty of other journalists covering IT security who would be willing to handle such issues as well.)
Change billid to 8394811... you're seeing someone else's bill.
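The request had something like this shape (reconstructed for illustration; the host and parameter names are hypothetical):

    https://billing.example.com/viewbill?sessionid=...&billid=8394810   <- my bill
    https://billing.example.com/viewbill?sessionid=...&billid=8394811   <- a stranger's bill

No authorization check beyond the incrementing billid: the classic insecure direct object reference.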
Tried to contact Ameritech for a couple of days... got nowhere. Had a friend with connections at a major news network, and sent some example links (should have sent screenshots?), but he waited too long to click and the session id had timed out, and he wrote back and said to stop wasting his time.
I ended up connecting with some consumer advocate with a passion against ameritech - he owned 'fuckameritech.com' and he posted details of my exploit (although... without naming me as the reporter - still not sure if I should have pressed for that or not), and he contacted a bunch of Chicago-area media... and... something like 45 minutes after he posted that day their entire 'customer portal' was down for about 4 days. When it came back up, the new URL was something like
If you're the guy who ran fuckameritech, thanks for helping get that out. :)
Some had a bug bounty/disclosure program, and fixed it within minutes. Some had CERTs... who never acknowledged the email. Some didn't have any public contact methods, so I had to hunt for them via Google. Some acknowledged the issue and removed the private data, but didn't fix the actual 'hole'...
On the whole, I spent more time trying to find contacts for websites, than actually finding the issue(s).
Also, Occam's razor: a lot of websites are hell to navigate when trying to find the security reporting contact information. Why not make it super easy?
rfc5785 "Defining Well-Known Uniform Resource Identifiers (URIs)"
[edit: I see the FAQ says that, while the linked Internet Draft says it goes at top-level like robots.txt]
For others who are interested, here's a detailed description: https://tools.ietf.org/html/rfc5785
I actually think we are almost there now: DNS to find it and /.well-known/ to describe it.
SNI avoidance is a weird one and you are probably right, but SNI is only important (hah!) if you consider that we should all be using IPv6 by now and have billions of addresses per host to play with 8)
The rest of the world gets uptight about TLDs, but there is one - relating to ENUM - that no one seems to mention or get upset about. If they did, instead of crapping on about whether Brazil or a US company owns "amazon", for example, then I'll give you (nearly) free telephone calls and a load of big Telcos will implode. Except they won't, and the world will move on to the next logical step, where all internet connections are no longer considered a sub-function of a telephone connection and the notion of a "leased line" is an anomaly.
Whoops, got carried away there. As you say - SRV should get more love.
No, they're really not. An SRV record points to a hostname, not an arbitrary string. You can do lookups in one RTT, not two.
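For example (a sketch using the dnspython library; the _security._tcp service name is hypothetical, nobody registers it today):

    import dns.resolver

    # One query; each SRV answer carries a target hostname plus port,
    # priority and weight, so there's no second lookup to interpret a string.
    for rr in dns.resolver.resolve('_security._tcp.example.com', 'SRV'):
        print(rr.priority, rr.weight, rr.port, rr.target)

With a TXT-based scheme you'd typically need a second query to resolve whatever name the TXT string points at.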
Gives a 404
Has the file... maybe they should either follow their own advice or update the text of the advisory?
As it is, it appears a bit inconsistent.
So yeah, it's a bit inconsistent.
It should not be too hard to make an exception for this path in your web server configuration. Also note that URLs do not have to match the local filesystem, so you could as well use an alias to a different local path.
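For nginx, that could look like this (a sketch; Apache's Alias directive is the analogous mechanism):

    # Serve security.txt from outside the docroot at the well-known path.
    location = /.well-known/security.txt {
        alias /etc/security.txt;
    }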
Also, Google has increasingly made it difficult to automate searches, so the effort required to notify multiple websites increases quickly.
That's an understatement. Doing something as simple as
Since then, it's been hard to justify using gmail for anything serious.
Yes, predating this and .well-known, the principle exists as
There was a website way back in the 90s which encouraged a standardisation of paths such as /about, /products, /services etc.
Also the Contributions AKA "these random accounts from twitter may have contributed to the project to some degree, but we don't want to disclose their names or contributions" section of the website made me smile.
Doesn't seem very useful to me. Since the document supports comments, just put that link in a comment.
This one line is particularly egregious:
"Security.txt defines a standard to help organizations define the process for security researchers to securely disclose security vulnerabilities."
> Defines a X to help Y define a Z for Q to W.
> Securely disclose security vulnerabilities.
_Not everything needs to be a large project._ A template right on the webpage would have the same value. That raises the point that a blog post about the RFC would have greater utility.
In practice, I don't see a need for this .txt file to be present. There are other .txt files, such as humans.txt, that don't have widespread adoption because they don't generate enough value.
While there are many issues inherent in reporting vulnerabilities, this .txt file will not solve them. Not only is the file's existence error-prone from a maintenance perspective, but my experience has shown me that such a file is not needed.
Due to these factors, I believe that the existence of such a file is unwarranted complexity without a corresponding level of benefit.
Not sure why you linked that, it doesn't fit at all. They're solving completely different problems.
Google is built by a large team of engineers, designers, researchers, robots, and others in many different sites across the globe. It is updated continuously, and built with more tools and technologies than we can shake a stick at. If you'd like to help us out, see google.com/careers.
It's part of every new engineer's first day or two. It's their first shipped PR to add themselves as a human.
And going over my logs for the past year, I only found 6 requests for 'humans.txt' on my server.
It should really be human.txt...
I could see some value in meta-data for automated reporting of security issues.
But the spec would have to be less informal.
There should either be a well-known convention (like security@, as others have mentioned) or an external public registry of this sort of thing.
What I mean to say is that the concept of a domain is much deeper than HTML, and we use it so shallowly. The web can be distributed and simple again, if we want it back.
But then, the company would want to brand it, and the marketing team would want to track views, and the sales team would want some kind of call to action, and the product managers would want to make it cross platform and internationalized. And suddenly we're back at something that looks like HTML. sigh
Specify your disclosure policy. This directive MUST be a disclosure
type. The "Full" value stands for full disclosure, "Partial" for
partial disclosure and "None" means you do not want to disclose
reports after the issue has been resolved. The presence of a
disclosure field is NOT permission to disclose vulnerabilities and
explicit permission MUST be sought where possible.
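For reference, a complete file under this draft would look something like the following (values hypothetical):

    # security.txt, per the draft
    Contact: security@example.com
    Encryption: https://example.com/security-pgp-key.asc
    Disclosure: Full

which illustrates the problem: nothing here tells a researcher whether they may test, or what "Full" actually commits either side to.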
And it's worth remembering that any kind of disclosure "policy" is a request for a favor from the researcher, so it's good to word things accordingly. You wouldn't generally ask for a concession (like honoring an embargo on publicly reporting a finding you took time to generate) right after also "asking" the reporter to report "in good faith".
If you have anything more you want in it, please let me know.
The coordination is inherent in the fact that they are honouring your disclosure policy. The word "coordinated" is redundant.
Objectively, it should be called a "security disclosure" policy.
How would you improve it to clarify the aim is to be generally useful for both sides?
For example, when I personally discover a security issue, I want to be able to report it to a company, and also include a link to this doc, and ask "Here's how I suggest we interact and why; what do you think?".
In my life, I have only reported two different vulnerabilities to two different vendors.
One of them didn't care at all. They were transmitting usernames and passwords in plaintext and I actually showed one of their engineers this, live, in person, and he just shrugged.
The other one told me something to the effect of "yes, this is very serious. We'll look into it right away" and actually fixed it... 3 years later.
Your biggest hurdle isn't reporting vulnerabilities. That's the easy part.
The biggest hurdle is getting someone to care.
Interesting. I don't think this is for you.
Ever have the task of needing to report a security issue to 10k sites and wish you could have any hope of automating it?
I certainly haven't. More like 100k!
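If something like security.txt caught on, the first pass would at least be scriptable. A rough sketch (assuming the requests library and the draft's Contact: field; note that early drafts put the file at the top level rather than under /.well-known/):

    import requests

    def find_security_contact(domain):
        """Fetch the draft-standard location and pull out the first Contact: line."""
        try:
            resp = requests.get('https://%s/.well-known/security.txt' % domain,
                                timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            return None
        for line in resp.text.splitlines():
            if line.lower().startswith('contact:'):
                return line.split(':', 1)[1].strip()
        return None

    for domain in ['example.com', 'example.org']:
        print(domain, find_security_contact(domain))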
Don't be so negative when people try to do good.
The example security.txt for this site currently has a twitter handle as the contact. How are you going to automate that?
Major risks and CVEs are already published in the appropriate news channels, which is far more efficient and effective.
There's a number of people out there anonymously helping others close their holes. Some holes can be easily scanned.
Here, now that I've shown you that, let me show you the MITM I've been running during the conference! So far I've collected NNN credential sets
Edit: forgot link: https://github.com/CVEProject/docs/issues/53
I'm a bit negative, but I don't want more useless emails from security researchers. Companies that do security well will adopt this, and are already easy to get ahold of...
And it should also specify how private data is stored.
And how private data is collected and processed, and for how long it is retained.
LibreJS already has standardised ways for websites to report this, which are far more flexible and useful.
Changing things is how you get detected.
Much like that recent thread where John Kelly brought his phone in to IT because it wouldn't update and they found malware. That's an obvious failure in any "APT" hackers guide.
This isn't like modifying .bash_history to hide malicious behaviour (or worse removing command auditing entirely when it's a potential company/personal policy to activate it on all machines).
Yet, for many, many years, that was probably the most common thing that attackers (including a much younger, teenaged version of yours truly) did. At that time, however, "defacing" web sites was done mostly for bragging rights and without malicious intent. In many (most?) cases, the original index.html page would be copied/backed up and the "new" web page (created haphazardly using vi, perhaps) would replace or augment it.
For a long time, there were even mirrors of these defaced web pages. attrition.org ran one from 1995-2001 and, for much of that time, even ran a mailing list where they would announce these "defacements" -- often, immediately after they occurred (after being tipped off by the attackers). It was pretty common to be able to view the "still defaced" site while it was still live (before being taken down and/or restored). As a frame of reference, attrition.org stopped mirroring these sites in May 2001.
> ... it will no longer mirror the defacements because keeping up with the volume and rate of hacks is too much work ...
So, yeah, it happened A LOT. It is, of course, a much different world now (although defacements still occur regularly).
Just make an abuse@ or security@ catch-all for your mail server and point all your domains' MX there. If you're worried about spam, build in a bounce-back that asks for a signed email to make it past the bounce. Security researchers can figure that out.
This is really bad advice. Always disable spam filtering outright on abuse@ and postmaster@, never filter them, never bounce anything. Read every one. I'm not saying hook those mailboxes up to automatically make a JIRA case on each inbound -- there is screening to be done. However, the idea of abuse@ is that spam is often forwarded directly to it if it's originating from you; if you are catching spam to abuse@ or postmaster@, well, do the math.
abuse@, postmaster@, security@ and friends are discussed in RFC 2142, with a couple "must" directives that are important:
> However, if a given service is offerred [sp], then the associated mailbox name(es) must be supported, resulting in delivery to a recipient
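Supporting those is a few lines in /etc/aliases (a sketch for a sendmail/postfix-style setup; remember to run newaliases after editing):

    # RFC 2142 role mailboxes, all delivered to real humans, unfiltered
    abuse:      secteam
    postmaster: secteam
    security:   secteam
    secteam:    alice@example.com, bob@example.com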
I recently ran a port scan of a subset of the internet on a single port. (To get an estimate of how many servers with a certain old protocol version were still running, as I wanted to drop it from a client I was working on.)
Beforehand, I had emailed the contact address published by the AS for each affected subnet.
Over four weeks, I got not a single complaint or request to be excluded.
Then I did the scan, and got tons of complaints – and once I pointed out the email I sent them, most got very quiet, but one continued to complain and tell me I should have tried harder to contact them.
My favorite ones to ignore are from a company who allegedly represents Viacom, Warner Bros., TNT, etc., keeping me up to date on what our customers are torrenting.
Well... the most recent time I've seen abuse@ mentioned, it was Hetzner looking at fake IPs on DDoS traffic and spamming abuse complaints at the wrong people. https://www.reddit.com/r/discordapp/comments/70dwa6/ip_banne...
It'd be cool if we could throw RADb, PeeringDB, RFC 2142 contacts, WHOIS data, the routing table, courtesy phones, spam blacklists, and all the other tools available to administrators in a blender and come up with a nice, centralized way to instant message the right person with root at another Internet company for any possible scenario from any possible identifiable resource involved in the Internet without having to possess the tribal knowledge you pick up after doing it a while. Such a facility doesn't line up with the decentralized nature of the Internet, which is why I think it hasn't happened yet. How would you talk people into using it, too? There's a critical mass problem. I've always wanted to build exactly that thing, but I fear nobody would use it.
Until that exists, it's important to watch your abuse mailboxes. Only point I'm making. The Internet moves fast.
If people have trouble contacting a business by email, they don't try to use email to fix the problem.
They very sensibly go to the web site and try by phone or Twitter or some other medium that isn't the one that isn't working.
Unfortunately, I'd guess that ~75% of the domains I try to contact about that don't have it. (And only slightly better with abuse@, for that matter.)
And then there are the shops who seem to believe that sending mail from bad addresses and publishing no public email addresses for anything is cool. I'm not fond of this solution, but have started considering them spammers and blocking them entirely. (I don't particularly care about them - if they want to send email to my systems they can put their big-pants on and join the rest of the responsible operators; I just don't like seeing services balkanizing like this.)
If you operate SMTP and I can't get through to you on postmaster@ ("yo, back off your retries," for example), I'm probably going to blacklist you. I'm not unique in the slightest. The decentralized nature of SMTP demands that I have a hotline to you, the MX operator, without having to navigate a Web site to maybe find you.
So I see no justification there for paying staff to manually filter postmaster@ spam.
Instead, we should have a central vulnerability repository that is open-source, standardized, public, and transparent. This repository should vet vulnerability reports, then contact the software owner prior to disclosure.
There are plenty of those. Generally, they take a cut of the bounty.