Security.txt (securitytxt.org)
705 points by janvdberg 46 days ago | 141 comments



I don't think this is fully baked.

Most importantly: there are no formal definitions for what these disclosure terms mean, the distinctions between them are significant, and the most important distinctions have nothing at all to do with "disclosure".

To me, as a working professional in this field, "disclosure: full" informs me that if I report a vulnerability in your service, you'll publish some sort of public advisory, or agree to me doing so, on my own timeline, regardless of when you're fully patched and regardless of whether you've confirmed or triaged the bug.

Is that exactly your understanding of "disclosure: full"? If not, there's your first minor problem.

But there are more important problems. Actually, I don't need any permission whatsoever to publish a vulnerability I find in your site, nor is there any real norm in the infosec community that I shouldn't ("responsible disclosure" is a meme propagated by vendors who want to control independent researchers). You can state your preference that I coordinate with you ("coordinated disclosure" is the researcher-friendly way to write "responsible disclosure"), but you can't dictate that to me --- and the suggestion that you could is presumptuous and rude.

But that's still not the big problem. The BIG problem is that disclosure policy has nothing to do with whether you can test.

There is a widespread belief on the US Internet that light web application testing of SaaS products is allowed. It is very much not. Without explicit permission to test a website I put on the Internet, your doing so violates (and probably should violate!) CFAA, and in some states also state statutes. That goes not just for disruptive testing, like seeing if you can break a SQL query, but also for stuff that most testers believe is harmless, like light XSS testing. I can't tell you how many times an XSS test vector "broke" a site for a client, for instance when it got cached as a "recent search" and displayed, in a DOM-breaking way, to every user on the home page for the site. Bring down a site like that without permission, and even if it says "disclosure: full", you might be civilly liable for the downtime.

I think the "disclosure" field here is thus pretty ill-advised. It advertises a policy that isn't super useful for researchers to know, and doesn't define the most important policy, which is "am I allowed to test this site and what are the rules of engagement for doing so".

What does that leave us with? A PGP key and the name of your security email address. The PGP key is useful, but the contact address should just always be "security@" anyways.

There's really no need for a standard format for this stuff. Just create a "security.html" or whatever that reminds people to send mail to security@, publishes your PGP (note to Symantec: public key) key, and gives people permission to test the site.
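Something like this would do it; a minimal sketch, with every name in it hypothetical:

    <!-- /security.html: no standard format needed -->
    <h1>Reporting security issues</h1>
    <p>Mail security@example.com. Our PGP public key is at
       <a href="/security-pgp.asc">/security-pgp.asc</a>.</p>
    <p>You have permission to perform non-disruptive testing of this
       site, per the <a href="/testing-rules.html">rules of engagement</a>.</p>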

Update

One of the authors DM'd me to say they're removing "Disclosure" from the next version of the draft, which is I think the right move.


> Actually, I don't need any permission whatsoever to publish a vulnerability I find in your site, nor is there any real norm in the infosec community that I shouldn't

Is there really no norm to at least try to contact the affected service before publishing an exploit? Seems like basic courtesy, though I understand some people are not super receptive.



Yeah that post makes a lot of sense. It seems like this would establish a norm of generally trying to coordinate with a vendor in many circumstances (if a web service is vulnerable, it's hard for users to fix it themselves, after all).

Might have misread what "norm" meant here


Regarding your analysis as to the actual usefulness of this spec - or lack thereof - I agree. There's not much substance or detail; the result seems like the bullet points of an initial PowerPoint presentation for such a concept, rather than the final output of a panel attempting to create a proposal meant to be seriously considered and adopted.

As for the rest of your comment... are you really a "working professional in this field"? By that I'm asking whether you are self-employed and playing fast and loose according to your own rules, or whether you instead have an established career within the industry, with professional peers who stand behind your methods? I have to believe that honest professionals with a shred of reputability in this field would not be advocating that playing nice, so to speak, is some altruistic gift on the part of the researcher. "Security Researchers", quoted to loosely include "rogue" grey and black hats who think they have free rein to hack in any way they see fit, have gone to jail for what you appear to be claiming is a risk-free "right". It's not black and white, and the courts seem to favour whichever side suits their fancy in each individual case.

It really bothers me to see anyone suggest that any public port on a machine is 100% free rein to abuse. If public-facing access is all it takes to be fair game, then I must be morally and legally in the right to pick the door locks of private citizens and businesses, or peel off the face of any ATM in an effort to "gauge its security". The loophole excuse is that the Computer Fraud and Abuse Act, in theory, only covers computers owned by the government and financial institutions. When all moral obligations are ignored, and skirting around lacklustre laws is the only defence, the intentions of some "researchers" quickly become questionable.

Your stance on the subject is probably fairly common in terms of what people want to be labelled as fair game, while in the real world researchers have to dial back a bit and play nice in order to maintain an ounce of respect in the field.


Read what he typed again; you didn't get it the first time. He was pretty clear, in CAPS LOCK LETTERS NO LESS, that the problem here is that this policy miscommunicates what can be done, as you put it: "It really bothers me to see anyone suggest that any public port on a machine is 100% free rein to abuse." He was very, very clear that he believes you have 0% right to do that, and that the system proposed in the OP would encourage people to do it. That was his BIG problem with it, so you completely misread what he said (or, I suspect, stopped halfway through, around his fifth paragraph).

His comment on responsible disclosure is about the other part of the industry: the one where you install a vendor's software on your machine, or where you analyze client code without involving their server. As a researcher you have zero responsibility to play nice with the distributors of that content. That's like saying movie reviewers can only print reviews of already-released movies on a schedule and in a way approved by the studios (certainly studios - and software companies - are allowed to make deals - contracts - with reviewers for embargoed, or even private, reviews of content before it's released).


> are you really a "working professional in this field"?

> I'm asking [...] if you [...] have an established career within the industry for which your professional peers stand beside your methods?

I'm not sure if you're already aware of tptacek's reputation in the field and you're hinting at something different, or if you're asking these questions in a more direct sense. If the latter, I'd recommend checking out his profile and quickly Googling.

> in the real world researchers have to dial back a bit and play nice in order to maintain an ounce of respect in the field.

Despite what I mentioned above, it may be that this is actually true; that Thomas' reputation gives him a certain level of immunity whereas most "normal" researchers would have to stick to a stricter level of etiquette.

All said though, I think SolarNet may also be correct that you seem to have misinterpreted at least some of Thomas' post.


> tptacek's reputation

Could you explain? I never really understood him as an individual & he's been... well, harsh at times to me & others.


> You can state your preference that I coordinate with you... but you can't dictate that to me --- and the suggestion that you could is presumptuous and rude.

I think you are in the minority on this. If the hacker community goes too far in this direction, don't be surprised if there are calls to rein it in with legislation.


If I find a bug on my own I can do as I please with it. I have no obligation to inform the vendor. I can write an exploit and publish it, share it with my friends, or sell knowledge of it to whomever I please. The vendor has zero right to my work or to dictate what I do with it.

There are few things more frustrating than dealing with a vendor that doesn't understand this.

On the other hand, if I find something while being paid to look then I must tell only the client and hope that they are responsible enough to disclose it instead of silently patching.

I don't think these are minority views.


> There are few things more frustrating than dealing with a vendor that doesn't understand this.

I think they do understand this very well; they just want to brainwash people into thinking it's a civic duty to do what they say. It's not. Ethical security researchers' responsibility is to the users, not to the vendors.


The line is crossed if you attempt to market and sell it for illicit use, or a prosecutor thinks they can demonstrate that you effectively did that. See the recent case about the guy who wrote remote system control software that then became a popular hacking tool / RAT.


Just to clarify, he was not charged for writing the RAT; he was charged for maybe wanting to sell an exploit in some fashion, possibly to a third party that wanted to sell it on.

IANAL, but the prosecution has to prove both that you intended to (or effectively did) market/sell it and that it was for illegal use - selling information about an exploit in e.g. Chrome is quite different if you try to sell it to the Chrome developers.


There's a right, and there's the right thing. A right is something where, if you go against it, you are violating the law and can be punished by the police, courts, etc. But the right thing is much wider - there are myriad things which are legal but not the right thing to do. The right thing is regulated by community norms and culture. Responsible disclosure is that kind of thing. It's not unlawful to skip it, but it is becoming more and more a community norm.

Now, people that try to legislate community norms - outside of obvious cases like prohibiting murder - usually make things worse. But ignoring community norms is also not a smart thing to do.


> If I find a bug on my own I can do as I please with it.

Only if your access to the vendor's systems wasn't conditioned on an EULA that you agreed to beforehand. I don't see a company being stupid enough not to put broad legal terms in the EULA prohibiting this sort of penetration.

But if you weren't given permission first (which may involve said EULA), then that must mean you're accessing without permission - which is illegal.


That argument doesn't hold up. If it did security researchers who publish findings in commercial products would all have been sued by now. Many countries have fair use exemptions to prevent copyright holders from engaging in the kind of abuse you describe.



Yeah, companies sometimes bully and intimidate, but all those links point at baseless lawsuits that didn't legally succeed.


> If it did security researchers who publish findings in commercial products would all have been sued by now.

They have the option - and some choose not to sue, for PR reasons and/or because they're unlikely to recoup anything from a researcher anyway, so they'd rather not spend the cost.


Good luck getting this adopted. A couple of months ago I was trying to responsibly disclose the complete exposure of every customer's name, email address, phone number and the last four digits of their credit card to a public QSR company that allows online orders. It was straightforward enough that I found it passively while trying to login.

It took over a week of me searching the website for a security page, trying to contact support on Twitter, emailing security addresses that turned out to be undeliverable, looking through my network for a contact, etc. Someone with a better network than me finally put me in touch with the right person in the security org, and the first response I received was, essentially, "Are you trying to sell us something? This seems manipulative. Why didn't you email our [undeliverable] address?"

As far as I'm aware, the vulnerability still hasn't been resolved. Here are the problems I see:

1. I don't know if this is a good implementation of this idea in the first place - how does this handle liability? I agree with what tptacek mentioned in this thread already: this seems underspecified, and companies looking to adopt this will want to have specific assurances with regards to liability and what's allowed. What exactly does "Disclosure Type: Full" mean?

2. If a company is not already in the tech trendy group that like to host security pages and use bug bounties, this is probably not even going to be on their radar. What is different about this that will appeal to them? How do you get a large, faceless organization which regards its security organization as more of a risk/compliance/continuity division than a technical software security division to adopt this standard?

To be clear, as someone who has gone through this song and dance several times, I absolutely would like to see improvements. But I don't know that this is the best way to do it. A more realistic standard might be a central tracker that maintains a list of key security contacts and their email addresses at various organizations. There are already lists like this floating around on GitHub, but they tend to be extremely out of date. Other than that it would be helpful to try and standardize a /security.html page with details about who and where to contact (instead of, say, a page that assumes every customer looking for a security contact mistakenly believes their accounts have been "hacked").


This is utterly maddening. I had a similar experience, except that at first they didn't believe me regarding the vulnerability. I basically said, "cool, I'm glad it's not a problem, so when I release the deets and write up a blog post you have nothing to worry about ;-)"

After they finally acknowledged the problem they started treating me like I was a criminal who attacked them. I strictly follow responsible disclosure, but it's crap like that that makes me want to reconsider sometimes.


Yup. They often claim you’re “malicious” to cover their incompetent butts. This is why reporting should be:

- end-to-end encrypted with a brand new, limited-time GPG key

- use a disposable email service

- make up an alias

- send from public WiFi at a coffeeshop some distance away that doesn’t have corporate CCTV

- Don’t bother with Tor or a VPN because it advertises “suspicious behavior” across network hops that may be monitored/logged

If you volunteer information about your identity, it can be easily misused to attack you in a myriad of legal, professional, social media and other dirty-tricks ways.


IMO, the problem with disclosure is one of the biggest problems with infosec right now...


> Good luck getting this adopted. A couple of months ago I was trying to responsibly disclose the complete exposure of every customer's name, email address, phone number and the last four digits of their credit card to a public QSR company that allows online orders. It was straightforward enough that I found it passively while trying to login.

If I may give a hint: Sometimes a good way to handle such issues is going through the media.

(I've handled such things in the past, you can mail me if you want, but I don't want this to be seen as self-advertisement. I guess there are plenty of other journalists covering IT security who are willing to handle such issues as well.)


I had 2 instances of this sort of thing back in... '99 or 2000. One in particular was a pretty blatant exposure in Ameritech's online phone bill viewer. (I think this was 2000?)

ameritech.net/viewmybill.do?foo=bar&x=y&sessionid=xpq82947wrwd&billid=8394810

Change billid to 8394811... you're seeing someone else's bill.

Tried to contact Ameritech for a couple of days... got nowhere. Had a friend with connections at a major news network, and sent some example links (should have sent screenshots?), but he waited too long to click and the session id had timed out, and he wrote back and said to stop wasting his time.

I ended up connecting with some consumer advocate with a passion against ameritech - he owned 'fuckameritech.com' and he posted details of my exploit (although... without naming me as the reporter - still not sure if I should have pressed for that or not), and he contacted a bunch of Chicago-area media... and... something like 45 minutes after he posted that day their entire 'customer portal' was down for about 4 days. When it came back up, the new URL was something like

  ameritech.net/viewmybill.do?foo=bar&x=y&sessionid=xpq82947wrwd&billid=8628AWIEQIUASDASPDQKLMCLKALMCNMQWEOUGI8761238762139ewrdsfEIURHFDSKJBDOSIDSKJBFNBOIKJDNSFKJNSDFISODSOFU8321270r75123670124sfsdhlbhfuasbyrlewcbhrdkjsfhdsfer78984y32hrfdj....etc
The bill ID was now something like 500+ characters long - probably a hash of something, but not as easily guessable at random. IIRC, some versions of either Netscape or IE had trouble with URLs that long, so whatever I was using, I needed to switch to a different browser.

If you're the guy who ran fuckameritech, thanks for helping get that out. :)


Also, a lot of companies/orgs deny vulnerabilities exist or even “shoot the messenger.” Reporting is often an inconsistent, chaotic gamble, which pushes more people to either not report the issue, give up or go black hat to monetize their findings.


Agreed regarding trying to get the information in front of the right people, or even just into the company. Recently been trying to contact 20-30+ companies regarding public exposure of private data, and I’ve had a real spectrum of results...

Some had a bug bounty/disclosure program, and fixed it within minutes. Some had CERTs... who never acknowledged the email. Some didn't have any public contact methods, so I had to hunt for them via Google. Some acknowledged the issue, removed the private data but didn't fix the actual ‘hole’...

On the whole, I spent more time trying to find contacts for websites than actually finding the issue(s).


It doesn't handle liability, and really nothing does, other than the company in question doing the right thing (e.g. running a bug bounty via a well-known third party and signing legal docs, or making public statements backed with documents).

Also, Occam's razor: a lot of websites are hell to navigate when trying to find the security reporting contact information. Why not make it super easy?


Hmm; I wonder, maybe in such a case, contacting some kind of national office which has the power to fine companies for breaching personal data protection laws could be an answer?


Should surely be /.well-known/security.txt, or if not, explain why not.

RFC 5785, "Defining Well-Known Uniform Resource Identifiers (URIs)"

[edit: I see the FAQ says that, while the linked Internet Draft says it goes at top-level like robots.txt]


I had only seen ".well-known/" used by Let's Encrypt, and wasn't aware that it was part of a larger (proposed) standard. Thank you for mentioning it.

For others who are interested, here's a detailed description: https://tools.ietf.org/html/rfc5785


A bit off-topic, but I don't understand the popularity of hidden directories for what should be human-visible files. Why be intentionally obscure rather than discoverable?


To avoid namespace collisions. It's not discoverable in any case because no one has a directory listing as their homepage.


Ahhh. That explains why lets encrypt does it!


There's also Netscape's (?) email autodiscover mechanism, with a .xml file served from there that describes how to do POP/IMAP/SMTP etc. However, despite /.well-known/ being a bit of a standard, you'd be amazed at how many mechanisms you have to support to get autodiscover to work for all email clients. As it turns out, SRV DNS records are also a bit of a standard for publishing pointers to services, and TXT records, and RANDOMLY_NAMED_RECORDS, because updating your server and resolver to support another record type is such fun.

I actually think we are almost there now: DNS to find it and /.well-known/ to describe it.


SRV records are extremely popular outside of the web. I really wish they would get more adoption. It's such a clean solution. We could have avoided the entire need for SNI if browsers supported service records.


I love SRV records but they are really just TXT with an API! PTR is A in a weirdly named domain ...

SNI avoidance is a weird one and you are probably right, but SNI is only important (hah!) if you consider that we should all be using IPv6 by now and have billions of addresses per host to play with 8)

The rest of the world gets uptight about TLDs, but there is one - relating to ENUM - that no one seems to mention and get upset about. If they did, instead of crapping on about whether Brazil or a US company owned "amazon" for example, then I'll give you (nearly) free telephone calls and a load of big Telcos will implode. Except they won't, and the world will move on to the next logical step where all internet connections are no longer considered a sub-function of a telephone connection and the notion of a "leased line" is an anomaly.

Whoops, got carried away there. As you say - SRV should get more love.


> they are really just TXT with an API

No, they're really not. An SRV record points to a hostname, not an arbitrary string. You can do lookups in one RTT, not two.
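For example (zone snippet, names hypothetical):

    ; SRV has structured fields: priority, weight, port, target hostname,
    ; and servers can return the target's A/AAAA in the same answer (one RTT)
    _imap._tcp.example.com.  3600 IN SRV 10 5 993 mail.example.com.
    ; TXT is an arbitrary string: parse it yourself, then resolve the host (two RTTs)
    example.com.             3600 IN TXT "imap=mail.example.com:993"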


https://securitytxt.org/.well-known/security.txt

Gives a 404

but

https://securitytxt.org/security.txt

Has the file... maybe they should either follow their own advice or update the text of the advisory?

As it is, it appears a bit inconsistent.


Also, the contact is a Twitter handle instead of an email; doesn't that break the whole point of this, which is being able to automate it to some extent?


Per the draft RFC, `security.txt` is meant to be akin to `robots.txt`, in that it lives at the root of the site.

So yeah, it's a bit inconsistent.


That is going to cause some problems with a very large percentage of webservers being configured to never allow the fetching of any dotfiles to avoid accidentally exposing stuff that wasn't meant to be in the web tree in the first place.


/.well-known/ is defined in RFC5785 and paths under this prefix can be registered with IETF. For example, /.well-known/acme-challenge/ is used by ACME, the domain verification protocol used by Let's Encrypt.

https://www.ietf.org/rfc/rfc5785.txt

It should not be too hard to make an exception for this path in your web server configuration. Also note that URLs do not have to match the local filesystem, so you could as well use an alias to a different local path.
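For instance, with nginx you could carve the well-known prefix out ahead of the usual dotfile block (a sketch; paths hypothetical):

    # serve /.well-known/ while still blocking other dotfiles
    location ^~ /.well-known/ {
        alias /srv/well-known/;   # files can live outside the web root
    }
    location ~ /\. {
        deny all;                 # the usual dotfile protection
    }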


Seems to be fixed?


It is now present at both urls... well, at least it is there :)


Probably symlinked for backward compatibility while they modify their RFC to make it consistent.


This is completely useless. Any company that cares about security enough to do this will already have security@, support@, abuse@ emails setup. Those are the well known contact endpoints. Bug bounty platforms also have contact info and messaging built-in.


I agree that any company that cares about security already has the emails set up; however, I don't think it's completely useless. So many companies really don't know what they don't know regarding security. Easily digestible steps can be helpful. But I agree that in practice 95% of companies won't give a crap about this and therefore won't do anything with it.


what RFC describes that standard?


Well, common sense for one, but also https://www.ietf.org/rfc/rfc2142.txt section 4!


cf. RFC2142


If you have a disclosure policy, chances are googling "company name + disclosure" will bring it up. I don't really see the benefit of standardising this: it's not going to help companies that previously had no disclosure policy decide to implement one, and "discoverability of existing disclosure policies" doesn't really seem like a big issue.


The more you use a Google feature as a crutch, the less the web is semantic.

Also, Google has increasingly made it difficult to automate searches, so the effort required to notify multiple websites increases quickly.


> Also, Google has increasingly made it difficult to automate searches

That's an understatement. Doing something as simple as

    inurl:"humans.txt"
...and consuming the first three pages showed me a CAPTCHA. It was done via a browser, manually. Not all links were clicked.


Yep, I run across this often. Kind of funny that using a bit more advanced functionality instantly makes you suspicious.


A few years ago, my gmail account was suspended for suspicious activity after I sent two messages to verify that my own email server was working following some updates & configuration changes.

Since then, it's been hard to justify using gmail for anything serious.


> ... the less the web is semantic.

Yes, predating this and .well-known, the principle exists as https://en.wikipedia.org/wiki/Semantic_URL

There was a website way back in the 90s which encouraged a standardisation of paths such as /about, /products, /services etc.


Enables automated vulnerability disclosure.


Nobody wants this. Automated disclosure means getting even more automated-scan reports. Total waste of time.


The data behind this is not machine-readable anyway; what's the gain here?


A section for acknowledgements?

Also, the Contributions AKA "these random accounts from twitter may have participated in the project to some degree, but we don't want to disclose their names or contributions" section of the website made me smile.


According to the IETF draft, the acknowledgements directive is meant to point to a page thanking people who have discovered and reported vulnerabilities in the past.

Doesn't seem very useful to me. Since the document supports comments, just put that link in a comment.


This entire page reads as a stream of consciousness.

This one line is particularly egregious:

"Security.txt defines a standard to help organizations define the process for security researchers to securely disclose security vulnerabilities."

> Defines an X to help Y define a Z for Q to W.

> Securely disclose security vulnerabilities.

Ick.


This site is horribly complex for creating a plain-text file with four fields. Perhaps the complexity comes from having 10 contributors.

_Not everything needs to be a large project._ A template right on the webpage would have the same value. That raises the point that a blog post about the RFC would have greater utility.


There are people out there who spend the time and effort to create a good looking site, even for something as simple as generating a plain text file with 4 fields. Your comment would have greater utility if it was actually about the subject of the site itself, and not about how it was designed.


That's a fair point.

In practicality, I don't see a need for this .txt file to be present. There are other .txt files, such as humans.txt, that don't have widespread adoption due to the lack of value they generate.

While there are many issues inherent in reporting vulnerabilities, this .txt file will not solve them. Not only is the file's existence error-prone from a maintenance perspective, but my experience has shown me that such a file is not needed.

Due to these factors, I believe that the existence of such a file is unwarranted complexity without a corresponding level of benefit.


I think humans.txt (http://humanstxt.org/) fits more, and already exists.


> It's an initiative for knowing the people behind a website. It's a TXT file that contains information about the different people who have contributed to building the website.

Not sure why you linked that, it doesn't fit at all. They're solving completely different problems.


Neither are solving any actual problems.


I've never heard of this. I wonder how many websites implement it. I tried a bunch of sites, but the only one I could find that had a humans.txt file was Google:

Google is built by a large team of engineers, designers, researchers, robots, and others in many different sites across the globe. It is updated continuously, and built with more tools and technologies than we can shake a stick at. If you'd like to help us out, see google.com/careers.


We do: https://esharesinc.com/humans.txt

It's part of every new engineer's first day or two. Their first shipped PR is adding themselves as a human.



I found this as part of a user agent in logs for my webserver: https://disqus.com/humans.txt

And going over my logs for the past year, I only found 6 requests for 'humans.txt' on my server.



I do! https://redfern.me/humans.txt

It should really be human.txt...


A redirect to LinkedIn definitely isn't a proper implementation.


I agree. It was meant to be a bit... cheeky. Honest question though, is your issue with the fact that LinkedIn isn't plain-text, or the nature of the information that LinkedIn provides?


Neither. If any of us curl/wget/whatever that file, we gain no usable, plain-text, human information. We then have to manually notice the redirect and take a manual action to follow up, and then those two further objections would apply. Why implement a text-file spec that doesn't serve a text file? Even if all your text file contained was "Please see http://linkedin.com/example", it would be a correct implementation.


I was thinking the same thing. It seems the security team would be made up of humans :)


Huh, apparently GitHub redirects theirs to their (HTML) about page.


Nice idea, but a little inconsistent. Even their own site doesn't comply with their own RFC.


Yeah, the RFC seems very vague too.

I could see some value in meta-data for automated reporting of security issues.

But the spec would have to be less informal.


So when a site is hacked, the hackers replace the email and public key in this file with their own (or one that goes to /dev/null), and the site's owners are never informed when someone else notices the breach and tries to use this file as intended.

There should either be a well-known convention (like security@, as others have mentioned), or an external public registry of this sort of thing.


Because it's a standardized text file in a standard location, it's quite easy to add a black box monitor of the file and alert if the values change. If anything, mucking around with the file is the last thing hackers want to do, because it would tip off the operators that there had been a breach, that the operators otherwise would not have noticed.
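A cron-able sketch of such a monitor, with a placeholder URL and hash:

    # alert if the published security.txt ever changes
    import hashlib, urllib.request

    URL = "https://example.com/.well-known/security.txt"  # hypothetical site
    KNOWN_GOOD = "..."  # sha256 hex digest of the file as deployed

    body = urllib.request.urlopen(URL, timeout=10).read()
    if hashlib.sha256(body).hexdigest() != KNOWN_GOOD:
        print("ALERT: security.txt changed - possible tampering")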


Good points, though those should be spelt out in the original document/site


Whether or not such a file makes sense is another question, but I find the mentality of the "Disclosure" field quite puzzling. It is not up to an affected party to decide what disclosure terms apply. After responsibly disclosing an issue to a site, a security bug finder should disclose it to the public, whether the affected site likes that or not.


I seem to remember suggesting jobs.txt as a listing for a company’s open positions. Someone even made a site promoting the idea.

What I mean to say is that the concept of a domain is much deeper than html, and we use it so shallowly. The web is still distributed and simple, if we want it back.


That's interesting. I love the idea of sites exposing various types of data through plain text files with well-known names. Kind of like (a vastly simplified) RSS feed, but for whatever. Or something vaguely like Gopher sites, back in the day.

But then, the company would want to brand it, and the marketing team would want to track views, and the sales team would want some kind of call to action, and the product managers would want to make it cross platform and internationalized. And suddenly we're back at something that looks like HTML. sigh


Can you expand more on this? What do you mean by “much deeper than html”?


He means if you have a domain you can pretty much serve whatever you want off of it.


Maybe html was the wrong phrase. I mean the distributed nature as typified by everyone having their own domain - with attached storage. This was the default back in the dial-up days (side thought: Apple could bring that back - iCloud is nice as storage and all, but how much would it take to go from iCloud to paulsdomain.com/photojournal? That's almost a Facebook killer, and at some point it's much, much easier to distribute).


I can't seem to find a definition of "Partial disclosure" on either the website or the IETF RFC draft.


Yeah, I was looking for the same. All the draft [1] says is:

    2.5.  Disclosure:

    Specify your disclosure policy.  This directive MUST be a disclosure
    type.  The "Full" value stands for full disclosure, "Partial" for
    partial disclosure and "None" means you do not want to disclose
    reports after the issue has been resolved.  The presence of a
    disclosure field is NOT permission to disclose vulnerabilities and
    explicit permission MUST be saught where possible.
In contrast, the actual generator tool on the website uses a URL (https://example.com/disclosure.html) as a placeholder, which doesn't comply with this section.

[1] https://tools.ietf.org/html/draft-foudil-securitytxt-00#sect...
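For contrast, a file following the quoted section would presumably look something like this (values illustrative):

    Contact: security@example.com
    Encryption: https://example.com/pgp-key.asc
    Disclosure: Full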


Explicit permission must be saught (sic)? How does that work?


It doesn't even make sense if you assume the terms are defined, because disclosure obligations are bilateral. If I'm reporting a bug to your site because I've found a new ImageMagick vulnerability, it is more likely that I, the reporter, want an embargo from you, the site operator, than the other way around.


If you're considering this, please consider responsible disclosure steps. Here are ones I use with consulting clients.

https://github.com/joelparkerhenderson/responsible_disclosur...


Please consider replacing the term "responsible disclosure" with "coordinated disclosure", and revise your steps accordingly. Non-coordinated disclosure isn't necessarily "irresponsible", and the suggestion that it is is frowned upon among serious testers, which are presumably the ones you want to attract with disclosure policy.

And it's worth remembering that any kind of disclosure "policy" is a request for a favor from the researcher, so it's good to word things accordingly. You wouldn't generally ask for a concession (like honoring an embargo on publicly reporting a finding you took time to generate) right after also "asking" the reporter to report "in good faith".


Done. Thank you for the advice, and all your security work.

https://github.com/joelparkerhenderson/coordinated_disclosur...

If you have anything more you want in it, please let me know.


Isn't responsible disclosure the aim here? I don't think substituting coordination for responsible is a sensible strategy for your project.

The coordination is inherent in the fact that they are honouring your disclosure policy. The word coordinated is redundant.

Objectively, it should be called a "security disclosure" policy.


You make good points.

How would you improve it to clarify the aim is to be generally useful for both sides?

For example, when I personally discover a security issue, I want to be able to report it to a company, and also include a link to this doc, and ask "Here's how I suggest we interact and why; what do you think?".


Once again, "responsible disclosure" is a term of art in the industry, and it communicates something he probably doesn't mean to communicate.


This is just "feel-good" bullshit that doesn't actually solve any real problem.

In my life, I have only reported two different vulnerabilities to two different vendors.

One of them didn't care at all. They were transmitting usernames and passwords in plaintext and I actually showed one of their engineers this, live, in person, and he just shrugged.

The other one told me something to the effect of "yes, this is very serious. We'll look into it right away" and actually fixed it... 3 years later.

Your biggest hurdle isn't reporting vulnerabilities. That's the easy part.

The biggest hurdle is getting someone to care.


> In my life, I have only reported two different vulnerabilities to two different vendors.

Interesting. I don't think this is for you.

Ever have the task of needing to report a security issue to 10k sites and wish you could have any hope of automating it?

I certainly haven't. More like 100k!

Don't be so negative when people try to do good.


You needed to contact 100k sites about security? Do you have more details on this?

The example security.txt for this site currently has a twitter handle as the contact. How are you going to automate that?


One may want to notify many website owners at once that it’s a good time to apply a patch in response to a security vulnerability affecting a technology they are using. Ex: WordPress, node.js, MongoDB, etc.
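With a standard location, that kind of outreach is scriptable; a sketch, domains hypothetical:

    # collect security contacts for a list of affected sites
    import urllib.request

    domains = ["example.com", "example.org"]  # sites your scan flagged
    for d in domains:
        url = "https://%s/.well-known/security.txt" % d
        try:
            txt = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
            contacts = [line.split(":", 1)[1].strip()
                        for line in txt.splitlines()
                        if line.lower().startswith("contact:")]
            print(d, contacts or "file present, no Contact line")
        except Exception as exc:
            print(d, "no security.txt:", exc)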


There are 100s of millions of sites; who's going to crawl all of them and send out alerts? How would you know what backend tech is being used?

Major risks and CVEs are already published in the appropriate news channels, which is far more efficient and effective.


It should be, but it isn't, and more importantly, there's lots of critical infrastructure out there that is highly vulnerable.


Who is running critical infrastructure that you can't easily reach right now? Are you saying you know of a major CVE that has no coverage?


> You needed to contact 100k sites about security? Do you have more details on this?

There's a number of people out there anonymously helping others close their holes. Some holes can be easily scanned.


> One of them didn't care at all. They were transmitting usernames and passwords in plaintext and I actually showed one of their engineers this, live, in person, and he just shrugged.

Here, now that I've shown you that, let me show you the MITM I've been running during the conference! So far I've collected NNN credential sets


Suggested for policy inclusion in the CNA (CVE Numbering Authority) guidelines. And yes, I would suggest they remove the disclosure field; it's not really all that germane for the vendor, from the security researcher's point of view.

Edit: forgot link: https://github.com/CVEProject/docs/issues/53


Here's my interview with Ed on the IETF proposal: https://www.bleepingcomputer.com/news/security/security-txt-...


how about just putting the right address in the whois information??? >.> ... if you're gonna put it on the public internet anyhow, you might as well use the age-old mechanism already in place to do this...


I don't understand why businesses use private WHOIS services. Surely if you're a business you'll have public contact information? I was once tasked with trying to get in touch with a company whose domain name had expired, but with no website (read: contact page) and no real email address in the WHOIS, I had to give up.


I've only done security in SaaS in my career. This text file is pretty useless. Disclosure is very out of place, and indicates to me that it's pretty much just for companies with a bug bounty.

I'm a bit negative, but I don't want more useless emails from security researchers. Companies that do security well will adopt this, and are already easy to get ahold of...


Should have a section for which software license the website code is under

Also for any runtime JavaScript

And it should also specify how private data is stored

And how private data is collected, processed and for how long it is retained


> Should have a section for which software license the website code is under

LibreJS already has standardised ways for websites to report this, which are far more flexible and useful.


I smell an opportunity for spam bots.


A bit sad that it does not allow for the file to be signed with OpenPGP.


What precludes you from including a comment with a link to the signed version of the file with ".asc" appended to the filename?
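I.e. the file itself could carry something like (path hypothetical):

    # Signed copy of this file:
    # https://example.com/.well-known/security.txt.asc
    Contact: security@example.com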


Nothing, but then it's an extension to a spec for a five-line plaintext file. Seems quite wasteful.


The first thing every attacker will do: replace security.txt


Attackers don't do things that overtly "change" things... especially something that is visually apparent on public facing websites...

Changing things is how you get detected.

Much like that recent thread where John Kelly brought his phone in to IT because it wouldn't update and they found malware. That's an obvious failure in any "APT" hacker's guide.

This isn't like modifying .bash_history to hide malicious behaviour (or worse removing command auditing entirely when it's a potential company/personal policy to activate it on all machines).


> Attackers don't do things that overtly "change" things... especially something that is visually apparent on public facing websites...

Yet, for many, many years, that was probably the most common thing that attackers (including a much younger, teenaged version of yours truly) did. At that time, however, "defacing" web sites was done mostly for bragging rights and without malicious intent. In many (most?) cases, the original index.html page would be copied/backed up and the "new" web page (created haphazardly using vi, perhaps) would replace or augment it.

For a long time, there were even mirrors [0] of these defaced web pages. attrition.org ran one from 1995-2001 and, for much of that time, even ran a mailing list where they would announce these "defacements" -- often, immediately after they occurred (after being tipped off by the attackers). It was pretty common to be able to view the "still defaced" site while it was still live (before being taken down and/or restored). As a frame of reference, attrition.org stopped mirroring these sites in May 2001 [1].

> ... it will no longer mirror the defacements because keeping up with the volume and rate of hacks is too much work ... ([1])

So, yeah, it happened A LOT. It is, of course, a much different world now (although defacements still occur regularly).

[0]: http://attrition.org/mirror/

[1]: https://www.computerworld.com/article/2582627/security0/attr...


Heh, I thought the domain ending was "txt" and was trying to register a domain with ".txt" before I realised that it doesn't exist, and that it's securitytxt.org and not security.txt! Silly me


....because security teams have great working relationships with all the owners of all the content platforms in an org...

Just make an abuse@ or security@ catch-all for your mail server and point all your domains' MX there. If you're worried about spam, build in a bounce-back that asks for a signed email to make it past the bounce. Security researchers can figure that out.


> If you're worried about spam, build in a bounce-back

This is really bad advice. Always disable spam filtering outright on abuse@ and postmaster@, never filter them, never bounce anything. Read every one. I'm not saying hook those mailboxes up to automatically make a JIRA case on each inbound -- there is screening to be done. However, the idea of abuse@ is that spam is often forwarded directly to it if it's originating from you; if you are catching spam to abuse@ or postmaster@, well, do the math.

abuse@, postmaster@, security@ and friends are discussed in RFC 2142, with a couple "must" directives that are important:

> However, if a given service is offerred [sp], then the associated mailbox name(es) *must* be supported, resulting in delivery to a recipient

Emphasis mine.

https://www.ietf.org/rfc/rfc2142.txt


So much this.

I recently ran a port scan of a subset of the internet on a single port. (To get an estimate of how many servers with a certain old protocol version were still running, as I wanted to drop it from a client I was working on.)

I previously messaged the contact email given by the AS of every affected subnet.

Over 4 weeks, I got not a single complaint or request to be excluded.

Then I did the scan, and got tons of complaints – and once I pointed out the email I sent them, most got very quiet, but one continued to complain and tell me I should have tried harder to contact them.


I work for an ISP and mails for abuse@ and friends end up in my mailbox. Occasionally, there's a "legit" one that I need to look into but the majority of them are automatically-generated reports that some IP address in our netblocks port scanned them or something stupid. The first one gets ignored, the second one usually gets a response from me to the effect of "please stop sending us these automated notifications", and the third one gets the sending mail server's IP address added to my local blacklists.

My favorite ones to ignore are from a company who allegedly represents Viacom, Warner Bros., TNT, etc., keeping me up to date on what our customers are torrenting.


> if you are catching spam to abuse@ or postmaster@, well, do the math.

Well... the most recent time I've seen abuse@ mentioned, it was Hetzner looking at fake IPs on DDoS traffic and spamming abuse complaints at the wrong people. https://www.reddit.com/r/discordapp/comments/70dwa6/ip_banne...


Hetzner is a long-running social experiment to frustrate other Internet network administrators trying to hail it on all frequencies, so that sort of thing doesn't surprise me. We should all be better actors on the Internet, as this thread underscores, and it's just too bad you sound like you're wearing a sandwich board when you advocate for it. The incentives don't line up with running a business, much like with IPv6 and BCP38.

It'd be cool if we could throw RADb, PeeringDB, RFC 2142 contacts, WHOIS data, the routing table, courtesy phones, spam blacklists, and all the other tools available to administrators in a blender and come up with a nice, centralized way to instant message the right person with root at another Internet company for any possible scenario from any possible identifiable resource involved in the Internet without having to possess the tribal knowledge you pick up after doing it a while. Such a facility doesn't line up with the decentralized nature of the Internet, which is why I think it hasn't happened yet. How would you talk people into using it, too? There's a critical mass problem. I've always wanted to build exactly that thing, but I fear nobody would use it.

Until that exists, it's important to watch your abuse mailboxes. Only point I'm making. The Internet moves fast.


This is great information! Slightly off topic, but I’m curious about your username. Why have you kept on using a throwaway for all of these years?


Reading all mail to postmaster@ has been utterly impractical for many years.


If you're one person with a vanity domain, maybe. If you're running a business that operates mail, it's mandatory.


abuse@: yes. postmaster@: no.

If people have trouble contacting a business by email, they don't try to use email to fix the problem.

They very sensibly go to the web site and try by phone or Twitter or some other medium that isn't the one that isn't working.


I always try postmaster@ first, when reporting mail problems.

Unfortunately, I'd guess that ~75% of the domains I try to contact about that don't have it. (And only slightly better with abuse@, for that matter.)

And then there are the shops who seem to believe that sending mail from bad addresses and publishing no public email addresses for anything is cool. I'm not fond of this solution, but have started considering them spammers and blocking them entirely. (I don't particularly care about them - if they want to send email to my systems they can put their big-pants on and join the rest of the responsible operators; I just don't like seeing services balkanizing like this.)


postmaster@ exists for many, many more reasons than non-deliverability. Again, multiple RFCs dictate its existence and management. Advice to the contrary demands a better rationale than you're providing, and is ill-advised.

If you operate SMTP and I can't get through to you on postmaster@ ("yo, back off your retries," for example), I'm probably going to blacklist you. I'm not unique in the slightest. The decentralized nature of SMTP demands that I have a hotline to you, the MX operator, without having to navigate a Web site to maybe find you.


Deliverability problems may be caused by misconfigured spam filters. Over-aggressive retries won't be.

So I see no justification there for paying staff to manually filter postmaster@ spam.


As an operator of many websites... I don't want more emails from "Security Researchers" who want $100 to tell me the obvious such as, "Login form should be rate-limited" or "emails can be enumerated using password reset form" and the like...

Instead, we should have a central vulnerability repository that is open-source, standardized, public, and transparent. This repository should vet vulnerability reports before contacting the software owner prior to disclosure.


On the flip side, such a vulnerability repository may not have incentives aligned with either the bug reporter or the software owner. See https://palant.de/2017/10/04/observations-on-managed-bug-bou... for a recent post about that sort of problem.


> Instead, we should have a central vulnerability repository that is open-source, standardized, public, and transparent. This repository should vet vulnerability reports before contacting the software owner prior to disclosure.

There are plenty of those. Generally, they take a cut of the bounty.



