It was a massive waste of time for me to engage with a group of "unknowns" (people with no prior relationship to me) who now had a channel to fast-track into my inbox and get my attention.
The volume was anywhere from 5-20 emails per week, which is a lot if you need to work through the points in every mail without any idea whether the entity behind the poor-quality report is even skilled. So you start in good faith with the report they gave you; you tell yourself that your own time analysing it is justified: "they're probably competent", "they know what they're doing, otherwise they'd have a different job", "maybe somebody will get lucky this time", or, depending on how many bad reports you've already read, you might be down to "even a blind chicken finds a kernel of corn every now and then".
It seems I ended up having endless discussions with people who automated the whole thing: they crawl the web for the /.well-known/security.txt URI and, if they find it, automatically fire up Metasploit or Burp Suite and then send you the canned report, asking you to fix these "serious problems".
Yet if you quiz any of these "researchers" about individual items in their canned reports, you get nothing but blank stares, incompetence, and attempts to weasel out: "but Burp Suite says that it is an error and you should correct it", ...
Initially I went through every email patiently. I tried to engage these guys on why they strongly felt these were vulns or 0days (LOL). I knew what they reported was canned, and they never questioned the context or the errors themselves. None of them were people I would hire or trust to give me good advice. And the advice wasn't always just bad: often they have a whole consulting company sitting behind them that will not only "fix your problems" but also migrate your whole stack to Drupal or WordPress or some such nonsense.
If I look at the other end of the spectrum (infosec memes and "thought leaders" on Twitter), I see people complaining that some customers just don't know how bad their security is and even dare to ignore their reports or, worse, question their authority on whether something is a bug or not. The whole thing is the deaf leading the blind. I get what security.txt was trying to improve, because there is/was a real problem with finding a point of contact. But I do not think security.txt is in any way useful, and it's a total waste of time and money for small and big companies alike. (If you're bigger you get more attention = more reports, but that doesn't improve quality; it only adds more work on your end, because now you have N people, instead of 1 or 2, discussing these constant "non-problems".)
On a related note, I often feel uncomfortable reading about "responsible disclosure", and find "coordinated disclosure" to be a much more balanced term in the context of disclosing security vulnerabilities.
It seems that the underlying problem is that those who do good work in this space don't scan the web to find new customers/leads and pitch their services in shambolic ways. The skiddies who want to make a quick buck will always outnumber the good ones, who might have ended up on your site by accident (because they like your product, etc.).
The noise-to-quality ratio of the whole approach is just too high for this to work well in practice. I'm still waiting for the recruitment industry to catch up and use security.txt as a sink for people who want to be added to a list of experts to be contacted when "the company is ready to do a full security assessment post-MVP". I realize this would be fraudulent and I'm not advocating for it; I'm just saying that fake job offers aren't uncommon either, so this is just a question of time.
There were a handful of genuinely good contributors, but they accounted for probably under 10% of reports.
I conjecture that with these drive-by style reports you won't see a huge fluctuation in volume, and traffic might not have a big effect. I have no proof of this, but if my hunch is right, there is a limited number of bad actors operating in this market. And if they all copy their model from one another, then the report volume should be similar regardless of how popular your domains are. Would be cool to have some data on this.
Last year, I discovered a severe security flaw on a couple dozen websites, and just communicating it was super painful. I can't just e-mail someone at firstname.lastname@example.org; I'd usually have to send an e-mail along the lines of "Can you give me a contact for your admin/security guy? I have something here and I can't quite disclose it just now." The response rate was extremely low.
If you just give me a security.txt, at least I know I can disclose something and I have some level of certainty that the e-mail will be read.
I'm willing to bet that 10 years after this is standardized the percentage of HTTP/S domains hosting security.txt files will be around 1%. And that 1% will be websites that are mostly already doing the right security things.
That's what spammers, scammers, and salespeople do. Just describe how to reproduce the issue. Most of the time they will forward it to the right person. Don't expect to hear anything back from them.
Knowing that someone has been inside your house (server) and sneaked around is very uncomfortable, so don't expect even a thank you.
Like this: https://kloudtrader.com/security
And on the other hand: what's with this domain-specific metadata? If semantic metadata were the issue, why not solve it more generally with RDF or something that's part of a greater, more uniform solution that people actually deploy and use? All these little extra files and custom formats create a hodgepodge of muck.
If anything, downloadable artifacts, and contact information especially, need to be annotated (through Metalink or RDF) with public keys and/or cryptographic hashes where available. And what about a standard for contact phone numbers given a website? The list of metadata ideas is endless, hence the need to complicate the web as little as possible and to do it thoughtfully, in a standard manner.
It's nice when a website is well-laid out and everything is where you expect, but having semantic metadata in a mechanically-queryable form is even more valuable... a data "API" for assembling public information uniformly reduces work for almost everyone.
Well, that's what's in a security.txt file, basically: a contact address and an encryption key.
> And what about a standard for contact phone numbers given a website?
Use a tel: URI in the Contact field.
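For example, a minimal security.txt could carry both an e-mail address and a phone number (the field names are from the draft; the example.com values and the phone number are placeholders):

```
Contact: mailto:security@example.com
Contact: tel:+1-201-555-0123
Encryption: https://example.com/pgp-key.txt
```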
By putting this in a non-trivially discoverable (yet well-known) location, it should cut down on exactly the kind of noise that makes it hard to keep security vulnerability reporting channels open and clear.
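A minimal sketch of that discovery convention in Python (the helper name is mine, not from any spec): because well-known resources live at a fixed path on the origin, a reporter only needs the origin, not any particular page:

```python
from urllib.parse import urljoin

def security_txt_url(page_url: str) -> str:
    # Per RFC 8615, well-known resources live under /.well-known/ at the
    # root of the origin, so the page's own path is irrelevant.
    return urljoin(page_url, "/.well-known/security.txt")

print(security_txt_url("https://example.com/blog/some-post"))
# https://example.com/.well-known/security.txt
```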
Any new or relevant updates?
The diff: https://tools.ietf.org/rfcdiff?difftype=--hwdiff&url2=draft-...
Mattias Geniar has a good write-up on it.
The RFC says:
> 2. Why /.well-known? It's short, descriptive, and according to search indices, not widely used.
This gives reasoning for the name of the subpath, but not its existence.
This would also answer the question I've had for so long as to why Wikipedia chose to have /wiki/ before article titles in their URLs. I guess it was so that the article https://en.wikipedia.org/wiki/robots.txt did not collide with their robots.txt file.
Would've been nice if that explanation were included in the RFC.
I still would like to know the backstory around this folder, just like OP.
I count 6460 sites on the linked list, and loads of these are subdomains on tumblr.org, so I wonder what the number of distinct registered domains using security.txt actually is...
No need to overcomplicate this.
.well-known is a terrible idea, by the way; a <link rel> or <meta> would make much more sense.
HTTP/1.1 200 OK
Link: <...>; rel="next"; title="next image in album"
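A sketch of what that alternative could look like in a page's <head>; note that "security-contact" here is a hypothetical rel value, not a registered link relation, and the meta name is likewise made up:

```html
<head>
  <!-- hypothetical rel value; no such link relation is registered -->
  <link rel="security-contact" href="mailto:security@example.com">
  <!-- hypothetical meta name pointing at a security policy page -->
  <meta name="security-policy" content="https://example.com/security">
</head>
```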
A better link, from stedaniels further down in the comments: https://ma.ttias.be/well-known-directory-webservers-aka-rfc-...