Security.txt (2017) (securitytxt.org)
161 points by 1nvalid 34 days ago | 54 comments



I understood and welcomed the initiative when it was first discussed. Meanwhile I've implemented it on a couple of domains and had it running for all of 2018. Last month I removed it all again (the sites still have a responsible disclosure link, but not at a standardized URI).

It was a massive waste of time for me to engage with a group of "unknowns" (with no prior relationship) who now had a channel to fast-track into my inbox (and get my attention).

The volume of mails was anywhere from 5-20 emails per week, which is a lot if you need to go through the points in every mail but have no idea whether the entity behind the poor security report is even skilled. So you start in good faith with every report they give you (e.g. you tell yourself that your own time analysing it is justified: "they're probably competent", "they know what they're doing, otherwise they'd have a different job", "maybe somebody will get lucky this time", or, depending on how many bad reports you've already read, you might be down to "even a blind chicken finds a kernel of corn every now and then").

It seems I ended up having endless discussions with people who automated the whole thing: they crawl the web for the /.well-known/security.txt URI and, if they find it, automatically start up Metasploit or Burp Suite and then send you the canned report while asking you to fix these "serious problems". Yet if you quiz any of these "researchers" deeper about individual items in their canned reports you get nothing but blank stares, incompetence and attempts to weasel out: "but Burp Suite says that it is an error and you should correct it", ...

Initially I went through every email patiently. I tried to engage these guys on why they felt so strongly that these were vulns or 0days (LOL). I knew what they reported was canned, and they never questioned the context or the errors themselves. None of them were people that I would hire or trust to give me good advice. And the advice wasn't always just bad: often they have a whole consulting company sitting behind them who not only "fix your problems" but also migrate your whole stack to Drupal or Wordpress or some such nonsense.

If I look at the other end of the spectrum (infosec memes and "thought leaders" on Twitter) I see people complaining that some customers just don't know how bad their security is and even dare to ignore their reports, or worse, question their authority on whether something is a bug or not. The whole thing is the deaf leading the blind here. I get what security.txt was trying to improve, because there is/was a real issue with finding a point of contact. But I do not think security.txt is in any way useful, and it's a total waste of time and money for small and bigger companies alike. (If you're bigger you get more attention = more reports, but that doesn't improve quality; it only adds more work on your end, because now you have N people (instead of 1 or 2) discussing these constant "non-problems".)


Reminds me of all the “dubious security vulnerability” stories on the Old New Thing blog, e.g. https://blogs.msdn.microsoft.com/oldnewthing/20111215-00/?p=... where MSFT needs to investigate obviously bogus “security problems” just to be sure they didn’t miss anything important.


This is a rather common tactic: they spam out automated security reports and request payment for their "services" from those who reply and engage further with them.

On a related note, I often feel uncomfortable reading about "responsible disclosure", and find "coordinated disclosure" to be a much more balanced term in the context of disclosing security vulnerabilities.


What if it were required to encrypt the message? Do you think the amount of spam would go down?


I haven't explicitly tried to enforce encryption, but the drive-by style reports would probably require extra steps that their automation might not handle. So probably a good first filter. But then I'm still no wiser, since the ability to use PGP isn't a qualifier regarding the knowledge of the engineer or the quality of their report.

It seems that the underlying problem is that those who do good work in this space don't scan the web to find new customers/leads to pitch their services to in shambolic ways. And the skiddies who want to make a quick buck will outnumber the good ones who might accidentally have ended up on your site (because they like your product, etc.).

The noise/quality ratio in the whole approach is just too big for this to work well in practice. I'm still waiting for the recruitment industry to catch up with the practice and use security.txt as a sink for people who want to be added to a list of experts that will be contacted when "the company is ready to do a full security assessment post-MVP". I realize this would be fraudulent and I'm not advocating for it; I'm just saying that fake job offers aren't uncommon either, so this is just a question of time.


What you're describing is a lot like the bug bounty program I ran for a previous employer. It was mostly low-effort scans and "reports" templated from something a big company had made public once. No understanding of whether not using HSTS was actually a vulnerability, just the expectation of burp -> report -> $$$.

There were a handful of genuinely good contributors, but they accounted for probably under 10% of reports.


If you don't mind sharing, what kind of ballpark amount of traffic were you getting for those domains that generated that many emails per week on the subject?


Just north of 500K visitors/day (that's across all sites, but they were all operated by the same org and in one specific industry niche).

I conjecture that with these drive-by style reports you won't see a huge fluctuation in reports, and traffic might not have a huge effect. I have no proof of this, but if my hunch is right then there is a limited number of bad actors operating in this market. And if they all operate by copying their model from one another, then the report volume should be similar regardless of how popular your domains are. Would be cool to have some data on this.


Thanks for this post. It gives more than enough reasons not to use security.txt. Having it in a place which requires human interaction to establish contact makes more sense. Can't automate that.


Great write-up, thanks for sharing your real-world experience.


Reminds me of the horrific metrics associated with public bug bounties. Thanks for sharing, that's pretty rough.


How many of those mails were actually encrypted?


The majority weren't.


Yes please!

Last year, I discovered a severe security flaw on a couple dozen websites, and just communicating it was super painful. I can't just e-mail someone at hi@foobar.com; I'd usually have to send an e-mail along the lines of "Can you give me a contact for your admin/security guy? I have something here and I can't quite disclose it just now." The response rate was extremely low.

If you just give me a security.txt, at least I know I can disclose something and I have some level of certainty that the e-mail will be read.


The people who know about and will use security.txt are probably not going to be the people who you now have problems communicating with.

I'm willing to bet that 10 years after this is standardized the percentage of HTTP/S domains hosting security.txt files will be around 1%. And that 1% will be websites that are mostly already doing the right security things.


> Can you give me a contact for your admin/security guy?

That's what spammers, scammers and sales people do. Just describe how to reproduce the issue. Most of the time they will forward it to the right person. Don't expect to hear anything back from them.

Knowing that someone has been inside your house (server) and sneaked around is very uncomfortable, so don't expect even a thank you.


For us we just put it in the footer :D

Like this: https://kloudtrader.com/security


On one side: Wouldn't it be simpler and more effective to just have a link to a public key and an email address for security reporting?

And on the other: what's with this domain-specific metadata? If semantic metadata were the issue, why not solve it more generally with RDF or something that's part of a greater, more uniform solution that people actually deploy and use? All these little extra files and custom formats create a hodgepodge of muck.

If anything, downloadable artifacts and contact information especially, through Metalink or RDF, need to be annotated with public keys and/or cryptographic hashes where available. And what about a standard for contact phone numbers given a website? The list of metadata ideas is endless, hence the need to complicate the web the least and to do it thoughtfully, in a standard manner.

It's nice when a website is well-laid out and everything is where you expect, but having semantic metadata in a mechanically-queryable form is even more valuable... a data "API" for assembling public information uniformly reduces work for almost everyone.


> On one side: Wouldn't it be simpler and more effective to just have a link to a public key and an email address for security reporting?

Well, that's basically what's in a security.txt file: a contact address and an encryption key.

> And what about a standard for contact phone numbers given a website?

Use a tel: URI in the Contact field.
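
To make that concrete, a minimal file along those lines might look like the sketch below (the example.com addresses, the phone number and the key URL are placeholders, not values from the spec):

    # Served at https://example.com/.well-known/security.txt
    Contact: mailto:security@example.com
    Contact: tel:+1-201-555-0123
    Encryption: https://example.com/pgp-key.txt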


Why not just put this information in a "Contact" page, which most businesses already have on their websites?


Because that is a user-facing page, and this is a developer(/hacker?)-facing page. Anything listed on the contact page, particularly for larger organisations, will get contacted by any slightly annoyed user who hasn't been able to get through via other methods.

By putting this in a non-trivially discoverable (yet well-known) location, it should cut down on exactly the kind of noise that makes it hard to keep security vulnerability reporting channels open and clear.


Resubmission from a couple years ago: https://news.ycombinator.com/item?id=15416198

Any new or relevant updates?


A new draft was released on January 12, 2019.

The diff: https://tools.ietf.org/rfcdiff?difftype=--hwdiff&url2=draft-...


Great idea, but my servers are constantly being "scanned" for the existence of security.txt, which fills up the logs quite a bit.
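
If the log noise is the main annoyance, a small server-side rule can quiet it; here's a rough nginx sketch (the alias path is made up, and it assumes you actually serve the file):

    # Sketch: answer security.txt requests without writing each hit to the access log.
    # /srv/meta/security.txt is a placeholder path.
    location = /.well-known/security.txt {
        alias /srv/meta/security.txt;
        access_log off;
    }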


Why the /.well-known/ subdirectory? Is this a commonly used directory for web dev things? From what I recall, items like robots.txt and .htaccess normally just go in the current directory.


It's been an RFC for almost a decade: RFC 5785 [0].

Mattias Geniar has a good write-up on it [1].

[0] https://tools.ietf.org/html/rfc5785

[1] https://ma.ttias.be/well-known-directory-webservers-aka-rfc-...


Neither seems to explain why these files aren't just placed in root, like robots.txt. What's the point of having a subpath?

The RFC says:

> 2. Why /.well-known? It's short, descriptive, and according to search indices, not widely used.

This gives reasoning for the name of the subpath, but not its existence.


To minimize collisions, and so we're not retreading this argument in another ten years: https://news.ycombinator.com/item?id=19063727


Ok, yeah. It occurs to me that websites might want the whole path to be variable text, maybe user defined.

This would also answer the question I've had for so long as to why Wikipedia chose to have /wiki/ before article titles in their URLs. I guess it was so that the article https://en.wikipedia.org/wiki/robots.txt did not collide with their robots.txt file.

Would've been nice if that explanation were included in the RFC.


Interesting. I guess I've been out of the web dev scene for a little over a decade now, so that makes sense.


.well-known is also used by ACME for automatic certificate issuance and renewal.
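
For reference, the HTTP-01 flavour of that works by the CA fetching a token from a URL like the one sketched below (example.com and <token> are placeholders):

    http://example.com/.well-known/acme-challenge/<token>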

I still would like to know the backstory around this folder, just like OP.


It's a standard directory with an RFC and all [1], and it is widely used for a whole bunch of things beyond Let's Encrypt (WebDav, for example).

[1]: https://tools.ietf.org/html/rfc5785


FYI, .htaccess shouldn't generally be publicly readable, as it's for your webserver, not the public. It's also only relevant to the directory it's in, hence why it's not in a separate directory.


Why are dotfiles generally considered "hidden" and not to be served (such as .htaccess, .env, etc) but .well-known is expressly intended to be served? It should have just been named well-known, IMO.


Because that made it unlikely to collide with existing / claimable uses. Consider https://github.com/well-known or https://instagram.com/well-known .


nginx is your friend.


I was working in a shared hosting environment (EIG) and saw this newly created directory (well-known) and promptly freaked out. I viewed that dir and inside was "acme-challenge", and I freaked out even more, thinking my client had been hacked. I was very happy to see this was not a hack, but found the naming conventions very odd.


Used by the Let's Encrypt client to verify site ownership.


I know that now ;-) but it was about 5 minutes of panic until I found my answer on Google.


It's used for a number of other things, from proof of control for cert issuers to information for OIDC. One of its main advantages is that it's self-describing. That is, it's a well-known place to put this sort of metadata about a site.


Auth0 also uses a "/.well-known/" URL path prefix as the default location for public keys.


I wonder if anyone has written a successful exploit of the crawlers folks use to scan this?


And the discussion from Troy Hunt... https://twitter.com/troyhunt/status/1082890150223302657


> 8,582 websites in the Alexa Top 1M implementing a security.txt file

I count 6460 sites on the linked list, and loads of these are subdomains on tumblr.org, so I wonder what the number of actual TLDs using security.txt is...


I have been thinking of a similar idea, but for warrant canaries.


I wonder how long it will take to include additional "imprint" and "data protection declaration" link fields… #GDPR #EU


Do the major browsers support it?


This is for humans, not browsers.


webmaster@domain.com

No need to overcomplicate this.

.well-known is a terrible idea, by the way; a <link rel> or <meta> would make much more sense.


Not everything that speaks HTTP serves HTML.


The `Link` header exists, which is the moral equivalent of the `link` element in HTML. That way you can express link relations on resources whose representations do not have hyperlinks. Example:

    HTTP/1.1 200 OK
    Content-Length: 153054
    Content-Type: image/webp
    Link: <http://example.com/mediterranian-2019/olives.webp>;
      rel="next"; title="next image in album"
See https://tools.ietf.org/html/rfc8288


<link rel> is for a single resource, .well-known is for a whole site.


Good idea, but terrible URL. Web servers by default don't serve hidden folders. Why not just example.com/security.txt instead of example.com/.well-known/security.txt?


It's a .well-known way to serve additional metadata for a website.

https://en.wikipedia.org/wiki/List_of_/.well-known/_services...

Better link from stedaniels further in comments: https://ma.ttias.be/well-known-directory-webservers-aka-rfc-...



