
The “security.txt” proposal reached the last step in the IETF process - nwcs
https://mailarchive.ietf.org/arch/msg/ietf-announce/OFuiGlVv6WgvEEABaGmnYi120yU
======
rahuldottech
For the uninitiated: [https://securitytxt.org/](https://securitytxt.org/)

Original HN thread:
[https://news.ycombinator.com/item?id=15416198](https://news.ycombinator.com/item?id=15416198)

~~~
blue_shirt
At one point I believe there was also a discussion of why somebody was
removing it from their site. I'm having a hard time finding it now, but I
remember the reason being excessive contact from people using automated tools
to scan their site.

Edit: Found it,
[https://news.ycombinator.com/item?id=19152145](https://news.ycombinator.com/item?id=19152145)

~~~
DyslexicAtheist
there is a draft directive (from November 27, 2019) that makes the presence of
a disclosure policy (using security.txt) mandatory on .gov domains:

 _" Binding Operational Directive 20-01 Develop and Publish a Vulnerability
Disclosure Policy"_
[https://cyber.dhs.gov/bod/20-01/](https://cyber.dhs.gov/bod/20-01/)

 _> Create a security.txt15 file at the “/.well-known/” path16 of the agency’s
primary .gov domain. This file must include the Policy and Contact fields, as
specified in the Internet-Draft.17_
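
For illustration, a minimal file meeting that requirement could look something
like this (placeholder addresses, not taken from the directive itself):

    Contact: mailto:security@example.gov
    Policy: https://example.gov/vulnerability-disclosure-policy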

------
tptacek
I know Google, Facebook, and Github now have this, though Apple, Microsoft,
Amazon, Stripe, and Square do not (to rattle off the other tech companies with
huge security teams.)

But I don't really get it. Why is this a good idea?

As 'DyslexicAtheist related, if you put this page on your site, people will
crawl for it and find you, and then kick off horrible automated scanners that
generate bogus bugs on every site they check, and then submit bounty requests
for them.

Meanwhile: what serious tester would be impeded by not having this page to
refer to, if you have a /security page on your main site, which you have to
have anyways for this to work? Recall that most of the fields in security.txt
are themselves just URL pointers.

I also think the RFC itself is kind of funny. "Do not rely on security.txt as
authorization to test a site!" it says, as if an RFC really had the authority
to establish that, rather than an expensive court case in which both sides of
the argument will have competing claims about whether testing was allowed or
(the US default) not.

~~~
Avamander
Bogus reports are an issue with or without security.txt. The one and only
thing the file provides is a trivially discoverable way to get the
contact/submission information for security issues, should they matter to
whoever posted the file. That's it. Why make people search for ten minutes
when the details can be at a standardized location?

~~~
theamk
Because then the automated scanners find it too, and send automated reports?

It is just like spam - cheap to send, expensive to read - except you cannot
throw it out right away because it may contain an actual vulnerability.

~~~
Avamander
Are you saying automated scanners can't find your other pages? You can also
put a submission URL with a captcha in the security.txt file.

~~~
comfydragon
If I'm understanding correctly, when "automated scanners" is mentioned here,
what's meant is tools that someone runs directly, which scan the
software/device and generate loads of red flags. Having a security.txt file
makes it that much easier for the tool (or the user) to report the issue,
valid or not.

~~~
Avamander
You are correct, but only finding the contact details becomes easier;
submission can still be made arbitrarily difficult, with anti-spam measures in
place.

------
cstuder
A list of all official .well-known files and paths:

[https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml](https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml)

Wikipedia knows some less official ones too:

[https://en.wikipedia.org/wiki/List_of_/.well-known/_services_offered_by_webservers](https://en.wikipedia.org/wiki/List_of_/.well-known/_services_offered_by_webservers)

~~~
yrro
EST is a new one to me, and yikes!

------
mikeiz404
If you’re looking for an example security.txt, you can have a look at
[https://www.google.com/.well-known/security.txt](https://www.google.com/.well-known/security.txt).

There is also this site which lets you create a security.txt file by filling
out some fields: [https://securitytxt.org/](https://securitytxt.org/)
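
If you want to check an arbitrary domain programmatically, here's a rough
sketch using only the Python standard library (it tries the well-known path
first, then the top-level path some sites use):

    import urllib.request

    def fetch_security_txt(domain):
        # Try the well-known path first, then the top-level fallback.
        for path in ("/.well-known/security.txt", "/security.txt"):
            try:
                with urllib.request.urlopen(f"https://{domain}{path}", timeout=10) as resp:
                    return resp.read().decode("utf-8", errors="replace")
            except OSError:  # URLError/HTTPError are both OSError subclasses
                continue
        return None

    print(fetch_security_txt("www.google.com"))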

------
tareqak
I think it is great that this proposal is almost done getting through the
process.

I also think there should be a similar file for the regular expression that
expresses a site’s allowable passwords.

~~~
ibly31
Wouldn't this just make password cracking easier? If there's a regex of what
passwords are okay, it lowers the search space.

To save (potentially rate-limited) requests, the cracking software could check
each candidate password against the regex and skip non-matching ones
altogether.

It's like expressing in machine-readable form how many/which characters they
can skip.
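
Something like this sketch, with a hypothetical published policy of 8-12
alphanumeric characters:

    import re

    # Hypothetical policy a site might publish: 8-12 alphanumeric characters.
    policy = re.compile(r"[A-Za-z0-9]{8,12}")

    candidates = ["hunter2", "correcthorse", "Tr0ub4dor&3", "password1"]
    # Non-matching candidates never cost a (rate-limited) login attempt.
    worth_trying = [c for c in candidates if policy.fullmatch(c)]
    print(worth_trying)  # ['correcthorse', 'password1']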

~~~
Johnny555
 _Wouldn't this just make password cracking easier? If there's a regex of what
passwords are okay, it lowers the search space._

If knowing the rules for acceptable passwords makes it significantly easier to
brute force passwords, that sounds like more of an argument to not have those
rules in the first place since it wouldn't take an attacker long to figure
them out himself even if they aren't published. Hiding the password policy is
a very weak form of security through obscurity.

I guess it would make it easier to programmatically determine which websites
have insecure password policies (like an alphanumeric password no more than 8
characters long), but the problem here is the password policy, not publishing
the rules.

Even NIST recommends that sites stop requiring these arbitrary password rules,
as they don't actually improve password security:
[https://www.alvaka.net/new-password-guidelines-us-federal-government-via-nist/](https://www.alvaka.net/new-password-guidelines-us-federal-government-via-nist/)

~~~
Quekid5
You're probably right in theory, but people tend to choose really weak
passwords if they can choose _anything_.

Maybe passwords should always just be auto-generated and people should be told
to write them down in a... actually, never mind that. Passwords should be a
thing that's integrated into your browser/computer experience... this is
something that can and should be handled by computers. You should only ever
have to log into your computer and be secure from then on.

This whole insanity of point-to-point invention of secrets needs to die.

~~~
ethbro
_> Passwords should be a thing that's integrated in your browser/computer
experience... this is something that can and should be handled by computers._

I had this crazy idea, whereby computers could themselves come up with very
long, random sequences of bits.

They would then use these openers (couldn't think of a better word) to
authenticate to each other, secured by mathematical operations, in place of
passwords.

Sadly, I've never seen it used on websites, so it must not be a good idea. /s
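
A minimal sketch of that challenge-response idea, assuming the third-party
'cryptography' package is installed (purely illustrative):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The "opener": a keypair; only the public half is ever shared.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # The server sends a random challenge; the client signs it rather than
    # revealing any secret.
    challenge = os.urandom(32)
    signature = private_key.sign(challenge)

    try:
        public_key.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")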

~~~
HorstG
Certificate auth for HTTP over TLS and Kerberos auth for HTTP are both
specified and supported by all (major desktop) browsers.

However: the UX on the browser side is shitty as hell. Certificates trigger
weird nag screens, with no way to specify proper defaults like "use this cert
for that site and don't bother me again". Certificate enrollment has been
broken (not that the <keygen> form element was ever great) by all major
browsers, to be replaced by "do something in Javascript maybe, if we get
around to implementing a new API some time". Oh, and no logout...

Kerberos needs a parameter at browser start or an about:config setting, and is
incompatible with using multiple TGTs, let alone automatically selecting the
right one or _gasp_ getting a new TGT for the user from the proper KDC. The
only thing that mostly works is using the standard company login TGT. Oh, and
logging out doesn't work...

Oh, and of course most mobile systems are broken or just unsupported.

The sorry state of browser auth is 100% on browser vendors dragging their feet
on problems that have been known for around 20 years. And no, WebAuthn won't
save us; it'll most likely just be another shitshow.

~~~
ethbro
This is the reply I was honestly hoping to get.

I always felt like one of the UX blockers for key exchange was the assumption
that people couldn't be expected to learn the basics, because it's too
complicated.

And every attempt to ignore or hide it has just made the entire thing more
complicated or confusing.

You can shoot yourself in the foot with a gun or crash your car into a wall,
and yet most people manage not to do so on a daily basis.

Sometimes simplicity is a false virtue.

------
nwcs
Comments should be directed to the IETF last-call list via email:
last-call@ietf.org

List info: [https://www.ietf.org/mailman/listinfo/last-call](https://www.ietf.org/mailman/listinfo/last-call)

Guidance on the last-call process:
[https://ietf.org/about/groups/iesg/statements/last-call-guidance/](https://ietf.org/about/groups/iesg/statements/last-call-guidance/)

Archive of comments so far:
[https://mailarchive.ietf.org/arch/msg/last-call/aDZq1M-m4du-wN5b58puSQ_7y2M](https://mailarchive.ietf.org/arch/msg/last-call/aDZq1M-m4du-wN5b58puSQ_7y2M)

------
alasdair_
An interesting attack for any website that allows customers to choose their
own string for use in the URL would be to choose the name ".well-known" - I'm
sure I've seen sites that take user input and create a folder based on that
name. Similar to reddit.com/r/.well-known, except as a top-level folder, but I
can't find a good example right now.

Once that happens, they could put up a malicious security.txt file and get
free security reports sent to an email of their choice.
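
A sketch of the defensive check a site would need (hypothetical validation for
user-chosen path segments):

    import re

    RESERVED = {".well-known", "api", "static"}  # hypothetical reserved names

    def is_safe_segment(name):
        # The character whitelist rejects dots and slashes entirely; the
        # reserved list catches ordinary-looking names you still don't want taken.
        return bool(re.fullmatch(r"[A-Za-z0-9_-]+", name)) and name.lower() not in RESERVED

    print(is_safe_segment(".well-known"))  # False
    print(is_safe_segment("alice"))        # True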

~~~
tialaramex
But as you hint, r/.well-known does NOT in fact exist, and you can't make it.

The /.well-known/ paths were reserved by RFC specifically because this doesn't
really happen. You could _make_ a site with the defect you imagine on purpose,
and perhaps somewhere in the vast galaxy of sites there's already one that can
be abused to do this, but they're vanishingly rare - to the point where I
don't know of one even though I went looking.

One reason this prefix would be expected to be especially rare is that dot
directories are both illegal in Windows (though the NT kernel has no problem
with them) and special on Unix systems (they're legal, but treated differently
because it was convenient), so any developer or administrator working with
file paths for their web server is likely to disable names starting with a dot
at about the same time they disable names starting with a slash and other
shenanigans that blow stuff up.

Now of course a URL doesn't have to reflect a path on disk, but as soon as it
doesn't we're talking about a slightly more clued in developer, maybe someone
who has heard about security and thinks it might be a good idea.

~~~
Cogito
And for anyone who doesn't know why .files are treated differently:

In every Unix folder there are two special files, named '.' and '..' - the
first is a link to the current folder, the second to the parent.

It was annoying to see these two files listed every time you wanted to see
what was in a folder (using ls), so `ls` learnt not to show them by default.
The way that was coded was (erroneously) to not show any file whose name
started with a '.'.

People eventually realised that this had the unintended side effect of hiding
files that start with '.' and found it useful, so the bug became a feature.
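
The filter is trivial to reproduce (a sketch of what `ls` effectively does by
default):

    import os

    # os.listdir already omits '.' and '..', but the historical ls filter was
    # exactly this first-character test - which also hides .profile, .ssh, and
    # friends. The accidental feature.
    visible = [name for name in os.listdir(".") if not name.startswith(".")]
    print(sorted(visible))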

~~~
itsjloh
That's super interesting! Thanks for sharing.

------
arminiusreturns
I actually think this is a great proposal. I may start implementing it
regardless.

------
jillesvangurp
I think this is nice, but having this and a lot of other metadata in a
machine-readable form would be nicer. You could also think of things like
licenses and copyrights.

Technically, the problem this specification solves is making it easier to find
security-related metadata for projects that typically have this information
already, just not in an easy-to-find place. So the next logical step would be
making it machine readable, so that IDEs, project hosting websites, and other
tools can do the right things instead of having to parse files intended for
humans with no consistently used syntax.

~~~
sho
Looks pretty machine readable to me? This _is_ the consistent syntax to be
used.
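
For what it's worth, parsing it takes only a few lines (a rough sketch; fields
like Contact may repeat, so values are collected into lists):

    def parse_security_txt(text):
        fields = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):  # skip blanks and comments
                continue
            name, _, value = line.partition(":")
            fields.setdefault(name.strip().lower(), []).append(value.strip())
        return fields

    sample = "# example\nContact: mailto:security@example.com\nPolicy: https://example.com/policy"
    print(parse_security_txt(sample))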

~~~
jillesvangurp
I was expecting something like a YAML or JSON file to be a good standard for
this. It's nearly 2020; we should not be improvising text file formats in
standards anymore. .txt screams "intended for humans".

~~~
sho
You're thinking too short term. .txt far predates, and will far outlast, any
standard _du jour_. I bet you wouldn't be so gung ho about security.xml.

I really don't see the issue. robots.txt works just fine, and this is simply
following in its highly successful footsteps. What you suggest is change for
change's sake.

------
quotemstr
What's wrong with emailing webmaster@domain.tld?

~~~
ShakataGaNai
Where does that go? To whom? What do they do with it?

In a lot of companies, any generic-name email address probably goes to the
trash, to some group like customer service, to an auto-responder that says
"Sorry, no", or sometimes to marketing. None of these help you. On rare
occasions webmaster@ might reach someone useful and technical, but it might
also be in a folder with thousands of other junk messages.

A better guess would be security@. But still, wouldn't it just be easier to
have a standard place to look up whom to contact for security issues? Oh
yeah... security.txt

~~~
defanor
Both "webmaster" and "security" are standardized in RFC 2142, so one should be
able to contact those easily. It's indeed not what happens in practice most of
the time, but not sure if throwing more standards at it is a good solution:
if/when most of the online services will follow those, they'd probably follow
different ones.

------
technion
For people discussing automated tools and noise, a prediction: we will start
seeing automated tools flagging "missing security.txt" as some sort of
vulnerability, and I will end up implementing it just to stop the noise. It
feels like a lose-lose situation.

------
_pmf_
From a liability point of view, this only has downsides for a "target"
company.

------
superkuh
Okay. I implemented it on my main domain for kicks. I wonder how long until
bots begin looking for it.

------
disconnected
This seems like a really poorly thought out proposal.

From the draft's [1] intro, which provides motivation for the proposal:

> When security vulnerabilities are discovered by independent security
> researchers, they often lack the channels to report them properly [...]

Occam's Razor: they lack the channel to report those vulnerabilities because
the company doesn't give a damn about security, not because of some hitherto
unsolved technical difficulty.

Companies that give a damn about security already have either a "catch-all"
contact for security related things displayed on their contacts page, or have
dedicated pages detailing what the hell you are supposed to do to report a
vulnerability. Companies that don't give a damn will continue to not give a
damn and will ignore your document, and will continue to suck at security
until someone starts punishing them - monetarily - for such appalling
behavior.

But let's leave that aside for a moment. Researchers spot vulnerabilities.
Need a way to report them. So what's the solution?

Apparently, a text file with only one mandatory field...

> 3.5.3. Contact

> This directive indicates an address that researchers should use for
> reporting security vulnerabilities. The value MAY be an email address, a
> phone number and/or a web page with contact information.

...which contains... er... a catch-all contact for security-related things
and/or a link to a contacts page detailing exactly what the hell you are
supposed to do to report a vulnerability? Sigh...

But the real kicker is that the standard doesn't even require that the contact
information be up to date or valid:

> 6.2. Incorrect or Stale Information

> [...] Organizations SHOULD ensure that information in this file and any
> referenced resources such as web pages, email addresses and telephone
> numbers are kept current, are accessible, controlled by the organization,
> and are kept secure.

In case the meaning of "SHOULD" is in question see RFC 2119 [2], but
basically, they "strongly recommend" that you keep your contact information up
to day - instead of, I dunno, REQUIRING it, since having a channel to properly
report security vulnerabilities is the ENTIRE POINT of this exercise?

/facepalm

Are we really supposed to take this seriously? This is simply security
theater.

[1] [https://tools.ietf.org/html/draft-foudil-securitytxt-08](https://tools.ietf.org/html/draft-foudil-securitytxt-08)

[2]
[https://www.ietf.org/rfc/rfc2119.txt](https://www.ietf.org/rfc/rfc2119.txt)

~~~
velosol
Isn't this part of why postmaster and abuse addresses exist? Perhaps a WHOIS
entry specifically for security issues?

------
phkamp
My first thought was "XKCD 463"

